[Binary archive data — not recoverable as text. The file is a tar (ustar) archive containing:
  var/home/core/zuul-output/
  var/home/core/zuul-output/logs/
  var/home/core/zuul-output/logs/kubelet.log.gz  (gzip-compressed kubelet log)
The remainder of the file is the raw gzip byte stream of kubelet.log; extract with `tar -xf <archive>` and `gunzip kubelet.log.gz` to read the log.]
;U.*ڕ8gϦEj}<7X|rT =Z~M o+B؅¸tnYN&՚ֈ\K]ؑ)'"h:Kј$p ^;}&NWNOWWj< q0{q0 5A>McЧ:o/=w2qgj5|#ߒ*i]m"ݎjU{<{kӡl;3G鲂^j9h`$gڠ5gę!J<՞JpX|&@;-Il^<̑a9ir-3d0Kg;j۸(ZOZnc hmm\wDl8BRVIjWEPcP~¡IŽCf:Ykl2smN-/ 4wu;>cq1;Ҧ^۰Kj!W{a `:.o.qTs_ gUe]HU\ m-:5,?WIjZMj{ؕ gq>u!8!ݧ dV*>\`~%:TRBE8PY E(a6W(UISR4%`Qyk?iţ rO$\y#ӻE}tkIrh,]u2tk濠Rx]Wulݡacvo߮_vWS.|WW^k{<䤉[܌tQ}=nr=8xawM_5=>{ˡt>?Mk[/c3F䠊w26.YZA^%K)i)p&URWY`- E]辫,%E]Cu%RkUZ F]eqPZMlUvՕdVpfw V F]eq*wu\zGJI"wt/ /5t;x5_d0kr <=_-<7ĿPX1b׾~*άRsb`-K`Vݲ]f)˪ۻT+nŀ ,pVݲj0*KkDUڢޣͧkN\z9 \Z GfUL\Yτ_?Woɦ c|9wdEYBP%LE,{#b4j&O=g(O/8@Kj; q\OZc2 @H[][uS qa4Q({4ʉrB~= 'u'Acvtw5>+4 ߨ.pK/~) #"@l奷UbyhzFxˇfkYC4DhRYJ4Kf),%DhRYJ4Kf),fA-Q`M`6l0LYZQe_=nTi%ur]\,%ur]\,0sdqbM(*-N7q1I,p:c{[XeؑZu͎O[VQLRpԯ! QtBSUhe(䨐˒#2k(+TE*(cJ%9T9Q-W1@C @E` Zs&@c.$!Jp^Lulڧe#HuPg=mL&5mxTn^4znN3g?dW^6A~ <Ј"h-0&Z CI5'vކyh}د7_ g,B 7G$n\&-]]畜<فa#5)8JҊpHu@M Ű)|+G~uOkc? 3쭇j*/n?+sco&~cu-&E3k y,6uQ-s;VTH;-rMF^)r#8 J!cQRڢF=]qƽўi叝B8֥ClB┡V`H,LJUQ*K?L]Y2}^lY;\<0ׄ흿lPxx4_6< 0}3β^J^ tBnOML(Jm{e}CKRiH*`dJpQ#B$XdεB<|H2@Ns] łXX˜5(6)My ;**y4Hgi* fc-KiiiDڠvPtt kϽ8[\BPTiJ2yYܳ>*_V |RRDsut<৖rL$f6([ BXW9ΰAKBܐv'ӹPم SJ?Kۗ-ݟ}U:EdK *ZɕNA`i {OLJڀr):t^v9$%@4+@5\[/,@I 1C$NOP#] v9љ8l}gn:5ۣnw%>isϠ-zuo#;>xgD>IoB0*0pQ Df$hJJ&*b(XԚ)c$p qs. IѤA[%At9%R;GJ`PBQ VO$L!(5~_MJ1UjID6I(3o1_89b K T;o^KWU#Yf>a訅2hnUKPkw^pհA%kbDQЌ1/f|a~;NhnZc1^\hBsjRἦ&(]BGTk?y!D D* Q!{D锨#, Z$)x9@ɏsҳ(mK%l3KӀ0w!g~upHm<8T|蘒4VFWhmxX2*CbpR$iIX%.ŔNS`xJ8&H\Hc Bܠ}L"&4}{2P*HM q.Xg8,"WgOwC#Ǯv>jzwYYw֊'v|y! I%pN9 C2X$"X1X(5pcmT&o*ZJ̩ld՚1 ;UvJʳg"?,JE]me(D]Q, Y[T::1lRr:.}yMt}`G"pVuaRm2@ovT$UP\wxbXNO]=RH8i41ȐRd"E.chRAW%12gѠ. 
?=!r@C31Z'2$c4ai1]H g='^;XIۿ)pzr'ʿ:Wux{ Tȧfp5_ O7yYl\'_ndˌ_נ xwiltX۽ww;WwЮM1$ܙ_Ɛ>fM7'3Xqn/~%eiޚyZ24Z$³Us^*<.k]e]ڂgZȑ(f}GYĝH ђmߟ (_A(!e̓O6fg65&fsy50m҇Fa}zUH oR_1JѶ+67z]& ˭-7On͝SRzU̹pS.}':98ʎ3YCdީdOmNw qPC<[PjIoc9g7O;g/qE8?pūN] jފB?<o (W&mt[iaj%$F^RɛyֽcٽeGΫ yW;LC,X>8ԐW*5CA†|4qs^gbvk(Fa9 c9?tYS'C@-Uz7 cadJ}qc=dK,0(IǠ>` Ț^TΡRyL$fLYDݦE Q(B: Nc޺$e)T \]i&=i4 5$9Q&QL g 9N #X3s,b7C4;QLGi5)ud#hRZ{H2В*͉77.dv)cއM?Iͬ fƞQQKlvq;Pq~ۿqh5vOBG>?SxdiVʧ/tWm׿,I/8jz"y~Uo_<c ]{b$8 ׹}3o3B,՛9_!gm|?;_n̏b~/Tdr{7v=z:7?({;M`v rR b["m|zј}~ q+lSd!tY@9 RH#Z[^ 7r `ȃZ#cR4 K$SNNn(}u5kS{Vp")O-eݡe A u2H8u8-m=atjM~B|3PI|u;=y2*Pd1f2dMp_{<+o-s^I0T )e7T08 4PQwژE vYF__BQo5zK;oR|z}4?syAA Ag8\b5gv&'jT:Ƞ:)e DXRs4$W'ӻ^W}tiְ֛S}4_zrmH2Hse=!rd~ R,bǤ!s&& US0o U0:rlh. .O'HR)`2)aqòVZŶVe 잮(|xY=Gp{m;6o(}By[` ZҺH~O֧|]v}sZ㇇-'qs>V\1랎ǯss Kwij.nԞϧK w()rr WTeJ]MieReJ^eZF)d2U\YK"_ 3;>OkJyk; BYi XFrV9U]Vaȴq0,A7"} '缶Q"scJt2^QY3@?+J Hf BNpi3ɔe4dF2oV1YҰEn;?3؟ଠؕ,gmͅg{Vz?myE.iׁ#Dhԙd)%iمDED܀D\ 8D<"4:% ej& C`)6gcN&ȲKXdF†En@{C٤ xLh-S< ̂V"Y1 ^U~Px|ز\'EbJ7˜l': MEaAaAe9ki4͈z/!Ȑ,|pK1YJY@.7.&#C1 Cr2&1P&/XtD敱!r},2=A+L)y}d|V -tQٮ*ѣ3h@[mk%tkO܃'7lZDI)BI6]h66 c,kW ]>6'a͋Am92ˁ 8aP'#\t͏vV" uӝ\(#PJ;;uAeD A=+ȗ"5Dz``֖$wep Md,8Vsfsj4uh"_θwՊg9KI2r[>f-oԻԓN0~b !fu)0+@;z+rU+1sD`A,t^!wzvKfL'ʅz}(L%&+^ _Hkx303]3?yh<\V{|tzELM,QZ$ Z0BMD{PDZ4ZkgfBcN(巄^DٍRIkR0X4p(~*J07g'+k l "B̑LڒĭJ٫W޿_,YbdPǡ b&EghhH9wiK[UϨ)@<\F2) dso0iNc\j{j9Nr_XmfʾP7_xR_­9Vֳ/H81=m?od8P:D0LQ!cJȢ(lԄRVEʠx_^,_6k6lYSAkT p PNRPF#)%L!B ۲!,YZSZU[ hUl'\=S8̅( ]kw6W]䵰vhCg.v@nbJj,Z+Ҡ B7?jwd!tY@9 {uiYN:\ùj2;Ĩm "b) -Ph(E1B}6 ;ݫ/'NX+;,tѱL?,kEMF}aVf(Df3CEX2dMp_{<+o-s'{L*Ņ,yL,+u(X;km"Cц\#p{!zK;o|<@uʄ_ǟ9\HdslBǁ>E-_( h^w4 V;wu6ԉlikRZ*s'vƣxCB s,Ts2J>!,.1)6p^N"D"Ȥ5HAP0K }g ɛjAM-}f2B6$q#EO ^f-6~6=UE; =˼͒F9p7yb!S9z$exYx 4S2sEL`Q(#us4:εs{$GBp@ m6J Uצ5g;?tsR }厣,){z<a1}Fa|dnwxw;K7ԞMeO%=ډ^ fɗ5iv3H9+f BBBW˲ eL&V·\LZ2"pefn2a6üaeBէ V33NulLCWSBA*ֆͷaaydh[.tx֖d=#-N{4gIkR >߇-#RSɞ>WۜN`籼2ቇ3-<齇3j3޾r*e2&v7WApଋWϝPi)XA@' }}OnC6fFm]Iq8VZ86rjCZ 
Y*zyE`[!,hwmH{7ḢE2X,w8`gbK1ő<<ˎeRj=$vKjUUŧȪJV$pQ8svN(֛m9#ĄT8e_jta[{_k*;B͹6zH?RvJS:_탮nI-N9K$ن"HlIns}NDQ&7yGY6WEt5gv7PAAu$ehDb"E](xZƒb Lv!N1i DR)588б6˩ LxNj[˒1LGĮi8xZWISK nSӛQǐzôܺcEguTE #%V9Tx' cB`)T/ktq=:we䪧Z9)gI9gtIrZPm<:Yk%6T p`*+{Ӝŗk *3SZ) SDT[CBe\E%;QemQɈW'JPRU@dJ)>w9'Q+U \y0#C{T^UE2)Vs6D,Eg!tEIForNST^0Fb06nRZ>Sϝx&{*`?VJ#:3A࿓WMNM72?o}zNyxsηΑwhOC >uU\0ҡJꪔk]u/C]epB O]JLC*v}wWUJ3讌֜b0'㮪O]UiﮪF +ˡS'X`dkN]Ui]U)-+j6ng/Tiߌޮյt'<.T`$=O~ḷysH5 7Jiӻ?r,zԾ*⦫aM ap/McM1EE;i8(Īj~1 ,dL"5>{Tc'O<^{.j-?GY5ݭZ%(qYPh ߩu7mNݏ|fPbl"Հ \! :4Z[dZ=dŁ!:SWT.(&8⯣W) sy}QӤڧ뷭Wɧ$/(EgdrNWkKA}Yɧ<ӰK5ɕ7sr/?:7/-XV/5ygw/4cA5v j5@&VАrhv_.e;?{-O}jW eFE-lڂh_2eju,Ki5Z&1 HDlf6{ˤBqug< ԣrcyͱ7>s^=v*T:٘)\魋:}iPu0ttkR~8b@mPXGt B98q&ck "xʦ(-IJDTJ-QiҡqZ̆0OXfgP!Y钳noX7S޼sctKU*]}e!6KX {%u`5΂йDΝx; hO <|/lpn[Q=vo߀Q}|L1Ll |F"*3״UMG{Z|db< m7i5qZ/ u3 |]S_Y"&4$4'9 s1Cch.4"nh;V_ڳcr֞aG͏ZZ/=7zNwѻJnÓ96](=VOYll}>sM=l\v؜Fq$jzsjz;jz#jz?P߿(!)dR$am1r%9t9 XdC`&N(>"Zpmb7`^,ED5*3dۏU.y /'KO񅳗* ( PlVA堃$r,Śª209v['Kq/RJ!XWFQhlH)!fSJk"{]N]4&8Ib FQzk]൏@P҅M)vWb G,ZtFP:d FvHp,Q&Ʈ(Փ֞agYA[+7#m:8leWqJ]veW)@UY=+)i ZV u4K|4v }!)hlHBi D}SQ1pȴ٧hNMK FwOIt Y%,8㑒6)sh- u^>C&R3QnK䬒Mcn5seutw^@kHir}E%bh<*n=Sht<ȭލU6M|n~O>7{sU$ݷ$вlst32__WXAzsNinޕJsUْ-gsݖo?E˾ y֔vt -K  2}cr0Ey"?l>P78Sʼn(YS (=Q u)yI"ٻ"kO(1x<}.*E?ZaǕ%1u[WY>} 4 2eB0j '2 U2q^Y#* I]Y [QZ66)$APNiAhrtN`#+q(j{U_~Kd÷|+YۼR͝urĵ\Nuf2]o_JyRZd!]j&*+I꠫?Iv*N=EWr. 
^0-Jʁ2Qe@(줺$QQ|ag/B;ƒ•Dfldv|{4-pq1|q]8SKXC1*6[hںd!ʢ+ E'$*"9"&r'Lԅc]HH)irlLD&(/"qE..,ḷ)58 gC9&dӄ)1Wtpz7&6 6X`i*x-ї>.jkTu eGMm#"ԪRpUJ]&w4i2^d:z5/}_g*lV[/忷>`*-\^Fe|57̛Vnqxf~?._կx%egOwf[&jR[dQ~Dl=ܐmyͧpX̀F0du`td_;b!6: %5 Q66Y$B=g!ޱفc8l)1Hkw(Q9@P𠵌%(J1+i(wL@`z1yaZn7EI1(& #%V9Tx' cj; P!0sl%dEۜ3$`\ %?zPUjS-ކ=DLjL Y{stc}TNXs)DlBFTv)* sGQC&#j^3*AI} dJ)>w9'Q+d  s{46G~Y*I`YdQk~Ѣ鹍|<=uku+HaFF[m9ks(y8҈C`NLE!$/yٝNyٷ72ޝz-zW3#9ܖ1yzqc@fvuyQ:8bl"Հ@U0Ę, n.][r $cLd“AZO :'Hm&JǨ(DI۞$m7 NNE(֖[\JHu$Hxq0H39;Ue6Dža4\͇F!M &jngo޷9[H=K2& JEhHJbBFݲhcڄTL(DW-BIJ$GTTW:x3q7B=95;Cl|f^ζGlnLy?\UZbgi+Z.-ϑ*mK"BmĕGn[4i8NےIi{ֶж%;O=;+􁇹?MvzO]DmFR%+[n\׵?鼞OuoNɎgAD,E( $)u uFN>Q'%"*xH] h7 +V%kjKL`1&5=f6+F0Œ`#BPJLM 45&-Hʣ9Wzh{ nJN\?g)U7?Wx؋Ve瞛!'ο-YcGTLWg("YcJ- br5ϱ ژDmKR%<'}JYgC"p֥F4r!k(UE{ކ_Y@ou[ݱկOܿmgx>XRzl+ƅ,i!#˫n#P_xG~#OՎ N_`REe(r`#?wCőNjt_{qM..A|sjGP7ج}*_fZAo:|I>]Njuwk\-v/!JhCh/0]YOfÖz3 F#ƙɨ$H:Q8Q%kFP&fMz DfK 9ږ?g8ir[E7Ւ3v_)nu5y?5Ylf9*Qd=jzbJ^B:-"RՇitߠ^R͗e6dO޶tzuKȁnvy>Z!YGKࣥ{NoW);*pqfPh}WneFgGhyҤ1;8{l5s'] δ6ՉҪ֚yh/{]LΡ)-+eq$q;^pqWNPiYANFVۺ?ꓫ4e X%cho&Їng }_Nij3E -ijT{1zl& _9fo̷NؑnM~p7t(6ȋؘ{ev+4ʕ,cz?+Hwu$?=r~-v##N&7gFlDž-DhЎ+߫h:[nEs z@ΩBS@b jZĔGD;}%þ֙2xˉESkd}e2Aiy#}/ bPU 1)1㗈?#?!EN;}˚Z]X;];͙rF5ˁQ[Tm" [S#!f[ϺأjF ,5kމ`憎*3KKI(m%'+>{ =Kt~gy7elֶ?8 YV,9֍w_s~kbҽ3sh6:7 ?{#ÐX]igw˳{zY|Rϳwn7/kz\>* W8s<4J[_մ깺 f\Z-'_dܗ2#$Â{#rc1ō^No='N֝M6\ϥ\FR@[ϵQ&[}s:9ᛄ?sf4!ażc' @At;+5_?vp~&[rDM qSXrI`H>z]N;XN䉇]/ܿ}v_T6סiqbC{o4_?ꉚuc0w`(o}0w;4Ld)w/6)#0r9 r5#!m =~oqu`5tQyumWɝ׍β1ـ3xx,ڳm hA}bI [:DDzaxs5w <w&ȳL'86:暖!+@PBKE-(JHģfc<(KbPma!X㨼( x~Xl4xg )c `l>Оchv$L`P%jc(sdp(bTٰ=,K9N(M"#FcѬ i u$m0M9 3dQL~-d,p\gy>t0j7L]tFo2I;iW؝eݗŏh$h)UhQ3nu[>A'򩃔B6e@9OD6 pǦ!3KCgzX|g×7h@^38T>L0UQɱS W\ˏj Z 6Q̀!ِ $U%(YT d2ކUv&\]N ic!_H.Q5ݷ7-cd}ebGC;tw-sțB x2^AK&;JQ]wUDW]QuѸtf(䭁-1!%Fֱ'6.ABҕ"YWz B!iJR`k|#MZ K=:vS[֞uRwAE4ABjݱWRW3qܺPO9ysئ!CX}T,ٖD@,:]vP-EH!ؔV*q~{ voG2{^~5nf;=8>մm&ÏPDP/Y83-E,"0{v9搠)j+NwbEDFF}r޴FTMzS[ٚ8Q=e`Jpuqg^9XO?eO)Nӿ)gDQ/o_:6: P߳aPl*ɿw(%>~9M@]TE{Q]V~vD jd(7Xgc틣7I_ٯ|qh.1};v1*sQznEay!ȅDg??EP%S1U!\w7b)m}I[vjjSUKt$xЦA\ +MD/Ṅ| UG %skTOzIjԱQG$ 
؁թ-0)RسrD@9Їmپ5mԴQigaP{ϙu!ڕ2&+*Y 3jN$l{.͉hޜrZelNiYm~Vͯt~CC4w"@FNġΡǡġGWAIf\1[%筊 Z$m IVlZ]*#/H wtHO }uuky:VΫZJ!'F17 t.=ԚkdQy.%WZIc:T'4 ي JQ؃2.R e5gQ5 bp% _(o\'W h-rb.\\JrZ OL[V' eHP|5G?E'}rpϐKlLF2d(0"A !ER&Xģ"]1~xJ]٨L")&^w.*%,j7%,&)hEtleVP#;Dex8<%ə[ִ*ZޕZ;)gzle[m^oٛL)UBS5g%Avw|ilG |OA&.Tgwu؜\Oj>XJW .hcdٍT%Q th=<9(.52%ҡTb_%%*%SrEAE22m~MGF4CJGHu"HƘx0H^3zy=o6DE`4L[JSL31*]ɥuHţ*FхM9>Go='{صY, + 5fƪtqUƖ,BWcT); AK`ƶm˄TuR}84 u)ƈՔ >([p7=BCjmH|LZ~Hy'8BA# t{sjM&x5_ʏ>xs|tyV3Ti &w7Uڦ]Q쩗N%즗v dC`J2!R(1R(2%)rs㹂wA԰i  :Zf§HrNi/S[":ڴ725+b`, s0Fd=Xh]]ٰ8Ql+z6 SJzXvRK-a- XBVZmkbIO/A$ܦ6Oݶ[R78FRqInw[G|YscdWƳ||FRiyE#vYQytLP{nkbtNx+RM첟PYGǖKXwBźQ 諝o>>i@yS(j6q8V87{Z Y*zyXvV yO?@FHL1㙱>ɘ]1U6Ux,~ !S4' jnP[5;aS}U[mD %ijT{:Q_L:1s)ꊴBC]RڡC;P1g`0%?y=ѮNz 3kAsS13 g[bZ\@jt f2M/t}ՑV*t~Ob> 24h R>s&0E7@k`)* ](B &^+Q7,kU8^pny7UmI^5es@!`LDjDO11ZS;^oN+idmb&\LP@3˞K,$ Vb X'=Ӡ`PTvAc+d8ϙDn#ie[)Excp#H\d#eFPymЕe5q$X,Ewf˄o7-w?v,SWfܑ֑NL_w쩼f2*'w54gtg [E[c;WL[;ĠC.o{E*Ѿ,ϮXf_l>I96OprRՕܮz2ZC'_˯pKQEZQo~Nݷ?{w޾u&yGC|~,;,y"!6?Olõؼyq&qA]J^`)a^fpYiE`4bUiEJz0/]qWE\~1UEZ=*Rj6XPuJKbeok"]^1AL^+NW|yj٤h?Q W\}A_Fym)*u~3Z -jOog/u!Y,уj2N֞2D!MFi5z7Vy@a1/Oa1%ܳuN\2N#cu41w9_mrpMߨ/8׽y( O“}E?eXx8+MJ›dpLsI; l_ qA^ .Ү-S=EJ; #I`bUW^*jwwUfpWo]តaK6]L\y7#/󸫗Ir _ppW~i P7LC|^^ cSetw7wtdBRhUHIE Ie1xAS( pbLǛ|cY\5oQlSTDryH qeQ)mNNTF *%G r98{R_єcYДI&qH&:ɤ+&l=YXw̲)[~p?wo\YgV5oVa2L-׿$&#-h,HhplL$34\% jiN4#NBAhIU~eMV4l..57ZT/]Qo_6i |N߬Sz.izzͷ/B`"G/SyHN6FE-C!t_|͍*;xd+d% *@ ^uH #BpI̹0!I2*#,]Y|t'ޅ{%|Ey\=tw/R?ԥ#3qSmxE!1A3f~D 2Df Bk9 .),K1`F2gbm[ynG S_?t#޵3NzG(3v(Vb/,zA|cevl8I12GA`xP<(Vȭ V9y>r0pLOBD>}`V;`)j+j_}P[a<&Y? 
:h`[S#82/ <@̀WσW9Lo?>dzAPNCy8E_ykmT /!V%#b 3oț;GeAB%,ȨGA*MZC$83B588{@ F6v1/-VKGkm3\Xi> SCis>f޽ڭu.::: y-6:zM-5PқRT.AY!]Bo246BtG!OcȦs q$52ǡ7YdO5 %bkңtLKy!f9<:ޗvh#=V Iuz%U\SREc+"iAT PRK8E8ru F2 }gWx3w%VxT*ho /x|Pwl~_/ &O%Z#?n}nE7'z8ZC׵^NޒN >y3W;JQ`4H;%#gPh< ,k4u_0Ԟk2!l%x-&' <ٍmZ \{1̆گRkD ZYrxiNDOm6@w.?OV%r=olV[uϼ1"&r«t- Ɖ f鷪Q[c̞ J9Zl̒ r=)\3z]6gp&EIVLNg&nϸVӌB]>79Z^uT ."oc4'yBFɟ熟cJp>f !Mƹ{TO6/zKj6*(-gBI䘵2>XH:ind!P$Hg8SZ:^ 6X`iǴ#}GGU<}^1ErV3VjSzNVZ((M7C(jH56.6iJbcQJeYKddH>/&AI敲t'u糽܆ri^*w:Da KY Prr:5X耳4@n@;Ҡ!rcJ2Mg*FˬYy7݄kG(.i0|z` Zdn/c($@M 2*Qh24-[,13ʤ*UmiB]J\%zj }yuKI(4}4sDTgϬOb޹)N)Vbn ~@IdH_ 2DSxMkŭK`nAr]uEX9ef87 oK39ZR,&;Q貊8jKWB1.vXdc$jk&YLN ]aKN֦$¨$&5zdc;1}ڹ ؅ʶ͠a;Lہ0v|yΞ5|q0OQꆐl1x ]6Q=.B"S Ɉ+|TNJ$LdR,[YS!FS[R).dig"B( 3p sR彡EfLCNmz> 0S琶fwf\8tq33q ><Ч C.|Um6>.}0H''⩑<bCilD.,)]QEEr($9e}|Y3\2XT)[y:8, UEQk602, 䲌)t&o(ڲeb5qpz VͷV^ v7jzLMK:}#p|9ugzk6o6LvC:J(3]M˞WK=cT [V(Kp3"KI UDpC@K,s$#9&9 }2-?. ,uS%8MR~j~=Gb3yV !.8T +\)D,K$ /D^FN^FG^-FD^- y~xPPpH!@+âz1(6:MpK䄼kk:&By ײգy"Q@4:x !*`1X0b`'.͓iG.X;"E Rrd:-f`TȫLI$({>6pfI"0#%UJ{Ő+(u˵Xp/^ wE%KUy}AVv߄ɓ'q󇩯 qθ獖C*_wu_G,yO:q[k̤yO} z=5uϺp,aX|oo@ZFVb )yI`{Vڊ-*m|Ïd ?FމfdSllOsl:ۂ.("BU V*TZJP,*P VJPi*BUPi*BU V*TZJ`0.4%pn6Si 7M)H¼%'wk&W4ٖ¸f?洨RbK*Y :@/:)m'b, TҀdT&P}aTȍRHNıd!dٵB#Z9܆}VP>wgcm64:X#N@ A*w6n gJo9`CEq0 ZW:_JHbApS7Q} 1F(`lHLg)CTc3)S0bQX,D DDFQ4`D[ʘKr2B&Α; k\u羺_>B<a+hv\BʾڦWcV7M ~7edwy4G1p`JB iNcJA EQAiịUh4BkMCp;6`#*$;A (Utdl k^8Ѫľw_q+JdyȒ1;*ݔJJ^J~%i{/R#\J~}’_Thק?6Ʈ@.e)w9TSb+m4E Q (^Jx\^/NJtr4nZM_嵚=0 u ,-V7~2NGO_rĮ7X+|칳!~5~_/"CksȺ3=|mfkwxE^!l'm}m^ZUa6lYћ7,YydL>B(RG)Ŝ(aHI(Ȯ=WT-?.8iժ-%Jtxcng!&_^h,YrY ѫ +VQ7VjX'깼z+ItHܰ 9^L6K:6z\hR+7&+^, w8_ݪo*}eѤ}lnqTnxݙ@m3Ԗ똇z"|Gu\wmqr54iK&85ߏ,u⹓*m8$DtRkW;rW1e坡eq8fZ8ɰ6 rLZ;Q(&QO'q$-W )<ʬzfx.,q .B4?&lsU۩n3v0#luߝH!zz]N]dTR nZp!5-K-R H!&ώ Uz!4XXH9iW!JXH)0x2x14hO0؛U3ODD=WVpF<#,Q\pW8^p,JKUxs ?u41/1 S|/`xv V bJe*LLK/ SF#2(u)=lRj`"`ˤ&9z'r(=~"F<UX-띱Vc&ye4zlkj4BZ"2lyBXVuubMY$I^{k==[׏p ?9&k*}&ܺف[/||UgwwHsRw<(Zs!IWݛS'0.s5Kgc}S]-χ_wu8ϹG,Wu㖢)-̇6@kzl}cxc%>:Ԍ[/S$9]*l{U_rXTb ʦ ƙAR\H.OJoUNu;f%d {E#\1Qv{*>T;/ 
P摑eOϖraq`-<6wC0FozcfͮuoVP-^ݠs,B4(f@AR I $)- >aƉf+RC~Z=.~n3r!rFV 傡T#"As_b]6tRB`0jX0!+"s&ql%s"r5ˡWc1\sqƵqI":$YJ'׎z@-fRXL Lc$CG$J"fQ6 /k=7ŧ61[(vkQf-1Z"Mm2,^.Wͱ%{Pk=uۊY+x_B1` Za /R(#((#(գ(\i1#V:2JV]A:Rb&y+GEاk, 2HV,KILb$djQnڰw]ުSdNԻPP2i=J`q10 alxk OHiI4, uH@xJH6GpCVKGMXR_t uPdTQ+J>uY>aCb˹%'$|0ʭju)LEO\g0)ȒK&2MkLTۣ;Zg淋Xnе~V_g1h^r`Jﬥ싳.]Y>5-'*`vD7&)B4,ijz+QhX|%B[,"&걲Ҫ3@NV|I ,ӭY `SR2xnk>t4_st3TWqIjVK7g62ZEߥ \Lx_P;rL0D8A뇱ڪDfgLNz%wZ.yGwߌqGrhrBqM?/K}>7&cJtkFv@}l4le(' JXNkcrM1ACِ 鷘 -UF^X 99|rg^J~ߚ sIӏu2 Hأ5de0/3ؠCJ*Ǧ鞡$Zfjey)${ڣc[ԎQ;Ψ=3<^K1FO 9!rXijX5wRL$Ա'Qk!2C EW/QQ jb> 7dS5`<fxHt`D"" FDqF!@VA6u(\*iH ժIsJk6z~9>:ž!(V݁ 1WO4 M$w@ j0a8nnCQ Pk@Э\Qu~>: છZ=;\u+q"P_n %\Wʕ+I7G?\^ s A3ף[YGS|Uyd荃7^)g /1tHR O\th3L0M/f.Ol"r9fהNr%`aQV?c|ߞ]\׋Nb'."?zc^,?===xvuzȯ-;qOϿ_G9qxSS%r%9ȸ=nQP(4ת&Qsm7E5:4ת``n异Cε)8Wj0t0p=n-}neH3\}/pefޚW ٯr~`gg+aa 7uK[ҹVY!W/(\@5ʶ\ّT4thn%f3\> sGnq#uQ;` [zl;ѾĄTS^v|(mJh08.5g˝ǖ|Tm:$ph$gL}軩$y[9dy`˧T%w"k”LcmHjI!\6 r2(s6»H߉t9dݤo\=~}i}Cz1]}sa\;޸ݪJq.pTߣ GÎ(Jihvt림G+{rCjIS.O$h J>V#+Zl 9䢫5K*UP0z?͇ Qat l).7$X]`su0s&G@N6cNi_Ks O|cc"Nճ K,&69볺k4=;dwH^$EҰK[hb>K%8X,vovX֌־-TƗ¾ 3 M\=ӞT%=WQ2OFJ]3Gr KŐP@N*4 64(]bKSfbǪ%Ie5s* K5`8̜ 7DqA>^saenź6 }ů}o5!W! ̒:h8_]#4$J&H'1N}surdfOf=}>&y7{C-3]޳ vls0J@@%.XMpH5&9pd,c;pVg+* @i,BIz,@)1+/vv0f=?Y|,HiNn-r +A)Hu$#pZ24VL(#9faQһL,yXLpLq2 ce9E}C|"%*&h X! 
oM@xJH6GpCVKGMXR_t u%ѵPdTQ+b߇ҹ.'lQl9sC1~Yu\-chbD(P互6榠=m0L $⒉ A(!9(Υ֙"~u$t:Y wf+Yכ+X&U&&&MK`ە(4on?Nw*=śңu:6}"-=AQ:Kz*}m _tky&H3ؔ6~tu2T_}Y{_y'+٧^|> pW?=I , IWGG?>ݭ_ޛ?=|Fcr}qn.'}):OJn}yvuz^t7.@հ&n0ToGT/%pN%Ovt73-ž->tu婥޵F󛱰;nH.ՔO!X^\U<&cJP!Dk.%Y[|NXB,.кzxZ s-Tӯ˟V6&c=^QN,Xb+4!űB跘 -UF^X 9~D/؀br?·Yoل$ՉG jE &ElOC2;Kq >ޯ"qlJAHUiFy)${ڣc[ԎQ;Ψ=3<^K1FO 9!rXijX5wRL$Ա'Qk!2C EW/QQ jb> 7zDsyIuߗ5b;Aa-"`DgDp-}INyY^ɭrUϦx5?NiΟO4zS(L>X/]LDE'89w 5i5A6e*j @=@J/-V^L@M0ыflf!+ 3I=D; #ZOY4SՙjMDT+%?C4ֻf+wQhoܪZ{I^t{A/tyw~{n31->Ng+/ tr4(HvLJ@RT\Z6Ӡ;# V{&5_rRK+%FZCLK9TEҼT(%ͷYɽPn=6oUY~sS7~;3DYen nmeV(ʰ)ieMPl%Q( PNZ2$Շ 䩁g@uY`}yXm&sVo_6Kv/Pkg[-L].gw,f3ʺYrbjFF9L5*k#PMWԾ櫸]@+)M/.z~ ~\o4m^nH^xt/`CK+lj6Gɋ7!Gt'_- -Η'[=R6UYi-ڮ[o݌wqڕ28 ?6|2/pvWNJBkk1rZ?-,jvg=-6r\e(]OʤRlκdEt.xWES*;985&Tjj TW|ulWoix>w^r>T[vk;֐}>/(PUsKwm^ư+XG>w1.Tn6?Qm)8*0^s Ү /Ji| }lMPwUFy6ŕo&~ ]U)WZÛ8xm=#w*U⮪ֆKn+s*Y`㮪>v՟biRjtWP]Yp[m.Kuxxlx|畵: &8)_94]?z:4hxr>#;TJ> +?7 m,/ ž*.nJ{҇*M@7aQnz5e1IxfQ⯉U%3|e&E`ӡ'{zax"'#Z>n72x_=o΅J-$?᯳ W}_׋ɷkzTBߩMz?姝N2^*f"goHB 좉eTL#cBvFX`uFIa|*Fnres%')Di6]Uq]Uib)n+Jk%+u:Ù߯o.uή=g~΃Tv-"i/JG1Ɏe(\!|^h[?:R%LL6R0HRFTL rV3NQ烆PЬ1BHt1Y;.?diqO{ !-ه5U-WI6^e~ /4.;(2vޚĝJ"sI2~eϜr%u"'PɅBfM4M` EN;mb7`;kHdGx89.՛@Yɞ&CKzU1P>*IEE<+.' ج,$ &瞔g R)*`?v)Ȑ,}pK,اD`V_7\klH) B̨SJkzᜈ/dOn6y.} rDrc]GYx] ҅ro*_GF‘kkWmBa"enسD `"6i16RN-;}`XɃ^ȶY^^ j`+av_sӷ N`w׉۷ֈm߾NJ5d[ٶozK <vT+ٳ H@(36)sh6 P!t)@ٳӪ7"%7B~k;UE3Cw1kĬw bVv2=QeZ+tOZZAmJV&Kաi.Z-nޫqZ0?伜MFv%sE xn5xef. Vq+7`G819]t4 ֌*_ޒEo3bw󞍍AXH{cA9q%VE U2PY#U oiڬ_"Fi( &2k[L)m m49:'|Y7q(]-C(>e8`{{a&zhFv5<<%YO8JD"d9!- /ޤduX\NXMrA$P̿w%٠L sR7.*ʾ,Ba'շgMZ7f)'iƾ mGwu"ܑgc_Ѕd=6Y.EI4!"HH6A*/6k#$3{]!+cTmj뒅h!;p'<Hr"}3%vyK;ڮgn&'9=S%m@cr&ť!ɚj4EAb )z#+d %p495DOea/6“ g؛8ᬨmWcGM?gGlqcp E\d`&:;PclgdSJZ1}47q{5BJ͗4_2h3zF#YlF֦x6EJiWCYhzq HH/;v0$NeezeNl#4V /V&@RPԅ2KdcCCNw 1U]_~Y3-8 H U+YPd`LYZ, 1`ܓH9)gI9gtIr9H8/[]!I 6R=p9-`ngrrrcm/ھhN!R4ل Hm^䢢PY8mQ6?? 
?8~AR%@ dJ)1I'0j崂L͐ܐfL]4γeI3e**I:ɢ$go-Fr Ѣht5ԵUfi'L8Ha{F[bXrQ\qI+ C`Ncq&c+{)'4JIo:W;OG2>d }@&><=RsUWg3.6c,XHMooF}cW:]^CӐؓ|azxs8&O2Ja$eJ'{HBɱ(-El:'Hی&JYQ>6pŶ/X]8 k%{m-:QJ%--Z|Vetai4(Ο$^wDbFDl ! ^=>oN:wrlZ[H=K2& ][o+OǼo}pϋ( %ef^\tY\$7(4SXR}Tr*eg!u9mu+RչjHphS)%ȿ&l5@{}> Ω{k‡̫TO׎ S_]Bҍ3m%Cp΄_Gs c!B3~U`X0{Sϗ=hlpUr]yk˰s+]M͠l}[sO-mO&ciλZ|،< Oۋ^lv'@㖡ئnѦjDl%ʩPT-UUrB]9^XO67:7BAzGbxXue[ދZuê(@E> >x̫S Q2=:!km '?LuhC %_oC1щtWW@uzڣV=n׾G+׾؎L@L+) aV클r͌ (s&50H؍CB$,U+HFQFy \)/r?'{˚Z]H?DyМ7ɓXH/G^) E& uAbEy d =DdWy! ,UU,Q'f* 04kSQ UJ: N6'W|(wyMc ͓&~m{RMt9nO]FW]72U駾Ϲ̻}nɽgF,ͺ*7"=S"fȿXۜbmyOK%^{7d oO>8 3y>4y[ݞ罜Ou˓o9R&-wۦWCwbIJ<7橉YX|i7Ktߊ0U[?\,HH88w:ўކ*8(7`d~U޺l7QS4E0NF;`%MփucZ3xk,jS,$W|r"T&*Q[BFkuTIcr,u!0E AG0QLD9Oi0cJzՊݺOoNU)4Y@l7bLR葤6')%07+3URwzOmxXχECȁ4@4ah>@AG"Q#y=j'"y;L[G|fN@^38dd 55Drוw2h*`GC!HIJ1*QngQB&;nlxe*kpuy~ߎcL#} Ji^e3VIWlmO|`.y`BВOƋ;h}p V)X)kT+.E<5%&K:ƥ PHRR}9O,yHASO))Jv0BdH1#!Vrd犥fl{%dS cLՌNy֊7Աf:p1%CZecRPs0H\@ej>1 Wc18>κ\t6ΐld6I/a[Ic c{:2COw[Ƕ͠q;@ہN;szY"v3ch{/ '_K4hnw6w'RVGɟ/'^fp0;7˿M?.22ᗙڕˬԷR.x&~1LڷF;\ݲw;X撺A[czw3o؅&M-[)ʱXYذDp_:e=Y_N.V_zP'W7/|?o9~,Jv; +(2Fرesݥ?:P;sH0Վze""pv6uvl7KVg"0L{ۺsMI-5>[6=WwOw&<5w}wV=xl_6\*wVg^NU(g:~p9}xώRw}.||O@[ǒ $cYFI'ĐKLJF Ǩe'UuA&Q +̒@jBe2N3iS-Dn#kz!vFΆ!cu9}7 PxL;%>K ݮ(O[o,h|I0r>Ķs^{X,,RĈс jA#(#xq4:εs{$EBpJ m6%8*Gֵgg䬗q+h4rΟ|}مm_a][;,cJtnYBYqz&cЅ<+"?W;N$LJyh/$to7{'Ldd@hD%tvD$ [BLs yJvsO38? K\}ʬj YmN<>jyβM%]ډ{9;\IMGTF"ٝ*'1F\d6qL`yG#^nٸW b؁b@J &Pqk"ŘQHkXIeցwB%#)%+u.#pJ M0E)WǮ0:#gbI=% gi9^OE|tZ[qj")U^/O^#"xd KNpg3 ,){QrRxQ<}Rkצy4,$gs ̘l-!t-:)C$DPIث<x^NL)k×zA`ȒqْQA/## %fcYJ9j,ڲo.{Xkwbn? #Qk,'.x.A58}j'uT=! 5J!*&$fc2JPfĸ1ԫ;ҧR9fj 5ItR{UT] ?O~GT# sީpmo3X\j>3Im"YF%ԶZkRH;6i3m{,Mlsiŋf}aţy[0G ]_n/ۙmlYjۍC^:Ԛ`NtLH|L! 
@e@Rj,iqb'8qgkvtx!KbsQViP2ɜ2`&0cʔkd (1V^]K^xku ^Wsךw u?Q<(<=LoMC 2B:8+8BlQ/zQ-x4y{#}?*TG?hi8"1Sz5Ƨ O$j K,iTʅ68%)Le-ڻIErVoٷșgst҄0}u}P_`@_kBzMo6-olIO7P'zv6Z7f\'oƠ}S{B͞x8i `/#dR{j;ϖO FW95~',1Vv~:f~rF UL+-lD]aPY~|D}ܒg Δ'%ϖ,jv<$0y+;Z1=Yd@p%L)MvW9pRb1Ȍ))y1dKfC`@5#r!X)J ':%N(!sr,u]i3r6k=K)W(9m)4MeL˞,E1+ŲƛTT)KV6y+!kdP0U V9{(׊Œ )km^x~5))'l۳T"AY("LI`d,/ TvLj'A@-HQzI^Xyؠ}taݬ_Ġ>`@ 94B*Y&3&,'?4=ִ>ZI;7P4&Jm'Qc޺I^9Q0R h ӎ";' #h96gg /p0Z`00 '~Zz>0%:m (&;Y !HƢ ֊LZ΁E=16e^fz.;㉭.od>3mlO9;.]GEb2W8>&'A+2R3a Lku>}:RݏWF][u5RV(uq֎ΑV* R   )X1)T9!U_/e@-%eɣ9%O0 J$agܰAC vR(4tuKCB`yEl*Ϋ/^G(F'j[7H>!n &p{yquN_ՇtkΉJ_YQ6^ՁzxҠe edJH yzVyPƉ#du ID wQ<$b o>ǕB.2KəeNv;;#C_Nǟj%8M>#?.`/?O.Of$ ŭt&[RN_#v]u orZfzpJfCJҚ| >;yCdߘ [kGf,ټj.vjz|ӸLzZR(vy볎|Ϗy3k:n$zT ^u8ggM7n\a4j> dhC]wrmfI{| lh]N g*|bPb@Y@#y_I6 [/W=} F ,JCV']j5J%RYļVJz6/A; i\AwͰ WStEYGd_31fv^1K7 Z?ǟ#̠$2MҜ CBG=8-8:i" CT"Fȭ0"FZgN(&"J`Svtx_IZEf1+;Gs搾i|Ҙf">|K.ߒ1Gr% XJp\zbcfb2e u?m/o^Aӎ6d]lr)ͿheboY9޾ٕ}cSv&/e\JBJm_`_РwgS˵]RET,#7sy=(5{QȣG)4m;A0g XUr ,%Y%;qޭG^ĭ yHܢә""y RjOD; c3r6k+?6p 㹓/ijϪTǍ֪:/mpX>}2}nݘZs&ׅ`Ƽ&r9Sb^jwP)]o^/肎]%_AUs2vA=ת!f)'L}`<6M&G~*+NeN}F0or.ᜂr 6so*OWӅJ{5iˬ7t5=;2}0FBׄ’W&5O1 qLmjsR*j:<Ϗ`wEzb>."򺖭d^J"u%=Myk Je+=wo\2cU(*%YH{*Z+<zjk;^[ȵPZB^+B=ӀusS|UX^.xÌ ?&$|b448b&Ԉ~t4~˰{}5#fb\JaNPiyU*T UaexW ~NyQs}6.Woʅͥ깘U8_u{!b6x4@c4Q{jҾd_6_wC!NhJ 0_Z2gQ=Gl潞g2y;6ZKԭu^ԐM9*1>$0)ypΊeV@k!oR+S4WӀoŮoFn̵w9yeٍXpư5zp2G|p<.\tϤy2HkHzwnеO+&( I,rS^0mTnT[T9Vo(2* WUTd Y BT"պ7ʹvB)U>]iPPe7Stܽ%Z?>kwzBǯxoeN-zᄥ۴V$ͳ6 8axp0lϜU;yg6;4݃tlW7E2.K6jIJ_.+zDngeR6y{RKʹRπӷYv\(8qõ\g|秓#x)#ךݞN>ZnN'G:8Ko)ܜz[)|0~gjS\R7NUYT.ym4ɘ#k8e++ΠIEkYn#WuW ģ#fvw$2&e'LI|Œh"KETI 3JED^SM%甋phJ|/)vK4b^F7]̍>1y ,뾦_~x%#~ͥZl4}(ՃR@Bo|xU1+>Q]+vڮXkbǭSϬ;iI%]7芵]R+vڮXkbm(LW+vzkbmW튵]+vڮXkbmW=[l^=٦uڮXkbmW쨖WToS^}3R/I-|ǣJX0窱>0%Ox4Sß!-߱)mi9#؁m*/j7fOwz~CRπ]w.>=fUjtӥ{o;C Wf`(w&Nw]YF4wϟo{wڗa@هEEv}s@!0Kf=6^Qre]!_2~vaV޵DZ}L;Ǚ3<~yuJsq߾ @i%O||9nv>[/Aߏze7n؉7n7nG{:JznD:2q'>>&p܎IZF*E+#Y[T3HZipZ7'm1g㨈,s]+` I ,DH% 2jU[5}bDFQzÓ-w^㔡ɜl߃,ŇyOۻ3'}Z ufI&;Kui,ƀE5Qi}7&ƻ~'˫O\l`ZX샄 z)#<x $")eaU`QH:Z5ZՍZ5Z5ZUdL.@r)&Y`F)JK-KVI)l[JT!KCILYGcHD96(AҘ(6 
\S?(~YqyX\&rmj5_^բ^zH`~VYWW&ۊ :M /?,^Jc=9 ن#jr89k/ӑ<5q<'998iPA  J"Z)"qp)2})Σ.CE 9)P8T0` څd,ED ȷ3 og慔l; g?ī5dk;%K[]ԑ\"_ieA[jVTaWh`P??A(P$r:YY6IJxR6$D.FbTij+&8'#TJdm?5X%R9XfR”]%Mc[i6; = c-`kgQLs>!!Ĕl-L5wi߶Mo +PWEAX$җR:J*yUS\<Ⱦʻc r,)^(f?&ؽјlL8rzJZ .?YF='RAl,Q&mHFdcV>}uo۪ ;rR,=qJH$tbyݴ6$"Y xB Vٺu6&I:l2M" 2̖%H=q@-m{Өݲ7u.G|2{Iѩ-ozjS3Q=FLcO,GN>ΈU6MׄXi+}{r>|2A9Eю/2[xu qs#B`ĬE frߣ"#HDD$aɫ$e` R|kVُzׂѠvq,jƨ:jwkƓP9}IFrBqh21Y[ͱ$BԆ,(`]AJQ4jr5ZEʣ0F1Y bl͆AJƱ bq,"ƈ;"vDd(,u,QڜByzCYT@9 0p#̆m}KȦؘΨRq&儞팬t<"G8]KغphQ'JxLkfɑ(Ec\ ;.:YDePXxFWT7p9L92#tSX8e[<\xxo罥sϏ-}ȽobIʊgJ! ft4pMx+:u֏]GwޞoWmԥ 4pL^LF;S.?.]~t]숺tޥkseGUAX JΤ ``*c;&$)!P-_c{=.M3M*#rvc=lK,QSZcL 1{Cn~KQI,K(}eOW} d5{ R%*tl8D .GHu!i2!!Ui{nL#jzDO~l2iY(g H_ 9Zݘ^ۘ.'Tz~јF^D [eUl`?RIVJbIoAE7 gtf,[Q[2_~|,֌.#}h-de<[|{o]6k1p;* }Bv 1(R"{EzGG1BB?)V$)#$dů0Yg4Xt̊b\|aobk:;]iTgT2kNFVk&7u͆xiS4젼׃" )?EL%jMW{񕵗"ȂP 4 W|$,LPOt*龍L*dqm ndQ+.b-a@dcQ)9_9.uVٿFr GnCl>xf.K<~ys^SYR3δ  mvݾs/En/d IC! >aTzJAcn| #Ҟ!*XJy*h}MtRS6}*>Ih1*#J%2DeTZ=_ $5S,:;Ne U(^Ypc,"4f͆sN4En1.xfzy˞_lp?!ezpoz/c>ﵩ_+1[0F$GxEd I<1.5Ē,CۼnaPO1X R@t@D(d*/sJAFUPnl8-x#zv<-R?{#j1Ţ|~{\#3E|Gwa-[qdmD(xe1H&Z/zHăV=h7Ǡ\T`5UlJ3{0/_YBq)SH>.Ӗw)vG7_/9{%ΆCʖKzq8s/8MlF;7AP*P[烤( #i4Ё @~D;K(L! 
L`1"Pd[5LٸZ-!pdilZY.I$ʨUFFll8ar!Lfa]g _jՀ==0̮+ݾsYE(2lJUF`ASL‘YH1a?#}^!i|)`5ȴQ8,WW]h忻t_}r4gZ:D#Z:̵1ؙ\YpWJ{F')&NsVm.bo/{'gkY..>pܔ9Ob}K©bEԅGd{1{C*B޿t21cKG*wx8We?h㚈u.}QC5}CC?S8!ouO=}Ω˷_}E6rJQ1k&6C:O%X k҃Q3!̕̕ܨ !Յ\Kۙ.wur[Z]1M7O7(~\07۱L [*tyOŏsm/\ѵ2.\rUr_;+qBڝD2OjM&aZXH`oGs-WJO~O/$ 10}:^P&d$`Ԟ;x*pU *[lsY}@qbęjj1TvkjNV ;uZ=U2k*Sf}]unD`ML1P?7ܗ>a1s?=@qםM#ww­㠈$EG޿An8 vz JK:Gvu,Yq~;}wF>;Mt75"FDάh#]sަGy}HTۯ)9 {C5q | A^ ն5\rnФLq|1 "WPZsbɱ%+ո&j3űɍ2;*N \ĂB,_9hpqr+ׄz}Y{^!K.~0P褶/͕lOT]Ҡ#=orLʶN2D'%,1;燚}9eԘ= MͪUrTVI7եQlB#N&Z8oIk}k5x &@Q\ #H#f;qzNS|nZtmWcuTaΗ:ٲ] RmI9IifL1 b{qeSMNn#x38f"s F'$Ei!S2Wa=Mpfb[Lz84l\ɣݳ԰X7 06Yʹc0&cw!0Ј*T{}f` P &0Dy"hNߴǩM6TeG`3<%cNBk>XE|jŇ 47-wqZ3 $m9ӝH9YC%x} :b_-Ű$Hѻ);RcIE{.HHI\Uc-"欍j}%5.Rjk*xH,ap\mV'r 5*"/ AL)PA5ݸf|: DnJ >{/2.Yc :`ў%C"XdGoO ٥fӄI2U0_@"T\ Ɗ-oګ {g]ǯ:68ZnD%Iwa.: 噩tq ꆣ+yX↲.qkTbw5 52P`0Tg8Q!?kQ:dZ{f#*X-JP_;wMAIU`l)s{1%'l ZV6JJMIX&6 Ȅ6;΢sKĶ[% ecL+b|e ZC`!S l0%]b-Vch!6KH iK#AI1TTT7Vǹ`fı.nVAP SѝYD\ ̑f+xYgxw"=@w/: ? A-F]T Хj{QRR}j:ppq=#1"/On{䬒%9τ@e"*&' AniF@BCk>84i@g5t_n3TnjY|C1&@Ըқ2‰BQ.: Lb*t2=VK-x :Z|Fw ذ12,4ƋFeAܔeJdJW=$q'2E2|x CNƪE ՜@H@d"UAQYBfd8oxƁ1 ^BdB2,یS 4"Xw`S:bz΁6D2D-1=s]J0S!v0a--%4ܖS%3c[   WT(-f9ceuQΨ @1by\ NKgQh"X g 2&e#kvpq3;,  35 bDiv%&P?zP*ZhwE$NmkByذ( !S7@>f5e1 @mD(ؚϽ nwKctўEwiP– Pf% M]!mRtұVM joQ@ H {80,!0U FkryXq`<|e|hjo{MeڳI\7IQR\]Aݴhpim& 5HSɔHuZ5;ZSd{I牐zh횠@H0qBϟϏF7-J3bO6V c A z آ5\͡ xy܈V]~ў Jdq=;2X $0eR-1#K+@O(XSag٬7İ4vE vV$Hjc$͂57+)I bX 0p Ch#Yrb$!0t>yU]X: \1ٖ*hZ ^7f "7gjcRz`=ɗ@5)A W-]A?9'O=또痝 vSU!5^ vgF}pBi  Zr,\!-4m8EOzdǺ٬fpJ7a"ЬƇJDZT{ZWHO((q˱ %׆.I[ 2x?\( E}6Ք rtiD|9\Y $C8C$/q$D Lb 伶\X]ax<!G1|GXkl񤯯|$. qh$~C,n[>@; `v euil@pgyٛ=zXj>>/Y?s/6ce M9eaCXlwnuXIsoonB&TCŹI)&j96·B/\/ܷ`q1%𘠴Y;P>uZ혆+qB%6/Xh9o9~x>棓ta*.4ٛd>ۿ«~m$[o#ǿ11>dNlpvKcɑdO&ݷjɒV25Dž [MbG,rnzzh넻{Kc״4ܤEu%o_u?%' 4PͲvNG~l+ lzxn W%+ԼQr{M{[cՏv@h{ouϮ򁊣{,:C̃ 5寊!]-i|KR"_1:j:~RG c3sCJ+Q:)H@JR:ҁt )H@JR:ҁt )H@JR:ҁt )H@JR:ҁt )H@JR:ҁt )H@JR:ҁwLP0v)J!J՛t(KϟV2H%NX "I4 ă2ʋXg D6@<|ڔe|A#a~%o/qs.^|e' Ud %YPV3IJ%oFyĩp;Vf!cUI5֚T$UT3gT! 
b8@pW [Sl Z >_>jZ^!fW$у#_I'm4uҨF4Q':iINuҨF4Q':iINuҨF4Q':iINuҨF4Q':iINuҨF4Q':iINuҨF4Q':pC'][xT.w~W eoѻFQ[2u;I*t#%j&&w1WrM ~\yv.-'4VʢHƵVV&iD*Òȼ0SbF*j5Wks k&ikoW)l]= -qnʸc'j}j Fe2)S)]jYjn|Ҵ Gqm瞷mg9vL7L_UtQZ$s! \HB2̅d.$s! \HB2̅d.$s! \HB2̅d.$s! \HB2̅d.$s! \HB2̅d.$s! \HB2QRfz1\/֊W+7˨\?u )\d|] MFUF\#.պ)/ tnHtro~Z {+y6r#mg3G@IG"I2e *YYVBdP{!qLrr*(@1]Q^͜at< _7ָ912KF4CðMYnh6,V}Vrh]AB:P)4)!#u$f$Bb4g%34}bފ}9}'9yO!PB@0`xBf&兖T*h?<{;yYytjF3: 8S#:i2Fep؝E7lɚ%: iTY7-ΨR(X^c: %E[g2@*Zq` !@B0 ` !@B0 ` !@B0 ` !@B0 ` !@B0 ` !@B0=]Zo\Z1)KM7pOu|LÛtp\v8F ~#RGĖWjw1 Ąͧx>U ٕtITFb_J$qNgtH9Bkvkߎgpz/ݤ]sކ{FOd Lcn^gW`֤eI,t ((4qo tTB^\I[k4ϱWާo=>&nn?KY.טswh2_k?ax(rAŧVu\,\Q~@NjoMn}|j/?&7PCvp?Q?~$F-9|eV%-wvMn9zr5?A@B4ݧo$z%  Ϗ%,1oȤAnbeS?[Q#IsHZr[νIAF dN#&C1w'WWC=YW͜H{]pw9OalQH.e&w;j-nvJd.쨎:/oh'pB8iY̙I,GO[$D%<)/jGbGfpGMʌy34kK+ٛʋI;Jɮ{0*yD~mtx_}χMQ+)Kw6hF@M%ٜWM{we;&=sO'xaJ)ISirH6Ƈذ\F2ASTKjc'eK)Dt9TJN9g8>>E,E2e) AɖCbX͜E3BaNGF3etzē;UgzzYuYY9d$!,zO˾4b!J4`ԁ }jlYW3g~ύuB2I :r wGs#a=;Bo=7;tBD$$%͢CΣl(ʦh(Ė2rPBa|)ݫ1BW`>eUn}{3j3rg0oeoE+xk+".$yRy'/ۼYC2Pl49"`P-w:Wp CۏeZ)ʶ8i41췛9q ^TݵҔ OQ-2sDS&FFRLBLɲ%T{*E* aT3g1L|yN<y"p=UO yPb5[G{.jrs=fMGj2H,d(Fj-\ouJY&HXbv&q<:bz|opM;5e19xu6&A 5ԫ(X\YUp݅qif=q9~gɳ%l wF'ǃ,k/H$&d 4sIt$&%W{:vhg?j%2YAQn ?JR '^:pWipOMZ|nET69p9i)KD"Yh-O^+ɘH(nyBoB)폼G8v["vb> Ol&GӰ_YQ X kb@eNwws1+xb:ފsR{0 :oWcY7_68`9*NVS7G5///;JoB87 0'X¸ \2$#^ *'pnL&qb8q~f5^^ކJ2^4pkn0fr#ϑ%"DxѴ4eeeu^i!jiqw{.LjmgX7.<{_"zA=k WbYn?ڇ"L~)Lg\-%L~?A+J^'zB8vEMe'M^ߖVR'8' {_ 4._uG~m߫Œ Y&\˙+Wt]@5-f]]~xuz7[z;ރ_]t}^X%ꃍVxrfق_,?U^}"M6&$$/*ttL~93L~%չ@~_)\뙩T0jKs$9􌸔{n ~ŒQh`H;d@LÛ}['o< :*Ѕv6EI*m͖:o9;)T Tމ c p4a|7/G %)s*1%A]E);[i#$eZᙬ0mPp?gj{8_pXV]),XatWW[eR+vRDQ#ihM82g4zꥫ*ccuA%:쭥9bŽ@! ZLQ!d_ I_Ef4iKx٘i4mF8&"$lkɐ^ʐh2+!h"xMFTD޹LV[5- )Te5Klև1ŁhF/F0էFmb(d-JRZA$J jj|0iz!rc{J `Ǎ7n-G߮Oѯvr Wƾv_c_n5Nop9={ʺVT{mJ_~zEʹM'>tuIh\q+q|l_+:kVƞob@W5EaoJčWMZ;ԠI vܖ޽@׏%qWd+|vxgu- iV,5mYuym,D+{W7g{qYmTo8].zV8Di"w02Y"aQbs 5یUDEֲ1hmbOFl"xE{IQ>ɼyrgK99ͯ '<wo:M~n~}Ϳ6Ш4yGQ7n]5]*7r̝^0jO;7t&`xw03U. 
3TeCwû~ ^) ^m(Dץ -I?/tf_>cQ)4F\չt,UPQ4vϱ}}E͸ّ,i(n1}UHOƢ9UbԚ}fu2*jΙɤ,i5=TUQ#sN\(9֦R0TVR mB>d^L-B {ovl,ÆaceM:6wTNxE ;Ŧmhج c8>k/Pj3!MIc1:34&ÆZ|w&a)/~ `z#9g˳o=5osv]tr7K=p~\\7 }y2ߑS.TQ:TFM&[ڔ quCh iPd1Rj$N  @SȚ$.VEV* 542&-p*a`+OXXx!Qo:ؗ^^|[dNb#LT1$^2DBDX&V_,hUγV'%{es6 .{WA۩8* q`L;gJ9J;vE00j >ge0`#.[g}BdAÌkQGo%w&Ӱ8*(GN2d ,,Šaɋ,QAʥBi{#yn8DL?vE80" 'Dr%V^RѤ}ΐMހLPVbIldDCMV%mKp6%S;cH$q6ez`<_X^u\\tkSs0-q m&>GhxGGWYP%fQ""N8<L;vC=,h<|`Zo%}V#G >/WV7tC(&-W~>Gd|"paoBפjd|"%7A2>Q9<4]^o~GI̝Qww]#w 9)W9)W))H6LV_:l3Jyk Erl+'t\)%H: G4t{;c9?\k[r 6iUFW)#{Wk [wQy99GȓT.M~2 l%'rGAliC-aI fy3 X{\qOGy#%Oyucn_rRʜ@^:FDnT)*!?hr9(1Z )15m,&6hc")bqJrȖ|lM Ƴү.&c]!h"xMFTD޹Lll(eIW%zm- )Te5Kl17jʓ^{`oK$O,]IJ5(”D}X$[mA/?ea~0iaz! $?}01^z:FGxw8umFO<h}XdS5IS"*d_2b*[Q d$*J9b Xqؖ…)_OFGpxō^%9ߕN!+TV2l!ytٔmѥy.F @ILsI.l:UIeSXz|^|ݓlx?J,+cjLLUKBQahc-$jK&EUj@NDd1H?|PzMQv\xKx<]>˵#B;?k,fz0:w}$tBW }D Ή t%;9ad/V}KzpܶJR D%%ë[OHE$[[3ϗuSKű7<"Bۼ[b}~s^1eY>9xf92뮴_{"Q?GFʧjǙt'jI&UT֊ nSH7<] sy^ 3>oNhB.TV OLVVm%*LkghHSystnA} xV)ݟA~<3rgǴ6Olww/K.4.pgɫD6:&R"jNʱkt"<"4]^ 00p/*APk bP]lAP#F]<W5Dmv:ZB魋Qo5 idYBt0a`DŽ~D amuٗDt\n|ri־Y"9ڭ}p?/9;68 y4PmV*SWt?; +$,hg<lyTc#L!OATLNck&@])j,D7:3IL"O 4pcAcp6\ddFIS , VTQQJ#3{cZlm! 
=$ԣ3rb{rxio>%@G^~0&O9 lWmjHkrhFM޵$_4;eefD SObylϼWY0MIn{~#ɢDJERJ,J ò*fEUE:F_ENDx-aزlV%=UYtPiʣE p_瓫t?\mOl<|>߇W9s1YKW.4`W_yr~9Γ4?Qoq]{c wܳ8%Ⱨv*WdN2'эh-SKㅦC9|dt*gaϓn8ÎS6 /=>dx%ڑ7a9!omҊÆuo-8>\mH}ؐ.a.[Iop=%J|[ CQHZTR a[B/%!AJ_lHkD{5x7/b.}_fRߦlE}lilϘ<ߙ`gM1MDW\~GZz0'6ysftԯ-}qUlp1ă}Z[귥.eWnˋC[L( 0lЅ(CJ2 %fT2,ϔʡI%*KИחBS=Rwox,\I`m@ r9$ӞiMfJ[`+E7|j?5Լe"/_}_U zӹsC;xrIV&CQi:  l Ƞ=j+|2hcK@D>FR2 Ì7ܦ=yo9b"r8gW U^V36}Kl~ r<~_|w7-WP,[f%Mqcɦ_V,=c?f ?zr5_W {IV2rib]w ڟޓ7v rU&>lw^aw];#\j5ڞhVlouUJd;3p/pSD̥fkg]}:ծ}nJhO;̦~|>/%أBad-rkbsϯ}N_䫴 \#F;1"/W(Ȥ7'%ۨV/-٭^L[g ޙ0<dE !J#(YQ9T #s:mlY.G6瀦$٭cZz b6~~u~`3wo>;:;Z<;ڸ)~}W G)K0E(9<)΄ߒFK Qeֆ5B>Btw~5 /d[}-;-=-;-;!zL%D^2IUFG(<j)Z6;a#7&e/d'#>C}`F92JRT+j[Wmjʼng3i}TX2]I_Ѵ}ܒj$)I 榞`Eb_=_U( u1"8.F\SbTUf][7H2^Y=*~2pU̵TbM׊`\AR b+2ײY}nt*V"4zpQO',Zs*pEjaWJ\E`emm+ ͥޕ>}kkuu^8 :'/O sOgN I2LRS+2؈Yj,ɼ-)V&GzJx{*8:\=`Ɏ W3qakaVJ6z\'F&嶋E)G/sv妳▴qaRua[g,aK>\X ź||:ý{jꄰ|a(U6kౙsØm&ـ |FYٟ_zVkwqyՇw4.}j]ɫ9m;TXKt} BR'B%g_eO]ꢦ/ѸبZO//hә;Qw6t3uZe*ZeSljMU@0-cESl*ZejjMU6ʦV(f=)ּU6z*ljMU6ʦV*ZeSljMU-6Wmj-٦ 25ʦV*ZeSlgGhj5r*ZeSljMU6ʦV*_C|cUV*ZeSljMU6aUAJ[~-wEO&dZ;Zwgc Nc4#԰H|!ye{q{=y N$2N(Ёy .Y (Y\NNEUyg`FA rڂ-S2Irzf(q$^yg5s02gCw5]k;%K05r0΋%4.jLöČ2?Ďݲz1%;Bc(QEh:(H^I”JyJfrs)@Q]J\55ol@ K2X-XyT32dH3EC4²4 "oh<ci"/BJ|AMcڋ&|Cn*hF2sR.p]Cѳѳ/EfٹlE.ȵV@òj]QiO!1Fe`Wt۝Gp;tÆ@֤$,Q hN*P޵6wЭ~VYܱԖ(8Փ4/ɈkzbfQlT/Pg&xӻqrq^Uu)5wI֤2DUIÉ 霹 Ѡ-W`-9uښW|{agwɘB YwۇIl<[dzTr 5-/]\R 7Z]Z㯟~@08Ŏ[#;̥vTI19$`Q YsNlyN#`ǔ`IѤF 2p'P~wyɵN;ane`{6f:Pt\??r:|N`B"dD'=g#߉,鏶BIJFxnK dtff@!\`do "$$\ȳ-j 4 skdv_\"~|BャNj_o ލ^7ڇ}LCy":t%^AB`5 ډ9JHUEFet1{PCDK'Rc^DNQqs.HFj[TjX,TP5,|V,h>R~ˍu}]CFɷW 4bLu}! "ӫQ m1.$:;%v(>dK|u.K_J㸠ݳӭnp"Pl"f尚[~ *YjޏF(Y{n(mH(_@j^xKv8ˏ'N:;|iO? 
̣m)ʾ}i1bcQ1T,t6.r/37> YWHì7)"֥ȲBڌ\e$#PcBk+ rm j_BC2z:Sy݌ǡ>e)]HO^ٻ6v$W`fvcY p9YŞ92 bD|$9[lI-YiۭX جXU MRS6!X*Dt%hF^5DeT9CE)=?ShT4"h[[cg 7rL|ˀ2hNj[g5Nl7](7tm=[-,Sݼ_,DE,^Q9%iЗČ"=# lY˼ mIbpt!D[ pHP+ek UAwDno[@ٲtv xndY>e 3jWr8Z^?Ry,kXٚΦU},WI[4P6L~iB $@hKSN- u|mqWn;E_HXUimD Maj+(Ɠ!|4$=G8udMoXnw~:y'qtWc;~z뱜 Xsщd&o]H :**J$Ea 'DxDf0 ~1AJh (*#R2uM[~wJogd㩈, 2dﵲ,Z9]>Ik(V% ӔӐ ;juSO?.XwEu2;uOw=Lnj~~c)z퍖mtA٢ k,ZjY0 ڍ\6 ab4V=A=_zI9~'8Ξ9_%E٪ĖL(F}OQU^ HV(E]@ IvNзz]}> "^JQߖ8&oD/U7};_h3|Oƅâ 8**H06z)QPʑJ K%_񃺽dMkTՈ[C!h* A'oR-.<2 4&Q+[_Kw׫/]&f9)hNeB(Zʷ_$uui7m_ggFg.ֶٍwv lմg|qQL-2>B*d5J×J4vJCJ{N\\M]*pHSiV Me㶾dH9}}H{NV>gxu~k+Fo{v;_PאA7]k1Pr}א~!^rǰژ{D_<챂:ȫmɞn %>rCךnȆM>v75^Ŷ~}qG ͐\ Cȿyõ^ծoFiĥ ֦|EknݣmŎ1p30YLҺZj ZߵwTn zĺA.VlT/wȒ}1덜=t@u8\rWJo~խ]ݏt~ݱ,F\ ̰'%*WyPS gJ֢ҙDQJP!c-FCx+o:rQ#ض7EZ%QyG{ R{dA`K%➤Ȃ@N*޸ݦ < Ō{NSZ՗bLuMxJNk !ۘx{+͆~$T'W? /L폵5 ygR A%Yp !b >GH!i2[9NXIHXQTe'Q1@): Q%Y{U3$_ u$N4.ÕVz5Dao R8ؼbt$'k&V`yQN/Nz㉃N;24mlΕ|ƤηӜS!Jp))vV vS!gXj8"}4pTWڇ2pUtWb  \Ur?Ԃ:\1JW^m*`!WJ}g WVUrm(Zz6vzjzWV3/־8(3կ?Ol5ۜ⬿c))F^J:im[>x jHռ/I;QѲh9>_b(^NKf} i47jlb>y^Z3ƠiI(9/dQg -F5~ ߰˺Tz[:oӋRShi@EKAhk0l #1ࠎ\ɵXtJʭ3|t v5媌ѯlv1?b⧿m.&qJ7 {x;YO(6Sn:}ގ^GLF*ӫu~m7Wfs/^\|r-:ٿ L̖ӹ>N?1[ K8a6]J\{TLxYU:6b#WoJKoWUhWMuׇxqN21 pU^57&a/nJ>O?4MG[;pj2^;,=^ѯ\9G\ v!Yuסv<_KcVLM_ǚgS)d"tŚ.A}Ո[4ԛɀ`Cݕ!f%@Zӏ{e |E9)2ʃj}7qkPYmcael, D*c34ˤ4$c)c')d=m*vg)i.»#a> J=J|mr+_X^\$Φل6!ܛ[jZМC [Ȁ!iJcP&JhT݇fBX2|vƽ3co 17^bi|72fWw2 !7I!/iS6Frbf+9.lL t db.EZStD5C`ۻN#={:4WM7h+gn*F]ta=S[ht[޴U~6MOg5o*uYi ;*F sOk8g}^d͵t2zXz]JIzcwoMweDi0^U|yZI-qN?`idۘ"SFh,H>g%=D?ËaWÊ]s -wxm 0 ZXȱPK1yնVZF%@RJX*&n{HlQ D9SƘRƐTE|5 &r=FΞ wB= ^0_.sjP˽FS! 
6xJQHB2j 3YiC#d'JowdNo(c^+ʩXIZC*`$`AURN|;[>)3y<X.n}i-vS}sn3=]mhY3΁z퍖-lI,(֣RD,h'E&fƥ#<<F9"nde0!&VA9EJUiA_3#zEIQ*%K`'hm>%YAUd{ו_rSfL4I٫u|ISDJؕmyntHZDs2h?\/eadZߦ-fT ;ʝp<' Js͕JQyC&pHNhxQ~{w6n3=,8ze/RsK~pG3"2" Zb*o O=Q8)(8z'\}d#FN"{_|?~ J; FXm7nZE7^82P&P ̑:ߟ|:;֏'nPw//# $;{}`yN4ZS?^>O7۽1MMw1+ZH^q{O>hѝ:mw:%Yyk 8G< 2hgEP\3\;7Uy ݧi't3p1R; 8S&>Xaq3g^ʣ8Ku00P:}TYCLfU=Zs=]ݽL}=7,oiMɇ1m""Ŵ NxVmSeY%jV)636iN &O/srF4ϵ#&7RADL^]a tGz_0 ZSQ/*tXvs ܤF'//5JPjPjޘ,# EE)u:o_aQ j¸zq%`/u²4Wp2qʚwY]V_vuͬ=_ٰ%Uի[˺m_'OFITUY.{EIcp_oV-}oXNU.펹Yry_ݰhbSeOBh9zsOo<5Rxa1X!,N#BDHnj܅=YTH(J0@L-EETӫ^otiaŽjFH3kI@IUDB z^.^{ś^؊7}>#Z|j<ҨʞO+ՇԙLi]dG7i—$le,=WGetɫ ^'{3\7X*s`~:OAxQOc[i/&%ӌ0uQH&]2ػAEزę$Vʄ*p _hAL }+t>x Qv[{87% >W+{S*Yx:s޹lw7l/NE׃T-AvuVz*rYeMu'w=y4y9UpG߷z;`P^@P d1,gϢ_ G%p9rj]̉l vD~@`o[؎e9/ac-"*k&J"Q0Cy\5ϜH27NUG|=gZjNE,(ZR! ;#g2`a/QWUBu`1.MSnvKj{"WPHGM ]//15[Czs#5Z0[AZSfgm)I=0궔e.vsjCc;CB+G4V=)%Rb 5>F60},+|CS |(T55DJf. GB .(N'A^x\@:8zY{Qۜ8/.[ ~n'2'b?*rI߽&u\ &uD9̀yrǙaL6o G{'[[ɾj 7W_>: Xp%돠"ꀗn0<;Ys~29&T>ME˦~KX;M\>Y0~-B,PڃlyNHeE-Xx4N88˭{"c(:o@z30{-#c(_SQ3M5J7rk[.[PP mzu#h<~;a~>,2w8:väٍur`|M;yȃ1gLį4r6^ )BaȪn) g)I7!(tid|_ @_B-O cqbi{Xk'\z 7KQ;QVi*FO寮o'0Y& z4wQ6F5) n" 3QSpWxHgvo`EY:NԔJC^p[+\6ko"& RXyE,/#FLzi KG<0XZcE ̍dr{Ds^ (WKM)qZ(K)µzqyVӫ6qtzMK:M]K0Yǧb M&MC!L,X0s aׄJPK.PV)촄eCNXX^*m4U aU8RnÊNzU^t#oRefbAQbRqyf S1ZQϜd(fsVF!5eĀraD@#DJ% aV@ a2Lȉe3*X &9yn=aVomd߲z?PP-q4'ṟ3SV xCOJ<PSrs ;>se4RQ&@V*TFm :qP?&`ƃ՛mn ~ĝFMpH"(\cO\AE0A eX6X6mG..vdVXtfȍ]xt/Py&xHzk$Y?Ǜrsl'2$J  G/?>QQЯBQ04\`D$4 8jӂ#PJ {b0Ä$Ι DĠ +mRt%< PTc1D9 񽷜GG@>I ҥ`ye`^,L:SI:u.)2Y, =zhXP8d+jgY9ƾy"Fi@N7x7%K~XaPk=u:ۜ0^$G1=c o"{N.!Lw 2 zD`D)4\ȼQ"Ol ZZA0Fd!c\ j8`1wa4XF[ :5'(xAc@6qpzyv@qtcU kIc$q5>@X;HRR`14ׂRW Iju( UtURF]=uEzr?L'WWӪX?LJK] iնC%Jj*¤QW/P]!+UDJR2Ѩ4؂SWpwY7ap(ŷ|UU] mZ)x:S7@Gy.OsM?U-JJhZ!a.+W,aC88#aCFTQ >E 5 `1N@ᓢ;K _ 2߽&upS")6gF\fw1$S<$/|q%饍5;&ڦc}9U(]kE)ERy$]+UH;6H|cRW}(@OlɅ7_4}.`M_B!t۬%]ӵ*7iʍS*O_p;A $I(mEF;zwm 7@ZDYvMHwRqwH]@*ͣT/9ݻTGl/S r$ 'ZM<Aseg0EU6ȹMp:2,"1rٻFWx߭C^y9q~žrf'6RJ@4`vwOOU 4Ih*!VUIO$)H 1ekB&9Omy11U&=rOyE'^\Q*'Z?0J1bOÇυ$O'xA% ^Վ^$K{xU2 
"F B1i֦RR֢`dsrmӚsdrov ؒhS`]C!$DfR)!Y͐͐;s[~QW~?變Kclg ق;[ g7gBtj0Wz#`XOWu| P2'\j]p")o.]n N;)!k};aPеf_nx_ A\.Nji>/vɯbweG]3&?& &|BrW&5CoilqwT:~ĀzҷŒ{rN)e9EcG WE_Ejh >?t?(vs)?Tmw9Vg^n=\]9 (gvrfw{Ã1ƙa>c|΋C>[_}qۗ 4XWͻ˿;F;*{<ٶe)"mk'60S2mne2]]s1Sw;)`i~9V#emqwwd $ΫiKRm7_D^0Iuߔ|Ieٝ1m%?Mv$hOʛ:"2L]V\KMRaW*2$W8Kn73`FUr#Ֆ,_ypu,ʓ9W͚sxY):fkyyl<g\DJR}d 0Ƭ8:KP %+v EW*2Y4`\3-nM7Ӊ$:2I &{};l>>(_ }q;zq.=]otP慠l~7<[/<[֏,Hcxgk%(ӚZy:v2dK9@Zw=S 5Y̑7Qh`blqI+X'[cv/k|cqh9PVeXL㥼uޡO,r~INb^%Zvnp'){``^ m===[[4.*/КyLW\Yv>&ȱbo'Ԟ5hcXGGW-I%UCtکy16m0^3RWIdf1KD F,jeKTrOЬfΟVSlVk8[(v+yN7!4o[omXb)u.;yNUA/EM#'|I#FA.&' }H|r"T(G T Dmu J(M*:F.GԬhGt$Q&%',$}31Pl&)_SxP\^ayk~:KdOfϣ6f" C;37«6i`h5:ЈTNu~n뒄]qs,?pw,̂RK rGP&+WsVRʂ++9M呬ƅ36F AnXsgc_ظd%f3q${ӟrKrIL.CU]w?{h ERJt]9ՔQB. Pc(@qp%>яb6m@0FM)YJxȣS-rQ)1V Xb5L5 w}Xd,PɃQ Ct1d PQbty&P)1&cy c_a4üьJ&P򎃊ɠ*XbB>k_:` D'u: alc*)x(f3q3Sbs^m&&n=%S)t]OJ<" ti :?j3nDfTmfׂ0qYg䬏=L %"09"\9ö :R5 Z`ГC!!$MEZv%x8]}^ I~0AI B8x1\a ľhbJ]wUTef% c-RD v0dHQ٘Đ KT|ރ׌1674T\.zG'Vمcͤu`4%&Ae2qQPsp2&wЀ%v5dGT>l4;r+=#\t,n8&PKVl'llD66zr?熼#ӀO hO'tQbyy؋1懚U.:.uO+Ժc RC$08NwSJ|,ξ^./> Z~9ubv, aw׽iK:*|O8)b*PXlV!$`G۞mw$eUu:}*G>@9z7C#~*sBaogg۟<>ף_=|}ٱ+,}qO`ʇ@=~wkfȁ3O>{9i3/F1&yq*۶>bA=q)z"pԻMޭHM{^MTԻuZ ^x㹬#_ MVgѹ/hCN:}Q7t,UPNlJ(rڎ"w,/PBEgc+{1jͩ@UʚMFnn\4%xQia-Cey?eՄIglX1O%p<[lZ_H;s4(6k)  h({q1S+=9@JO x-[k%M_4ȼ}VCmul{ Rw)SU>xj} bP.$%#/JMhJkx-j WqR=c3qر 44T_B?׶#^WU{r#_嫫׫/kWsY2&dXU x1&V_yV9eh+m能\F|My@ߗHJlbC* n&abkҎSvh'L !ؘzRXs%g䔑q*,`-nQl=UC&NDȂ 5HMŠaɋ,䍌Hj979zGS~cc'8yĽ@ً]W*/lvIqu&(+X@o6 (&[+h(eV`-h[,HQ쌽#D͡#f6:),/.q)LKNZ5i__ Z|(>>Fƛ̎|\}J`&`Kűfq?mz/w(ma< |c zmÞ#6Mf&br[|Int[i[oj͐ MV.h6C.M`\{WӚ`I%V:j!+”WB@5Pd8QI{g?uOjtitqީtTorKA*1痷^W_}yW7LÌ}7MBsFϓh!^W֞2AY< + 5f4#X}Tr*eg! 
}s2UPCJ B#HN1F1ȏ >([[o&c/4&'w^ É#w|__8 ' \L2 t+z~|=鮮nHX--Qk"OـB"Dml$0\V@j\m Wd0eh+\Ujs$vo#bОLj[ҹ/k!b5pЧx2ˋ99ﵡiPT_)gDxD *"Rr9 <@6&y6ƣ]ň9yNZ,D pF2r!zU%zGn3q[8'lJ{#)VvSdFet6wTTNcMQ^\P泌\/c/*KTT#K!@j}6.G0,ŸSWI[v;v)'ETQY+f>0>` ƜMWw*e]Bm}/?{WFn k>46nj?l|:WrYWv6-ʌiR)^k )IQҌ5A7Bg:'&U%- Xe>+Ah82rɶQȥbڴF [#Xx䷜j.Xl.}è^@|feƯ/q<33xˆks!-X !!7J˓cPH{FxF(܈qO =1<1$OKE=h9I"@kc*D\K|IÏk@ψSfGJU?aZ ts쐻p옱&>61jw:O2ӌ,nUOO{)AW~ѧU[5=O2'A#pSV 1JZɖ42ꔲLO"K=x7>RO<,(IYSb,31ln^ERIwq <\wa\'fkH^('OfCN3:9Rh`#̵̄!1LHbAYkR{k} +Hke4-1 b j/؞ .,#K EoATi83B% ĜT ɍTp^{eڡ+BvV[=@d:8m9;E|J(}Jqϧ%=[D-a)|8{Z?Ro;.Go4OxBP|eLM9 1zAQ 09 3OF h=my;]2s#,꽛 m uy!T~Yr̄:i 'w_Dw6=U~vz> y׼W^~]/D7:t> WoOʇ4p;?MW6P:VN~8;[}:wOKg4ԓw#6ԌɫE7o\, Wy}40r_t穋tc7 6^T_ܕ(|OrW%Z^xb,яtE2+^Ձauϼyh~'RZ7[~iT`y ߝ0^$yYoެ~'^.Æ7'4G@ь"b%Un7Dư,Z*.Z?d= SGW6%4K!AuА{4Y149P'&W͜ U<(BkD{M~<#̇'^B}װҍ|-u t'_#l<)%G'9PoZtQяn]ck9NH̹E%4Ϸdzlg36?o b#zt ;{ =]eܼZ[OF@ͅ ɸL;ϮnFݔcf 11/9w&vsTL_uzc_se^^͠JTJBk 9 ҥZ62Xynɑ"$hFuV+@IZhcZ6(.<d+|H,M/2I X3LDWYՋK9KLM;#2438-mb` !dglDC (m= c¨$MGW(ډl(x$"42Ƙ!< N[Y4edy+c+h2+˪6*S™ `TɬbʕT\"k,hKf,9,XHtQV;Ltls!~(:t akǭ;4m%M_G㚗Y=샙D?=Q Ӻ1p_$]!ZqN'dƏ噈N\}V=ƀl Uʴo{U߁Wmf Uu1WL1D}T*x$1~Mil[b|*#}GiAKd^d!J|eNhU+ V_>:{/ 0a>ri@riY{)2茗Rx)-ض{)T zyFt `WUAkx骠o`ǡU\'ۧ+zZCj?e[u]AOW`RK]`,*pBW4mҰ!]3DW*pv Z#NW23+H!"rD*pQt ZNW=]=C$]`q \*hmP"Ǟ!])e+ldw;tUZv"랮!]iAuig#U+tU֯]_jteZC| J5j԰.b@?&U "g㼇*.X;Gq?möQKH5VVK1d:y8_Ԛҝ1 \ә|BkXWpL>1a%Cۥg,uw\v⭏>*(tu_0?}~SGW'^px"j?زs]aOW=*!d[ *pv ZNW!] aP+UAe骠=]=GGFXg՝+BY骠=]=GBe vegY" J=]=CI%*+O~e?BWV?tUAٶ,2֥tǺ"Q~tEhd=]=z9k~w`4r|q ;Iifdx\ZLL,u}s-(ҫˢtF2z>PxNM{hEVu$*WζkH˺|M57O;?^.. qzxgqn\zXP릭. 
b-rg._<]R%1`$\yMsMUu:zF6jef+,Z3pH#uSi0nUk .WSIJE}DTI/H.Q1ݲE)8DIEM=fy6g;s A:Cԅh9 m^^lĕX ϕ=(8w1<):-˼$LɦF!黂Ad #ǬYT.!7:KH͝}hpTu,@0Kc.,QkJ:2\{.1@mvH"Ek81\J$0 )E%1`XfD3YpVzP[Rp_Aӏ/&dRm P:Kl(LKdIɤ%Ʋ$Z@bA$eaB(NL6fAYA=Yt1``sLUyhNŒ8&Izkj"ڐY.'&I:p{iai.MVhI$9جjUbje`4xøV{@ш⁤ѝH ihA4')Ē;hJ@?˱-Pݝ7BE,JD\ZJN<M%H2XòRDrrhyw:ܚcaٻ7,+^10v=,`@ V-/iCٴ%JbwI2mH#3OƉQi;z7Eע$S&mI!a##+$V 3_' H/"CQZZpՐR<@xKdYk\mF/C( I:G͋Eէ (хk§tp}E ܔ43HT.5 RBv$JkCvYviF*+:*⢻r -cwk.4˺ F@B+GR7fa=gAGaUhU%`}5 eo LI, R [ljU!RcfQ1LhHpu/8I#`AV$*(k@oB&uTSPN+$_,l*!+u z\E[PB]Q[=ಬR uWh%Wk@ e º7 Da(!$( ""*f"HMwآDy>UFݚ (ؼ($0D!N&燎Y;_lۙxa(7je!]׻Zp v`c{{Y]`!00 ƛAFq:Ҫd :!JrAhRU! 0阊'8$;ŗ,J茸pΠ%TB;뒄 Xut kR([|Pm!Q5MH#{Nw=/ (DhQ:fƮ~6 3*I@d(J+1A2~ȃR!*8vGyPgUE תCQeXYvnjpM"β] [%VM`s?h}Q{H$Q5$YI$e(m@Vӥ[oU4^"{y;orAVmy5,a* 1dkR@~㠝-n| |ho‹t1n^/m˹*d5&P`0uvtrlѓХE4IIlf1hmTkM!JKBq$RVO ]j31iG2Ha#[*3bO*Vh &% xKdFggC5\Qr#b>EףNJdt  J`Q Q%fdi SRC|E7n֣bb V$⤩MkurSp- wH._* Q 2ݡPʢ#6:XLz9zkP` t@ QYj0Q3j hNmS; fnfR;5֨Y 6)jP%HI21KCR 9Ok{i ;z.?ͮ7BU$ж$gJϵq6hvt7UϮӢܻ\Rj_`௷zYU_ٽz}{~'^#&Pbp|1-0F~AoS^ 0x[^՘]m 5{ҰC٘^P oW/԰3zZȭ#ݢ3ͮ?6notنliD߁TC-"X}}CEўGQ =|4܎6I1"JX6*5x1F 3vdV] Z<SpREgO: L:[Sث3BiXEBt+^OnbRTQ|4/CW ޞ M{b: OKW5B= ]RЕezKmW+ZOS+BJ]pq2tEpd~>2 VWHW` T+TxtE(d:A28''DW0uEp ]Z/Vӡ+K#o'DWn ]Y>vBi- ҕ)lt:!BWVR;Y`:(5%uZN1Hh \ $]#=^P\˹g2Z)rM檝{`OeƸ6Xz(=R 7W,CO ֊rЕczKq+I'L5b*tEhtE(w lӡ+%ӄOH]nz*tEhu9xx0Waݝ@.V}8\TwC:_kKQqe)~a规Okߞؽ] &< .6l։õWWVn_Hǫen$8W/n/.{돂t OTuKm۾GzyΏ˺ ߟ?Qp73mxr|/r8w_Q|Uw Zb'(z{[iu;BI;,ϟ.[.OJWlZjZ=js3Ru5mAl/ e\nCmo/Ɯ^WIY7WHͺ:gt@v!ז>j@*:j)&CR`vZp((h\kZn~u~5 ŮwQDꏟWo 6_Οkookztj{?mt|f̯\Mz~q?~)|hu3A^Jx2*:~&;r{v[u~l}0[݅&2-$sib/D6_޻p"]B 8|[؏Mv|>bmpG^tkY__bj[ݗ҇}`>>/_!=ۭ]ϳC(ٶ<|yōx ՍߚX b]͖ٻ6rWX4nJTJN^J膭L$-*=!)o@kdOy!3n|_fW+,M%q(}N?^wnM]jcdmݝ4lu{8_GYqu)y9T?*8З<_*~]:f9-ͮ͝?W/]\.wC|g+[;,.z+\? 
?^Hw~iNz1:%"rO<1_LN`)iÉ8D`T E&*(r|]tz_VpuNE}zu>ʹxM.|]]/Cw۟{Au/&'o=;0a&#4OG"&wo7W>{]2USb~ѧsd9`;'_nn'7P6/[ͷ돿]cskwk-mڱ8kb;"WBZViᄑU ?n~0S];(:w3 " u 9Na,C`/&g5TR'/)"I+F5P9!tXcMۘ6F*M:DBQI H[X2>Y'tQcr9cmyD4Yd px[t%m9Cݹȓz4b`6mAY,њ(%sΓHVū/ /ahҨg^Zϔ6KB@Ж_EIB^+—(E鳗Ve01s4KG6Ț@ b) @R\$뒲e3qN]'G^,'SdGK2 *lJ6'r2)k =qU?|B\"[\I~~]aky.}Fr>fD>M_ũ<>[YD_cS]tc ] HQ /* s4ݒER쾏=1LcYt^3uj>g00>Px/~! n.8C\,TJd.6ro{2Bep$k)z!u8*X!\X)u3#F>^{|ۊ}]l}b_)Ƨ'~uJ[O}4/Ly3>̦?ڎ <;Je ]Xf z5LpL5׷০_#m!݌Ѷ68S5#mi MlU!N*|ߛ*qcG_B mW7UUzֈ.%;Rw!ء ]*XU&*Jӟ%˯QUα揰IUNdK1z-@Xi"gaRƘ~aS+㜌DR)?8ɇ )ֆ,21O,B3qN?]=e:wy =lwiq_q׷}5- d AFZfQBADAΠ3Vl045986K&6^ф1h/)6#zd5a*X3qj wh,C2A( CHRxoe@+ )JṬXfGo߀,fӛqF3~R.8Da 9J B"md\(0y5uSc [P ) ~ P g}G+V~ Vq!wƼ_%./ QRMM xvZ1%%!gE/νg4YX%zEYF%M-hcEeFjEtվ8AC4 anUsS{.ɞUO^dt& }*2Io$:8Æ[Ijw;Kj_;/,#KZJFFpH>h "5as$HHzTΑ-QFqZ7GA6#"PbŎ Ȁie* $IYAܺb8~ P4ݓ:=՞669)E.1\}Ӵ4WTfr]!kfI)#pƢ;-OBhVwӓjyɑ-(bX|;W7E57dz}_v5m *SBCgI$c:/ՉB5&&e877np , q4|/%X?'cq62 `^5(TSep*)mB*>]$I&&,=Å 1$SF:%.b3qN/򐺁ŧkLq?&xpR ֊ٲr_ghfe2;(vt}5+UyʈՀ{^oVN( H纺9t8"UN^NdvH!mf(9|'H, KJC!Y-dLY|61amNMkzұm//tk3데D4Χ,}YJGI%\݇.G(X"Št1sԀ%j{j$9zh ;]@G"aN eEm҆4؀ll~9~'-@[wW  5A[iaJ)iNzB VMw;кY-RM(LG}(;P:"tɤeTʠ#?T?V7q[z GL,4$t)@!JC`]pc?cnzl'#3=L2ޜ Kho*0rZ;j%CTD#ǓjhiR'Yӈ⁜9 ZXȲQK lDѲ%)\*&nxO?Z{\2W,67tKFS!Qy YGE H?.(kDJlI\ۖ@ [DIZ25nde&Hl$NiiuKV7Tz?;0,c<6*yyTv=|؁i`^xއ(+`bMc79[uV+98q]TEP)PuVߚzF]A, D^<2\Rz(j-%$ЗZɲJi vR=c3qNmUf\_hB;³­V_+2ObCο]77o7?jG{l0$)c-5.E`b "QY&b*_,j#$o۳[{]C%{)) 6uB3 3NP`K"=v3q{4nڮvB11d F(f5/4%mY1Ŝ fYc_$U|, (~!&,EfRM.8aOc7~G<d(,u(@eYM`7TrB3`'MGEE 9۶I!07&hNjtJ3 Slgdsf&j-qʭ+ sBS/λq )LKhGx;2&m"PH!0fbȈH8islW=}Oy4&ܘ -($W~e9Wznև`OIgg2D 3Q]rz .ǴiҝVޣ{ySmϳ[ s̷\^FHJ3(-}BAw {|"(l[! >zF E 0(k52Y*u б$B&b Vx'y

0n̋&(ZKܦ8OzN(324ޅu='h02.<861 ˋcmˆ8'3EWMLeYXh iic1ʸuYFApD !dglVG'lAi9Ӝ:AztA:M]FvԢ4ݑkDz '11YGBxQdegI'L-LUHϧ6 S˓7B (0突,+8cߥ=RIu&f-w#4.&mqklNlw}#zxyH&=Y.8p'#AW MJ%"jv>}lԒ'Z*ÜL99=T7lr07А\R҉uW[1 B6*Zy㻴L|Aiɖua,PC@^ yP]waF_GA_=3GRϒg)JـQN4'  j\&1eIN&wlz]Ef?PPFJQӖ.dCR$[tޥqّFDTsr3RzeLt^{% D xCvVaȎƷm\56NR]OW,\Azb/zf~)߃{vAo'G cﶄO \dżPW&_?,6F Usg7~ p:陛s,u{tBuꦧ}16ֺ!k%njcql's'W1t0qv>?. ]{Qnt,W7__ g/0J__xKbĔpy2^Nquϼzl0_@ ~xw n.4@د^o~Γ6o oZ?h@/O~xœ/c1sYUjt}}mX~Iґ .|Sh屩+ЧMI&{smނDVƘg\9`VV60W9b*c+O&& YA+oM%mnRR_/z~QrG @7&rWsiQPnA~_{?~_qjVsX%9OʹE%`,2{3/UF:Vλ.ׂ9{yrwF{yݵt9 5B@ w +ahMZ\u@.t>M̴C-[l/ЛC 4?̄@DkKԶ]Iu h+Uv/faPROԍ<x10Au ̿{Qʷwyhd4a &uJJ%"jv> DvYyq qPh`78 6pQon0gr-ΰDa;?QK2Üj)` s2Y&ܞ AA:ײRt @K)o(WN:pt ˸5%=^•BʴǞ]5F{*_ Pږy ttЃM--0-ԥբt(quAtU9tp*;|8]!ʕ9璛+,."ղt(-Jpè,0״BVBWVPvBm QJ j-)T!thn;]!J!;:@RB2C +E9•Be:zi +l*./ƑRwAҕQ(ڣODcACQo!Ķ+eV>D³5f@Mc. Ǘ gT??|W-v46.fWBL*ι. 3B?hn t1|7-YΣ%\z$|QZ ]QJ(%PXn6BprqmD JG<&e2 xYy<T)G*`N]i4]r wl],CAae^vʒ &ʵzlZD7t<qx?*]lg'AhZлy=vzB{sC" }2M'$|խ~֩ k:*bzXMLVlNaUqȏ"hʧ6+,璥*\v8F"t$rV2^#Qэz=˱- ?#~:X6 ?eusrr)U898*e i ܃%Vv?(>zP@6 `i"BW:]Jo9F]!`i.poZ]+B 1HW3ݩ.]`+/thr]+D)銶"t8\ spcseWInˌqW7#4RYHׁhXKȻ=H6qC$9Yfu=+ӍGqhp 8aJNg|;k,A"8CcʰЖ40Cā' {ĨVR@RiM PVMFnǍG 9.aT)[we_;rl*J[*m,Rpn*к7OAG3_qB+\qB }1JbvxxoOO|ܧ2ya EhM}ăZb( n Tn'J%QVP~0zjȬn}˱$pBnj9Q~呎HHM0`?AWrCؼj=(oNrơX{(]ٜ͸H͘G椙ax#`wJ)}q-HuwQ*g;\Ns_ xD!yi9"C1]NS N(qĖP8ʡv|5e-|Ln5 ~gknr7)A*3L>(F{ccP~ݮ d4v􎔦-In[Id8Ԯ5VaIfH43%)M&ƛBd]lD.mž¶Ar6~YxqDŷ;l,bl&IeFɻv|H,UQ5̖mI3?fh1B 'fxfouΜ'K$Kp 1e'Gy<$ 'ʥ ۻʁԶ.1Qd"H(bkER' 1\@U 0pow9\o6钣tw0]Wՙ+HB }4T=0n:|J3_?NWWC+M00bWWz3$4}ʀt§ v2@}BWV4}"QҕcnEl 3OcHUHܴ@ӇڊF8k SX/iW4xAݯZ*Xd1(\$EmfY5pFFX \d`΅Y SG^hvֲR*A! f.TL( "DVGΰ˄?ir 5Z49BctDW؀5tE(N;)Ԉ'HpA{CWt>P:Е*zu€qvŋF}S'FţZNpփҝ>>>P*XM7Α"ׅMI\` i6Jc|E. 
ބtH'7t4\[~rhzl:WnQ$›9W-j5/'I]csx^/^A]4sr f78Iω .6 ֓-.5Ԑ@] H0& .|3*[JJu\V)u)\߄RH"Pˬ??}glXOk'RcLQhǕI'xO&L}$ȋ‡L4QYLq\Ud* (r:#D) dNTHE`S1/Is/;,<7.yj,%Yl 2?q90Lj|o7N1]]ڛ.fLI*$zL9LP%Ϊ!9|2gebbeB_^PSf˕vW( ,4맗tZBXΡ{ЪIP8UT-Q\Z~KTURѲWEtI.'R9Yxb*9_:[nKrj]5Y)~O7+kZD+oyWM0 C*pJ,4Z+sQd,tԤQ$1Fh57m]F"f-oQ?}D&DG*Lees9^Ok K-Ma[+*aVrwTT.B**#{ k+_ر\ S.G) ҩn*9~,'L_ָLe?#_B-gLkH!W81Abw:v[m$i-e8(l[liWY:hcR~ct%qY}?EvR[YS薆n^aWC% phaW,,쐹}zzq bR=+ dǍ|>SZdS[*!J00a]$U=9Q뀖}[+lY=-5Xږ`Ѐ{ڵ(tK-ĖѱGu _3fAp,L-phj^vߐWSrlqiMym8 GMM)ufZI)V:sٻuc,x6}(zNEq!ı_$e[iS%؍p#=Zяjc KΔsgQrV,f!(Yi#GADaZ(FK"J /i0+W{O޹*jkiEE7wY1l*L0|,\n]_ /r 8Kzy~Rir zѯ26@(A8]ӾSNGh'F4K^)BrNw  m^NWurl7_ pD+E]Yݖi$P)Lt[RZ(4WYɳ(]RB9]>k8g6A( :hWJMiHl/ʜȇk\&LpUa :HcbC 0Z?tTPZI"`e@m%I>pA[q2ֆÈ+`[@% qg-,1UUƓpnL]E)aMQ= Pglaz_Ax:t$׋TЕ@82]I1nCGRd Ӿ湃Gǩbu=Ю=~.3@+)PT>~En4N] 1&Z<3rثr嵯W%!j@6i2F|gejȀl3>#+e&c$i?EM")"Y[N缾nYDbM0V9v|HtjBδ[}qV4ӪWZJn!rmэWqƀDUۧ `NE /l$̯魵QH2pϋuTD+ l1vAWPg~'b*+O7m6@s.[߭I9@-$P 激GbB.}5UӮ=&L*7#ЕEQ0 fpgbi*nO@<4 Q,HIҷ:ʊaHaȃɋFT3މ=KرOV=XHJH-OxM~Dv1&:eApNƱ-@YV/n7{1BR URH$'q6eZwiԪ?_kjcҗ5E.~Lw?@H. hd$+O<Ⱦaߗq(GjfG{LV*J^<֝sg7Q +{ {'LZ-\ gЊ-⾂%/ù2\--A9LnIyaH*O&H"H0ZeNuaO cw kcW`uKNfqyB~վ7{ҕU8Y!9ue@S.PuFb@Id{X 1Yy~?$uh"SXd;0] ziiZPG0YoҗSo'Lus)w\EY 땣Cx$+9UpP*ZYO;y^Z 8ZXEuoa#O,Uk?2B?" 
Iw#e/fp‹Cï*g яbi%룵 /w @J&*i)T& re(/k[up4U&ikx2m#>R?{-A*֯uϓI(&d㭛ad%x%ouKXBj/VM?917,QҦ^?ngR D )ߜх^.Q˗7<~qmzU4U=\feLCәLJeA52P3;GfUz# QNB'^h{~;2_V !9I6F 4Fv+mkUx*AJM=ދ~TIp ܊_1(yrߏ{E3Ea|)=nHi;h_?Pwq^ D2uo$"qgQofbM}$\!4wNNSAk6|6)-R%!U׉ST]Bwp#J!dQպ%3v=>}Pr@KJrAYb-3z+50 v՚y"\u ;h BQymsSRο e4ITRLJj5-@pN'jzCȶ;" VUsNns%0mc##J4eҥ~VS7U TZjF?FfH/K(88ydu^O%d3 wVeU)#+9񦈬R e+ʤJAxfXzEpy*;_2NAI||0dLFp+`[GY`TUq/h-cxWmڇ mM %b0R "^0EhUK /fRat|ȞW rh' ;oMi].zw w+vg| )/VҺ i 6zNjЉ ":oQs5mUP' Y:hl]m, FJ[q-BBE&xq2_A|';IA~VBZ@|9ޠz.NGC2qĒO*ObU$M&٣5L#@k &:!S8[1= *Ĺ!̟EbZYaQb3:x%M|: +3K֝E)R/`r6xڂR,< sd)Wj?I--l lV&)'ny'FD4(vi4AǶ>4ߨ)ؾx*Ɠ,qQ6p| K"%5䪵 M]KDʾfZdk+c jJԜD:&~\圠#7_ ʚ1 zDU߬A1+RpRΑYSWcO"AsWTeoյ%;焔lE0̩7_r%_aEa?+l`$^otMFrzU|ȡ8=J5 -zӗB{"(!1 OP+M7UĔ4!9{u՗zd>a] sij(Z8A>XèESÝkmd*ǕlW™3Gıj@^4$4d8/爓SP",F(M叚o 6S-"񓂀 aRUmF]GoאV[-<2tu3!NLzaJ`)t0gӖםxU;cMNh 1,c9梅+}) ru2ƳT^勷ωvͧ+1u=y K392# 7~᠜F 5H 2"'$$t $;RYВ_5Z+Fj ?ֹد^?j1{63z:?44b֫V>-b)Nm-$i8YÊ8n{ȳv/Bbf`ŬWFuO&UuSlOߖDž-R/Rp3CoD3v,lxҺ@t#hDW(fQ8T CN/<2/IJfSP!礹o67=_nЄUJj1\b;-3//'cJo٫6-ͦ~ "8Ҥ3N&^Ѭ]xt|$ $&2&(/p2JNkm6yhH9wW/~δJ|N+-!Q h)uvZMEH"1_{ 3 {_o`J+J;HD41#u $gdC 'x8ɱ Fӷ4O.z\ Q&N$-I;7ѩ`Ǝvאi'TwOwQwP> vi?w'7VYܗMMh-<|̀5gX'~lmuSj^Tf4<vT3SMjSEnPR-v\͋q}h뽳hmSG-$NT HҀ 2 "f;+ut|{mUAu!~p?y0] p߈ϖքʄzY qd̗[ _S EH^PhRD?3] 3xHV:z#} M-W /̼o!&3Yb _K3n|'\> Tp:Y$Rs*e݋ d,R0$K>y'̳ȇI2IX80G +a=wx;X* *؇?h\)$x~ 6OA7j{&84<(9uHm4mKY7+L6m T."ujFH;T]T&_ ${70[ Z5l^,c5_+ü#vo ,-ANtRn8l1匟sW=;ze::B 7Q=Ӆ;u] 4zD4ߨ$ƽTWv#1`@$bF .ThJsH$)JVivLɅStB4<VK*"l4 lT DP fT%)!PLH€ hQ`oL©3"mRu$wf듸1B7z=y8㈧pJw@^ : ^*^A,:rC,9Q [A<<6߉1&$SB;ڤW粃NN'%Uy:-ۼ{acDzyfh! 
4!P@H4YV:ʶ58s % ^U' zND%phʋ&،HZ qǂ0<ԯYzVqeGq) n(|ewikeۛjhjkiP9.~9@ oWajhq KwNwˍ}$<%TRTr"\zǴ_9@B\JGsXQvgn0eS'-<ȒE58G_^-\I JKF$ƒ( 46M&ords +@[ЅALO2+n)JN 4nhU$U.b{(1rmjhJ00jO4f[HpU2=Z89n}vՍRDTdy /46JH^Q֢ϓ,&BʟD38d8z0u%c >m1ý#;њ=lz}CfG:7p/tJv@i;l:krhѧhT Cp^|*(Z )TѠOF3kR&z Y2ά@>"5NޛSy/Ah$;MIJ<h^8fuBSu6?A:/&gɬ5~ʎK)p [r /ȚsQ8Xs}L~fΩ 7’{)tʅ_!8 zGw;ҡtߤ G%Z'dD\^S8- zC,Z;'ncˁQfO}bNDwJ4 )9'IikZvsWbIFk0Y-v\v=!k@оGW[:\|Ƭ\AT ̥LKADp\pT+fĀTUs\4wխ/ɼ[# HOZѼ3rLe#p)͂ӮüMqzh &TWZ=fM'[@Qhm\3k \R]乷]k y0J 8娭awP!K#@7vS}*`zS]nם\W5q6؝z)Gsc&AIKYܽ{ӫ|OQx?o+[ "+9~@); uŖRMga"DDW>&Wg/Qi?3+6̓qNޗ2 47?o,&+' d`>d ɬ^-̴|1HRZhfO|ǘGRS]ߡ(CHqt9RtC(.%%<J9/Oh?I\_: Bef1^pLU$T$_mŹzV_=b6yLR Uz_IQ}Z/2{ pFplH6cY`/{aqAfǽM" IhҽOAd@gN8a/PgK&%)x !%FE+&)b o_A)[O2=edbߴW)blwφ%_DRPԨӮ- Oc8fpΗ^yg>[J&CVpzhUT=UV᫻s*3- Hu`"@*LQFX - RJǰ@$/#0yح3c\${H?:Xj+{(m) \$szH6s_5J# HkH0׺k,?$ɳ[A_ g} }o)&fARVDyh($IA ,U1}aRPBP)Alsg\zI4t:?| a:PB_nÏIrsڎ[c!ٯfVggʀH9,[mI`ԕ__AQw1@rTqvzpv>)tE%MCdpC'K{ѐgj ˜9 {qcTJHu ((@\*#{H <8jXA!L%뺼3<`@&e,T@ӂ)cQ f\&)0&U)…4")R) R KR j-&ptA I.~7Yy0iAΠZ½*Y!Qg),ř#Nd׈~fu/F'lC 10M+Yˆ ޷U/y4 a5{B%d917h5|1VԜ@uL1u(Q,x)Ace1 I@JA \paL_JTQO鱊Njc4nCTQΪcbgʸ>t:ϥyO+qН:BMdgA(`C: 0<^X(ad,Wq=f bߩ\ cUbpQ;=l O͓C+=\=S^krspL(n//.Cj+e0m 1 Z9Y$dZisx׼l[)퉮ht:gQW59e&On7 j>^r;X|=Rd>̏bQxLgl9Hfdd|O oqyh0HA3laSJ/^\4:q?Q^fW9_$v7ܿ}݊s9z-~eyZ =>t|hx8 xp@09Wn4}f̽QI,Ɩ (FP\ ( A֎JWTKezQuӻR^ A`q^bR=ebȃƻ-Ŋ"<&0JSzϝP"R VivN񟧽R'^ڴfȌ{m >MfɘތCItF7=/JD^?&+g(U챭b\k:V?Nr@5iVŻ{i^p>h>i{jw[+ɐXV3 /?$ubvw9}Vt?݋%]>φE,ighe C 75Z X(·%^̘4g;!Ӷ9u!,omPm*8+R Y)ѣ$37JϿ@Jj09%gqi0rr6%H%]5<wTnۘo?],/`` IJ2ejd߰^ѴY;uɿԥiC4.y*1H;VKBbآ/h4~Vd8ЏʕjsVW]!up-8WH~O 2@#0aWՂ4#N`L /K0zH4q:RhV)PJb9,%w9TCGTc)#I^F(Y'tZ`"]gHqŧdîb7PQJ5x>5 8ZVS7B>K!N5Ww154>`VFtD9>F/8Sc!>dINd]{DxSC2gaQ2r`A`-F~$-)eIuq؉q91qC,"xZ^>\X8N/L4 ٴ0CYq)[F*;vE*!\ < q/ Qj!9#)o94 *^t~.b{A]'B J7 u؂Z`j 62dQX# aը4*TN!TE_v> \J]@. 
¤\o?]\_54Bj.T֖bJx/]!s Bp?r  µ[0yTEl5D ,(l!Ub!*S[-V5BpײL4dvq (ۀt!5 ,$ Rv %P;h įrel؂-:6^ q()H/ Y-A&rCP't=zAix ƀ6u>vVDʠ\[IDP:'@1쯯!ß b܏_?,_s,g){\sEyAxK1iTT@iY b+Դ-a9] G^Ƕ*\ = x%$N%">wm#xC{-O_RczȊɎo[ x4SbC)d*7]IDh<@FZ tTkZmpt;%=J%^+N%'lk7U[MWmS*=.Hs&>lfOi/̾ZٽMKY$Ի1A'-zW7M3(l\rD` .zηY&psgDw_53&:%S2{ʧgZ,f4_} Hܲs4~zJC^} K?@pu12)h(^ޯ ppſ3g?v!j$Vkb"2eXr`Rw&lR4t[ >ԏӒhzOkG>)GKIRs"Mt2Ke` l"E LT[pύD霑۔+m(&_JX[#!Aa 86iy `./_q]xpJdV .jF9$+qsC_jy݌zX5RTw^Hõ*Z\]H#5BZΨuOnw_e1 /%}-v/UGE#5!&tbo)Lj'2,q9kIvxjzcna< ȟƄP|1&I RDp{`c7=Ac.3K%SKxЗq+fեucAxx6+\]<5zH,KeހM m9qϦ(VLg{J$C jĸ5̎&t%&(NsddIj#P l2$SEE4Ř6؟ D 1t6g즒⳴ۡd^w(qQo-QbCmYA]t{6!},1DceăR +[-/I=R =2/AV%ԑi~84_)zjhӵ2MkqR6(o |JD"d`X|4cIAJ"Sipj\ # ʍ256vl>[^Ɏe5sͿϵ焳>X#.͗I9Qo{dYrO䔖bf h= ʜ\e(]$7>XFe >DWS~-!ʒkhgS#k#>k vj;ફC_՘`~D54}@l-]Ūdt3X:vcĎլQ*ieQÏz̎-~hsQ2Ub7Z>h .n ; 'I g2k,ƒ=^`Vs5D5O\L;$4}xOd *Qj9ɀ54HRfBndc[ۿW 8JeW54ކz1{mEbj -y%tYS墩Z :fyf)utȬ½R o SD݀qLݝV+S*b/ұXڰ9 bk~u119mQ0yޏξ9FU#*6 ^uBu%٩?)Qq{mɉ[nƠAQo)Lj?_3~ֱ5Ԓv1W$r 뤤lA`,wR A(0/Vfa b֜l?5~mKij7<zŌÿLg4g-e~hM|ɱ`QeJdPL<쬟?G_/?Mf[C<;Гx|zCO&EɰwN:i"^[)¯plac.EZN0ʹ?d]&v vZ=][oG+5vG` a5*ѦIYS=/P$j"wOWu׭ n'Oa%P}fRl< Fe82 -u.!>5̻Q5]f9WwΫ%qd"3LI["Kٝ!D1[I>Mt._s/$~P+MY x7;] J%hɗ{,qPl:i7"(Rkvoh1b}h+iՕzE/V0r+<9xss|?z?{1dɑT@Sk IbBRl~y7Lae$x Hb(Xr \JAsJxH Dd k10Ik3sc*GccTs$T-jV?ʕQ6݊2̙Z䊳$XP^p5Bye $cHOX0y`Tqd8uICde8EcN).Rb€i/1a|/XEI:!UU0d"+hf^jf^RX f1}_nip`Ti)4 /[u g#0bfWgo5'5ɢ /!_&`~\d?=`yNx_qz̰[fd+l v\۱v]d\.1q~Igvg90 lgN⒓ = KR_{RXѣ 3ghG< XF Q4`Z07HKDr Kwa7V:tWQH3`t1kTM2}1xgF7i_>T]x* %yE$Njoy}ء 5F\uj#UbЏvk?u/׫,%F!\Kl&'JsdU%ѲFc??{I88Na > `C7G8REFHiw_y}:'R)9C$h2Чًyf?O~ fj|5ZUu+ǣM84 R?״*y(~5O_1Eæ(既`T&"l S# >t$5M5ksC:5uj vQk^hNڂP6ֆ2]ܮڮ,Ej5YŁ|I;N{]Ժu$%h1L]sV4ҥ)LJ v fm0[n2ag gbJϧaU8[id؅wuuuy1zUvGr:|G6֡FS0 `y4ȍi*_`]Hr8:Qn]$\huFJ:~8ZgfIvͻJ9>Qek1@K-xU+ְO,H+c(۲f/_J-|Q%ݚrs;)3יloGjZ&qVBŘǝd~<ZfQ[Z*'q[8Vں(k&Tr"tlFBǦVzޖ9>Mr?p{ܣ6(Pd_?OZ-s<5S(eskc.*nχW;h:c[xndSqeS7Z+regH1o˚#rJ ŭ]) @>~zy^KnQjea}m6 %}í_cU'r0Cy-su%P,Hup q$vm~}{GCӏ>KAecNbXt=(EJb^L ˹1@Td"#3,")`N*R,7C0A8# 
qD^xRȦtYsmo}*YaZlI|hIi4(H,Ls7շmQw)x*MVu8k\4L_AeNjCvcn#^WJdFi~=Y/~pck^T+|:кHT;z*H5Rn8AzRD^#X*$ĚUxgN>#(h Md*ưU)lGrDqB:Mi++D5y|VC:KsdWGH!.l|%~! Tc9X@?B`}*+q۩+ll"u]Hěp)bG\˦-&t>3Sլq˛~a-uƮ(i.w`#c7q:AUBYrK\rq tP{˙8D\.O%=N9UJi rtkÞj2&[Ҏ JPTҎCQǜkI!?5{bM,*гɲ,ER1.rxK, TGZ;׍` 2;&}6X dVw9k%W!ƝXX`?vnlia SNW:vBB:7sc77-yjWRLXpT)%R 6<`WZl>`+^SL%4U]6l|0%h˂a/]I0\G2(\L*P& Cr&u1<U-6qDǮ}vU!9# L6wO?z2*.ggE0۶H u$GvuiӬiֿb5v @Y* DA\ެZqW^J.c<;ĤiSGL#dl"4#=| [%lO}'H]i`u QUן_kƗS gw üOI{/QŹj9×:^L-s<>ĥݘ!* ~,6 tAajw9xtI[[T?{7l{4d'Y ؎.b?=&Y`Ɵ oz ^fnǿK7$e? r|on۱v-x\R.W8 >uB:뽻vɧ3sɍ;)vpj-Yo>ކLq[0w ܟ _v.)줾,e! 8j{ !mm)ŠĮBvs}vժ#2S&߅UQ_Vx>nVՂ 2߉!8R2Z1pi}R2SEUڑemյuޕ)UT!Mڣk)`(Q|v}Bbmga8]#.Լ j7HU1R,Ԃ42Ǔ4ىGXȶOCrؖ9NNDZeFs殾쀓Roǀo;P=ëu񼒭TRj4>,(Ր略Rw9U$3*f*Bx-@q!t.jxR.ꉧU%RSZcII$LOTr 7/Y[;nxƪaUaiN_s! uU~#UsB`J t8.^O_^g _) |Ü80藍0wX:f4QI"NQĨ`F"zg?-r)JlJCޤ=dq0.œ)osc6Gwr[ (*g*j)8h 6V cJV]FeS4cr%KBs.1D g FIB ;$d;8:%LkP(B\28|vzBKr1~9DrasGfb0VHy0=5mbqE0Xy &D\e@j*cX߳QR'J| :H룬4J!:NfCP7r _V!Fe{~ϐ NV `sPS8oZwHY)\ a# S)ad1pVџyy2@+~߉ʬr1Ēx VB93)9Ka9ՠ~;rn5Kn߭ds?1s~6`Qڐ5d ^eciƓ)HQ_>ݶ,ds)T}uMO9NIwa,qqW<ڋTy>.ϰ̵ts<-ՄRn7y;? vkŅopnH ?qk9(mZ#@A-2+#aE*ZѾ0՛5> }}J%7RHy =6=ySLݿ:e> ׯVof>@wϊ`G)ٞ"d ;O!;E*]G3c`mG laOS]pEo<'f]4NZ3+X[jTtf iR{_|FW {@jc͆NAaej͇c%/)%>`7MsA@@K)<D ]&G|>mf(qf{nxd~˗şLj$i2ƍQodRz^Sܛ v*֐膅_0p6Et*Et"x iɒjakZb%cF)B*iiٗL@! 
͂s/~返ϑ`3XWTXLVw@٠,P8&!ca֢ã~|[>i}`b&dTݯi0r.r.S)`1Únzƣh#rVKiSdK$f;g[m5$mg1BP܋Y3{X2ک -G(lp5(z՜Qoslk"k_뀲Q۠d{gѰ-٦Q N B J1/#dcsNڬ;![֛#\+Oq2cTtGl}\.3d& ]%ʎb HR o򎊤94Z|y@i#;$9-d'Y˒rD.y]m|fsOeO< o$wь>iG7ޗK|Dk]xK4oOm01g8s Roz #<{3"ն:ПdƜݬzuW[M>瘂%[O8sy;y/S'V7-㈁C t$BC )C[F-_3l |B^Nz/GP *s)EBV\rb7#x#v+οTrHȣ?=#po_V(ę5 "0?&f,h0|y_|.%/.hj=bl|!q3Zwҗ[zU7ߪW~^z巚V~UͭU>bX̕TP+ZTS-Z~lwJUXӉ+>tާ`DƲ $u/ bNP{4kbƉc ̑WZ3XbWe\ E􊦮kV,rCj-T8dBsdl.s=Ea2.TcoЙ-hjB6:'&U‘lG*>L)dO!S)d|ܮ9"LK3lIOjKnR>S}c\ߦ:mj߶ƙx6k8fgq?|ox[:"g5RvkN -e#@y1CsTGx(?]RkIѧ5#KnV0*,+AbaN`* x3(Yqæ]@6C!7& nobb2g.]/fVpjwn%Hv: Nk-CբPf8#TJW$r%SMgK9NU뇘/Q(Χ(H,:]#zA(-9 L*wťq*ؒ!PW *aީG^!YmjQZJiQjƔAC{E̎5g0SO0;br G.>C λ lR<,j:fg@ezP|*rhljKGY bB"FCbvPV4k#`]Z(:,`X |rДVcjsLe4/:sGKR|Ql SH|Ɠ!3@& iukpʘ|Utr hXm[ ֆU]KTsΜ6ˎYk_ТR%5ٲvYUެ Ȥ~َk(? Y/_}f ,S.x*xr~5_-~lQwIfɞ`KywC_H_:_":X"2Ytzƥ3.UqK5qΙ;rΑ=wI_vzյ{)pʟ^a^sxMIGz77Ot: Gҗ$+F!xqק;ӽ$"~}If"SNr sߏz6mL`v|>LRda=!}wϭh,GߤrjdRoI :[5E\֣a^o͇}@$Cx<h3z}0<{X4aL,}('3rOwNB:R@Kp{ڠ cs;Gbxs䉭YcJg z3!ots1(Iǚ;rL^?fp Vo~P-]̹Nd~ `ZÇA¹bYXIWMKW`zz 7mY IGw2_P}{茞0>zo{3ZWn|[){[$"_n䃙 Ɯ~?wmrqWJ+u#xgpR:EY@;]GZmׂJt(MDT修z;&=p'N;2Ho`遥| dQ(eg:JOQr90;HE*bܺs [יs􄜣уtQAOjs6M*/d@':r KM!֐+ g3d لWT `mDk>Z!F阓=tդc*FM؄rkbgU![%ўCEĤMOLN Y;f4nx{zd 7BɐPFz {5}dv+\;9C=n8FJ7ԇacEOTvP Dz'S!/Jt%0Dv ' T Z@ )aEcڇ蘢)IhZEZjR g&Y#_$Iwj -I=\?hmဟNY5Mi/Ƭ#|;!+]e; >}.W:o;7"f!p}ϝ1?HO"Ë)>mA^2JдF4y"͏<ҌuHO)ҬÑB[y4_oxz:/ 6OL5'8/oKN#Uu0;dNm\4to<b(R1.{ S@ͨ@HTJ89 Wp4{ydDwq)5¯Uf~ I֐}kF^"Ka% &c5Dy=ke9;%W׸D5%i pRY1YVRs8:79(*g'{f8aThIYT>O9 /iK^IdK 30!QLB~72'{Drw|W^VR\}ޕҒAhnsLuRlrJB?/󺍊yT>4ͬbm# CjA0 c52Zݬfe7+.݊ˋĸF1;!bVq9dT:]a:IPSÃ)#^}%%jw^l&j}h>rj&JY/2&(:rSKlZHT@$fR5JmRKgt-1a^w͚;(_tWŹ> W"R}\\g$=7Չc.\G)3=iJi ͗pFOvLr`R-j ?_}wfkDR_ӵ7E\*yV" \L wr NtbLiZSiyFӉ20Ã쾚y"~!#yib@ƪfgl2v%Td ,x[Q1Q `w!&<.SdH(ͨ{˪1 '1,v%(LRZ%g~@ϖ- [ʡ`پ-M3 >x0n#$ ״Xl!smq`ZMkh2Ҵa!/Mdhud× ÚyUdxY 2gFeV"|Cۼ= Wc~[THkohf6Ed 荴Z3lD*v1Q1`<6}0;^.yxE`%8eU1_R1 32ܧѾ_M)gӧ F5 ͕zWD&cø2Gʱb'L2V}Wo89\iU"2M +{g\*Zeːu&Qn}gOnU`i>@Ȧ&f,M&ָͦ^\Z]-8GіLT* h ϷY^rC;cÏFwL9,zxtb%2@}%x^Ǿn1Mg#ni9.w.,I< pkKݹ-%vYb1>۬ԢQewKQ77jar04*V uCOQIS/=FXD 
I}~}G^;Ϸbm,UU Sm&L%vԴIO{\9{2g,]#{oB"\JJrV|@ٸ[#5<:VD n_q6ƹWGpJ4 ˘LO]X-zߟD?RKxn͂wH" mQ)hdYCvb{iJ0n!͜YFNn#9F#4:w5c`}?Y aV$bG(98d z䞌9%gIuM;NPͿ}zyR(Q^ns ɹkEF2o@?Y>6Mzkwp%K&F*k/y y #uoA*C;{mbP.`mJNuDNBAAOb@ԷCPd_ỉsE퓈5k1ixݤ_޾ɡ)8{((N{Va8YC,LWe̵.JyHb5Vgby`y6`Xw_з55ZD1QwEG.QTVO(?˛ؽ\}sq7`'m]׍[eKYy$#{g5dh&.z%\={Åg`OyK$~\/m: !kl:Ѱ םKEM*P ҆6\wܼ1oq;Q3Ә<x6\w3쯴\<]#A əu"I5n 7o=xΦq2 Er,$ ]o(^`ga_)2*Qner7`wy.{f HR< 7`D;7`; \0lgRހ쎘7 J7`ȁD]@C:0y^A;n|0?T{:H4~U5>m &T4EY.IqC*Y5#ScqTv Gغ]qΨ3y֖?)4 <5WC+%MJor^Vצq"MH78rDΗ?YeONs=;~~@ (C;-aw']ڝ쎗/mKs~K>Jӳd>$^Ρnܼ8Bu3Ng܉{RR8-Ǧ fr[)N({0%1|a n* }c?3 #Ҫ: C+BxV,%q+~3ĩM,ۤ&੫uP^Kxbrh#}lڛ>|}ЇmFń;؊1jl=F&4Z{SH3瑩}G/>2! K;h>,X*@'K^BPKka7?{N#KxM5jit<`bFsN|;fQ8AO޺ϫ+.₥F* a4l.hG,HVlE20h\qw:!]ae1FTRH ڕ$}䈽yJiDj% 1K -ytQ#=bHe;E7裎~l%U#\5ݤԴPZ rֿe&5*oͅ LM Aq,!e܂j`ʴy.aڌ˘6"yz@qeڼHv誻s zMMd}xg !>^,vIqn rrɠ"U"Ĩ}vq#VtOط1/G(E997 Gh/P[AN/ɠcBnn>}T޳x,Paރܼi9rr{^"jD@SeQEAi*Y(b4]V}i#Xc6R7_J# tF1O/)"M4őخ($tqm䨑\PdJ(JcR0, )*m.S:-.3e ";`0p+5Y(p 4C:qwe~?r4{7a%#޼WwSKLL4]-$gCx)PP|\Kbz}uT[X&KTBuZ wZ88c3amDnAPjf7+#󬅛c1Z8'&cO6 Kf_o?w7kƮʧ_%"jxc!X4&khbcdzcu TS 80Pj:T/(GF\:X(Ȣx4*kf.t0]7wlqv9.Bc#Mn"Jxnvh+%IL=Ԃv8>}Յ HqX~Qp30JF-jŒg.BF9Q+P@[Bfb*ޥbK{q%4vU{g  s* 2'm޴ *wK\-~q0]Ǭx ZziB]T f4ƽy:`SƝM42yG ?;to=}Vw;:kF}0DobS(U1IvNW\79φஉkxl7ylNڍrvHă?u&ёφW[AN/If(ǂnN4"ҰUۏߛ֔Mٻה=WrOW`Z-aanwM {JҴij׌?a^qA?#V@1/&r>{ *Annt/ %Bqu9eϑqHF8qb6.#җv@ј6Ͱ*ֽ \nw@rSȳGޤ xJƼUr*tFKq*m9UYXJ1:]Cu{"s;#f!Fɢ(g5HmVK"2K0_e$][ɚ.[RVhPvF&ք!6$-'4?Ȏրf[pNUE/jRs$V5zbI=m'kc^,M bgG+^^?%%S܏庺/r'JeibJ@'N8^Xw^2NJZ\ve@A>Had}t8:\3k}Gz^g.ґ9F lq~aG5òfXN>kШ-ܠZ1sV~oYo neT[AyǬ'G:{+BOzij0 ػ_XwZG_ܑQD)fvŨ-6Tbk&^] Wzؾ=hԟ:I6K"y(,`vtx\ӏ{fNT*A186+[A]ioG+b>Cd L8;QMd&%5Eb0;d{{SN>1*)oy y}q*ΣrFU#ݭ7 C>iy,.q|r2Byуqx1Iô <6Di+>D?l.KVѫJpCJsʋ77=J婇BSrb6"tMDRW+`ePY.^>a2]\qyjmG?|/x4WV4]UϹ5/N ^G 0B !Š(qx&nwAcB"з)FpPgι+ S5C1 A5'ΉW0$r9a.RL}@^c(k`{z楐> zӧݑv џzj(pE(C6l ݹvS7* [+5r ?{v(~e' j.0p/&'%^)WwWI] JKIW*G b?&5ef2K-\x"ϪXۛqҨ:ήZJ $x5C4DY\ n½Δ/^ފj3ߚ )܃76 "Xd^d mxd3x on`v\?jR}w=01+kebI_\e, b{X0%pK? 
A[Xr-M<A'Ipk'BoS~]_u{/dgvLonxWyWW  l۪ΨwF*b_;l(F9D4W⚇jFBK`F iP.r$Ֆ6 0go4[%x.ru zc}"*8e"Jk* -++f>'(V)j% #_e@//UB{óRr ©zxIɘrbۏk%6UU=Tk؞E*XgA`J;w6Ǔy=1A8a]g?~'X8>JB.qB4gmg|!Gv皙[ dWК#rF㜴 [PvS=6r Xj{ 1{% v cLO<4"NcdP)vn`K0ѱQlDqYڧt2?Ʃ64H*TDZ`o[D2.Iz)P5l_t)m"XzGDР+< !OʑcO"%v&E C/0 J"%q#bH|vIuViz@=pOHD&yhHP5"ĕ."{ݙŷygJRyhϲ b{4Øo%p) VK>_UZKوHgi$:'.@u\R+H)ŻZ^f,\\ϔJ /Rh&\a."P՚8zs5g .JNg#Aj񝒈:b#4X v@Q=L|:#>} 2 )[hNa.jrJ~ 8 L~"!UrB+y KB%PGҬl2%Xػ)A; (QQUq5q`)UlqSV4*KCIRoD͋r[\ ⸼xi{~70 w~ʧAB%CK~;V3[7B:c Jﶁy2 hCo45Bƫݏx(x:^\jDQuB Q`#$:" "p$DJ/lrrD;TÝH ^uƶK\U >c1Ɋ)˜8b3rlf.Ollf! lf5'S,4(E QBCwcl۱/1n b6찮͈@=5̪^&{Tkjf :Z4lVs2VVGO.0~ZmX!HRgZM yD MZ D2T?";W;>ӊ(ƪ6[U&缚(Cn蒣 r|goM Z:η #LJq~Q&" Wv d` žBiTǰFi kưFi kTêJ p`s@aI#EHܟ՞ij5x=6(a^`Jʕr![_|z)&U\Ǽc)eC*7 dvz.,/kє#&v0fkYfBs \Mc:-aanۓRezs3/|{oDL7oۍRnvԶUvUo%RQn Prre4]L9)gDd Ʈ7}P7}aϞH1X4a\K9Dq* Ɲ)4HRCbmZ76dA6Dʼ R;Pzhe >MVB!$pDA^-_f,]ucNugWQI2(yHGwCfhV 3rg1^/=ve͐c?,~p$q8.OP NI3VG bIh 7EcDcJ#&MJ B**-R)w 3NGEG>5aD;<{cpI@LRQ¬&V )^^MVlÊߍ'YWcu"[mŨ39)p\Da1U*aQN* 5A;-M83j 5Ҡ.Z |?@rsI#R:`UW9 9r^T=uHjF-$"挜tGRE+]T5TW0%ZBVWط(BV'sHJsBꮶ8I#46-:Eg!^AZj;XaXw ǹkJ$@]xJA '̅Jqu[g;m ӎ̵ɤB]WpmhH zN0bs(Ò!(iAu^#T ܊aQO&VI-)!  S !kӈ4Dm\FIXpBK(7EZpO TffU9!E>9wǬtrj 4 K%Av鹉IhĂ,.jMvM|EK# U#a nZjJF,yE`iav)';M ubTy 6d()󈕁  }hSHI" vpȳ;lG2Zv7Q3N_PJ qb)}tsa ؞zB*∐I:qVJ-+vBL,";ORbD)Ƀ3'H? 
/˛bU4>/irFAČ{(?;Q:o"'ڋD{x؄!v݈0t%SҠVi-y |E"Ϸ[xS,(xc>-p_s&j:פ<_3?Kke6y/ל^PCN wr}B}f;zq:f?lTߺ}6?^MvozhM.6̚<'gٌ$~3/w_M䴗QG8(G BwEw'|[G;bvpJjݭv&MFUM,ãGM Zj_Voebi?;ѱͶA}:Mqj:}j?}.ã7 18۞ %#qЍeZS 19ɱ7؛~~{rlE>{bؙr7 `%yWJi,/Br]9|;sq^S7b :u1.p4{*d܅ s2t^ _`iIrc&(Ph`(%"!4XKm8"3"RA?l^al>Oqz:iGfS*by ,ٴ70X}mӔ/Gqk2QvZizZ&,0@P#4' -WE$R IHRR5jU©+t؁ZS E)LNHRzOw)f)HD *!!1N U&& $&$d↔*HFC̓/]9"5S^B)#:'qj!P]3rÂwi >'xj\.t c=N.*Ќ T+'04Ƈ;֮#wuO?| U"pWL.|Ϟp,`V  q4gA+ﺚ?C S_bGϯ/ n%(|mvoU;S-bMێ8d hVNg 9yD8 l"i[,X>†Ea %("ec`]/ZD4;qYl#өD!DO%Cb(66L Zpbo`!cEp@ UӉ?9laۈ\e[mFJ@RC!- cyhbRơւ!B211:FP(Qь;t.ϮF u Y* JK RG$sX+H,1va4;!hP*HI*u$S-y }0N f?j1U-k p1 *["1Dq,c h2KLEJpwN '>oy"ѭ2^]Q54o Ry0e@ԫ9JmvP (B/& G0D^&htcMՉ ͢H*|iz1zTvOTpZ -!Ci]$ !`1 152K(Օ0 1ھ;C}PB_+w߹~_1Ɉ)& l &I'Sd9RXc#(B(N2jQ$זs퍟&T' `!?'++P :63p-'vϰ&nߙLHJ*+ũ?$#,,)$VQLv=}^S0ۼm}>(HkhIn"Q( cYcn+"@[ &H3E+(Ŧ ]fܹ;\/˹,tgϣ8]{e SO7)*=_(a'd (Xex1{tBA>o6Do xZ2ɋ?=?ߌ<7t6& )suѿ1R}$WJ~yϽ߹F;ęf7n1'm^Aˤ9Ob|3Mn/4etK^32w9HDUx.;'n®z5[OFJ| )4,}&$a@`/g{\ cKm/!t;;x]M̛ibHWQo0 /k|#C F2Tfv;%:2$ɐl$CU(CL'v ӘzaWGT3Rd2T ;Еd汘mX2(XW )c6TAFoNx{K3ۤk?`Re.  HD(Ђ4qh쉙ܑ6mGƝ!O&i/9Kx" ՘?|-R|+ b*$VuJzs{u^UZ|r$$!@oxwxM`Vܯ23:!w9ׇJctW?F3U8?h:_Hi 0;p֪?@?451*"Ӻ\gdrL{<{bPǂ} Ӽ5?={vv{bҎ"t`5aNׯ(g*U?GۙJrta7ܖq#`rH{7A7[oNM n` 0% ,8(6)J'ۘ9R6 SI,-I$XDHbȄb(1c itE:GզqRYpiL X[d"ݲU C JQ(ց fZ$G:4&`oNCh;*\lƮRjQr[ ++Fib -yjDe*Ivhl 48/yP5mÊj&㚦c_XW%B- ZS[o^^XZx)`*5UE~\QVR}lC[og{5c>T Qճ[px߉Zٰrk'uuoEPR tP`F0mU܏; q?~ͯT\ w 酬z$3tMzmf.™kPt-m\7Ъ7t/@E_=z+{7Vm"Rs(56]iը0g*s^]kxp]Pq.m_ڲ/jYw@$yo]?s/Z[Dlޭb%x] ~>8wP*c}jxۀ"mb J2"5̖YI(yI[W#֟}tzF 䃃i;^^}ޞ9?͇/%ӳ'x #GԊ/^t֥y/=]xR8]NWlQ/]oſSҭ>ude=M۸s .Q|: ߥ d4^}{yr}&[FfFNS^ ^}ތj9sD@=ŠSxE~ :ZTm=dq鵝_&iOA3l4Xn :b:/ ލgLƀßJ= ga3oc淧Oaÿ?{fb>=M.??q,Y]N/~oΞtr:]Rc{ 3S;ӫuWg 3qW_怟II~jQLL/f.>/璘@ӿ?~_~) ׫TAwP,#/x.z=}6}x>ItYFhYEY^YP\rۥ3˗}RYuxy.qcDP8``X B* )? 
Zs#" `mafaWN+ Ɨ"K[ioa4ÊMvӚroNRgI6o9Bay6^_ ]sFWX5Zmլ줦fNIKRFa3ISk"AAKe$~G~GVG ÷oC?=XGE!Z?Wi ]:ț?믿!{Oj!O(㝤R'qyCTI-ѽtU< T0@f#"v{B v{n<:@AU Nj)`P8TOP:}JG=A搋rQ=E鐋rQC.`.jǚ/1c%()QDb<18)a\| *˯ @Rr3L%-`HT:vUc-h}|l+eg⑚W3s:f(v6STĜ G 0F1el5NxҍܧL* $ƒЧS~p"H5ʖr~aasg1U ፟J<ΘAwj!Q``h*xP8:'ЇP8 GU G)?P8 GᅣK(KR`KWsVH;Xvo(5g !h7)WB Ph hK&s7ZEWhWEXɹKޫ{hXF6';}HIS>@Ef޾, rfirkAniYs-c."~{g@[<>s*Prգ!/5g3?^=>lJ .W ; }w!n7n`>O*\!k)BWO;X~ʛ3[LJ,89Cݣw`ƊKǤ`n*왼g .Pd/?VJ NiaxPrBFڡ>Grưa$161 g>7N*|/F CpR2x,!N%AI)4 ffD˘q=&V:lX*Vi3I*$QB){m A6B)(N2#PQJf2:HuT'%Դq-t`oyl)᠔.u$YL`'%)c!9󋓈q,4N4DPm߬Q2`"[ƾfb$MD',Tf^JO$;4KbbR3Q֚J+j)x;K,q&eJqZqb 1eIDqK!bp[|8!04E ΡVS5Ik3 ʢ^fs@i>'@Rcpa>-b7@tR(7ea, Vr7cɄb"c瑵%XxD!NRAg*MIS1BP Q1IQ"`KYCMk]=sr0L: 0:kS]|&X^D b)Vܧ-":VJ])uK]#WU-‚u'8ᴊq`9.F-Z` ,YU Wk)R*T<1̦n~>p~) s'}&bK^W>sf 3&l~6[Յd2/Zߛ달$\'ݻ(cʸie6a4*H3Xʱ Hu!zIJT1p&w}7˳O5=>D2گ):^rPhˡNQQcY=P%[s ޻wFW+"휔  qQ|:svu ?q?*?F&;ۭb48:8|6})vdsQΝy2mfDǔ6}C{zlw8 +R2 2w !Z]F`2_&~ 0_KwO}:dS}mź` b1y3OIE`jR0z=dM M9SQN9mĔQ߫Clu5ǔP+jHh^ܤ|cZ\G{Xij|r>ycij9:qLGxQ5s89L1ui-Fzs>E!5:>E!(( /Z,5wؓ0SOuXKLOu,/lǴq 1Mh+smkpt'Q:!0y}f4OF9%FmsMr.պOfF/>rl)Q>N3={/e/n>I3HTzǴkT+ dT4cU bsm |<-(Ֆ2Lbv#ơ(nIC۴C5%@]l?] 5FVxmtU>Jr׭DSۤ[e??Z*<U>hԹT$& ƗNOʜIO5+n}ysoCY0 ~ȃ58QMIDb*HJcZXlLdB֊`6$/d@ϮwB\a5"KDc7Y S44TQ6]fUGtPku' QCm4TV!-0CΑᔰ1g Pi-hV RbGF4eec RJc4T\,,'3Xl:6;j/βo&Ա[[nzɛĮ zo>w7MZhʹĴwh;XCh;[M|_Hdz}_VoF}7{oϤ|XmWP1s9DZ20q͖I4J.MgKD&rOЕx2s#Xid BICGvi'²%5ܣ>eܦ_f>\{6,2iHCrSЦu^?j!Žp@$5BP2ZB^:jY!;'jY? 
[binary gzip payload of var/home/core/zuul-output/logs/kubelet.log.gz — compressed data not representable as text]
]#<bt):r(02_Y/M礲$pr1-2SÕPrhcx  a~)\yOߧEHS$/:=qC &s">npK/=K[/_zǴu ^+Ǵf[a(8KD8cCE /MB4NpׅJ&I$B4; Q.r R ɵOBg8>h] 0Ew 0 K< 5im:.XmeG(-YCd.ܪ"m%8b>o7MP˸Nv` JIa%0" Z`y`2PN;K7J#0Z뽤: \bhY Z0υ'Ad1(RI,P*CV{.1n tJ![S# 1W5+ь47bB V0D{|fl]BZ7秪yhT;zj`S7&HN0X $ao0.{m^"jmQWZ,A PX9r\Bڃ),010 Jv;f MLjcLSEh"  2f70\aLǬ}:e 0JY3\:Enm񵤺JS,x|9yltȜdEfsUڛMWMdQ8'޾9?+ ˢg(;H?}p?18.0cm ~^@RPid{\k}ÃW;6=AA8O'1JRcܞ)r5Ѕ1R ll9DfT?FsB0+bQM tq,O=rXI!VĜĔH\_ڹ$s\_چTYw>JNsUhaV>)EJ$)v?u?$VPkZ3 u)hrr:)/qJPd5=)˩5WAlyPX7ٓZ+'oVt)BUgM&p| ap'I46a ]yG s*Vi{jF wUwĘu6q>6-W?2(c 1&k1W]fLEFo"~:TӼr? u*˻j.ZS%޽ ['5{}u.@15ۯ5~ERO=Wyx3M)5(#Ҏ{]/n63\0S!`s6iY1ɴ[,9Q)م_5\뚼F~ > -va0V3< c4HhX(Tq%jzqww^ڨi&"  vPJuS8· WG;Ipj\#,_e|Vm 7ߌoMu*A#Bssɔ'Rr՘0OYb-I6Eɢ7%Ӱʋ I8\nX6|%>x*FVWS7$mqh+`Xc`KmfnXNbT2n+^XLЈaڌdX`M5'_v0^?&xj6G 6gmC!Ę`1Gɽ8;['2^/ 5&f,%nʗ~oVŸ6/ e3ûYeqyNiѕ\cx/WY-]$tsfVӌw:أT߈zc9z9BoᷧF0= ߚd-΂{uu8lۜY459 EB%ݨ·rķG\~kpŷ}E>pF7Ͽ5KL27 Z`1*[CB7l'ۋ+N_r(74f߯Ai$m $E]u=# l<h4c!רi 8cJ4),<`-sΈyA`&0s%՚vJ %[,,YRwh[Ԝ+ۛ(/˹#טiü8nUI 7=ݜm@ws Zu^F(sbYƒb >[{h8ri : r(i%+ f8VrX _JXv+7Mafut'86S !U(I!r"B\)Q;h]MWF5܋irsmC&)2Tˎ\s K[^EYݻaǝh^(U)LcS,24>ˎv;cK jBNe+r[慓VqX2MO`+i1U,sE6ïf?q;ش,q1U K)K/vJ\:V`KOJV3b[X*S Vy|M_Ku:V,hf[fYEN{$Ge(&& <6 M +# @ $8k\T Oh}֏5㦀?'ۺe8{dh]{[얒RQz@h߷WS; xΟfW >Yd&VȇTԾdvM5Y%RhdvhC C  G^Mnpډ\ Tt82RGFYӞB6 Gkx5+ Ích" R І;jď99f՟!7՟w#߉;=2D!4YḧQCɭ;~AʿoI ='h2:- e^W0&nȨr|^dݍ_8|'y4}pм;h-<O/.P7+MX i"K2Uj FyWACN*/4+R_t3zW-#AhO 뫷zyWmˌ]OLzg*w:b0BH4{Oܷ}cR|0#/FIy~A;Ը(tE dAY φ❁}G D`}r4xe>~4wOs?]ݣ ӺD<;/GN&rzx~f*W9Z= 63zmrǽs|K$:6I$kIz R0y8fg&ovho`-7Y{w_|TGkp(4_6yS0M\\.U]$w9y) GNP$n:Gtu8Z [Tg4*0&2YVP˽: V8eaAcZ\:I+S5.E3&?RP?3J$ĥ +Gy1[uJQC 6*9nsHȋU<&PLseLzjc5 5BNP}ȵ61rI3 ,+ GI$umzq eJqeh7%č 0J 8J/9؂)c6hCK`(;)+eر))zI& NJO^nAqn'jRVb@d'QȄbN 9xY$ḧQΨ.!  
APm ^f }m1fFG~un_͖ ]oќR^]CU= eum*\/>=u~t;a>r~fgtF"s2?_l ߾^L`OTH}BСи{W鷻͍J.he`?օ>y\~p1z ~^rq'Ic9eHA-V-&ڭGNwG"pћvK/&!!z ej劻<8|diow`>>}kYxF=Zw`A.˞1gޢNCy!c jM\WMSaZVeAFVv yJizC:)hqͳW;߮m}H+yy}s"L\Ecow.hvgFkFOI?;uDآm%[K SϟfHYK:w띱mfD[Ȏt_AC;5ųq{8kڣ]c=Xy/-XֱC_)A'Kjya\49h@rΑ1w$i> ݖC]`'sExA!/9V=n%[A?$w%1{ɶs /FȨV'=fNdkac~9͏}61bOL'"Đj夭?ҦH(9, O:m+šGw)J=;7Q1m6QqlrÆBx0HL;,ӛ~Aj/P]5FQ$:=Bu 2sHuuJaqYug[TSpt:8v 9%z868w+>:]v(s?VsO| sܼ;g) F5Cp!}_Kliѫ}GoP%w:&ja^AjM;څ]nu0}(Ǫpق]8YYA`k>>'Dŀdq< 7G87,z:*TZ'N¾!zڍ^񝟏laG7Z #U D*~mTD[]_DY/v5*ugTV3D֪wADӋ S%1!Le?Pa٩O$TzC]С,>뢝.=%^K5Uh3R;NYmsFFWmFeu0F0 mΓ3"E8VچS_yp^ %y| =HU޹EM"8W_^Y?]ݬ$^%}Lq(\vurmVvOSq7iC;9Ƿ:^hٶFQ9/ǶB}]岖k% YUHK prjASc!&"Z +p*9ns pݒDo8ÝQ jwsTTP '"F2A W]redZ[bn2}qyd# ; ],Jΐ ƎyC28TIx&hV]q"3?&h#kfP㳕={nvWfUP9~{|#>lW:+F? ׍l>ݍ'nSUFT߹gMwP}4A}!L.GEC_F@;ѧ|n.{54Md7'\|>W/qaw(4po yfܴ1owk9obZrp^8%M.V, Yk1Yrb<{d9:9^ Ef3c'%r el_R20?/BErK!GQQlܼX.(:gJ[b9ad щŲRNNcz_'g6~N {2$هM`ֶ6H' [Ֆ%eL0cK})~_H]x}%p1%NP[1iF_=91$K 9.f[o\k`2KQO;YyR;75xVic٢3dq^ٟ%0E_@G9}I`0͙1 vݬVKV]3"D%g>q>mH-wX_ڲDG6#59h+oNgu:NA^⻖[~ wu.:v'E8nm~\Y&,[:g^d}Xk_>"KGy7RxJ]{xq@Ak+'MO;O ?"KwsĈrݬf:qb,E|dbW2v|˫Eou?a(s5z3C/~~ 6'%knd8xk/9:Hw&+TVȊ M4 $n3jLƽG eRyDV<UYnfB%HEc,WbǼTB 8raNX T=\?Dx:}coCZy;1p7~LM 7MBVPo UZ,v7QoDkb½@k@SN!AHno鸙,Ӈ`to#qp$8 :!<:<ѳF3<ox:CvvzQQ0pf :M(F\vF%pޝPvA5s&!Q X&3eHJ91;.}T\ywLw,p!'r ""Ρ fAwgsbn >B gSXeJsM<% -NTɜQйrV\aKǙ؁M0r.X9(Yi z8~/B:k`m6[w{֗ݨpEl}V@*1*wJaJ$PxI!X{q0 )-bὋ?Dt;7-F~w[; ڷ>7o/&n%⃺HtƏR&JY&I+ t&\11`/|ަf6bl07D{&kpfs*'HCr*ZKyºiVݢuA}GM?9njݚАWZ:UJ)ԪT~Jgi؋€>ɞ@m="#cY@vfs(x酵3'S.mkHEY"Sa0SCˋƅ?MFא]HE:ʺ\3L4[6qs 9pNh\CjIѺ2?Zˇ4@;(28c¨ 4eL!Ŏj6XJiѩ4fE&|%B$k'a{3=( e bA^-o7 y)E5LmHV]TpVgb<>f0}p: x&3C3/N:ן|/;\Y%l7؈̾8CFҤ2hH 5*WR\|BQFFR]s/\.tD.aÝR0(- NJn~H|lIN :NMc7&f?yxq ^Ov?&&kIlb. 
6 [ N6[O0-kִtJ\_Z u1HY*[enh\r*ZKJwuhIʛ涙V%ݚ﨔ȩ:䬃4UNaD-&Y~ ͻ&3ю> )AEsW%n=E4[n98rA;=T`Z;<3R+`9̱ R/j";?tBvtQn(;j3Z03[WK%4?-ժ9uvw;C:Ђ[ƵWeb1b02N؝ė0(1E$9M]D`24 *X4 fɉuDIhŒs~zՏh0tk{V$5]/ȉ!ZpL\Owc|a8 L2y8CbvYX*+劮办PR71>CO7bBsxs-:P>\FWW8Ǔt!2.߄E_v=&֜>`2%q*: #Fꊍo] IߜIY}0A񳜸'r.W kseF&; dxgg~-~֘+|kriLZ%iLũj1W(aXo(~%ˮ8˄h$ g؎#P |F]nk0إH}[(LhMkj- B))|$#ɯ.. #[):X*Rڋ*͇00`$ɳy!&ǘ ץ*ݼd)w޽ChԻG\׆T)i z1Zp]VvE23%̉)C=`sO\#\CpɞB6 yPvE?9QnlFɇMҰ6x\i pv}#Ҵ !1\Z: =m@اꊫjWJa~ C_gnbKM7- )#ճ ޾z&xUmzfKʍA6k\q#D˜9PC\UpJ^=Zgu_f=Vbu'=3t0pp |5 m: 1D/Df(3) A;;~i;Ki#60EdDRV/O%70*76%$~R 6HKbqKyyZafJ7gW#~6}#oIն^'q[:Η]ވ,GߝNc-ptՈP%"^m0ܖ?8UsǹuЗ5 qtsS}8 %P8 5 C}Stx3#*h64GZTi~)kKuÐGoaB FxsW}Ek(2(c:e2)N"Qn EpHt8K)o]:dW1"8ʘaqP ] ,zNLNC]#s1]޲Z}lq)%}5|x[ޜعV|}%mtb.OϗIi+yw6v9>E^{zJi;>+Rӊɖgz9_!2;=,Zawp3/6pfc'D2pjovQ%cy~I)+A C%X\<̎YAgiصSEe}.نfjzDV8t,+sb|n[PV`2ih+G2ݱ4Q dRfq$IIzMzdP*;z!:s`PP 5 zwuIRZ7(ѹIQx׿>tOYa'o63{nn $8\>}`% @"Z;^ \wXkuoD:\:(ik( cz"n|v5-T!qSCwEۿ>˔"\7Ӡ o{'NeavJX'-e]_"6gA+h})9hOzYz[)_.ATYb &Av}Iu@.JBEj&C/t$ -2fGQ-zIn !I'tT Kσ }LP~'vfմDrw~b>'H!QnO 11Kl`K.ib[*B"T! :>N`? T,䰗f\Lyk24$Lzk`dAs,a9$G`9X܆4!&H +'A=q\by27nǃ񭛯 hH*S0W#;S|znFPmůTo?f3TbU7פFڊrHbr85C1PE3C?51>GV(9 RZEpq/rʘ.ZanY>'K-HlewWƹ5,dC~^dҐ!GPm)(L*3 XSh/e&Q qҊ*+ ˹e/@Ytup~\^=p sYWw8Q,9 O_χ [u?at7wgCΞ tO ~ֹtdM>w_\%Xߏp}pls&#Z/n 3]20ffw|qM?O<Dʦt9жn?o}]wuA0S낃3jbL;xfҘ [#*OL5֠s4 5Z0ldk>֞!cK.U/n5vqTƚGb{&4m"N<Gh1o%mqd-{]k匀|nV8Z7 m<(1#p׿X{}y/O}.tQdtk(ƒaڂ`+7l8/iw;*k2}WTyiX6h9_xF>c!y|wO?GYwBT7ړd.#vyÀu "WG cub,R q\Yb3B^4 ήzm^h7"vAЩ2툞8A$Ri/!|<4Q|[ՍFQ'΃0"s행/#'Q[LOr_|{Q|)ۊӕ_w:L8_np%ELV2s~6g.ϦD6pRP%L >`0epAri}8K-;ܠ-vlrHs*]."*l'Ggtrn/pSW/#3){$]w˜1qڛ繿N>![^8p|1=> >Qvq Orm]>s J71hM#.޲汩ݢgAՋK@ ZP#LCkqca2)aF1ᴥPapwlj M6!i߻)IR2C<A28 cAjR1E.72z͋2)O9k>`)$R3lh<0H-bD!+WYsx䌬`j5_ZϳatnLålX*KLOmt{)}0ă9! fhT gaP <@6AUc[Nߟ!p3'^xWN)Ҝ·Rs,PR)gNI*bi#5#R)+PV4縁eSu˔I*lI}@BN,X70)s*,?&F$&[ 3]h5ޯP19P<1}NOד]w'.9Hg5wYujҹL |PpA5a 8Q,%in w0nR3kʦ@,4tyorLg7ҌT;(n'lf+ ejM})! 
Ńȫi+}~^FFXv2쮲w7fla c:6`9\.Y.n;Uy ZyoC>?~8Nq\Rڕ{#0qP-rp(j"-B.kY%:08+BB; l+Ϥ_#sغop%*$>h#fL`ieF96L밻;:Z mWߘdhM$IIQSUz?q]zCC[qD+]'Qe&1IQ!snHTg㿃:e3&@J{" VBkr<< eUlGn:*bK$UqkX4xp w=|-q͞%Z?e>O|lvSȉ%nR$$Q 61`Uf "kJq'.Q2tX~ ^jBjzw5U>EB^RJ)ƲvGBp6=,q)/^H5IA !Ύt*r^c 'W g1WOu`yYE#7zY7/V3Y,әl't@t@Ev2͔5\{]+-rA(O%0TN e5Ϲ RZ-xJ;9_ϼA,{a1:%AeϠ'g*?ag4+푝O@WSEgò|,IWKG@'0l<D36܉@`o楓,cpU0@bf[:@;PbAp4+VJpGrRފ3zVӮz $BrwS><%X'AD31SFX"|Txht.SoajeYJ'>'9ˍa``*2$jQ*JBSP* U~(ڶgnbqZet˶E& 꽨&b zd%+n ?Ւ>xϥ1L17qc~7qc~Yܘr3RX#z Z89e7|7,*{ #Q%1>šQ8d֝/ )uؓp+J W$&uw$݃gtr!TjLiXI3 #-\.&BM:xܤwZk)hZ5e4_ũMEbT&2J BM3Zi>lr9\{.ωYdCXՒ;%U*( {OB@Pfs&^XTT,\SJ[T _5BgGcc;P_zgF3A(9σDSwإ@paٻ6#$~T C`;.Yc}pk':7~$E ɡ8Ùc%Mi8SzR),Š4HHi ,9o(Q5N:'ׂX1<%F-VnZt#9T/8\_b7/6L;=';]R+M h|k5W[HgF^{<%$%$R3Z=e RP}T$qh֊m 2\x,EИ3J"gV;6/Yj`!g2]ՠUTP^@tj~яKK%SkFvQ+⏲S7=+>tC'qF#m.d$jnn-XbU@< JLy~BC;m gp/*qbg0=|"VQ);'k Gjsdinw”'_4(Aթ͌|a co0<2_\mF?<&o{Ee<\)jve"WgaJ8)Sg&4#v#34R sxNp'>paNK< 9f6n)ة $/>E-Z\ɖm%N3ٝ)6uhEN {EI:˵Z=٪@sYl5K^jXGK/=L/M^o辁-xr*g,kmnX+$U?QBJK5851p2 ;/U<$;Wb#N#seڟFA:|U8Ut3T6w_-:7325 ŋmo0\v~ A``A0 My6YKA(H8$ϷA9PH5䄨fWκ: ֭3A9AX۞bd]w;Uc=YRr\L8 1U&;&hQd2IoA`P%"TgA"74$ac8.pDaH-W7>;ū:i95-_ܢ>f֡[oZUNgx=kux}F36i*wjCSs.F :iHB%F4{w4YF7܄FAوyE7^V-r_:d'}?-MC2FHI,I4zѰHFWqfqc2 E`My6LTp4$rYblEPKeyEyATPx@"eXRj(JUb*?װ" =!yO %Y@qc,*Rr,*Rb=rk)f/ox[[sj Sm:4N}\ AhTCw 3 V4]$U=*B=$!It$ͥ?e5Qau?P=[q+vv_Ԅ |ю?~!=Lx9.0x>Gm8\'`O׃[so~^ـyL3]3oo+p^=>9&_]qՆʲr2c9Nj͕[YvYYFD+U"OSQ%%4 ?-^G]xG]pZ]LN'[X!{520ǹC:SC8n ab P? Vt 19N\STLsI9jKp i.2ݞtLA8b&15W4o-gOWt%v- rENG`b2 xJi*Ʉ(*b"!%yC pyVNr>ލ%m/&T;Ȝ߮-- [JT[L$EQT&iZX!4 'sBh"FvԄH1 :-,ـ*8Ed#u<Y4z(UX 5iݢܚ*U\UPܫ}\iVq`.nU KfU#Be#y@"'Lp\ d%Q?R:((B xt&n94P7DEiw3 5]iCЫ9-hZVo&F OHZRIJ=7s<)ir1F‹S B&W7qkLξҸ~ Y Ա h^l5cAм%)Jw |jĚAjj *^ryWedJU ygEwly;SNiT,ͥ*[PKQ W rYǒ*V3 w3 9@g壯0ϸ?tNQwMsU\'vf>y-Ѓ4mX[qʐBe8cc[]OVILk-xӓI:xrR::Ȋߎg%+&C,ZG/ {>ҬEnMb|= b]_gY!N<~0O ՟oyZ>Yg?ha';񢠐5e|ȟ,fGeV̽i 0rV#-I F2eP׵aڭ)UD3hőZi/ZU!!/\DȔřGnn!EP϶Q:Pe(a$p{׌n-q,QU…8\A5fՒkf.Z;'5Ϋ=w g>Ǥ勶$1i^OIii56]kkk? 
=7Vcyu~>ᐅ4jBz%84SZ-ל+pRQ rK#nZ ˇ:/&'YrЗÙR1^(|!~@2~mkȏOV:7;Me09ǿ]GV2hz?9N{v2\f:Iw RbF<(< #đXQsbJIt#n@ޏ~YXHhme!=7GI|O7Dl gNp;W#k\}o2ݍq`/V skesz-/<ӈvWnqX,!^∽"YzmxY ԍiQ+C A}Z$@l^Oa1.mʲbDboo ={g﷧ΝyւjdkEaJ s%UZM5>5HWڝҚ"B=;Lכ$lB@nc|t4F ^2q:m<<֩Xe_HQm/*XSylK<5vaR2ɽ4jk ǯOB}Y\6Ux4oхHvXhkٔU7_+dx* fѾ['1[pnj0ulfbdp?n+TI τ4R,b1P̥JHG;X s4υ+ )<-/OMy/tzB$zßbմh/w,wiXib?͝96]7qZ"պ-ia'fi%9hbQhrc_t8~WvъA9jvWTiS"0A5$!9j$Ml[RZ7'ǿ7| hp@\ Qé'S6ڔ~9k):PW3)8+XE7xpƝFϷTtd[:Xu핔sS\V2ή,gɇ(CԸD|oUX5 )8v kBu)VT7]}h3tΧQ\*Rn9&s;0cǧKAzg;qK/P/WƜul6&(@ݟD#q q 7r< F)cTX3.X# C(BMf1<31kCƢDYn(Q B]K 42ȉQJ)Q!iEbE,8%֏І#mwS3=|҂E]ܪ>("]Kgl)b-L(Dj%41qf0޳+~0Oeg f[b8-O],4c߸tzD] \:ҋ~z#b_!hRְ4MqFVлNb]yw(f6W>.A(U,R݋FMni 2hPA&&4H'VԡhSNd2?+ r~ɗ|jի((Z)c]e_.bvaƓV2Pmm=wp/O'v ټ3Qdtex ukqǐ{_bGW) GE:ЯdO%v^eԤ!\Et H=u GuBۨbLKM[Z:4䕫hN1y?yӺ[(>FvdQk-|Ӻա!\ECtJ"xzӺi`B1Q6XCp۳nnuh+WѽuJ 䂘wT>[R1ƨ<{F,|i>Ok0.2ݠW-]s[ajs0/nT'[^eKN8ڿ mtrޜ\zo}bQVwS2B@x5%5s;-|woB&Kdggߝތm>=w}xO73z':GHm-MJ3u5|+A;Os.h]eMBeGiiaw sB*Z~5TBu};Eω z%SSsn=h=6lnt)1a/AFygepZ -A9h%%];MG[[ж'R-.-3ڲ񖥥05,H8j&R0id" 3IBa5ZSI?g+L|Eext(Oxp6{IZ_<{:n3&Ef&1)ft Fh@41?,j/6cW4&Yw";{qM_`4 ߷ɂMBdD*cR1NX*B02~`1')U{ YU &qXJMikkUʸ͘Y)'-0-iFufl& fq~p*LWˎ3JrdR҂46 TK' /luJ,r˖Jw?üp={n&=hf"{޲%L]=G۵T<<|4+imjnNxi^,8e"IW`:x;U",򌃰V|;La,xSgei)wU1MirZUX»b9I݆-gtE,A|̢b*RԳ,qRI8%HPdeݢ]ܭ_nou $+>lIAaTDse7y~v/$Xx9d>̆I>#xA\ފ tɧf $2CC#Pql6fkXR_T˼rkղW޼HS&[Dz:?wu}x{*[|=1'F!\R'xa".x 3 wEol fp W#k5}o2ݍk`Uuؗfz>狦ϝvh;`.4tww7VJRړ# ON3S>p芦ɟu*'u lIͷɟ6*X)TOvj,Fd@fXA>īs*ThLbW mZ3mڒb.!=*0ô hiY9~&)ٴ>*)4y0u 缲N"HQ0OEz]FZ!w[Z#%vWxNCpT3/Wv2E݀yݙLMO{|umC3Lrcwf.n]ۻQ{p N֤KI djŚӮV<%m[Z@"Mlmk"b5L!$لjjnuQp26v z[0¡eUO=FpTgPa^DKtn K~Yc?~|3q؂@\$T4K4KT 15pJHłg*ӊNJIc<(PLnOVr )\PNk`O/fO:)bP! 
# i,|ʇ*=U+s;Q!V*B ;TA2=H+Ԣ$bH N' %km$7/\V2ŗq`; .]dAɱvJ&SlrnC%_Uzy R6 I# 6S`=IW /z}_JR"-쥺?MBB0mjPWJ+sA;lU!i@z tmgBRG!)sۙM`}+Z lc/9j/+ygZoWc|֚4Z`i#hf#CO €g@#XZbZϠ ª i͈\3f }iP'ie)^i9OҼVW -4l0-wfz6#liP,/Z \=>\,{\<;|G,=!{j%KGI"8@ i6{&9ES7F #*!3EbgH8LOS\zf4Йzѹ"~_MN'riyZx(i_%р gɲ!X-VX3d$)$SNz>BuyMfV ʃgo'3oT }́ Z(%!,D-b1 B/=XB`&"S{ TQFS,B`> hԥ) 1bvc˓%(cΪ !Wio92sIJc)LQV*hIaHW*UGkq(("k1k< y:³#bD41Z@* P)\`/5*ڋxiZm2eI)aap?}NfAjR1=6>h\2R~/4Lɛ>+l84 %zT_6-=C &q#,%*RЊD-PˠAB 3p麴QJpXߐ:ÍMB qaC93<*5LF z2F @rP?/.pJnZټ -n5.F'mƳq}Q0(i9C(z9Gސ\̤MB򳗇!)waY$k*Y-wՃqq ÜE7DEC`[/ !CGA rarHxp{? R8~|ݷ7)?Bw&v6*%6?=%0h)P9}{J ]*EA53d~NK MԳ NʏYiZo.~z99?dHK{8x73SdĨpJ/&y>g0Օ슣[ ?l}wqunߘܧCZgčʼnPQ[(y^.Uq2bf\@Oۯ*) Z\TD0&ʣ%a\k#ťHŽGdT#_zvlW#.AQ5671^'aJٖwW耠4l'FĤjRA-+1 @D@U->ûkPEQ /I a1VrlDWؗ>eTFdw|v5@e|LӀU CQ}D\Ho>O}fٗ{=z b- w蓒R#jBZ|ZU4k]Z6eRwѦ{ uMwڬdzTk$Xi` 7?~*|qu粻ڛW PG2-.휑;hhavɵ P% [P_ PJι ;]rkbj7{Nmp݀t~&QQl Tqb83zؙs>(&9,2#FO4[Ɍ; DҠy!sn!:zW-Uz"6\Z\P4J$?UL:2ƚkvlqSҐnQCƺiQU3Ng"O5ԷhJiBBr%S^'oi7(SvAD5Q Y`.Yvj@Br=Z4gᔱrG,Rʌ'e ^3k(Pn!*7=ݡ5˦]ZڜߵF%z3ZFqn3U4QG|$y/SCc2F^zO ΄B*Cӄ`1(@\Hܱ@tfZϖh%V]uG+BR磕 04wBvX@Ym?F#)Y^ hSӜA:h]QA ݘNUN: 5JhYj=x d,3F!ہ ^YHKX/ڇrXogi/<-G0Q{ P-;7Ӡ}U2hWV1 a߇e)v[NM; *>R(zWoZlL􂌦b+p!m_gbj2IyϘc{˱]vξ]DRˏ0 o|>iB ڸsm$Bywwwbݝy;}2bu%``:veڄCtvDFmiTV:8vlCx3:@/RZ;kmj>X  (P9uׁ+f1 4pBE؁3]۲ tR\uj7la3ULHP tjOݐ{Or1SQo+l'bX"n2x<z.Ad2bT 4ʡ>\߿虁?ރlH^)zA0Õt"DPe}H59yW0c˧\nԬw䟷7ܯ.soIƕ1""j)VlYwO-_e!gdQwUލqVC Q/gXt1'byCyVg2T5 9C ِE4VYdA;GVfQž /-NV8h՘R|qk'YcyJk4PS0siw6HI_y ˨N8,˄@'\bS܈]ͳVSܓɚ$I.ULC{=%v:>cxA}j>LJi6GWh% v)nsx{6%HLՓ#[>gT|>K[q!AeeRZjڧs<=2h(tʖI|_] u,l2u|'C|m3 ZQk6kI3ga8%!?&WR6x;&!yBV*%&Em g/tL% 3ƨK)R")-LBz;TZNɌNfu2c{]4cS{4ʽ&\ai}ȔKO?BU}rnQm>Lq;ꢉ^=Q4SCxdMJ!P7N T}n7UQ%pEqcpIJ=.Mj= fG\`Nc&kTGk8҆5,i6Zs`pVf@EJh+.Bl*&بe'QL pp0?}x;ٟ2ǃ#ՔgKd:.q+"l#eF AP@:h-H(&P)%C.qDmLOTu 3p)&i &O5Ae8Nq7aqK~y'K^N,O6Zٿ6e섃m PH\(wvB%5QS`z4kOgNjM" $m.QHQ%7[#o)BA@Ɋv0ME0W-iJir9R W׷9?c!|lU氃)j 0vZ($DǏ MZФ&?lP rͺRHl( gv3P@_{:':@frz =׸`TXRg*SN;oP*Q'yuNԿ K'3$qb#]z$5|rK=q8^!)jG|b4%;LfxT!VJ3Q9<'Zmf!sxj)dS@|.~ZU.ϡ5Rek3uIkV*xVH%ג cyEO 
$ꙸbe*/?r%#N_+_A.s>u$cIƮ2V3d/p"ׄSd e)/h2cpqUt .G|Nz7,jHz/ij򻡐|Q t~67?Zxz,RFǔW|if`hR(3¾Mj(=Cɬ+4_h 04{+(EtVj2OvA)E @q YL)n{<ffQ1Of2=-lٽ>d7Vk _]YoI+z{h@X4lݽ/3CQn{IRRXB[&2#Ȍ̈8nj?LM0t粓?[ MZ R0}fu!tpZUV{x[li}Ƣ1젴c͉_3͟ 8{#Yfcko&y){ !]2d2RgydcgDr9fiIl}Ѳ۬g4K=CxiVˮϞ?3]eҞ'$CH|~)2+іa@ Q&ǎ䴋)6\=$"JY%ߌ81qaI(TРv1)ZhGQ5z튖OM0M@E`Zy7)~zSSwnD9uL+2/0 GcBb0eCĊ/uIQFDWa^(=j8r F vL)δcZ 3'Tc٠)dJeR=KWl Ubd[Y4"(9 [96,(wxJY)ډ`1$%^c)56 ,Â٨7 ,DcXSҿÙRPA\VBRpŇXa:./E$#9|KŐt1,hЫ]4x5Ԙ¾؍[B~Y~(4*jPW/f·KTrLBԗZw A}Y]kѢYmGYE\wTonKTZTPi7 咩X_JLa nn7ǓHn۞K$40^0OEaFEd!` s)3h$!8)AQy"]S%Vgs{(plmKRR6 J aBT׻@` R :JE+akT@ti_:_j_2+(05 E_]49"=3c.`9yszd.>D;"|Y-s@?J?tU(TpHcM,AnIUۣ}#A/Üq+qXBu6L/CHgw@UBG95,yꥦQP bS X90w1*!D҂ưV O!E۫f*A7_Z.Aw $'3JX<nB!c[gL-^ئϕcw}|BFue1ҡ}? jAT{V{}Vc"G^ʄx^"}(D U?΂wq/1ո=LKjnGxqHp>" " Q}4Eͭ#zs6=tV%F'&T^~s&9v>(G9ƺtצ$3rĚJ`lE֓ -:'\4 eG;fREɬsXoajixK%q8- c[7;/ם9G\;0G!ׄ Y8Ow~P&z{>߸>GȡIH'VA6'u U )NEĄFc;o8ۛ|zλvg=M[qof7{P[cgGU]OOϜyR,F- 3}Mv VDjD8jHmӚ9.u4ihl(PLkթF8x^CkH,{٦c v-F'HvoF)E*BS c8[HR"\# b4J\d1H/,#Q)ѨxjH b D<ޜXEv%BHˈ 1RSLOH$;I b}+7uninб)|q25F 5eмVÞԎ Č_yĉhfv0U]T)|F9bDkB ޤ,Ga@ HVD  +JhT&T=yYSO){A8s kPMl){#g A,[+&\q SZ*=k{='_\eybF0XK`][=1XX;VJ'#  #$R׌ @5#`sC6P;$;jXcpG9sE:uIbC|`тDP2g N0\HUH9E aN'|ZxP0=#B-U71b{T'e>ݑP)ts:v520v_f!uY4L>>Jjjϗ΄_̗}%2X*K޾y`pLfރ R=�:VS|ICC9l4QOon/!,Ya52+)=)-00¨fܺ[a>C[i`H .|j/8ь{v?}z5'x3XvT+9^V_޼ikɾݠ"QWflKa'Q,F&|;m4{S0[tnS7 r61;\oDIf,y`G EGǞh]d]opJ\ ԱcbFIr5§qon5갡!uo~2n각ѓR* n 0rwpD,ɒX:HZ3ZbRK#(Nj@kY\SAWʨl,_w,zeh/  %hb8HJ ]ß(C HkW>jʩ,\Xmd>nx3JTZz&#q% # =}r-۫[p3 amZPt=P-=J W{@> K8EMQ/;ѢS]׫"9q#ph}K:ˎe8S`J]"˧;gn2*Q*`LlewleIXxT.p PPվ(+TAd_ GHxN=ӮSNPìVQ`;NqTljz{ȍb=_ S2Y$yI IXYHdSlvKjYl} rYXE--LK?9<عs@Ait<.^i)f{Qς*4 (h'1 Ӗ>}mPy_(&X{gIM3ApEH1%Yk1Y4!&2:Q,`P`|(ks:wT:D!RM5RXqoRfHJ8 ͌5y}S:TFx9P-;ePg$'*&ybZe$K3yl!έb$0`kI)rT<d6u0{}T9]Fcbk"#Db'B;+Wc=XN)猌cm.A8^5Tn[lϿ~))qP{ƈ|O$%nLa`; S)W}; 9=QpE}9mISjSQrJ ]C̦p+CB/*dsZz\'{o7Vm 1Q4Wڲ[NYb,r Ts jP\BJ~1[?6/Y[zffw!4۫ X,CoN_Y.dXkbi|:Pw aB7n>lo1-OM]О1f~,aRs wc|fdqYHFIvMه$䕋h"^*M`E` FtQźu$ RXg۶n'kݚW.gA7LK!FCa9KcN(Uy=BZU [ 
Fz&EBsJJYq乶y)yg?"Uht7KwAqV2(z7#FqSNi!{m\NM-DpR%g ):| Xh:n2tWc򞋲a)j;ui1->@b$$%YKepİf97m!2R1v=Q})J$e{Ҟ|ʛR1.{90PHa. o/՜n=quI[O%C+[Ow5ڭ<)Ax&[Ow5ڭ BL cZ[OrHczÕDu9?ֳysHh[O<slb#zn= Bp:,%h w멀VayG;Ǯ!RcqیJ_+W -:km9B׃&^<9YN4Onk':|霍5_5+KOn@VUlyhkLq[cDk1}*LTj)D}ZUd3^tC*?a_y*=O>8 K;o|47wv7jT7۱kLJkiN8/r/NX_&cQU+7 UU2׬*ɗh* )I`GobLJuWS)nf% ǃA[~ٍpZSYv1{"vq Fu9OD ΅R= 444(Ŝ>A SsroR+ $S1Ʃx5ˬ;P-NJPMN̍YRbn%O^KΉ'#/$4֕*쓤24EJP+6F+@8,K}PfwN+(efbAU$5)YQe*R1\s+w+D9b2JQRrR$xkV9}:}YҖ^MoB%fSBV~?,Ek{TA6W5WPhW B1Q\R Hˇ`EL7`<%Sl$($fH; l51Ю˸aOMŻ?nA}3y4O`Y K->@ |\5&00yJ \qfNWYby'd,n޼VGNS >SJ"i eu bj3y%t;BtHyjc5qR%Qop(L 3Nre052'NvR](7&mHX.y٠b5ή'vGSLrfĂi &Y!ұ 'C[p>)F FZ_gIaGoD~Xϲ^By :rtNgp;mV) :IĠgR|> 0b=OC*>{Bw/?nӹ=῿{|[%RȻ~Ux`+AH`o""A8) $-WJC.Bo8 t_ΝYg,жEDAOߞ|xpp 0pf{> m M#X pox DKF^1=p҇{~,L9]ɉԘ;EArWm󩔡~ ܦ(oRCcܺO(d/P~>:)<-eܿAhBJLq2m4,5Q?э}ݘ!Kv`mXJ~\vҩ H% Dk),X#c@Xes r 4+  ,Z7s%4כ(wd/I)x.L}\ݫ[]B -,O Do7jrY-Ԩ;'TjDq-Z ŞlE`L)DQ;E9ш-Z\B3}P悹DRvHTD`kӚH#c,KDa͌p` ;*2t "Т%4.,dަ4v/X@G0'hlxpE"tCGr,&-SBe.K#q:y?~UVP$'HtX 2ΤDeCEڎQ⒝H)Vȧ|b`tھjrJE&&,afR)řLh Č24'O@#mmJ`cϷx4VGSD-6_[ RC_;@he9 kQlC=ɩԘ zdWS (\>iSN$T $ݢي3?("r0+MD_;n\2Zq#m 3 ^mąB'x8GVx8ׯOC@^VftiQV9p-TD2DUX UmL&mպTђhImE@Lτ`V,mIJފ_+Q(RpB@,1kΤ'`wQ~{-YD|jAko`䘌| Q.ŔajcM^# ٔ*Lai>? qsԎ@|#)QbWAVPkM#x#c R=q\w̶,>vj{'/*'wrCfy0oş(w]D%H1եKfW2ٱS?zzi^ߐY.~flDjo0{דoC_O{x?o>?2㫿O6o=uВАah=Gz~s-V ˊuҶ錐*=sEz%5QnTWHPCG!+sx '4(%s6cC1aMm ][OE_B muhgQΰGG;%4tlL  |%{Ǚߛ-|sM&h^ ~(i&z*sp}q*OH}iٟ3H.5p^>*ʕDhXō\gzɓɓ1HWnZ\1MuIJ{iRH&ChzOOax戒g`-+B2ƀ!,tӬ~cǻ#ܠIo a.׫;Щ_ԯ>e_K?fMCf4dMCfش}B$y, 3 48|PkE@`#5)Zdʦ^r0r(nП7w"OrbBQ́n&? !qM|>51'[_ llC|=+$ՏӰ= Ӱ=-nw#%e?{FNMA93 "`x/Hr&Y~%jI-n<ÉWbYk=>ĩBG%wyP"YZd}QT羼TpqT2HyN4Мǔ" gb0 ;GUЅ\bXF,0M-J=3M Pyw;<$Afۧ_+! 
Z ΀୵ډ('p)=Gq, 9o1r:))`05]=ǘu`UqG/(BQ'5(N08SZpf3\@DTyP$5#X,,2~c`Z|$ /uz% ~3࢙+J} 9ߗߍ՜P73e5) $:I\QFԲG5\{Qs(,]b׿gTe)g4@ױK.V 2y˄qURuBFv/r M5HᳰCZf:)-@R$<6tdH?ʪ5wS4()?h 5&՞xbќ##ò~T[ʗipŕJc!BpQF@dP]m `"_kGꩠPG=n`FP*R:jΫk b^$ /~ݯYLy* ;Z+W G֟i(~SHw[$/hB&wr>\?fg~nQ<|{ZjR*K% lnV _n\L%tzuy{SsfA<pGcA|iG~ +ب*^yAFT_h&T.ήeOc*R2 U%'r ^Å,*E3V#[wh4~RC9cT':lRJXҘ)}lOX045rl@ S)fθ(CDzT]Q*B俔&up&l"y" x3fqIXo/j@[g}$\mką=#UNd4峻"xѯ9@ 5MH?5:lj8I_i =yn- TGuNY_u[܀Z޴&MOH8/{w\UWM%]7|!Vv݁B@XUmlx%H#=놾zKRGxV<h&YCL~?^ҾxRnqog_ߞf'o7W)oȓ Y{s*qD tN1G /eyDnh `.\\Kƣ22 8۹IqIN|RFΛ21Bbe2 jjJDmy&$,'fC#;lE.*L i&<(Mm6haJ**Y(L3r$i7]7Ӆ)L B`uQR gRa2R\􄑢p,DI\_q0H1)!34;K<XDN:s \私0^B6rТrrr[Qp(爛6*ʋ"(o|6jPM8} igQ("0_%-!Ǘ1ZօR}(ami/GÕgw|Ғ*f zʀGG_<78<*u~sw'ْ/=\__^=O \K͝%tnoq|I`m)۟94ntQCCp~(Jڀ<͐%]j$9HfT@%K. uL k%ǚWePWJ)j,D2쒥/FhT+S@$JR65EJT,I C!UChBrtHvV]kO<ڱz۾E_JgZ|rĜ6|wo#o߽O?OB{;F eW,5pY;>pv|](x 5Ypy~$<\dtGl}T4l Jvދh-!8:}^6bp~;Z6Q _B:ۻ&=-?t>{v9 C::kDye]\E[;&he/T7 }u0eߜv"L9n$]"9dPY wM͍E0TIY&QB{mq $YhE,K n"FLM7p Z(R4:%~z''㓁Ӛ \ ҺR8 x\՞|fÒ(Ӛ| 89xJ3)_ʦ >m4ٗ9}4&$%l;!OyFᙐW9G-c 2lK$McV'rO+u'-$vy-4x;񲰕i´% )kʒ_dtuMׄs1G^]|y֓GM"G9R⵵'n$l_LNA}>)N^Z{BRJ^{ ]#K켋b#ѽ^g.K**}qـ| j6!)c5E󆒍ӕ~$}ٵ:{%z>\1ư'\?Þj@uö&@k|S|jwrZcۈlǰ5a!j (e1zpf֓Q 48 eV^ i?NZ3 n|󆐨ݠ{pfǔgǔWl[L#9qҘK\B׫qI3&0\T}68NA{; 9z0o4"$M#V'|wר:nET/CFX h蛇~$%7=- 5 5H6w^%{уz2n 7F @2G_d Qʥ"qkO7!1x9zD zt_aP;?pTbY#3>hr (8EcsDB*lQ"(޼wnlC&'GRHu`LF߹ C[ @pW.-JFy4zA*G~0 u g*cr ˩@˲z U!q0qɝq@"G(djE^U>${vysA:_H;PQRLeoBvW83._fbR |j6>'4} 7w§.✔su{U%J鍝\YS-7r@A <&, #ޗAyOf:A1 հK.հʆ}ex&4mEiE`!,EJ E {ȢJev,:TjX$:eQê_}KrB-!'T.թѰ_2l;kBȓ T&?0 ^ñwnp@;={V#ӆ\6{kݬFW cc1&}7M3&#Sups!&+HC*hqy89+ۣͧoNk~ "ɠW>4.Y4s"Tevۣ b@'sQG\6f3P8`YDzT]Pf+w?`ȔW-S WqZgΞYC {0|H^=)nTy[=CluLuzHv0M Ia_ѵLu_b S7Ƽ*F zbūC8jG'@R?Ju29z'C/ ׽;! 9C^,PF31 ^8! 
Fw M'@Ω樚+yo)e\" _$F)ZK"-wB}i܊LGK>-M$(p?]gxʊMaDhg;ڲLB, Gdu<]XZ[ OP 0 1' _rkc,,v(7V-1$pZQ:R B 3eCB1WPp݆RabkD%M LUibPW5j4y@ PQQܖ//삉Ϊ"* q5U5A,A A>.*0T{͈pq|PeI%`њYvuVyrZ`Wh\N '>^'y9_yCFL~gxt6~~&bbA̔h<qu6;_;Kē쭧CT%1ǨnfWͽguCqքQQnOh_}/e4ʭq^^-ô>0cO.GŁ@+S2;?9o@I _Ћ9m"g+Q7(u@gϿ;=Wlm0%Y}O2|JCQدtegg9.06ѴFyzr->XJ>&$ry<)NO3CC}P!{Nѧmi)EJ'URtrgl.e5?5nJ5E'42e 2NʈB~m"eZҪV jK#u_Rx`B9gDTzwtn_7vunP>O,* 'Cot$`{l@ }^?&PQmPu~ *no/B%@RǴp,Fh_PQ\NΥ-4 DPK.cr N;Rh@FJUIgiRPHU fbInES‡AYzK;1f)O'}0ُ35I1C!󠷳S]MHO5 :֫ivu]W~^ӌ׫n-tk уs*7aNnte)ChXQqhsͫsm S¨d`9XC,T}*\P1:ee/>LcVwoF:"4Dw\ntг^eS^As5e} ~#"eN(jßPH%llMd2fp9Vv1p2V/ػ(;wL‹h(zsfm@0I/lBel=Yk1>^Ꮱ6hKMA-Ԣ|0 5l;+v&eC𥃹l3LijRr8e(?0&Tîϡߗ~t@:Fmf>,9,z[:v|O#F-{ӍG VQB_}-fTFzrw&?{q cu0OXq6~ \=h$(:6߷Fn+Ek<]竮,:B0\{_~"NmI9NOH$<nheWɖ1^կW;a$CNCߐ9V†vcejz-q}H#`_}W1hgcfT~CXg="~pٞ>NS+sPO>yE[m18uӧ_ۛeKZ}2v^nD S^]~\k 2m~|&gL*-z_:ỻ] ;=Iϯtڛӵ79<2I.MJiRϓKEVZJV”hȫR (2&'?aqiܿ@W 'F3}z~&XGsOjO[c-K0P4j DL j|ThQ"F!/nmR*0匌)M*4mr%h HlXvj?*% }nR8U兰X[*ȑTEQNL)J*XšDc@IQABEQP.$j%*B&1e D%L n@pje>U)X,+ JHMKx2YS$i3X 4(12uE8~d7NJ ɱ[Qj}4Bؿ59FCGnUaـ9NdXL d"k@Js֒u SA+$R)nkSVLIh0N>dk?~hw ,w䯥+Ľ]ݖdi~O}7E(TA{^SQ3m#cNS8Z }4UAeY<Vfbp괽-Jなc8 TmqvNZNuabF蚸c;+F㸈;#!( ABTPg5'#J1ut?8 n}=wt?0{ڍ`k_אƖ-Bxt45 RRl>)zm [stDtL՛ڤKMlᅢ)]h-B8*k?;+F qmg˼JlYNѺoҡȳNp2u]C ؛0J ƥ(VzTZ+R̾+N?@ 9L!hg8 KjC-c$m]*yd0m*5k(R+'U\1DqʑPN[/.i̇;bEߠc]Wq:QBl,m Zaޤ {~NVk-FTb_>8IAbtf t˺ ցaO< O>IM5v{|9F?|Jt"םy1ChSϊra_OyO1)Y~os_#c૨WbO =/ܿЕxqLUi4'N,R]h~ⷈz;z07oqř^쩾697U.24CN|6SZsͻ-h ޭ)!S1ml Mޭy%ywkcm4j[{챝)|{ چ(,C7RAf_n[,]vpb{c KP鿘(\T0TCWB%j̤y\7X4B\|6{Q~F]ugu}F9\XfY$N3  ,93DsTPhQM~+G1OyX}>^d/C_z v2Ӂ/Rth htkH%yX[6|-hlu-yFm:yF ZuiRrD:oG-v2>LdW3BY/Z4 p/1z|/ObzQhqL6JU1.O=\[nׄ@oFv„Z`-' z' ;}{qw(ajUG[ i)OsY""S)25&S*eIsS!V`& VŁ$%oUe;ofoǵպojc=C]Kjjexw':dx.23[h}ѨoAϨU48W#Qk-9XhvѲp;,Ȍ e{w=Қ$:u[q|+fngMMV1;*U"&&lL$Ź͚3EHdGxxtFr5}nM[Iy4ƒ}Z[E /}r9ԧMTj߀y' BRUEe}/UQRilfm?&,M*-AHs!V:aj8Z(l,f{A[Lko:u^׭Q$e@JrjK*Bk@cVYZH!u s[2 Lo:P8mhl7CCI JH;۾)}VAa]dB1V*-d邇LM"Ap|=?\K_O%X\{3(ӓy9?XB>cO٥|v|}EI} >Ξ^4[ p*!տsΞ{! 
X}sn[B--_=RYZ#2@b}W6=J V=Jأ5kLlfd9 +\҅Kgx:- tkxY)q,+ klZuvFh:5Xl0bl_Zm՚TT-~ay0eKt}<S$H" -̖-03ۍOdJ N/^ma0b* }M[&PiNQIEb,Oiy2 2K/SK?g;NT"!< RJc^%vxQ/E'I M@<1d Y C4hte%%Ԗ*B]HQ`m!X!?6c888Q!Y nK G lc %#^är;_x5]Լ?w FCO7g@FEh=y}$;T!v0v >-x J~OdA(+hoWn<|ͻ];Z@X{?`b/.Q?8`#[-?\>ޅ6ZvYqM2ʆJ̫2-d(jQB HM9c7EI T93WPHWwo(?ͿkP$ #lVI$ەt)"G[P!v'u䴢Ŋ; g5v=sWqqLzp(p%k 8d&z<<2Ŗ1DrOUT[8JQ*f&Z̲ $tݔi@l8#e{2+mY:L"1& ĐF*ʦZ$5]]fmdvFsaCͽitH|\Ӏ99" [8zWLٱ/yק2@Wyϸ=ue,}yO>m<^zftM#|c2:+Il}G׾._ >*_*2x";$ny}Dyx9$#kd=X jqti s#5En:3 m)c7:}IV wuyٍ<OݒU{7tygvjz YuS~^ 6B9ڇ7h.?<`uiD6@tlbA𼍣[0jT L`y 9:+#S‚j9[Cg")d枑JITa^`4xDuǭޤ$Ff?_HW vX-5gz6_rNp_' ͮ89ybW[X34vAVS7JCjH({#ꯪ[1u\uD\K<*̈JQy=N ֩EU].p~BmO->` T>X:*QvB0YIG&/Y+^Rm;BCѹS'5$݃xKH۞d iԼX|o3 (!w,܇A_?|WhTiws }5%F7 (Dg9%Xʮ$<#"R*9Oj&zߤU}_Gp%CE'wZ gesm&hdT iӊEM+?JRxZ 8$BpaejD)N-0T# , D0QQZ T4G.b k(@zmn8К9xy6w=ltآ Kӥ=Џ.d2b-';\p$u$[|#`9\0PEK-w`IGR &uƃmfj 2˃'a\{Gޢ M4w)A^aJd\Rbc!šXhEZ&dG/47!$itUhn2 -z˜+E1C,ha>;s)0UxpDX35z#!$ ),0loxGz %Dp@!y`qx` \+b ˂^G7S ^K:f8"-X@ofkx^ }qؾf&\3x[r{,|N?7z04@S;Cz}?@.^/X{WSL̝D; ff!@xeu['rVxV[hV+xt_tQI,acndp iiKU\ ,F7Q wf|TEOiqD*Tib_*B[m3(m[::+A%j*|,>G7B!pV )8`";;ńƘt vt4N4:\ob==|pp 8'81e6`FeCѭ2nj}Cb=(ZZ*Ǹ(m9|6R=6?y.;]Ŏ:ձlqGߓZVZalT;_~~o:q#xgS~/@hk3RcwΤ{(=VpΆ^-/Jߧlvwwpf.ҜcjƏ=>hZRԠrG@:eO n|//Jo s例6h.d3lΑ7W*{ZoDΝ;﫷x<ϳqtkAiZG "8<8a!߸6).{Iމg?E0P }³Oh'p(k3q [Q6b% M7b{`铝i4bKғ񇛏rX?xwQ|>y?36ZH΅~esn@9M+-r" w/yQ tUQ^՞6>_G䊺H3l1ڕs2]܎\)Yf썴xqINd%$s+=<)1Ȇ`Gا)}~y2^*.OJa缷/a_I.;Q 8oSU\>4gK7-VRHI qX 3c}K.:QvgC^IߺU|fO)dKӥ:#g$hUvI>K&Ϟ+/ɸn¡1mݎ#S9{b]4e^M2dZ7ɴn򦵓[pǘ o8PX#i ;'"W*@=ճ`7Y솕mZP!NvүʸRBfp=:*Ɗ.,yֱDm0wiZRj.ޛŇOfvb_~:Y/VunV˳hi{y,XT,(HHyD9(9Q0`KLZUI֤gX$*JSgkCNG銀1"Kz#i&$jqwq&ړ5J2Vihs8TQߩQ5{h LMWJwodȥ7ԂS.*-SJ G]ɿg?L?sеMZ^ A: `t⭈PX Rb~5TpB;[Rrő~\OLx7}U`8=(bKhg%w#ܥ.Qh;a* B?/!P#6}+νOxgŬ1P Mr-4q5J4mׂXaxJ=q ݎO Յ<-M0!t.Jl{m}s:6AwRiϦ3'j9nZ3@aq߇!1VJI^O??=@'_M8E9?@泙d0b%:Xwxyϓ >VX,,!!zQeְ^ܩ܊Oۮ '|LUCNZ'ׅUrw~96 kd%OK=0qJ]r9ק\p몜`Yzuwz{uwzU3,h)sA2=uǷK":Ov^[^o@槑g[ѴR?CO"~iYҪs(IPVRΨMk6ֳa4y\-S_8 ǬiZ5M̚`MWXaWUeh4ErՊ7H yx1cuP : 9!ɟ+}V(WqE ![j$`;ܓvs,wD#udur)ZbY6XVe,k]q BkiHsORqSR,}rq 
IbO}S] zhShBFSꮓ5TL6FzZM>i%81rSLu6`2McZq4NF-`(}ZJ9L!KcRK#4ʑOс=.+HIJЧaFRkջ-5P8yTR( ~ #ESaJ5c:RbDS(琱2p "#>^j0+X5Hw$&kP˹n%OR~XOUW[kCsdd1qnUJxevRDz/_*V[0{h`(9 Q"0JXTQ"} q',TI-kknٖpJ)[ڙ+N&/R V,)$8;@"Fwi)%8u,Q= 3c1qcuwO264nռYdLŠýev{+.īvMqήoZN֘F&hdrkh~fHj6:^\sbP/aI&K콀w\"8P{#BF9Ӿ|8=3htU.zqҭp[9Gj$`"]\} }w#Nb-.ْ݅7!d2d@wYjŊ,Ztw ~\A+8JIx0 {XQ ;bMÕzm̄|C~<”њEGʞQW>ug:7Fyw:BEȢu %f֖)f9?<ު|,W:Dk.$ϝA,o>K9CVNm̭5sH7vz~[o拐ݔ_Jk|Ɵ/-D!ҫZ !Lc<ne*˰ف\\عp0XC.%FAYb`c(HYSaaqKlj"cƂCJ[\^ ybisB, 9/kSLJY/]l #*+p!W %՚5ʎ~5Ŝpjr_ Ԓs0 p+g(i2*:jɟ'߮߼l!eF鿯_! T"Y noJv6cI ZAh]Ekrzs_ICȥPqe8)kgı85C(  *Uav, [A< x7&{!p\'ⰰN7΅k'80a7E+VƂYU`΃sBR8`+ ؕDQ 00Tk5)$PvRR߿DiLъ4z)d/5X]k/ -1Pz6e(ʮ8h( /O>_S{)#.6w\6kր.ꚔMu*[<>{ ܬRw!Kbj?xy:~p [73/U{?IO) e0WSgŸ/>N_4?ڬ sZIa8"+ ` U)^Y1<+ZAUj[a^Q,˳םT9AG=Ҕ$zBZ\2[[̸w*$a Q(\:ƈ-h|X"!nvui`X䗇»aB.p$AnIxzYB?jsYB?ؖ$ 2jIQOv?H>,{ B**C-S ǩS$p zQp?C? , 7 N쉐ka 7ZԲߩyY9Q9ۖi@H1=uypy[rX`5-ui*!bMv[X7^It]jrspu =֓CÑ;(l-1c,򾹢N$FnrD> zYfQaQMs{]=퉦4EkZ;@ho0I}$FROlE ab)-!X4q;Mƌݥ{exc?3 c$s+)$@BT[ Q|W<'@~ڇ)"F"ƒ|;5U?@{t![-l_|&q$a$_Y$ͪ|HZ较-(/KA 6kbteLE\2"`]%ps*@1-uPWpL )цdZh<[@,P+N!i GWW8^W&R#sP}#=~ysu8_su8_qX(omhR2pc8X(KJˡ͂E(Oj(!ǣO {φ#z& %jPq_i16Iz3bӛ9w{3e{ kyEBB^)1$-Kc c:3as!+1%j-u\ 2Lͅ5z}9r@?4>S#Zvc~c'Pjju@\Vد 0tFV5ywsb' ᨫ\D2)y/;6EaSaG1m8WjX*| x@fW :L|2K"&׈H9$S2XHT UN)&LHQHh,VcMikBEwSFRcֺK4FHKfB*5JiJR -%S<ܘ؟L je˳A"xej)dRR& Cd_Y@$!@DC9KEh6R(j)Iy2ٙYNFJ K\ um-нC({ و{08owZ`8o5@ OGp.DŁ ~;!dLacuȂ#u)tɸ60BK9d֖.Ca p>-JBI@|@tU?ʗNB 5tyv^(y;gL2|:c& oʏ_^I'_|tuu\/ b(Y4aʔX[HHPiUX[bЕTŗ(v[!#Vz.5@(!3 [(aԠ2I3)Ԏ!w"RV d#zJq`uKL[Kǁ`~HY)9@$p:+ a3BS{X.%^j X=vGv!C@N>%A -BpR!1$ {#&Yu;{EC*ZI-9Ԩ(+żW-}y8 XB(C jHmǂv8^<?Oyi>AӁ7szJaS}x"kp*m8o_9_}Lo*btz{N07{l$o@V -tPKPr+**%*͐v"ϒ: JYõT8څ̅4wjfJztSG{$! 
WDZ.ĈV!-g cFG4:iSitk/~j[<[}xAF'Շ8*)(.B ~<P sȺiBoj&t( Fo%lvVP^`6xdA&A1!A9U\;8I,Kbp(( HLldvoc[Nc?WTcXK;bgj{_i̷Ş{1|d;\}ڠAԤw3!H_ԶdS&% iL*TSʽGoBA: JX7m*l,xê\x - UʶH!(t,7 &Tx r(V6pP'ecz*9W9+W~H:^\ ˥,췹|u?>_~y(a^[Wx:g G/gx:mCqrT2*`Adittt촄B^Ks=}NGZ3H̾Ї 5e}B#i-\*ጎʱ_!FJV YS\4҉ތKHYBCv Kڅd)HWWD9>F  (kEdp6qjRF*N i:H=FYE7?| RxJc.f;y^1"{j<@PU9bi$rA"5p2 `Hg #+dlc*΂']:*9?j,;VPapz&"G&0T*U@%G5Ql`䷯X:),'i+kUG O\yf3^JVYOW>xgdIdACĢVTd 8 `$8tZ^hɺPs)>>\[mUUŚr&naqoVlo!Ny,` FW9˂9 TԌqRdP k80Vf3) ,]IYD{YFhe ^4)΅hJShN(U~5%[孨RVCFwb^I>@fH+RHƯυD  #2V@ su#r ʞ:dA|ʞbYP1&쩘(?E暫⟊2 ?>6??x+$)4C` :Ɠa&n0X[@ȋ=kRHo~eP RR*# Bz!4[ILpl4j듺D4,Ro o+`h&֢՛ ֽȖǚ$ҜD A~h}FKn6h[K.Io#lښR&ůuJ1C)4'.釴8(Zwu{sSǔ!0q/?-6oa%w!F.9K$=ӔK5mI[2 p8W˛ 4t2H a_1m;ah.%;N+ҶA{Iwz&-qbz,:rz":w*yDGGz(ts liRdMjmݳocH)o&;};Gj0] ^q!iXҧ79RRfSlCyC0"OO %?YxKa?"R|ˡf/j<~oUrw" 9fIQFMmNFg`̛e\! %$gi raiu뀎7Iط$OpAt:}_7&z" IMiKkwdglQG%R`8]:]Wn%=5' {L[DF󁚪u4|Q]e[_4#jMr jIo~9r>R@AX}i=` 2ؗ%4O: V7V( M2^-c!~.JwјAj&+MXܚL>e6V y|N[]ՆKim(7ba#m(,:ZuhQMjtĪbGoԩX(d\I.V5(0 KnC{67?l Nj_2(h!R2/]V)+-i}_G::bbN6F]+d6 pI|z8i f,Vr*I#]^Hk)'7qKݺ狛G6Oǥn({l$mGayxXMQd.zM(@ g]oG~{fv|׍/>UCSBMxb\ Yck%9CBM(+N;qQO~p}:>A1$ǜWr,LlC!mGPB.]s}`/̰~4p{&֠ʦa#dlg !QK)(.^BKZKޔyZ~_e,'{{Sx .8H!΀ ab Vә^<Nw*3YŝKЊ#'lWt$ə<Ͽ^:q xyZC;mV N)qiU:'=0 :bjUo**C\™2҅ CK|x/bݷ4@^c!\(~o69Uu() \s} y@!QJMݓLZ0<ڻ'X%ԀzBO&wM:u45gNx͸Nhyꪖi&LUe*uQBr*b3 IνQV}v͏`1qV e PSQ`_ Sm\-lrc- - x  EdMVW@N&H:dڡB FcOYR˝5.OK6WZ2BWiJ|x5Jj5֠UUSy@ĩ$-:hj*cjǼd0nT̓;V8AS]LEz <.S~1%lr6o:5F(*￧w Ce׿&|3 fHg秿 +a?ކ[2[v[cyXwaOE> &Z';(#NYdPґFeG7nԍ\(-:ʊk;<1Zu j*32H:QLsWmoX(5A80L#NT0 !V!+Ѓt!|wV^h)Eǟ$D$ CƸ.kk+l_-KMzPdB.߉|t @[k%8 Ԃw2$RqcaMV9B :oK{MZCznss7{*,\H~}NISA;}@w"uo(ffuEdpX޲V VkaJ++5V<ʏ'y XI܀][Z+aak5 (Rkܠ P}%H`S6ЎzѾNĒ dƓ5 q̇5x>87_V{ׁlvI9d)0VaV-}لPYHUt]@yC,Rf3xMG6hyXKI;OA ȩ )W0KRA%ATpz@d4S(Ky|\C9D̦W}K1mX@;^7/.U m7C3wL1=$l$MnBI"͇CqѳmI[4·s2p0h;eh |Wn%=% {LrM/O_4:юZjys_!')Ju0!^A8ɛww_)pByn{_mt>"?vW[,ݼOn;ݿ]mFS2K[Ĵ6vjѵrhK bhmXb;5~ZlcFQ)Q#VN>~t=x6ZVC(Yv_k;"?µ0%ɝ*lxt-J1"E$;PKcLmwŝ[︖@;?/d wm=nݣ1~lXɉo/ l6V#M$c~F.a}SFA2v4fbXJQHH+qFAP:N#2MCP:8]NŅt]h 
]ppQn>ھMpHCAHS/nWOnLL68 ן[9oLxZÄ#A**=ǵ43ғ끀 ʖVE4+ *F@g e-5 92e3{R0YN1HBC &OJlςfj/ _܆꺳s4g39U<(_ L.U K(f?j活`wUWNjsiz@[]saaDž 0~g@MK[mϲݽ_݃Σ?:K~yY0 n]%Em'I?!N\2LbLY,I:dݳ+m䔷Ȝg1>} A9;-3u<:2\uΡ,E̩i O.W%"<@/ήO 񭏶aG_hw|q@`#ᖺW;DS(%\7Cl7&s=beX5SX Z_QŚ\/) UckŚTcx^駥 jaݷ|%B_kdAolf`_3ofnnq`W<7k\JaA+^SX#hMxJþ܆/"Z|Wנ (%#wߧxxBWҳ%}eJ_ME4I̲j@*vR11h@Dj:$䅋hLiFDܡvK FtRǨݺ;  i햞ꐐ.I2E&ÕСvK FtRǨ:O[zUvCB^&ɔ◖`bq\c4Ed:ZWױ$ )]}ZWZ^YD2(-}~BJ\uƏַFW/bbu0 f9Cc5_J 1TQ0 #(vڣHI5)#Fe(5/5C?R3Ԣ}9 S?_EaP2Fh Z4qKdcԦ2Z"R i_YmJCqF~=~"8-V̗έ~۱3 ||LgZ?8i+x0on^}1WQj=+ULz sb]T)_*/XG n kdcjŇ1]Ҙ GW5fM\`(G]"^RS,(~tļefTx7)T>yQfXFĒH\}d5sBQ]jFfo ښ~{x{gx$hlf[P3IrѢ(֊8 ąLJ_ rd.rpk*Q'ʵWG~_Q=WV;?R "'FᎫ,9!VJfrK3x47T[%[\X~x%$rPxv:;M2߮IEdK`C)(R R8 /LZ]sؿ2`!+EA'z0|+,K7'iPB 8\NCp1x.曁aJJJv<4Gze}.e"}W~?(K3 5 EQH7 Pnu8eݻ.uQO5L1o])Xn mRʯOdz9 59 Hsx#&AKDDj5tiXߘR$z"M0C0Ëb3;tK?Ӫݮ I_HO6ssZ&Ю(|m{> HKUگD޹ '.T!XGsͶN=v%=V ]3Ye͑V(3I2Ԃ!fN_EhG[.'Iܭoe:L|u+/,>G?ߤM]wzJ$8dp8hLj3PXg7vi,KWXtkjT&7u!^ F5!No]O(ޗ;c/WP*eӷ^A$#!oB>y9w\\&y*2m,sDN:ϝ(%տENS噥2xatVpw?_-E0] 8D Ĉ%D]j(,]t]g"п u7׫^b_R/ 9zoufnp,EoO;. u `~N8S  괂eaZp$8߅\ ,2F~>F wR[1+xug`'~A#rW2昸Jj$Q׃.I|x`) _li:G'fL)y~@'2?uHFɻ;e//3C10 }a`XN >*c_1/.7u0Klp߰mMvu~B[xT{1~mz4{jYMոr./+ mE伥3ܫɲB\1ܷ?}@Na_1MZ@sW3c rc^M&~ /N2hy+Wm:N3_LP;Os)Ve=W+r|3;W/J 'SGyn9*h|<</id_T|%W9QHTP˵И;mhnLs5JdaUA Zq*& {VȺIaV%0ㅒ#hRôtcYcŒt+ r_JצGK ^ |S hXKOz. Lgs7n~z? K<>OxCd%c~ognak7 "b0L7;? |b͉1~^~gloec3]d{@,/ܿ;2@y %nV/x~+b7gK"STswB0sǼ;*E/h UQR8CQF" `[.YN HIY>P3Ub Jp+r Esք$Rr#qK%}7 O-$kKq;`Bky&w ]hBQZvD9,NOeZ{VC-zKH``up%9+PFjcr`c3D\aqc*0t=< 6*€{r#|S͵b)YSLWӝJ$A eW&m I()CuH@v?rpwUu ՠ"r.pEn^Q*exDPiTJ9`,(對 S\g0d&+qp  M݃k.G2)ׄ$LT2.jR:u2 %W*)t2d<ÓJ$@>{'ėƋ9xoeAe39άZn}Dx<Ԉ59hu)BKi/1ը*5#5‘}Vn–` q{^̛K)ɪe{BlsmҔlX: lЧ63H-|ޖ16J}unm[BŶTu= |I!о^Ϋ' -O,&nGh~-DGg! m#y4f8ґۏ7 Ow} 6:{gzHQDTznWq~۶oO`76(5 #{^n4 K$8Ds HnB`msqw˔ v ds;JJJ꒳K? 
J(EE+|PrQmHvWi-\*`'ÁSi&'6O5:#U:9|H /YwGr>Y'Lkzt"IcVo0H%$S-)WD fHJ,JC75EM]xaDž ݙd rESA>_ݻU1X/}/&w@!+DPVբEByڠj.ĕ|+ aWm#ĩta7:w.GT J=g"YzRJq[ */21(f4 -u7ncx9 @/T5CԧL* S OfR'*Z7Z ebO?-;m׍Ȃ0?ɍH$_JWT.gd;bf,vW$WgU*C.$g +*vgO7An>=ᇻy8nzw˿1n;>k󟃉n+^RZfh<(&,zع'RO#[Q~iRnWPt~bޥWE\-&X4 g ^ qJ%$`KV}͑.s?6 g&=_; ]"ͷ3WSyg-(9XUgˊ]DG^B${ǫQ?|t)e\"]KX5']-_n `e+wq Kg4ٮ`0Ez[x_^tYUe!4=L\ՋS(Z!6ЗںWciJ/[m ӭe+CΧځ {fӢ]î{0{~V߇oY߆wfo b>%bp.,2mry7U|`gfGM S4J^W٫QօkN"Ak(3E a+JkaNIb98 ^`>AXd:ޫZQ3P\oT9݆/ad ]kdJKɨ8LfUVXxec@$G u֞Y*ppS尽ʀ l.3g#"3-í,Ks TJK7!K#^L,xkiVxϱHuh/X<4托F (n7[Vӡmc䚘BQX4+mփ@ QmK*Yo &LD&Lci4a0BTc:(8AK+ wĕ)R졔D.igB&dgH?d7vS4ʜt25zIn9d@/~b51xƐ!;޴-W}HK,љdt[FhG8ʴ|-:)|%"mTG=R9霫{W`msШbXr::\:ڒai>uMm C)CR~U’K{Wa }mT׮TޅoO9CT^5FHcpNYruKnjSLT󚄾_vui{C=t5TxιF{TWo Aæcχn.2!zG0O}bXy_Oay5*~s< W+dN޾ڶѕ p&pQ)UAU C҂ )*II}4v;iCZdP seg.d*qnv`i^!ϕi#C 1Xx'4) Z5H Nˆ@Eh.I" =EQ&zm5~PL1VU c޾w?$$U|-HfW"rZ~b36v^&HV|uF क़<bo?Oȫv?s0O#2nSi5 Ń":?:N=_ȧ|^H t^4F sY>Qx EGrGrZg(PcHuLJ|(kO! ~^r.71?>M]&O_+I!=SHN-TrnfJx7n'q/'|D{̭c{l;ya0Dy,'&3c }+Ok)x˹QIۙÄطӹ$K$}vͮ&4A-D=RM>"g Z shfuo}aWS$udB34Ivz@+]'z“|qv= 3~o/2Ws|^/jޫԍA_bD-|d#5Lw 1Yeߧ }= Ɩ[A?2U- ^#RqZ-Z'7ϟV z2rCfm$NBFhm%  b(pP! 
N!!`ߟ\;J4Ӝ!%' (a,sVQ ,Fh PjPpD^,{}Xm_?=~5*&ߐ7>~ 1 E~ |=^'OBgߌG~6_,Pp}֬$06}x'L2ʅƣXe͍RQ؝2eHp)hwkL>½8%r`!,_K#e7%`t)7֦[BryzꩳFWJJU\1j{_X]]B ,!ѽ/cyAXпن^*ŢO<8FifWX] ͒C" vf뭿qMHjnINyXտ]:`qC>Mu9!pkbs?`*?0u?-Bs+N%ޜZboۿag[MD!.J-r*S2 yf, B>#0ϽHXsڲPq͜y+isʌvn `s g$Y ]Ax9&F{!8)dT5Bpy0I;a$;<{Q/PZBJǒz>4Sa(N-fE/?ZfE$\iV}.?MY&x|!rUi{ WQz$KVr InLPL_D>"Ҝx39mf;LzQTw6I|ĀcT~Vkc?3tĪhiWW/hFDq{xth %RRQ}*]f:҂i<$mUw-V ҳFP#{})Ya U>A(*7ؗvč193t8b8 3e쬎^rb56Xrq.U8;E#WR&ɕDFIGU -76Dh5‚IDٸs-s$FC؎dPm%'֛=8m&h(_8ӌDArh Zdi:&CɠӂyE)m{qŻcoۈXMVKXt@vZ"K$y˕?,8Q2-ܳ^z1y:2T*q& xgÁRkS*\Rȡrc&_B1P1yhBwŹỷ#ЇWћn~i74*)NWC`Rt@Ah EZ%sm(RPц%Q[1%&NN826YaU6ȐPs#=V§pf*CC{/vٹ&)"q[꼧[J؞N4^{\1sQ0பsj*~\s\ΏpqjaHX\j0qYcMYI-P˵ᱯx$/Sޒh*Y~\oQIN 5ϛЧm~7\Ҧ}kw3,!au]J(5vV$3u%X˖94Q4:E3Ρ PL&K9aI3Dc:Fm Nn{Ơks64T8BHLQC3C!8tBF+ |@ :#S ų좯(Dyƭ%b0%U/aAvP<8:69]\ET|tzҍ?vD OY  ?{F_!nsh/&CC5mIJ=#Jə(- ÒWѿԙw]_Q#P;:ٲ:u$sI?5K\K̈́vVZc-PcrD H;댿zhިBFG@!g1^*7r\ѯ}u%NV=vn#ؖ%Sc?W<[ nj/G$OthX\֞[f쳼?ŻQgókF3PI$>yߑ;*eL\7vݑ~N3t"96Y.'=T].9?dK IPi<w,#%ƇHxeUFNJWkNޣO.z'}ZB-n6A9-;'s-wsZ^LxꖷdA>?-/lۈU7)~ m͙}vH%#Y-'sUiIUJ D&} kOI|o?L 4GW~Y,'i3Y}gT"⴬ ez=P<}Mw5p&ʩ6PY %$4 zq9P`# F  sXǗJ5 _7p>krQ{)l 1^4Rx9 )9Ɗ(J$.m RɥFjϻ$Rml;k,~k:bhS]0Mn?oj]pD}4K`3X.030`s'0s:sLml A9sK^ ]! 
i,OVE,VybITV&X:ϓg93S) fjCxV9djf2w'J­1arg';YVpḻNor4]2\<[`Y2z v^[b* VMAU4߿1BL!Lϣ< uepXKASkdWR3R~${ !-?{flyǹs5$p&̐Efr#D^l5I|LX[dɆϒul_FLCcP SD*I1LXGd! qrlׁ ĩФP p{/F6Icd/NܹnJ/w|{_Z05W t 혿M8'LYM>fk$6S/fi3 mOvI`Xwދ(nWL6yK8(a!Q:0*e Vk\&(AK߼Fe!3.g,mոξڊ>M9eUTrj'Z q0;*tImb"v"3QPIjFbɧdԝYؐx@8u.#q,p^WY0.:v\s^5c,w5 kqw>M_s:r}v _Dcq.pwQE+ v"t=L9Fa t`1h(If`%EAtg.qyErVjt3+8[+Gfy~NU~KZ4P tKiPk%wĊ&'@i֣L9L 7}7uܗAa4($1tj08a@бY> !*Ekl)U {!kwSe:w${=&si|NluZEԥ7b=%iz5{=ʴS>I)epK2ϜHC94DD .4F$ݛQTPeTabVXs 9&\ Zbvt pq\/~URIbZAl)u^/Ւxq$꩎VEZGd8RLU{( 5zBⅸk '2h%2`H@3 yː!E$HB`rj] 2-7 b2p2U'UfEo@DVF"bA5R;Kt0c*0Z;-UkdȨ/8,ao;ȕhZy7LbEGn<4ִ^T?e{ltM1U #rnibv>8b?5@Kyx"Fw$?5gv2?BOO337tyAG_6rOwn8MMEe~JMetˌB#3Fɑ9ET ƷHNaD ɡ'-91 Zob2pD\Vr@nz H$4!.,*KP+ XCDeOZqZf7.A?MP** H Q[XVnVp^ZsY?(T rk]å . σ}zBPYP0s];q^dBzpͺ[1ObWȳ<|c.Vg Wȫ\1pxqn=aG4jph*Rnnn΃C?ϳqo9u3A`љUߴ WKID:[ң4 :Л[ U(BS~ݾLu広IO*$eBl|_>ObWNa18Ms)T'!VsY+ލBrɚ$f WB5n5P0WY7e{iy|5;~2O'm,i[]zUyNm>WK p$J?RE 3ZĉՌ[ DH#lN3T#z9[gSE:r>uYA7ۧO=@T~pwN37)iu~x|\crTL} &0pR-X4+֘qNnX[YUZO%&!h¨/ACra,xFkNk>#+vNy3L> .xIsOlbc4ce, ;~n7vHN<O4?O7ୌa#Dս7Agl;#r Wbm2^NhO=7@.v|Oob+/_VKQ? 
WCO ܘee,S e6SR22CJxH0wORg?j,=mj; Χ`NUxj,qZƄGeXIxx!y!zR U-ŏTaUuip[ i%!/U}_/jHLpyN9E 2 <(K6qQTfKN7A jFC`eP2dALaīLLd M^'R{`W*~W?~D:'8߿l5޵7R}hAkA#OV4T9 S^8y-7ew`Ѩii,Yd&ÅnN%r`_GTifEћzLD[.Xt).t+ )(sIp3 ^?ۆ[o {=ǣc΋,hoCLx K`$H d_n\JYH"{XS׵5|L]ċn(+j q\0)kI/Ab:Uu VQb \JNPfdxk"ԲJBFƁfT*G>4 Ak*HYekiS{*TPEĂxhfdF!cq5%1E1|IdPEE5c"yI"$^IŅh-F#k1p,x"Z1g(F>&b,;Vc)@FXu1ҞRoByn1R,#JE$xؿBeg*3*oT2ΩQ --qTK*1G6t [A 9FhK/AAZ{гQ!^/9I$0B{9Tϓ="`v{SN"Љqmx y߃^G驪J F-Dk9$BVR-U.0nqY=w2<[ ÄSVTa1bI;v _f+@6KIjaY3 8mb7ƛ*4 bRTmÝ@hseF TUuzZ`?GY-+[+=Ak|{LPnZN UaG$̢ G Z$ n [k1,G/$iy/,_bj.}3kq'WјnɨF) ?;1r}\\\')$FVAi(S4 0FQ28⍁%хs~JI4+uJaH_ug:KbXhV*]KhXH)+g6רfT\|=#*P;\^`%kbCp{ KSPXɹ!WL["shQ7Wg;0n:?{m K/>Ҹ*>()Wŗ&`fMG+"wu*44΅-IH FAGnCj}ٵy_Lc0[*8NP33 􃻻|^ ]t4Dl=cifKrW]z %![iw>В{Ve'*R.ZW΢xJ-vǠݚb:MQGB ՛vk@W΢xJ=-&h [S BX3hҐRD]kڭ y,z4Oi&mq =X9`EU|S^W?'·/>͉Qjl7=x:Bl4rjZk%l$ϙRM(,b,,yM=Q\4Ų\'^~ϗs5Yc NթTJSL]YUyb nqK)pjғ4{XOww?3a[ AVSjK'ZkP+?r\NWVdx Ndɞg $Z::gqiJ?u㯟]$tuPwŨ4X&M˿\]fWܧ;ן ={tdR=sOzkY+PS25@2m gc!eMUR+ 1g?`p$7o.%>WJSJ幡BZysgӛr~.:cT::+-܍RNZlpeϻM8_1B۔sS߻tgin !RO& SRT r-.-KE0ZjDa,sT2S i6ʲԅL2@dQP 9dFJiB jE48!E L/H9h>w[T|M.RyChv[Ҥ7!WΒ*>R6dXI‰ LA |eSHqsP>Hiqрc$1S\kJ }Yf &K4EO?ߚWv)j,<.f,š4-ĮO ?ՐK( hՕ$oΈWGC)7'aE5eRkLLNw9u7gB_f3ɜ*hN h#s(Xf*E8rgЎXr re(ijxD*iUL RcpmQEaZ8ȭ2buH7ˬߩl &@No')Sf"-.fr>}`}ݽ_=z}n!TX2qJpT+B4ph+7ڝ_:swy@Ⴡ*bpEW`LުLЛC+,JEZ唰}ŖC/jkVšNuAB4L!< gx19άd]w311RB륓z5u][{qۇp-/R!D!Dg%e2S9Td3FbBA Q*h GψW\QmR r}LX{8tn2b)7"RCqM% *QzO8zyZ-w8_ĭV\dS~5?O揩;ݣRl}WE|t(Co$ S* 9a/LǍ5'B*JO7}QzMظ%f͋3G3)Xx fVU&l8PP?s04!iJr%(*pk6zhx5x)pb8$-_g/\:}&r17j1Ʋ b*SDq)S ѽ՘gB'cvv-}νFx)Bwvp̿v:.!sޅOsڄu9/0xMAlMN^6>?b$]ْ9] w׮>P,'\DvqX>=sQě|QָA3Sޗ\Cz?c2{p̹rW9Ʒ}FWvaF|CT/%&P^}X=R++D ܔ|k/VazJH BwN7ioo8ղeYurEwϳw>4}2jA`]4{Iw{Bwt6<.|0 5̟^"DWÄ) B|l-+E]Nw5ը)ZM #t5hCX =x0\Ql*h~y2[B"iuw9i-_5UFx%R&ij퀳Us)G{< r+F'zMq !uEDb6Вn"DѼ352ՋH)_0©NP0aL/n扷 /t*)$%Z'hj1$M+'!Oؙ"ܞyMv`XJ 䔓 y@}4{OC>Qvv$K;nm]~`YM`E \d&Mu.[+~zsb=۴OtUwt.}`(".7b5{H!'ng{S)aXe2ihD ')UE"20LLZ-:0`?l~nJө콹a*[d*ATp0p$ 'hF3A=CR!H!!#qxz,~ #;цj˩%᩻CNg?ٛ5{BH!؃(p@+EygZUDka Mhz|<ƫf\j&>:\Sz qT-E">-7,ogD⣉ 2B {⳷ f$uw 
Dg lf{@'5wPB4 ƌێ:f<>;"IYPJbR\Ny&D0wP5Ed]sYl/ W>=N,;,SVenZaY ?],7SQos#U*Hs4H-$# 0Nz)Rέa;ޔ{wѥ(FR1ポ` ;t!9n.Yk@|!y,q#LA`b\g#ρ+ S0_hb # :|Wq IT=B(c/el.ȏ`=T⼕rB0K|z\D'Es >%c^E&̈73)1oDsY.q0ƃ Ux0plq6\̕pTd}тqq4%) MSŔ+dlF*H%2TQbDg0^עP9)4ru)nSbeJ"n6PhC$z*Bh,wc@FX)ԬuϻCk+1ݠ~n(Cw[z>;"4P|w$EZw|sVyqZ*}ky#c-ݬn[t;^wVgڬ>vn2j9a_7yo/wS4F!Th*h^Kүjp1Val}+vFFHj4502Yf0LRn*ΪL(^.i~zkT2Yj%]DIf"DN,T;1]p=:o (_Tx2{1HoY% p 2MWfb` ,¶, {`9MĹƁaD➆`:aZ+2ę$ r¸)L[n9Ӯx@EX#B` J&.G Oc)i^pVWC!cЁ67PXVƞg G0Ļ 7{' bQŎ&r0Q ԵQ29) $:cQކ{ Y`/gcFCY[s-> :c +z3GW[qg 9xOנC|̯ g*Ԗ!sk;,u>4k@MNv} TAg 35Eg!=ȨM jnp@ДvDyk*+aC!T{{!V&!'\ ,IRƒLj0+{P,?2n9c?X3pb4B7ՀFJyTG{A RGQ5lnJQb)Z21aVF;L5T-!ӜfS>ȗ2.`<0ǘ-\*Ӽ(ĎFe pgͺ]oY%|Χ*k%`9$E!p˫HE315z_Eb,@hUfIldK3<:V{#ϔBЕ+;ZwR?z">a&8]R朲р2JDrj $rչFӏڅHt+) !WF'|Rj)ũdƹ/BD5؊s8òFpub};J/ˆ-afr"$>]gr 8HYB/82\N_ yLft=um c`Sn|/\)qeMd% i:cm\"tw }fӵs;'R¦0r#YI,%Dd6'&YqSϹj$ #KyB aza"띪$=yҷ4K CfI8ZaPBvg A5BVI9TЎќϏ6QTo/6Q7.]3&ڛ=3 Xo3;``m[ ]m((޸@j>hTvMZ2"=Tm{T_UStrkZxnXL&ϳx}"ݬZ|?yz9Y>>.&Ezj-~'ou3O&P6GUpLR_SeJn8zڕ# 2sybtZ,l f I $5B31#&oɇ?QO5冀־K'G >O!N.'ے"pE%oE븮lK+Hpr-*m*|FyA>XP#v !(&kO2酯NLC2HKgP-| B5u' gjk9g޵5qcn4\tk v =~krm Lf$TMIrqif#{rd.f&smF{ťU+˔?P 4+ R ͉ҟ ~y ANiF>+25Cdꀮ)ϓjD-V tBi,W/.ɂ2_^^>?Ƿz㛾WrC;frE2.H\PPX:i+M)db),XE*&iYiP3eM 7@,76XEfq>TRXKcR(Lq#F9gRi^OF.*oYi7#BJ< ˁLGPH@+v%gt\ z IV9IYH2 DgF3% VEV$G퀆 ,̘ > 19G'tP_(V*Lh<'n9Sa4DutBHAL"g'gsЁe%_ҰHp XP4Tigc$P'ʑ(Bb<=ЅSBipؕ|Ppl /Q%$$c5DCIS:G,%;4D%3  ) +r&(e6paɍJXNK2 P VF5D; ˲$Ϡ=ųx^`Ni_hʳhcKM!-.IY%"!]bƊӳc(z6-()`0/e hU!g*@,a{JMc O 50'r#6AtXH.B2gM11M+$Gc8}dBK8Eđ(#ZUX$#v RrF 8@zETø9e4tPGUe@|$Tk9On]5@|jQ-o?ߞ|VJ0oE.׷b^9K_'m(UPqmaCC bs7}<>="CD 58tF-%?\O&VzM(< Ri'K_5nMWޢ;ڰۈFB2ۉ&-=`hiMÍ4j2)-6D#FgvϤf!Xۆg6;v'!*ͬiMt75J3fhvҪ86AUYenA4E(fBnh 7hC4E&L4mF4l"@2[3MJCPʾ;`Lbg+)t3v+ӒH,:gb-o;ԘLFID;IH(E[»Qι 4]_۽[T!߹y&Yd(l:lwqɺ-%nĝd,pXq3`Z+n".y0N&4 ˩R4$gӔ8,k.63C4q7kzo'XnSo=[1Ł2=1 ӗE)4C Rq{Hd 0 }HdsEJ@>_==VO^h;=dWNkyi(jLGG=;w:eht5?k%_䟽j!ߠͿ/~x+RBK!c!``UǷgMCڃ33+T! 
af J@` ~¡i9xeDnq>{=[iQu0XR_Y"I*zh!&&EV i hgelv dEA Χ>Xm|^檛tǓb14ȃ Ifm#l$b;#H/NIvfOO1`?&v{ďR?JI(%zhc v.@P``iUDNZƲp82x^2͏6(hwPh6Չq$&k %cٹU2s-$̏hAcYy|c Zhy$hY+?sbO7KۡmyN\dOݿWtax&lC+ikYTJx{vGh'ߝi@d62頸 4DPk:h^tD7!a -Vέmg{.WtBSn:Yzs O3m{IJЊ䣄U`=zA2?0)a81 m$T]t>Ȕr6eÿrvq"V-bg(; yNyBa1?̒vԭ[rzmRE{{mxw?=&̈́ƬJfҴUrz{b^O+?6Mg7}u>.^-~޼Qj*Ð "ǫ/,ӯͅz*mzT|n ۜbi{rmq3d.{?D?%ȀMJ(o/=2n>AW;/({;.!Kz`: |vI!((b\*wF5*73}t?V')ϋK>DEfES$q@.e҅^-}L3&mo\>'G>=f|w|4i8r)pF㣕ٸ?syr6Oy<#1wvjÞ?S VŬ| w1p=/fwbGؠubmgw_ 䔮Zf[E j(i^c3ЋYއ-+W/74㫫T+}{8漄Uގ[E?aʑI]GV&fڿ+Jm)"XphJ 22(JJp3Wl[3X \k~ e 8Vf7] SȞʇ`$z%jaW/n4)la}\-+!{㚣 W\H(oU4\+튆pުcKEdAjk2Af\[rʌ/00nxRBr d\(D-렬B2h"حdS(+TB`׿CvrMX]Ū!C(RdԞLhTFmhSҢhBBh\kO9K 5x75 R\ 8T&/ljFU20/Z^gjνDpnWƺUyH:楰?oQK|B5A$$/8e̗[줟Uf&bzZ q%բ!yv\|3;r %sv0H\HQ˜!f̓Wo wc4cep쥲q(p41 CBuNiFcci>xRr5u&qRJvxAyv|0j=U Q\czKhS!KOμ*=yq{lW6|i JޗO:xYAsYwn84;˻<r!we {.1@7Ư!Y#}n4O]&$4lZJ݋v:dd6zSBŀr1֯o7)%7rr 츁NjM|MHDy](&󾮮^W``r]nX(^+g=wIzv3w0][sϏ=-u;2!Rq{&ugn H/_+Q`᝜WmXAOKLk7 ;։ \p+AL"dk[p`0{u'ɏ9*B#yoA.ӢdE"2FX$aBus+;>P VT hqg3Z,^6p[],g=h#Frh@F?dJNg2%( tdJ*NMӎ G9]%v/tL=\pR?e],8cb]jmfjmfjR&Ѐ'Q-7"S,C{֕zHjvJ7. 
CUL_B F'tRsj-grJkŨ"bZsT[ÃeKEImA2DH|m\Y1 9cTg窤>W\TŖI)a.Qq3g (˭wq M#B ⢣+8  4tm+1MBކMB$>> OH/a"8m_NQh“5ս/FO<)pH4nzھFU5v`,sW 7?l/zz>]F~n_VMPtSɄ"` F5 y(y4FT !nĜ/tUV2̞o\r$"tHyzJf**!M5юAuW߄s.z:AWTBf~ JY l"z@DJ)1Bb`L_6Tn{eoަM[{naBĿ&-G0:OQOӖ8* mU~|>2s$wG J 8ZG#b^NZt6807%|"@0:5TH\ވHÛ#hXqh#ְwK%M3x]NI\o"E&- l:R\Msi#oymM Ҋ.?2Bl)oB|"A.RY<,RɛE*Y,HDBW;\;d8%9K*l׿j\*|_woϜAioX[ ,q ï*Y0\u^QiuKopt7+:-TPĽt"pRGJjG\d ,SR:tx:ʬzXȨ0x } aTYp1յg t>wb*$<>^fͯ/5ab3%UlQ$'XnLFId2>7&H m э,N<.gtUЄ$sZ?nΣ>U&nTM+c╁8#ogDzT(ņiV/*'u}n83\<3Dw#hVc S~hU AM~X+ \).?vEN{pgp 2!bAFA7zl=z0)$8Gp %k&er;I48*^R‰!Ms7 qTZC)ĵ:d>8ɂ(v-ዂn[wQ Kw6urV9&*MXڃ3VZ™0 hFHϣ9*HV'\m@[ƹ>;oaG |+y,< *PBny]z55J-&=D6+ZV*-"y;xer˳ńsEIpҸ޲i<,xXAҖf[5o+W&E^zYo):_Lx\ ,5Z΃<δi#Hj{{6_޾nt Oso¬xr|t]D͑Y=ooѶg7p;=}2}~vnܝ٧'O^Z1D#p}$]Ii>0O -g^>X!&QcPuüR45$Hzz|R؟˷f> ]L] t9=H`Z1x4ix0Ҏ3rc1 g46JEAyȻ4cV24"^oߍ[8B9Y6[m],YeXs Gour#j-F'VUcU} \*[}x`HNǤd߫5~|hma(jn;kl9S2mQ ƥ]~e*T+t> T X)&Ή g7蹬wA/!yɔDg`<% t縝_4+Q+3aj3 フf~ê%#?YDTw-ڧD6,IuXxgJud$d%F!qR40X CN;.ns -TF{ ;ʙA蛖P6cQ{Law| p^F&lhX#,ZEj%9ȭo4"ɗ~>?K=߸\s,d&&c&\ \=FAS2R8gz2Vgj{ɍ_.mE }Ap 2&_6lX[vd{f6![/n$~Qkl=)V_*2]fRFQUo=~zH0o wA쯪{AJ ; 8iEÍAT~Wa`xTwߍCA:(}QS[vYnAetJ*53x%pX]i$h$h$hݤupcN.unD U D8kyM?ԨjӏWCEK& B7؞R*'<5# :Ld)iUzwMywH:@kZ82G m5^~NkHɅ^:RFs:-Ep\nESQj|g>p+M\ AO!bPn-ҥ7DlÆh"@z4@LCoN:W蓎M[{McO:buQQuuuzG]۽pj$!հ[us(CbfWj+X=(3jTC50Z3'wA!C-4I9ڧs-pT=R:kA`sSL&98Nm*di5s#qcD馳bb{mN9C414@VLMQÉ:ՓSo{[-lj H-jʦ:iKgٯ]qBwӜnSǗnjʩR继$ &vG&g"s&\`Lf{.Ze2~v9MsYXH[3Ş*σ F@[ 8ؖVQ,Q\=΅/WzUsYzΞo-AHtI1-΄IЛ ǑrY0"4EA',RD6&\ eiv_ ؅t47֫P]/}߁]-=u|Q\6!0>EV^==~wk#1{u`W ?`_ɻɾAt}WT6RVRm|YFn/bt=C,h%8JwW@&r3^lIP /*M}WZ7 Z7 Z7 Z7)k]/dJ(R\* \2c (,0'ECmC6*~\ o؞ִMqJHp@K[z%"l6R9:"wT8&hjSҌ,י֒ Mхz)(6o4X|i;* ʧqw;lm 9ɭƳʻa8 b /«87z,)f͓/<\gv@k3(Fb~ i *+yr?eR ,<9r"UEjf|,¤* N UwkEW@v]{;-'͒߾jS*wtIE,0ԛgc=6(z*h?oB^Sh ZL!O҇M@uEZۧbxy-F0V-[EkSpp `։Jm\9k|+k盗=y&\AM.cݚ-F[+5U b5GT8ײDZnK~V.*x }}9ӖBM)\2YF ^{s&ӌK7Wߊ]Cn3 R4XE҈3;*>,+xc;.&RFaEFB݃Ba[׫"ZkٯWtu7>?mgᇚL4Ѻ|y4[_O.˰q˧nFOVb_w1!RLJx:6okĘV/lHXyVK)gߒcEjVVٳ;rCb /υNչMc -gJkK?5,X\*j V0g9QӺ`>"Lقpw\K4vf I5?l‡q@H =Col/ c?$=Uާ 
D"Ati#!'cc̯/d|Z|)*!G+33*Ox?H*u8Ti)UAKWSNoI"uqg鴑zI9Z4NK |-稥<$8-?H Kv9k)ܐ(s-C~y<73_ͼPV Y ~ p%M`9pfMѬ]Heji ""0*oF_gw߹UP{sa=շX|zsBRy[S3O̒, 8ͳKorl| iNjM[Jsp~1 7&8SR?I~YIs J n~r7S< f(I3"ΓY(F'CQd7VXD;3;O; !z[$H띥(n_(;aiʁw bmO#ec"u.&y"Y%HSh0!b)g1~̒|mhOߩӨPݕ!jq9T{uN'I=רqzUpJ:F{q$X|1l(|UXXOůd Ư\f;0csEyRKO!_ClLu;~R;`"*84s!u,>kh)j dTd|Vj$8IpT*'Q")R!e/:'IsRυŔB1:Μ)jΈ bM`k}nF K_Rb'us*/Rŵ^&);RԐ"E gJtH=n@w˜5o:BwإWݜW\9uL9) MٺzE92I2Gf3 NхhXI %p,i Jg,k}%fW\QK+5t}F* &Qk|jKK[CҔDPQa<O Vʠ7uZ{ʩx2$yѽ3r {^@@UDsT[:ƓG]" &mt^.RMT o!|.1W/&8i\ Ĝc܀i}h@*]$4~f/gzNdCY<37"ZjWƠ6&~\9:=S+j>OMUA5M9;h3yЯÐ}XشF/%3ʜGuAYD9iB }TI9ж-Чd/Mq%ܩzk#^۽WD"^׉ )X&)0,̢_$b3![7f^NgkD>n?nYdhƜِ/ft^^g{릖7piQ:[IaeG[.) ݩT/ :;1 |LG2ˌ0($ 4s;%2KK3c%ݙLϺk/"G$-5xYۺdt>' st6*Ȓ6Lb{/; =U28m VDlr$[$Ӯ 's-65&}ڙLsfZ*L֓M> xI׊(ޗz+@"nͯݘSc)x~MzDء.W%:2Z1&uNN?YŹ͖@a9Jf.ԤyD2D 2b<:0HLoݸQ}9g{=Ƴ}saŧK|Q^ s/|VkM\PeS0`O}N't\af'mTN9"JH0mF''YnnZjb4 ]B82)>}4Nf8k0VpF~ jMT^{)Z+W;F|-.]ǝtn궊L\#!CUn|$\GRB2|io2s,H!J9N٬Svl6u%9Ypy,WT6XM_81BTGu ## :@qczS+Z҈:7зW~P7iqqGHQjȎ.W\|Tޜ~UFW07_Q?}i\#NãǍ 4h6Pjgl X!U S) Qi6H27~эS\v/=X.!r ~'/0\|X8(d͟dCYX"5e.e#6+Wը8VpV 6ֆB cy?R`֡K.M'z3'AF({Syl!ocZ9^ʤeGQxEYs SH/E<ߪ84<>iPIs4.P& Nԡdhz6'ӺAqBDEFW!u}4 6Ds N)Yj{Zcnݽj؄V&4oUCjpE3DH@2 YXM?^ja( +za9xby\ZshW,*IZ7K`KØ?ǟkIӆ9^ęg#=D=%pG>cu^Ot<=蔰HhZ];qMVTk9[w_kѿDWM )݅8Ŭu/_Qj43@05 ׽iz-$%F+@*ݗ$h;WuT 8O&fȈc!viW0 ڹ6yZH榲уtù^D!Bx*yZ˭TYrZZf / )=jSkya RQHL{,F)"3@i˨4 H [B w`ˣTWR|.^93ƘJz',u 2Jo"jZIfrJв!@^ x|[(LM*W"i WC4kcͅwݠ2*F7ЃFJ"b9}Oᆜ>$KT!udTpI:>˘ʸ-<  Ci%wj:%UBkH]X:>#`+<8x@hz͘dtOLF97?Jto4F _ge8S$:,$K(]'*<זx GqWbd/H4j@Q!)è@rHp5!sʷIKb8ǜxqv:׷_"2Z% BIKKu9Ў2yAR2W&pr47T[7]vIg |RI0;˙ ,`itw?ţbqԱ7pVή_<@Dnl滱\Fr}Q)[(LPH|k.پˋR".G4[yjvb};O4da18g``7s} 5EYHtw)J,u(u2Jw }㻖K֢0l57S T=*=ճJ;X4UD)|Uǣͷ64{ß< b$ZZGC\?şy7ُ/a:lIar5q@~zw!L\Nw7%CFVpJ jz/ MϚ@+LFƥVS;V5pNhό-QYwF oյQC2Jyֵ곤/gL8f3#S:B>t*C.0濫KKsʼ2.ρ f R51^p@8cATN cԢWE-Z*x#@UYrmVRf;KE FnɭҬRz[ZM^ '[pʗ=ՓfdA~@t W:5 }נK,Hx}m+h>&_|˂c.ؖ%;~{f}vLl_hj#N{o77tl"c85u0Qw{uO6Ԣ&uF>FVjdp&W9:$vCc 확{;X܏-t0)dEyey\<-R?? 
Jw׃.-wsOa0rv!% y"H*jh7d[S RDub{n_4U!!\Dw)9On]vbfI?*wWF鹿܎.A>׽ygޕ5qcK&w%l|2rSKRf1'@ ^ȎrEl|gp7gX_,^kyb!1 9|ϗ?C{\֤bؚ7n yZ!C]z5ys3`Gң V`/^}?#9^}t\OFKu8cǥUOfRV0Da E,~\`wAa{{ Ɋؕ)A &?76|ގ{5n:s>l'Y_Ge~75! >e^AV3G+*(@,i8N~?/`D\bZܠ@Ex'C5EUCƶPAa ډFj#6A `ʹ0H!kZsɈg-")NMz}\uܡi Jbg߬}u!k?no.fgY/_ 2]1̈fKRXOa/ ۶]l%O=e*icpO‘zc>󴝸*qf-|2Ü֚&ЋrNJetBkإZS3ayvlΧn1F ZCFTT) q'Js Ǥ,v\21K <`oblz#'H[|  K cO_ 6zBs**δsx&pDZg 6Ygq,g.MxAtk]imI?*Mņ1vf*{6IT=dˠ|M*,iOi'dmv_v 3R.'㾀t}2gj .6}o_Y"}<JI'e/MS)%1Hk$hcH JZ ~ 8CC2νtD|޴l\P6Pؤ;y1L9lX$x מqZ Ĉ[5M"J }|zgL$gpxNV8.GinzS O8ǻ/_}#9a cW/<ś3ѿwAggeW~۳O#xs>Ywe|:nV|G!0R*ݛE( \ z6/0{)#ˠUS*G HZIMR;fCk]0;!.mj0TµwAyP&TgJO9&HCJיar Сpwk=>Dc/'dL ? I[9!Ih)'Gxy[`#ΰŸ8h.{GƞꢻU+m+4B!A6Ϣ9+U/S&3zuz:T ӝ{]35g'^0+"MnM0 rAfq~tBeyKGhZJŧZy5A sMN>kC(뺪V؟t&h{ckTJBa wl/i &F gtO U(땃mXpWDqW1'nLM:88(A]^ONv0 BQM:lhHVoP 2l/v+˹):)FP{8&pn6RMI- `k0b+)`D ɹaT1KrZ{CHee ,j˙˙DP)7W1hH"N?@'W{U30H!rffH cI@s90/ N{u9o+ 8ab˕AHCr͙iopi쇨[jgYͽA2)ceuXlPNX75+C l/S~đ c AJcH[p"c̳ Sֽ}|Ǘ WV=\:ĥ~w0BI.'~xEC+pn#B~dwrG QQWL89i ؒ^ m?x h'۟ >`szZ;2U94s5lm0%w/&:ά9' .Dت:@H3u-$X!bV*^ք1W?.`y@3 fޏMR؃Hbg߬}ud\p3|30 wy p9 R*r82j 92"(N $Wt􂡣 GLIaԹ+RhՑk!Vȋ2ȜX`-J.hx"dS&;;<n*K+…/ Д.l` ZaݤpMy+NʹT60ƷO̷-ǿ%*l s-S%4rU(pwt{之&=wo@D QX_!cӨաw>."V>=v.굛vƍD^7?];L9o߉Qܻ'+0vQSdpsq5%kS>߽\I"lWF[hYܬKgzƎ^yG#cohz|d?]]Ǜ2e-z_nb+79_,{J9 %*eO\t5q}~jvYjy.WiG \!E|JO'{7L;R1q2n.8N{nux3w}"Q6מ57/ d7n=>V4-T# c{Ff[xbki{-Zgm.\7#o1y.ȃtꬮqfصBLn@7ߨyc}$N᧫$ƷfOpt8ޖi -l4rQ[3)6+3ˆ^{:וyz}cFw٧ypd&)v3 ^|V7ŦQNbS+p `HM>f] Ő|ѩ821SFy֑SLy"n#WƢ[=+ҍ>suG=]Ꮛf݇v9RN`+*&2`ER sͿcJh4?&N |S?/a'0fdD"LpV1GSb<SήAKtآrbOJKb8~%P [TN#W1{ ydټ`aAY&xQ׼y]ڻu蚡([AxQɁR"InQARa9cesT(lnhQԂDR b fINggw3nvS~r0T@1Bsq YbP Y-G+43Q؝:_)GU/ =ѵԶ #q00 vl5B* M$5'c`(ϙ"MJ\B,RShF幤D虩p)V:UݙNϼP yo5*XH1 )m;J"2\%aIF]%yD]4ɧ(߻@9-I-Ja8m9OdV ec9 W~8`~t-`%[ʗmH>ƺu:g ,^uS]'r;#^,Ic0Chnm(\J*SčT|)>IXd*l i{wѩ[gu3* g轢 i;ϵH)<)yH_1Y n.ʘ$w]^V&aW˚^βċ}tV _:YݲZ0ú+bVJ|Gzg%ηZ I`+؋ciqVLh^V+%OϷ5`-|n˦bZFmzUilz-[#nĸNjUzg ZoggTsVY󱬲7dHGrd/$i+Ex A~y%hϹd m-͛S'FM~}(w·J }x+knt-~jbIs'-NPx[o8w޻M-U.@Sń8+/;YJ)  ih ve78 *$H)ZK'hn۪Ztt\YSIN:@gK +O*ݛAat [f-`pK~O 
s\XYqϣ>imL k߯ԔnwG˞lzqv6ӴcRLsL}*;5;e ۅOfE0h&YG$mκ<\ b#X=5Nsw$MEMq3]ܪo7\-hy6Iml R$q|R|@/׉UN韜ƧZgP7nfpQSeEJx87TupZ(K;5jf$5ӯ*R#sG%"XpBD 6>܍*JʸrϺQOola' ;a,Ը=,Cd hűG]~։?ݪajvmJjF{ TbR7C,S Q~F50ńk ulUsn)-%X +ѬPa0*Y3a4?MECvq9 z0_Yl# Boxatbِ[v&/?!"Mcणx3!)6I\gD┑7GĬf9:Bz<&1<72W *4ZQYh j1P!LGQXX/'2pʍ41ڎ^ch)P ]o-CSqtu&=RhGe7ThSv(N^= bptI",fA @kr) RS@}֧^*XrGjkct _~ M$6=rƁl^ťpH1skw&=qVq"37x<煵1A5/$@ ɹ{Q$B9m7/=Sfڈ~JEUY/X5ٵo-SڋVMG֛߯G~Q+ oxDH)8X-z&~Ⱥg܇Y8++}'r?wnb&fOyo!.2Q9 c7r}Kp@cpKɗC͉lLcN`aSŞE݉h'0D|'b?Q̑3Aj:̡ g!f.:O 29r#MqC8Ɯ0HaKyV?~#ld,=L&^vp>h|"PI!ltz uhfrvjZBӒ ؅'挐"ӖӌTI`4FAt1kVozt"bQ'?~~G1s"ƿ,B8o!'&E`"ĂEqS"Ӳbn=qD bfH,"aS;CX\ Rb`^/%ٟ-zM&s eh+k[+BIC Aj%"N/綞rL)PY3)$%빺>\s̀bF}~@$ݧ=*'{=c= I Nyp$LJDYrPh 1uuZ)` sF2G9W&F ?o?^%/>^@>v#jYI~~>g^7.Cҵ6$!S˵ع1֘r ںddgI-XpN߾>Wпx%'Ԃ^ޯĞF`]K7^ϕ2GO6(t%71Ʌ#!HGYn ȍkߏ^~a?q2L@@2#qQHQTΠPE 01Ô b8$7N-l7@.X][[+yzR_TT9d,aH9$Mrk߷q< qx.$53׍F6*-H`YRX+2$.QX:o!CNtW;x4vT&z)H zuzQzf&x)oYR9#X*ƑAedDL;2BdB;32TE3aAduMf22)l)fP@ @0bI9\RZQF Z1F)۟GRF~Gጃswz֒ )8:t ) P9\f1;CLӟPRHJYL (שXaFP cTD's4'Q~+5 sr3F>?j/5eeY)+ 0e&plTFKTE@9aKejd 2s^c$<$3ځE;[j%BW.4D ygqL '1 z[. 
L@:1ɮ.7HSyj i'Am)l^qˣkwd>Jm%Ʉ86lsWY r 4D.dZ1Hi$fA)7fa}l"81Њ_?s1UsE‚8 wzoݱ8ɹ /׵ASġ4[Ҵ_jz4v Xo7o@>0%CN)i r?(I@o|[0kXk>Z'&T E"ݢNX\&a'bֱ-VƥN/WhMql߀q}07_֮ȻEcWpgW/N<;85jv<ަa 7=`Fв&msTr{sm=SZ͸Yniʤn[լsm76mm5q:;~2b f99W&A[tζ]<,(ħcAi95 ֥0˰3kmy+m9p$Vtщ,"ս"ۿ=LTZdg5";/+)Ҙhxk(0=ܶܙq ('I?EdZNJ>4Ҿ%Pt :[ZUQ-)u@'/_pH`"}Fi !F6QLxp(7S{T"*kN;YO~8aHYnowx]uYZ* ǧ-ޥGKi݉۔K.פ(>Wp$a}`K\:nװ@z`XP{e3ht3YZK7sR{*pk.:8AdVy5~S7'*ΗǼygfeD4%|U ٠iԆ; cn ^:Cg"K3&:Vs*1 AY#V:u;;@)b`"m]tE&a{dӨV^yC s5Ψ $*UjEd59!*R5Ѫ7ѱM84ǿum.M+0Bݡ\&)5,f]oy܋71 뫛X)DU&oh~w/$bYSm >ݼܾMaYuXwI3|q9'Z0n韏x~Fxn?ғ$+@$\DkUouh7 %hN1h3UujO4U5!!"ZC4&wos`w&xg[A jQO1-Zh@Ќ)ɥ3:(c:%cWFe<;ǜ_fvW~ +> w'S-J'c-B%Kd.gYh#phKhHo 1|=|M{T 0"EpHª?bJ/"C!-dCj~ ]ƪ~ ȸbXūw$'Jn=Eh>,(/J,e#SV(cI]˥ ՛{r3?`=If#y8.1AL=s_NP]% CF.+ʏ`EZKO_'UNPy!;FqW'oèO,oٯenpc͌|—Jdi)_T˙*oj#'M@QjdH5OGTdZi/-' L8;:h+Շ+F̤8"{g>;:vی:%bdZQ-aHc:(F1XB)2"<߉  `h^[A% nY&%D ƸfgXC>B*P!Kɜdv[ЋC_x 9/ʌ|Q%,g2tkte7}{?(>ѿFo/;p$15T]B1=w0D)~:0OŘ K+glD$F$ fS5~8H%'׷7C W&0C\,Bfn"35QlUTq)O'K`_#&G;BG4so鼟hki> *WFp(N3TXޏ̣?0lK"pb㑤&Z>ONQp\n3I?e4e&qߩOg<Xqe)lZQ+IIwys0yz$v,Q5} !U>C iX2f$U)>%UmL(DiGU:'D[W" 7 I/$v`MEIƽ9;GBT78 4>jw5?kZZ xEM ^sH"B#Ϝ)q6~e[8ǨY),!tE9 .=c;xo(]ކoV@T~1 eyϷxr ?|拐?mnw-۲WUR,ˌYDXIjxf,QY`fѐ1{χv:H݁^U)b[5 aYo%l$ڃ.JaLhs G{umԱQޖulYi恥,8jFBNHbJ<+Up:-TDDdH H9mnQ|s%3g2XMX{t,x{B`Є*T`fDXGs4Յ:1t%Ė?j& ofX۵2lz49Nyᴰ"!`0_(C3zm,%wlƳ c ŝYw@sbFR`|q4#턩W`4[l2 [ͬHy0PnMK@JvCCryp8,`EJ(+=GRZ; qNvPb}ƀv1 k[֖{_*|%_=1 &%o<60Yx:O]*|7/}(}>~TlXr}^sw@ĕdݻ{GwNFffcNVa&4^~ӛ,~<'5 N7Y]zKtuK()іw&LxZCn%pNFK+: +̅кjgM~[r!hw4~!'B˴.eZziΠeƩeZ 2*RcGO_-Z/9nXr4q*G 2(cnBep@x]l.@leyqPєX Gd$?G<߉k]ck;O>̉V!݅GWdp&~gqH)=!Iz,>6hԂ9CXws_yn9UQ86% +]9#VGg?ynb@EK4e(+4gTvqQ5KKp.qm7|,]dɰ0]mh*)D*LA?|toE*.gJ\!])L]b6+vA噃C)]x`X*“dßyMHDH[XQ#c,]X8@ixi u<$U j;JC8)ԛ띟+3'"% Jb1t|Y5wj(?gYY03kʘuYckn똺LDSoBIٙ.&4]+|Pr8.]/ν (dթl/$6T2Z)%ɖ,&.Ol.iF=I{%NO'\,]_CX^)P@%qB#F+H)X$<Ȝ '+WIqD:"vH$ Q)O]\/(@!M-q]3a00)f16^mhD1@fF6׭xߥÅsB,FUQ2kocN裙3> &Kg-:X҇-u~ >>6VZ#ɔPQZ  %A¯?\mޱj =^ ( U[Jv5 9E̒ވIX\ӶIY_SjoLDM]šbxUݜ3y i%ݳGJ@ƺrdb&}Rҍ $ZȐ\fG$J&M'ON*fFE4QvFx6JZ*]MP{-Hm[W] r [0_mK2V%m+Em⢖|HDYJ>x俼{uWE 
od,SMFo%ޖ@j='Q;I]$>g-[_ljC-ޑ 5HyV'=gt KF: Z4㢖}aQˏ1&wQ%AmdXZ7[w+KYO| *EQo>ܧ$䱣y 5`pL5Q$;6]OOB H+!0U[u\LP9kKÊ%Oce"n4GlFY R_}8R:n p\ߎv<[zbu,t#>݇HMNF8 ^#Ih2սgO9CW/}7/^_Y)޼K^~pw_}oݷ?ËW?F_eZG7Ӭ7QJ^7M߽vL[wyyNr{gu<Ճ{47L]L"nGXμd4vxw5_(6zբ R4 Be&쳫 yqg&*8M1Þ0\А--A)J4Gm~NϷX{!7ߢpAhJ%yTv?{TM;iCՠ3G^6)q[7ï-\ztuc)Z}JhvR⍽7͕Gz~ zI?fl&sg~`bUzOGT)3CUoknuZ^ߛ0~=}¨H2ƻWy^l"z $o~ ߌF~*1G/"T`};)#?=~?" =-c̝d0؟}vn?R{ퟥ߽%}ml6cm^%ڴmE]*IK?ǻ6_,<6]IJֶÊ,_Ǟ )p}`jf1&c7IdTVP`uh|$ӡOU(m8=*,CCr*CŽ%La^#PsBFPP=cLn6Oeto1ϷIitÇuc61V21wc&Hm?F2{]5x<.KV]A. r_+g8 N&0C+a&|Ǘ oE μiBlz_&ňg-S? WxZCҺv.n'j 98k5RC2S~9 kپޮyN@:F@Tߵ9:cezLZnv!VB'^INKlf~HBq|+\[cJڰefrx)sjyԔA:&BNP _MĖδOϮXӅkvl炃ډ;t֪e) TOpH^ֵ3p;(40綨ֶvVn#y}\\>lZ;P7 6'I')'9f=x9p`æRLdXjV,!褛 ` ӥP@$Xqjm/%XB2J)i#(gxb[jFҼ1KWq 58:;T`dŮ0_|0'Gu }Ψ3%gi[ +E-λ˂ZY5X}iJ C8DCd f,IX70EL)2MPdKΐdD*O8;M03OKp pYR] MhdsS+Kăn-VI£9|KuR&AxG 8wh \aAuAEAR gʼn謓,ZP2֐5mK:o.%NϟTNJM7 3qf3ΟQF TguFYc>%dy3CNŐl0R.e *_wT6;.`?ewB_ρRsqN*{@mߝ+_\<Ͻ .kxY!ܕeJeE)I 4#&E7L 8$і$ ¹BbkMfZBoFX>$7bg L<3ӝ%uW+>; )֍L?|toUIp9j{]%;xSH1<Z$ӃwJo>6`bhС Lpb;3XYV6K[l9J@i'G޴Ȅ_xVG&7f4iV}lRMGr%}Oe}G9\g=0sO@Đ}= h>#iME# _uF# hX1F{\\   ? Lqq6S@W_gAeUZe$(킯/Pkd,dǗ* .8 VέezpXY0*ƹ,(R138"`:Q`Pv2@ӟN'V uB&m}oR'M6|g@]ЊgK\)ێ=_-:*3#=hfHTC9.!eBr A2+10HxXwVC+Gh}PeϢ+ugSOOGe=4ι ?|M>XFeeJv5TYQ62PAk%-nJSY/גSJ>W 5!R>#JR !  %=" =rA@ *O`U١eA,$=2g"ID/F!BP?8" Z{ 1b J z?Wj8,*Gf9+I>.# |Q1GҦSŠ}דZ5ROzPЇ8L gs1RLyZ%(>B>@*irӍEwþY*ׁۊbCfa;j.Z5r«:-oVq=𛌕Z 0A I/$*%dT~񒃵`],pKB".QY4ߑ(+"`$yeeNc^\QXD17 e$ۯ]Cv[ZTK?wИ3e#;2ؙǵ?9nSز$yB ~SjJL՚g'XJjaGtvoA}o\UJDZT=JҤǟ27[]( NA 9A~&  #Ӛ(ƕ,$;+5Ke[25Uo-P/ug>R,ӕ hղ|)m}Һbi#!Wf.REt޴vmXhשh s?Y?!|Iʟ( f gש6)L@Jj>[R0OPt>/,r;EC׌pҡ3YRL8?]Q-b"HET6z1΋u<ގ>֝ HX@ .bV \ /=5T |P3 AbV)ףN.Ee rTKagREua@Ih!"r7 ڭ(YD3hSYdV݊;[h!BdM ;Ab.6ę9"C2io~68DQ-1J0E Xh4M/擕^,1FkzAr/4[к}V(n5j Aq9=C1ê\uf[E5ݣɓ L^,Y9[ԃ)2*K?g=C_l0USSM/{׶F\C/^G E>xu^a^TFZ&5wHvXl(. 
!ŌY:K(I =7У8j1v~i*Vm>h`wк]OUMdfX/Ro}Gvub|yv{/R꬚J4;H#򠴒Lf!5TT<|W."C8|٧h)V0ehdxv.¯*JUNb~]Xk ƤLC @ȀO/O\iWw}W޵>2Ċsn4)Ftw]{sMLImy0Q597 5g=53g{)̗_zJ0!+N\L]ӷ-\dK92Nк (;垍[|վ|aB[@a\.EMh ]fX98<%_y:WH$F\ unGꯤð==2nX[s{dDAd$ 5 ATO~ &W8ލl4]bb|G 'Py\Gͳ=tӜAULr rdrO2O~?܈z3\0X FEcf$ţ=zZhׇOZ'3?/?66IsBj]6ZϟyFeE;o1$'dnj^@ջ! ~7̠굳2(zlc'f?1J^ %*S(˄u%RTҎ U@-)FTƃv PV߱:h\gXi<߂;T_/7OOo?Yg>ivol="_§ӡCת?gUu>EhNLԛ=#݋^mYnW>VYur'>*3Τ7fF(g"otY:?tvж13q+P)+ #pWIA0B<%|- /Ͻ ˊHcvWAgix`ƙp*LR"plϕRǁ*!~%&ZkXO&}2{=ǶI'xTE1fFXu82y0@{"`F *d~.L<'-0%y&( g]ʵu)K_0+2<4jd'PIf/M8 ?(DA&)Ld8/,еCҠL#T9XXTNFw\|z?^ƞgSp03 s%HVexKĐ (8_>?W WN_IFj䒯mjrnṰҾRflRV "_@^ S0FfHs!S>Ɖ]qHs;[WӸ|6OSpQTXU +r%rE‚HBoSY:t; g0# 1^v2Q<3Q* nиO{ȀyȀ%kX{M; ldƟ Mx@,B΍ӎ!z&, CKEZ;xi E + ;]0 )T[W>L%$@&gb)9U${b3ceW̩U%KR㕫lK93 8^:YxJ#Bqeh9d}ԗX2L 1\p+xDE$H0ɰVc)}ȑM<"K v*xϛ2M1Ao{og/-E_kſeX"~X.!,4=s߅~n!ӏoޭJnwP+,%,tCo!m#mKƚCc(`p[pj8)k=$zz9Ly@F)Kq8n1r O5.ą 촞`;b1gc$,ɼS(P4Q\[.Kl͐SI|戴d(sDZ6OF'GIVO=Ujٲ۲,뭰&u_km#O32̸I;=4i< WZ+o)mɮ (,sR1i:2M6Cnu[-7ф6E\~%e:ǓB٧IClI(i28xxz$9C33B2S6“_$Kj, JdD SX P  SEaޒrT[aoEX(vu=(ƄCJ1wأDB4ÉH!0eExJYXsI' n-́"H(!&0`ROĽ4wc@Q_[gEya3w#t}:W5#|rV` '^ϤJb~onDg$>V Fq]>og5QHaTM7$؅njPHNFnZ;-5't_7xB&08s;ؠXfJw;-Ms񬶷;ť^odg{#0 jR8A.:"Q^jhz4rrڋۂ@or̭aS;F̈́LaTIZ4xߛJ}xw.OYl+J>ͦK2*[?m ^|Hc𓛲bȞos1 CXkYdULVZ+p2`QxxNf^BŅȬ6DK5ն$gR)`mI*WZ$ԕw/*83Ī-`'tLBuMO$)a*CCDZO=#RXѼ.mt0 `bmP`6.TfVo{΋JGfڲfF #'0%4{8Pw0N&FQn˺jKtqfG;Y w'd):3d7ra#% A¨x_Iok{i:A o3c$9HZ4/~+lB[ EX2?Was4a٠f\jo@v/RSlUuDZ~JOBש0/ޙMy02[{[uZT?>^e؊X/{ EƘ1#]Ø壍o6.oiuVcZFsh#\O6۝C,֘{RzrYۧgŧ(<#gȁf#G?9"-OV{*~#wuZ]hJʇWA(IL"ٙRP\:C+?dË~p-{'G$Ygϱ"1kfP^'şCZxAiK78z,y.Xyfy;z<ԋx~D 5Oaa}(,w(CʫS :f@ 0X|'SOPQo>"6,u)ԇE_u ^t Rbbxѣl `l:-؄31pOy{1"t=ғiѽ{rF\`3j0oCj3Yz ˹]![2t$s$/4(+re W$,xB$T7tx`1umMH sI&NNT[ QW͸0W663Rz8ōξso[MLȩͦ>"tf)15"rz_"r-7Ѥ6u6knȴB=.ۄ 2BmtK_Dn}WDTXaz'F;#Ɇu" $ l0;/>/0yxB5FD@ԣD=€t1FCR€*߬ܧ^c~> ՄxY2Jȗs&iTnIVQ)3@^ͤvF3,yЕhxqsCa7ԏ6 }W=Gu29K2wgfjl|(WyY9H$#*H+ sJ{SB@ez 5HdE`2] Rfj9( n-1_^#罏6$g%<.{]lAٻ޶r$W4vG)I?-ЃY!;8YKNw}HIvd]xt.luNlb˩fל뚴`!&^;PI 9㑒Ȕ' I8s""'% Mxd-өMUIz @q.1eZ bс ,rrY@5P|rz,@mQgՈNԦmz1to=V[-EauK4 \7^Bpg 
&~Fc"zFTHe ycE 0,<{Kd &}z5wJu\y7sھ^:˱_qKpe UQ"!ݠ9h*(ToOL"TNP7 E9q>G+RI͡P,2(-;~_mk`K3\JD߾\dE@teu($T@"ti[y{ʍ1c+Im^{%7(p6>ޘ[*o+wP c\_{:UK=*bҬ% J8tDz -+`5P,+wP֭]Y`_}k;ۊ\D N,XC NB[c!cAKيEA*6yb֫ʧCeE[xtۼll̄ܬZO\EB-}c[ZDp A2H_R_tڧ (h+kZU2.ju4ZhH?&w brDIG>SvEп~~$,B^ڋeԿr ?c7Xgh[k O&ڷ`֜@wsPW$ԵezB]#F fZ0rveJƕ`EZjyCt%{ y dr%pih)!h7JϕohbPɠS::b2v""HAxhHc+ 2`8ɜqsg$NBN73@"BZcEfZ(s)7!pNYw ~ k%5r^MRZǔ*-mmP*jj,p'FgBMXHL_-7K5^UV7~N'uM=/d3a4`.;d7!]"!Ioz. 4 pTYy@ G+Ej~ԆZt7{'rrH,CTvkjϋ󒇷՞?#Pf"yjyt V_Rv\Oy`{ocanLА%{SgdfG)p:{:_YD%(iXI]<=ó%*r;GۛK=Ϡ),&b %"i.*Z+wL zs! vw~Iیi_Tֳk399CǏBeP`Z@GLoUoIƖ,Ӧj$]G4_[|6w&)fos-~qADM5K_F{NPca: Plj͍@s;[!Q`$kAX<%t ioDNec6u4*U<7=L*X J{,PlB{nj%%/S<źKoC <ݓ$6$BaH< ܲKEb2 pr0V-Y[ 98\ }X'ONA&Y&Y&Y&uMB#94FfdQsCT!rj@Y0 N4Tb+h;~BUc۵qF .L[>fiSIEVgIGDmJĒ]O@(izw%,|zt%2)TFIP&Horș 4J*(v&iBԊè.@ wH`$A=Rm%ҿXdRkjuV\B@'E: 뤵4\(JE ă !J#ٌg[f"WjJ i-:ӛwS9ZE~;vTrV__@J mA~>Ƃ+4G KVB2 5NZL'BR&4_J%MT6ELl{aNF|Z~Q&pӯXd5a#:juUOAsc,.BG{PU<80(* &}'m .RzCn ntnUCTr:ɖdRQd_o2 ITF Bm Y}d$r^dIHJƞ/df;ލ/eIÓϼBK۸X3mv"W~u'ho JzUf0+V; 25ѶHdoxǬ@(˾n݆R`o>^)rZI#+8A&cLJíXұ-kKûuӳK$2iZ6$MDq]0c?k)Ҵrp/rXӃ s36XW=ut* . 4 (%NKMQMi&J s*i܌+\9WpO}H7P<W\]>6d"E]ڽñcx E1Nܸ $̗&":,bD#n^_r mq=:(hkL#O&l Kp57Y 45V=Ot`;p#ia[g'mJ N?UJȹm+s)c;啎RW^]1(D".A6ʵ.a'Pǖ.Ddrcb"y,^ pR6MƓ, ƇlWw{HSnP"lrhᅻ~ӳCIuGN+AvjwviVnW%[y P.BD1Fja'kkdk}Jb0$S?ʻS/ NO$IvsCE:6A/={ ı/ t9rf<Þ9_H$/ߦ~Q^tWڎm&i5%D.SbYǂM`g`.ljܦ+n}q iĮ/] 'p*^#!7/ryHN{}QBʁ2׈oA!|$ܙLr҉c (=3Խ_]SDv/誢Yoó Ë 25r0 ϯi$U(訉e?qV /As/AO}컻/sJ0Y9; B8LE\3aƺ*g$>'V5Pj&Π P4ő>buwwHN[xbRjjdv 8., s@v5EwG~>JzƮtd!Ls>&qpE/`#YZ}낲өyiN^o5n9.ff;bq3Mѳ_ӟңf0{.J=]I|x9U˃QT N)AvΉVXh;F@| lC?gdɬ\KeD70:gݗyN->j;ЛQȾtVݧGa>ZRݟf1GPjt[~ #`J<?$cLr2$'cLr2ƤQWcI4R3a4EP- !ԡxO3j[I_RYʸ6WVN' J$JKRɺNoMseY̠u4Nm(ɌS߰ipiR7~G1)dL)&1u CLp] v cX1rSzn ]pDaUƨ#{]h;Nm(ɎS?ގѷSێqʎ_xUwX/'@_>gx[TOUV)*iYʝK3QeGY!q!K;INX]d0gJ>c_rA)U-ق^ < ( CALMO (:Ӣ1׵b9G\oY,Dd=%eFiFIFx8mǥG9=mRW[[HŴ>ktΥ 8aBJ"ĐYV11-9"FZ 8Fl =n8n؛t8/]8&8UYQU:~ɵ.D+%rn9zl bG;9d*Ebq<U27yHo'6l:OE΢SF˩7o}o]$ilN=X'q}X OQ. 
]k -xOO6Ӎg8?H(F$sES iÇѮRg$Is<*2sЙ8 Ȝɻʡ?tC%1TrFy|d6  KsUAxV1,p|.\*M G9=|3z(lr s'A엟 q<Ζ8_jv43 O[;\ts 3] I9.?D?tC95Fq xFz IbXșk=O|;փK1F]ֆ PzV8x( JuH"U8gֳw[O Yoz$ނv 3EUyqCHWGf+ȩerjvtѝRJ%+HNy8^jCf*զ\8{x.1`TކfM'k>΃ 3R*BNU; >cw1™No>pjF(+#LI֡*RG̉OuءgUۓ#ɃsU*DOf M%I,KU^ABxQfZ 8cCݴ23 /FBlp݊Qky 1O/`_M%ZxXFbhaOI'JSpM;/RО6UITQy;q0"Y4Ii˽5gQug߂vk(b)?](`-dK՟^JzѪ՟(Bԍ;8J InHASUU46QFH/?mPj#}$NѝS# y]3\wf_C*PRrt gYat[>Lmy^wކ2×Cϗ<vr;ys{,~%@YyTS򏊔]]I|s2^,&i2qU&LTuIt\>bʙl#XVq,5G] J%ǒ\zkr}V5ȟ׃=\R%D>B+ٔyU{gWUL0*EWV]>c>;~6f>[\(-F]g4 )Se^,Y{*BU"' |֞ޚb>W!'Fh/|֦jf>-ډZ]H5p@L|Uu{+UNQQ1{: 3c)}*ߡ%S:mT Ҧ_PۨSiU{jis^v攱0EEsyI4,/:s[S& 2F{J^SpI1N𝾐+yC-Aj%-VSiYbJ+);ݫ 95NyBSqY3BP8ݮL.,~"N/s?!Q ?BB_u&//\B`_BС?k:iJBޕdyF|>]ptv8 nsEOsߜir)?J.̯ 9sM)'7%k卬x TL)xTIU;VNJPtt 7㮔MvڕƷ:~"mcp:RCntEQ%nzG*^-a!gnI6UaNcAƈhwwVYjjZ]/n5/&u NGckst9Yr>U6:^[ mzы9"QPH{`Ya^SY\=1gzEѕU9?t߂+ lZ1Q.ݱrP55F9{T%eZD0SIDAZDi$t^]: )0g# B5>B^W?yPkx[E/ٖ- ,2^= \[3R55=!-:f?Uv+kT.ÎPS3[~… F7F˯ե`kjqyw(Xy9 /tLrSFwv=̿]h/Ȏ"_WUcKó%=x{;e !'1tajyHX2u CKnulp+^n g9K5(<20YeIH#䱽%;/7hO|f?QeCPF\Fa G@sŝD*<+-+r$̔-L9?%\yyAjCo˓`L}im7yrG۱TJXVAN AcmR#JOKbHUF<|;'5Ns6Z T-DoIexY̽T% Λ SV3U =rEVG\ C!ԁ xUpUDӆOދ((Qd ³BjM[B)iPvCʉ>@\y+RG!.8C{Ĩ?/2"X0@g v4=6">  TRCYrjr\<3^>O..W'Ipk}Me N™.]zՃqAPu8s<>|.$W#g{2`D\{ ~t[ /- y;SNg?,c_xw WȫwK .$pj荙S8$_%2`h8ʧ,W@0u*Ak' -mJ f't-ȝ7s\g~% @:7Betdt#^Zkb97hn[sqoޘhTerΡW8ZOc`ឡ g!Q+# -pϏn^A,-zM+X(< OM0z9 @UxK.p~ T2ʦǣC/B!c:B9%zfٍRsā *T{#)\%!%:%jS1!Km qw>5TָÆH.:,M4ɦ4|8[gy R2dh_^ ه71bܪKUqoVUr< C{ XI˝rh-!fY(QXiϐLI<̝? zj^jڹzy/.M\QWjgFOM{UqY:h%[yZ˞)ojpz  qǃl &Ss_e(  ݍupaZjhO<4)h%ZX7 G Hpď sSgyDhrE<ei '8SOMa̤޵۵}drIے֢\ӥԹ*v&8.2Qi/4)R1 B- b [MW).P~}SovR-,b +؉G/çݤP?GRKuWߴJ{8+A*/W}\ܱc64?G/K}p֖WQh`BGҝJs!J6YVhIhO$@!yBpMY/=-x+<,y8Y$H0žD=XW]Z~4\Pc\ NTnMcIΓ"Kr;i/t3c:l5O/wZk<ѳ3ҥ?"9 {8<,UDB1dt6?}X{h7EHHMCIBi ~C@::4 U;!*'W;䠡{,w5m]P&֡kj NX*+L=htZ#Zl9,E G'- e ŽZ \wldD;޳2n3qb?rc属ijme[%F p&9U`fݯ_z{dF<&1{Kwvjqq]ʬ !wmmSɗ. 
̓mI@Cꑴrb ַ~:̚`,;}O*KaN,bu a3@NX2!mD$K˘ qKDE5%C|y(UaCM)@S;ԃ =uP 7 Wjn a vi-}T?04Av4G ":2 * 4x6RB!^r!foǴgCRcF@ծv2v蟷WKߣaZNn -\zkx6IffsSڞ6Mb^;s~V`gDeZd h@+8 2JEiI@J4'*iQd(%ӂr.D77 C_t3Z r·MJsl)/?S#n5ԛw|bJ$ 3Wpe楀Tfb̔Xl^k?2K$ o2nX0@lXa[](6e\fC2Y SuM!5u cp :}mwK:4ש1*49rKEuČcTmHznfܹ-i;S* n$/"ӣe48?}FN%z[zWf}+XomxxکZ_k_YBn=Bbhtz-i^Z Z5m ZJkJºwCM$AIv)k{k]>yoF+e_NѺ,v7/H[}/eMpWY}EM=cJcn5i%{~C;K!ΰyC5O겍S5Up)D uw;^^>9h+^g)[K>0JnNA%*{Qrօ4N0{6@R{Yeڪ.\E^oXh?1bcK-?dc3,9Jx@ͣu^P#Q0[H+-e3d@Tv lJl[|3h]} sz7"C?8aܸ6*c*(I~T8pe$Vp- O;sp]uoRN3 MQ(>TIF8 i^("v%SoieE3]fjE?D gq붻ԘK"S$g)s IsJҌK9`NeNH%F0#djS c $T]t;"=E$W6|vJ N3B -)fK }dЙ*%i%)"89V .242GXpcRS0H /R0 X,o, BR N7гKfz5ZifW/wWAQg--PcW" f"9LTH e8yVd6.>E %aX(A[ON [wNd) bFQӚ |a1v^h=qoҚhd;ElIkG~s*}ZʩVp^ )M4| 8oTYb!R>P5 f;u:" ӭhӝ,ZLHqƝ=<Գ&%c7xT;ɈJFBtm2 2m!-rSZ R^v"mF~ u0I4Lܪ#tDlϵKe}B7^r-EӖ^kKI7Gu#fMN~_yB\U3T>ˆ9)(.r[~t7%&rK(Ә khDAQ_dsH{{yl "tF82<^2(6P~"&"נ> 5q.qiylĂeSCx,ݶP K4Zj Yi`I4+A .Wx fF*NalMh s\~ji:qZ!J"Y>d 䠴6W1,Z}Z I/亾A7t_S.ghŴĭTd2,͏+%uUvvŻkuwa[|=L4 @@Yqq9|$z>)G]+[]7)U[O!O 璃I[ Wÿʏo| 3;0o]˶de:zlQ*1aZ(+pcGv$e)ʒp*9hXcֱ$7ZL LSb{| ˂Mrf\+E*2@LRBjٶM{d5㔑VtbB>CNL#e*ZX ;vP??Bx9r(+Sy=܆楲<zbeC*u:bqn]@P@9ΝI|W`^0(µ)gR=DUhѠRe^f&%ӈ*)uP@!cDž*u.U)leIKʹ֌p,J%LRlؙa Ʈˌ<ǂgJ ukJJ,EedsLKӂiTYLEY,idvqTzzh2&))"AHΈRj@٩J#iHv/eRH!:TThQ$#k |BFT5a Ә O{2PYepKT~哏sz/kgEl\" 9NZ[˧Wj?~|+}f(PBhPLRKcsC2y o-?=Y$AAxMF#lz$UPӵ~xB`\ tui7jANNtK"vѲר͏ ickwD&Dn޳Rck=~𧁙 Vsmxu.3GqͻeG-M LL*<>,Ճ#Aѓﮮ?דzkrrzVW{@[92I`6/K! ha-a\6;g-~v{XL&_>hj=6zѬߌl.Kٌ_O/wWc~ϟnFVL:;h?6X8?lb^f™&e= zn ޥ敋2^v]CHR)$ Lj։"2Z*,JZ9+c4ebUlr1:}.B"Krk2ݶOFO:cz?,P@Q._Q<;!(%%e t=kKXtu)7%&jEdzy6iH6D /rhV"Y~zƣ^s.i^rZ6yM;. 6m@LHF6K}~O7-0^f͍/O_?E6<;mH@;T Ł]O7rHd!>g9x}\J]7,UMmz7nEǃb̺P2)!g A$iixQp&P+05-F! ] =mL2إozw\bCo2!NfE@P$5=淇/B0N mJ@K߳h;]%to\PB7PTD^)42OCÄ&ŝæ)'JQ&6@6$Jiuʎ3߬ xٻFndW,i ,d`d͎KN2Y俟bdb}$VX_X*l Ҍ.,beEEѠ^:f%/'#EO!S! E)Pc:v/O+! 
Z#,PBל6ēW ).'3~rZŒӪzzc tT9~//~Obje3iRIMZr[@Dj.jZ;JiU44+,ե|C+='jQV }?f_,}I??fB&,dB&ѬIԮ7˝d%yTδ'RX wJç['p7H3-5|hSJ)=TIc?\_A;*{q5=!Hpス[߭x g8>93\q9O'{}I o/_>^' dT6N?;x \iQFwn>ʏo2Iˏ7zPڅ_UePWq/os翵 տҘU2s0iD,4!>p6 I :/wPkB$ʾy0ɹ|8@Eo{5vi&1ܥwQJ{URX΅v/r\O-4iUUq57KtR-(!#h޾ Ѣ?9@(ex`}H(tX}hƨ#T'Wrƥc Wq?. !<c=Bxh a\QZXm a\QBiL60.¨G "UxP|wTQiãpw}}zwf;{sj4!zE4{װZ=t$ӽ{7mDP]=ݻ NpqU7Ftt[d􏨠ZH=`JR%z1)hKtPu!s!A%_%lYr~,JUz=9rYy,dz%sPR[gNϝ1GVT^لhƞeyV>GX:|oZFtD tզo6JmlI[O] L:@>گ~k>XɳKBiU ^qjn3IڭI]L:Ym7n8EޅT$.E*4G[٫_qܝ܈~n2<)4?baS- 훦I`c|GV܂lQkw{?54w7qi/KpoRWtp@[={ve̷]8еlE{v寪T2@ŷ?|(/+O9D@M^Ti^[Q`a@ 4-V^V](Ct"PN9\mnEZa@ 4-j֧6jdbw8Ouĸ[{bh6gUNLը\@nQcH|AED햸Y+ʔݻ wQ!!e$_ǢN?zdrR\Mn򐸒=|l+ՏUl hho*yI f:q;WS&*PP[Em ȾY!B?U"W"Y6HKXF LYƭDP-i]1]guNĀU>U& 4|k&ȜrT~bj +iz\`Mz/U]A`K׃ پ( [ƍN TgM0+`N%P!i{:(5qbzJ(Z䤖XMII5-Cpgw"#S4liР IlH(:x: T&_Yt*n!E?_!cL*~ S..1bKDm ~YYYY]&ހ&+d1>wj^Jj+_ww7e$uI8_L.Q_Be5dyk>_^LE brIF.a(@Q1rp3qGp z<!^hKȥ@0YjrVpA2KŭGZwog}-l86Z%o YOO?){޳)Ap$×QGohE8Ͼ_Wa*#42 8CpX>%m BOߞ\TLO"T v> ?(6T"P%;S*NV-1g'L7ORXPMhlzVD2LDE{qDr3g&n<8[ahhY.PǏӦ A>/?\7ﯞ~l,=zy8yEFKM!*XIG vyIoܙhqE$B4](՟}%r\hm@{=gppŚlx:Ou:-HD{N 뮰3RcW~F׌v\(yv)ӵ]T. LITmtih;_B@ 뵨]*~`Qͧ;wLSzmrS$h^*mDZpehҞOAdJ(rB6WfȄM)2@̣gy8RcFO\014/摢?:\]E ;(Hvq|PF#OHA¨A!k9b`(Fս)Y&\oBƹfWj, 0-um8`Dъ7 i ٓ0}LT0-q*rf('2@Bm@enΉc㣹.Jo7ohsA%V\)穣"/Q|~j7t'$"i|SrloU2ߟ**6h>d{Ez!n4r^W2=߉+ c Z0YU#TK`H.T>qKҚiؼ*FDV92- yu.vǓl$]gv##_&Drt8JpQVy Xauĵui#b%abwI6s]@c kqs ay@^Lk;I]|ލa"EBe:6Utyi#iGM.y E;ա`B)a@]+RULQJ!Syj-5iU .\5ROA6= H6·!FquW&ɗk"zmVbiәY-=Bfu;bL.#v<؊%5-؎,d&m#S:7똰puz{:cZQq?6+AVOhFԢLb+'1%'&eE)y&aYq%*FU5? c޽{"F x5(.AzhQGrIT&[f9ZxcgJr=VeV}繑c0A)u-GYVȄ23ZnV钑DyT6;3(W9CO$/m]d8\7nnb~Vb{lƠ"4:]<cـk$jIcīJaL hGYm' Vzqefw& $h]VLQZ l7ȿHSo< |G$G`{3]0.(}Q"O+b^[D c^Db|d4a5جӵ5kat vo*6$ t<@P`* F}?MO`=9Ϲ)xbE`?E<QcǎG>[63QbJGl c3<'0UrJy քaWWUѴ8}S B?{WƑʀ_yw)}Ї}-%668鶹+QI9[=<4$gMrꧪj}&/#hBf{g&L"'/z/A!~ ҒPΏAGuYxi,N)Y7I|beޑ+>TZ(|Ld9w9kOazଣ)=A5<V=i@E3Kwd>\O+^F qS%uw\Hyss$!aMc=!0C#ƛNu'|S }UI??ł0Fu\. 
av͠g.U[;b%i;GB{\O.ĀQƔEAүTp^5bB\(;Vуkh`e 5"!2 b*Bpⱖx-&VQb;^hPA[+16Q͘AB26a$IHԞRsи,sj{e}ڗ ΖyzbJLUVζW@6/mE Y,/qzb8B eZ%RrZ$|PtgFPw%J8 ٻFn,Wd&%~1nglH' ViJrE%d*XbV$TE~bhٓ d +/OC]No"f"]1ĥ 4.OǺJ$@DF$$ADfz&X$MeWv\`(u`ׁrI @]bo>-7l\^YI %k{|LUY\ȦO-RF8 .d>D'x{< c3ϐn+"-&\A+]㑑(~cL%}ku^FXTk)ŤFHK, -4QFILjȅw3m͇| |Xox5ҕRrzNtGipcI;Lt]/.#:5!>tq&> 4Ն*(EX(L Ʃ=Zf Є>" ]tj < 4A(R \ܾҕ_t6 keNAdƫWxvbɏ/+)=ŔC&@Uy*R=Q㔘DkDS"ML8I37'-uˆ7; oU(zv-!GWF x,1Sfp241BLƘc P"qa0"q^k8rn\NG#& [\z& 6q[ŸS,M)` )<)TlriaKM%@7CB!J>rn+B -96ט1kMǽU@l#{ {6p)o-jogo.1m&߂و!<C.İ{GߎU <&n!f)[ HJ1=otQZXN9eOA=t!3^jGLQn)bz5$ň[ӭ]Z"WbG DLƟ Pv T%vU0l3F!K)\l-N/;wNT@vW=t[7tkpǂKM A- R3f7!.#ITI'}tZ'oHnRS0f(> CpAB.ရπuh7A:?NoX7Ơu GuBbݺ{YNs-Ӻա!߸3 @SZP |T'*֭UTW<[NAO7pABivK)aktz"sh-i&0qDRL"AtL3@iRP<(U0ibB5D"ҼTg1If&IlF{U(LJ}ϗ/*Jx ՗Χdȧ,QU]f>.VWAs藪35Sf4ESk~k=FF;x؎{ac?_iUsg1}GƓX |ܽV-|Nhk>?~I^﨣אs ZU{5geFv 7= Shq%Hў&Ѽxz췟tQzѼG6~ѣNZ:GsU3I+[ϗ'Zyƭkm}q:,]T3]خmUyw:6´} Jv MIIpސ~OK'RwltSKe˘-#FVا&v'l‡7d{Yz,8]$6nk GCz$RHϠIcMY<MRyv$J`=eoyUѡn SΑx5e #  8% 1Pf XA:U :YYHbCUco[Y`rmB QxGrZ6ԯΦ. e Qxކi0;s=(MK| @뼛,uʃ9YšjA/~n/uQLh2\Vy\uѺ;!=VWPuSw6Jp>񊔝_m O $D)e^x棟[.}(3k cLs q DQP!& So=P!CzO p' X%L(Eę8J3 xlCL~+.ֺh>$O3.Ν&˧caC5o|sq .TH_~n>~my^ěutM% ܱ37l4/Gc펤 (__ AFX?,S)(lP+b0YÃ)ʶk[qLh8IwTkO!駑fV0Y&bygz0y\Y?Œw"3ȚR\i5^ټGHd:X"O`&,Fqf;W4ɢx>],5/r{NejM3[Zol5/E6:Ph)#'i2%g`Wٮ! ߘcO. >+|u2#=QxtW'I޸XjCъZ hh=2{ooKIV,XO֓Li,15T,b1LiEKI%,"-0ReZ%Nēfjsf9햋"13'4J"@0QL)TOf¹_Vb-nL!+Gyۯ\ܬsΉ]e9Eu:e̍qXGө뢜"E\]iNj}܎5gL/~eveϲg,+[!{]FGxсf} 3OYKԊ$%XgL =^ۭBPĭ'QFAaݬ93$FYFph fgv#3]dBic "j4+R/͆nDƳ>EM14 7CZO꛲ԒpR~,nt7%!0CYKOZK!R}RߔHyғR$#>ZRKzO\KO/$R9|_;8k j)e~ZJ0zii.5sÈTx:SD=8: FSӱ=hN-VmGS*ZMRP^7 mm05߇|t%I3#ʁ"HˆCNr׷c8\vVo\{(Pݣ_4"gG'O-O>='3l~0.#-F8wV8k*\ŹTcB"1Psc}+lu[cܔ=ƾlpl}Yh3!f=;]hsOaA^qT eB&f6X!lV2 &IY"0: N||; 9Ɲ_Ƅ,&R2)p`<3ej"xL"cNDo`(<1CW!@`NhLD\$:#SURI&T)iGy$5t|qjGZB>DaBqSqtA1DD0FӔ`(*y He*Dg+%(NQH̓QDIJy2Cfi(ʆ7;kiƐތuPb|NB&`(a^_|3WK 8$JJ~6f8NYPeK^~~^. 
ir>aiL1AU}TN*B |S @]k^m?Q=Ϧ"@ 5au^  H검]X&#C:@RAW1|ZRCLĮCؿ!).hMYj&v1܄7lr?X 졈Jib?ؐ鉝v_gNÛ`85s3p;CF.!v:uϿ6N< &Eof˩:E$Mm*fڹx4K rMۃkON¡8tICq ҩc`8FoX7F<ZP |T'*֭yRؚu wnuh7A:EHǛM: b50Qiݒp-ܭ Ut]k7CALE;űL(ͬ:keq:̦wg|p`O*M]GK2:Թ\}|%ɦw2Wkݣh6ybzp*zf " ,@ќC|~}O\';3qC jHFbcǺNk iͅe=B~G}Pѕ-zohqdT'N*2]`.wgښ㶑_Gup_T:T9-Aֺf:}HpnPc:)'pFƥ?fWBO\=<9 8ztn#RuD2Ɇ4dm'b PPf6lͰK<;|/'ιxE:(%6 N A{ˡV=;Q$ֵ,=lVP;P:Ӿ~u }:խ1VP=;ek}iL_}NHj,PCsC#YN $&'D a{J9Mp\zN3=bekz"$UIlK.ͬYڶw:lة" Zb{ Kyo6=\}%s巇z./BSYo?ܤ_9<͝[zYK|:pco9/֙lv#8R!J4\'`ĴJZa0 [Mg-W(E+4=6X ,O"m@# _ v+>&7FPLdI!"+ICNMr[~n%b swx+5@Vfvы 2rK P" I *.H=lʏkJ\%EFk[G``o]ŃC# }УϭG2!>݇GoʽۈPKBro%ѩb+:}GR@9NAxG3dQ6NhWx 8Mǒ<|9x\T!v 7O4lt-AѬzxP`x t1q;~TC %|CvJY%/R|Lܓ3—)oȤ%tv}cgs[8,?IOFo~&nh4*,#)N3=5'9E)Pbʵ& srC} o&=gEm_fDin8unkO]9l̮WCن_r֑a3[,3E \vKkF@RB7Q' NKmm:9"Ǖ|^mOhg`ʔIXEUvɷtFZr>}~.{ HM{Gq1w:?̹|֛̏`R+ /ߞܼ"܉ Rlr/ٞP)SΐjR?=޺|6k)sm.Qŵ&R֧~hb@fTXbEmS44cRpM-gB;EYYX7bMj]9C׻7R$\8s>υ>]߻dS^뗳4txrusۻtvCQ\S+ ܐ*m/{ۍMCx'o7Xp.Z37_-\z9,@aůFVs;{%o鎛UzTOPn:l~7j88?I&r1dnKO1QE )Hj ?!\(-SgxI_o>3kx&C#'I#LaRLt #NqF98zx(=R?z8H"u=]44P'xjyA;Aqm; 񠰹A JYN6"!ۉX^(>)RT9iIAS [faVPiC&Q,ZܫRTQN8TSOj[!2 携PƈA'o/>IB;{72 [ +-9PIhd|6)AįN"#Ĺ%h-[;M4WqBnk,|˧WPx$/)l|MBU4G QLT~Smz%9uج$zx9wQR?~'+ustW;avX<;hJMzbo>7o_ywV_ߒ)f 2:7|0'ʑrz;U^B 5!5! P;nSFUuݺwm:_ny=N-E^e,yP|w !1^ g4cm9+?]f' Ǹ("@]os'U,c^E@&Z吠1he,x;tʜ/_}ru3ԁ׾+J}QzƚXA!J)!V۸&u2N3{YN`ݟP3RX^-pAƂ)0h#g-48~xyѻxYa>:(/NQ!%=rfnq}ݹJ~~5qsTiE\&)KyByJ[kIs^6UΩCް1# jx&P W1NP[q*pA%M^ͯ,X{ H4ujOz9]XSl@U0:6-觚 +1{E)Eq874D0M 샜ђco>)b1Lհb=aA JʕVP͌FYR(GQȢQMS^ZX"[XQ^q=:1'0Q`FSEDFQpjj_]j$9CU#8wҭy6׀jUd#*oF"dK5`cƯX)FdSqr9 ?HSnPgë4)4=~YiVZ*ݳSDwOP-iBq=9%๳ ,نPn-@Uw.dMj.)pU7/)0W)V3Z!5,#& f&<۵a7kC'p}V@y?M=IN3V/8{{eHxg/?^z~5N7 mC>,գO8%< "^~A? \ފs^rJ])t͉v^suioZ=؏Wפu=\%wo[/[Vz! WSAlC"I {djO{ c7?=oZFiwelj Q|- ֏ nk[yKZIJx3lZo=xջ 7{vZWʛ5RJWF~.#_<ݻ)o#RyD զzG.@n _}tˠ\Wk{]tEd>.&b1 T 8VĺȶuC?uPkCJu :T|O*yލF|R1v:YԚ";y5Dq YOV}`I넾ånI1"RCdmcʅ/cmqEr!\>g-Nz7d,s?gwd궲~k͖:M=d?MW/ھo JA82QBvE;:`HCW3qOn16j_9ZلYtk'F 0!*ݐ&kP4?htJQjfU4!(8yL! A0u? 
]ySЌw=r %HvkXH665"K=duO7j,-!Xv9{y֐H4 BYWW(Y) !$+ A0ZYԠ,1岤QMamRГH4am2UNk 6)A[X4.܅;'#!BP PK@8 #ǖ{E΃.!;r\pIaJ BT]H.hAh+r"diZMQ YCT3UZ')K"XKY)B@8}zډX={*s{ MAoɠvE8J|r(Q0ͽca9p5ЧikkXyX\NQsS[O?n0n\`@hfZj "qܿE뿴1G q\~tiRN;_1c%;违ZU oն wm2% `90^a"X oSztb"Ũp6۶> kߝ w5r;Iny _=6O#On4FՕT2FTKlgm]DD$ TjJ -'X~aw (D#l qMY.P Br*j0/X\i)M-jkT(VTgUJ8%OZhX DZ~AH/֫N_}wg'/B7ң@[cx[;yYC3Qi֫'Ʌ\7o,Vݺz  hMG׆߿$PADǝ@TMC+R!KW{0o,ukwvqPjr3p*yN{[_)9#mEٕ@ cb^njپFX'rR7hN,Ӂg%=)/1iQ "qOgxG8)3ʁ B5j H痐qwBv"! 6а| *1#SB R8!՞ GxDC!&yQ+ aL!DpH{ ]ĄOu9(8L3C}I<;Gsd9 ? 1tB1S&X(,%u9T+tO܃n>tK $ejaɆ-kQڕ4JQbj.M]\$5P- QSz~n6 ,"\ *!0BB +!aM})1EE>`9FfnFK۬m֘8xPoy]-+ Xո,vvCH,5hg~ݧ/k jH  5I B`e[ ֘VK>6%VR\a[wvq@=g>Pjj[ +Pj7%)X$) T>{Zwj"XjcW@!$$d\ײط ?_>t$D""TBjPFeȲeExmQm[ [ xJfpRӫ嵝v~|\Py sq\_ms_?a_[invryk_,WUo 療 n v?n`p@:lߛ;ꕭT7;]RVQ w$> ?5k ]v o~Sֳ')0K1Bqw# #F4UO]02@Hr0\{EA0e56u3$,kwI&oM|shA\bo盫N_%<;|gi#E5愶&flpgX<گejI<$"<EImCBՕ.Rʊ:ӢO)zI!/i6[r;c)hϊ|R<+'IAd ̎'Ѡheo#^6{3! @ '/bƥ~zlO~ `e۠~ב5ND!$H?1֔;X?^<,?6]-/MVcէrqu{rkD++/PWbG#4LWX Q#ō1+FTUFU Bx)%^{_%RMz֧0_<HBx|3G9"3(x8Pa|YjB~IW*B ;zbZ*Ġ/QfpӼ~;XaIoK>q䃛Jqpx^Rq?ZIoFBHp.0!K Ԗ\m&d5@hK#lW$5TA&g**%*k*))H3EѪsQɇ/luo7 ɸ4lozQ2֮CO^fW }u9pzso{m]چX-sl猴@¾?D9w;f 9e}쐻戴`09~Q~wSV7l *%ۼ%]d{#职 s>;X|Sztȥ|(~Oo5~ b߱.w[pVu1F]?B.=g@ʨ–SbJmu/;Dz+6Sº[)7<9Џ]Vnw~̲fTW7ёY(CC:Ӗi#ϛpaܤf?snR9nN;bz &ɡ{tE-]OI pOs}Tp#O)B .( o0~9hձHA1(egnK Od8`xp \a%by d-zd ±W˼Ǝi · w20 '%#~?%YvvUwRV $sw_1q;/?UIv28ήCm;!nᑄY(u!T'd8Vʲ`Tp w7PAye;F4 wn5o!FY+6n%١=2oZYĝ3PtBs1@ G6t{@;KDT>/i<ʓоO 3ӿR"P:,> ؎8C]\K$-}Gc"s:q؁K=u<ྎc>zZ_%0cS]E-|iǗ;B i.ZY@])!m>Vm:x#yc vЀɌfXt-<yF3hP玝g-vxA i0)qcHkCrX7FvlB 2ZrkYùžK8fVbpcOL1q)bF )D9e"ov:V!O8ywg%&\F`)#B9F-QX/dWu"$_b!d9]\9=RX&,v{iN`O&rl|Sva/I(~u~"T\gWYr(9jW'JPݯI5eTΈ@R>1]"Mҵs6$ET\| Q2k !H8y,bJܛ7d6[]؋*FN]ї1t()rZ5<اr&n)mxi ˟|B .4,"IeD &] nʨ`t|Y@N-ػIPNqv$(Tlq=(2k3 Y0g\zBL\dHg ;(W)ƭZ*GJVRЌ k1d"1SNXȲ ba\c*!=}1fj> 68f^b "yS&V N+YK)5זy1 %ce@h&RhF`%b)t ` pdxa!T" R I(3ipBL]̤ p)Aȃ5z9 cf1$-blgV[˥' >ﯿ&9  ֟#~Q`</lap "p_70+XF BWO;#3_|;ߙq9E%Iݻ#(B>}|5;nʤTRDQ.Ё-T aERq MCT[G>QxULhehc`%׌T3,e<)G:~'L=LVđ`%^8YG(fp;eN XU׹ś*1u%^=t=N 
h8G*WDbOx /p5ALK!m ˎi[f"=IOh 0 VUwTj!8{BņnwsCɗ#cѰn|'h+⍵9Y )xeN Yj 9S J&P $SaqCf  mUV圃E߆7Q ޱx|iGL٦ k˂9T, V%4Kc)Ict`2mMʕoA:TRCɜ &Ogl/`?ctj9p,ۦZh1ݲefb#-D-(L3Eg3}jY+{,*k;Y\o\t7><*M< |q2&T wMٍs8eQjk0,95 lro )D)ݏ~Qbe1\D)"(52?_rRI{HVBZ~"=>' =8e#'5Qwra4CcqQ*y(MuhQ.':PЂ;FՂ >-K5/["%q}:TGc|>IVQ xpja6=Nj_ǃ 7Dɂn.JI^|TOaDj) B3ZJ]d e]ă7r8V'JVVrě.ی.T Wt-L#0tDD:9hOuetu7n2-[?X͐0AdP$+'X+۾4opGѦTO>Q˯t\)3{un/4&yh=?sl/3ӊ3 s} XS_' K ӑ'}3.]f?Ugg9s 5q֞̾4Mu̓V|>:>d*h*xGeϧ=]!|{1"ԛfj-p2uK >CL) [ƶCrefzް:6S%gU{j>s1%Ru.dUiO:` z:)Ŕ^DeFbOrIQ&?al -g4S=H6 "}JArI%RMfP_ a<# cMyζۅx0>bϱMan8KcQ龯-ݖ&&y=a(8*%Z0&Z:>`iV6N止 *e .[O!>&fhsFvwk$ vs$*)hlwqDYƥq+F";FN! 3kTыòFs[d"2vaWوdg205cᴘK2"C,,+2#YІ0aTXJN_ll\t"zFE#q Tf)P8 Y$z͸ϲZ0*&KoWv!6pFAjP;eN?ZC|Z{eB5q.*6(Kΰs%& 0|x2`"$%ig+F"%bBcL*a#GwMarDj7S/^W`[qR"bv2E5nF#礭]b@|k=ʖ%j;[Cn֞V\nM#X߃ 7/\n3aruyY4!59u%+mNyㅏWj 0W|U!8Eqo*~6!h!0_)P!Qoa+e0ruN|&u>=G-;ީq )n4Snt']m$Rx,s,PH;~QZt zpN>w%5 ~ :xOSo WAle:ҏ|XǥWUV>VƥfxK(O4;iW7gT~9&440۷m[3sx9Iֹ[HA9t\m,f4-ʒIe;fI;9u}2uF9|Dk #{N<Z7uv08a:3F.ɂ#,8&:(||띹  .u74((~Z[>n gO|@FQBJ Yi&LLm4ESZ i L$O7Wu]t(XEwwzCsf`l#pعtNY& 6 vɯ**@X\U^Ry"LPe$y.Ȃk? 
/Q%6iP$)/(OK?;d;;C;089d$U*C#M@ LQ8QJ !Eq d*/͝uiUkU^oF3\8y?/Q)\9ք\0K.X낖DN{P3P}.Ԍ;+ yh*圱?۩HbIaU$X+J(d#FtJ(d GP%?NA04U!dÒ "bʴˆH J, "%0 J}=Vu]/Y" ώ&| פ`Œ5w`" < " eM0ҵE&V/&q鬲ci]Bq(JAHPNba&:B )9"iul%u@0/JM-0,o2y9+}sKqix4'qbƒ) a9(IHH$8@ڽB=K(j(A^F\Kwt8 #!]OZPqsE#e\[7߻OljG={ejsQJ GrD@H`Y:jhJ.8gTrTʅʙ^M8_hWq}fNvu)"feYTz^sU L-[B2C.uh,1 1cBhq!mqrXFH2Ƥ?FCOvlvs'q4 @JFXwY qHg]Q>]Q'Tꊲ; OAjʎwoɨ2QSQ0l~VYdzr`i_SL'k-1m+t{@)maXB9F$E%)?2`ǀ8Qh=3Ȇv䰲4˷WJwzꡬ羑,P8µp`D"@< 9,{SOAOLlƶ(U$)i&&fEPB,T2U,bHDLD.օ.1; OPXAw3\ X\"[G 0%*DݨcuÎ0LaCSL$Q"!N( Q0W/*( ٜ0sUw*H//oMTӷw~ae]FT.C@f@Ak pqKx-C&aeԃw`nVvj|T #4 Q E"aXi1EihCP'&OQ9.(RHY;f~˃O"}"Nj~u'c#q7|z~ר {tvMaSALTOQ߁ӛP&tLJY[WuOZ&&yywgBeK!lIy .Iꮠ3Cq6gU?OyBukj[?toC!FTp `\5#3tv*85u9=&.ӶQ)QhYؕz%'po/  ]#E#z5͕7d[|>bWQ'2AzN6vGz\NԗGæϺ2 m>jXȥem=Ѯ-n!Pcy~~u]>?3GE݂`9*,'>rTXjOzPI$"Zǁ˭Kh.1֥\z֥vl]"ZhN9n=sjl؇IW?S讽w[S5l;͂o>7{>>`SB]%[.B%%)SSR]IrL!i-݊ і$@rO6%E["YgӗY;dm\ g{"ی0 3K獙dڊHs=|XEfw3Xys{3, \(NIz S A1sMD!g"zq2rхӏxu!gt4(63b:akɻ7] =֮ x3߰}ڞw=:l(I=_! $Q<"S%0\TbTsxPeNcm2qQ`2vAŜqbK؎'_ D&3ۏ@BleGqbGZ^N['V=>U'5?h '//.2N)u4yh..A`Y0xKDAO2vҸa[{sTc0N5,gMW3B:E-s1r!h,uz5i! iPhJ*;ˆI~<Q|!(B {o 3i|g󻎌|RkȖ.},  Y_AXCo2=u2UPI0H4 ¹w'~]N& ,"-bz d1pI:zlr5>&ű_:TM[m zCjhV uUF)vbry{V&a^Ao5H(&3MdzO"?x'EF#I~g;WZC?jgQ+K0JhvH-|ȋPɿ5oTpWByB(T* 2Q(J2m|؏*f j6<݌Y!Ρ'CT 7,M^Pq+A1c(!H)|9BLsBTdҦH$,5u`' sP' qw b#QGwI/\|ǷWgpEnGu֛(#7MW߽e&ty۫G;sA??8rLʯ2wj~էY5j2y?u76|cI{VK\p7|q} 9Z yct0n&3{nѽ}H/o3fet2۪a?%Y%,|W$&n-n8K̼HẒ䏿njU=hf:di7힇Ćg M[x@8q|&)(yl{$Qd]]]Ul|2RX,UUj 64f獿[C8&ga<?y~ {h.-߂1Je~z>_Ke- ~^?}6a5Gi% VYl̨Z҈V tnjaCP!CTQ(!T*"lx"%20[uNR+IpPk:l UK ȂCOQ/2J#h `'@bϓ O0>AH ̞ ))_ }<3q HHxa8"@dF7üBHdq)MJڟ-eX?rdg$4bMmZ}FT%v_0v[TyxD7y܃ȩH eJAF( `r|Q_@R߃I!O݈oMyݺAJSY=~6ۺ 婥3Nu!jM!4IvUv3q^xe+{͚ȚS=١#igMח5uR:R0q ɚx؁]}s3GY /&?M?sFU mOHõC0,rgAo5`/S] >Ss]ؽ]5(Gh%Egm5tQZ賉B}a'f,8_!AN_x1IC.$qe3< Qu]h-?;>y7O~yG4j³\խkcHAQblKS]MYhn[@3F&H _%j (=-N't{=wp^8%)? 
Q ?[XfD/vX~Y|X08˰A~Ʋ^ ,+@TzL5E*^'ԪSMV˨e,aKvpuQ?9<ד_vY3{my*l<_햜$N]d2_}o'mkY/3VՔ-Y`_Q 喡y87-鲎S)4۩އvXvJ}a_o #a{yGy|"6s'b1wsjO^X8D Q,䝰RTKA2mTY)֗"֍cTOv[U[)wq QVk`x\%z#u'Q-k7+vWNX)nVj"7+$턕:6Tc:7+elKZ1'Tkwv+uKi!2=H;sŋ2,^=s$>\Y"qBglof`?+3Kwre >?z[dwO;sl޾1\N?5ba_%$S)B:*{ Tg@%nkVɶbRh n?  @3[lwuYe=?P:X ΃C/fLSTf(]gg̪La_02>z!c"j?C_(Hb3Ř-wxN\&޲yT U EE-*\-ꥋ.9kUdϨ/V*lޫՍ5oR7 sԁD Iā&".ۇ*A|JINgOòZ: Ftȃa-N饄h~&5pGxE"b🈄8#B)#lU.#bF8G9Qr{ 3kIq&04AkPSbB1/Y bqr++1JKځ+3M$jG#5C@EڣJ!˥QO3ߋ(2W2X00/#u#Q%HS_Gcmr"` }fdD މ#ǨQ-E_pG"0 \}ڞ38"`a(Qh $)0aQ?"!8Đ0(%ȹ4c83~Aё֊b&B@B:f#Pr:ޓ v u%iP G([kHp` #*I,$%wZI,ޢb\e ڀqM}K/,jS{ H<8&1\0BM[!ڬ (tB6M zLPchkP-nW%53p!^<WHo 9DTO=>T8lYZ j4 n,n9Xs)|nU7ꈖ.H落%i {9ܹM|R.LFp|\ꋠєv%)J b&Rq \ݺ)R9rBTsBu!tZ9 tJs/Z?(|-+䗐FXNts)Ri/a+U%e(PpQEڪND U@h|SS؁nTQLeTK/^}N AW:U7YZד]uVlY_W.>LS[kY_?tlY߄|&Z˦LvՕ}Fv tIkޭalGք|&ZǦȤnueb:uQƻ]0 ՚w_ݚD6MaVk!rڨ8 7p"!``7 >2ԉǨPmd>RfŠXh}1[u[)s NX 7D2^Vd`џ#f+%cKuVߺ pC'Rʹۺ_:`Y_ʴ:`TcDpqV_\vJ`0Sn+eJYN7zGs+RrY';ƥ1T [[)uq5̝Y)uO7 mUNX۪>ݬ+QmTY)yVJg*mS'}L ʗrhɡך=n| 1HFN{rZ42`f)Jb]f'?|i,\8ʆ:@fbO-˫lYI`lQ)Y(PIfH)+,.9N`fTRcBEufԀL Ƅ 0.LdxBkܖ-ԇGۢ4)=ܷoC'vn1$5 ۙW,4pn;чX ɪ=Xe,aTfrSM<F !WC`kH3)a|DÝopPBS P.@mZtMmwރvk"/,5Ne0nk,}_~za!/JVPV3,#e|$ Ƙ0{R4 BGPdYZsbMՑvUB3#01#Oe.pE))B'BtqAJB!i{H"b") }"ǂR1.Ѵ(ЌHfNU?Q_ZV@M8`H(d;mm ٳ;|m'Pnqv|h"Rq<{"Lj볷cJ[ջiz=X ՗\rw6N%g4dƕ6u? 
'W6kIIu}\ݺhF5.PlHcF[XJ ; "~n s醃 eJn; q,"˝%aNeVt#}wg3H`J5jh`2!Ts}dB82!F2"P-#`2 á^ ).w(lQ4czRen i[|E b# T,"&DpX =SʄҀƸenu*ƙKvI} 4;#8,Wi!汊-WW a]>XqOSb`bM`)Yj0~9BGVmpaba= ⱻΣ??@N,&u#b}ODy{=w6 ,g"4sӆj򺠎 .T7xg/%d L/,۬7x7/)dH{KċAKcMmZwqHtwXM"Y4: 86Eڱ!lGΌ~i qvwbX,>?QPΐloo"uz'U68W2s bnG?즶vqCn?̐A:{wR]װt/Tb /Y[Ʊ뱿]H] %2NlP&TDYAVآjQ6{pH*%ڔh*)J%HĪv!>3GN YaS+u ʨ܊T(צݢ.Z9Ϫ4)Ӓt%%Rr$V nnM.G1,x9f#bFޑvFwUZa&d۔\Z4uu!ml˾Vp*L-5%ƭU׋ƴ;rVKgR|l CRΑoD7/Z׊U<eeIф1ey}os_Pu6,Fu4`V"uj VmeJJau/ͮogK\b(rwg[͙ZU~sW׿Bom+W'o[pd+O̟vUA ;imX bl_+M-E|fF ApW*rZ^']:T?X2T{Ajan(09͓Kgs`f%&#s!ݾDȭϲ'<[ hit%yzbKJLmIi>FJSD4#~&Q[5MG-ߛDS m\Z(I3BPgU#P'64-@=1,{"֞{} qS } C uzϑZjsYXٰgs<7y`MR[)UM$uv Kg;EN-|2{}J2yhn6 )s>ݻ7w@R]ϧZ#}CK?} !gcF:YNBzo<1= /?5a15T: \}ɸk&"wt#?XaOCK,䕛hMҗy7r11ox%-d-һ a!DlJC=A-bc:Hn3nN̻߁nCX+7юMH+Ódg-KV3Ȁ>,f:؝b:yAslh(ԅQ6Q6dn]0(k#r jG6$P9sAE.r18O`b,3QP)+knpZGEehӯ5@Qr",0lKr냦xo~uIjh/x ZqV10 AZ*?}55\wV+f#08Z;|0p *frlZ\-lg<Rёouʉ2$F"..zWFΟ~ؓP+CJ0sҗ:I'Y-hc.}Mz%9[^:7}u4/5䥇&@~]3@xw1j1)JKY\T9'.UPSckcZ_!ȋ)ЖX1H0V؝BEKAS*n[ׄF M #1 ogiI13b2NUqAYW EQAikbՊj1 @Qxȩp;s,`{Z|G3TĦ3Jѯ+v&!WR Q7!q}jEv[EJł lhbfdktA*0U7'W›XGSF-!-6Օʱ4*%J+-qE.yo G䐟ZO~|=Mǂ/`=ь.n6[}kr7fS$ $fXk8(sl> KrլUbkҮn|I5{=FV‘þe[X!}I ;>#H mBVA>E}jer; ݾφ`㾀 GAv'a~S<"-k{* $u'qdU+ suɭK&& A ) !%ɜXv)C*_IePrN<&Aڋٲnyd`%Nfl&4=o&Nnޠ{iAOab~1$u a 79ɛ?/8]O4;=gwU*fUE843]#\ GyjÌ<%`Ķwy\]Oky50;Z<,)>2F-XxlY1Z 0ٽg]ܧLy 4 L#&Y!p%3\;MΩp蚜Dٴ![YR[|M@`J']_j9+"1$&;O&@]\ N%NԄRxWxӕ%QT0ow[]OQWhr"=Y-s1i-ӗybP1Wn{ĊиW$\`sCsCS׿W+2s7Z}foBF!T Hk=R)B!A^NWz=nf_8YJJ)Bz}(=PJpo> zfZJ:XNǏDK=v FqE4G9+iT|ۗRmj<ƴlUg|%h>zxJ$m(J+D Wrӓa1*רa sBiqrI&vs4we.#V XdSQjZ& LdlgI͸ܽvH$K܉yYUefVSr$+q7(!.9ucg Ք]q%AuJ=۷/^Go16Qx.[[+=-MGj |ҳRAqV*(X)k\#R+=k+*JZ1V6^9[f :>cKӦ'_[o)';5Po1Ŕ>? 
*EX U_}m_>p3 JZכPIby:RLL>}Ũe"̩!fJ%an5Lsf_-j,h3RJZMr^թzB5TK%^OwlkE*:'9Z%٣Y7o>qGCc0B}HW^ii• 7ʦ5_Li{`;cp+"|HZ-tj F[EF|Mܻ2EKP~5 C)3>.s%W7z4\"HgP)X9F0r3֑:+@ICHذ{/1 XGFcT 8TʇShS@EpHe *Mء҅Q4,!tU(;յI0~b6J-^=y!R2Pmc쮝;5j?hJIoe&p6ZԶjP Ci pQ9+^%F8C45Ï`]Z%9 Lr,ЮqzkT_[~@F,$X6ij1{Ggc(q֞l܌0C\:>h JQ%3JW8냷FXm޷".ʖ[,Tz&ǁ=W߯&A~iqlv9v $ϖqc!2\SާV~~&[M-gP<|iXQ[_c͔-#IcDj#NxÏ8}"cs2ހX cs' 2XM)o6r7[#^Y07&<c5AhH튤3\\#Q>S0]LCҏO2 \I\#%B+@,$.|0+ɂ,-DˇB%07#5^jϸQx5Y)Jo$/JOI}ӕRJJY/m%SRtFP+=k+H_Jk +H_Fi/V=io#7EfP2@/t{ $HL H-[jINY풬:T(Ub{g-,QK@ )eZKsCXFjc`3@`)p"CIs.6 JdH$51^/Z|.1=;WU5HQCM$ADa3[귍$ZM?7hiƆA hBe94hEegn=ܓ}M/L,}W3R+]H{|jFeVnг~|V.JW悪h~˧8zХ֦_z6agfǓƓ<_!潦%ѯ\Gz[c|16y,g6b}Ic2ig?pҭ$*xGI.K[FٌboZVW-u0H\uX\?@}qL,|^ngijCf΋m >FmW 4%X>9KH9I* P'$Q>);g1ih"K'={X{~ty~sLjٮ;g.@(/%:l&g < ";M:nJs,o2i8U&N/խ'i3˭RGƩAMல-! Mr_rm D[浛l {b)jto R5<{D0k㌎]"z]3^k'P$C08 k BeڇL H5&$u|`Tpxْ.vu+7<똗,ỽ;07'8>5:/7M+-}-kЮTmEpY ~@&YC֚;,H[bvs$63>UBl;Zs^p=.b.ZE}6 VĒKXf_]bXdW~MCx=x 6R,Ѝ2t/J+qk{X/7K9h"|1R$z:9EY4e!`+uJ"GqP \ #901`9(Ble,*B)AMpV["x0s17R\^NCc@r"5*qV ]4Q].Z0AMZ^==O!:G 2+CƅT4W@QؼXs3 "< 'EY/fŊA?=ǣ : :R ,c=_Ϫyڞ o#CDTdERB II`xSBE .uQJ/P$XEq>o7tRp 4QcҢi&- N)TI)-&c4>o)M i%.&JˠsRAIJ}7Pg"g-I).RRY.V2(ἥ$JiC.| .^|ţOkTȪ*1!-,QA͸j@o^@ Ed6,\٢Jt5c$TMy"|0YrQ~IOdq8 2C9 p'x)7+R*1IaʠOӧE=$m{c+*l4.[ewT,lƒJ<łj*碙80PBD xM506ϩcocqW(ZK\VO \##=~P8?K3g5ݓ2ʩ >$C^琲]Rtp9RXa f*3Ո͘"xss&e q) {Kv a].w 4)xc-luA!)< ߣH`\  qM՚k-(P^IonuJH#З,w p. 'h~c@8  Q[Q\]X:Y!Am/8Ve`EbسC9t ,PgBe` 0)QP!@et_V|ξ/X1*d:(ȻuQErӊ<ԣ|ԫ4izV@‘'YgoڧU dse kdwԏD|&Wy&$q!21˖K>{))%M_8]fCId"xI/8 J(\T&phB5жr G͆K K<^bMu?)3R֌XZXV#˰Q9 hd7=~i$#&vxR{RX1PH:΅ 4g YQ(&'3<:i !1L+c,'1<+Pb^.${Tq= =ʫ( ټ9 %C*LrL#l^", 5 Ya(0aJqdݶ>ͽ NSvx s' AId|.AHWAq, 0hLRI!g3r؊96IC]gL3"j  l`Z?) 
4Bly-' t?x!,cfy&t-d6nX&}5c;Tⳳ`4klD"UYjtҨ#JXΰ.EdE >~{LtT5X%wKѵϷ?ŋ}` yp=-U=6VE#Aj}49op5c6\^͘Mzx׌ʾxhlUHR fr]enW5)(ۖ&NQt粻}BFYPk|HpbEqOt\AljX#j' |~XꠦK/o Ӥg@ED`*:ˁ/fբȗ¯xON %Ng^n.o&-Oe@t0M??GVֈջz<2׫W_Gt"]]y8JB,# tn2<ӘS@2Xܦ*hԕ號PrynVR?v<BwT(yan`{Q(ͫ@kI %}KURN_TD.!⨬Ua熀(+4Sس?~(U"G8^{=Rf5p*Ii.LۡfmJY~MaAU;0 (Ȉ!ÐZor) 5 D>+Uc!$vvT?T)Y:}INbe/4){v2@IwƞnTItNy8MYwwrrt粻8g/;Uŝ$ (n7nRVj6vui/D,bx׵*ʭ *J o]TQna!\l/i@o7A̪WLVR$鸬iIQ嶛']l%l?ha nȴys 2vcK{Hի 0k?;{dRPzcf8 D/ɗQkH,4D[N,_!.0;_N+#l})+쨌d,y6oHV=E672["]:B$ӝ;3.q sĵJ{=YJ$(nmp!$ե5'&gG-Vc`9xoV{/trBl. 4&iG45Pr:5+#<[[aMqDUo;ұQIZ2hn!$y2͒Ri*p'JQwVLr;+uB%UR1}[u jӝ`4UQRaxBz5q}lM}n [iDKWVê5lwz+V$!>,ǥF.lYuǠ|De+o?5x s>չ>f(8a x]E4swWpx-ӛkGJ^tzW7acq1{e"tTጉ1R0&Lwu,Js_ɻkTx4ﮞ pv+͟ƋeK.X΋th@ܶ侐k|nvl=|8+JrrT;NSlgn1<#;3GuƄ` xZak:zLܙO { ˎG!AKoU McaY^.Lō3rb?\ C$i3h!nsZ⮋gW0x߂^ u-4،GEt"’bGȈa mXΌQT (X̂ff?ߡp8#BPFCQIs˭kKWE3(u#uQ1A[Bzߞ7QƔQ; 920dquPb$޽`]9``ml̄ [SY~o"Z` 3ix5,IدFS! M[!V gL&"6{[܍8#ϋ4sΓ8?e̬Ŗc'mSA&RA1U@޹'dw+NXer8@{JdKcxqQ;Ba4/"d %ЩQ,f1`"Di(Wx"1pT?]nIO3@+o@?M/;PȫE")^o$%SK@$L։84[pκcGGRd*"V͎1.U'1xg}wE L^÷i,g{iыIzm^X(pY_nNfGLBY&:v^;o8öݰ(7,bs67\s2^nKUJVQdDcb"wc!YGD= ;a4@tmbO6-R[?1D7@Kj9 㜾Av]Tu>{cZy|\joEghĎTd%D2mEPIlMwE;^gwG,Ą³$EUhCFg7;+vEAi8Vk]+G kN)UbJv@ɀ_Jf :A2WztQnmpBXS͢g^/pbD!U2t_Z)Ԓ_ump䢱+iq[[sݰ€$oQرsxyܤؕᄲ?m./K Zba4g9GN6nxceBdqG&lywB ^ 沙ck8)¨dvy\v4 NXޅ,skb!@w("1!qhbϽ$s8k2oϋ#x8 ?f=qηil+W1ܘ8Ѕ8)Ȗbz0J쎶ݿdK2τV=C!x)5J)s:|m'C9M=trv#x/(68E<(Rjo8L>H?Ƈ׫[ w8fK2$e,3675Vfc5}Rj{0g#NT-|>L^*gÞqp6uFF 33% 5എ^ YÑ1΀p[BNk˩k}֋ 9-¯c;є("8CgT5"یHk] фddptoťrS|y11< ZW&ik&dRQl"[vil,R[84Ŷ.^#r"J }h,.B'Fx`j^5%KpY%ec17\w@IwוRK?w۷8gAp=Ley}mHIMͪA+g9"MwȕՓz-69n-pk8vmi}9ɑ(5J* :u^=NZƱ:+.,%rHIlBّP#F茺Cћ8XE՝%HQqHO m4a3y t7˥rõid56ɿ*bnP\F>¿DM`|j,.ZR}ŀ:u/PgԝPқ87Ξ&ٟ 8^R1h9@<|Iu.us<.Q0r qbޗ6-E-՜?(Gt1tO_\зo#@;Vf`i5\ޅ,?}_Ͼq\\spي'R($j8R!kؔ#ohjvGXC0 q"9qV)4+rS} trBJIЧFó,1xۋXnfK 5qk~ˡr {RxjGx,t;4ݦloWH|M]tMC־!|_F?AǍGzwl)%Y')1IIf=-2RVT7@eBw!y..m EuyVJ]3O`?*'xIB7R`MoBS FZNM7 qX~ZJ]h!.sb"_l]5t/v21Ez#vqڎן (&au;]Zݎ'ouW7n> Q|c{kUPVێh.*NJ68У57a/.Xtm>ԖK 
.cqKQ^SK翝ڍ6eV++"ّ4ݍj&Lzj)\f+w,`3$b98!Z`Bm}b%~kͷٰo<1yV2fRٶqMYSk$#=! LW]_T/W}j58TЏFP+-NǨWv}\:zttd45~R_.*3$@%8bQArL^ʤ]2q';1^)dV+Lwb ӅjIOSۈJ,l9YA%`5ݾk04ƲKcMNxִy>W*yՂHS^lCj^%= "q63D^BU$ t.UOL&;ի>Bt щ2dA#A S`jzQY_O-Wxj崑2֦Q$5V MC-|1րf@@<qY됶%1 -z>(EKNqdJ]l׬opG@F M dP*4 6f uO2&ƟksA#j&PhCP0*8xOSo=fwR>c,M^gV*+blce"6rd0Saθb,v >k3{wޚRa۠@PyU)ש෉ccD+|:ccꉎ]cj?~[-wځ#_xM9 v]K<ؚnߌ?]GQݲ6C?v`b>gb+xFwFend/6gDR4BPzM0VK2Onl>ow77oȎa2#4Gu܇e?Q»L;Y[ϯn`q</(wIm(l\730򗐡wd9 jl"2G5$g 4֟CLifO}9edGBvS+5lzL Miucm,>|O?(ݐӚ~2_FD YD|Bar4ng%]$ɟ3ua& >c$zXeXZ}WOID5Ovi,qYiJ:ɔ1x6IRˊWP p}6G/{2 j]ڸLcv lhMLjT@k%ў- ,~!z(hKǠ j{8Ja!Uɢ;_X.vٓ 3Vj7_2TA3}5zp0(>+ۺI(Lsc\#c"DmBCI+~bMu _|X6:A$x~rv-/6NGh-:qʫi|+/_2߶ZUژ(& ^m?}úm Ȑ]$.+mHlyg?ݍ1ve= */I-^&),(RXTUQǪa[T122e1\΃iw$$jv@]#o 6˄ a(-@S)3Yt8c 3,fX:6Ts _`6w'a;ju}LALeZJIo1 )VG/0[gf9pvOiI1Og%c w﫱-wydm=~0%L. HoL$Xduû MĮL%'M>ܗPq|[UB-WGE@єJ1{VKٞ1-Ipw'=-0IQEW&ko:[ް`Vv?zb+;1qXԫ{O'ě]VrYnm:MCGozZ"<˘M;lg-AϖH80Sg* 'fcZ3i4lF~?.o[g?Q90B[ RjQ99 mYaelJpkh U|p턡BDDeCJ D,hi>PU6&W_]19(D?_.)eh!f>@6EJ@J1j YrP$/ 2T}!, 2Cs*+l ט8qEd xq:h} C]S/r}XXv$=nCM_(z~BK=Q#H,1c+81M.YrLy.BM -P"2 G 0S<\̾@Pt7W*T+P~O|%Ugw?}*^h-jy,8oϾ?kB&'K:~,w9nN6mYrpg@k7*۵GF}K%"6*[ړFLp kxDc$N?Lxf6 |#f5KKA|N0 S}r`@+݉kl k6]܎Gߎ ӿ|mq?S%5*۞a 2Hm$Y^ab%1"b:ƫmm9e9c7`Y@~u h2j6e|* K``̷Φ veʼ;d9n *MO* Eۆڎf1T'NEk*Z)'T3};8k|C5 Rk.]߁"1گ g]m:PƠ\ݹ0daNo f2]'Kņ5FՕ7Ү%ʤV'6{ؙK'Po%&bMm.DcZۙȏE R_X!'5Id*yKCIt=ER̽n$[3[ɵYwϣOol-M:Vd_}t-n4z'V59j_5&}AFJ's7αR?p@6Q{wBT>_FHvzuG8~^5]sTk5/: zص]OuھTÕ+,d_gϘCCYl+7'~ nnzO]gͽ7̊ \SG}2^/;{b:dVBbdb c}5{kb JFOHgD*Shcl1>[ -%dN"O$!D$8Iɧ;8< 60-LqCWV:2s38J&ȇ:v\P(ƈ9Tuvj# 1JS MJn hZK.aaiTh4S:i,q4cU~C Ȇrn;|"h<2,Y:;C*(QHg:xRDjD l,Y=e,۲WT4Y\TmJ{_ UZHIiKռ(82RSZfҌc/||*sPB'9tҢ>'q15tc KِBI˱&yم`58Lq JGBhȴK 2JoHb*Tb> #o" c  S 1c+ S.a F&b%%"ԅCHL[I%`cUͦ CSY|[,xXB%Y?p."3˘yBQ+Q5g@Ur3M9;^5n^&.5 r("$S,Ч]XWX@``MF۶{Q t.}Z-h_Ҿ@ ٬ s!fYtg$+bҜuz,?U[^Ѵ͍P#5{FQ~eVߌ7.)2[ ʘ7-^k|YLRix%u, ;TPf_=n}T k*%eq${!9Fڳ>4Amӽuh926'5m I?~!rDgC 4!h5+-HBnB7a2?KC&U^b R/\ySRJGt1/f@gmK1CI^) C x˘{u Ts^پF+! 
5 )tKwf9:Ox8w{eCDV\׵v=>=@Z~ҮxwXM4 Rg*H%V.ZXN'Єi\e)p>'%XA:x@$-2')|$vuAKĹ FGS,BԐ'G-bu9jR sV疮~ާbVOc}͙J \srZԶ1WT8ұq=1'[S#5Gtm wjM= odwiG76/u|z{Жr ?}f+J&֧JJ3Kz'S͇KHG:ZòM]ᙡr HI \a`poy*Aځ5Vޞl-4ofs-#={Su膌Ѧ>GWr2JAf\5M3cdruj;Q"л%^cZl}J5+]rՇ' zeUF2FY 5݋ݱ K35n^nX7h.Ɋv8D;Tszܓ9ю* +*ftNp[suzcAyFeONTvqomue:F϶X uP8rSs<3I)*_$QJ]0]#o|Q*rlfh?=A tr.h:Ba2S+DCn(Y`Rh@[M9Z:1-x xn2,XL.yG)& R0#vcaW*^ fƂ@w&w胲f!j B)^El4k, F7& a';1E Rs JFsN]P)&Fcw(gD {CDQ0[!2^EmXPc 45*6w+]W.R mb֠$cZɑ"Kruo ̗Ar2 l3~EgSlveխ-,cmvկ*UtB:TVXAGNnT BЅ@1xfN m]/(2~@a& ջ"s~S rq҈2u0Ǿjc[h8 9ԍ^U1 > MegPa$N" SrI,sMP]p9kU!ZH6vDfB$N@-9WGDORI) e Hhroz &uL,"TF +I,?__ɯ,@`;ٙ[~R˜$EL}, j.}xlv7..}ȔHWJ"~_ډmz/ۡp4{g67vm:J*9hqEx*_t6/7CqhidjmQk[ a*]PuwvjkA{{~pՐTj(Q5/*E!]V pðk( ݞ W* EiLfu08:|EYM83G>kBrA66Zߕ[  l] VC 1Xm"<JI.%! _02vӶg5)ո'sQYLVT0&?nfr݂,d25 nshsXeAS%7\5;:fA?tl4ВQ+KbBF IZ24R/Yr<37 *>yݧ0 M*gFW㡌|YPtY..a4WRhmWT1k[EJr2FF~Z\wpBeU+ =Tb dBBI N-D0hb!Gܶo3b L2gͼ,ܙ\ool( KU" R!Ji2(tYϽ@їa yesezb-v[EG@*TڲphOLjXYsްɖU E_{x;q-Je j;ދ{x><Riʴm <͏LJZjV 䡀\4ro^'i1h; Ok5b-2e"J"kPσR66ӫ ǶQyǜv w hUQʣQK0T blE ࢐P*̗Fx2ye>?ir3Qt.H YaA/EVϋ(bѨ76V֢Y`6t΢uk Gi:[tF{ E1΃oiR?6>IUSfO] v7ASyIz8\ $:'c`i#Ge9zBV8ǙRX*VBh ZI㒴u&htf|hڍw`4zW#JЌ Nfc%sFb *]q,Ձ$,r$V9+Pc*PM61g2/%&ݹխe_A:m\ozyoUJ2w/hS*Nk2/%38!F- R?Tɼ)+&)_5J) kѵICUqtH?:XMCSt6Shn Kߎӂzz҅ePF,Oi'4ND >XLW>aߦK0Ewv9ٜHv2Xȕ K=ZC*OmAMRt!Z@QŚw<Cq@7-m%gG-NTS-cۭpg$#8 xԛ++&|\KCR_Y#Mf˵N+t E& daMW"FF=̗Jy:<:mj3;O:_ ZD9F>qq7a/ äääæ^F'T0Ԃw`7\J(B 13Er͍/O~1<w/N(LG%oC5I*,\/_̲dO6|Y8LKg Aqd,FVh⣋ƐXU1NGF-!@Dsa|[:B#Ki]gbR,mKS)-bi+ I&)7Im/,Y8y1lz^!r_f" &IjX2OFDVGAH2\b"ӎ\ + pJs{B$R{5=ދ?;Ӌ)6Sӹ83@D&NMxĶBJsli#ekE'<矊K{q6k1*FTQkѰtmG-۳(!=$??&1t|_AMqM#1!3w鏃t)Y >/gUzoIȎ_cs1-?De}Zm^ 7m?瘋?|9ۃZʜd)z S['0͢7=XH{.B򭨵_ Q5(*T 9:=~CP2Re FD83]!՝Χ"B>r3TLu񾢑WlQSZEWJs^qpDc=*nR;nW 梅6mU &DJFnbRV4JAZk.y"s)&f$ ȡͿv'ŭ7k%?ʅGy~U-A|^kVd6uWR Es E;?}o?}_{n ac":,T+B;د=oN s ?t=@ S!o W i(Sm4ٔTK4# FAҹwqt;mҮ6zB7?1jr> ~dnПWR^/1H  uRH0Boh<9!Tӹgʆ>Mi@]{Ӓ`/syZ}x2L!\E;锑-[7!XNwnsfLlݺ?˴n}hW.:%(=xsٺծ=ZNwn"Lܙu략iА ::Q&uϥv_+ݞ3š1H r8ר@5%V;ae޿c ʹ+QMi@+%3uzMSFD舍bs T̛\2#5?/P͔p[G=DK%t4jSKEsy.ucRTRKN 
QaU a*QXE`(x]K=Wb #G8J1.,pkʣ*IZV _)8聆Q7_e }X~dÆc^:3Kq,1`Sq21KT0Z_NCWd?\0Gӿ8'%dLBZq zKTFt|k,+)f12 f$,!/͆Z<1%9 *WaF+N\ p%+yes*V;Zq3ΐÙ tWGT+;XƃĬwtܕDj*F,h~`9B^9D0m#&4[) Yx֭+y:W6GG߾hai(;Qc}yz*76kǧ-7R7y.)dF=MՔ0qȍSi&PT F'R&PZSm:tQ &`JxjJ4J"Jl+y(MTSҬ_r@gj;̮p+V7JC)W|a$PJMJY҃_(e*L͕@A{&[K|c_.pi]̤}zo ]6sJ 1w o)?kŸ}E+ոyvy8蟺܆G>n?uME$WrGLMcKPXQd:PD3VDXŲfH( -"7U"9( CUJ7A/7LwJ])$倁pSصU2=d:|^m9(Q\7=?Vx'|^ <{RE%_ib3!͞ȕq'`fE1#""*m͞q1$gb0JbQb<%\DE~IYA\4gEu-^UK2dp0a}!]e`Jec$<)J Y1Jz՞ySټ7DyO сZ")='ZF srvӫAkeѻ^---Ҿ,]iA浌r^QzxNF)bʛ*JLPn`' L7n\xz@'s\ UXPY7SхmUټs^w0!@􏢍F( G#,DBi],qW056U$2j򎝖T퇌}Ψɫz+|5X=X,k.3\)#fD:O]-c# r P9\-,1NTaȚYq~r>~[~R4%7VOe$ٗ/_nw0i~̎?}F%͢u?A`Y҉f[!S-)vŶZh`׉J襊m;W~hz7& mTWstqyRo'Wחej:ptX֩V/M.ld]mzSwI3/-5ߞ;$$$4 4S ))p{HI6abU 9zᅫ:^$n( y`$%FIazmD>(gG8LMunҸjjVR9ZYsǑS4ǛT{ҼN\iE 1|(ĵJ*j+wB)J%F3!x)PI3D*cpZGY@zhGUB[aSS&'V9P@,)SP6wZЪL2#bҢM|I'B;}$8ohIDhjyTD cOQ}I`ΎҷPx1var/home/core/zuul-output/logs/kubelet.log0000644000000000000000006342102515136625072017707 0ustar rootrootJan 29 06:43:35 crc systemd[1]: Starting Kubernetes Kubelet... 
Jan 29 06:43:35 crc restorecon[4758]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to
system_u:object_r:container_file_t:s0:c138,c778 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c476,c820 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c263,c871 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc 
restorecon[4758]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 06:43:35 crc 
restorecon[4758]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 
06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 06:43:35 crc 
restorecon[4758]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]:
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 
crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 
06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 06:43:35 crc 
restorecon[4758]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc 
restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 
crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc 
restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 06:43:35 crc 
restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc 
restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc 
restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:43:35 crc restorecon[4758]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:43:35 crc restorecon[4758]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:43:36 crc restorecon[4758]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:43:36 crc restorecon[4758]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:43:36 crc restorecon[4758]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:43:36 crc restorecon[4758]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:43:36 crc restorecon[4758]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:43:36 crc restorecon[4758]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:43:36 crc 
restorecon[4758]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:43:36 crc restorecon[4758]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:43:36 crc restorecon[4758]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:43:36 crc restorecon[4758]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:43:36 crc restorecon[4758]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:43:36 crc restorecon[4758]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:43:36 crc restorecon[4758]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:43:36 crc restorecon[4758]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:43:36 crc restorecon[4758]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:43:36 crc restorecon[4758]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 29 06:43:36 crc restorecon[4758]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 06:43:36 crc restorecon[4758]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 06:43:36 crc restorecon[4758]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 06:43:36 crc restorecon[4758]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 06:43:36 crc restorecon[4758]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 06:43:36 crc restorecon[4758]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 06:43:36 crc restorecon[4758]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 06:43:36 crc restorecon[4758]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 29 06:43:36 crc restorecon[4758]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 06:43:36 crc restorecon[4758]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 06:43:36 crc restorecon[4758]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 06:43:36 crc restorecon[4758]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 29 06:43:36 crc kubenswrapper[4826]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 06:43:36 crc kubenswrapper[4826]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 29 06:43:36 crc kubenswrapper[4826]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 06:43:36 crc kubenswrapper[4826]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 06:43:36 crc kubenswrapper[4826]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 06:43:36 crc kubenswrapper[4826]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.524819 4826 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.536994 4826 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537060 4826 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537072 4826 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537083 4826 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537092 4826 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537101 4826 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537111 4826 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537122 4826 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537135 4826 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. 
It will be removed in a future release. Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537150 4826 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537162 4826 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537172 4826 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537181 4826 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537190 4826 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537200 4826 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537212 4826 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537232 4826 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537244 4826 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537254 4826 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537265 4826 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537274 4826 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537283 4826 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537323 4826 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537334 4826 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537345 4826 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537354 4826 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537363 4826 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537371 4826 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537381 4826 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537389 4826 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537398 4826 feature_gate.go:330] unrecognized feature 
gate: InsightsOnDemandDataGather Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537408 4826 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537417 4826 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537426 4826 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537435 4826 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537444 4826 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537455 4826 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537467 4826 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537479 4826 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537488 4826 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537499 4826 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537509 4826 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537523 4826 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537532 4826 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537543 4826 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. 
It will be removed in a future release. Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537555 4826 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537566 4826 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537575 4826 feature_gate.go:330] unrecognized feature gate: Example Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537584 4826 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537594 4826 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537603 4826 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537612 4826 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537622 4826 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537630 4826 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537638 4826 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537647 4826 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537656 4826 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537664 4826 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537674 4826 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 06:43:36 crc kubenswrapper[4826]: 
W0129 06:43:36.537683 4826 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537691 4826 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537701 4826 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537711 4826 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537720 4826 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537729 4826 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537738 4826 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537747 4826 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537756 4826 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537764 4826 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537772 4826 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.537781 4826 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.537985 4826 flags.go:64] FLAG: --address="0.0.0.0" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538011 4826 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538027 4826 flags.go:64] FLAG: --anonymous-auth="true" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 
06:43:36.538042 4826 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538055 4826 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538066 4826 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538082 4826 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538095 4826 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538105 4826 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538115 4826 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538127 4826 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538143 4826 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538156 4826 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538166 4826 flags.go:64] FLAG: --cgroup-root="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538176 4826 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538186 4826 flags.go:64] FLAG: --client-ca-file="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538196 4826 flags.go:64] FLAG: --cloud-config="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538206 4826 flags.go:64] FLAG: --cloud-provider="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538216 4826 flags.go:64] FLAG: --cluster-dns="[]" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538229 4826 flags.go:64] FLAG: --cluster-domain="" 
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538238 4826 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538249 4826 flags.go:64] FLAG: --config-dir=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538259 4826 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538270 4826 flags.go:64] FLAG: --container-log-max-files="5"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538283 4826 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538327 4826 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538339 4826 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538349 4826 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538360 4826 flags.go:64] FLAG: --contention-profiling="false"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538370 4826 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538380 4826 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538391 4826 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538402 4826 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538415 4826 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538426 4826 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538445 4826 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538455 4826 flags.go:64] FLAG: --enable-load-reader="false"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538466 4826 flags.go:64] FLAG: --enable-server="true"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538476 4826 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538489 4826 flags.go:64] FLAG: --event-burst="100"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538500 4826 flags.go:64] FLAG: --event-qps="50"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538546 4826 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538557 4826 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538567 4826 flags.go:64] FLAG: --eviction-hard=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538581 4826 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538591 4826 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538601 4826 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538615 4826 flags.go:64] FLAG: --eviction-soft=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538625 4826 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538635 4826 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538646 4826 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538656 4826 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538665 4826 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538675 4826 flags.go:64] FLAG: --fail-swap-on="true"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538687 4826 flags.go:64] FLAG: --feature-gates=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538700 4826 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538710 4826 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538720 4826 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538731 4826 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538741 4826 flags.go:64] FLAG: --healthz-port="10248"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538751 4826 flags.go:64] FLAG: --help="false"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538761 4826 flags.go:64] FLAG: --hostname-override=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538770 4826 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538781 4826 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538791 4826 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538801 4826 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538812 4826 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538828 4826 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538839 4826 flags.go:64] FLAG: --image-service-endpoint=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538849 4826 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538859 4826 flags.go:64] FLAG: --kube-api-burst="100"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538869 4826 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538879 4826 flags.go:64] FLAG: --kube-api-qps="50"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538889 4826 flags.go:64] FLAG: --kube-reserved=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538899 4826 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538909 4826 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538920 4826 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538929 4826 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538940 4826 flags.go:64] FLAG: --lock-file=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538950 4826 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538961 4826 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.538974 4826 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539008 4826 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539023 4826 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539039 4826 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539053 4826 flags.go:64] FLAG: --logging-format="text"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539066 4826 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539077 4826 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539088 4826 flags.go:64] FLAG: --manifest-url=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539099 4826 flags.go:64] FLAG: --manifest-url-header=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539112 4826 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539122 4826 flags.go:64] FLAG: --max-open-files="1000000"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539135 4826 flags.go:64] FLAG: --max-pods="110"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539145 4826 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539157 4826 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539167 4826 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539177 4826 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539187 4826 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539197 4826 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539210 4826 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539235 4826 flags.go:64] FLAG: --node-status-max-images="50"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539246 4826 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539256 4826 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539267 4826 flags.go:64] FLAG: --pod-cidr=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539277 4826 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539339 4826 flags.go:64] FLAG: --pod-manifest-path=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539350 4826 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539360 4826 flags.go:64] FLAG: --pods-per-core="0"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539370 4826 flags.go:64] FLAG: --port="10250"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539381 4826 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539390 4826 flags.go:64] FLAG: --provider-id=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539402 4826 flags.go:64] FLAG: --qos-reserved=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539412 4826 flags.go:64] FLAG: --read-only-port="10255"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539423 4826 flags.go:64] FLAG: --register-node="true"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539441 4826 flags.go:64] FLAG: --register-schedulable="true"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539452 4826 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539469 4826 flags.go:64] FLAG: --registry-burst="10"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539479 4826 flags.go:64] FLAG: --registry-qps="5"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539489 4826 flags.go:64] FLAG: --reserved-cpus=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539501 4826 flags.go:64] FLAG: --reserved-memory=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539515 4826 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539525 4826 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539535 4826 flags.go:64] FLAG: --rotate-certificates="false"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539545 4826 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539555 4826 flags.go:64] FLAG: --runonce="false"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539565 4826 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539575 4826 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539585 4826 flags.go:64] FLAG: --seccomp-default="false"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539595 4826 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539605 4826 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539616 4826 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539627 4826 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539637 4826 flags.go:64] FLAG: --storage-driver-password="root"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539647 4826 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539657 4826 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539667 4826 flags.go:64] FLAG: --storage-driver-user="root"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539677 4826 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539687 4826 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539697 4826 flags.go:64] FLAG: --system-cgroups=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539707 4826 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539724 4826 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539734 4826 flags.go:64] FLAG: --tls-cert-file=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539744 4826 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539757 4826 flags.go:64] FLAG: --tls-min-version=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539767 4826 flags.go:64] FLAG: --tls-private-key-file=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539777 4826 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539787 4826 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539797 4826 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539807 4826 flags.go:64] FLAG: --v="2"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539821 4826 flags.go:64] FLAG: --version="false"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539835 4826 flags.go:64] FLAG: --vmodule=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539848 4826 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.539859 4826 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540122 4826 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540136 4826 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540150 4826 feature_gate.go:330] unrecognized feature gate: Example
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540160 4826 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540170 4826 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540181 4826 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540189 4826 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540198 4826 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540207 4826 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540216 4826 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540226 4826 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540234 4826 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540244 4826 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540253 4826 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540264 4826 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540275 4826 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540287 4826 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540331 4826 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540344 4826 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540356 4826 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540367 4826 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540378 4826 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540390 4826 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540400 4826 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540413 4826 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540422 4826 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540431 4826 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540439 4826 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540455 4826 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540463 4826 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540473 4826 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540481 4826 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540490 4826 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540498 4826 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540506 4826 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540515 4826 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540525 4826 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540533 4826 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540543 4826 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540552 4826 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540561 4826 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540569 4826 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540578 4826 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540587 4826 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540596 4826 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540605 4826 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540614 4826 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540622 4826 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540631 4826 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540639 4826 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540648 4826 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540660 4826 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540672 4826 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540684 4826 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540699 4826 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540711 4826 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540723 4826 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540734 4826 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540748 4826 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540762 4826 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540780 4826 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540792 4826 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540803 4826 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540814 4826 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540825 4826 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540835 4826 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540845 4826 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540853 4826 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540861 4826 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540869 4826 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.540877 4826 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.540908 4826 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.555477 4826 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.555573 4826 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.555700 4826 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.555716 4826 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.555724 4826 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.555731 4826 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.555741 4826 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.555749 4826 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.555756 4826 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.555764 4826 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.555772 4826 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.555780 4826 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.555789 4826 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.555795 4826 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.555802 4826 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.555810 4826 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.555818 4826 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.555825 4826 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.555832 4826 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.555839 4826 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.555846 4826 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.555852 4826 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.555861 4826 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.555870 4826 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.555876 4826 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.555882 4826 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.555888 4826 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.555894 4826 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.555900 4826 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.555905 4826 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.555911 4826 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.555918 4826 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.555924 4826 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.555930 4826 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.555936 4826 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.555941 4826 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.555947 4826 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.555952 4826 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.555958 4826 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.555964 4826 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.555974 4826 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556013 4826 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556022 4826 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556027 4826 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556033 4826 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556039 4826 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556046 4826 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556053 4826 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556059 4826 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556066 4826 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556071 4826 feature_gate.go:330] unrecognized feature gate: Example
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556077 4826 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556083 4826 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556092 4826 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556098 4826 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556106 4826 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556111 4826 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556117 4826 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556122 4826 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556128 4826 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556133 4826 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556138 4826 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556144 4826 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556150 4826 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556155 4826 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556160 4826 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556166 4826 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556173 4826 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556178 4826 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556186 4826 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556192 4826 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556199 4826 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556205 4826 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.556215 4826 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556435 4826 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556443 4826 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556449 4826 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556455 4826 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556460 4826 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556466 4826 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556472 4826 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556477 4826 
feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556482 4826 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556488 4826 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556493 4826 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556499 4826 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556504 4826 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556511 4826 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556517 4826 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556522 4826 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556528 4826 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556533 4826 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556540 4826 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556547 4826 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556554 4826 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556560 4826 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556566 4826 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556573 4826 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556578 4826 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556584 4826 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556589 4826 feature_gate.go:330] unrecognized feature gate: Example Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556595 4826 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556600 4826 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556607 4826 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556613 4826 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556619 4826 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556626 4826 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 
06:43:36.556634 4826 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556641 4826 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556648 4826 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556655 4826 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556662 4826 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556668 4826 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556678 4826 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556686 4826 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556694 4826 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556702 4826 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556709 4826 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556717 4826 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556724 4826 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556732 4826 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556738 4826 feature_gate.go:330] unrecognized 
feature gate: PlatformOperators Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556745 4826 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556754 4826 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556762 4826 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556770 4826 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556778 4826 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556785 4826 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556792 4826 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556798 4826 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556803 4826 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556808 4826 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556815 4826 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556821 4826 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556828 4826 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556835 4826 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 06:43:36 crc kubenswrapper[4826]: 
W0129 06:43:36.556841 4826 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556847 4826 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556852 4826 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556859 4826 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556864 4826 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556870 4826 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556877 4826 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556885 4826 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.556892 4826 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.556901 4826 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.557197 4826 server.go:940] "Client rotation is on, will bootstrap in background" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.562781 4826 
bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.562914 4826 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.564477 4826 server.go:997] "Starting client certificate rotation" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.564510 4826 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.564736 4826 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-14 17:29:31.414464929 +0000 UTC Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.564852 4826 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.592841 4826 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 29 06:43:36 crc kubenswrapper[4826]: E0129 06:43:36.596100 4826 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.597261 4826 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.622765 4826 log.go:25] "Validated CRI v1 runtime API" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.661691 4826 log.go:25] 
"Validated CRI v1 image API" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.664806 4826 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.671886 4826 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-29-06-38-44-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.671947 4826 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.692511 4826 manager.go:217] Machine: {Timestamp:2026-01-29 06:43:36.689961477 +0000 UTC m=+0.551754586 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:8978a1d2-9b20-4f3c-a5b9-0aed7eb7584e BootID:905e3489-492d-4437-968b-82f79ce0edd7 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp 
DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:72:60:b2 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:72:60:b2 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:af:8d:f9 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:59:8d:41 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:4b:39:91 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:70:d4:18 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:2f:f0:5b Speed:-1 Mtu:1496} {Name:eth10 MacAddress:ce:c8:59:7d:f8:bc Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:b6:56:ff:58:9b:6e Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: 
DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 
Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.692852 4826 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.693158 4826 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.695249 4826 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.695529 4826 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.695584 4826 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.695886 4826 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.695903 4826 container_manager_linux.go:303] "Creating device plugin manager" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.696518 4826 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.696564 4826 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.696848 4826 state_mem.go:36] "Initialized new in-memory state store" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.696969 4826 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.700272 4826 kubelet.go:418] "Attempting to sync node with API server" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.700326 4826 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.700359 4826 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.700380 4826 kubelet.go:324] "Adding apiserver pod source" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.700401 4826 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.704434 4826 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.705417 4826 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.705496 4826 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 
38.102.83.173:6443: connect: connection refused Jan 29 06:43:36 crc kubenswrapper[4826]: E0129 06:43:36.705569 4826 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.705615 4826 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Jan 29 06:43:36 crc kubenswrapper[4826]: E0129 06:43:36.705596 4826 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.708988 4826 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.710800 4826 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.710853 4826 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.710888 4826 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.710910 4826 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.710967 4826 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.710988 4826 
plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.711002 4826 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.711024 4826 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.711049 4826 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.711066 4826 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.711103 4826 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.711140 4826 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.713084 4826 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.714154 4826 server.go:1280] "Started kubelet" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.716574 4826 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.717678 4826 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.718347 4826 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 06:43:36 crc systemd[1]: Started Kubernetes Kubelet. 
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.722112 4826 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.725663 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.726129 4826 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.726359 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 05:49:25.588191927 +0000 UTC Jan 29 06:43:36 crc kubenswrapper[4826]: E0129 06:43:36.727418 4826 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.727658 4826 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.727785 4826 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.728012 4826 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 06:43:36 crc kubenswrapper[4826]: E0129 06:43:36.730213 4826 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="200ms" Jan 29 06:43:36 crc kubenswrapper[4826]: E0129 06:43:36.726762 4826 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.173:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f20983741b3cd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] 
[] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 06:43:36.714064845 +0000 UTC m=+0.575857954,LastTimestamp:2026-01-29 06:43:36.714064845 +0000 UTC m=+0.575857954,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.730812 4826 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.731005 4826 server.go:460] "Adding debug handlers to kubelet server"
Jan 29 06:43:36 crc kubenswrapper[4826]: E0129 06:43:36.731342 4826 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.730876 4826 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.731415 4826 factory.go:55] Registering systemd factory
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.731435 4826 factory.go:221] Registration of the systemd container factory successfully
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.731860 4826 factory.go:153] Registering CRI-O factory
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.731913 4826 factory.go:221] Registration of the crio container factory successfully
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.731984 4826 factory.go:103] Registering Raw factory
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.732016 4826 manager.go:1196] Started watching for new ooms in manager
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.735359 4826 manager.go:319] Starting recovery of all containers
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.744277 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.744373 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.744398 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.744426 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.744455 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.744475 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.744496 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.744523 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.744589 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.744606 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.744632 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.744662 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.744689 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.744717 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.744734 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.744759 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.744866 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.744887 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.744907 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.744926 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.744946 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.744972 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.744992 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.745017 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.745071 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.745525 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.745600 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.745623 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.745645 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.745673 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.745736 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.745782 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.745819 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.745838 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.745856 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.745874 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.745892 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.745914 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.745932 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.745950 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.745981 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.746006 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.746024 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.746042 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.746070 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.746107 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.746135 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.746163 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.746193 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.746247 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.746445 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.746471 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.746566 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.746588 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.746609 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.746789 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.746927 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.746972 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.747001 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.747022 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.747049 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.747075 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.747180 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.747213 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.747286 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.747337 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.747383 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.747470 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.747560 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.747605 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.747630 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.747665 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.747712 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.747730 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.747761 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.747787 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.747813 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.747833 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.747860 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.747879 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.747916 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.747937 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.747962 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.747981 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.747999 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.748015 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.748035 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.748054 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.748072 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.748100 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.748119 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.748137 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.748159 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.748179 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.748206 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.748230 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.748250 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.748268 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.748287 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.748408 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.748428 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.748458 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.748486 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.748518 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.748556 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.748576 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.748597 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.748618 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.748639 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.748668 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.748687 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.748711 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.748731 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.748750 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.748768 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config"
seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.748788 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.748805 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.748841 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.748859 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.748893 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.748919 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: 
I0129 06:43:36.748950 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.748975 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.749003 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.749021 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.749039 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.749077 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.749106 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.749135 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.749153 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.749223 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.749372 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.749431 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.749457 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.749488 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.749519 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.749562 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.749581 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.749601 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.749623 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 29 
06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.749640 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.749657 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.749674 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.749701 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.749722 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.749772 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.749791 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.749811 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.749831 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.752071 4826 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.752131 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.752154 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.752175 4826 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.752202 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.752221 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.752240 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.752259 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.752279 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.752355 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.752379 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.752401 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.752429 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.752448 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.752518 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.752537 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.752554 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.752572 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.752589 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.752610 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.752632 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.752659 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.752677 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.752695 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.752720 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.752744 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.752762 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.752782 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.752802 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.752821 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.752840 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.752858 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.752893 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.752939 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.752965 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.752984 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.753003 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.753022 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.753039 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.753075 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.753094 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.753120 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.753147 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.753170 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.753188 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.753216 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" 
seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.753243 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.753263 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.753281 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.753338 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.753358 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.753375 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.753396 
4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.753415 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.753435 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.753555 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.753998 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.754088 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.754115 4826 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.754154 4826 reconstruct.go:97] "Volume reconstruction finished"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.754176 4826 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.780602 4826 manager.go:324] Recovery completed
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.792264 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.795762 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.795865 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.795886 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.802741 4826 cpu_manager.go:225] "Starting CPU manager" policy="none"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.802775 4826 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.802810 4826 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.804825 4826 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.807390 4826 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.807441 4826 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.807474 4826 kubelet.go:2335] "Starting kubelet main sync loop"
Jan 29 06:43:36 crc kubenswrapper[4826]: E0129 06:43:36.807530 4826 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 29 06:43:36 crc kubenswrapper[4826]: W0129 06:43:36.808192 4826 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused
Jan 29 06:43:36 crc kubenswrapper[4826]: E0129 06:43:36.808319 4826 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.817953 4826 policy_none.go:49] "None policy: Start"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.818947 4826 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.818998 4826 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 06:43:36 crc kubenswrapper[4826]: E0129 06:43:36.828496 4826 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.884077 4826 manager.go:334] "Starting Device Plugin manager"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.884260 4826 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.884324 4826 server.go:79] "Starting device plugin registration server"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.884929 4826 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.884945 4826 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.885194 4826 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.885694 4826 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.885756 4826 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 06:43:36 crc kubenswrapper[4826]: E0129 06:43:36.895678 4826 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.907885 4826 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.908036 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.909673 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.909753 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.909780 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.910093 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.910605 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.910896 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.912441 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.912511 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.912537 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.912696 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.912728 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.912751 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.912851 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.912993 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.913059 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.914587 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.914608 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.914825 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.914851 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.914652 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.914935 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.915225 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.915413 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.915458 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.917041 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.917093 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.917105 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.917325 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.917408 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.917448 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.918178 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.918218 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.918231 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.918481 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.918521 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.918536 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.918696 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.918755 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.919181 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.919218 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.919228 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.920618 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.920646 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.920655 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:43:36 crc kubenswrapper[4826]: E0129 06:43:36.931406 4826 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="400ms"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.957837 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.958444 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.958505 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.958552 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.958593 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.958631 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.958672 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.958714 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.958780 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.958819 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.958852 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.958890 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.958932 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.958971 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.959044 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.985443 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.988250 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.988407 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.988436 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:43:36 crc kubenswrapper[4826]: I0129 06:43:36.988491 4826 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 29 06:43:36 crc kubenswrapper[4826]: E0129 06:43:36.989393 4826 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.173:6443: connect: connection refused" node="crc"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.060660 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.060838 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.060965 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.060962 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.061159 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.060991 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.061563 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.061406 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.061870 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.062086 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.062263 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.062402 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.062584 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.062890 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.063007 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.063135 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.063220 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.063372 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.063645 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.063502 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.063951 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.064059 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.064183 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.064350 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.064749 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.064604 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.065033 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.064918 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.065218 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.065387 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.189781 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.191786 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.191997 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.192136 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.192355 4826 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 29 06:43:37 crc kubenswrapper[4826]: E0129 06:43:37.193134 4826 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.173:6443: connect: connection refused" node="crc"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.257286 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.269830 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.305186 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.310734 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.315355 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 06:43:37 crc kubenswrapper[4826]: W0129 06:43:37.317956 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-b9d7e53afc7e25494c24a094df0d6c5e10611a877f804cac4b46ca224573da94 WatchSource:0}: Error finding container b9d7e53afc7e25494c24a094df0d6c5e10611a877f804cac4b46ca224573da94: Status 404 returned error can't find the container with id b9d7e53afc7e25494c24a094df0d6c5e10611a877f804cac4b46ca224573da94
Jan 29 06:43:37 crc kubenswrapper[4826]: W0129 06:43:37.328733 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-f6ea53f05b51af8b56a6f9cbb91e30fff9515f8161374397bd7d8394a5be8371 WatchSource:0}: Error finding container f6ea53f05b51af8b56a6f9cbb91e30fff9515f8161374397bd7d8394a5be8371: Status 404 returned error can't find the container with id f6ea53f05b51af8b56a6f9cbb91e30fff9515f8161374397bd7d8394a5be8371
Jan 29 06:43:37 crc kubenswrapper[4826]: E0129 06:43:37.332267 4826 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="800ms"
Jan 29 06:43:37 crc kubenswrapper[4826]: W0129 06:43:37.353687 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-742ae572f0a04859dd1179bed83b3f98b2b44931fa326fa3e93e8b6022849b72 WatchSource:0}: Error finding container 742ae572f0a04859dd1179bed83b3f98b2b44931fa326fa3e93e8b6022849b72: Status 404 returned error can't find the container with id 742ae572f0a04859dd1179bed83b3f98b2b44931fa326fa3e93e8b6022849b72
Jan 29 06:43:37 crc kubenswrapper[4826]: W0129 06:43:37.354826 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-c2a1d1f4c29149b70d5f0506d59f0411d6930499c2e11d89ff495f1747fefa2b WatchSource:0}: Error finding container c2a1d1f4c29149b70d5f0506d59f0411d6930499c2e11d89ff495f1747fefa2b: Status 404 returned error can't find the container with id c2a1d1f4c29149b70d5f0506d59f0411d6930499c2e11d89ff495f1747fefa2b
Jan 29 06:43:37 crc kubenswrapper[4826]: W0129 06:43:37.357444 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-2b7abbb881d12ab2c659d830270b324292ae109530177d7a4fa1bf81ae51b8e7 WatchSource:0}: Error finding container 2b7abbb881d12ab2c659d830270b324292ae109530177d7a4fa1bf81ae51b8e7: Status 404 returned error can't find the container with id 2b7abbb881d12ab2c659d830270b324292ae109530177d7a4fa1bf81ae51b8e7
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.593825 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.595927 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.595968 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.595980 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.596007 4826 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 29 06:43:37 crc kubenswrapper[4826]: E0129
06:43:37.596658 4826 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.173:6443: connect: connection refused" node="crc" Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.718826 4826 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.726990 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 06:04:08.278907133 +0000 UTC Jan 29 06:43:37 crc kubenswrapper[4826]: W0129 06:43:37.761050 4826 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Jan 29 06:43:37 crc kubenswrapper[4826]: E0129 06:43:37.761169 4826 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Jan 29 06:43:37 crc kubenswrapper[4826]: W0129 06:43:37.795206 4826 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Jan 29 06:43:37 crc kubenswrapper[4826]: E0129 06:43:37.795416 4826 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: 
failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.813544 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"742ae572f0a04859dd1179bed83b3f98b2b44931fa326fa3e93e8b6022849b72"} Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.815282 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f6ea53f05b51af8b56a6f9cbb91e30fff9515f8161374397bd7d8394a5be8371"} Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.816760 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"b9d7e53afc7e25494c24a094df0d6c5e10611a877f804cac4b46ca224573da94"} Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.818325 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2b7abbb881d12ab2c659d830270b324292ae109530177d7a4fa1bf81ae51b8e7"} Jan 29 06:43:37 crc kubenswrapper[4826]: I0129 06:43:37.819516 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c2a1d1f4c29149b70d5f0506d59f0411d6930499c2e11d89ff495f1747fefa2b"} Jan 29 06:43:37 crc kubenswrapper[4826]: W0129 06:43:37.963068 4826 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Jan 29 06:43:37 crc kubenswrapper[4826]: E0129 06:43:37.963152 4826 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Jan 29 06:43:38 crc kubenswrapper[4826]: W0129 06:43:38.011147 4826 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Jan 29 06:43:38 crc kubenswrapper[4826]: E0129 06:43:38.011398 4826 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Jan 29 06:43:38 crc kubenswrapper[4826]: E0129 06:43:38.134127 4826 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="1.6s" Jan 29 06:43:38 crc kubenswrapper[4826]: I0129 06:43:38.397584 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:43:38 crc kubenswrapper[4826]: I0129 06:43:38.399283 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:43:38 
crc kubenswrapper[4826]: I0129 06:43:38.399353 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:43:38 crc kubenswrapper[4826]: I0129 06:43:38.399374 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:43:38 crc kubenswrapper[4826]: I0129 06:43:38.399403 4826 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 06:43:38 crc kubenswrapper[4826]: E0129 06:43:38.399829 4826 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.173:6443: connect: connection refused" node="crc" Jan 29 06:43:38 crc kubenswrapper[4826]: I0129 06:43:38.604198 4826 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 29 06:43:38 crc kubenswrapper[4826]: E0129 06:43:38.606009 4826 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Jan 29 06:43:38 crc kubenswrapper[4826]: I0129 06:43:38.718986 4826 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Jan 29 06:43:38 crc kubenswrapper[4826]: I0129 06:43:38.728144 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 07:58:48.339208028 +0000 UTC Jan 29 06:43:38 crc kubenswrapper[4826]: I0129 06:43:38.828829 4826 generic.go:334] "Generic (PLEG): 
container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="ac5d94df68e0024eb8a1bce853c4f7c2c7da0b4e59b79b7ad1fefff060ac3c8c" exitCode=0 Jan 29 06:43:38 crc kubenswrapper[4826]: I0129 06:43:38.828960 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:43:38 crc kubenswrapper[4826]: I0129 06:43:38.828959 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"ac5d94df68e0024eb8a1bce853c4f7c2c7da0b4e59b79b7ad1fefff060ac3c8c"} Jan 29 06:43:38 crc kubenswrapper[4826]: I0129 06:43:38.830845 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:43:38 crc kubenswrapper[4826]: I0129 06:43:38.830912 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:43:38 crc kubenswrapper[4826]: I0129 06:43:38.830937 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:43:38 crc kubenswrapper[4826]: I0129 06:43:38.834549 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fcd8cce464615767426dd78c948b03894eeccf28e4abd93cd66efcbaed2887b8"} Jan 29 06:43:38 crc kubenswrapper[4826]: I0129 06:43:38.834602 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"14c25a1a60e522b5c06e95368ba6cd2727c0b3bb0dc42a8ff78c6afea6d7d5b5"} Jan 29 06:43:38 crc kubenswrapper[4826]: I0129 06:43:38.834624 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"24eebf52ead4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f"} Jan 29 06:43:38 crc kubenswrapper[4826]: I0129 06:43:38.836786 4826 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="e2797dd065e70892b17073f3cf3b9be36a1e405a85a86b3f52b42d4805db80dd" exitCode=0 Jan 29 06:43:38 crc kubenswrapper[4826]: I0129 06:43:38.836887 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"e2797dd065e70892b17073f3cf3b9be36a1e405a85a86b3f52b42d4805db80dd"} Jan 29 06:43:38 crc kubenswrapper[4826]: I0129 06:43:38.837112 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:43:38 crc kubenswrapper[4826]: I0129 06:43:38.838635 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:43:38 crc kubenswrapper[4826]: I0129 06:43:38.838692 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:43:38 crc kubenswrapper[4826]: I0129 06:43:38.838719 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:43:38 crc kubenswrapper[4826]: I0129 06:43:38.839234 4826 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b" exitCode=0 Jan 29 06:43:38 crc kubenswrapper[4826]: I0129 06:43:38.839373 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b"} Jan 29 06:43:38 crc kubenswrapper[4826]: I0129 06:43:38.839397 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:43:38 crc kubenswrapper[4826]: I0129 06:43:38.840874 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:43:38 crc kubenswrapper[4826]: I0129 06:43:38.840920 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:43:38 crc kubenswrapper[4826]: I0129 06:43:38.840936 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:43:38 crc kubenswrapper[4826]: I0129 06:43:38.841428 4826 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="9da7187e186c945379293c474df17219c3483de409037610e618a148468eb291" exitCode=0 Jan 29 06:43:38 crc kubenswrapper[4826]: I0129 06:43:38.841473 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"9da7187e186c945379293c474df17219c3483de409037610e618a148468eb291"} Jan 29 06:43:38 crc kubenswrapper[4826]: I0129 06:43:38.841596 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:43:38 crc kubenswrapper[4826]: I0129 06:43:38.842958 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:43:38 crc kubenswrapper[4826]: I0129 06:43:38.842996 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:43:38 crc kubenswrapper[4826]: I0129 06:43:38.843014 4826 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:43:38 crc kubenswrapper[4826]: I0129 06:43:38.843784 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:43:38 crc kubenswrapper[4826]: I0129 06:43:38.845158 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:43:38 crc kubenswrapper[4826]: I0129 06:43:38.845205 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:43:38 crc kubenswrapper[4826]: I0129 06:43:38.845224 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:43:39 crc kubenswrapper[4826]: I0129 06:43:39.719442 4826 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Jan 29 06:43:39 crc kubenswrapper[4826]: I0129 06:43:39.728286 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 16:33:05.394913546 +0000 UTC Jan 29 06:43:39 crc kubenswrapper[4826]: E0129 06:43:39.734939 4826 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="3.2s" Jan 29 06:43:39 crc kubenswrapper[4826]: I0129 06:43:39.846971 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"2f1ef5698c2a7ab7f9271b23d26588313d4efdf9866dff2e5dfbe495de8ad6da"} Jan 29 06:43:39 crc kubenswrapper[4826]: I0129 
06:43:39.847045 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:43:39 crc kubenswrapper[4826]: I0129 06:43:39.848393 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:43:39 crc kubenswrapper[4826]: I0129 06:43:39.848437 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:43:39 crc kubenswrapper[4826]: I0129 06:43:39.848457 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:43:39 crc kubenswrapper[4826]: I0129 06:43:39.849671 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"07e746868738216535af20d17aab43ad1a4a1228bd78ee2907d46755e361569b"} Jan 29 06:43:39 crc kubenswrapper[4826]: I0129 06:43:39.849842 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:43:39 crc kubenswrapper[4826]: I0129 06:43:39.851266 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:43:39 crc kubenswrapper[4826]: I0129 06:43:39.851327 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:43:39 crc kubenswrapper[4826]: I0129 06:43:39.851339 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:43:39 crc kubenswrapper[4826]: I0129 06:43:39.853268 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0c432a0b0dd88840a13aec51ffb48857ec3d4c744a84a48852deccb7fc8422ac"} Jan 29 06:43:39 crc 
kubenswrapper[4826]: I0129 06:43:39.853357 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"69cfd89c00e517ce856d21fe9f9ab1014c40a6b7d9237740e75d756007b061d9"} Jan 29 06:43:39 crc kubenswrapper[4826]: I0129 06:43:39.853382 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2bf3ca8147e4012c2cb95e4ce01e17baa83040615adfbca0a88c05d446efc555"} Jan 29 06:43:39 crc kubenswrapper[4826]: I0129 06:43:39.853365 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:43:39 crc kubenswrapper[4826]: I0129 06:43:39.855138 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:43:39 crc kubenswrapper[4826]: I0129 06:43:39.855183 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:43:39 crc kubenswrapper[4826]: I0129 06:43:39.855203 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:43:39 crc kubenswrapper[4826]: I0129 06:43:39.857372 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0"} Jan 29 06:43:39 crc kubenswrapper[4826]: I0129 06:43:39.857415 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488"} Jan 29 06:43:39 crc kubenswrapper[4826]: I0129 
06:43:39.857436 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6"} Jan 29 06:43:39 crc kubenswrapper[4826]: I0129 06:43:39.857454 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add"} Jan 29 06:43:39 crc kubenswrapper[4826]: I0129 06:43:39.859774 4826 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="acfc6f35c3257ec787f9903dca199cd3bc62964538f389d4f64e5ba2e7aa937a" exitCode=0 Jan 29 06:43:39 crc kubenswrapper[4826]: I0129 06:43:39.859820 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"acfc6f35c3257ec787f9903dca199cd3bc62964538f389d4f64e5ba2e7aa937a"} Jan 29 06:43:39 crc kubenswrapper[4826]: I0129 06:43:39.859854 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:43:39 crc kubenswrapper[4826]: I0129 06:43:39.860740 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:43:39 crc kubenswrapper[4826]: I0129 06:43:39.860788 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:43:39 crc kubenswrapper[4826]: I0129 06:43:39.860806 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:43:39 crc kubenswrapper[4826]: W0129 06:43:39.900204 4826 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Jan 29 06:43:39 crc kubenswrapper[4826]: E0129 06:43:39.900382 4826 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Jan 29 06:43:40 crc kubenswrapper[4826]: I0129 06:43:40.000250 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:43:40 crc kubenswrapper[4826]: I0129 06:43:40.002100 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:43:40 crc kubenswrapper[4826]: I0129 06:43:40.002143 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:43:40 crc kubenswrapper[4826]: I0129 06:43:40.002154 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:43:40 crc kubenswrapper[4826]: I0129 06:43:40.002177 4826 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 06:43:40 crc kubenswrapper[4826]: E0129 06:43:40.002696 4826 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.173:6443: connect: connection refused" node="crc" Jan 29 06:43:40 crc kubenswrapper[4826]: I0129 06:43:40.059914 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 06:43:40 crc kubenswrapper[4826]: I0129 06:43:40.068566 4826 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 06:43:40 crc kubenswrapper[4826]: I0129 06:43:40.728509 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 22:24:44.744362412 +0000 UTC Jan 29 06:43:40 crc kubenswrapper[4826]: I0129 06:43:40.866222 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de"} Jan 29 06:43:40 crc kubenswrapper[4826]: I0129 06:43:40.866330 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:43:40 crc kubenswrapper[4826]: I0129 06:43:40.867416 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:43:40 crc kubenswrapper[4826]: I0129 06:43:40.867456 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:43:40 crc kubenswrapper[4826]: I0129 06:43:40.867475 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:43:40 crc kubenswrapper[4826]: I0129 06:43:40.869416 4826 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ddc2c75428aa584351f12eb3823ee27ec9d4db50a6070a0ceb81470fe95a7270" exitCode=0 Jan 29 06:43:40 crc kubenswrapper[4826]: I0129 06:43:40.869527 4826 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 06:43:40 crc kubenswrapper[4826]: I0129 06:43:40.869561 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:43:40 crc kubenswrapper[4826]: I0129 06:43:40.870187 4826 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach"
Jan 29 06:43:40 crc kubenswrapper[4826]: I0129 06:43:40.870763 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ddc2c75428aa584351f12eb3823ee27ec9d4db50a6070a0ceb81470fe95a7270"}
Jan 29 06:43:40 crc kubenswrapper[4826]: I0129 06:43:40.870901 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 06:43:40 crc kubenswrapper[4826]: I0129 06:43:40.871565 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 06:43:40 crc kubenswrapper[4826]: I0129 06:43:40.872460 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:43:40 crc kubenswrapper[4826]: I0129 06:43:40.872496 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:43:40 crc kubenswrapper[4826]: I0129 06:43:40.872512 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:43:40 crc kubenswrapper[4826]: I0129 06:43:40.873348 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:43:40 crc kubenswrapper[4826]: I0129 06:43:40.873346 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:43:40 crc kubenswrapper[4826]: I0129 06:43:40.873380 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:43:40 crc kubenswrapper[4826]: I0129 06:43:40.873382 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:43:40 crc kubenswrapper[4826]: I0129 06:43:40.873428 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:43:40 crc kubenswrapper[4826]: I0129 06:43:40.873440 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:43:40 crc kubenswrapper[4826]: I0129 06:43:40.873391 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:43:40 crc kubenswrapper[4826]: I0129 06:43:40.873475 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:43:40 crc kubenswrapper[4826]: I0129 06:43:40.873398 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:43:41 crc kubenswrapper[4826]: I0129 06:43:41.730236 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 08:17:29.452666936 +0000 UTC
Jan 29 06:43:41 crc kubenswrapper[4826]: I0129 06:43:41.879836 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e8c9a5c2ee2a73a6f22b118fb90225429594082fec3d1c0ecd97fe0992a93387"}
Jan 29 06:43:41 crc kubenswrapper[4826]: I0129 06:43:41.879880 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 06:43:41 crc kubenswrapper[4826]: I0129 06:43:41.879909 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"87531393a6bf7d0b723327e6dc538daa2ffde7738758806758a48d281bbb29de"}
Jan 29 06:43:41 crc kubenswrapper[4826]: I0129 06:43:41.879940 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"61494c2923f76eaf5c5ed77e63efb521ac48828c1e733c7c9e82cd342689b5d0"}
Jan 29 06:43:41 crc kubenswrapper[4826]: I0129 06:43:41.879976 4826 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 06:43:41 crc kubenswrapper[4826]: I0129 06:43:41.880051 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 06:43:41 crc kubenswrapper[4826]: I0129 06:43:41.879981 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 06:43:41 crc kubenswrapper[4826]: I0129 06:43:41.881386 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:43:41 crc kubenswrapper[4826]: I0129 06:43:41.881443 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:43:41 crc kubenswrapper[4826]: I0129 06:43:41.881463 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:43:41 crc kubenswrapper[4826]: I0129 06:43:41.881862 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:43:41 crc kubenswrapper[4826]: I0129 06:43:41.881938 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:43:41 crc kubenswrapper[4826]: I0129 06:43:41.881956 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:43:42 crc kubenswrapper[4826]: I0129 06:43:42.276709 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 06:43:42 crc kubenswrapper[4826]: I0129 06:43:42.730481 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 07:36:16.60330512 +0000 UTC
Jan 29 06:43:42 crc kubenswrapper[4826]: I0129 06:43:42.870641 4826 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 29 06:43:42 crc kubenswrapper[4826]: I0129 06:43:42.889818 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 06:43:42 crc kubenswrapper[4826]: I0129 06:43:42.890129 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8db588675a4f0cf922703bcb28ef13f77e9c0f7689bba2472ab4ff5a3fbe2975"}
Jan 29 06:43:42 crc kubenswrapper[4826]: I0129 06:43:42.890280 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 06:43:42 crc kubenswrapper[4826]: I0129 06:43:42.890396 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"40b9d2e22d8dfa98a0fcb3b87c24b605a43f57e9303c3f3e15d19e2c587744a3"}
Jan 29 06:43:42 crc kubenswrapper[4826]: I0129 06:43:42.891590 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:43:42 crc kubenswrapper[4826]: I0129 06:43:42.891642 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:43:42 crc kubenswrapper[4826]: I0129 06:43:42.891661 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:43:42 crc kubenswrapper[4826]: I0129 06:43:42.891912 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:43:42 crc kubenswrapper[4826]: I0129 06:43:42.891962 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:43:42 crc kubenswrapper[4826]: I0129 06:43:42.891981 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:43:43 crc kubenswrapper[4826]: I0129 06:43:43.203793 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 06:43:43 crc kubenswrapper[4826]: I0129 06:43:43.205824 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:43:43 crc kubenswrapper[4826]: I0129 06:43:43.205890 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:43:43 crc kubenswrapper[4826]: I0129 06:43:43.205907 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:43:43 crc kubenswrapper[4826]: I0129 06:43:43.205949 4826 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 29 06:43:43 crc kubenswrapper[4826]: I0129 06:43:43.730902 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 22:42:28.440668115 +0000 UTC
Jan 29 06:43:43 crc kubenswrapper[4826]: I0129 06:43:43.892609 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 06:43:43 crc kubenswrapper[4826]: I0129 06:43:43.892629 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 06:43:43 crc kubenswrapper[4826]: I0129 06:43:43.893846 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:43:43 crc kubenswrapper[4826]: I0129 06:43:43.893894 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:43:43 crc kubenswrapper[4826]: I0129 06:43:43.893913 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:43:43 crc kubenswrapper[4826]: I0129 06:43:43.894700 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:43:43 crc kubenswrapper[4826]: I0129 06:43:43.894744 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:43:43 crc kubenswrapper[4826]: I0129 06:43:43.894762 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:43:43 crc kubenswrapper[4826]: I0129 06:43:43.917436 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 06:43:43 crc kubenswrapper[4826]: I0129 06:43:43.917640 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 06:43:43 crc kubenswrapper[4826]: I0129 06:43:43.918683 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:43:43 crc kubenswrapper[4826]: I0129 06:43:43.918710 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:43:43 crc kubenswrapper[4826]: I0129 06:43:43.918722 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:43:44 crc kubenswrapper[4826]: I0129 06:43:44.291734 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 06:43:44 crc kubenswrapper[4826]: I0129 06:43:44.387113 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Jan 29 06:43:44 crc kubenswrapper[4826]: I0129 06:43:44.731547 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 23:26:00.96543512 +0000 UTC
Jan 29 06:43:44 crc kubenswrapper[4826]: I0129 06:43:44.894736 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 06:43:44 crc kubenswrapper[4826]: I0129 06:43:44.894857 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 06:43:44 crc kubenswrapper[4826]: I0129 06:43:44.896598 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:43:44 crc kubenswrapper[4826]: I0129 06:43:44.896631 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:43:44 crc kubenswrapper[4826]: I0129 06:43:44.896660 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:43:44 crc kubenswrapper[4826]: I0129 06:43:44.896672 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:43:44 crc kubenswrapper[4826]: I0129 06:43:44.896685 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:43:44 crc kubenswrapper[4826]: I0129 06:43:44.896695 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:43:45 crc kubenswrapper[4826]: I0129 06:43:45.732084 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 06:12:50.890822073 +0000 UTC
Jan 29 06:43:46 crc kubenswrapper[4826]: I0129 06:43:46.524273 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 06:43:46 crc kubenswrapper[4826]: I0129 06:43:46.524573 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 06:43:46 crc kubenswrapper[4826]: I0129 06:43:46.526338 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:43:46 crc kubenswrapper[4826]: I0129 06:43:46.526413 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:43:46 crc kubenswrapper[4826]: I0129 06:43:46.526433 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:43:46 crc kubenswrapper[4826]: I0129 06:43:46.733157 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 10:43:13.576669683 +0000 UTC
Jan 29 06:43:46 crc kubenswrapper[4826]: E0129 06:43:46.895867 4826 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 29 06:43:47 crc kubenswrapper[4826]: I0129 06:43:47.733462 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 11:07:42.132352771 +0000 UTC
Jan 29 06:43:48 crc kubenswrapper[4826]: I0129 06:43:48.451286 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 06:43:48 crc kubenswrapper[4826]: I0129 06:43:48.451572 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 06:43:48 crc kubenswrapper[4826]: I0129 06:43:48.453684 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:43:48 crc kubenswrapper[4826]: I0129 06:43:48.453739 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:43:48 crc kubenswrapper[4826]: I0129 06:43:48.453763 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:43:48 crc kubenswrapper[4826]: I0129 06:43:48.457487 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 06:43:48 crc kubenswrapper[4826]: I0129 06:43:48.733932 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 09:22:58.559930407 +0000 UTC
Jan 29 06:43:48 crc kubenswrapper[4826]: I0129 06:43:48.905903 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 06:43:48 crc kubenswrapper[4826]: I0129 06:43:48.907266 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:43:48 crc kubenswrapper[4826]: I0129 06:43:48.907338 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:43:48 crc kubenswrapper[4826]: I0129 06:43:48.907350 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:43:49 crc kubenswrapper[4826]: I0129 06:43:49.537883 4826 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 29 06:43:49 crc kubenswrapper[4826]: I0129 06:43:49.537973 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 29 06:43:49 crc kubenswrapper[4826]: I0129 06:43:49.620243 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 06:43:49 crc kubenswrapper[4826]: I0129 06:43:49.734870 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 12:51:41.70966082 +0000 UTC
Jan 29 06:43:49 crc kubenswrapper[4826]: I0129 06:43:49.908426 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 06:43:49 crc kubenswrapper[4826]: I0129 06:43:49.909734 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:43:49 crc kubenswrapper[4826]: I0129 06:43:49.909828 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:43:49 crc kubenswrapper[4826]: I0129 06:43:49.909848 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:43:50 crc kubenswrapper[4826]: W0129 06:43:50.418830 4826 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 29 06:43:50 crc kubenswrapper[4826]: I0129 06:43:50.418928 4826 trace.go:236] Trace[786333870]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 06:43:40.417) (total time: 10001ms):
Jan 29 06:43:50 crc kubenswrapper[4826]: Trace[786333870]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (06:43:50.418)
Jan 29 06:43:50 crc kubenswrapper[4826]: Trace[786333870]: [10.00122799s] [10.00122799s] END
Jan 29 06:43:50 crc kubenswrapper[4826]: E0129 06:43:50.418951 4826 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 29 06:43:50 crc kubenswrapper[4826]: W0129 06:43:50.579577 4826 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 29 06:43:50 crc kubenswrapper[4826]: I0129 06:43:50.579701 4826 trace.go:236] Trace[655065097]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 06:43:40.578) (total time: 10001ms):
Jan 29 06:43:50 crc kubenswrapper[4826]: Trace[655065097]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (06:43:50.579)
Jan 29 06:43:50 crc kubenswrapper[4826]: Trace[655065097]: [10.001383314s] [10.001383314s] END
Jan 29 06:43:50 crc kubenswrapper[4826]: E0129 06:43:50.579735 4826 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 29 06:43:50 crc kubenswrapper[4826]: W0129 06:43:50.659847 4826 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 29 06:43:50 crc kubenswrapper[4826]: I0129 06:43:50.659965 4826 trace.go:236] Trace[538378890]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 06:43:40.658) (total time: 10001ms):
Jan 29 06:43:50 crc kubenswrapper[4826]: Trace[538378890]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (06:43:50.659)
Jan 29 06:43:50 crc kubenswrapper[4826]: Trace[538378890]: [10.001185139s] [10.001185139s] END
Jan 29 06:43:50 crc kubenswrapper[4826]: E0129 06:43:50.660002 4826 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 29 06:43:50 crc kubenswrapper[4826]: I0129 06:43:50.718929 4826 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Jan 29 06:43:50 crc kubenswrapper[4826]: I0129 06:43:50.735542 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 06:50:26.952105501 +0000 UTC
Jan 29 06:43:51 crc kubenswrapper[4826]: I0129 06:43:51.260849 4826 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 29 06:43:51 crc kubenswrapper[4826]: I0129 06:43:51.260948 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 29 06:43:51 crc kubenswrapper[4826]: I0129 06:43:51.271127 4826 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 29 06:43:51 crc kubenswrapper[4826]: I0129 06:43:51.271180 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 29 06:43:51 crc kubenswrapper[4826]: I0129 06:43:51.417245 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 29 06:43:51 crc kubenswrapper[4826]: I0129 06:43:51.417427 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 06:43:51 crc kubenswrapper[4826]: I0129 06:43:51.418330 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:43:51 crc kubenswrapper[4826]: I0129 06:43:51.418356 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:43:51 crc kubenswrapper[4826]: I0129 06:43:51.418365 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:43:51 crc kubenswrapper[4826]: I0129 06:43:51.459614 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Jan 29 06:43:51 crc kubenswrapper[4826]: I0129 06:43:51.735682 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 07:57:56.464227444 +0000 UTC
Jan 29 06:43:51 crc kubenswrapper[4826]: I0129 06:43:51.912790 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 06:43:51 crc kubenswrapper[4826]: I0129 06:43:51.913570 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:43:51 crc kubenswrapper[4826]: I0129 06:43:51.913603 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:43:51 crc kubenswrapper[4826]: I0129 06:43:51.913615 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:43:51 crc kubenswrapper[4826]: I0129 06:43:51.934471 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Jan 29 06:43:52 crc kubenswrapper[4826]: I0129 06:43:52.621181 4826 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 29 06:43:52 crc kubenswrapper[4826]: I0129 06:43:52.621420 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 29 06:43:52 crc kubenswrapper[4826]: I0129 06:43:52.735820 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 13:03:10.104919436 +0000 UTC
Jan 29 06:43:52 crc kubenswrapper[4826]: I0129 06:43:52.916002 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 06:43:52 crc kubenswrapper[4826]: I0129 06:43:52.917554 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:43:52 crc kubenswrapper[4826]: I0129 06:43:52.917611 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:43:52 crc kubenswrapper[4826]: I0129 06:43:52.917630 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:43:53 crc kubenswrapper[4826]: I0129 06:43:53.736214 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 17:19:01.86971141 +0000 UTC
Jan 29 06:43:53 crc kubenswrapper[4826]: I0129 06:43:53.985970 4826 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 29 06:43:54 crc kubenswrapper[4826]: I0129 06:43:54.300697 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 06:43:54 crc kubenswrapper[4826]: I0129 06:43:54.300931 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 06:43:54 crc kubenswrapper[4826]: I0129 06:43:54.302368 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:43:54 crc kubenswrapper[4826]: I0129 06:43:54.302426 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:43:54 crc kubenswrapper[4826]: I0129 06:43:54.302446 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:43:54 crc kubenswrapper[4826]: I0129 06:43:54.307941 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 06:43:54 crc kubenswrapper[4826]: I0129 06:43:54.519769 4826 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 29 06:43:54 crc kubenswrapper[4826]: I0129 06:43:54.713241 4826 apiserver.go:52] "Watching apiserver"
Jan 29 06:43:54 crc kubenswrapper[4826]: I0129 06:43:54.720733 4826 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 29 06:43:54 crc kubenswrapper[4826]: I0129 06:43:54.721164 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"]
Jan 29 06:43:54 crc kubenswrapper[4826]: I0129 06:43:54.721734 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 29 06:43:54 crc kubenswrapper[4826]: I0129 06:43:54.721779 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 06:43:54 crc kubenswrapper[4826]: E0129 06:43:54.721878 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 06:43:54 crc kubenswrapper[4826]: I0129 06:43:54.721915 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 29 06:43:54 crc kubenswrapper[4826]: I0129 06:43:54.723015 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 06:43:54 crc kubenswrapper[4826]: E0129 06:43:54.723174 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 06:43:54 crc kubenswrapper[4826]: I0129 06:43:54.723400 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 29 06:43:54 crc kubenswrapper[4826]: I0129 06:43:54.723517 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 06:43:54 crc kubenswrapper[4826]: E0129 06:43:54.723631 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 06:43:54 crc kubenswrapper[4826]: I0129 06:43:54.725094 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 29 06:43:54 crc kubenswrapper[4826]: I0129 06:43:54.725232 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 29 06:43:54 crc kubenswrapper[4826]: I0129 06:43:54.725398 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 29 06:43:54 crc kubenswrapper[4826]: I0129 06:43:54.725475 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 29 06:43:54 crc kubenswrapper[4826]: I0129 06:43:54.725473 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 29 06:43:54 crc kubenswrapper[4826]: I0129 06:43:54.726462 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 29 06:43:54 crc kubenswrapper[4826]: I0129 06:43:54.726462 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 29 06:43:54 crc kubenswrapper[4826]: I0129 06:43:54.727554 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 29 06:43:54 crc kubenswrapper[4826]: I0129 06:43:54.727970 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 29 06:43:54 crc kubenswrapper[4826]: I0129 06:43:54.728910 4826 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 29 06:43:54 crc kubenswrapper[4826]: I0129 06:43:54.737212 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 11:49:54.335876555 +0000 UTC
Jan 29 06:43:54 crc kubenswrapper[4826]: I0129 06:43:54.771985 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 06:43:54 crc kubenswrapper[4826]: I0129 06:43:54.790380 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:43:54 crc kubenswrapper[4826]: I0129 06:43:54.807019 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:43:54 crc kubenswrapper[4826]: I0129 06:43:54.826604 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:43:54 crc kubenswrapper[4826]: I0129 06:43:54.842427 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:43:54 crc kubenswrapper[4826]: I0129 06:43:54.861017 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:43:54 crc kubenswrapper[4826]: I0129 06:43:54.876785 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:43:54 crc kubenswrapper[4826]: I0129 06:43:54.892116 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:43:54 crc kubenswrapper[4826]: I0129 06:43:54.905980 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:43:54 crc kubenswrapper[4826]: I0129 06:43:54.941485 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 06:43:55 crc kubenswrapper[4826]: I0129 06:43:55.738505 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 07:43:40.472199921 +0000 UTC Jan 29 06:43:55 crc kubenswrapper[4826]: I0129 06:43:55.807974 4826 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:43:55 crc kubenswrapper[4826]: E0129 06:43:55.808190 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:43:55 crc kubenswrapper[4826]: I0129 06:43:55.926061 4826 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 06:43:55 crc kubenswrapper[4826]: I0129 06:43:55.958244 4826 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 29 06:43:56 crc kubenswrapper[4826]: E0129 06:43:56.258089 4826 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.262341 4826 trace.go:236] Trace[2029299895]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 06:43:45.287) (total time: 10974ms): Jan 29 06:43:56 crc kubenswrapper[4826]: Trace[2029299895]: ---"Objects listed" error: 10974ms (06:43:56.262) Jan 29 06:43:56 crc kubenswrapper[4826]: Trace[2029299895]: [10.974306451s] [10.974306451s] END Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.262379 4826 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 29 06:43:56 crc kubenswrapper[4826]: E0129 06:43:56.263187 4826 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: 
autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.263582 4826 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.278635 4826 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.315731 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.332439 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.342607 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.360432 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:3
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.364131 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.364212 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.364346 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.364420 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.364468 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.364614 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.364663 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.364763 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.364815 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.364835 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.364902 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.365005 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.365233 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.365346 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.364772 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.365892 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.365930 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.365960 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.365992 4826 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.366026 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.366057 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.366089 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.366122 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.366154 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.366201 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.366365 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.366417 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.366489 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.366551 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.366522 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.366590 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.366599 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.366633 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.366651 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.366692 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.366726 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.366761 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.366792 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.366821 4826 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.366854 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.366885 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.366921 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.366953 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.366985 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod 
\"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.367018 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.367048 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.367081 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.367112 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.367143 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.366647 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.366867 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.367040 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.367141 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.367148 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.367176 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.367371 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.367429 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.367473 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.367518 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.367672 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.367729 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.367780 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.367829 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.367876 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.367926 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") 
pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.367975 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.368023 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.368067 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.368111 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.368157 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.368203 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.368252 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.368333 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.368384 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.368429 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.368470 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" 
(UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.368511 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.368552 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.368598 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.368647 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.368692 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.368735 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.368776 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.368821 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.368863 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.368903 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.368961 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.369004 4826 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.369044 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.369086 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.369128 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.369173 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.369218 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" 
(UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.369266 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.369349 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.369434 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.369499 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.367267 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.369547 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.369598 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.369649 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.370007 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.370060 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.370112 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.370164 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.370216 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.370265 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.367509 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.367505 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.367540 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.367589 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.367775 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.367828 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.367907 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.367967 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.368117 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.368187 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.368549 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.369959 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.370229 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: E0129 06:43:56.370361 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:43:56.870339125 +0000 UTC m=+20.732132304 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.372586 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.372632 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.372633 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.372667 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.372693 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.372725 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.372754 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.372782 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.372809 4826 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.372830 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.372855 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.372877 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.372898 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.372921 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.372948 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.372977 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.373007 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.373043 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.373074 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 06:43:56 crc 
kubenswrapper[4826]: I0129 06:43:56.373107 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.373146 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.373184 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.373214 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.373252 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.373288 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.373349 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.373379 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.373433 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.373465 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.373501 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: 
\"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.373534 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.373566 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.373599 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.373629 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.373657 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.373685 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: 
\"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.373723 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.373764 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.373794 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.373828 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.373857 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 06:43:56 crc 
kubenswrapper[4826]: I0129 06:43:56.373884 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.373918 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.373951 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.373982 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.374014 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.374049 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.374079 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.374109 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.374138 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.374169 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.374202 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 06:43:56 crc 
kubenswrapper[4826]: I0129 06:43:56.374227 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.374248 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.374269 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.374328 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.374363 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.374432 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: 
\"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.374467 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.374536 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.376269 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.376334 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.376374 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 06:43:56 crc 
kubenswrapper[4826]: I0129 06:43:56.376407 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.373014 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.373076 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.370594 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.370759 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). 
InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.370972 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.371452 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.372162 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.373092 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.373481 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.373485 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.373540 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.373907 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.373960 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.374271 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.374434 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.374483 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.374564 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.374711 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.374856 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.374962 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.374980 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.375000 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.375006 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.375219 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.375390 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.375526 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.376127 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.376176 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.376198 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.376676 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.370560 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.378270 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.376689 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.376784 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.376791 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.376927 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.377159 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.377352 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.377720 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.378457 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.378913 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.379191 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.379578 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.380162 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.380402 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.380615 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.380726 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.380896 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.380994 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.381041 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.381068 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.381095 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.381120 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.381146 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.381127 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.381172 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.381199 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.381221 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.381248 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod 
\"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.381273 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.381312 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.382019 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.382051 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.382043 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.382074 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.382493 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.382533 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.382567 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.382607 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.382696 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.382753 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.382810 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.382872 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.382928 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.382980 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.383035 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.383095 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.383150 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.383204 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 29 
06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.383258 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.383353 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.383397 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.383476 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.383411 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.383773 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.383825 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.383869 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.383901 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.384205 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.384240 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.384274 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.384340 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.384255 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.384383 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.384438 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.384581 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.384653 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.384704 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 29 06:43:56 crc 
kubenswrapper[4826]: I0129 06:43:56.384719 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.384795 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.384814 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.384861 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.384914 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.385027 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.385670 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.385931 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.385979 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.386042 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.386109 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.386224 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.384967 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.386743 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.386888 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.386944 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: 
\"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.387185 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.387246 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.387374 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.387431 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.387492 4826 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.387540 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.387600 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.387755 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.387792 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.387821 4826 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc 
kubenswrapper[4826]: I0129 06:43:56.387849 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.387874 4826 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.387903 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.387930 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.387958 4826 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.387982 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.388008 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.388018 4826 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.388016 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.388035 4826 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.388332 4826 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.388391 4826 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.388414 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.388423 4826 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.388419 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.388424 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.388460 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.388824 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.390942 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.388968 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.389184 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.389180 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.389189 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.389627 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.389889 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.390078 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.390285 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.390354 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.390591 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.390874 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.388476 4826 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.391116 4826 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.391132 4826 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.391146 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.391160 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.391175 4826 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.391189 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.391204 4826 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.391217 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.391232 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.391223 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.390293 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.391244 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.391339 4826 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.391370 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.391391 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.391411 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.391431 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.391450 4826 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.391468 4826 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.391487 4826 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.391504 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.391522 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.391541 4826 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.391559 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 
06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.391576 4826 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.391594 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.391613 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.391629 4826 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.391644 4826 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.391663 4826 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: E0129 06:43:56.391679 4826 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 06:43:56 crc kubenswrapper[4826]: E0129 06:43:56.391741 4826 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 06:43:56.891721289 +0000 UTC m=+20.753514478 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.391444 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not 
be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.391681 4826 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.392086 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.392103 4826 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.392118 4826 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.392111 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.392137 4826 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.392218 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.392254 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.392323 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.392286 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.392432 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.392450 4826 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.392485 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.392520 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.392524 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.392550 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.392580 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.392608 4826 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: E0129 06:43:56.392638 4826 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.392639 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.392674 4826 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.392703 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: E0129 06:43:56.392726 4826 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 06:43:56.892705565 +0000 UTC m=+20.754498644 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.392750 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.392767 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.392780 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.392794 4826 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.392807 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 
29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.392821 4826 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.392836 4826 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.392850 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.392863 4826 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.392875 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.392887 4826 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.392901 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.392914 4826 
reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.392927 4826 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.392941 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.392962 4826 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.392975 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.392986 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.392999 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.393011 4826 reconciler_common.go:293] "Volume detached for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.393022 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.393035 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.393048 4826 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.393061 4826 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.393072 4826 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.393085 4826 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.393098 4826 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.393110 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.396746 4826 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.396773 4826 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.396791 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.396808 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.396824 4826 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.396839 4826 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" 
DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.396858 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.396858 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.396878 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.393895 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.395077 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.396311 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.396957 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.405102 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.405110 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.405534 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.405554 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.405608 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.405654 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.405899 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.406016 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.406176 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.406276 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.406724 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.406780 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.406802 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.406835 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.407357 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: E0129 06:43:56.407574 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 06:43:56 crc kubenswrapper[4826]: E0129 06:43:56.407620 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 06:43:56 crc kubenswrapper[4826]: E0129 06:43:56.407645 4826 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.407854 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.408116 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.408341 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.408579 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.408724 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.408988 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.409023 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.409146 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.409242 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.409742 4826 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.410141 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.410239 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.410529 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.410662 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.410714 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.410785 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: E0129 06:43:56.410857 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 06:43:56.910827343 +0000 UTC m=+20.772620432 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.411084 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.411377 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.412742 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.416009 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.397119 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 06:43:56 crc kubenswrapper[4826]: E0129 06:43:56.418058 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 06:43:56 crc kubenswrapper[4826]: E0129 06:43:56.418090 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 06:43:56 crc kubenswrapper[4826]: E0129 06:43:56.418101 4826 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:43:56 crc kubenswrapper[4826]: E0129 06:43:56.418147 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-29 06:43:56.918133086 +0000 UTC m=+20.779926145 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.418964 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.419377 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.420116 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.421844 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: 
"profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.422920 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.422929 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.422953 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.423426 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.423503 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.423703 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.423834 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.425609 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.426069 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.427548 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.430100 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.430104 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.430372 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.430504 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.430660 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.430684 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.430945 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.431684 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.431700 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.431965 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.432689 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.432843 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.433728 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.434458 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.434608 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.437863 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.438341 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.438926 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.439232 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.440094 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.441275 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.442071 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.442210 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.442529 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.442735 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.445123 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.453608 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.455871 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.471666 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.474844 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.501588 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.501637 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.501688 4826 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.501708 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.501718 4826 reconciler_common.go:293] "Volume detached for volume \"images\" 
(UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.501726 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.501736 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.501745 4826 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.501754 4826 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.501762 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.501780 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.501788 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc 
kubenswrapper[4826]: I0129 06:43:56.501796 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.501804 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.501812 4826 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.501821 4826 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.501829 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.501837 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.501846 4826 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" 
Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.501854 4826 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.501862 4826 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.501870 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.501880 4826 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.501888 4826 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.501897 4826 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.501908 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 
06:43:56.501917 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.501925 4826 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.501934 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.501943 4826 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.501952 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.501962 4826 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.501970 4826 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.501978 4826 
reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.501986 4826 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.501993 4826 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502001 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502002 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502009 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502088 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502104 4826 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502117 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502129 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502141 4826 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.501888 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502153 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502167 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 
06:43:56.502181 4826 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502198 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502214 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502227 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502240 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502255 4826 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502269 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502282 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" 
(UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502311 4826 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502325 4826 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502337 4826 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502350 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502361 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502372 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502384 4826 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502395 4826 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502409 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502424 4826 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502435 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502447 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502458 4826 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502471 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" 
DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502481 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502493 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502504 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502517 4826 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502540 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502551 4826 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502564 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502574 4826 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502586 4826 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502596 4826 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502608 4826 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502627 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502640 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502651 4826 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502662 4826 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502672 4826 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502683 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502694 4826 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502706 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502716 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502727 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502744 4826 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502757 4826 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502768 4826 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502781 4826 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502792 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502803 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502814 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502825 4826 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" 
DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502836 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502857 4826 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502867 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.502879 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.548239 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.562054 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 06:43:56 crc kubenswrapper[4826]: W0129 06:43:56.563730 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-b34727823d74431828a2a1f9298adc269c17ca19cd5ff43dfe99a700307abee4 WatchSource:0}: Error finding container b34727823d74431828a2a1f9298adc269c17ca19cd5ff43dfe99a700307abee4: Status 404 returned error can't find the container with id b34727823d74431828a2a1f9298adc269c17ca19cd5ff43dfe99a700307abee4 Jan 29 06:43:56 crc kubenswrapper[4826]: W0129 06:43:56.570886 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-e9e213c57e8a441a35dbd8e7ab602d1af85671db44145b19971942ddd0d3b022 WatchSource:0}: Error finding container e9e213c57e8a441a35dbd8e7ab602d1af85671db44145b19971942ddd0d3b022: Status 404 returned error can't find the container with id e9e213c57e8a441a35dbd8e7ab602d1af85671db44145b19971942ddd0d3b022 Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.572703 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.738848 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 18:42:12.793934818 +0000 UTC Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.808381 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:43:56 crc kubenswrapper[4826]: E0129 06:43:56.808613 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.808681 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:43:56 crc kubenswrapper[4826]: E0129 06:43:56.808858 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.815937 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.817338 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.819734 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.821704 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.823718 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.824542 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:3
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.825724 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.826513 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.828094 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.828742 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.829624 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.830117 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.830829 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.831714 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.832217 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.833071 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.833584 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.834492 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.834886 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.835439 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.836348 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.836801 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.837361 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.838154 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.840145 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.841610 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.842848 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.845197 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.847241 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.848595 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.850834 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" 
path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.852107 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.854033 4826 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.854439 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.858504 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.860184 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.861526 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.862844 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" 
path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.866817 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.868438 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.869223 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.870369 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.872071 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.872600 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.873208 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.874890 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.876657 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.877599 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.878841 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.879709 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.881211 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.882593 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" 
path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.883981 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.884546 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.885132 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.886097 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.887710 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.888731 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.890331 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.905503 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.905846 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.905985 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.906085 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:43:56 crc kubenswrapper[4826]: E0129 06:43:56.906283 4826 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 06:43:56 crc kubenswrapper[4826]: E0129 06:43:56.906426 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 06:43:57.906394669 +0000 UTC m=+21.768187778 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 06:43:56 crc kubenswrapper[4826]: E0129 06:43:56.906980 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:43:57.906960194 +0000 UTC m=+21.768753303 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:43:56 crc kubenswrapper[4826]: E0129 06:43:56.907059 4826 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 06:43:56 crc kubenswrapper[4826]: E0129 06:43:56.907105 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 06:43:57.907091497 +0000 UTC m=+21.768884606 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.921791 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.931911 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"c291d740495b38d2ac5b2445e630d60d189ed9681510d1dd8a7c14652bdca80d"} Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.933642 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733"} Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.933721 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8"} Jan 29 06:43:56 crc 
kubenswrapper[4826]: I0129 06:43:56.933733 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e9e213c57e8a441a35dbd8e7ab602d1af85671db44145b19971942ddd0d3b022"} Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.934747 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"7f93ba8c06281d82ec2584250e6e843d591d364356425831cd6a463234d0a7c4"} Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.934775 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"b34727823d74431828a2a1f9298adc269c17ca19cd5ff43dfe99a700307abee4"} Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.944521 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.959391 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.967949 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.980885 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:43:56 crc kubenswrapper[4826]: I0129 06:43:56.990730 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:43:57 crc kubenswrapper[4826]: I0129 06:43:57.004163 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:3
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:43:57 crc kubenswrapper[4826]: I0129 06:43:57.006622 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:43:57 crc kubenswrapper[4826]: I0129 06:43:57.006697 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:43:57 crc kubenswrapper[4826]: E0129 06:43:57.006940 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 06:43:57 crc kubenswrapper[4826]: E0129 06:43:57.007597 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 06:43:57 crc kubenswrapper[4826]: E0129 06:43:57.007646 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 06:43:57 crc kubenswrapper[4826]: E0129 06:43:57.007667 4826 projected.go:194] Error preparing data for 
projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:43:57 crc kubenswrapper[4826]: E0129 06:43:57.007614 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 06:43:57 crc kubenswrapper[4826]: E0129 06:43:57.007736 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 06:43:58.007712222 +0000 UTC m=+21.869505331 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:43:57 crc kubenswrapper[4826]: E0129 06:43:57.007739 4826 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:43:57 crc kubenswrapper[4826]: E0129 06:43:57.007828 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-29 06:43:58.007800525 +0000 UTC m=+21.869593684 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:43:57 crc kubenswrapper[4826]: I0129 06:43:57.016175 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:43:57 crc kubenswrapper[4826]: I0129 06:43:57.026465 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f93ba8c06281d82ec2584250e6e843d591d364356425831cd6a463234d0a7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Jan 29 06:43:57 crc kubenswrapper[4826]: I0129 06:43:57.036831 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:43:57 crc kubenswrapper[4826]: I0129 06:43:57.048247 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:43:57 crc kubenswrapper[4826]: I0129 06:43:57.058294 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:43:57 crc kubenswrapper[4826]: I0129 06:43:57.074512 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:3
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:43:57 crc kubenswrapper[4826]: I0129 06:43:57.088224 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:43:57 crc kubenswrapper[4826]: I0129 06:43:57.103053 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 06:43:57 crc kubenswrapper[4826]: I0129 06:43:57.739480 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 08:41:08.976079144 +0000 UTC Jan 29 06:43:57 crc kubenswrapper[4826]: I0129 06:43:57.808176 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:43:57 crc kubenswrapper[4826]: E0129 06:43:57.808390 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:43:57 crc kubenswrapper[4826]: I0129 06:43:57.914365 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:43:57 crc kubenswrapper[4826]: I0129 06:43:57.914467 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:43:57 crc kubenswrapper[4826]: I0129 06:43:57.914508 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:43:57 crc kubenswrapper[4826]: E0129 06:43:57.914625 4826 secret.go:188] Couldn't get secret 
openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 06:43:57 crc kubenswrapper[4826]: E0129 06:43:57.914675 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:43:59.914622262 +0000 UTC m=+23.776415371 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:43:57 crc kubenswrapper[4826]: E0129 06:43:57.914727 4826 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 06:43:57 crc kubenswrapper[4826]: E0129 06:43:57.914761 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 06:43:59.914725664 +0000 UTC m=+23.776518773 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 06:43:57 crc kubenswrapper[4826]: E0129 06:43:57.914851 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 06:43:59.914815597 +0000 UTC m=+23.776608876 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 06:43:58 crc kubenswrapper[4826]: I0129 06:43:58.015095 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:43:58 crc kubenswrapper[4826]: I0129 06:43:58.015195 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:43:58 crc kubenswrapper[4826]: E0129 06:43:58.015480 4826 projected.go:288] 
Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 06:43:58 crc kubenswrapper[4826]: E0129 06:43:58.015488 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 06:43:58 crc kubenswrapper[4826]: E0129 06:43:58.015517 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 06:43:58 crc kubenswrapper[4826]: E0129 06:43:58.015542 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 06:43:58 crc kubenswrapper[4826]: E0129 06:43:58.015550 4826 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:43:58 crc kubenswrapper[4826]: E0129 06:43:58.015566 4826 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:43:58 crc kubenswrapper[4826]: E0129 06:43:58.015658 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-29 06:44:00.015630417 +0000 UTC m=+23.877423516 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:43:58 crc kubenswrapper[4826]: E0129 06:43:58.015690 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 06:44:00.015677018 +0000 UTC m=+23.877470117 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:43:58 crc kubenswrapper[4826]: I0129 06:43:58.740228 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 03:08:56.591219707 +0000 UTC Jan 29 06:43:58 crc kubenswrapper[4826]: I0129 06:43:58.807956 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:43:58 crc kubenswrapper[4826]: I0129 06:43:58.807987 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:43:58 crc kubenswrapper[4826]: E0129 06:43:58.808219 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:43:58 crc kubenswrapper[4826]: E0129 06:43:58.808352 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:43:59 crc kubenswrapper[4826]: I0129 06:43:59.626354 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 06:43:59 crc kubenswrapper[4826]: I0129 06:43:59.632929 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 06:43:59 crc kubenswrapper[4826]: I0129 06:43:59.639254 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 29 06:43:59 crc kubenswrapper[4826]: I0129 06:43:59.644956 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:43:59Z is after 2025-08-24T17:21:41Z" Jan 29 06:43:59 crc kubenswrapper[4826]: I0129 06:43:59.671249 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:3
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:43:59Z is after 2025-08-24T17:21:41Z" Jan 29 06:43:59 crc kubenswrapper[4826]: I0129 06:43:59.694209 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f93ba8c06281d82ec2584250e6e843d591d364356425831cd6a463234d0a7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:43:59Z is after 2025-08-24T17:21:41Z" Jan 29 06:43:59 crc kubenswrapper[4826]: I0129 06:43:59.717092 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:43:59Z is after 2025-08-24T17:21:41Z" Jan 29 06:43:59 crc kubenswrapper[4826]: I0129 06:43:59.738937 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:43:59Z is after 2025-08-24T17:21:41Z" Jan 29 06:43:59 crc kubenswrapper[4826]: I0129 06:43:59.741099 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 13:34:11.572571015 +0000 UTC Jan 29 06:43:59 crc kubenswrapper[4826]: I0129 06:43:59.759595 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:43:59Z is after 2025-08-24T17:21:41Z" Jan 29 06:43:59 crc kubenswrapper[4826]: I0129 06:43:59.780617 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:43:59Z is after 2025-08-24T17:21:41Z" Jan 29 06:43:59 crc kubenswrapper[4826]: I0129 06:43:59.799784 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f93ba8c06281d82ec2584250e6e843d591d364356425831cd6a463234d0a7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T06:43:59Z is after 2025-08-24T17:21:41Z" Jan 29 06:43:59 crc kubenswrapper[4826]: I0129 06:43:59.808782 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:43:59 crc kubenswrapper[4826]: E0129 06:43:59.809013 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:43:59 crc kubenswrapper[4826]: I0129 06:43:59.819359 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:43:59Z is after 2025-08-24T17:21:41Z" Jan 29 06:43:59 crc kubenswrapper[4826]: I0129 06:43:59.839855 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:43:59Z is after 2025-08-24T17:21:41Z" Jan 29 06:43:59 crc kubenswrapper[4826]: I0129 06:43:59.860875 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:43:59Z is after 2025-08-24T17:21:41Z" Jan 29 06:43:59 crc kubenswrapper[4826]: I0129 06:43:59.885359 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:3
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:43:59Z is after 2025-08-24T17:21:41Z" Jan 29 06:43:59 crc kubenswrapper[4826]: I0129 06:43:59.905480 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e7e9d69-49ad-4177-a388-7d7556ddd380\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c25a1a60e522b5c06e95368ba6cd2727c0b3bb0dc42a8ff78c6afea6d7d5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24eebf52ead4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd8cce464615767426dd78c948b03894eeccf28e4abd93cd66efcbaed2887b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"c
ert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e746868738216535af20d17aab43ad1a4a1228bd78ee2907d46755e361569b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:43:59Z is after 2025-08-24T17:21:41Z" Jan 29 06:43:59 crc kubenswrapper[4826]: I0129 06:43:59.925423 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:43:59Z is after 2025-08-24T17:21:41Z" Jan 29 06:43:59 crc kubenswrapper[4826]: I0129 06:43:59.933545 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:43:59 crc kubenswrapper[4826]: I0129 06:43:59.933657 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:43:59 crc kubenswrapper[4826]: I0129 06:43:59.933720 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:43:59 crc kubenswrapper[4826]: E0129 06:43:59.933801 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:44:03.933777197 +0000 UTC m=+27.795570296 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:43:59 crc kubenswrapper[4826]: E0129 06:43:59.933884 4826 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 06:43:59 crc kubenswrapper[4826]: E0129 06:43:59.933904 4826 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 06:43:59 crc kubenswrapper[4826]: E0129 06:43:59.933986 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 06:44:03.933961082 +0000 UTC m=+27.795754191 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 06:43:59 crc kubenswrapper[4826]: E0129 06:43:59.934018 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 06:44:03.934004853 +0000 UTC m=+27.795797952 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 06:43:59 crc kubenswrapper[4826]: I0129 06:43:59.945670 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ca127c5fe560e373e07f8aecf0e41b5f18be23bf64a56a99d1f5a8809a53471f"} Jan 29 06:43:59 crc kubenswrapper[4826]: I0129 06:43:59.946002 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:43:59Z is after 2025-08-24T17:21:41Z" Jan 29 06:43:59 crc kubenswrapper[4826]: I0129 06:43:59.970142 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:3
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:43:59Z is after 2025-08-24T17:21:41Z" Jan 29 06:43:59 crc kubenswrapper[4826]: I0129 06:43:59.991542 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f93ba8c06281d82ec2584250e6e843d591d364356425831cd6a463234d0a7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:43:59Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:00 crc kubenswrapper[4826]: I0129 06:44:00.012131 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:00Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:00 crc kubenswrapper[4826]: I0129 06:44:00.030822 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:00Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:00 crc kubenswrapper[4826]: I0129 06:44:00.034280 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:44:00 crc kubenswrapper[4826]: I0129 06:44:00.034396 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:44:00 crc kubenswrapper[4826]: E0129 06:44:00.034682 
4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 06:44:00 crc kubenswrapper[4826]: E0129 06:44:00.034734 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 06:44:00 crc kubenswrapper[4826]: E0129 06:44:00.034762 4826 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:44:00 crc kubenswrapper[4826]: E0129 06:44:00.034820 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 06:44:00 crc kubenswrapper[4826]: E0129 06:44:00.034847 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 06:44:00 crc kubenswrapper[4826]: E0129 06:44:00.034865 4826 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:44:00 crc kubenswrapper[4826]: E0129 06:44:00.034849 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-29 06:44:04.034818253 +0000 UTC m=+27.896611352 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:44:00 crc kubenswrapper[4826]: E0129 06:44:00.034931 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 06:44:04.034913626 +0000 UTC m=+27.896706725 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:44:00 crc kubenswrapper[4826]: I0129 06:44:00.057803 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca127c5fe560e373e07f8aecf0e41b5f18be23bf64a56a99d1f5a8809a53471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T06:44:00Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:00 crc kubenswrapper[4826]: I0129 06:44:00.078670 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e7e9d69-49ad-4177-a388-7d7556ddd380\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c25a1a60e522b5c06e95368ba6cd2727c0b3bb0dc42a8ff78c6afea6d7d5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24eebf52ead4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd8cce464615767426dd78c948b03894eeccf28e4abd93cd66efcbaed2887b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e746868738216535af20d17aab43ad1a4a1228bd78ee2907d46755e361569b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:00Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:00 crc kubenswrapper[4826]: I0129 06:44:00.100597 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:00Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:00 crc kubenswrapper[4826]: I0129 06:44:00.120113 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:00Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:00 crc kubenswrapper[4826]: I0129 06:44:00.741661 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 13:25:28.843473068 +0000 UTC Jan 29 06:44:00 crc kubenswrapper[4826]: I0129 06:44:00.808359 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:44:00 crc kubenswrapper[4826]: I0129 06:44:00.808434 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:44:00 crc kubenswrapper[4826]: E0129 06:44:00.808573 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:44:00 crc kubenswrapper[4826]: E0129 06:44:00.808760 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:44:01 crc kubenswrapper[4826]: I0129 06:44:01.742646 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 15:48:15.438488418 +0000 UTC Jan 29 06:44:01 crc kubenswrapper[4826]: I0129 06:44:01.808515 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:44:01 crc kubenswrapper[4826]: E0129 06:44:01.808716 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.663635 4826 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.666337 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.666424 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.666447 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.666605 4826 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.675858 4826 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.676130 4826 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.677695 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.677766 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.677787 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.677815 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.677840 4826 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:02Z","lastTransitionTime":"2026-01-29T06:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:02 crc kubenswrapper[4826]: E0129 06:44:02.701798 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"905e3489-492d-4437-968b-82f79ce0edd7\\\",\\\"systemUUID\\\":\\\"8978a1d2-9b20-4f3c-a5b9-0aed7eb7584e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:02Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.708111 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.708182 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.708203 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.708230 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.708249 4826 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:02Z","lastTransitionTime":"2026-01-29T06:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:02 crc kubenswrapper[4826]: E0129 06:44:02.724180 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"905e3489-492d-4437-968b-82f79ce0edd7\\\",\\\"systemUUID\\\":\\\"8978a1d2-9b20-4f3c-a5b9-0aed7eb7584e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:02Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.732530 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.732599 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.732611 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.732630 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.732641 4826 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:02Z","lastTransitionTime":"2026-01-29T06:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.743155 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 07:31:02.955072471 +0000 UTC Jan 29 06:44:02 crc kubenswrapper[4826]: E0129 06:44:02.751792 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"905e3489-492d-4437-968b-82f79ce0edd7\\\",\\\"systemUUID\\\":\\\"8978a1d2-9b20-4f3c-a5b9-0aed7eb7584e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:02Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.756041 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.756104 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.756122 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.756146 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.756162 4826 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:02Z","lastTransitionTime":"2026-01-29T06:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:02 crc kubenswrapper[4826]: E0129 06:44:02.780048 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"905e3489-492d-4437-968b-82f79ce0edd7\\\",\\\"systemUUID\\\":\\\"8978a1d2-9b20-4f3c-a5b9-0aed7eb7584e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:02Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.786267 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.786335 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.786351 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.786370 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.786382 4826 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:02Z","lastTransitionTime":"2026-01-29T06:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:02 crc kubenswrapper[4826]: E0129 06:44:02.809360 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"905e3489-492d-4437-968b-82f79ce0edd7\\\",\\\"systemUUID\\\":\\\"8978a1d2-9b20-4f3c-a5b9-0aed7eb7584e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:02Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:02 crc kubenswrapper[4826]: E0129 06:44:02.809540 4826 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.809887 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:44:02 crc kubenswrapper[4826]: E0129 06:44:02.810038 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.810161 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:44:02 crc kubenswrapper[4826]: E0129 06:44:02.810313 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.811398 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.811422 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.811431 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.811444 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.811455 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:02Z","lastTransitionTime":"2026-01-29T06:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.835610 4826 csr.go:261] certificate signing request csr-q69tc is approved, waiting to be issued Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.862485 4826 csr.go:257] certificate signing request csr-q69tc is issued Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.913617 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.913672 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.913704 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.913718 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:02 crc kubenswrapper[4826]: I0129 06:44:02.913728 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:02Z","lastTransitionTime":"2026-01-29T06:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.015792 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.015835 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.015845 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.015861 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.015870 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:03Z","lastTransitionTime":"2026-01-29T06:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.118184 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.118229 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.118244 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.118266 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.118283 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:03Z","lastTransitionTime":"2026-01-29T06:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.220548 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.220606 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.220622 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.220643 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.220658 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:03Z","lastTransitionTime":"2026-01-29T06:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.323838 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.323892 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.323904 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.323928 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.323945 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:03Z","lastTransitionTime":"2026-01-29T06:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.426262 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.426323 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.426335 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.426351 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.426362 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:03Z","lastTransitionTime":"2026-01-29T06:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.528627 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.528675 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.528685 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.528707 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.528717 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:03Z","lastTransitionTime":"2026-01-29T06:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.631891 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.631945 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.631958 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.631981 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.631997 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:03Z","lastTransitionTime":"2026-01-29T06:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.663764 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-llzmh"] Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.664164 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-tdzw4"] Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.664372 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.664824 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-tdzw4" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.667814 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.667986 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.668015 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.668174 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.668375 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.668406 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.668515 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.668681 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.684203 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:03Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.703594 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e7e9d69-49ad-4177-a388-7d7556ddd380\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c25a1a60e522b5c06e95368ba6cd2727c0b3bb0dc42a8ff78c6afea6d7d5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24eebf52ead4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd8cce464615767426dd78c948b03894eeccf28e4abd93cd66efcbaed2887b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e746868738216535af20d17aab43ad1a4a1228bd78ee2907d46755e361569b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:03Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.719061 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:03Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.731528 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:03Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.734639 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.734686 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.734705 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.734728 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.734746 4826 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:03Z","lastTransitionTime":"2026-01-29T06:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.744208 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 23:54:41.93854909 +0000 UTC Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.747091 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca127c5fe560e373e07f8aecf0e41b5f18be23bf64a56a99d1f5a8809a53471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:03Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.763962 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-llzmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:03Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.772903 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4ktz\" (UniqueName: \"kubernetes.io/projected/6ea2651e-31ea-4e99-8bcd-2f8e9687df2f-kube-api-access-q4ktz\") pod \"machine-config-daemon-llzmh\" (UID: \"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\") " pod="openshift-machine-config-operator/machine-config-daemon-llzmh" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.772950 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6ea2651e-31ea-4e99-8bcd-2f8e9687df2f-rootfs\") pod \"machine-config-daemon-llzmh\" (UID: \"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\") " pod="openshift-machine-config-operator/machine-config-daemon-llzmh" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.772977 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcc9d\" (UniqueName: \"kubernetes.io/projected/550bdc9c-0324-4f3c-98df-95fbf1029eda-kube-api-access-qcc9d\") pod \"node-resolver-tdzw4\" (UID: \"550bdc9c-0324-4f3c-98df-95fbf1029eda\") " pod="openshift-dns/node-resolver-tdzw4" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.773002 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6ea2651e-31ea-4e99-8bcd-2f8e9687df2f-proxy-tls\") pod 
\"machine-config-daemon-llzmh\" (UID: \"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\") " pod="openshift-machine-config-operator/machine-config-daemon-llzmh" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.773034 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/550bdc9c-0324-4f3c-98df-95fbf1029eda-hosts-file\") pod \"node-resolver-tdzw4\" (UID: \"550bdc9c-0324-4f3c-98df-95fbf1029eda\") " pod="openshift-dns/node-resolver-tdzw4" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.773063 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6ea2651e-31ea-4e99-8bcd-2f8e9687df2f-mcd-auth-proxy-config\") pod \"machine-config-daemon-llzmh\" (UID: \"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\") " pod="openshift-machine-config-operator/machine-config-daemon-llzmh" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.779960 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:3
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:03Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.791200 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f93ba8c06281d82ec2584250e6e843d591d364356425831cd6a463234d0a7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:03Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.803179 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:03Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.808498 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:44:03 crc kubenswrapper[4826]: E0129 06:44:03.808613 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.814413 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:03Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.825285 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:03Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.833705 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tdzw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"550bdc9c-0324-4f3c-98df-95fbf1029eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcc9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tdzw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:03Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.836966 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.837076 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.837145 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.837229 4826 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.837308 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:03Z","lastTransitionTime":"2026-01-29T06:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.846286 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e7e9d69-49ad-4177-a388-7d7556ddd380\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c25a1a60e522b5c06e95368ba6cd2727c0b3bb0dc42a8ff78c6afea6d7d5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24eebf52ead4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd8cce464615767426dd78c948b03894eeccf28e4abd93cd66efcbaed2887b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\
\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e746868738216535af20d17aab43ad1a4a1228bd78ee2907d46755e361569b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:03Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.858194 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:03Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.863869 4826 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-29 06:39:02 +0000 UTC, rotation deadline is 2026-11-28 19:07:33.770822751 +0000 UTC Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.864002 4826 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7284h23m29.906826164s for next certificate rotation Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.871079 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:03Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.873807 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4ktz\" (UniqueName: \"kubernetes.io/projected/6ea2651e-31ea-4e99-8bcd-2f8e9687df2f-kube-api-access-q4ktz\") pod \"machine-config-daemon-llzmh\" (UID: \"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\") " pod="openshift-machine-config-operator/machine-config-daemon-llzmh" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.873948 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6ea2651e-31ea-4e99-8bcd-2f8e9687df2f-rootfs\") pod \"machine-config-daemon-llzmh\" (UID: \"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\") " pod="openshift-machine-config-operator/machine-config-daemon-llzmh" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.874046 4826 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qcc9d\" (UniqueName: \"kubernetes.io/projected/550bdc9c-0324-4f3c-98df-95fbf1029eda-kube-api-access-qcc9d\") pod \"node-resolver-tdzw4\" (UID: \"550bdc9c-0324-4f3c-98df-95fbf1029eda\") " pod="openshift-dns/node-resolver-tdzw4" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.874114 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6ea2651e-31ea-4e99-8bcd-2f8e9687df2f-rootfs\") pod \"machine-config-daemon-llzmh\" (UID: \"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\") " pod="openshift-machine-config-operator/machine-config-daemon-llzmh" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.874142 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6ea2651e-31ea-4e99-8bcd-2f8e9687df2f-proxy-tls\") pod \"machine-config-daemon-llzmh\" (UID: \"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\") " pod="openshift-machine-config-operator/machine-config-daemon-llzmh" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.874335 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/550bdc9c-0324-4f3c-98df-95fbf1029eda-hosts-file\") pod \"node-resolver-tdzw4\" (UID: \"550bdc9c-0324-4f3c-98df-95fbf1029eda\") " pod="openshift-dns/node-resolver-tdzw4" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.874446 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6ea2651e-31ea-4e99-8bcd-2f8e9687df2f-mcd-auth-proxy-config\") pod \"machine-config-daemon-llzmh\" (UID: \"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\") " pod="openshift-machine-config-operator/machine-config-daemon-llzmh" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.874506 4826 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/550bdc9c-0324-4f3c-98df-95fbf1029eda-hosts-file\") pod \"node-resolver-tdzw4\" (UID: \"550bdc9c-0324-4f3c-98df-95fbf1029eda\") " pod="openshift-dns/node-resolver-tdzw4" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.875073 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6ea2651e-31ea-4e99-8bcd-2f8e9687df2f-mcd-auth-proxy-config\") pod \"machine-config-daemon-llzmh\" (UID: \"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\") " pod="openshift-machine-config-operator/machine-config-daemon-llzmh" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.883195 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca127c5fe560e373e07f8aecf0e41b5f18be23bf64a56a99d1f5a8809a53471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:03Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.885725 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6ea2651e-31ea-4e99-8bcd-2f8e9687df2f-proxy-tls\") pod \"machine-config-daemon-llzmh\" (UID: \"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\") " pod="openshift-machine-config-operator/machine-config-daemon-llzmh" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.898556 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-llzmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:03Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.901023 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcc9d\" (UniqueName: \"kubernetes.io/projected/550bdc9c-0324-4f3c-98df-95fbf1029eda-kube-api-access-qcc9d\") pod \"node-resolver-tdzw4\" (UID: \"550bdc9c-0324-4f3c-98df-95fbf1029eda\") " pod="openshift-dns/node-resolver-tdzw4" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.903090 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4ktz\" (UniqueName: \"kubernetes.io/projected/6ea2651e-31ea-4e99-8bcd-2f8e9687df2f-kube-api-access-q4ktz\") pod \"machine-config-daemon-llzmh\" (UID: \"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\") " pod="openshift-machine-config-operator/machine-config-daemon-llzmh" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.913789 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:3
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:03Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.928008 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f93ba8c06281d82ec2584250e6e843d591d364356425831cd6a463234d0a7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-
kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:03Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.939958 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.940083 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.940163 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.940247 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.940331 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:03Z","lastTransitionTime":"2026-01-29T06:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.976006 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:44:03 crc kubenswrapper[4826]: E0129 06:44:03.976176 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:44:11.976140157 +0000 UTC m=+35.837933266 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.976582 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.976689 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:44:03 crc kubenswrapper[4826]: E0129 06:44:03.976763 4826 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 06:44:03 crc kubenswrapper[4826]: E0129 06:44:03.976859 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 06:44:11.976842735 +0000 UTC m=+35.838635844 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 06:44:03 crc kubenswrapper[4826]: E0129 06:44:03.976917 4826 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 06:44:03 crc kubenswrapper[4826]: E0129 06:44:03.977016 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 06:44:11.976997169 +0000 UTC m=+35.838790238 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 06:44:03 crc kubenswrapper[4826]: I0129 06:44:03.988928 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" Jan 29 06:44:04 crc kubenswrapper[4826]: W0129 06:44:04.000834 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ea2651e_31ea_4e99_8bcd_2f8e9687df2f.slice/crio-01be7c10cdabc7feea1939a2ebdeaf512b0500610f8622b1f5986b7550ea28f8 WatchSource:0}: Error finding container 01be7c10cdabc7feea1939a2ebdeaf512b0500610f8622b1f5986b7550ea28f8: Status 404 returned error can't find the container with id 01be7c10cdabc7feea1939a2ebdeaf512b0500610f8622b1f5986b7550ea28f8 Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.001588 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-tdzw4" Jan 29 06:44:04 crc kubenswrapper[4826]: W0129 06:44:04.021199 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod550bdc9c_0324_4f3c_98df_95fbf1029eda.slice/crio-167475ccbf232e8a53e3fecd782ef4f29b9280a21bfc2dec83e2da617a19c2f8 WatchSource:0}: Error finding container 167475ccbf232e8a53e3fecd782ef4f29b9280a21bfc2dec83e2da617a19c2f8: Status 404 returned error can't find the container with id 167475ccbf232e8a53e3fecd782ef4f29b9280a21bfc2dec83e2da617a19c2f8 Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.042915 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.042948 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.042956 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.042972 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.042981 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:04Z","lastTransitionTime":"2026-01-29T06:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.053101 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-5nvkq"] Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.054066 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.058237 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.059980 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.060495 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.060644 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.060635 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-s7xfk"] Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.061001 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.065011 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-kdv64"] Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.065247 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.065333 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.066988 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.067382 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.067545 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.067547 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.067797 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.067916 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.068012 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.068324 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.071977 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.075754 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:3
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:04Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.077599 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.077677 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:44:04 crc kubenswrapper[4826]: E0129 06:44:04.077898 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 06:44:04 crc kubenswrapper[4826]: E0129 06:44:04.077922 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 06:44:04 crc kubenswrapper[4826]: E0129 06:44:04.077941 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 06:44:04 
crc kubenswrapper[4826]: E0129 06:44:04.077948 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 06:44:04 crc kubenswrapper[4826]: E0129 06:44:04.077964 4826 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:44:04 crc kubenswrapper[4826]: E0129 06:44:04.077963 4826 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:44:04 crc kubenswrapper[4826]: E0129 06:44:04.078053 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 06:44:12.078032575 +0000 UTC m=+35.939825654 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:44:04 crc kubenswrapper[4826]: E0129 06:44:04.078079 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 06:44:12.078069446 +0000 UTC m=+35.939862615 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.091675 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:04Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.104233 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca127c5fe560e373e07f8aecf0e41b5f18be23bf64a56a99d1f5a8809a53471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T06:44:04Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.117641 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-llzmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:04Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.131364 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:04Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.145196 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tdzw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"550bdc9c-0324-4f3c-98df-95fbf1029eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcc9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tdzw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:04Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.150433 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.150475 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.150486 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.150504 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.150518 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:04Z","lastTransitionTime":"2026-01-29T06:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.162455 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f93ba8c06281d82ec2584250e6e843d591d364356425831cd6a463234d0a7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:04Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.175401 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:04Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.177993 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-systemd-units\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.178049 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" 
(UniqueName: \"kubernetes.io/configmap/6f0c380f-ebc1-482f-9a91-8b08033eadf2-ovnkube-config\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.178070 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6f0c380f-ebc1-482f-9a91-8b08033eadf2-ovnkube-script-lib\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.178093 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-host-var-lib-cni-multus\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.178348 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-run-systemd\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.178376 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-var-lib-openvswitch\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.178399 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6f0c380f-ebc1-482f-9a91-8b08033eadf2-ovn-node-metrics-cert\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.178419 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-multus-conf-dir\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.178504 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.178559 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4ca8a794-1985-4f1b-8651-03cfce7dd20c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5nvkq\" (UID: \"4ca8a794-1985-4f1b-8651-03cfce7dd20c\") " pod="openshift-multus/multus-additional-cni-plugins-5nvkq" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.178601 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-slash\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.178629 
4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-run-ovn\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.178647 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fa65a108-1826-4e74-8e8a-1eae605298f3-multus-daemon-config\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.178665 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-etc-kubernetes\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.178700 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rttd\" (UniqueName: \"kubernetes.io/projected/4ca8a794-1985-4f1b-8651-03cfce7dd20c-kube-api-access-8rttd\") pod \"multus-additional-cni-plugins-5nvkq\" (UID: \"4ca8a794-1985-4f1b-8651-03cfce7dd20c\") " pod="openshift-multus/multus-additional-cni-plugins-5nvkq" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.178743 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-run-netns\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 
06:44:04.178766 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp8s9\" (UniqueName: \"kubernetes.io/projected/6f0c380f-ebc1-482f-9a91-8b08033eadf2-kube-api-access-dp8s9\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.178795 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfhbl\" (UniqueName: \"kubernetes.io/projected/fa65a108-1826-4e74-8e8a-1eae605298f3-kube-api-access-tfhbl\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.178844 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6f0c380f-ebc1-482f-9a91-8b08033eadf2-env-overrides\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.178869 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fa65a108-1826-4e74-8e8a-1eae605298f3-cni-binary-copy\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.178935 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-host-run-multus-certs\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 
06:44:04.178982 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-log-socket\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.179009 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-system-cni-dir\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.179057 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-host-var-lib-kubelet\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.179123 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4ca8a794-1985-4f1b-8651-03cfce7dd20c-cni-binary-copy\") pod \"multus-additional-cni-plugins-5nvkq\" (UID: \"4ca8a794-1985-4f1b-8651-03cfce7dd20c\") " pod="openshift-multus/multus-additional-cni-plugins-5nvkq" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.179159 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-multus-socket-dir-parent\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc 
kubenswrapper[4826]: I0129 06:44:04.179221 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-kubelet\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.179241 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-multus-cni-dir\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.179281 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-hostroot\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.179336 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4ca8a794-1985-4f1b-8651-03cfce7dd20c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5nvkq\" (UID: \"4ca8a794-1985-4f1b-8651-03cfce7dd20c\") " pod="openshift-multus/multus-additional-cni-plugins-5nvkq" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.179360 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-cnibin\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.179387 4826 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-cni-netd\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.179406 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-host-run-netns\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.179427 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-node-log\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.179448 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-run-ovn-kubernetes\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.179493 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-os-release\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.179514 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-host-var-lib-cni-bin\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.179558 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4ca8a794-1985-4f1b-8651-03cfce7dd20c-os-release\") pod \"multus-additional-cni-plugins-5nvkq\" (UID: \"4ca8a794-1985-4f1b-8651-03cfce7dd20c\") " pod="openshift-multus/multus-additional-cni-plugins-5nvkq" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.179580 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-etc-openvswitch\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.179647 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-cni-bin\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.179668 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4ca8a794-1985-4f1b-8651-03cfce7dd20c-system-cni-dir\") pod \"multus-additional-cni-plugins-5nvkq\" (UID: \"4ca8a794-1985-4f1b-8651-03cfce7dd20c\") " pod="openshift-multus/multus-additional-cni-plugins-5nvkq" Jan 29 06:44:04 crc kubenswrapper[4826]: 
I0129 06:44:04.179725 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4ca8a794-1985-4f1b-8651-03cfce7dd20c-cnibin\") pod \"multus-additional-cni-plugins-5nvkq\" (UID: \"4ca8a794-1985-4f1b-8651-03cfce7dd20c\") " pod="openshift-multus/multus-additional-cni-plugins-5nvkq" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.179753 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-run-openvswitch\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.179799 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-host-run-k8s-cni-cncf-io\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.188348 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e7e9d69-49ad-4177-a388-7d7556ddd380\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c25a1a60e522b5c06e95368ba6cd2727c0b3bb0dc42a8ff78c6afea6d7d5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24eebf52ead4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd8cce464615767426dd78c948b03894eeccf28e4abd93cd66efcbaed2887b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e746868738216535af20d17aab43ad1a4a1228bd78ee2907d46755e361569b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:04Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.200454 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:04Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.213649 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca8a794-1985-4f1b-8651-03cfce7dd20c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5nvkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:04Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.227906 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f93ba8c06281d82ec2584250e6e843d591d364356425831cd6a463234d0a7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:04Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.241082 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:04Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.252527 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.252571 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.252612 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.252642 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.252656 4826 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:04Z","lastTransitionTime":"2026-01-29T06:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.255619 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e7e9d69-49ad-4177-a388-7d7556ddd380\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c25a1a60e522b5c06e95368ba6cd2727c0b3bb0dc42a8ff78c6afea6d7d5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24eebf52ead4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd8cce464615767426dd78c948b03894eeccf28e4abd93cd66efcbaed2887b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://07e746868738216535af20d17aab43ad1a4a1228bd78ee2907d46755e361569b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:04Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.267786 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:04Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.280665 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6f0c380f-ebc1-482f-9a91-8b08033eadf2-ovnkube-config\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.280729 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6f0c380f-ebc1-482f-9a91-8b08033eadf2-ovnkube-script-lib\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.280764 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-host-var-lib-cni-multus\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.280799 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-systemd-units\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.280848 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-run-systemd\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.280879 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-var-lib-openvswitch\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.280910 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6f0c380f-ebc1-482f-9a91-8b08033eadf2-ovn-node-metrics-cert\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.280941 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-multus-conf-dir\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.280977 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.281010 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4ca8a794-1985-4f1b-8651-03cfce7dd20c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5nvkq\" (UID: \"4ca8a794-1985-4f1b-8651-03cfce7dd20c\") " pod="openshift-multus/multus-additional-cni-plugins-5nvkq" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.281044 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-run-ovn\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.281077 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fa65a108-1826-4e74-8e8a-1eae605298f3-multus-daemon-config\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.281108 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-etc-kubernetes\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.281146 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rttd\" (UniqueName: \"kubernetes.io/projected/4ca8a794-1985-4f1b-8651-03cfce7dd20c-kube-api-access-8rttd\") pod \"multus-additional-cni-plugins-5nvkq\" (UID: \"4ca8a794-1985-4f1b-8651-03cfce7dd20c\") " pod="openshift-multus/multus-additional-cni-plugins-5nvkq" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.281191 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-slash\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.281220 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfhbl\" (UniqueName: \"kubernetes.io/projected/fa65a108-1826-4e74-8e8a-1eae605298f3-kube-api-access-tfhbl\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.281253 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-run-netns\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.281284 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp8s9\" (UniqueName: 
\"kubernetes.io/projected/6f0c380f-ebc1-482f-9a91-8b08033eadf2-kube-api-access-dp8s9\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.281342 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-host-run-multus-certs\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.281388 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6f0c380f-ebc1-482f-9a91-8b08033eadf2-env-overrides\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.281436 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fa65a108-1826-4e74-8e8a-1eae605298f3-cni-binary-copy\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.281468 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-log-socket\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.281499 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-system-cni-dir\") pod \"multus-kdv64\" 
(UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.281533 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-host-var-lib-kubelet\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.281567 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4ca8a794-1985-4f1b-8651-03cfce7dd20c-cni-binary-copy\") pod \"multus-additional-cni-plugins-5nvkq\" (UID: \"4ca8a794-1985-4f1b-8651-03cfce7dd20c\") " pod="openshift-multus/multus-additional-cni-plugins-5nvkq" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.281606 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-multus-socket-dir-parent\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.281686 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4ca8a794-1985-4f1b-8651-03cfce7dd20c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5nvkq\" (UID: \"4ca8a794-1985-4f1b-8651-03cfce7dd20c\") " pod="openshift-multus/multus-additional-cni-plugins-5nvkq" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.281730 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-kubelet\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.281765 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-etc-kubernetes\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.281777 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-multus-cni-dir\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.281843 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-hostroot\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.281867 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-cnibin\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.281896 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-cni-netd\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.281922 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-host-run-netns\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.281949 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-node-log\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.281972 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-os-release\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.281993 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-host-var-lib-cni-bin\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.282015 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4ca8a794-1985-4f1b-8651-03cfce7dd20c-os-release\") pod \"multus-additional-cni-plugins-5nvkq\" (UID: \"4ca8a794-1985-4f1b-8651-03cfce7dd20c\") " pod="openshift-multus/multus-additional-cni-plugins-5nvkq" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.282043 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-run-ovn-kubernetes\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.282065 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-etc-openvswitch\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.282088 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-cni-bin\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.282110 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4ca8a794-1985-4f1b-8651-03cfce7dd20c-system-cni-dir\") pod \"multus-additional-cni-plugins-5nvkq\" (UID: \"4ca8a794-1985-4f1b-8651-03cfce7dd20c\") " pod="openshift-multus/multus-additional-cni-plugins-5nvkq" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.282131 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4ca8a794-1985-4f1b-8651-03cfce7dd20c-cnibin\") pod \"multus-additional-cni-plugins-5nvkq\" (UID: \"4ca8a794-1985-4f1b-8651-03cfce7dd20c\") " pod="openshift-multus/multus-additional-cni-plugins-5nvkq" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.282158 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-run-openvswitch\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.282158 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-multus-cni-dir\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.282178 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-host-run-k8s-cni-cncf-io\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.282224 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-hostroot\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.282229 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-slash\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.282256 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-host-run-k8s-cni-cncf-io\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") 
" pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.283139 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6f0c380f-ebc1-482f-9a91-8b08033eadf2-ovnkube-script-lib\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.283179 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-host-run-multus-certs\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.283210 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-cnibin\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.283227 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-cni-netd\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.283243 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-host-run-netns\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.283258 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"node-log\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-node-log\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.283289 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fa65a108-1826-4e74-8e8a-1eae605298f3-multus-daemon-config\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.283330 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-host-var-lib-cni-bin\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.283318 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-os-release\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.283369 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4ca8a794-1985-4f1b-8651-03cfce7dd20c-os-release\") pod \"multus-additional-cni-plugins-5nvkq\" (UID: \"4ca8a794-1985-4f1b-8651-03cfce7dd20c\") " pod="openshift-multus/multus-additional-cni-plugins-5nvkq" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.283389 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-run-openvswitch\") pod \"ovnkube-node-s7xfk\" (UID: 
\"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.283406 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-log-socket\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.283398 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4ca8a794-1985-4f1b-8651-03cfce7dd20c-cnibin\") pod \"multus-additional-cni-plugins-5nvkq\" (UID: \"4ca8a794-1985-4f1b-8651-03cfce7dd20c\") " pod="openshift-multus/multus-additional-cni-plugins-5nvkq" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.283435 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-run-ovn-kubernetes\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.283481 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-etc-openvswitch\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.283509 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-cni-bin\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 
06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.283518 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6f0c380f-ebc1-482f-9a91-8b08033eadf2-ovnkube-config\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.283535 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4ca8a794-1985-4f1b-8651-03cfce7dd20c-system-cni-dir\") pod \"multus-additional-cni-plugins-5nvkq\" (UID: \"4ca8a794-1985-4f1b-8651-03cfce7dd20c\") " pod="openshift-multus/multus-additional-cni-plugins-5nvkq" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.283566 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-run-netns\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.283594 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-systemd-units\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.283695 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-system-cni-dir\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.283713 4826 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6f0c380f-ebc1-482f-9a91-8b08033eadf2-env-overrides\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.283722 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-var-lib-openvswitch\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.283747 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-run-systemd\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.283745 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-host-var-lib-kubelet\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.283976 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-multus-conf-dir\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.284028 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.284055 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-run-ovn\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.284089 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-multus-socket-dir-parent\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.284114 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fa65a108-1826-4e74-8e8a-1eae605298f3-host-var-lib-cni-multus\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.284139 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-kubelet\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.284340 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4ca8a794-1985-4f1b-8651-03cfce7dd20c-tuning-conf-dir\") pod 
\"multus-additional-cni-plugins-5nvkq\" (UID: \"4ca8a794-1985-4f1b-8651-03cfce7dd20c\") " pod="openshift-multus/multus-additional-cni-plugins-5nvkq" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.284428 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4ca8a794-1985-4f1b-8651-03cfce7dd20c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5nvkq\" (UID: \"4ca8a794-1985-4f1b-8651-03cfce7dd20c\") " pod="openshift-multus/multus-additional-cni-plugins-5nvkq" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.284748 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4ca8a794-1985-4f1b-8651-03cfce7dd20c-cni-binary-copy\") pod \"multus-additional-cni-plugins-5nvkq\" (UID: \"4ca8a794-1985-4f1b-8651-03cfce7dd20c\") " pod="openshift-multus/multus-additional-cni-plugins-5nvkq" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.285889 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fa65a108-1826-4e74-8e8a-1eae605298f3-cni-binary-copy\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.285991 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca8a794-1985-4f1b-8651-03cfce7dd20c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5nvkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:04Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.299240 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6f0c380f-ebc1-482f-9a91-8b08033eadf2-ovn-node-metrics-cert\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.302478 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rttd\" (UniqueName: \"kubernetes.io/projected/4ca8a794-1985-4f1b-8651-03cfce7dd20c-kube-api-access-8rttd\") pod \"multus-additional-cni-plugins-5nvkq\" (UID: \"4ca8a794-1985-4f1b-8651-03cfce7dd20c\") " pod="openshift-multus/multus-additional-cni-plugins-5nvkq" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.305570 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfhbl\" (UniqueName: \"kubernetes.io/projected/fa65a108-1826-4e74-8e8a-1eae605298f3-kube-api-access-tfhbl\") pod \"multus-kdv64\" (UID: \"fa65a108-1826-4e74-8e8a-1eae605298f3\") " pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.312484 4826 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clu
ster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.
168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:04Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.313182 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp8s9\" (UniqueName: \"kubernetes.io/projected/6f0c380f-ebc1-482f-9a91-8b08033eadf2-kube-api-access-dp8s9\") pod \"ovnkube-node-s7xfk\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.324437 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:04Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.337638 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca127c5fe560e373e07f8aecf0e41b5f18be23bf64a56a99d1f5a8809a53471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T06:44:04Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.350625 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-llzmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:04Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.355364 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.355418 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.355436 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.355460 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.355476 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:04Z","lastTransitionTime":"2026-01-29T06:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.375906 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0c380f-ebc1-482f-9a91-8b08033eadf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s7xfk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:04Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.388647 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.393465 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:04Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.398955 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-kdv64" Jan 29 06:44:04 crc kubenswrapper[4826]: W0129 06:44:04.400660 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ca8a794_1985_4f1b_8651_03cfce7dd20c.slice/crio-125bda194d758f3491ddefb5e2ffd4a35bb7dea38edac0510a7f80c18f5bd0b1 WatchSource:0}: Error finding container 125bda194d758f3491ddefb5e2ffd4a35bb7dea38edac0510a7f80c18f5bd0b1: Status 404 returned error can't find the container with id 125bda194d758f3491ddefb5e2ffd4a35bb7dea38edac0510a7f80c18f5bd0b1 Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.403906 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.405425 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tdzw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"550bdc9c-0324-4f3c-98df-95fbf1029eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcc9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tdzw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:04Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:04 crc kubenswrapper[4826]: W0129 06:44:04.412281 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa65a108_1826_4e74_8e8a_1eae605298f3.slice/crio-1ed3148b201bcca5f3ef45bc4c6074899733ce959bcc48695a25127f8453acc1 WatchSource:0}: Error finding container 1ed3148b201bcca5f3ef45bc4c6074899733ce959bcc48695a25127f8453acc1: Status 404 returned error can't find the container with id 1ed3148b201bcca5f3ef45bc4c6074899733ce959bcc48695a25127f8453acc1 Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 
06:44:04.420498 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kdv64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa65a108-1826-4e74-8e8a-1eae605298f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kdv64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:04Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.460723 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.460791 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.460804 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.460848 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.460860 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:04Z","lastTransitionTime":"2026-01-29T06:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.563900 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.563956 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.563967 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.564081 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.564100 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:04Z","lastTransitionTime":"2026-01-29T06:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.667132 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.667183 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.667195 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.667215 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.667227 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:04Z","lastTransitionTime":"2026-01-29T06:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.745377 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 19:45:05.88911593 +0000 UTC Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.769415 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.769448 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.769460 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.769476 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.769486 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:04Z","lastTransitionTime":"2026-01-29T06:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.807982 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.808106 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:44:04 crc kubenswrapper[4826]: E0129 06:44:04.808122 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:44:04 crc kubenswrapper[4826]: E0129 06:44:04.808285 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.872740 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.872796 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.872808 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.872828 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.872842 4826 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:04Z","lastTransitionTime":"2026-01-29T06:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.963531 4826 generic.go:334] "Generic (PLEG): container finished" podID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerID="6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7" exitCode=0 Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.963599 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" event={"ID":"6f0c380f-ebc1-482f-9a91-8b08033eadf2","Type":"ContainerDied","Data":"6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7"} Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.963664 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" event={"ID":"6f0c380f-ebc1-482f-9a91-8b08033eadf2","Type":"ContainerStarted","Data":"a84227bc1252087022d1aaf8e51d27e3d58ecd26a7c7ca6ff62b401818131963"} Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.967539 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kdv64" event={"ID":"fa65a108-1826-4e74-8e8a-1eae605298f3","Type":"ContainerStarted","Data":"7e804e3c0c839d032b5f1c678a3b0646e1b4792bcc5fac6cbd49dd2cb5bc3209"} Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.967607 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kdv64" event={"ID":"fa65a108-1826-4e74-8e8a-1eae605298f3","Type":"ContainerStarted","Data":"1ed3148b201bcca5f3ef45bc4c6074899733ce959bcc48695a25127f8453acc1"} Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.969656 4826 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-dns/node-resolver-tdzw4" event={"ID":"550bdc9c-0324-4f3c-98df-95fbf1029eda","Type":"ContainerStarted","Data":"42e6b57814763e9d213b1dae98fbad4d36b326203d58b6542d27010170f6565c"} Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.969709 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-tdzw4" event={"ID":"550bdc9c-0324-4f3c-98df-95fbf1029eda","Type":"ContainerStarted","Data":"167475ccbf232e8a53e3fecd782ef4f29b9280a21bfc2dec83e2da617a19c2f8"} Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.972868 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerStarted","Data":"262e4dafc3b9dfefff3c3f64bcbf34a713ed43c895ef8b11cc6ca240f9348420"} Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.972902 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerStarted","Data":"0adf6fcaa6a5cc342f8af2dbb14c7fbaf0c953f17ee6e8ef3156e57c3893b93f"} Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.972917 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerStarted","Data":"01be7c10cdabc7feea1939a2ebdeaf512b0500610f8622b1f5986b7550ea28f8"} Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.974961 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" event={"ID":"4ca8a794-1985-4f1b-8651-03cfce7dd20c","Type":"ContainerStarted","Data":"a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697"} Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.974999 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-5nvkq" event={"ID":"4ca8a794-1985-4f1b-8651-03cfce7dd20c","Type":"ContainerStarted","Data":"125bda194d758f3491ddefb5e2ffd4a35bb7dea38edac0510a7f80c18f5bd0b1"} Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.975351 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.975400 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.975420 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.975444 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.975461 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:04Z","lastTransitionTime":"2026-01-29T06:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:04 crc kubenswrapper[4826]: I0129 06:44:04.991103 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:04Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.011831 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f93ba8c06281d82ec2584250e6e843d591d364356425831cd6a463234d0a7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.028463 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.050244 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca8a794-1985-4f1b-8651-03cfce7dd20c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5nvkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.064417 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e7e9d69-49ad-4177-a388-7d7556ddd380\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c25a1a60e522b5c06e95368ba6cd2727c0b3bb0dc42a8ff78c6afea6d7d5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24eebf52ead4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd8cce464615767426dd78c948b03894eeccf28e4abd93cd66efcbaed2887b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e746868738216535af20d17aab43ad1a4a1228bd78ee2907d46755e361569b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.079700 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.079748 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.079760 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.079780 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.079792 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:05Z","lastTransitionTime":"2026-01-29T06:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.080484 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca127c5fe560e373e07f8aecf0e41b5f18be23bf64a56a99d1f5a8809a53471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.099751 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-llzmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.138813 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0c380f-ebc1-482f-9a91-8b08033eadf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s7xfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.155764 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335
e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.173610 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.182251 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.182310 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.182325 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.182344 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.182358 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:05Z","lastTransitionTime":"2026-01-29T06:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.187553 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tdzw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"550bdc9c-0324-4f3c-98df-95fbf1029eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcc9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tdzw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.203860 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kdv64" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa65a108-1826-4e74-8e8a-1eae605298f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kdv64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.217446 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.228035 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.237034 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tdzw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"550bdc9c-0324-4f3c-98df-95fbf1029eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42e6b57814763e9d213b1dae98fbad4d36b326203d58b6542d27010170f6565c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcc9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tdzw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.248127 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kdv64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa65a108-1826-4e74-8e8a-1eae605298f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e804e3c0c839d032b5f1c678a3b0646e1b4792bcc5fac6cbd49dd2cb5bc3209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kdv64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.271013 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f93ba8c06281d82ec2584250e6e843d591d364356425831cd6a463234d0a7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.285009 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.285055 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.285066 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.285083 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.285095 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:05Z","lastTransitionTime":"2026-01-29T06:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.286651 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.298731 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e7e9d69-49ad-4177-a388-7d7556ddd380\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c25a1a60e522b5c06e95368ba6cd2727c0b3bb0dc42a8ff78c6afea6d7d5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24eebf52ead4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd8cce464615767426dd78c948b03894eeccf28e4abd93cd66efcbaed2887b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e746868738216535af20d17aab43ad1a4a1228bd78ee2907d46755e361569b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.311975 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.330115 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca8a794-1985-4f1b-8651-03cfce7dd20c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5nvkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.342540 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.355787 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca127c5fe560e373e07f8aecf0e41b5f18be23bf64a56a99d1f5a8809a53471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.368750 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://262e4dafc3b9dfefff3c3f64bcbf34a713ed43c895ef8b11cc6ca240f9348420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adf6fcaa6a5cc342f8af2dbb14c7fbaf0c953f17ee6e8ef3156e57c3893b93f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-llzmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.390711 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 
06:44:05.390776 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.390808 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.390838 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.390858 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:05Z","lastTransitionTime":"2026-01-29T06:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.396451 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0c380f-ebc1-482f-9a91-8b08033eadf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s7xfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.415272 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:3
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.494427 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.494476 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.494486 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.494503 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.494515 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:05Z","lastTransitionTime":"2026-01-29T06:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.596644 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.597026 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.597039 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.597057 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.597067 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:05Z","lastTransitionTime":"2026-01-29T06:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.699372 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.699430 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.699448 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.699480 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.699499 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:05Z","lastTransitionTime":"2026-01-29T06:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.746622 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 06:07:32.032528784 +0000 UTC Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.802908 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.803099 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.803160 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.803226 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.803290 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:05Z","lastTransitionTime":"2026-01-29T06:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.808185 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:44:05 crc kubenswrapper[4826]: E0129 06:44:05.808351 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.907170 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.907764 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.907872 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.907966 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.908038 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:05Z","lastTransitionTime":"2026-01-29T06:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.981399 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" event={"ID":"6f0c380f-ebc1-482f-9a91-8b08033eadf2","Type":"ContainerStarted","Data":"436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a"} Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.981452 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" event={"ID":"6f0c380f-ebc1-482f-9a91-8b08033eadf2","Type":"ContainerStarted","Data":"a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a"} Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.981466 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" event={"ID":"6f0c380f-ebc1-482f-9a91-8b08033eadf2","Type":"ContainerStarted","Data":"8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220"} Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.981479 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" event={"ID":"6f0c380f-ebc1-482f-9a91-8b08033eadf2","Type":"ContainerStarted","Data":"82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b"} Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.987715 4826 generic.go:334] "Generic (PLEG): container finished" podID="4ca8a794-1985-4f1b-8651-03cfce7dd20c" containerID="a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697" exitCode=0 Jan 29 06:44:05 crc kubenswrapper[4826]: I0129 06:44:05.987749 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" event={"ID":"4ca8a794-1985-4f1b-8651-03cfce7dd20c","Type":"ContainerDied","Data":"a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697"} Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.000819 4826 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f93ba8c06281d82ec2584250e6e843d591d364356425831cd6a463234d0a7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:05Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.010523 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.010583 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.010595 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.010614 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.010627 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:06Z","lastTransitionTime":"2026-01-29T06:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.013887 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:06Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.027413 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e7e9d69-49ad-4177-a388-7d7556ddd380\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c25a1a60e522b5c06e95368ba6cd2727c0b3bb0dc42a8ff78c6afea6d7d5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24eebf52ead4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd8cce464615767426dd78c948b03894eeccf28e4abd93cd66efcbaed2887b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e746868738216535af20d17aab43ad1a4a1228bd78ee2907d46755e361569b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:06Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.040095 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:06Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.062541 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca8a794-1985-4f1b-8651-03cfce7dd20c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5nvkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:06Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.077966 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:3
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:06Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.089426 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:06Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.102321 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca127c5fe560e373e07f8aecf0e41b5f18be23bf64a56a99d1f5a8809a53471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T06:44:06Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.116968 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.117150 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.117245 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.117358 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.117454 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:06Z","lastTransitionTime":"2026-01-29T06:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.117161 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://262e4dafc3b9dfefff3c3f64bcbf34a713ed43c895ef8b11cc6ca240f9348420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adf6fcaa6a5cc342f8af2dbb14c7fbaf0c953f17ee6e8ef3156e57c3893b93f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-llzmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:06Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.136198 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0c380f-ebc1-482f-9a91-8b08033eadf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s7xfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:06Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.149440 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:06Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.163001 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tdzw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"550bdc9c-0324-4f3c-98df-95fbf1029eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42e6b57814763e9d213b1dae98fbad4d36b326203d58b6542d27010170f6565c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcc9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tdzw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:06Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.178813 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kdv64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa65a108-1826-4e74-8e8a-1eae605298f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e804e3c0c839d032b5f1c678a3b0646e1b4792bcc5fac6cbd49dd2cb5bc3209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kdv64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:06Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.220568 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.220612 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.220622 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.220640 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.220651 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:06Z","lastTransitionTime":"2026-01-29T06:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.324016 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.324075 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.324088 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.324107 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.324123 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:06Z","lastTransitionTime":"2026-01-29T06:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.426621 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.426676 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.426688 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.426709 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.426721 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:06Z","lastTransitionTime":"2026-01-29T06:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.529550 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.529618 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.529637 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.529665 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.529688 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:06Z","lastTransitionTime":"2026-01-29T06:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.565173 4826 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.632997 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.633081 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.633102 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.633132 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.633151 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:06Z","lastTransitionTime":"2026-01-29T06:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.736394 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.736445 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.736462 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.736484 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.736499 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:06Z","lastTransitionTime":"2026-01-29T06:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.747032 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 15:53:13.902406092 +0000 UTC Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.808536 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.808642 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:44:06 crc kubenswrapper[4826]: E0129 06:44:06.808731 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:44:06 crc kubenswrapper[4826]: E0129 06:44:06.808859 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.835823 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kdv64" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa65a108-1826-4e74-8e8a-1eae605298f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e804e3c0c839d032b5f1c678a3b0646e1b4792bcc5fac6cbd49dd2cb5bc3209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kdv64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:06Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.839631 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:06 crc 
kubenswrapper[4826]: I0129 06:44:06.839707 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.839727 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.839753 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.839771 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:06Z","lastTransitionTime":"2026-01-29T06:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.858015 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:06Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.873979 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tdzw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"550bdc9c-0324-4f3c-98df-95fbf1029eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42e6b57814763e9d213b1dae98fbad4d36b326203d58b6542d27010170f6565c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcc9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tdzw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:06Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.898390 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f93ba8c06281d82ec2584250e6e843d591d364356425831cd6a463234d0a7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:06Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.917532 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:06Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.938724 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca8a794-1985-4f1b-8651-03cfce7dd20c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5nvkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:06Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.941746 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.941801 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:06 crc 
kubenswrapper[4826]: I0129 06:44:06.941821 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.941847 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.941866 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:06Z","lastTransitionTime":"2026-01-29T06:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.955290 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e7e9d69-49ad-4177-a388-7d7556ddd380\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c25a1a60e522b5c06e95368ba6cd2727c0b3bb0dc42a8ff78c6afea6d7d5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24eebf52ead4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd8cce464615767426dd78c948b03894eeccf28e4abd93cd66efcbaed2887b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e746868738216535af20d17aab43ad1a4a1228bd78ee2907d46755e361569b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:06Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.973187 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:06Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.986010 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://262e4dafc3b9dfefff3c3f64bcbf34a713ed43c895ef8b11cc6ca240f9348420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adf6fcaa6a5cc342f8af2dbb14c7fbaf0c953f1
7ee6e8ef3156e57c3893b93f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-llzmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:06Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.993925 4826 generic.go:334] "Generic (PLEG): container finished" podID="4ca8a794-1985-4f1b-8651-03cfce7dd20c" containerID="30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f" exitCode=0 Jan 29 06:44:06 crc kubenswrapper[4826]: I0129 06:44:06.994011 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" 
event={"ID":"4ca8a794-1985-4f1b-8651-03cfce7dd20c","Type":"ContainerDied","Data":"30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f"} Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.000076 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" event={"ID":"6f0c380f-ebc1-482f-9a91-8b08033eadf2","Type":"ContainerStarted","Data":"ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317"} Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.000113 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" event={"ID":"6f0c380f-ebc1-482f-9a91-8b08033eadf2","Type":"ContainerStarted","Data":"57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24"} Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.011404 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0c380f-ebc1-482f-9a91-8b08033eadf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s7xfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:07Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.035990 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:3
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:07Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.046078 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.046125 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.046141 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.046164 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.046181 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:07Z","lastTransitionTime":"2026-01-29T06:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.059366 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:07Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.079245 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca127c5fe560e373e07f8aecf0e41b5f18be23bf64a56a99d1f5a8809a53471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T06:44:07Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.097146 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:07Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.115738 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tdzw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"550bdc9c-0324-4f3c-98df-95fbf1029eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42e6b57814763e9d213b1dae98fbad4d36b326203d58b6542d27010170f6565c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcc9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tdzw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:07Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.138034 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kdv64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa65a108-1826-4e74-8e8a-1eae605298f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e804e3c0c839d032b5f1c678a3b0646e1b4792bcc5fac6cbd49dd2cb5bc3209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kdv64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:07Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.149276 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.149360 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.149380 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.149405 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.149423 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:07Z","lastTransitionTime":"2026-01-29T06:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.158363 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f93ba8c06281d82ec2584250e6e843d591d364356425831cd6a463234d0a7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:07Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.179169 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:07Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.198523 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e7e9d69-49ad-4177-a388-7d7556ddd380\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c25a1a60e522b5c06e95368ba6cd2727c0b3bb0dc42a8ff78c6afea6d7d5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24eebf52ead4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd8cce464615767426dd78c948b03894eeccf28e4abd93cd66efcbaed2887b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e746868738216535af20d17aab43ad1a4a1228bd78ee2907d46755e361569b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:07Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.216223 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:07Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.239201 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca8a794-1985-4f1b-8651-03cfce7dd20c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5nvkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:07Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.253410 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.253467 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.253493 4826 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.253508 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.253527 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:07Z","lastTransitionTime":"2026-01-29T06:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.256525 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:07Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.275347 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca127c5fe560e373e07f8aecf0e41b5f18be23bf64a56a99d1f5a8809a53471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T06:44:07Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.294601 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://262e4dafc3b9dfefff3c3f64bcbf34a713ed43c895ef8b11cc6ca240f9348420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adf6fcaa6a5cc342f8af2dbb14c7fbaf0c953f17ee6e8ef3156e57c3893b93f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-llzmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:07Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.318196 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0c380f-ebc1-482f-9a91-8b08033eadf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s7xfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:07Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.337223 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335
e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:07Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.356686 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.356733 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.356745 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.356780 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 
29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.356793 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:07Z","lastTransitionTime":"2026-01-29T06:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.460134 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.460193 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.460213 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.460237 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.460253 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:07Z","lastTransitionTime":"2026-01-29T06:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.563654 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.563705 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.563718 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.563737 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.563750 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:07Z","lastTransitionTime":"2026-01-29T06:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.667372 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.667491 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.667513 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.667546 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.667568 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:07Z","lastTransitionTime":"2026-01-29T06:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.747474 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 09:49:22.415330948 +0000 UTC Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.771251 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.771347 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.771366 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.771393 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.771414 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:07Z","lastTransitionTime":"2026-01-29T06:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.808046 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:44:07 crc kubenswrapper[4826]: E0129 06:44:07.808346 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.875738 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.875799 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.875811 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.875833 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.875849 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:07Z","lastTransitionTime":"2026-01-29T06:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.880824 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-b69cv"] Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.881830 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-b69cv" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.884596 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.884999 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.886146 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.886745 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.903658 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f93ba8c06281d82ec2584250e6e843d591d364356425831cd6a463234d0a7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:07Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.928052 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:07Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.945242 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b69cv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3bbd3c9-5a5b-4b55-9742-0fe17ceab252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d9qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b69cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:07Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.966948 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e7e9d69-49ad-4177-a388-7d7556ddd380\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c25a1a60e522b5c06e95368ba6cd2727c0b3bb0dc42a8ff78c6afea6d7d5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24eebf52ead4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd8cce464615767426dd78c948b03894eeccf28e4abd93cd66efcbaed2887b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e746868738216535af20d17aab43ad1a4a1228bd78ee2907d46755e361569b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:07Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.979760 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.979829 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.979849 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.979879 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.979898 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:07Z","lastTransitionTime":"2026-01-29T06:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:07 crc kubenswrapper[4826]: I0129 06:44:07.987258 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:07Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.007800 4826 generic.go:334] "Generic (PLEG): container finished" podID="4ca8a794-1985-4f1b-8651-03cfce7dd20c" containerID="345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d" exitCode=0 Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.007877 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" event={"ID":"4ca8a794-1985-4f1b-8651-03cfce7dd20c","Type":"ContainerDied","Data":"345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d"} Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.014470 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca8a794-1985-4f1b-8651-03cfce7dd20c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5nvkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:08Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.023866 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e3bbd3c9-5a5b-4b55-9742-0fe17ceab252-host\") pod \"node-ca-b69cv\" (UID: \"e3bbd3c9-5a5b-4b55-9742-0fe17ceab252\") " pod="openshift-image-registry/node-ca-b69cv" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.023973 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d9qn\" (UniqueName: \"kubernetes.io/projected/e3bbd3c9-5a5b-4b55-9742-0fe17ceab252-kube-api-access-4d9qn\") pod \"node-ca-b69cv\" (UID: \"e3bbd3c9-5a5b-4b55-9742-0fe17ceab252\") " pod="openshift-image-registry/node-ca-b69cv" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.024017 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e3bbd3c9-5a5b-4b55-9742-0fe17ceab252-serviceca\") pod \"node-ca-b69cv\" (UID: \"e3bbd3c9-5a5b-4b55-9742-0fe17ceab252\") " pod="openshift-image-registry/node-ca-b69cv" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.038994 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:08Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.059404 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:08Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.078590 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca127c5fe560e373e07f8aecf0e41b5f18be23bf64a56a99d1f5a8809a53471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T06:44:08Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.082934 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.083001 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.083019 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.083050 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.083068 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:08Z","lastTransitionTime":"2026-01-29T06:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.093483 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://262e4dafc3b9dfefff3c3f64bcbf34a713ed43c895ef8b11cc6ca240f9348420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adf6fcaa6a5cc342f8af2dbb14c7fbaf0c953f17ee6e8ef3156e57c3893b93f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-llzmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:08Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.124787 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0c380f-ebc1-482f-9a91-8b08033eadf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s7xfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:08Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.125077 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e3bbd3c9-5a5b-4b55-9742-0fe17ceab252-serviceca\") pod \"node-ca-b69cv\" (UID: \"e3bbd3c9-5a5b-4b55-9742-0fe17ceab252\") " pod="openshift-image-registry/node-ca-b69cv" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.125273 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e3bbd3c9-5a5b-4b55-9742-0fe17ceab252-host\") pod \"node-ca-b69cv\" (UID: \"e3bbd3c9-5a5b-4b55-9742-0fe17ceab252\") " pod="openshift-image-registry/node-ca-b69cv" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.125364 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d9qn\" (UniqueName: \"kubernetes.io/projected/e3bbd3c9-5a5b-4b55-9742-0fe17ceab252-kube-api-access-4d9qn\") pod \"node-ca-b69cv\" (UID: \"e3bbd3c9-5a5b-4b55-9742-0fe17ceab252\") " pod="openshift-image-registry/node-ca-b69cv" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.125554 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/e3bbd3c9-5a5b-4b55-9742-0fe17ceab252-host\") pod \"node-ca-b69cv\" (UID: \"e3bbd3c9-5a5b-4b55-9742-0fe17ceab252\") " pod="openshift-image-registry/node-ca-b69cv" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.127730 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e3bbd3c9-5a5b-4b55-9742-0fe17ceab252-serviceca\") pod \"node-ca-b69cv\" (UID: \"e3bbd3c9-5a5b-4b55-9742-0fe17ceab252\") " pod="openshift-image-registry/node-ca-b69cv" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.142906 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:08Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.158211 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d9qn\" (UniqueName: \"kubernetes.io/projected/e3bbd3c9-5a5b-4b55-9742-0fe17ceab252-kube-api-access-4d9qn\") pod \"node-ca-b69cv\" (UID: \"e3bbd3c9-5a5b-4b55-9742-0fe17ceab252\") " pod="openshift-image-registry/node-ca-b69cv" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.158603 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tdzw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"550bdc9c-0324-4f3c-98df-95fbf1029eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42e6b57814763e9d213b1dae98fbad4d36b326203d58b6542d27010170f6565c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcc9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tdzw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:08Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.177923 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kdv64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa65a108-1826-4e74-8e8a-1eae605298f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e804e3c0c839d032b5f1c678a3b0646e1b4792bcc5fac6cbd49dd2cb5bc3209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kdv64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:08Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.189883 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.189935 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.189951 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.189977 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.189994 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:08Z","lastTransitionTime":"2026-01-29T06:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.195379 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f93ba8c06281d82ec2584250e6e843d591d364356425831cd6a463234d0a7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:08Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.206363 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-b69cv" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.213899 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:08Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.229572 4826 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-b69cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3bbd3c9-5a5b-4b55-9742-0fe17ceab252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d9qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b69cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:08Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.250660 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e7e9d69-49ad-4177-a388-7d7556ddd380\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c25a1a60e522b5c06e95368ba6cd2727c0b3bb0dc42a8ff78c6afea6d7d5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24eebf52ead4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd8cce464615767426dd78c948b03894eeccf28e4abd93cd66efcbaed2887b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e746868738216535af20d17aab43ad1a4a1228bd78ee2907d46755e361569b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:08Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.268334 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:08Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.293366 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca8a794-1985-4f1b-8651-03cfce7dd20c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5nvkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:08Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 
06:44:08.294041 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.294105 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.294123 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.294144 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.294160 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:08Z","lastTransitionTime":"2026-01-29T06:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.312008 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:08Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.327970 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:08Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.342228 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca127c5fe560e373e07f8aecf0e41b5f18be23bf64a56a99d1f5a8809a53471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T06:44:08Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.357532 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://262e4dafc3b9dfefff3c3f64bcbf34a713ed43c895ef8b11cc6ca240f9348420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adf6fcaa6a5cc342f8af2dbb14c7fbaf0c953f17ee6e8ef3156e57c3893b93f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-llzmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:08Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.384450 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0c380f-ebc1-482f-9a91-8b08033eadf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s7xfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:08Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.396947 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.396996 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.397007 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.397025 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.397038 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:08Z","lastTransitionTime":"2026-01-29T06:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.400059 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:08Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.412351 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tdzw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"550bdc9c-0324-4f3c-98df-95fbf1029eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42e6b57814763e9d213b1dae98fbad4d36b326203d58b6542d27010170f6565c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcc9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tdzw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:08Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.429878 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kdv64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa65a108-1826-4e74-8e8a-1eae605298f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e804e3c0c839d032b5f1c678a3b0646e1b4792bcc5fac6cbd49dd2cb5bc3209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kdv64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:08Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.500978 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.501027 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.501045 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.501071 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.501085 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:08Z","lastTransitionTime":"2026-01-29T06:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.604947 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.604991 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.605005 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.605025 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.605042 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:08Z","lastTransitionTime":"2026-01-29T06:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.708946 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.709000 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.709017 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.709041 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.709060 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:08Z","lastTransitionTime":"2026-01-29T06:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.747889 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 15:42:33.669105997 +0000 UTC Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.808580 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.808769 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:44:08 crc kubenswrapper[4826]: E0129 06:44:08.808936 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:44:08 crc kubenswrapper[4826]: E0129 06:44:08.809203 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.811777 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.811832 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.811843 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.811869 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.811882 4826 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:08Z","lastTransitionTime":"2026-01-29T06:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.916147 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.916214 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.916233 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.916260 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:08 crc kubenswrapper[4826]: I0129 06:44:08.916287 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:08Z","lastTransitionTime":"2026-01-29T06:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.024487 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.027910 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.027935 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.027973 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.027991 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:09Z","lastTransitionTime":"2026-01-29T06:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.030727 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" event={"ID":"6f0c380f-ebc1-482f-9a91-8b08033eadf2","Type":"ContainerStarted","Data":"4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2"} Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.033701 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-b69cv" event={"ID":"e3bbd3c9-5a5b-4b55-9742-0fe17ceab252","Type":"ContainerStarted","Data":"1589556806e44a6795b7caf7695e584f4d9682e652350d2b4813197b62110119"} Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.033736 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-b69cv" event={"ID":"e3bbd3c9-5a5b-4b55-9742-0fe17ceab252","Type":"ContainerStarted","Data":"50dff9397a0f2695375fb84b9a702b0932d52be72714e2955c2303a29ed7e8f8"} Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.042027 4826 generic.go:334] "Generic (PLEG): container finished" podID="4ca8a794-1985-4f1b-8651-03cfce7dd20c" containerID="e029777389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5" exitCode=0 Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.042091 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" event={"ID":"4ca8a794-1985-4f1b-8651-03cfce7dd20c","Type":"ContainerDied","Data":"e029777389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5"} Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.061214 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:09Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.082801 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b69cv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3bbd3c9-5a5b-4b55-9742-0fe17ceab252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1589556806e44a6795b7caf7695e584f4d9682e652350d2b4813197b62110119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d9qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b69cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:09Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.113024 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f93ba8c06281d82ec2584250e6e843d591d364356425831cd6a463234d0a7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:09Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.131927 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:09Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.133532 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.134030 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.134369 4826 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.134465 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.134590 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:09Z","lastTransitionTime":"2026-01-29T06:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.149894 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca8a794-1985-4f1b-8651-03cfce7dd20c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c9
47632ba25b628f6d351056e0a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5nvkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:09Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.167997 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e7e9d69-49ad-4177-a388-7d7556ddd380\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c25a1a60e522b5c06e95368ba6cd2727c0b3bb0dc42a8ff78c6afea6d7d5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24eebf52ead4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd8cce464615767426dd78c948b03894eeccf28e4abd93cd66efcbaed2887b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e746868738216535af20d17aab43ad1a4a1228bd78ee2907d46755e361569b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:09Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.182100 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca127c5fe560e373e07f8aecf0e41b5f18be23bf64a56a99d1f5a8809a53471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T06:44:09Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.202640 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://262e4dafc3b9dfefff3c3f64bcbf34a713ed43c895ef8b11cc6ca240f9348420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adf6fcaa6a5cc342f8af2dbb14c7fbaf0c953f17ee6e8ef3156e57c3893b93f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-llzmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:09Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.220351 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0c380f-ebc1-482f-9a91-8b08033eadf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s7xfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:09Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.234735 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335
e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:09Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.237280 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.237357 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.237375 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.237402 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 
29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.237423 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:09Z","lastTransitionTime":"2026-01-29T06:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.247486 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:09Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.258798 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tdzw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"550bdc9c-0324-4f3c-98df-95fbf1029eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42e6b57814763e9d213b1dae98fbad4d36b326203d58b6542d27010170f6565c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcc9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tdzw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:09Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.271733 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kdv64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa65a108-1826-4e74-8e8a-1eae605298f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e804e3c0c839d032b5f1c678a3b0646e1b4792bcc5fac6cbd49dd2cb5bc3209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kdv64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:09Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.284876 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:09Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.302217 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:3
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:09Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.314811 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:09Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.325776 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca127c5fe560e373e07f8aecf0e41b5f18be23bf64a56a99d1f5a8809a53471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T06:44:09Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.337702 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://262e4dafc3b9dfefff3c3f64bcbf34a713ed43c895ef8b11cc6ca240f9348420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adf6fcaa6a5cc342f8af2dbb14c7fbaf0c953f17ee6e8ef3156e57c3893b93f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-llzmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:09Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.344804 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 
06:44:09.344858 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.344870 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.344889 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.344901 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:09Z","lastTransitionTime":"2026-01-29T06:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.369548 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0c380f-ebc1-482f-9a91-8b08033eadf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s7xfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:09Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.389373 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:09Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.404605 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tdzw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"550bdc9c-0324-4f3c-98df-95fbf1029eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42e6b57814763e9d213b1dae98fbad4d36b326203d58b6542d27010170f6565c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcc9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tdzw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:09Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.421152 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kdv64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa65a108-1826-4e74-8e8a-1eae605298f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e804e3c0c839d032b5f1c678a3b0646e1b4792bcc5fac6cbd49dd2cb5bc3209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kdv64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:09Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.437472 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f93ba8c06281d82ec2584250e6e843d591d364356425831cd6a463234d0a7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:09Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.447373 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.447414 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.447426 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.447446 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.447463 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:09Z","lastTransitionTime":"2026-01-29T06:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.452946 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:09Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.466744 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b69cv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3bbd3c9-5a5b-4b55-9742-0fe17ceab252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1589556806e44a6795b7caf7695e584f4d9682e652350d2b4813197b62110119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d9qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b69cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:09Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.482719 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e7e9d69-49ad-4177-a388-7d7556ddd380\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c25a1a60e522b5c06e95368ba6cd2727c0b3bb0dc42a8ff78c6afea6d7d5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24eebf52ead4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd8cce464615767426dd78c948b03894eeccf28e4abd93cd66efcbaed2887b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e746868738216535af20d17aab43ad1a4a1228bd78ee2907d46755e361569b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:09Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.499252 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:09Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.516094 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca8a794-1985-4f1b-8651-03cfce7dd20c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e029777389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e029777389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5nvkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:09Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.550782 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.550856 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.550878 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.550907 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.550926 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:09Z","lastTransitionTime":"2026-01-29T06:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.653768 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.654256 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.654277 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.654335 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.654356 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:09Z","lastTransitionTime":"2026-01-29T06:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.748716 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 08:24:28.385620491 +0000 UTC Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.757392 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.757438 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.757452 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.757470 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.757484 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:09Z","lastTransitionTime":"2026-01-29T06:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.807833 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:44:09 crc kubenswrapper[4826]: E0129 06:44:09.808022 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.860267 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.860352 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.860373 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.860404 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.860426 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:09Z","lastTransitionTime":"2026-01-29T06:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.962745 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.962809 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.962826 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.962851 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:09 crc kubenswrapper[4826]: I0129 06:44:09.962869 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:09Z","lastTransitionTime":"2026-01-29T06:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.050810 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" event={"ID":"4ca8a794-1985-4f1b-8651-03cfce7dd20c","Type":"ContainerStarted","Data":"319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476"} Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.065443 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.065493 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.065507 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.065526 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.065540 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:10Z","lastTransitionTime":"2026-01-29T06:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.072531 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f93ba8c06281d82ec2584250e6e843d591d364356425831cd6a463234d0a7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:10Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.089816 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:10Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.105109 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b69cv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3bbd3c9-5a5b-4b55-9742-0fe17ceab252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1589556806e44a6795b7caf7695e584f4d9682e652350d2b4813197b62110119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d9qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b69cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:10Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.126218 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e7e9d69-49ad-4177-a388-7d7556ddd380\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c25a1a60e522b5c06e95368ba6cd2727c0b3bb0dc42a8ff78c6afea6d7d5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24eebf52ead4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd8cce464615767426dd78c948b03894eeccf28e4abd93cd66efcbaed2887b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e746868738216535af20d17aab43ad1a4a1228bd78ee2907d46755e361569b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:10Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.140392 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:10Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.161275 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca8a794-1985-4f1b-8651-03cfce7dd20c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e029777389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e029777389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc
84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5nvkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:10Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.168756 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.168825 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.168838 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.168858 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.168872 4826 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:10Z","lastTransitionTime":"2026-01-29T06:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.178814 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:10Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.193613 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca127c5fe560e373e07f8aecf0e41b5f18be23bf64a56a99d1f5a8809a53471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T06:44:10Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.207702 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://262e4dafc3b9dfefff3c3f64bcbf34a713ed43c895ef8b11cc6ca240f9348420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adf6fcaa6a5cc342f8af2dbb14c7fbaf0c953f17ee6e8ef3156e57c3893b93f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-llzmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:10Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.237017 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0c380f-ebc1-482f-9a91-8b08033eadf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s7xfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:10Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.256045 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335
e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:10Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.271411 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.271452 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.271466 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.271487 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 
29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.271499 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:10Z","lastTransitionTime":"2026-01-29T06:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.276577 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:10Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.292623 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tdzw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"550bdc9c-0324-4f3c-98df-95fbf1029eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42e6b57814763e9d213b1dae98fbad4d36b326203d58b6542d27010170f6565c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcc9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tdzw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:10Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.312639 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kdv64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa65a108-1826-4e74-8e8a-1eae605298f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e804e3c0c839d032b5f1c678a3b0646e1b4792bcc5fac6cbd49dd2cb5bc3209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kdv64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:10Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.374724 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.374794 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.374813 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.374839 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.374858 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:10Z","lastTransitionTime":"2026-01-29T06:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.479013 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.479091 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.479111 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.479144 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.479166 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:10Z","lastTransitionTime":"2026-01-29T06:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.583586 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.583645 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.583662 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.583686 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.583703 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:10Z","lastTransitionTime":"2026-01-29T06:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.687394 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.687465 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.687487 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.687516 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.687535 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:10Z","lastTransitionTime":"2026-01-29T06:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.749356 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 13:22:53.040250609 +0000 UTC Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.790351 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.790406 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.790428 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.790452 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.790470 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:10Z","lastTransitionTime":"2026-01-29T06:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.807819 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.807867 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:44:10 crc kubenswrapper[4826]: E0129 06:44:10.807988 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:44:10 crc kubenswrapper[4826]: E0129 06:44:10.808126 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.894123 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.894206 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.894228 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.894259 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.894291 4826 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:10Z","lastTransitionTime":"2026-01-29T06:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.998434 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.998519 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.998544 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.998578 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:10 crc kubenswrapper[4826]: I0129 06:44:10.998631 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:10Z","lastTransitionTime":"2026-01-29T06:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.061656 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" event={"ID":"6f0c380f-ebc1-482f-9a91-8b08033eadf2","Type":"ContainerStarted","Data":"ad1c6c49ea527febb6f5a1774e0f4cd993ef81cc83b74b496f07e88d0123cb3d"} Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.062144 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.062202 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.066760 4826 generic.go:334] "Generic (PLEG): container finished" podID="4ca8a794-1985-4f1b-8651-03cfce7dd20c" containerID="319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476" exitCode=0 Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.066844 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" event={"ID":"4ca8a794-1985-4f1b-8651-03cfce7dd20c","Type":"ContainerDied","Data":"319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476"} Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.086777 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e7e9d69-49ad-4177-a388-7d7556ddd380\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c25a1a60e522b5c06e95368ba6cd2727c0b3bb0dc42a8ff78c6afea6d7d5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24eebf52ead4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd8cce464615767426dd78c948b03894eeccf28e4abd93cd66efcbaed2887b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e746868738216535af20d17aab43ad1a4a1228bd78ee2907d46755e361569b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:11Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.146421 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.146479 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.146489 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.146508 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.146520 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:11Z","lastTransitionTime":"2026-01-29T06:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.149070 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:11Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.158373 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.159368 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.174451 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca8a794-1985-4f1b-8651-03cfce7dd20c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e029777389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e029777389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rt
td\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5nvkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:11Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.190225 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:11Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.204465 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:11Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.216949 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca127c5fe560e373e07f8aecf0e41b5f18be23bf64a56a99d1f5a8809a53471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T06:44:11Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.234847 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://262e4dafc3b9dfefff3c3f64bcbf34a713ed43c895ef8b11cc6ca240f9348420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adf6fcaa6a5cc342f8af2dbb14c7fbaf0c953f17ee6e8ef3156e57c3893b93f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-llzmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:11Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.248871 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 
06:44:11.248915 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.248926 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.248941 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.248953 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:11Z","lastTransitionTime":"2026-01-29T06:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.256076 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0c380f-ebc1-482f-9a91-8b08033eadf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad1c6c49ea527febb6f5a1774e0f4cd993ef81cc83b74b496f07e88d0123cb3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"
},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a4
25e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s7xfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:11Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.269902 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:11Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.282515 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tdzw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"550bdc9c-0324-4f3c-98df-95fbf1029eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42e6b57814763e9d213b1dae98fbad4d36b326203d58b6542d27010170f6565c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcc9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tdzw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:11Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.295735 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kdv64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa65a108-1826-4e74-8e8a-1eae605298f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e804e3c0c839d032b5f1c678a3b0646e1b4792bcc5fac6cbd49dd2cb5bc3209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kdv64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:11Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.307631 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f93ba8c06281d82ec2584250e6e843d591d364356425831cd6a463234d0a7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:11Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.320065 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:11Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.332086 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b69cv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3bbd3c9-5a5b-4b55-9742-0fe17ceab252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1589556806e44a6795b7caf7695e584f4d9682e652350d2b4813197b62110119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d9qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b69cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:11Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.348549 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e7e9d69-49ad-4177-a388-7d7556ddd380\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c25a1a60e522b5c06e95368ba6cd2727c0b3bb0dc42a8ff78c6afea6d7d5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24eebf52ead4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd8cce464615767426dd78c948b03894eeccf28e4abd93cd66efcbaed2887b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e746868738216535af20d17aab43ad1a4a1228bd78ee2907d46755e361569b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:11Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.351526 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.351572 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.351582 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.351603 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.351616 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:11Z","lastTransitionTime":"2026-01-29T06:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.364694 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:11Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.382074 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca8a794-1985-4f1b-8651-03cfce7dd20c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e029777389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e029777389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5nvkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:11Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.394959 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:11Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.408786 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca127c5fe560e373e07f8aecf0e41b5f18be23bf64a56a99d1f5a8809a53471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T06:44:11Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.418366 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://262e4dafc3b9dfefff3c3f64bcbf34a713ed43c895ef8b11cc6ca240f9348420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adf6fcaa6a5cc342f8af2dbb14c7fbaf0c953f17ee6e8ef3156e57c3893b93f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-llzmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:11Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.439042 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0c380f-ebc1-482f-9a91-8b08033eadf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad1c6c49ea527febb6f5a1774e0f4cd993ef81cc83b74b496f07e88d0123cb3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s7xfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:11Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.454601 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.454670 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.454690 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.454718 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.454736 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:11Z","lastTransitionTime":"2026-01-29T06:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.457783 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:11Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.474952 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:11Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.488871 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tdzw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"550bdc9c-0324-4f3c-98df-95fbf1029eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42e6b57814763e9d213b1dae98fbad4d36b326203d58b6542d27010170f6565c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcc9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tdzw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:11Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.510531 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kdv64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa65a108-1826-4e74-8e8a-1eae605298f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e804e3c0c839d032b5f1c678a3b0646e1b4792bcc5fac6cbd49dd2cb5bc3209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kdv64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:11Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.529720 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f93ba8c06281d82ec2584250e6e843d591d364356425831cd6a463234d0a7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:11Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.550586 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:11Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.557894 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.557995 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.558016 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.558048 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.558067 4826 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:11Z","lastTransitionTime":"2026-01-29T06:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.565495 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b69cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3bbd3c9-5a5b-4b55-9742-0fe17ceab252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1589556806e44a6795b7caf7695e584f4d9682e652350d2b4813197b62110119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d9qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b69cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:11Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.661038 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.661091 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.661104 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.661126 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.661139 4826 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:11Z","lastTransitionTime":"2026-01-29T06:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.750180 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 04:47:48.672372046 +0000 UTC Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.764900 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.764955 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.764975 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.765004 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.765021 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:11Z","lastTransitionTime":"2026-01-29T06:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.808285 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:44:11 crc kubenswrapper[4826]: E0129 06:44:11.808497 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.868799 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.868869 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.868888 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.868918 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.868942 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:11Z","lastTransitionTime":"2026-01-29T06:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.972827 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.972885 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.972904 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.972931 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:11 crc kubenswrapper[4826]: I0129 06:44:11.972950 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:11Z","lastTransitionTime":"2026-01-29T06:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.072385 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:44:12 crc kubenswrapper[4826]: E0129 06:44:12.072606 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-29 06:44:28.072566223 +0000 UTC m=+51.934359322 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.072688 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.072900 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:44:12 crc kubenswrapper[4826]: E0129 06:44:12.072947 4826 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 06:44:12 crc kubenswrapper[4826]: E0129 06:44:12.073029 4826 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 06:44:12 crc kubenswrapper[4826]: E0129 06:44:12.073037 4826 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 06:44:28.073013775 +0000 UTC m=+51.934806874 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 06:44:12 crc kubenswrapper[4826]: E0129 06:44:12.073144 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 06:44:28.073124068 +0000 UTC m=+51.934917167 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.075521 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.075569 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.075587 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.075613 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.075635 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:12Z","lastTransitionTime":"2026-01-29T06:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.078099 4826 generic.go:334] "Generic (PLEG): container finished" podID="4ca8a794-1985-4f1b-8651-03cfce7dd20c" containerID="1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323" exitCode=0 Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.078180 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" event={"ID":"4ca8a794-1985-4f1b-8651-03cfce7dd20c","Type":"ContainerDied","Data":"1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323"} Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.078409 4826 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.099714 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:12Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.120626 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tdzw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"550bdc9c-0324-4f3c-98df-95fbf1029eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42e6b57814763e9d213b1dae98fbad4d36b326203d58b6542d27010170f6565c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcc9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tdzw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:12Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.138454 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kdv64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa65a108-1826-4e74-8e8a-1eae605298f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e804e3c0c839d032b5f1c678a3b0646e1b4792bcc5fac6cbd49dd2cb5bc3209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kdv64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:12Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.158367 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f93ba8c06281d82ec2584250e6e843d591d364356425831cd6a463234d0a7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:12Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.175639 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.175795 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:44:12 crc kubenswrapper[4826]: E0129 06:44:12.176358 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 06:44:12 crc kubenswrapper[4826]: E0129 06:44:12.176402 4826 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 06:44:12 crc kubenswrapper[4826]: E0129 06:44:12.176424 4826 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:44:12 crc kubenswrapper[4826]: E0129 06:44:12.176489 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 06:44:28.176465065 +0000 UTC m=+52.038258164 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:44:12 crc kubenswrapper[4826]: E0129 06:44:12.176667 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 06:44:12 crc kubenswrapper[4826]: E0129 06:44:12.176697 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 06:44:12 crc kubenswrapper[4826]: E0129 06:44:12.176717 4826 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:44:12 crc kubenswrapper[4826]: E0129 06:44:12.176782 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 06:44:28.176759212 +0000 UTC m=+52.038552321 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.178822 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.178870 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.178890 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.178914 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.178932 4826 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:12Z","lastTransitionTime":"2026-01-29T06:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.179944 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:12Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.193956 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b69cv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3bbd3c9-5a5b-4b55-9742-0fe17ceab252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1589556806e44a6795b7caf7695e584f4d9682e652350d2b4813197b62110119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d9qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b69cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:12Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.215618 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e7e9d69-49ad-4177-a388-7d7556ddd380\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c25a1a60e522b5c06e95368ba6cd2727c0b3bb0dc42a8ff78c6afea6d7d5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24eebf52ead4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd8cce464615767426dd78c948b03894eeccf28e4abd93cd66efcbaed2887b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e746868738216535af20d17aab43ad1a4a1228bd78ee2907d46755e361569b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:12Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.238112 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:12Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.265346 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca8a794-1985-4f1b-8651-03cfce7dd20c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e029777389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e029777389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5nvkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:12Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.285254 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.285335 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.285349 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.285368 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.285382 4826 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:12Z","lastTransitionTime":"2026-01-29T06:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.286086 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCo
unt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:12Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.301390 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:12Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.319064 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca127c5fe560e373e07f8aecf0e41b5f18be23bf64a56a99d1f5a8809a53471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T06:44:12Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.335771 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://262e4dafc3b9dfefff3c3f64bcbf34a713ed43c895ef8b11cc6ca240f9348420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adf6fcaa6a5cc342f8af2dbb14c7fbaf0c953f17ee6e8ef3156e57c3893b93f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-llzmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:12Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.374365 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0c380f-ebc1-482f-9a91-8b08033eadf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad1c6c49ea527febb6f5a1774e0f4cd993ef81cc83b74b496f07e88d0123cb3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s7xfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:12Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.395283 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.395353 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.395372 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.395398 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.395413 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:12Z","lastTransitionTime":"2026-01-29T06:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.498586 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.498651 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.498671 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.498696 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.498716 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:12Z","lastTransitionTime":"2026-01-29T06:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.601714 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.601762 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.601778 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.601802 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.601819 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:12Z","lastTransitionTime":"2026-01-29T06:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.705020 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.705092 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.705109 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.705135 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.705151 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:12Z","lastTransitionTime":"2026-01-29T06:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.751062 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 13:58:24.018723296 +0000 UTC Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.808022 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:44:12 crc kubenswrapper[4826]: E0129 06:44:12.808242 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.808376 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:44:12 crc kubenswrapper[4826]: E0129 06:44:12.808501 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.808688 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.808740 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.808754 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.808777 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.808794 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:12Z","lastTransitionTime":"2026-01-29T06:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.912329 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.912429 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.912456 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.912499 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:12 crc kubenswrapper[4826]: I0129 06:44:12.912526 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:12Z","lastTransitionTime":"2026-01-29T06:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.015639 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.015695 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.015709 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.015730 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.015746 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:13Z","lastTransitionTime":"2026-01-29T06:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.088709 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" event={"ID":"4ca8a794-1985-4f1b-8651-03cfce7dd20c","Type":"ContainerStarted","Data":"734f84e758ca06a23933bafe67941e9d76b6c40f31a2081ebbcf31aa2ab7ddea"} Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.088773 4826 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.113529 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:13Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.114634 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.114705 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.114723 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.114750 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.114771 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:13Z","lastTransitionTime":"2026-01-29T06:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.128226 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tdzw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"550bdc9c-0324-4f3c-98df-95fbf1029eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42e6b57814763e9d213b1dae98fbad4d36b326203d58b6542d27010170f6565c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcc9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tdzw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:13Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:13 crc kubenswrapper[4826]: E0129 06:44:13.135239 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"905e3489-492d-4437-968b-82f79ce0edd7\\\",\\\"systemUUID\\\":\\\"8978a1d2-9b20-4f3c-a5b9-0aed7eb7584e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:13Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.140244 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.140285 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.140312 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.140332 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.140347 4826 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:13Z","lastTransitionTime":"2026-01-29T06:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.150322 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kdv64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa65a108-1826-4e74-8e8a-1eae605298f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e804e3c0c839d032b5f1c678a3b0646e1b4792bcc5fac6cbd49dd2cb5bc3209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:
44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kdv64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:13Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:13 crc kubenswrapper[4826]: E0129 06:44:13.159179 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"905e3489-492d-4437-968b-82f79ce0edd7\\\",\\\"systemUUID\\\":\\\"8978a1d2-9b20-4f3c-a5b9-0aed7eb7584e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:13Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.164874 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.164952 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.164976 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.165002 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.165023 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:13Z","lastTransitionTime":"2026-01-29T06:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.171169 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f93ba8c06281d82ec2584250e6e843d591d364356425831cd6a463234d0a7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:13Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:13 crc kubenswrapper[4826]: E0129 06:44:13.187190 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"905e3489-492d-4437-968b-82f79ce0edd7\\\",\\\"systemUUID\\\":\\\"8978a1d2-9b20-4f3c-a5b9-0aed7eb7584e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:13Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.189174 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-29T06:44:13Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.191989 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.192040 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.192058 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.192082 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.192101 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:13Z","lastTransitionTime":"2026-01-29T06:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.203357 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b69cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3bbd3c9-5a5b-4b55-9742-0fe17ceab252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1589556806e44a6795b7caf7695e584f4d9682e652350d2b4813197b62110119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d9qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b69cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:13Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:13 crc kubenswrapper[4826]: E0129 06:44:13.209223 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"905e3489-492d-4437-968b-82f79ce0edd7\\\",\\\"systemUUID\\\":\\\"8978a1d2-9b20-4f3c-a5b9-0aed7eb7584e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:13Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.215354 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.215407 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.215422 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.215442 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.215457 4826 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:13Z","lastTransitionTime":"2026-01-29T06:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.219869 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e7e9d69-49ad-4177-a388-7d7556ddd380\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c25a1a60e522b5c06e95368ba6cd2727c0b3bb0dc42a8ff78c6afea6d7d5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24eebf52ead4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd8cce464615767426dd78c948b03894eeccf28e4abd93cd66efcbaed2887b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://07e746868738216535af20d17aab43ad1a4a1228bd78ee2907d46755e361569b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:13Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:13 crc kubenswrapper[4826]: E0129 06:44:13.232659 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"905e3489-492d-4437-968b-82f79ce0edd7\\\",\\\"systemUUID\\\":\\\"8978a1d2-9b20-4f3c-a5b9-0aed7eb7584e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:13Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:13 crc kubenswrapper[4826]: E0129 06:44:13.232816 4826 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.235214 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.235257 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.235272 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.235343 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.235358 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:13Z","lastTransitionTime":"2026-01-29T06:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.236893 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:13Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.254992 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca8a794-1985-4f1b-8651-03cfce7dd20c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://734f84e758ca06a23933bafe67941e9d76b6c40f31a2081ebbcf31aa2ab7ddea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0297
77389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e029777389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:10Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5nvkq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:13Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.273836 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:13Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.289462 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca127c5fe560e373e07f8aecf0e41b5f18be23bf64a56a99d1f5a8809a53471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T06:44:13Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.304973 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://262e4dafc3b9dfefff3c3f64bcbf34a713ed43c895ef8b11cc6ca240f9348420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adf6fcaa6a5cc342f8af2dbb14c7fbaf0c953f17ee6e8ef3156e57c3893b93f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-llzmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:13Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.333829 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0c380f-ebc1-482f-9a91-8b08033eadf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad1c6c49ea527febb6f5a1774e0f4cd993ef81cc83b74b496f07e88d0123cb3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s7xfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:13Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.338364 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.338414 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.338427 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.338492 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.338512 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:13Z","lastTransitionTime":"2026-01-29T06:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.349643 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:13Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.461476 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.461543 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.461563 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.461599 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.461619 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:13Z","lastTransitionTime":"2026-01-29T06:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.565166 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.565226 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.565240 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.565265 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.565279 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:13Z","lastTransitionTime":"2026-01-29T06:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.668351 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.668423 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.668443 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.668473 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.668497 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:13Z","lastTransitionTime":"2026-01-29T06:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.751431 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 13:43:57.603543054 +0000 UTC Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.771600 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.771648 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.771657 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.771673 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.771683 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:13Z","lastTransitionTime":"2026-01-29T06:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.808453 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:44:13 crc kubenswrapper[4826]: E0129 06:44:13.808614 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.875269 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.875382 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.875405 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.875431 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.875449 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:13Z","lastTransitionTime":"2026-01-29T06:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.979264 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.979785 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.979805 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.979833 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:13 crc kubenswrapper[4826]: I0129 06:44:13.979852 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:13Z","lastTransitionTime":"2026-01-29T06:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.084179 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.084231 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.084247 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.084273 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.084291 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:14Z","lastTransitionTime":"2026-01-29T06:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.098525 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s7xfk_6f0c380f-ebc1-482f-9a91-8b08033eadf2/ovnkube-controller/0.log" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.103733 4826 generic.go:334] "Generic (PLEG): container finished" podID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerID="ad1c6c49ea527febb6f5a1774e0f4cd993ef81cc83b74b496f07e88d0123cb3d" exitCode=1 Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.103784 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" event={"ID":"6f0c380f-ebc1-482f-9a91-8b08033eadf2","Type":"ContainerDied","Data":"ad1c6c49ea527febb6f5a1774e0f4cd993ef81cc83b74b496f07e88d0123cb3d"} Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.105053 4826 scope.go:117] "RemoveContainer" containerID="ad1c6c49ea527febb6f5a1774e0f4cd993ef81cc83b74b496f07e88d0123cb3d" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.129347 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e7e9d69-49ad-4177-a388-7d7556ddd380\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c25a1a60e522b5c06e95368ba6cd2727c0b3bb0dc42a8ff78c6afea6d7d5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24eebf52ead4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd8cce464615767426dd78c948b03894eeccf28e4abd93cd66efcbaed2887b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e746868738216535af20d17aab43ad1a4a1228bd78ee2907d46755e361569b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:14Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.152137 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:14Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.177349 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca8a794-1985-4f1b-8651-03cfce7dd20c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://734f84e758ca06a23933bafe67941e9d76b6c40f31a2081ebbcf31aa2ab7ddea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0297
77389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e029777389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:10Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5nvkq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:14Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.186778 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.186837 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.186855 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.186880 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.186924 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:14Z","lastTransitionTime":"2026-01-29T06:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.198046 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:14Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.218615 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:14Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.237390 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca127c5fe560e373e07f8aecf0e41b5f18be23bf64a56a99d1f5a8809a53471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T06:44:14Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.252839 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://262e4dafc3b9dfefff3c3f64bcbf34a713ed43c895ef8b11cc6ca240f9348420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adf6fcaa6a5cc342f8af2dbb14c7fbaf0c953f17ee6e8ef3156e57c3893b93f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-llzmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:14Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.273553 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0c380f-ebc1-482f-9a91-8b08033eadf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad1c6c49ea527febb6f5a1774e0f4cd993ef81cc83b74b496f07e88d0123cb3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad1c6c49ea527febb6f5a1774e0f4cd993ef81cc83b74b496f07e88d0123cb3d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"message\\\":\\\"4:13.698105 6107 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:13.698242 6107 
reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:13.699054 6107 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 06:44:13.699116 6107 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 06:44:13.699126 6107 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 06:44:13.699162 6107 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 06:44:13.699183 6107 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 06:44:13.699192 6107 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 06:44:13.699214 6107 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 06:44:13.699274 6107 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 06:44:13.699344 6107 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 06:44:13.699391 6107 factory.go:656] Stopping watch factory\\\\nI0129 06:44:13.699415 6107 ovnkube.go:599] Stopped ovnkube\\\\nI0129 
06:44:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f6
95f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s7xfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:14Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.290347 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.290415 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.290440 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.290472 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.290575 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:14Z","lastTransitionTime":"2026-01-29T06:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.293128 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:14Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.309494 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tdzw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"550bdc9c-0324-4f3c-98df-95fbf1029eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42e6b57814763e9d213b1dae98fbad4d36b326203d58b6542d27010170f6565c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcc9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tdzw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:14Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.331962 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kdv64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa65a108-1826-4e74-8e8a-1eae605298f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e804e3c0c839d032b5f1c678a3b0646e1b4792bcc5fac6cbd49dd2cb5bc3209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kdv64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:14Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.356857 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f93ba8c06281d82ec2584250e6e843d591d364356425831cd6a463234d0a7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:14Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.377733 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:14Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.391842 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b69cv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3bbd3c9-5a5b-4b55-9742-0fe17ceab252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1589556806e44a6795b7caf7695e584f4d9682e652350d2b4813197b62110119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d9qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b69cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:14Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.396459 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.396525 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.396544 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.396568 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.396584 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:14Z","lastTransitionTime":"2026-01-29T06:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.500015 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.500081 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.500097 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.500122 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.500139 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:14Z","lastTransitionTime":"2026-01-29T06:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.604836 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.604883 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.604900 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.604925 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.604943 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:14Z","lastTransitionTime":"2026-01-29T06:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.708606 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.708675 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.708698 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.708729 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.708750 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:14Z","lastTransitionTime":"2026-01-29T06:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.751877 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 00:24:43.865913418 +0000 UTC Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.808478 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:44:14 crc kubenswrapper[4826]: E0129 06:44:14.808706 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.809432 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:44:14 crc kubenswrapper[4826]: E0129 06:44:14.809569 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.811915 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.811974 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.811997 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.812026 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.812050 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:14Z","lastTransitionTime":"2026-01-29T06:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.913785 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.913823 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.913834 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.913852 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:14 crc kubenswrapper[4826]: I0129 06:44:14.913865 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:14Z","lastTransitionTime":"2026-01-29T06:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.016463 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.016503 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.016513 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.016529 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.016542 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:15Z","lastTransitionTime":"2026-01-29T06:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.115568 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s7xfk_6f0c380f-ebc1-482f-9a91-8b08033eadf2/ovnkube-controller/0.log" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.117946 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.117974 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.117985 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.117999 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.118009 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:15Z","lastTransitionTime":"2026-01-29T06:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.118369 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" event={"ID":"6f0c380f-ebc1-482f-9a91-8b08033eadf2","Type":"ContainerStarted","Data":"06cf2896ef9d4791845a7888dbd5bb23d27cbd3c2c0226e2b2876bfb226eb5d2"} Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.118469 4826 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.131469 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:15Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.143892 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tdzw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"550bdc9c-0324-4f3c-98df-95fbf1029eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42e6b57814763e9d213b1dae98fbad4d36b326203d58b6542d27010170f6565c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcc9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tdzw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:15Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.155771 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kdv64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa65a108-1826-4e74-8e8a-1eae605298f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e804e3c0c839d032b5f1c678a3b0646e1b4792bcc5fac6cbd49dd2cb5bc3209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kdv64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:15Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.169409 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f93ba8c06281d82ec2584250e6e843d591d364356425831cd6a463234d0a7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:15Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.183623 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:15Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.196157 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b69cv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3bbd3c9-5a5b-4b55-9742-0fe17ceab252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1589556806e44a6795b7caf7695e584f4d9682e652350d2b4813197b62110119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d9qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b69cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:15Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.214031 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e7e9d69-49ad-4177-a388-7d7556ddd380\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c25a1a60e522b5c06e95368ba6cd2727c0b3bb0dc42a8ff78c6afea6d7d5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24eebf52ead4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd8cce464615767426dd78c948b03894eeccf28e4abd93cd66efcbaed2887b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e746868738216535af20d17aab43ad1a4a1228bd78ee2907d46755e361569b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:15Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.221090 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.221132 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.221143 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.221160 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.221170 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:15Z","lastTransitionTime":"2026-01-29T06:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.232165 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:15Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.251869 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca8a794-1985-4f1b-8651-03cfce7dd20c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://734f84e758ca06a23933bafe67941e9d76b6c40f31a2081ebbcf31aa2ab7ddea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0297
77389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e029777389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:10Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5nvkq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:15Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.271552 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:15Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.286219 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca127c5fe560e373e07f8aecf0e41b5f18be23bf64a56a99d1f5a8809a53471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T06:44:15Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.313455 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://262e4dafc3b9dfefff3c3f64bcbf34a713ed43c895ef8b11cc6ca240f9348420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adf6fcaa6a5cc342f8af2dbb14c7fbaf0c953f17ee6e8ef3156e57c3893b93f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-llzmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:15Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.324194 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 
06:44:15.324231 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.324240 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.324259 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.324270 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:15Z","lastTransitionTime":"2026-01-29T06:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.344242 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0c380f-ebc1-482f-9a91-8b08033eadf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cf2896ef9d4791845a7888dbd5bb23d27cbd3c2c0226e2b2876bfb226eb5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad1c6c49ea527febb6f5a1774e0f4cd993ef81cc83b74b496f07e88d0123cb3d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"message\\\":\\\"4:13.698105 6107 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:13.698242 6107 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:13.699054 6107 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 06:44:13.699116 6107 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 06:44:13.699126 6107 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 06:44:13.699162 6107 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 06:44:13.699183 6107 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 06:44:13.699192 6107 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 06:44:13.699214 6107 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 06:44:13.699274 6107 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 06:44:13.699344 6107 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 06:44:13.699391 6107 factory.go:656] Stopping watch factory\\\\nI0129 06:44:13.699415 6107 ovnkube.go:599] Stopped ovnkube\\\\nI0129 
06:44:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s7xfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:15Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.361923 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:3
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:15Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.427864 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.427898 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.427911 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.427929 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.427945 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:15Z","lastTransitionTime":"2026-01-29T06:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.531730 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.531793 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.531812 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.531840 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.531869 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:15Z","lastTransitionTime":"2026-01-29T06:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.635055 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.635136 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.635157 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.635185 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.635205 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:15Z","lastTransitionTime":"2026-01-29T06:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.737897 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.737966 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.737985 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.738015 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.738035 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:15Z","lastTransitionTime":"2026-01-29T06:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.752490 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 08:19:07.656155953 +0000 UTC Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.807899 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:44:15 crc kubenswrapper[4826]: E0129 06:44:15.808070 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.841862 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.841928 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.841953 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.841978 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.841998 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:15Z","lastTransitionTime":"2026-01-29T06:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.913270 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5gq6p"] Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.914181 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5gq6p" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.917782 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.918497 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.929279 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/be51f861-02cd-4b43-8b55-eddc27a15272-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5gq6p\" (UID: \"be51f861-02cd-4b43-8b55-eddc27a15272\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5gq6p" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.929415 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/be51f861-02cd-4b43-8b55-eddc27a15272-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5gq6p\" (UID: \"be51f861-02cd-4b43-8b55-eddc27a15272\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5gq6p" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.929465 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/be51f861-02cd-4b43-8b55-eddc27a15272-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5gq6p\" (UID: \"be51f861-02cd-4b43-8b55-eddc27a15272\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5gq6p" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.929685 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjcd9\" (UniqueName: \"kubernetes.io/projected/be51f861-02cd-4b43-8b55-eddc27a15272-kube-api-access-sjcd9\") pod \"ovnkube-control-plane-749d76644c-5gq6p\" (UID: \"be51f861-02cd-4b43-8b55-eddc27a15272\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5gq6p" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.944229 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e7e9d69-49ad-4177-a388-7d7556ddd380\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c25a1a60e522b5c06e95368ba6cd2727c0b3bb0dc42a8ff78c6afea6d7d5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372
b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24eebf52ead4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd8cce464615767426dd78c948b03894eeccf28e4abd93cd66efcbaed2887b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e746868738216535af20d17aab43ad1a4a1228bd78ee2907d46755e361569b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:15Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.945076 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.945133 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.945151 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.945175 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.945193 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:15Z","lastTransitionTime":"2026-01-29T06:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.968568 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:15Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:15 crc kubenswrapper[4826]: I0129 06:44:15.994704 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca8a794-1985-4f1b-8651-03cfce7dd20c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://734f84e758ca06a23933bafe67941e9d76b6c40f31a2081ebbcf31aa2ab7ddea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0297
77389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e029777389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:10Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5nvkq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:15Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.025825 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0c380f-ebc1-482f-9a91-8b08033eadf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cf2896ef9d4791845a7888dbd5bb23d27cbd3c2c0226e2b2876bfb226eb5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad1c6c49ea527febb6f5a1774e0f4cd993ef81cc83b74b496f07e88d0123cb3d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"message\\\":\\\"4:13.698105 6107 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:13.698242 6107 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:13.699054 6107 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 06:44:13.699116 6107 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 06:44:13.699126 6107 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 06:44:13.699162 6107 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 06:44:13.699183 6107 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 06:44:13.699192 6107 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 06:44:13.699214 6107 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 06:44:13.699274 6107 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 06:44:13.699344 6107 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 06:44:13.699391 6107 factory.go:656] Stopping watch factory\\\\nI0129 06:44:13.699415 6107 ovnkube.go:599] Stopped ovnkube\\\\nI0129 
06:44:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s7xfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:16Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.030712 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjcd9\" (UniqueName: \"kubernetes.io/projected/be51f861-02cd-4b43-8b55-eddc27a15272-kube-api-access-sjcd9\") pod \"ovnkube-control-plane-749d76644c-5gq6p\" (UID: \"be51f861-02cd-4b43-8b55-eddc27a15272\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5gq6p" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.030826 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/be51f861-02cd-4b43-8b55-eddc27a15272-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5gq6p\" (UID: \"be51f861-02cd-4b43-8b55-eddc27a15272\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5gq6p" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.030917 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/be51f861-02cd-4b43-8b55-eddc27a15272-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5gq6p\" (UID: \"be51f861-02cd-4b43-8b55-eddc27a15272\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5gq6p" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.031051 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/be51f861-02cd-4b43-8b55-eddc27a15272-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5gq6p\" (UID: \"be51f861-02cd-4b43-8b55-eddc27a15272\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5gq6p" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.032507 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/be51f861-02cd-4b43-8b55-eddc27a15272-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5gq6p\" (UID: \"be51f861-02cd-4b43-8b55-eddc27a15272\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5gq6p" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.032705 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/be51f861-02cd-4b43-8b55-eddc27a15272-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5gq6p\" (UID: \"be51f861-02cd-4b43-8b55-eddc27a15272\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5gq6p" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.040597 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/be51f861-02cd-4b43-8b55-eddc27a15272-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5gq6p\" (UID: \"be51f861-02cd-4b43-8b55-eddc27a15272\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5gq6p" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.045194 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5gq6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be51f861-02cd-4b43-8b55-eddc27a15272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5gq6p\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:16Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.048850 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.048989 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.049010 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.049038 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.049056 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:16Z","lastTransitionTime":"2026-01-29T06:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.053466 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjcd9\" (UniqueName: \"kubernetes.io/projected/be51f861-02cd-4b43-8b55-eddc27a15272-kube-api-access-sjcd9\") pod \"ovnkube-control-plane-749d76644c-5gq6p\" (UID: \"be51f861-02cd-4b43-8b55-eddc27a15272\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5gq6p" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.065266 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\
\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4b
a8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:16Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.084432 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:16Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.097998 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca127c5fe560e373e07f8aecf0e41b5f18be23bf64a56a99d1f5a8809a53471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T06:44:16Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.115061 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://262e4dafc3b9dfefff3c3f64bcbf34a713ed43c895ef8b11cc6ca240f9348420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adf6fcaa6a5cc342f8af2dbb14c7fbaf0c953f17ee6e8ef3156e57c3893b93f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-llzmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:16Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.125480 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s7xfk_6f0c380f-ebc1-482f-9a91-8b08033eadf2/ovnkube-controller/1.log" 
Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.126690 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s7xfk_6f0c380f-ebc1-482f-9a91-8b08033eadf2/ovnkube-controller/0.log" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.131131 4826 generic.go:334] "Generic (PLEG): container finished" podID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerID="06cf2896ef9d4791845a7888dbd5bb23d27cbd3c2c0226e2b2876bfb226eb5d2" exitCode=1 Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.131182 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" event={"ID":"6f0c380f-ebc1-482f-9a91-8b08033eadf2","Type":"ContainerDied","Data":"06cf2896ef9d4791845a7888dbd5bb23d27cbd3c2c0226e2b2876bfb226eb5d2"} Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.131234 4826 scope.go:117] "RemoveContainer" containerID="ad1c6c49ea527febb6f5a1774e0f4cd993ef81cc83b74b496f07e88d0123cb3d" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.132517 4826 scope.go:117] "RemoveContainer" containerID="06cf2896ef9d4791845a7888dbd5bb23d27cbd3c2c0226e2b2876bfb226eb5d2" Jan 29 06:44:16 crc kubenswrapper[4826]: E0129 06:44:16.132882 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-s7xfk_openshift-ovn-kubernetes(6f0c380f-ebc1-482f-9a91-8b08033eadf2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.144744 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:16Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.151832 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.151891 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.151911 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.151937 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.151979 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:16Z","lastTransitionTime":"2026-01-29T06:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.160929 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tdzw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"550bdc9c-0324-4f3c-98df-95fbf1029eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42e6b57814763e9d213b1dae98fbad4d36b326203d58b6542d27010170f6565c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcc9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tdzw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:16Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.185368 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kdv64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa65a108-1826-4e74-8e8a-1eae605298f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e804e3c0c839d032b5f1c678
a3b0646e1b4792bcc5fac6cbd49dd2cb5bc3209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kdv64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:16Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.206183 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f93ba8c06281d82ec2584250e6e843d591d364356425831cd6a463234d0a7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:16Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.223622 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:16Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.241720 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5gq6p" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.244823 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b69cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3bbd3c9-5a5b-4b55-9742-0fe17ceab252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1589556806e44a6795b7caf7695e584f4d9682e652350d2b4813197b62110119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"ho
st\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d9qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b69cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:16Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.257859 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.257938 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.257965 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.257998 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.258023 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:16Z","lastTransitionTime":"2026-01-29T06:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:16 crc kubenswrapper[4826]: W0129 06:44:16.263970 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe51f861_02cd_4b43_8b55_eddc27a15272.slice/crio-f0ec9b79cfba51d6be4c1df756d8f4586a380042317ecc73b96ae83e881b3361 WatchSource:0}: Error finding container f0ec9b79cfba51d6be4c1df756d8f4586a380042317ecc73b96ae83e881b3361: Status 404 returned error can't find the container with id f0ec9b79cfba51d6be4c1df756d8f4586a380042317ecc73b96ae83e881b3361 Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.268685 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f93ba8c06281d82ec2584250e6e843d591d364356425831cd6a463234d0a7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":
{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:16Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.292096 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:16Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.309942 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b69cv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3bbd3c9-5a5b-4b55-9742-0fe17ceab252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1589556806e44a6795b7caf7695e584f4d9682e652350d2b4813197b62110119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d9qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b69cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:16Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.333651 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e7e9d69-49ad-4177-a388-7d7556ddd380\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c25a1a60e522b5c06e95368ba6cd2727c0b3bb0dc42a8ff78c6afea6d7d5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24eebf52ead4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd8cce464615767426dd78c948b03894eeccf28e4abd93cd66efcbaed2887b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e746868738216535af20d17aab43ad1a4a1228bd78ee2907d46755e361569b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:16Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.354396 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:16Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.360724 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.360790 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.360809 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.360832 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.360851 4826 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:16Z","lastTransitionTime":"2026-01-29T06:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.379402 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca8a794-1985-4f1b-8651-03cfce7dd20c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://734f84e758ca06a23933bafe67941e9d76b6c40f31a2081ebbcf31aa2ab7ddea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e029777389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e029777389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5nvkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:16Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.397907 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:3
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:16Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.419196 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:16Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.434543 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca127c5fe560e373e07f8aecf0e41b5f18be23bf64a56a99d1f5a8809a53471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T06:44:16Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.450749 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://262e4dafc3b9dfefff3c3f64bcbf34a713ed43c895ef8b11cc6ca240f9348420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adf6fcaa6a5cc342f8af2dbb14c7fbaf0c953f17ee6e8ef3156e57c3893b93f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-llzmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:16Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.463817 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 
06:44:16.463874 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.463893 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.463920 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.463938 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:16Z","lastTransitionTime":"2026-01-29T06:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.481478 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0c380f-ebc1-482f-9a91-8b08033eadf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cf2896ef9d4791845a7888dbd5bb23d27cbd3c2c0226e2b2876bfb226eb5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad1c6c49ea527febb6f5a1774e0f4cd993ef81cc83b74b496f07e88d0123cb3d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"message\\\":\\\"4:13.698105 6107 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:13.698242 6107 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:13.699054 6107 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 06:44:13.699116 6107 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 06:44:13.699126 6107 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 06:44:13.699162 6107 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 06:44:13.699183 6107 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 06:44:13.699192 6107 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 06:44:13.699214 6107 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 06:44:13.699274 6107 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 06:44:13.699344 6107 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 06:44:13.699391 6107 factory.go:656] Stopping watch factory\\\\nI0129 06:44:13.699415 6107 ovnkube.go:599] Stopped ovnkube\\\\nI0129 06:44:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06cf2896ef9d4791845a7888dbd5bb23d27cbd3c2c0226e2b2876bfb226eb5d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:44:15Z\\\",\\\"message\\\":\\\"netes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:15.415755 6263 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:15.416359 6263 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:15.416612 6263 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:44:15.416806 6263 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:15.416853 6263 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:44:15.416807 6263 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:44:15.417498 6263 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 06:44:15.417553 6263 factory.go:656] Stopping watch factory\\\\nI0129 06:44:15.417567 6263 ovnkube.go:599] Stopped ovnkube\\\\nI0129 
06:44:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f6
95f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s7xfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:16Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.497031 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5gq6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be51f861-02cd-4b43-8b55-eddc27a15272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5gq6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:16Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.512407 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not 
be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:16Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.528475 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tdzw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"550bdc9c-0324-4f3c-98df-95fbf1029eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42e6b57814763e9d213b1dae98fbad4d36b326203d58b6542d27010170f6565c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcc9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tdzw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:16Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.547539 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kdv64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa65a108-1826-4e74-8e8a-1eae605298f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e804e3c0c839d032b5f1c678a3b0646e1b4792bcc5fac6cbd49dd2cb5bc3209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kdv64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:16Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.571993 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.572132 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.572435 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.572486 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.572541 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:16Z","lastTransitionTime":"2026-01-29T06:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.676663 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.676737 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.676761 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.676796 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.676822 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:16Z","lastTransitionTime":"2026-01-29T06:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.753918 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 20:54:01.966304995 +0000 UTC Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.780476 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.780530 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.780551 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.780580 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.780600 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:16Z","lastTransitionTime":"2026-01-29T06:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.808556 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.808548 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:44:16 crc kubenswrapper[4826]: E0129 06:44:16.808756 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:44:16 crc kubenswrapper[4826]: E0129 06:44:16.809182 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.836691 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca8a794-1985-4f1b-8651-03cfce7dd20c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://734f84e758ca06a23933bafe67941e9d76b6c40f31a2081ebbcf31aa2ab7ddea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0297
77389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e029777389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:10Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5nvkq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:16Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.856082 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e7e9d69-49ad-4177-a388-7d7556ddd380\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c25a1a60e522b5c06e95368ba6cd2727c0b3bb0dc42a8ff78c6afea6d7d5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24eebf52ead4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd8cce464615767426dd78c948b03894eeccf28e4abd93cd66efcbaed2887b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e746868738216535af20d17aab43ad1a4a1228bd78ee2907d46755e361569b\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:16Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.876089 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:16Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.884358 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.884404 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.884421 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.884446 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.884465 4826 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:16Z","lastTransitionTime":"2026-01-29T06:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.893255 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://262e4dafc3b9dfefff3c3f64bcbf34a713ed43c895ef8b11cc6ca240f9348420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adf6fcaa6a5cc342f8af2dbb14c7fbaf0c953f17ee6e8ef3156e57c3893b93f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-llzmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-29T06:44:16Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.920489 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0c380f-ebc1-482f-9a91-8b08033eadf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cf2896ef9d4791845a7888dbd5bb23d27cbd3c2c0226e2b2876bfb226eb5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad1c6c49ea527febb6f5a1774e0f4cd993ef81cc83b74b496f07e88d0123cb3d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"message\\\":\\\"4:13.698105 6107 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:13.698242 6107 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:13.699054 6107 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 06:44:13.699116 6107 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 06:44:13.699126 6107 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 06:44:13.699162 6107 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 06:44:13.699183 6107 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 06:44:13.699192 6107 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 06:44:13.699214 6107 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 06:44:13.699274 6107 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 06:44:13.699344 6107 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 06:44:13.699391 6107 factory.go:656] Stopping watch factory\\\\nI0129 06:44:13.699415 6107 ovnkube.go:599] Stopped ovnkube\\\\nI0129 06:44:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06cf2896ef9d4791845a7888dbd5bb23d27cbd3c2c0226e2b2876bfb226eb5d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:44:15Z\\\",\\\"message\\\":\\\"netes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:15.415755 6263 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:15.416359 6263 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:15.416612 6263 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:44:15.416806 6263 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:15.416853 6263 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:44:15.416807 6263 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:44:15.417498 6263 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 06:44:15.417553 6263 factory.go:656] Stopping watch factory\\\\nI0129 06:44:15.417567 6263 ovnkube.go:599] Stopped ovnkube\\\\nI0129 06:44:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-b
in-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\
"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s7xfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:16Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:16 crc 
kubenswrapper[4826]: I0129 06:44:16.937350 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5gq6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be51f861-02cd-4b43-8b55-eddc27a15272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5gq6p\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:16Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.954267 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:16Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.972365 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:16Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.986327 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca127c5fe560e373e07f8aecf0e41b5f18be23bf64a56a99d1f5a8809a53471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T06:44:16Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.987378 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.987431 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.987452 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.987482 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:16 crc kubenswrapper[4826]: I0129 06:44:16.987504 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:16Z","lastTransitionTime":"2026-01-29T06:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.006536 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kdv64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa65a108-1826-4e74-8e8a-1eae605298f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e804e3c0c839d032b5f1c678a3b0646e1b4792bcc5fac6cbd49dd2cb5bc3209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kdv64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:17Z 
is after 2025-08-24T17:21:41Z" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.024171 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.042469 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tdzw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"550bdc9c-0324-4f3c-98df-95fbf1029eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42e6b57814763e9d213b1dae98fbad4d36b326203d58b6542d27010170f6565c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcc9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tdzw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.058943 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b69cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3bbd3c9-5a5b-4b55-9742-0fe17ceab252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1589556806e44a6795b7caf7695e584f4d9682e652350d2b4813197b62110119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d9qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b69cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.080027 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f93ba8c06281d82ec2584250e6e843d591d364356425831cd6a463234d0a7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.089934 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.090155 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.090381 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.090493 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.090552 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:17Z","lastTransitionTime":"2026-01-29T06:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.096146 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.136199 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s7xfk_6f0c380f-ebc1-482f-9a91-8b08033eadf2/ovnkube-controller/1.log" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.141549 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5gq6p" event={"ID":"be51f861-02cd-4b43-8b55-eddc27a15272","Type":"ContainerStarted","Data":"fd54979536f60b2ac8ad4e665f9d0fb7917eea17801ec620dca7e547e16bb2ae"} Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.141736 4826 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5gq6p" event={"ID":"be51f861-02cd-4b43-8b55-eddc27a15272","Type":"ContainerStarted","Data":"a60cd6040fe32abede192b2672e7a3c58bf438bbf9289ada60031ac50d1dd1dc"} Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.143492 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5gq6p" event={"ID":"be51f861-02cd-4b43-8b55-eddc27a15272","Type":"ContainerStarted","Data":"f0ec9b79cfba51d6be4c1df756d8f4586a380042317ecc73b96ae83e881b3361"} Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.167254 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.182846 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tdzw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"550bdc9c-0324-4f3c-98df-95fbf1029eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42e6b57814763e9d213b1dae98fbad4d36b326203d58b6542d27010170f6565c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcc9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tdzw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.193863 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.193956 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.193980 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.194012 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.194036 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:17Z","lastTransitionTime":"2026-01-29T06:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.203465 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kdv64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa65a108-1826-4e74-8e8a-1eae605298f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e804e3c0c839d032b5f1c678a3b0646e1b4792bcc5fac6cbd49dd2cb5bc3209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kdv64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:17Z 
is after 2025-08-24T17:21:41Z" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.222436 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f93ba8c06281d82ec2584250e6e843d591d364356425831cd6a463234d0a7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.240290 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.255657 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b69cv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3bbd3c9-5a5b-4b55-9742-0fe17ceab252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1589556806e44a6795b7caf7695e584f4d9682e652350d2b4813197b62110119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d9qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b69cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.273391 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e7e9d69-49ad-4177-a388-7d7556ddd380\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c25a1a60e522b5c06e95368ba6cd2727c0b3bb0dc42a8ff78c6afea6d7d5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24eebf52ead4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd8cce464615767426dd78c948b03894eeccf28e4abd93cd66efcbaed2887b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e746868738216535af20d17aab43ad1a4a1228bd78ee2907d46755e361569b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.293406 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.296478 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.296661 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.296776 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.296914 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.297037 4826 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:17Z","lastTransitionTime":"2026-01-29T06:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.319837 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca8a794-1985-4f1b-8651-03cfce7dd20c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://734f84e758ca06a23933bafe67941e9d76b6c40f31a2081ebbcf31aa2ab7ddea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e029777389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e029777389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5nvkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.339362 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:3
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.358701 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.375934 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca127c5fe560e373e07f8aecf0e41b5f18be23bf64a56a99d1f5a8809a53471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.393290 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://262e4dafc3b9dfefff3c3f64bcbf34a713ed43c895ef8b11cc6ca240f9348420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adf6fcaa6a5cc342f8af2dbb14c7fbaf0c953f17ee6e8ef3156e57c3893b93f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-llzmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.399687 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 
06:44:17.399831 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.399916 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.400006 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.400093 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:17Z","lastTransitionTime":"2026-01-29T06:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.423965 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0c380f-ebc1-482f-9a91-8b08033eadf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cf2896ef9d4791845a7888dbd5bb23d27cbd3c2c0226e2b2876bfb226eb5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad1c6c49ea527febb6f5a1774e0f4cd993ef81cc83b74b496f07e88d0123cb3d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"message\\\":\\\"4:13.698105 6107 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:13.698242 6107 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:13.699054 6107 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 06:44:13.699116 6107 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 06:44:13.699126 6107 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 06:44:13.699162 6107 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 06:44:13.699183 6107 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 06:44:13.699192 6107 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 06:44:13.699214 6107 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 06:44:13.699274 6107 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 06:44:13.699344 6107 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 06:44:13.699391 6107 factory.go:656] Stopping watch factory\\\\nI0129 06:44:13.699415 6107 ovnkube.go:599] Stopped ovnkube\\\\nI0129 06:44:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06cf2896ef9d4791845a7888dbd5bb23d27cbd3c2c0226e2b2876bfb226eb5d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:44:15Z\\\",\\\"message\\\":\\\"netes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:15.415755 6263 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:15.416359 6263 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:15.416612 6263 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:44:15.416806 6263 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:15.416853 6263 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:44:15.416807 6263 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:44:15.417498 6263 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 06:44:15.417553 6263 factory.go:656] Stopping watch factory\\\\nI0129 06:44:15.417567 6263 ovnkube.go:599] Stopped ovnkube\\\\nI0129 
06:44:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f6
95f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s7xfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.441067 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5gq6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be51f861-02cd-4b43-8b55-eddc27a15272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a60cd6040fe32abede192b2672e7a3c58bf438bbf9289ada60031ac50d1dd1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd54979536f60b2ac8ad4e665f9d0fb7917ee
a17801ec620dca7e547e16bb2ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5gq6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.502716 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.502801 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.502819 4826 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.502845 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.502865 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:17Z","lastTransitionTime":"2026-01-29T06:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.605738 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.605794 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.605810 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.605831 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.605847 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:17Z","lastTransitionTime":"2026-01-29T06:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.708361 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.708398 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.708409 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.708424 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.708437 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:17Z","lastTransitionTime":"2026-01-29T06:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.754328 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 02:25:25.973089712 +0000 UTC Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.808726 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:44:17 crc kubenswrapper[4826]: E0129 06:44:17.808903 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.811459 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.811510 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.811528 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.811551 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.811567 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:17Z","lastTransitionTime":"2026-01-29T06:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.859533 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-6qxzb"] Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.860523 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6qxzb" Jan 29 06:44:17 crc kubenswrapper[4826]: E0129 06:44:17.860791 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6qxzb" podUID="11d649f8-dcd0-4c52-96f1-f5c229546376" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.879959 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:3
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.898801 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.914691 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.914753 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.914770 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.914799 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.914817 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:17Z","lastTransitionTime":"2026-01-29T06:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.921384 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca127c5fe560e373e07f8aecf0e41b5f18be23bf64a56a99d1f5a8809a53471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.944330 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://262e4dafc3b9dfefff3c3f64bcbf34a713ed43c895ef8b11cc6ca240f9348420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",
\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adf6fcaa6a5cc342f8af2dbb14c7fbaf0c953f17ee6e8ef3156e57c3893b93f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-llzmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-29T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.954709 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4kx7\" (UniqueName: \"kubernetes.io/projected/11d649f8-dcd0-4c52-96f1-f5c229546376-kube-api-access-b4kx7\") pod \"network-metrics-daemon-6qxzb\" (UID: \"11d649f8-dcd0-4c52-96f1-f5c229546376\") " pod="openshift-multus/network-metrics-daemon-6qxzb" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.954771 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11d649f8-dcd0-4c52-96f1-f5c229546376-metrics-certs\") pod \"network-metrics-daemon-6qxzb\" (UID: \"11d649f8-dcd0-4c52-96f1-f5c229546376\") " pod="openshift-multus/network-metrics-daemon-6qxzb" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.978030 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0c380f-ebc1-482f-9a91-8b08033eadf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cf2896ef9d4791845a7888dbd5bb23d27cbd3c2c0226e2b2876bfb226eb5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad1c6c49ea527febb6f5a1774e0f4cd993ef81cc83b74b496f07e88d0123cb3d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"message\\\":\\\"4:13.698105 6107 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:13.698242 6107 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:13.699054 6107 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 06:44:13.699116 6107 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 06:44:13.699126 6107 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 06:44:13.699162 6107 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 06:44:13.699183 6107 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 06:44:13.699192 6107 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 06:44:13.699214 6107 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 06:44:13.699274 6107 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 06:44:13.699344 6107 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 06:44:13.699391 6107 factory.go:656] Stopping watch factory\\\\nI0129 06:44:13.699415 6107 ovnkube.go:599] Stopped ovnkube\\\\nI0129 06:44:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06cf2896ef9d4791845a7888dbd5bb23d27cbd3c2c0226e2b2876bfb226eb5d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:44:15Z\\\",\\\"message\\\":\\\"netes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:15.415755 6263 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:15.416359 6263 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:15.416612 6263 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:44:15.416806 6263 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:15.416853 6263 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:44:15.416807 6263 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:44:15.417498 6263 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 06:44:15.417553 6263 factory.go:656] Stopping watch factory\\\\nI0129 06:44:15.417567 6263 ovnkube.go:599] Stopped ovnkube\\\\nI0129 
06:44:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f6
95f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s7xfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:17 crc kubenswrapper[4826]: I0129 06:44:17.996543 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5gq6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be51f861-02cd-4b43-8b55-eddc27a15272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a60cd6040fe32abede192b2672e7a3c58bf438bbf9289ada60031ac50d1dd1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd54979536f60b2ac8ad4e665f9d0fb7917ee
a17801ec620dca7e547e16bb2ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5gq6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:17Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.013599 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6qxzb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11d649f8-dcd0-4c52-96f1-f5c229546376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4kx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4kx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6qxzb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:18Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:18 crc 
kubenswrapper[4826]: I0129 06:44:18.019193 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.019272 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.019333 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.019365 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.019387 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:18Z","lastTransitionTime":"2026-01-29T06:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.038675 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:18Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.055994 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4kx7\" (UniqueName: \"kubernetes.io/projected/11d649f8-dcd0-4c52-96f1-f5c229546376-kube-api-access-b4kx7\") pod \"network-metrics-daemon-6qxzb\" (UID: \"11d649f8-dcd0-4c52-96f1-f5c229546376\") " pod="openshift-multus/network-metrics-daemon-6qxzb" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.056064 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11d649f8-dcd0-4c52-96f1-f5c229546376-metrics-certs\") pod \"network-metrics-daemon-6qxzb\" (UID: \"11d649f8-dcd0-4c52-96f1-f5c229546376\") " pod="openshift-multus/network-metrics-daemon-6qxzb" Jan 29 06:44:18 crc kubenswrapper[4826]: E0129 06:44:18.056289 4826 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 06:44:18 crc kubenswrapper[4826]: E0129 
06:44:18.056435 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11d649f8-dcd0-4c52-96f1-f5c229546376-metrics-certs podName:11d649f8-dcd0-4c52-96f1-f5c229546376 nodeName:}" failed. No retries permitted until 2026-01-29 06:44:18.556401218 +0000 UTC m=+42.418194297 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/11d649f8-dcd0-4c52-96f1-f5c229546376-metrics-certs") pod "network-metrics-daemon-6qxzb" (UID: "11d649f8-dcd0-4c52-96f1-f5c229546376") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.056727 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tdzw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"550bdc9c-0324-4f3c-98df-95fbf1029eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42e6b57814763e9d213b1dae98fbad4d36b326203d58b6542d27010170f6565c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19
888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcc9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tdzw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:18Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.085014 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kdv64" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa65a108-1826-4e74-8e8a-1eae605298f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e804e3c0c839d032b5f1c678a3b0646e1b4792bcc5fac6cbd49dd2cb5bc3209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kdv64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:18Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.093804 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4kx7\" (UniqueName: 
\"kubernetes.io/projected/11d649f8-dcd0-4c52-96f1-f5c229546376-kube-api-access-b4kx7\") pod \"network-metrics-daemon-6qxzb\" (UID: \"11d649f8-dcd0-4c52-96f1-f5c229546376\") " pod="openshift-multus/network-metrics-daemon-6qxzb" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.111082 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f93ba8c06281d82ec2584250e6e843d591d364356425831cd6a463234d0a7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:18Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.123244 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.123322 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.123343 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.123368 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.123386 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:18Z","lastTransitionTime":"2026-01-29T06:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.133845 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:18Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.157827 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b69cv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3bbd3c9-5a5b-4b55-9742-0fe17ceab252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1589556806e44a6795b7caf7695e584f4d9682e652350d2b4813197b62110119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d9qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b69cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:18Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.177073 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e7e9d69-49ad-4177-a388-7d7556ddd380\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c25a1a60e522b5c06e95368ba6cd2727c0b3bb0dc42a8ff78c6afea6d7d5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24eebf52ead4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd8cce464615767426dd78c948b03894eeccf28e4abd93cd66efcbaed2887b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e746868738216535af20d17aab43ad1a4a1228bd78ee2907d46755e361569b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:18Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.197886 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:18Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.218102 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca8a794-1985-4f1b-8651-03cfce7dd20c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://734f84e758ca06a23933bafe67941e9d76b6c40f31a2081ebbcf31aa2ab7ddea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0297
77389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e029777389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:10Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5nvkq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:18Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.225952 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.226092 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.226179 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.226259 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.226363 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:18Z","lastTransitionTime":"2026-01-29T06:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.329335 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.329405 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.329424 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.329450 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.329467 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:18Z","lastTransitionTime":"2026-01-29T06:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.432645 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.432719 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.432742 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.432774 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.432796 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:18Z","lastTransitionTime":"2026-01-29T06:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.536768 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.536827 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.536848 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.536878 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.536899 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:18Z","lastTransitionTime":"2026-01-29T06:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.561978 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11d649f8-dcd0-4c52-96f1-f5c229546376-metrics-certs\") pod \"network-metrics-daemon-6qxzb\" (UID: \"11d649f8-dcd0-4c52-96f1-f5c229546376\") " pod="openshift-multus/network-metrics-daemon-6qxzb" Jan 29 06:44:18 crc kubenswrapper[4826]: E0129 06:44:18.562207 4826 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 06:44:18 crc kubenswrapper[4826]: E0129 06:44:18.562273 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11d649f8-dcd0-4c52-96f1-f5c229546376-metrics-certs podName:11d649f8-dcd0-4c52-96f1-f5c229546376 nodeName:}" failed. No retries permitted until 2026-01-29 06:44:19.562251245 +0000 UTC m=+43.424044344 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/11d649f8-dcd0-4c52-96f1-f5c229546376-metrics-certs") pod "network-metrics-daemon-6qxzb" (UID: "11d649f8-dcd0-4c52-96f1-f5c229546376") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.640266 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.640463 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.640554 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.640590 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.640613 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:18Z","lastTransitionTime":"2026-01-29T06:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.743713 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.743775 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.743791 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.743819 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.743837 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:18Z","lastTransitionTime":"2026-01-29T06:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.754716 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 08:26:18.574287542 +0000 UTC Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.808131 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.808183 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:44:18 crc kubenswrapper[4826]: E0129 06:44:18.809226 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:44:18 crc kubenswrapper[4826]: E0129 06:44:18.809283 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.847572 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.847636 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.847661 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.847691 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.847712 4826 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:18Z","lastTransitionTime":"2026-01-29T06:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.950381 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.950620 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.950643 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.950667 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:18 crc kubenswrapper[4826]: I0129 06:44:18.950689 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:18Z","lastTransitionTime":"2026-01-29T06:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.054544 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.054595 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.054611 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.054633 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.054649 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:19Z","lastTransitionTime":"2026-01-29T06:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.157376 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.157639 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.157656 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.157677 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.157696 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:19Z","lastTransitionTime":"2026-01-29T06:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.261247 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.261342 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.261369 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.261398 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.261415 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:19Z","lastTransitionTime":"2026-01-29T06:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.364852 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.364902 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.364919 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.364942 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.364958 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:19Z","lastTransitionTime":"2026-01-29T06:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.467933 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.467986 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.468002 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.468024 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.468041 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:19Z","lastTransitionTime":"2026-01-29T06:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.571004 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.571056 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.571072 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.571094 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.571113 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:19Z","lastTransitionTime":"2026-01-29T06:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.573811 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11d649f8-dcd0-4c52-96f1-f5c229546376-metrics-certs\") pod \"network-metrics-daemon-6qxzb\" (UID: \"11d649f8-dcd0-4c52-96f1-f5c229546376\") " pod="openshift-multus/network-metrics-daemon-6qxzb" Jan 29 06:44:19 crc kubenswrapper[4826]: E0129 06:44:19.574070 4826 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 06:44:19 crc kubenswrapper[4826]: E0129 06:44:19.574179 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11d649f8-dcd0-4c52-96f1-f5c229546376-metrics-certs podName:11d649f8-dcd0-4c52-96f1-f5c229546376 nodeName:}" failed. No retries permitted until 2026-01-29 06:44:21.574148025 +0000 UTC m=+45.435941124 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/11d649f8-dcd0-4c52-96f1-f5c229546376-metrics-certs") pod "network-metrics-daemon-6qxzb" (UID: "11d649f8-dcd0-4c52-96f1-f5c229546376") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.675011 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.675071 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.675088 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.675113 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.675130 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:19Z","lastTransitionTime":"2026-01-29T06:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.755548 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 01:36:15.209142183 +0000 UTC Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.777844 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.777943 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.777958 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.778002 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.778017 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:19Z","lastTransitionTime":"2026-01-29T06:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.808597 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.808618 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6qxzb" Jan 29 06:44:19 crc kubenswrapper[4826]: E0129 06:44:19.808790 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:44:19 crc kubenswrapper[4826]: E0129 06:44:19.808895 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6qxzb" podUID="11d649f8-dcd0-4c52-96f1-f5c229546376" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.881430 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.881488 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.881510 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.881539 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.881562 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:19Z","lastTransitionTime":"2026-01-29T06:44:19Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.985136 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.985239 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.985280 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.985329 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:19 crc kubenswrapper[4826]: I0129 06:44:19.985348 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:19Z","lastTransitionTime":"2026-01-29T06:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.088191 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.088228 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.088239 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.088259 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.088270 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:20Z","lastTransitionTime":"2026-01-29T06:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.191497 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.191576 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.191595 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.191620 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.191640 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:20Z","lastTransitionTime":"2026-01-29T06:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.299427 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.299561 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.299674 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.299714 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.299750 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:20Z","lastTransitionTime":"2026-01-29T06:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.403777 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.403834 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.403850 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.403874 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.403891 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:20Z","lastTransitionTime":"2026-01-29T06:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.506700 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.506762 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.506780 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.506802 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.506820 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:20Z","lastTransitionTime":"2026-01-29T06:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.609950 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.610006 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.610025 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.610049 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.610066 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:20Z","lastTransitionTime":"2026-01-29T06:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.712431 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.712490 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.712507 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.712528 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.712544 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:20Z","lastTransitionTime":"2026-01-29T06:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.755920 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 03:18:21.389169751 +0000 UTC Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.808117 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.808190 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:44:20 crc kubenswrapper[4826]: E0129 06:44:20.808279 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:44:20 crc kubenswrapper[4826]: E0129 06:44:20.808437 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.815555 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.815609 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.815627 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.815653 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.815670 4826 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:20Z","lastTransitionTime":"2026-01-29T06:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.918222 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.918288 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.918335 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.918360 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:20 crc kubenswrapper[4826]: I0129 06:44:20.918377 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:20Z","lastTransitionTime":"2026-01-29T06:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.020847 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.020911 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.020932 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.020960 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.020983 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:21Z","lastTransitionTime":"2026-01-29T06:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.124280 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.124376 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.124400 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.124429 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.124451 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:21Z","lastTransitionTime":"2026-01-29T06:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.227058 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.227112 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.227128 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.227150 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.227167 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:21Z","lastTransitionTime":"2026-01-29T06:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.329792 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.329852 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.329871 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.329895 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.330058 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:21Z","lastTransitionTime":"2026-01-29T06:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.432543 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.432598 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.432613 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.432636 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.432654 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:21Z","lastTransitionTime":"2026-01-29T06:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.536418 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.536536 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.536557 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.536585 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.536607 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:21Z","lastTransitionTime":"2026-01-29T06:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.595255 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11d649f8-dcd0-4c52-96f1-f5c229546376-metrics-certs\") pod \"network-metrics-daemon-6qxzb\" (UID: \"11d649f8-dcd0-4c52-96f1-f5c229546376\") " pod="openshift-multus/network-metrics-daemon-6qxzb" Jan 29 06:44:21 crc kubenswrapper[4826]: E0129 06:44:21.595489 4826 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 06:44:21 crc kubenswrapper[4826]: E0129 06:44:21.595591 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11d649f8-dcd0-4c52-96f1-f5c229546376-metrics-certs podName:11d649f8-dcd0-4c52-96f1-f5c229546376 nodeName:}" failed. No retries permitted until 2026-01-29 06:44:25.595565109 +0000 UTC m=+49.457358218 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/11d649f8-dcd0-4c52-96f1-f5c229546376-metrics-certs") pod "network-metrics-daemon-6qxzb" (UID: "11d649f8-dcd0-4c52-96f1-f5c229546376") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.639214 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.639267 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.639279 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.639316 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.639331 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:21Z","lastTransitionTime":"2026-01-29T06:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.741563 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.741623 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.741641 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.741664 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.741682 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:21Z","lastTransitionTime":"2026-01-29T06:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.757132 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 05:55:36.50009758 +0000 UTC Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.808464 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6qxzb" Jan 29 06:44:21 crc kubenswrapper[4826]: E0129 06:44:21.808640 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6qxzb" podUID="11d649f8-dcd0-4c52-96f1-f5c229546376" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.809098 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:44:21 crc kubenswrapper[4826]: E0129 06:44:21.809208 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.844495 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.844555 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.844578 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.844602 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.844619 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:21Z","lastTransitionTime":"2026-01-29T06:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.947912 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.948010 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.948038 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.948082 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:21 crc kubenswrapper[4826]: I0129 06:44:21.948110 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:21Z","lastTransitionTime":"2026-01-29T06:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.052362 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.052421 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.052442 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.052474 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.052498 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:22Z","lastTransitionTime":"2026-01-29T06:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.155749 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.155814 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.155833 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.155870 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.155888 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:22Z","lastTransitionTime":"2026-01-29T06:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.259620 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.259679 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.259693 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.259716 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.259735 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:22Z","lastTransitionTime":"2026-01-29T06:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.363356 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.363473 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.363496 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.363527 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.363553 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:22Z","lastTransitionTime":"2026-01-29T06:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.466925 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.467014 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.467037 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.467065 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.467084 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:22Z","lastTransitionTime":"2026-01-29T06:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.570699 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.570751 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.570770 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.570807 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.570998 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:22Z","lastTransitionTime":"2026-01-29T06:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.674489 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.674551 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.674569 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.674597 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.674614 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:22Z","lastTransitionTime":"2026-01-29T06:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.758214 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 03:34:25.759786582 +0000 UTC Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.788411 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.788837 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.788936 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.789041 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.789126 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:22Z","lastTransitionTime":"2026-01-29T06:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.809457 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:44:22 crc kubenswrapper[4826]: E0129 06:44:22.809640 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.809799 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:44:22 crc kubenswrapper[4826]: E0129 06:44:22.810261 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.892486 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.893453 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.893500 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.893538 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.893560 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:22Z","lastTransitionTime":"2026-01-29T06:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.997330 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.997402 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.997429 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.997459 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:22 crc kubenswrapper[4826]: I0129 06:44:22.997485 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:22Z","lastTransitionTime":"2026-01-29T06:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.104603 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.104654 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.104671 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.104696 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.105415 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:23Z","lastTransitionTime":"2026-01-29T06:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.220091 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.220144 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.220157 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.220181 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.220194 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:23Z","lastTransitionTime":"2026-01-29T06:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.323897 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.323961 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.323977 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.324002 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.324019 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:23Z","lastTransitionTime":"2026-01-29T06:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.427209 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.427288 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.427365 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.427388 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.427441 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:23Z","lastTransitionTime":"2026-01-29T06:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.484500 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.486184 4826 scope.go:117] "RemoveContainer" containerID="06cf2896ef9d4791845a7888dbd5bb23d27cbd3c2c0226e2b2876bfb226eb5d2" Jan 29 06:44:23 crc kubenswrapper[4826]: E0129 06:44:23.486532 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-s7xfk_openshift-ovn-kubernetes(6f0c380f-ebc1-482f-9a91-8b08033eadf2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.503374 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:23Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.519382 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tdzw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"550bdc9c-0324-4f3c-98df-95fbf1029eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42e6b57814763e9d213b1dae98fbad4d36b326203d58b6542d27010170f6565c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcc9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tdzw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:23Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.530148 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.530205 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.530227 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.530251 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.530269 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:23Z","lastTransitionTime":"2026-01-29T06:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.538823 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kdv64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa65a108-1826-4e74-8e8a-1eae605298f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e804e3c0c839d032b5f1c678a3b0646e1b4792bcc5fac6cbd49dd2cb5bc3209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kdv64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:23Z 
is after 2025-08-24T17:21:41Z" Jan 29 06:44:23 crc kubenswrapper[4826]: E0129 06:44:23.551513 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"905e3489-492d-4437-968b-82f79ce0edd7\\\",\\\"systemUUID\\\":\\\"8978a1d2-9b20-4f3c-a5b9-0aed7eb7584e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:23Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.557424 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f93ba8c06281d82ec2584250e6e843d591d364356425831cd6a463234d0a7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\
\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:23Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.557486 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.557711 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.557737 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.557765 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.557783 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:23Z","lastTransitionTime":"2026-01-29T06:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:23 crc kubenswrapper[4826]: E0129 06:44:23.571940 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"905e3489-492d-4437-968b-82f79ce0edd7\\\",\\\"systemUUID\\\":\\\"8978a1d2-9b20-4f3c-a5b9-0aed7eb7584e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:23Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.577053 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:23Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.578536 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.578594 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:23 crc 
kubenswrapper[4826]: I0129 06:44:23.578615 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.578639 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.578694 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:23Z","lastTransitionTime":"2026-01-29T06:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.590879 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b69cv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3bbd3c9-5a5b-4b55-9742-0fe17ceab252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1589556806e44a6795b7caf7695e584f4d9682e652350d2b4813197b62110119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d9qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b69cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:23Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:23 crc kubenswrapper[4826]: E0129 06:44:23.600481 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"905e3489-492d-4437-968b-82f79ce0edd7\\\",\\\"systemUUID\\\":\\\"8978a1d2-9b20-4f3c-a5b9-0aed7eb7584e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:23Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.605583 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.605788 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.605927 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.606126 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.608360 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:23Z","lastTransitionTime":"2026-01-29T06:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.608911 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e7e9d69-49ad-4177-a388-7d7556ddd380\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c25a1a60e522b5c06e95368ba6cd2727c0b3bb0dc42a8ff78c6afea6d7d5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24eebf52ead
4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd8cce464615767426dd78c948b03894eeccf28e4abd93cd66efcbaed2887b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e746868738216535af20d17aab43ad1a4a1228bd78ee2907d46755e361569b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:23Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:23 crc kubenswrapper[4826]: E0129 06:44:23.628227 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"905e3489-492d-4437-968b-82f79ce0edd7\\\",\\\"systemUUID\\\":\\\"8978a1d2-9b20-4f3c-a5b9-0aed7eb7584e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:23Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.628863 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:23Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.633063 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.633103 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.633119 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.633143 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.633160 4826 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:23Z","lastTransitionTime":"2026-01-29T06:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.652724 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca8a794-1985-4f1b-8651-03cfce7dd20c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://734f84e758ca06a23933bafe67941e9d76b6c40f31a2081ebbcf31aa2ab7ddea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e029777389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e029777389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5nvkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:23Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:23 crc kubenswrapper[4826]: E0129 06:44:23.655905 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:23Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"905e3489-492d-4437-968b-82f79ce0edd7\\\",\\\"systemUUID\\\":\\\"8978a1d2-9b20-4f3c-a5b9-0aed7eb7584e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:23Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:23 crc kubenswrapper[4826]: E0129 06:44:23.656480 4826 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.658850 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.658911 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.658933 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.658962 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.658986 4826 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:23Z","lastTransitionTime":"2026-01-29T06:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.676432 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID
\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:23Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.697407 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:23Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.716860 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca127c5fe560e373e07f8aecf0e41b5f18be23bf64a56a99d1f5a8809a53471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T06:44:23Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.736100 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://262e4dafc3b9dfefff3c3f64bcbf34a713ed43c895ef8b11cc6ca240f9348420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adf6fcaa6a5cc342f8af2dbb14c7fbaf0c953f17ee6e8ef3156e57c3893b93f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-llzmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:23Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.759448 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 20:28:34.901633679 
+0000 UTC Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.762227 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.762347 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.762379 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.762414 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.762438 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:23Z","lastTransitionTime":"2026-01-29T06:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.768113 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0c380f-ebc1-482f-9a91-8b08033eadf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cf2896ef9d4791845a7888dbd5bb23d27cbd3c2c0226e2b2876bfb226eb5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06cf2896ef9d4791845a7888dbd5bb23d27cbd3c2c0226e2b2876bfb226eb5d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:44:15Z\\\",\\\"message\\\":\\\"netes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:15.415755 6263 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:15.416359 6263 
reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:15.416612 6263 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:44:15.416806 6263 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:15.416853 6263 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:44:15.416807 6263 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:44:15.417498 6263 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 06:44:15.417553 6263 factory.go:656] Stopping watch factory\\\\nI0129 06:44:15.417567 6263 ovnkube.go:599] Stopped ovnkube\\\\nI0129 06:44:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-s7xfk_openshift-ovn-kubernetes(6f0c380f-ebc1-482f-9a91-8b08033eadf2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa
822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s7xfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:23Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.785651 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5gq6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be51f861-02cd-4b43-8b55-eddc27a15272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a60cd6040fe32abede192b2672e7a3c58bf438bbf9289ada60031ac50d1dd1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd54979536f60b2ac8ad4e665f9d0fb7917ee
a17801ec620dca7e547e16bb2ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5gq6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:23Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.804081 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6qxzb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11d649f8-dcd0-4c52-96f1-f5c229546376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4kx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4kx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6qxzb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:23Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:23 crc 
kubenswrapper[4826]: I0129 06:44:23.808424 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6qxzb" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.808553 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:44:23 crc kubenswrapper[4826]: E0129 06:44:23.808597 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6qxzb" podUID="11d649f8-dcd0-4c52-96f1-f5c229546376" Jan 29 06:44:23 crc kubenswrapper[4826]: E0129 06:44:23.808783 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.865387 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.865442 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.865459 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.865484 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.865501 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:23Z","lastTransitionTime":"2026-01-29T06:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.968509 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.968545 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.968556 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.968574 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:23 crc kubenswrapper[4826]: I0129 06:44:23.968586 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:23Z","lastTransitionTime":"2026-01-29T06:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.072209 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.072902 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.073053 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.073193 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.073391 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:24Z","lastTransitionTime":"2026-01-29T06:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.176799 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.177141 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.177386 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.177637 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.177829 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:24Z","lastTransitionTime":"2026-01-29T06:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.281265 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.281611 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.281885 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.282080 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.282360 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:24Z","lastTransitionTime":"2026-01-29T06:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.387566 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.387623 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.387641 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.387667 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.387684 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:24Z","lastTransitionTime":"2026-01-29T06:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.491511 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.491565 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.491581 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.491606 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.491628 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:24Z","lastTransitionTime":"2026-01-29T06:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.594415 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.594472 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.594491 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.594515 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.594532 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:24Z","lastTransitionTime":"2026-01-29T06:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.698112 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.698159 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.698176 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.698199 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.698217 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:24Z","lastTransitionTime":"2026-01-29T06:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.760241 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 04:48:51.781312452 +0000 UTC Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.801885 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.801955 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.801973 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.802003 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.802022 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:24Z","lastTransitionTime":"2026-01-29T06:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.808609 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.808656 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:44:24 crc kubenswrapper[4826]: E0129 06:44:24.808830 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:44:24 crc kubenswrapper[4826]: E0129 06:44:24.809022 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.905465 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.905569 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.905590 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.905618 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:24 crc kubenswrapper[4826]: I0129 06:44:24.905635 4826 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:24Z","lastTransitionTime":"2026-01-29T06:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.008968 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.009051 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.009074 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.009602 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.009656 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:25Z","lastTransitionTime":"2026-01-29T06:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.116082 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.116139 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.116155 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.116195 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.116214 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:25Z","lastTransitionTime":"2026-01-29T06:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.219671 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.219741 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.219758 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.219789 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.219811 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:25Z","lastTransitionTime":"2026-01-29T06:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.323244 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.323347 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.323364 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.323392 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.323414 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:25Z","lastTransitionTime":"2026-01-29T06:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.426656 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.426713 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.426739 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.426773 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.426795 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:25Z","lastTransitionTime":"2026-01-29T06:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.530864 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.530934 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.530954 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.530982 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.531001 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:25Z","lastTransitionTime":"2026-01-29T06:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.634072 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.634135 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.634152 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.634179 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.634200 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:25Z","lastTransitionTime":"2026-01-29T06:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.642854 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11d649f8-dcd0-4c52-96f1-f5c229546376-metrics-certs\") pod \"network-metrics-daemon-6qxzb\" (UID: \"11d649f8-dcd0-4c52-96f1-f5c229546376\") " pod="openshift-multus/network-metrics-daemon-6qxzb" Jan 29 06:44:25 crc kubenswrapper[4826]: E0129 06:44:25.643076 4826 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 06:44:25 crc kubenswrapper[4826]: E0129 06:44:25.643172 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11d649f8-dcd0-4c52-96f1-f5c229546376-metrics-certs podName:11d649f8-dcd0-4c52-96f1-f5c229546376 nodeName:}" failed. No retries permitted until 2026-01-29 06:44:33.643147876 +0000 UTC m=+57.504940975 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/11d649f8-dcd0-4c52-96f1-f5c229546376-metrics-certs") pod "network-metrics-daemon-6qxzb" (UID: "11d649f8-dcd0-4c52-96f1-f5c229546376") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.737497 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.737559 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.737577 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.737602 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.737620 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:25Z","lastTransitionTime":"2026-01-29T06:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.761181 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 14:28:38.953746797 +0000 UTC
Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.808762 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 06:44:25 crc kubenswrapper[4826]: E0129 06:44:25.808978 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.809193 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6qxzb"
Jan 29 06:44:25 crc kubenswrapper[4826]: E0129 06:44:25.809573 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6qxzb" podUID="11d649f8-dcd0-4c52-96f1-f5c229546376"
Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.841699 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.841785 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.841803 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.841858 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.841879 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:25Z","lastTransitionTime":"2026-01-29T06:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.945678 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.945982 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.946200 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.946387 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:25 crc kubenswrapper[4826]: I0129 06:44:25.946529 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:25Z","lastTransitionTime":"2026-01-29T06:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.049596 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.049669 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.049692 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.049718 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.049738 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:26Z","lastTransitionTime":"2026-01-29T06:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.153891 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.153957 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.153983 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.154018 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.154041 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:26Z","lastTransitionTime":"2026-01-29T06:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.256995 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.257057 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.257069 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.257088 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.257102 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:26Z","lastTransitionTime":"2026-01-29T06:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.361230 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.361666 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.361816 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.361949 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.362100 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:26Z","lastTransitionTime":"2026-01-29T06:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.465067 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.465422 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.465609 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.465773 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.465922 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:26Z","lastTransitionTime":"2026-01-29T06:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.531633 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.546741 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.554145 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e7e9d69-49ad-4177-a388-7d7556ddd380\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c25a1a60e522b5c06e95368ba6cd2727c0b3bb0dc42a8ff78c6afea6d7d5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24eebf52ead4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd8cce464615767426dd78c948b03894eeccf28e4abd93cd66efcbaed2887b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"containerID\\\":\\\"cri-o://07e746868738216535af20d17aab43ad1a4a1228bd78ee2907d46755e361569b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:26Z is after 2025-08-24T17:21:41Z"
Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.568542 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.568866 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.569028 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.569168 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.569327 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:26Z","lastTransitionTime":"2026-01-29T06:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.579206 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:26Z is after 2025-08-24T17:21:41Z"
Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.604324 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca8a794-1985-4f1b-8651-03cfce7dd20c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://734f84e758ca06a23933bafe67941e9d76b6c40f31a2081ebbcf31aa2ab7ddea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0297
77389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e029777389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:10Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5nvkq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:26Z is after 2025-08-24T17:21:41Z"
Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.622792 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6qxzb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11d649f8-dcd0-4c52-96f1-f5c229546376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4kx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4kx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6qxzb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:26Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:26 crc 
kubenswrapper[4826]: I0129 06:44:26.645479 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e2
0eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/k
ube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:26Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.666047 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:26Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.671983 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.672031 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.672047 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.672070 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.672088 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:26Z","lastTransitionTime":"2026-01-29T06:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.687122 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca127c5fe560e373e07f8aecf0e41b5f18be23bf64a56a99d1f5a8809a53471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:26Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.705778 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://262e4dafc3b9dfefff3c3f64bcbf34a713ed43c895ef8b11cc6ca240f9348420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef
318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adf6fcaa6a5cc342f8af2dbb14c7fbaf0c953f17ee6e8ef3156e57c3893b93f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-llzmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:26Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.736425 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0c380f-ebc1-482f-9a91-8b08033eadf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cf2896ef9d4791845a7888dbd5bb23d27cbd3c2c0226e2b2876bfb226eb5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06cf2896ef9d4791845a7888dbd5bb23d27cbd3c2c0226e2b2876bfb226eb5d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:44:15Z\\\",\\\"message\\\":\\\"netes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:15.415755 6263 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:15.416359 6263 
reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:15.416612 6263 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:44:15.416806 6263 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:15.416853 6263 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:44:15.416807 6263 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:44:15.417498 6263 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 06:44:15.417553 6263 factory.go:656] Stopping watch factory\\\\nI0129 06:44:15.417567 6263 ovnkube.go:599] Stopped ovnkube\\\\nI0129 06:44:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-s7xfk_openshift-ovn-kubernetes(6f0c380f-ebc1-482f-9a91-8b08033eadf2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa
822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s7xfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:26Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.757177 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5gq6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be51f861-02cd-4b43-8b55-eddc27a15272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a60cd6040fe32abede192b2672e7a3c58bf438bbf9289ada60031ac50d1dd1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd54979536f60b2ac8ad4e665f9d0fb7917ee
a17801ec620dca7e547e16bb2ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5gq6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:26Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.761777 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 11:26:30.520252049 +0000 UTC Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.775189 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:26 crc 
kubenswrapper[4826]: I0129 06:44:26.775250 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.775277 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.775338 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.775359 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:26Z","lastTransitionTime":"2026-01-29T06:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.776873 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:26Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.792559 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tdzw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"550bdc9c-0324-4f3c-98df-95fbf1029eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42e6b57814763e9d213b1dae98fbad4d36b326203d58b6542d27010170f6565c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcc9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tdzw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:26Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.808825 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.808844 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:44:26 crc kubenswrapper[4826]: E0129 06:44:26.809015 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:44:26 crc kubenswrapper[4826]: E0129 06:44:26.809137 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.816725 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kdv64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa65a108-1826-4e74-8e8a-1eae605298f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e804e3c0c839d032b5f1c678a3b0646e1b4792bcc5fac6cbd49dd2cb5bc3209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"na
me\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kdv64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-29T06:44:26Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.834130 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f93ba8c06281d82ec2584250e6e843d591d364356425831cd6a463234d0a7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:26Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.852742 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-ov
errides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:26Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.866481 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b69cv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3bbd3c9-5a5b-4b55-9742-0fe17ceab252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1589556806e44a6795b7caf7695e584f4d9682e652350d2b4813197b62110119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d9qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b69cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:26Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.877879 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.877947 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.877961 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.877984 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.877998 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:26Z","lastTransitionTime":"2026-01-29T06:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.885214 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e40d650-bb6c-4813-9a2a-d59da7bf6e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bf3ca8147e4012c2cb95e4ce01e17baa83040615adfbca0a88c05d446efc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69cfd89c00e517ce856d21fe9f9ab1
014c40a6b7d9237740e75d756007b061d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c432a0b0dd88840a13aec51ffb48857ec3d4c744a84a48852deccb7fc8422ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2797dd065e70892b17073f3cf3b9be36a1e405a85a86b3f52b42d4805db80dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2797dd065e70892b17073f3cf3b9be36a1e405a85a86b3f52b42d4805db80dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:26Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.905489 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f93ba8c06281d82ec2584250e6e843d591d364356425831cd6a463234d0a7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:26Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.926341 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:26Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.941232 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b69cv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3bbd3c9-5a5b-4b55-9742-0fe17ceab252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1589556806e44a6795b7caf7695e584f4d9682e652350d2b4813197b62110119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d9qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b69cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:26Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.960873 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e7e9d69-49ad-4177-a388-7d7556ddd380\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c25a1a60e522b5c06e95368ba6cd2727c0b3bb0dc42a8ff78c6afea6d7d5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24eebf52ead4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd8cce464615767426dd78c948b03894eeccf28e4abd93cd66efcbaed2887b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e746868738216535af20d17aab43ad1a4a1228bd78ee2907d46755e361569b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:26Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.978279 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:26Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.981973 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.982074 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.982103 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.982132 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:26 crc kubenswrapper[4826]: I0129 06:44:26.982152 4826 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:26Z","lastTransitionTime":"2026-01-29T06:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.001364 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca8a794-1985-4f1b-8651-03cfce7dd20c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://734f84e758ca06a23933bafe67941e9d76b6c40f31a2081ebbcf31aa2ab7ddea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e029777389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e029777389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5nvkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:26Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.017583 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6qxzb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11d649f8-dcd0-4c52-96f1-f5c229546376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4kx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4kx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6qxzb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:27Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.036121 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:27Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.053779 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:27Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.072958 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca127c5fe560e373e07f8aecf0e41b5f18be23bf64a56a99d1f5a8809a53471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T06:44:27Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.085546 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.085596 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.085613 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.085638 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.085657 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:27Z","lastTransitionTime":"2026-01-29T06:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.092785 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://262e4dafc3b9dfefff3c3f64bcbf34a713ed43c895ef8b11cc6ca240f9348420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adf6fcaa6a5cc342f8af2dbb14c7fbaf0c953f17ee6e8ef3156e57c3893b93f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-llzmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:27Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.128593 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0c380f-ebc1-482f-9a91-8b08033eadf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06cf2896ef9d4791845a7888dbd5bb23d27cbd3c2c0226e2b2876bfb226eb5d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06cf2896ef9d4791845a7888dbd5bb23d27cbd3c2c0226e2b2876bfb226eb5d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:44:15Z\\\",\\\"message\\\":\\\"netes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:15.415755 6263 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:15.416359 6263 
reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:15.416612 6263 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:44:15.416806 6263 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:15.416853 6263 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:44:15.416807 6263 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:44:15.417498 6263 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 06:44:15.417553 6263 factory.go:656] Stopping watch factory\\\\nI0129 06:44:15.417567 6263 ovnkube.go:599] Stopped ovnkube\\\\nI0129 06:44:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-s7xfk_openshift-ovn-kubernetes(6f0c380f-ebc1-482f-9a91-8b08033eadf2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa
822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s7xfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:27Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.148812 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5gq6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be51f861-02cd-4b43-8b55-eddc27a15272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a60cd6040fe32abede192b2672e7a3c58bf438bbf9289ada60031ac50d1dd1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd54979536f60b2ac8ad4e665f9d0fb7917ee
a17801ec620dca7e547e16bb2ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5gq6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:27Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.168998 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:27Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.185462 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tdzw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"550bdc9c-0324-4f3c-98df-95fbf1029eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42e6b57814763e9d213b1dae98fbad4d36b326203d58b6542d27010170f6565c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcc9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tdzw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:27Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.188551 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.188816 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.189029 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.189246 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.189516 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:27Z","lastTransitionTime":"2026-01-29T06:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.206488 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kdv64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa65a108-1826-4e74-8e8a-1eae605298f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e804e3c0c839d032b5f1c678a3b0646e1b4792bcc5fac6cbd49dd2cb5bc3209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kdv64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:27Z 
is after 2025-08-24T17:21:41Z" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.293122 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.293507 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.293657 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.293789 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.293982 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:27Z","lastTransitionTime":"2026-01-29T06:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.397737 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.397809 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.397832 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.397865 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.397920 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:27Z","lastTransitionTime":"2026-01-29T06:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.501177 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.501241 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.501258 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.501280 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.501328 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:27Z","lastTransitionTime":"2026-01-29T06:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.604440 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.604510 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.604529 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.604555 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.604574 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:27Z","lastTransitionTime":"2026-01-29T06:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.707764 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.707820 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.707838 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.707864 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.707886 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:27Z","lastTransitionTime":"2026-01-29T06:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.762849 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 10:02:38.183447918 +0000 UTC Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.810969 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.811018 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.811035 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.811064 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.811082 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:27Z","lastTransitionTime":"2026-01-29T06:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.842724 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6qxzb" Jan 29 06:44:27 crc kubenswrapper[4826]: E0129 06:44:27.843836 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6qxzb" podUID="11d649f8-dcd0-4c52-96f1-f5c229546376" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.842757 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:44:27 crc kubenswrapper[4826]: E0129 06:44:27.844246 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.914604 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.914687 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.914717 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.914751 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:27 crc kubenswrapper[4826]: I0129 06:44:27.914773 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:27Z","lastTransitionTime":"2026-01-29T06:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.017664 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.017714 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.017731 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.017753 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.017770 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:28Z","lastTransitionTime":"2026-01-29T06:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.121192 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.121285 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.121414 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.121443 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.121462 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:28Z","lastTransitionTime":"2026-01-29T06:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.171074 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.171205 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.171269 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:44:28 crc kubenswrapper[4826]: E0129 06:44:28.171552 4826 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 06:44:28 crc kubenswrapper[4826]: E0129 06:44:28.171631 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 06:45:00.171609082 +0000 UTC m=+84.033402191 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 06:44:28 crc kubenswrapper[4826]: E0129 06:44:28.171717 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:00.171701334 +0000 UTC m=+84.033494433 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:44:28 crc kubenswrapper[4826]: E0129 06:44:28.171823 4826 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 06:44:28 crc kubenswrapper[4826]: E0129 06:44:28.171868 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 06:45:00.171855948 +0000 UTC m=+84.033649047 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.225016 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.225077 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.225094 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.225122 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.225140 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:28Z","lastTransitionTime":"2026-01-29T06:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.272704 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.273239 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:44:28 crc kubenswrapper[4826]: E0129 06:44:28.272949 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 06:44:28 crc kubenswrapper[4826]: E0129 06:44:28.273680 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 06:44:28 crc kubenswrapper[4826]: E0129 06:44:28.273816 4826 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:44:28 crc kubenswrapper[4826]: E0129 06:44:28.274007 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 06:45:00.273978922 +0000 UTC m=+84.135772031 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 06:44:28 crc kubenswrapper[4826]: E0129 06:44:28.273412 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 29 06:44:28 crc kubenswrapper[4826]: E0129 06:44:28.274267 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 29 06:44:28 crc kubenswrapper[4826]: E0129 06:44:28.274470 4826 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 06:44:28 crc kubenswrapper[4826]: E0129 06:44:28.274655 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 06:45:00.27463637 +0000 UTC m=+84.136429469 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.332560 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.332622 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.332640 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.332663 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.332680 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:28Z","lastTransitionTime":"2026-01-29T06:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.435956 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.436340 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.436478 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.436663 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.436827 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:28Z","lastTransitionTime":"2026-01-29T06:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.540011 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.540065 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.540119 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.540143 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.540160 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:28Z","lastTransitionTime":"2026-01-29T06:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.643928 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.643978 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.643995 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.644016 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.644032 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:28Z","lastTransitionTime":"2026-01-29T06:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.747954 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.748199 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.748402 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.748618 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.748809 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:28Z","lastTransitionTime":"2026-01-29T06:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.763716 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 14:10:54.331164431 +0000 UTC
Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.809028 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.809086 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 06:44:28 crc kubenswrapper[4826]: E0129 06:44:28.809241 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 06:44:28 crc kubenswrapper[4826]: E0129 06:44:28.809410 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.853086 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.853140 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.853158 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.853186 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.853208 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:28Z","lastTransitionTime":"2026-01-29T06:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.956888 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.956976 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.957028 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.957057 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:28 crc kubenswrapper[4826]: I0129 06:44:28.957074 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:28Z","lastTransitionTime":"2026-01-29T06:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.060473 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.060544 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.060562 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.060585 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.060603 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:29Z","lastTransitionTime":"2026-01-29T06:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.163753 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.163828 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.163846 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.163871 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.163891 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:29Z","lastTransitionTime":"2026-01-29T06:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.266776 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.266836 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.266849 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.267092 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.267112 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:29Z","lastTransitionTime":"2026-01-29T06:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.370799 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.370890 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.370915 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.370986 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.371014 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:29Z","lastTransitionTime":"2026-01-29T06:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.474266 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.474368 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.474387 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.474415 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.474434 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:29Z","lastTransitionTime":"2026-01-29T06:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.577832 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.577886 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.577903 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.577927 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.577944 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:29Z","lastTransitionTime":"2026-01-29T06:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.681081 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.681495 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.681685 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.681883 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.682032 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:29Z","lastTransitionTime":"2026-01-29T06:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.764872 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 12:14:19.874389548 +0000 UTC
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.785588 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.785660 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.785678 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.785704 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.785724 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:29Z","lastTransitionTime":"2026-01-29T06:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.807974 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6qxzb"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.808050 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 06:44:29 crc kubenswrapper[4826]: E0129 06:44:29.808612 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6qxzb" podUID="11d649f8-dcd0-4c52-96f1-f5c229546376"
Jan 29 06:44:29 crc kubenswrapper[4826]: E0129 06:44:29.808768 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.889048 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.889117 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.889142 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.889176 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.889203 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:29Z","lastTransitionTime":"2026-01-29T06:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.992391 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.992829 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.993014 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.993160 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:29 crc kubenswrapper[4826]: I0129 06:44:29.993286 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:29Z","lastTransitionTime":"2026-01-29T06:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.096282 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.096382 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.096399 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.096428 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.096461 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:30Z","lastTransitionTime":"2026-01-29T06:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.199269 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.200214 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.200572 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.200768 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.200908 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:30Z","lastTransitionTime":"2026-01-29T06:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.304777 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.304832 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.304851 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.304893 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.304911 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:30Z","lastTransitionTime":"2026-01-29T06:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.408563 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.408626 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.408643 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.408670 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.408689 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:30Z","lastTransitionTime":"2026-01-29T06:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.511491 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.511566 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.511585 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.511611 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.511631 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:30Z","lastTransitionTime":"2026-01-29T06:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.614531 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.614599 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.614617 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.614642 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.614660 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:30Z","lastTransitionTime":"2026-01-29T06:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.718485 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.718532 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.718551 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.718572 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.718587 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:30Z","lastTransitionTime":"2026-01-29T06:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.765588 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 23:25:28.193378842 +0000 UTC
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.808571 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.808784 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 06:44:30 crc kubenswrapper[4826]: E0129 06:44:30.809641 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 06:44:30 crc kubenswrapper[4826]: E0129 06:44:30.809652 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.821387 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.821434 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.821452 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.821475 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.821495 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:30Z","lastTransitionTime":"2026-01-29T06:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.925112 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.925539 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.925694 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.925881 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:30 crc kubenswrapper[4826]: I0129 06:44:30.926195 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:30Z","lastTransitionTime":"2026-01-29T06:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.029039 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.029127 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.029150 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.029184 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.029204 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:31Z","lastTransitionTime":"2026-01-29T06:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.132557 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.132615 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.132633 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.132658 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.132676 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:31Z","lastTransitionTime":"2026-01-29T06:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.235281 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.235371 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.235387 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.235411 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.235428 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:31Z","lastTransitionTime":"2026-01-29T06:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.338684 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.339017 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.339194 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.339397 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.339542 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:31Z","lastTransitionTime":"2026-01-29T06:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.442959 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.443019 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.443036 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.443063 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.443081 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:31Z","lastTransitionTime":"2026-01-29T06:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.545697 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.545996 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.546026 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.546058 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.546082 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:31Z","lastTransitionTime":"2026-01-29T06:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.649638 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.649699 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.649717 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.649747 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.649766 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:31Z","lastTransitionTime":"2026-01-29T06:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.752073 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.752124 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.752141 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.752165 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.752183 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:31Z","lastTransitionTime":"2026-01-29T06:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.765977 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 09:36:22.295293849 +0000 UTC Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.808673 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6qxzb" Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.808697 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:44:31 crc kubenswrapper[4826]: E0129 06:44:31.808869 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6qxzb" podUID="11d649f8-dcd0-4c52-96f1-f5c229546376" Jan 29 06:44:31 crc kubenswrapper[4826]: E0129 06:44:31.808981 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.855068 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.855132 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.855150 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.855175 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.855191 4826 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:31Z","lastTransitionTime":"2026-01-29T06:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.958802 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.958856 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.958874 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.958898 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:31 crc kubenswrapper[4826]: I0129 06:44:31.958916 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:31Z","lastTransitionTime":"2026-01-29T06:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.061592 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.061665 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.061682 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.061708 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.061727 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:32Z","lastTransitionTime":"2026-01-29T06:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.165506 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.165741 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.165920 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.166072 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.166291 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:32Z","lastTransitionTime":"2026-01-29T06:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.269942 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.269996 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.270013 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.270039 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.270056 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:32Z","lastTransitionTime":"2026-01-29T06:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.373772 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.373866 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.373889 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.373914 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.373932 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:32Z","lastTransitionTime":"2026-01-29T06:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.476839 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.476931 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.476950 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.476975 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.476992 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:32Z","lastTransitionTime":"2026-01-29T06:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.581269 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.581363 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.581381 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.581407 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.581426 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:32Z","lastTransitionTime":"2026-01-29T06:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.685409 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.685476 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.685495 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.685520 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.685538 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:32Z","lastTransitionTime":"2026-01-29T06:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.767089 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 18:15:09.194336401 +0000 UTC Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.788563 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.788602 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.788615 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.788633 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.788644 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:32Z","lastTransitionTime":"2026-01-29T06:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.808821 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.808860 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:44:32 crc kubenswrapper[4826]: E0129 06:44:32.809020 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:44:32 crc kubenswrapper[4826]: E0129 06:44:32.809189 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.902724 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.902800 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.902819 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.902845 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:32 crc kubenswrapper[4826]: I0129 06:44:32.902863 4826 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:32Z","lastTransitionTime":"2026-01-29T06:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.005500 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.005552 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.005566 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.005584 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.005597 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:33Z","lastTransitionTime":"2026-01-29T06:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.107608 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.107679 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.107703 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.107736 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.107762 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:33Z","lastTransitionTime":"2026-01-29T06:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.210390 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.210457 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.210478 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.210506 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.210526 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:33Z","lastTransitionTime":"2026-01-29T06:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.313623 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.313686 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.313708 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.313738 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.313759 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:33Z","lastTransitionTime":"2026-01-29T06:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.416178 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.416225 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.416237 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.416253 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.416266 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:33Z","lastTransitionTime":"2026-01-29T06:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.518870 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.518924 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.518940 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.518962 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.518979 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:33Z","lastTransitionTime":"2026-01-29T06:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.621901 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.621981 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.622001 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.622025 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.622044 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:33Z","lastTransitionTime":"2026-01-29T06:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.724807 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.724860 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.724877 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.724900 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.724916 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:33Z","lastTransitionTime":"2026-01-29T06:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.736261 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11d649f8-dcd0-4c52-96f1-f5c229546376-metrics-certs\") pod \"network-metrics-daemon-6qxzb\" (UID: \"11d649f8-dcd0-4c52-96f1-f5c229546376\") " pod="openshift-multus/network-metrics-daemon-6qxzb" Jan 29 06:44:33 crc kubenswrapper[4826]: E0129 06:44:33.736491 4826 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 06:44:33 crc kubenswrapper[4826]: E0129 06:44:33.736573 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11d649f8-dcd0-4c52-96f1-f5c229546376-metrics-certs podName:11d649f8-dcd0-4c52-96f1-f5c229546376 nodeName:}" failed. No retries permitted until 2026-01-29 06:44:49.736550735 +0000 UTC m=+73.598343844 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/11d649f8-dcd0-4c52-96f1-f5c229546376-metrics-certs") pod "network-metrics-daemon-6qxzb" (UID: "11d649f8-dcd0-4c52-96f1-f5c229546376") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.767439 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 05:09:33.122317773 +0000 UTC Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.808152 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.808152 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6qxzb" Jan 29 06:44:33 crc kubenswrapper[4826]: E0129 06:44:33.808340 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:44:33 crc kubenswrapper[4826]: E0129 06:44:33.808522 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6qxzb" podUID="11d649f8-dcd0-4c52-96f1-f5c229546376" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.828697 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.828778 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.828843 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.828874 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.828940 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:33Z","lastTransitionTime":"2026-01-29T06:44:33Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.931823 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.931899 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.931923 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.931957 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:33 crc kubenswrapper[4826]: I0129 06:44:33.931977 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:33Z","lastTransitionTime":"2026-01-29T06:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.004018 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.004058 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.004071 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.004086 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.004098 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:34Z","lastTransitionTime":"2026-01-29T06:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:34 crc kubenswrapper[4826]: E0129 06:44:34.020740 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"905e3489-492d-4437-968b-82f79ce0edd7\\\",\\\"systemUUID\\\":\\\"8978a1d2-9b20-4f3c-a5b9-0aed7eb7584e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:34Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.026204 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.026245 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.026260 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.026281 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.026326 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:34Z","lastTransitionTime":"2026-01-29T06:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:34 crc kubenswrapper[4826]: E0129 06:44:34.043528 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"905e3489-492d-4437-968b-82f79ce0edd7\\\",\\\"systemUUID\\\":\\\"8978a1d2-9b20-4f3c-a5b9-0aed7eb7584e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:34Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.047856 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.047889 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.047900 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.047916 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.047927 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:34Z","lastTransitionTime":"2026-01-29T06:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:34 crc kubenswrapper[4826]: E0129 06:44:34.065719 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"905e3489-492d-4437-968b-82f79ce0edd7\\\",\\\"systemUUID\\\":\\\"8978a1d2-9b20-4f3c-a5b9-0aed7eb7584e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:34Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.069840 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.069880 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.069895 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.069915 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.069930 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:34Z","lastTransitionTime":"2026-01-29T06:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:34 crc kubenswrapper[4826]: E0129 06:44:34.087358 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"905e3489-492d-4437-968b-82f79ce0edd7\\\",\\\"systemUUID\\\":\\\"8978a1d2-9b20-4f3c-a5b9-0aed7eb7584e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:34Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.092105 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.092151 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.092167 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.092188 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.092204 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:34Z","lastTransitionTime":"2026-01-29T06:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:34 crc kubenswrapper[4826]: E0129 06:44:34.109468 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"905e3489-492d-4437-968b-82f79ce0edd7\\\",\\\"systemUUID\\\":\\\"8978a1d2-9b20-4f3c-a5b9-0aed7eb7584e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:34Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:34 crc kubenswrapper[4826]: E0129 06:44:34.109623 4826 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.111994 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.112055 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.112091 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.112113 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.112128 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:34Z","lastTransitionTime":"2026-01-29T06:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.214640 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.214792 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.214812 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.214836 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.214895 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:34Z","lastTransitionTime":"2026-01-29T06:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.317812 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.317880 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.317897 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.317923 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.317942 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:34Z","lastTransitionTime":"2026-01-29T06:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.421525 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.421643 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.421707 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.421732 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.421781 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:34Z","lastTransitionTime":"2026-01-29T06:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.524659 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.524737 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.524749 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.524768 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.524858 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:34Z","lastTransitionTime":"2026-01-29T06:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.628635 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.628692 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.628711 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.628736 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.628755 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:34Z","lastTransitionTime":"2026-01-29T06:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.733007 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.733077 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.733100 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.733129 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.733151 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:34Z","lastTransitionTime":"2026-01-29T06:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.768574 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 05:00:18.092834333 +0000 UTC Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.808505 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.808613 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:44:34 crc kubenswrapper[4826]: E0129 06:44:34.808714 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:44:34 crc kubenswrapper[4826]: E0129 06:44:34.808832 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.809938 4826 scope.go:117] "RemoveContainer" containerID="06cf2896ef9d4791845a7888dbd5bb23d27cbd3c2c0226e2b2876bfb226eb5d2" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.835942 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.835997 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.836014 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.836035 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.836052 4826 setters.go:603] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:34Z","lastTransitionTime":"2026-01-29T06:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.938897 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.938952 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.938968 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.938992 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:34 crc kubenswrapper[4826]: I0129 06:44:34.939009 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:34Z","lastTransitionTime":"2026-01-29T06:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.042335 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.042380 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.042397 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.042418 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.042434 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:35Z","lastTransitionTime":"2026-01-29T06:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.145346 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.145381 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.145389 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.145404 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.145413 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:35Z","lastTransitionTime":"2026-01-29T06:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.219071 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s7xfk_6f0c380f-ebc1-482f-9a91-8b08033eadf2/ovnkube-controller/1.log" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.223750 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" event={"ID":"6f0c380f-ebc1-482f-9a91-8b08033eadf2","Type":"ContainerStarted","Data":"08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb"} Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.224773 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.249636 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.249695 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.249716 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.249740 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.249759 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:35Z","lastTransitionTime":"2026-01-29T06:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.250808 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.276944 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tdzw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"550bdc9c-0324-4f3c-98df-95fbf1029eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42e6b57814763e9d213b1dae98fbad4d36b326203d58b6542d27010170f6565c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcc9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tdzw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.305743 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kdv64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa65a108-1826-4e74-8e8a-1eae605298f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e804e3c0c839d032b5f1c678a3b0646e1b4792bcc5fac6cbd49dd2cb5bc3209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kdv64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.330443 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f93ba8c06281d82ec2584250e6e843d591d364356425831cd6a463234d0a7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.351672 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.351710 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.351720 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.351732 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.351741 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:35Z","lastTransitionTime":"2026-01-29T06:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.359431 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.384121 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b69cv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3bbd3c9-5a5b-4b55-9742-0fe17ceab252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1589556806e44a6795b7caf7695e584f4d9682e652350d2b4813197b62110119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d9qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b69cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.395951 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e40d650-bb6c-4813-9a2a-d59da7bf6e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bf3ca8147e4012c2cb95e4ce01e17baa83040615adfbca0a88c05d446efc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69cfd89c00e517ce856d21fe9f9ab1014c40a6b7d9237740e75d756007b061d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c432a0b0dd88840a13aec51ffb48857ec3d4c744a84a48852deccb7fc8422ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2797dd065e70892b17073f3cf3b9be36a1e405a85a86b3f52b42d4805db80dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2797dd065e70892b17073f3cf3b9be36a1e405a85a86b3f52b42d4805db80dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.411152 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e7e9d69-49ad-4177-a388-7d7556ddd380\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c25a1a60e522b5c06e95368ba6cd2727c0b3bb0dc42a8ff78c6afea6d7d5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24eebf52ead4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd8cce464615767426dd78c948b03894eeccf28e4abd93cd66efcbaed2887b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e746868738216535af20d17aab43ad1a4a1228bd78ee2907d46755e361569b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.423571 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.435696 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca8a794-1985-4f1b-8651-03cfce7dd20c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://734f84e758ca06a23933bafe67941e9d76b6c40f31a2081ebbcf31aa2ab7ddea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0297
77389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e029777389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:10Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5nvkq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.446092 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.453540 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.453579 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.453592 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.453610 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.453621 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:35Z","lastTransitionTime":"2026-01-29T06:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.456854 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca127c5fe560e373e07f8aecf0e41b5f18be23bf64a56a99d1f5a8809a53471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.466087 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://262e4dafc3b9dfefff3c3f64bcbf34a713ed43c895ef8b11cc6ca240f9348420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",
\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adf6fcaa6a5cc342f8af2dbb14c7fbaf0c953f17ee6e8ef3156e57c3893b93f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-llzmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-29T06:44:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.481898 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0c380f-ebc1-482f-9a91-8b08033eadf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06cf2896ef9d4791845a7888dbd5bb23d27cbd3c2c0226e2b2876bfb226eb5d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:44:15Z\\\",\\\"message\\\":\\\"netes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:15.415755 6263 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:15.416359 6263 
reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:15.416612 6263 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:44:15.416806 6263 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:15.416853 6263 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:44:15.416807 6263 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:44:15.417498 6263 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 06:44:15.417553 6263 factory.go:656] Stopping watch factory\\\\nI0129 06:44:15.417567 6263 ovnkube.go:599] Stopped ovnkube\\\\nI0129 
06:44:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s7xfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.492591 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5gq6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be51f861-02cd-4b43-8b55-eddc27a15272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a60cd6040fe32abede192b2672e7a3c58bf438bbf9289ada60031ac50d1dd1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd54979536f60b2ac8ad4e665f9d0fb7917ee
a17801ec620dca7e547e16bb2ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5gq6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.500390 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6qxzb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11d649f8-dcd0-4c52-96f1-f5c229546376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4kx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4kx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6qxzb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:35 crc 
kubenswrapper[4826]: I0129 06:44:35.516659 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e2
0eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/k
ube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:35Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.555655 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.555701 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.555710 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.555724 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.555734 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:35Z","lastTransitionTime":"2026-01-29T06:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.657939 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.657994 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.658012 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.658035 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.658052 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:35Z","lastTransitionTime":"2026-01-29T06:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.760645 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.760729 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.760748 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.760772 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.760788 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:35Z","lastTransitionTime":"2026-01-29T06:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.769141 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 15:40:19.554647062 +0000 UTC Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.809457 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6qxzb" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.809553 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:44:35 crc kubenswrapper[4826]: E0129 06:44:35.809703 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6qxzb" podUID="11d649f8-dcd0-4c52-96f1-f5c229546376" Jan 29 06:44:35 crc kubenswrapper[4826]: E0129 06:44:35.809921 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.863844 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.863889 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.863901 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.863918 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.863931 4826 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:35Z","lastTransitionTime":"2026-01-29T06:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.966820 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.966883 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.966900 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.966926 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:35 crc kubenswrapper[4826]: I0129 06:44:35.966943 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:35Z","lastTransitionTime":"2026-01-29T06:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.069367 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.069434 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.069451 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.069473 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.069491 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:36Z","lastTransitionTime":"2026-01-29T06:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.172371 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.172439 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.172457 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.172482 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.172501 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:36Z","lastTransitionTime":"2026-01-29T06:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.229515 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s7xfk_6f0c380f-ebc1-482f-9a91-8b08033eadf2/ovnkube-controller/2.log" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.230503 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s7xfk_6f0c380f-ebc1-482f-9a91-8b08033eadf2/ovnkube-controller/1.log" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.234184 4826 generic.go:334] "Generic (PLEG): container finished" podID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerID="08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb" exitCode=1 Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.234237 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" event={"ID":"6f0c380f-ebc1-482f-9a91-8b08033eadf2","Type":"ContainerDied","Data":"08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb"} Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.234289 4826 scope.go:117] "RemoveContainer" containerID="06cf2896ef9d4791845a7888dbd5bb23d27cbd3c2c0226e2b2876bfb226eb5d2" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.235333 4826 scope.go:117] "RemoveContainer" containerID="08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb" Jan 29 06:44:36 crc kubenswrapper[4826]: E0129 06:44:36.235638 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-s7xfk_openshift-ovn-kubernetes(6f0c380f-ebc1-482f-9a91-8b08033eadf2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.267631 4826 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0c380f-ebc1-482f-9a91-8b08033eadf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06cf2896ef9d4791845a7888dbd5bb23d27cbd3c2c0226e2b2876bfb226eb5d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:44:15Z\\\",\\\"message\\\":\\\"netes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:15.415755 6263 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:15.416359 6263 
reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:15.416612 6263 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:44:15.416806 6263 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:15.416853 6263 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:44:15.416807 6263 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:44:15.417498 6263 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 06:44:15.417553 6263 factory.go:656] Stopping watch factory\\\\nI0129 06:44:15.417567 6263 ovnkube.go:599] Stopped ovnkube\\\\nI0129 06:44:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:44:35Z\\\",\\\"message\\\":\\\"-apiserver/apiserver retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{apiserver openshift-kube-apiserver 1c35c6c5-2bb9-4633-8ecb-881a1ff8d2fe 7887 0 2025-02-23 05:33:28 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[] map[operator.openshift.io/spec-hash:2787a90499aeabb4cf7acbefa3d43f6c763431fdc60904fdfa1fe74cd04203ee] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: 
true,},ClusterIP:10.217.4.93,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.93],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0129 06:44:35.852966 6525 services_controller.go:434] Service default/kubernetes retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kubernetes default 1fcaffea-cfe2-4295-9c2a-a3b3626fb3f1 259 0 2025-02-23 05:11:12 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[component:apiserver provider:kubernetes] map[] [] [] []},Spec:Servic\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s7xfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.278925 4826 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.278991 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.279010 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.279036 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.279054 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:36Z","lastTransitionTime":"2026-01-29T06:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.287983 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5gq6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be51f861-02cd-4b43-8b55-eddc27a15272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a60cd6040fe32abede192b2672e7a3c58bf438bbf9289ada60031ac50d1dd1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd54979536f60b2ac8ad4e665f9d0fb7917eea17801ec620dca7e547e16bb2ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5gq6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.301184 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6qxzb" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11d649f8-dcd0-4c52-96f1-f5c229546376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4kx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4kx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6qxzb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:36 crc 
kubenswrapper[4826]: I0129 06:44:36.316021 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e2
0eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/k
ube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.332021 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.350437 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca127c5fe560e373e07f8aecf0e41b5f18be23bf64a56a99d1f5a8809a53471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T06:44:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.367274 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://262e4dafc3b9dfefff3c3f64bcbf34a713ed43c895ef8b11cc6ca240f9348420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adf6fcaa6a5cc342f8af2dbb14c7fbaf0c953f17ee6e8ef3156e57c3893b93f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-llzmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.382292 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 
06:44:36.382370 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.382386 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.382407 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.382424 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:36Z","lastTransitionTime":"2026-01-29T06:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.385893 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.404816 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tdzw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"550bdc9c-0324-4f3c-98df-95fbf1029eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42e6b57814763e9d213b1dae98fbad4d36b326203d58b6542d27010170f6565c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcc9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tdzw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.420634 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kdv64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa65a108-1826-4e74-8e8a-1eae605298f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e804e3c0c839d032b5f1c678a3b0646e1b4792bcc5fac6cbd49dd2cb5bc3209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kdv64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.438757 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e40d650-bb6c-4813-9a2a-d59da7bf6e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bf3ca8147e4012c2cb95e4ce01e17baa83040615adfbca0a88c05d446efc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69cfd89c00e517ce856d21fe9f9ab1014c40a6b7d9237740e75d756007b061d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c432a0b0dd88840a13aec51ffb48857ec3d4c744a84a48852deccb7fc8422ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostI
P\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2797dd065e70892b17073f3cf3b9be36a1e405a85a86b3f52b42d4805db80dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2797dd065e70892b17073f3cf3b9be36a1e405a85a86b3f52b42d4805db80dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.459011 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f93ba8c06281d82ec2584250e6e843d591d364356425831cd6a463234d0a7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.480822 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.485799 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.485852 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.485868 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.485895 4826 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.485911 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:36Z","lastTransitionTime":"2026-01-29T06:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.495881 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b69cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3bbd3c9-5a5b-4b55-9742-0fe17ceab252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1589556806e44a6795b7caf7695e584f4d9682e652350d2b4813197b62110119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d9qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b69cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.515460 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e7e9d69-49ad-4177-a388-7d7556ddd380\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c25a1a60e522b5c06e95368ba6cd2727c0b3bb0dc42a8ff78c6afea6d7d5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24eebf52ead4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd8cce464615767426dd78c948b03894eeccf28e4abd93cd66efcbaed2887b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e746868738216535af20d17aab43ad1a4a1228bd78ee2907d46755e361569b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.536843 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.559754 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca8a794-1985-4f1b-8651-03cfce7dd20c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://734f84e758ca06a23933bafe67941e9d76b6c40f31a2081ebbcf31aa2ab7ddea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0297
77389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e029777389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:10Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5nvkq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.589600 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.589662 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.589679 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.589706 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.589724 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:36Z","lastTransitionTime":"2026-01-29T06:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.692605 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.692669 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.692686 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.692713 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.692733 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:36Z","lastTransitionTime":"2026-01-29T06:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.769281 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 08:39:01.457754166 +0000 UTC Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.795897 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.795995 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.796921 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.797022 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.797052 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:36Z","lastTransitionTime":"2026-01-29T06:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.807846 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.807989 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:44:36 crc kubenswrapper[4826]: E0129 06:44:36.808252 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:44:36 crc kubenswrapper[4826]: E0129 06:44:36.808426 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.829881 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.849100 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tdzw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"550bdc9c-0324-4f3c-98df-95fbf1029eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42e6b57814763e9d213b1dae98fbad4d36b326203d58b6542d27010170f6565c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcc9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tdzw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.869931 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kdv64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa65a108-1826-4e74-8e8a-1eae605298f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e804e3c0c839d032b5f1c678a3b0646e1b4792bcc5fac6cbd49dd2cb5bc3209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kdv64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.887529 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e40d650-bb6c-4813-9a2a-d59da7bf6e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bf3ca8147e4012c2cb95e4ce01e17baa83040615adfbca0a88c05d446efc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69cfd89c00e517ce856d21fe9f9ab1014c40a6b7d9237740e75d756007b061d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c432a0b0dd88840a13aec51ffb48857ec3d4c744a84a48852deccb7fc8422ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostI
P\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2797dd065e70892b17073f3cf3b9be36a1e405a85a86b3f52b42d4805db80dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2797dd065e70892b17073f3cf3b9be36a1e405a85a86b3f52b42d4805db80dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.901068 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.901124 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.901141 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 
06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.901163 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.901182 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:36Z","lastTransitionTime":"2026-01-29T06:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.910226 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f93ba8c06281d82ec2584250e6e843d591d364356425831cd6a463234d0a7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operat
or\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.932514 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.949053 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b69cv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3bbd3c9-5a5b-4b55-9742-0fe17ceab252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1589556806e44a6795b7caf7695e584f4d9682e652350d2b4813197b62110119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d9qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b69cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.968567 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e7e9d69-49ad-4177-a388-7d7556ddd380\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c25a1a60e522b5c06e95368ba6cd2727c0b3bb0dc42a8ff78c6afea6d7d5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24eebf52ead4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd8cce464615767426dd78c948b03894eeccf28e4abd93cd66efcbaed2887b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e746868738216535af20d17aab43ad1a4a1228bd78ee2907d46755e361569b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:36 crc kubenswrapper[4826]: I0129 06:44:36.990518 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:36Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.005865 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.005919 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.005936 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.005962 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.005981 4826 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:37Z","lastTransitionTime":"2026-01-29T06:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.016442 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca8a794-1985-4f1b-8651-03cfce7dd20c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://734f84e758ca06a23933bafe67941e9d76b6c40f31a2081ebbcf31aa2ab7ddea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e029777389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e029777389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5nvkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.035103 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6qxzb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11d649f8-dcd0-4c52-96f1-f5c229546376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4kx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4kx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6qxzb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.056487 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.076716 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.097095 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca127c5fe560e373e07f8aecf0e41b5f18be23bf64a56a99d1f5a8809a53471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T06:44:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.109235 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.109320 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.109338 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.109364 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.109382 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:37Z","lastTransitionTime":"2026-01-29T06:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.117559 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://262e4dafc3b9dfefff3c3f64bcbf34a713ed43c895ef8b11cc6ca240f9348420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adf6fcaa6a5cc342f8af2dbb14c7fbaf0c953f17ee6e8ef3156e57c3893b93f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-llzmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.148965 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0c380f-ebc1-482f-9a91-8b08033eadf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06cf2896ef9d4791845a7888dbd5bb23d27cbd3c2c0226e2b2876bfb226eb5d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:44:15Z\\\",\\\"message\\\":\\\"netes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:15.415755 6263 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:15.416359 6263 
reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:15.416612 6263 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:44:15.416806 6263 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 06:44:15.416853 6263 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:44:15.416807 6263 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 06:44:15.417498 6263 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 06:44:15.417553 6263 factory.go:656] Stopping watch factory\\\\nI0129 06:44:15.417567 6263 ovnkube.go:599] Stopped ovnkube\\\\nI0129 06:44:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:44:35Z\\\",\\\"message\\\":\\\"-apiserver/apiserver retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{apiserver openshift-kube-apiserver 1c35c6c5-2bb9-4633-8ecb-881a1ff8d2fe 7887 0 2025-02-23 05:33:28 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[] map[operator.openshift.io/spec-hash:2787a90499aeabb4cf7acbefa3d43f6c763431fdc60904fdfa1fe74cd04203ee] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: 
true,},ClusterIP:10.217.4.93,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.93],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0129 06:44:35.852966 6525 services_controller.go:434] Service default/kubernetes retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kubernetes default 1fcaffea-cfe2-4295-9c2a-a3b3626fb3f1 259 0 2025-02-23 05:11:12 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[component:apiserver provider:kubernetes] map[] [] [] []},Spec:Servic\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s7xfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.166805 4826 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5gq6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be51f861-02cd-4b43-8b55-eddc27a15272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a60cd6040fe32abede192b2672e7a3c58bf438bbf9289ada60031ac50d1dd1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjcd9\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd54979536f60b2ac8ad4e665f9d0fb7917eea17801ec620dca7e547e16bb2ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5gq6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.212287 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.212373 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.212408 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.212426 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.212440 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:37Z","lastTransitionTime":"2026-01-29T06:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.240130 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s7xfk_6f0c380f-ebc1-482f-9a91-8b08033eadf2/ovnkube-controller/2.log" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.246531 4826 scope.go:117] "RemoveContainer" containerID="08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb" Jan 29 06:44:37 crc kubenswrapper[4826]: E0129 06:44:37.246814 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-s7xfk_openshift-ovn-kubernetes(6f0c380f-ebc1-482f-9a91-8b08033eadf2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.267414 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.285122 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tdzw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"550bdc9c-0324-4f3c-98df-95fbf1029eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42e6b57814763e9d213b1dae98fbad4d36b326203d58b6542d27010170f6565c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcc9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tdzw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.308746 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kdv64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa65a108-1826-4e74-8e8a-1eae605298f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e804e3c0c839d032b5f1c678a3b0646e1b4792bcc5fac6cbd49dd2cb5bc3209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kdv64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.315827 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.315895 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.315918 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.315948 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.315966 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:37Z","lastTransitionTime":"2026-01-29T06:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.329812 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e40d650-bb6c-4813-9a2a-d59da7bf6e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bf3ca8147e4012c2cb95e4ce01e17baa83040615adfbca0a88c05d446efc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69cfd89c00e517ce856d21fe9f9ab1
014c40a6b7d9237740e75d756007b061d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c432a0b0dd88840a13aec51ffb48857ec3d4c744a84a48852deccb7fc8422ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2797dd065e70892b17073f3cf3b9be36a1e405a85a86b3f52b42d4805db80dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2797dd065e70892b17073f3cf3b9be36a1e405a85a86b3f52b42d4805db80dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.350558 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f93ba8c06281d82ec2584250e6e843d591d364356425831cd6a463234d0a7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.372498 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.390871 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b69cv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3bbd3c9-5a5b-4b55-9742-0fe17ceab252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1589556806e44a6795b7caf7695e584f4d9682e652350d2b4813197b62110119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d9qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b69cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.414943 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e7e9d69-49ad-4177-a388-7d7556ddd380\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c25a1a60e522b5c06e95368ba6cd2727c0b3bb0dc42a8ff78c6afea6d7d5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24eebf52ead4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd8cce464615767426dd78c948b03894eeccf28e4abd93cd66efcbaed2887b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e746868738216535af20d17aab43ad1a4a1228bd78ee2907d46755e361569b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.431044 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.431097 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.431116 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.431142 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.431162 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:37Z","lastTransitionTime":"2026-01-29T06:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.434282 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.459058 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca8a794-1985-4f1b-8651-03cfce7dd20c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://734f84e758ca06a23933bafe67941e9d76b6c40f31a2081ebbcf31aa2ab7ddea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0297
77389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e029777389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:10Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5nvkq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.482953 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\
\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.503360 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.522964 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca127c5fe560e373e07f8aecf0e41b5f18be23bf64a56a99d1f5a8809a53471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T06:44:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.533796 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.533852 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.533870 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.533895 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.533913 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:37Z","lastTransitionTime":"2026-01-29T06:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.540803 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://262e4dafc3b9dfefff3c3f64bcbf34a713ed43c895ef8b11cc6ca240f9348420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adf6fcaa6a5cc342f8af2dbb14c7fbaf0c953f17ee6e8ef3156e57c3893b93f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-llzmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.570390 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0c380f-ebc1-482f-9a91-8b08033eadf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:44:35Z\\\",\\\"message\\\":\\\"-apiserver/apiserver retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{apiserver openshift-kube-apiserver 1c35c6c5-2bb9-4633-8ecb-881a1ff8d2fe 7887 0 2025-02-23 05:33:28 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[] 
map[operator.openshift.io/spec-hash:2787a90499aeabb4cf7acbefa3d43f6c763431fdc60904fdfa1fe74cd04203ee] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.4.93,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.93],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0129 06:44:35.852966 6525 services_controller.go:434] Service default/kubernetes retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kubernetes default 1fcaffea-cfe2-4295-9c2a-a3b3626fb3f1 259 0 2025-02-23 05:11:12 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[component:apiserver provider:kubernetes] map[] [] [] []},Spec:Servic\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-s7xfk_openshift-ovn-kubernetes(6f0c380f-ebc1-482f-9a91-8b08033eadf2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa
822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s7xfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.584831 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5gq6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be51f861-02cd-4b43-8b55-eddc27a15272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a60cd6040fe32abede192b2672e7a3c58bf438bbf9289ada60031ac50d1dd1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd54979536f60b2ac8ad4e665f9d0fb7917ee
a17801ec620dca7e547e16bb2ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5gq6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.599919 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6qxzb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11d649f8-dcd0-4c52-96f1-f5c229546376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4kx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4kx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6qxzb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:37Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:37 crc 
kubenswrapper[4826]: I0129 06:44:37.637085 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.637131 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.637148 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.637172 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.637189 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:37Z","lastTransitionTime":"2026-01-29T06:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.740167 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.740255 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.740281 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.740349 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.740377 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:37Z","lastTransitionTime":"2026-01-29T06:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.770747 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 02:36:43.395292253 +0000 UTC Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.808131 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6qxzb" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.808131 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:44:37 crc kubenswrapper[4826]: E0129 06:44:37.808407 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6qxzb" podUID="11d649f8-dcd0-4c52-96f1-f5c229546376" Jan 29 06:44:37 crc kubenswrapper[4826]: E0129 06:44:37.808482 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.842908 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.842963 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.842988 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.843016 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.843036 4826 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:37Z","lastTransitionTime":"2026-01-29T06:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.945636 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.945688 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.945704 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.945727 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:37 crc kubenswrapper[4826]: I0129 06:44:37.945746 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:37Z","lastTransitionTime":"2026-01-29T06:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.049067 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.049133 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.049150 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.049174 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.049192 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:38Z","lastTransitionTime":"2026-01-29T06:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.152366 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.152418 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.152430 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.152450 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.152464 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:38Z","lastTransitionTime":"2026-01-29T06:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.254435 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.254487 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.254501 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.254519 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.254532 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:38Z","lastTransitionTime":"2026-01-29T06:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.357635 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.357729 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.357742 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.357762 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.357775 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:38Z","lastTransitionTime":"2026-01-29T06:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.460860 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.460913 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.460927 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.460946 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.460961 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:38Z","lastTransitionTime":"2026-01-29T06:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.564005 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.564072 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.564090 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.564429 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.564475 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:38Z","lastTransitionTime":"2026-01-29T06:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.667853 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.667897 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.667905 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.667929 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.667939 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:38Z","lastTransitionTime":"2026-01-29T06:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.770761 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.770833 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.770855 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.770881 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.770901 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:38Z","lastTransitionTime":"2026-01-29T06:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.771006 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 02:52:28.793362166 +0000 UTC Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.808459 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.808467 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:44:38 crc kubenswrapper[4826]: E0129 06:44:38.808653 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:44:38 crc kubenswrapper[4826]: E0129 06:44:38.808724 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.873686 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.873763 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.873781 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.873806 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.873824 4826 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:38Z","lastTransitionTime":"2026-01-29T06:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.980169 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.981389 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.981426 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.981454 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:38 crc kubenswrapper[4826]: I0129 06:44:38.981472 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:38Z","lastTransitionTime":"2026-01-29T06:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.083180 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.083220 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.083233 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.083250 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.083263 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:39Z","lastTransitionTime":"2026-01-29T06:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.191596 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.191646 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.191658 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.191679 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.191694 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:39Z","lastTransitionTime":"2026-01-29T06:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.293611 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.293675 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.293698 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.293730 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.293752 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:39Z","lastTransitionTime":"2026-01-29T06:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.397001 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.397158 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.397362 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.398075 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.399400 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:39Z","lastTransitionTime":"2026-01-29T06:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.503710 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.503766 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.503783 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.503805 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.503822 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:39Z","lastTransitionTime":"2026-01-29T06:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.606372 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.606442 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.606466 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.606493 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.606518 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:39Z","lastTransitionTime":"2026-01-29T06:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.708601 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.708645 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.708662 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.708683 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.708699 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:39Z","lastTransitionTime":"2026-01-29T06:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.772010 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 17:57:53.70728181 +0000 UTC Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.808445 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6qxzb" Jan 29 06:44:39 crc kubenswrapper[4826]: E0129 06:44:39.808635 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6qxzb" podUID="11d649f8-dcd0-4c52-96f1-f5c229546376" Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.809127 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:44:39 crc kubenswrapper[4826]: E0129 06:44:39.809395 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.814665 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.814721 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.814832 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.814872 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.814894 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:39Z","lastTransitionTime":"2026-01-29T06:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.918117 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.918186 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.918207 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.918235 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:39 crc kubenswrapper[4826]: I0129 06:44:39.918253 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:39Z","lastTransitionTime":"2026-01-29T06:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.021324 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.021381 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.021397 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.021425 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.021442 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:40Z","lastTransitionTime":"2026-01-29T06:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.125603 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.125701 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.125723 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.125761 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.125789 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:40Z","lastTransitionTime":"2026-01-29T06:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.228768 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.228831 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.228852 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.228880 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.228897 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:40Z","lastTransitionTime":"2026-01-29T06:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.331469 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.331540 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.331563 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.331593 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.331615 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:40Z","lastTransitionTime":"2026-01-29T06:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.434898 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.434959 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.434976 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.435003 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.435021 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:40Z","lastTransitionTime":"2026-01-29T06:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.537485 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.537539 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.537558 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.537582 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.537600 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:40Z","lastTransitionTime":"2026-01-29T06:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.641723 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.641801 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.641822 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.641849 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.641868 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:40Z","lastTransitionTime":"2026-01-29T06:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.745692 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.745770 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.745792 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.745820 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.745839 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:40Z","lastTransitionTime":"2026-01-29T06:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.772777 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 21:12:45.852402581 +0000 UTC Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.807843 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:44:40 crc kubenswrapper[4826]: E0129 06:44:40.808033 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.809919 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:44:40 crc kubenswrapper[4826]: E0129 06:44:40.810080 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.848850 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.848918 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.848939 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.848968 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.848987 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:40Z","lastTransitionTime":"2026-01-29T06:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.953441 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.953489 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.953506 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.953534 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:40 crc kubenswrapper[4826]: I0129 06:44:40.953552 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:40Z","lastTransitionTime":"2026-01-29T06:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.056498 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.056583 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.056604 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.056631 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.056648 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:41Z","lastTransitionTime":"2026-01-29T06:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.159733 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.159778 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.159796 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.159825 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.159842 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:41Z","lastTransitionTime":"2026-01-29T06:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.262257 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.262317 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.262328 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.262347 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.262357 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:41Z","lastTransitionTime":"2026-01-29T06:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.364311 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.364370 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.364386 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.364412 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.364428 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:41Z","lastTransitionTime":"2026-01-29T06:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.466953 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.467227 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.467244 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.467260 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.467339 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:41Z","lastTransitionTime":"2026-01-29T06:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.571003 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.571089 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.571105 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.571135 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.571157 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:41Z","lastTransitionTime":"2026-01-29T06:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.674323 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.674449 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.674462 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.674486 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.674502 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:41Z","lastTransitionTime":"2026-01-29T06:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.773029 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 05:02:45.911643677 +0000 UTC
Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.777478 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.777545 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.777568 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.777629 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.777656 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:41Z","lastTransitionTime":"2026-01-29T06:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.807651 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6qxzb"
Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.807762 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 06:44:41 crc kubenswrapper[4826]: E0129 06:44:41.807800 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6qxzb" podUID="11d649f8-dcd0-4c52-96f1-f5c229546376"
Jan 29 06:44:41 crc kubenswrapper[4826]: E0129 06:44:41.807957 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.880735 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.880772 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.880782 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.880797 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.880807 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:41Z","lastTransitionTime":"2026-01-29T06:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.983801 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.983839 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.983847 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.983860 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:41 crc kubenswrapper[4826]: I0129 06:44:41.983869 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:41Z","lastTransitionTime":"2026-01-29T06:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.086596 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.086666 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.086690 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.086721 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.086744 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:42Z","lastTransitionTime":"2026-01-29T06:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.189009 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.189058 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.189074 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.189099 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.189117 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:42Z","lastTransitionTime":"2026-01-29T06:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.291810 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.291868 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.291886 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.291910 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.291928 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:42Z","lastTransitionTime":"2026-01-29T06:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.397515 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.397574 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.397591 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.397619 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.397638 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:42Z","lastTransitionTime":"2026-01-29T06:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.499757 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.499805 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.499814 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.499830 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.499840 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:42Z","lastTransitionTime":"2026-01-29T06:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.602810 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.602886 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.602906 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.602933 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.602956 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:42Z","lastTransitionTime":"2026-01-29T06:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.704723 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.704758 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.704768 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.704782 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.704791 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:42Z","lastTransitionTime":"2026-01-29T06:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.773506 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 17:14:37.626588408 +0000 UTC
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.807415 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.807501 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.807523 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.807552 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.807576 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:42Z","lastTransitionTime":"2026-01-29T06:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.807746 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 06:44:42 crc kubenswrapper[4826]: E0129 06:44:42.807837 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.807895 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 06:44:42 crc kubenswrapper[4826]: E0129 06:44:42.808077 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.910795 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.910863 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.910881 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.910911 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:42 crc kubenswrapper[4826]: I0129 06:44:42.910930 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:42Z","lastTransitionTime":"2026-01-29T06:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.013852 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.013898 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.013906 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.013920 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.013930 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:43Z","lastTransitionTime":"2026-01-29T06:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.116513 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.116572 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.116588 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.116610 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.116629 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:43Z","lastTransitionTime":"2026-01-29T06:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.219426 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.219490 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.219508 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.219534 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.219553 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:43Z","lastTransitionTime":"2026-01-29T06:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.321784 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.321882 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.321901 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.321923 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.321942 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:43Z","lastTransitionTime":"2026-01-29T06:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.424345 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.424434 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.424453 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.424487 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.424512 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:43Z","lastTransitionTime":"2026-01-29T06:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.527440 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.527490 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.527507 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.527573 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.527594 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:43Z","lastTransitionTime":"2026-01-29T06:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.630629 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.630682 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.630699 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.630722 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.630739 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:43Z","lastTransitionTime":"2026-01-29T06:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.733670 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.733736 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.733753 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.733781 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.733800 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:43Z","lastTransitionTime":"2026-01-29T06:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.774392 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 20:09:33.578439298 +0000 UTC
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.807712 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6qxzb"
Jan 29 06:44:43 crc kubenswrapper[4826]: E0129 06:44:43.807963 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6qxzb" podUID="11d649f8-dcd0-4c52-96f1-f5c229546376"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.807736 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 06:44:43 crc kubenswrapper[4826]: E0129 06:44:43.808745 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.836029 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.836075 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.836092 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.836116 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.836134 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:43Z","lastTransitionTime":"2026-01-29T06:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.938401 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.938499 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.938524 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.938559 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:43 crc kubenswrapper[4826]: I0129 06:44:43.938584 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:43Z","lastTransitionTime":"2026-01-29T06:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.041825 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.041888 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.041906 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.041932 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.041950 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:44Z","lastTransitionTime":"2026-01-29T06:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.145770 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.146026 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.146101 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.146207 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.146313 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:44Z","lastTransitionTime":"2026-01-29T06:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.249579 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.249637 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.249654 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.249680 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.249697 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:44Z","lastTransitionTime":"2026-01-29T06:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.352638 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.352702 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.352718 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.352744 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.352762 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:44Z","lastTransitionTime":"2026-01-29T06:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.451058 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.451117 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.451140 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.451172 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.451196 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:44Z","lastTransitionTime":"2026-01-29T06:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:44 crc kubenswrapper[4826]: E0129 06:44:44.463516 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"905e3489-492d-4437-968b-82f79ce0edd7\\\",\\\"systemUUID\\\":\\\"8978a1d2-9b20-4f3c-a5b9-0aed7eb7584e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:44Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.467188 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.467211 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.467222 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.467239 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.467251 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:44Z","lastTransitionTime":"2026-01-29T06:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:44 crc kubenswrapper[4826]: E0129 06:44:44.479279 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"905e3489-492d-4437-968b-82f79ce0edd7\\\",\\\"systemUUID\\\":\\\"8978a1d2-9b20-4f3c-a5b9-0aed7eb7584e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:44Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.482779 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.482881 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.482938 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.483001 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.483059 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:44Z","lastTransitionTime":"2026-01-29T06:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:44 crc kubenswrapper[4826]: E0129 06:44:44.494638 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"905e3489-492d-4437-968b-82f79ce0edd7\\\",\\\"systemUUID\\\":\\\"8978a1d2-9b20-4f3c-a5b9-0aed7eb7584e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:44Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.497660 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.497687 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.497696 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.497711 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.497721 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:44Z","lastTransitionTime":"2026-01-29T06:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:44 crc kubenswrapper[4826]: E0129 06:44:44.510275 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"905e3489-492d-4437-968b-82f79ce0edd7\\\",\\\"systemUUID\\\":\\\"8978a1d2-9b20-4f3c-a5b9-0aed7eb7584e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:44Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.513807 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.513847 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.513859 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.513876 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.513888 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:44Z","lastTransitionTime":"2026-01-29T06:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:44 crc kubenswrapper[4826]: E0129 06:44:44.528463 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"905e3489-492d-4437-968b-82f79ce0edd7\\\",\\\"systemUUID\\\":\\\"8978a1d2-9b20-4f3c-a5b9-0aed7eb7584e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:44Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:44 crc kubenswrapper[4826]: E0129 06:44:44.528718 4826 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.530074 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.530116 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.530134 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.530153 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.530166 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:44Z","lastTransitionTime":"2026-01-29T06:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.632289 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.632381 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.632405 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.632426 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.632444 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:44Z","lastTransitionTime":"2026-01-29T06:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.734502 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.734563 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.734586 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.734613 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.734636 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:44Z","lastTransitionTime":"2026-01-29T06:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.775362 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 21:04:28.527037145 +0000 UTC Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.808760 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.808760 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:44:44 crc kubenswrapper[4826]: E0129 06:44:44.808870 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:44:44 crc kubenswrapper[4826]: E0129 06:44:44.808942 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.808760 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6qxzb" Jan 29 06:44:44 crc kubenswrapper[4826]: E0129 06:44:44.809050 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6qxzb" podUID="11d649f8-dcd0-4c52-96f1-f5c229546376" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.836399 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.836441 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.836450 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.836467 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.836477 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:44Z","lastTransitionTime":"2026-01-29T06:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.938819 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.938885 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.938901 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.938930 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:44 crc kubenswrapper[4826]: I0129 06:44:44.938948 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:44Z","lastTransitionTime":"2026-01-29T06:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.041885 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.042290 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.042466 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.042642 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.042788 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:45Z","lastTransitionTime":"2026-01-29T06:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.146278 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.146363 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.146383 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.146433 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.146455 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:45Z","lastTransitionTime":"2026-01-29T06:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.248990 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.249218 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.249367 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.249518 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.249701 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:45Z","lastTransitionTime":"2026-01-29T06:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.352350 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.352391 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.352400 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.352415 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.352425 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:45Z","lastTransitionTime":"2026-01-29T06:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.455538 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.455637 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.455655 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.455683 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.455702 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:45Z","lastTransitionTime":"2026-01-29T06:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.558486 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.558529 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.558540 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.558560 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.558576 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:45Z","lastTransitionTime":"2026-01-29T06:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.660430 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.660477 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.660488 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.660506 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.660519 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:45Z","lastTransitionTime":"2026-01-29T06:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.763071 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.763118 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.763134 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.763154 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.763170 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:45Z","lastTransitionTime":"2026-01-29T06:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.776473 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 04:00:10.968064143 +0000 UTC Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.807951 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:44:45 crc kubenswrapper[4826]: E0129 06:44:45.808142 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.865565 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.865622 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.865642 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.865667 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.865684 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:45Z","lastTransitionTime":"2026-01-29T06:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.968563 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.968622 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.968639 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.968662 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:45 crc kubenswrapper[4826]: I0129 06:44:45.968679 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:45Z","lastTransitionTime":"2026-01-29T06:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.072452 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.072501 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.072516 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.072539 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.072555 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:46Z","lastTransitionTime":"2026-01-29T06:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.175762 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.175794 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.175804 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.175818 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.175827 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:46Z","lastTransitionTime":"2026-01-29T06:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.277134 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.277229 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.277250 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.277276 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.277294 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:46Z","lastTransitionTime":"2026-01-29T06:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.378873 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.378905 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.378914 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.378928 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.378939 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:46Z","lastTransitionTime":"2026-01-29T06:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.481476 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.481540 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.481560 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.481586 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.481603 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:46Z","lastTransitionTime":"2026-01-29T06:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.584760 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.584837 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.584861 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.584898 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.584922 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:46Z","lastTransitionTime":"2026-01-29T06:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.687697 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.687737 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.687747 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.687763 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.687773 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:46Z","lastTransitionTime":"2026-01-29T06:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.776838 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 07:35:11.264172696 +0000 UTC Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.790127 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.790162 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.790172 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.790187 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.790197 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:46Z","lastTransitionTime":"2026-01-29T06:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.808464 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6qxzb" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.808539 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:44:46 crc kubenswrapper[4826]: E0129 06:44:46.808561 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6qxzb" podUID="11d649f8-dcd0-4c52-96f1-f5c229546376" Jan 29 06:44:46 crc kubenswrapper[4826]: E0129 06:44:46.808752 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.808811 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:44:46 crc kubenswrapper[4826]: E0129 06:44:46.809713 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.828112 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/opensh
ift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\
\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:46Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.844802 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:46Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.856381 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca127c5fe560e373e07f8aecf0e41b5f18be23bf64a56a99d1f5a8809a53471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T06:44:46Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.874566 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://262e4dafc3b9dfefff3c3f64bcbf34a713ed43c895ef8b11cc6ca240f9348420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adf6fcaa6a5cc342f8af2dbb14c7fbaf0c953f17ee6e8ef3156e57c3893b93f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-llzmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:46Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.893365 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 
06:44:46.893419 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.893437 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.893461 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.893478 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:46Z","lastTransitionTime":"2026-01-29T06:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.894799 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0c380f-ebc1-482f-9a91-8b08033eadf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:44:35Z\\\",\\\"message\\\":\\\"-apiserver/apiserver retrieved from lister for network=default: 
\\\\u0026Service{ObjectMeta:{apiserver openshift-kube-apiserver 1c35c6c5-2bb9-4633-8ecb-881a1ff8d2fe 7887 0 2025-02-23 05:33:28 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[] map[operator.openshift.io/spec-hash:2787a90499aeabb4cf7acbefa3d43f6c763431fdc60904fdfa1fe74cd04203ee] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.4.93,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.93],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0129 06:44:35.852966 6525 services_controller.go:434] Service default/kubernetes retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kubernetes default 1fcaffea-cfe2-4295-9c2a-a3b3626fb3f1 259 0 2025-02-23 05:11:12 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[component:apiserver provider:kubernetes] map[] [] [] []},Spec:Servic\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-s7xfk_openshift-ovn-kubernetes(6f0c380f-ebc1-482f-9a91-8b08033eadf2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa
822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s7xfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:46Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.907717 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5gq6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be51f861-02cd-4b43-8b55-eddc27a15272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a60cd6040fe32abede192b2672e7a3c58bf438bbf9289ada60031ac50d1dd1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd54979536f60b2ac8ad4e665f9d0fb7917ee
a17801ec620dca7e547e16bb2ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5gq6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:46Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.919536 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6qxzb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11d649f8-dcd0-4c52-96f1-f5c229546376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4kx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4kx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6qxzb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:46Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:46 crc 
kubenswrapper[4826]: I0129 06:44:46.931503 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:46Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.943195 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tdzw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"550bdc9c-0324-4f3c-98df-95fbf1029eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42e6b57814763e9d213b1dae98fbad4d36b326203d58b6542d27010170f6565c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcc9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tdzw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:46Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.955914 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kdv64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa65a108-1826-4e74-8e8a-1eae605298f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e804e3c0c839d032b5f1c678a3b0646e1b4792bcc5fac6cbd49dd2cb5bc3209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kdv64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:46Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.968514 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e40d650-bb6c-4813-9a2a-d59da7bf6e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bf3ca8147e4012c2cb95e4ce01e17baa83040615adfbca0a88c05d446efc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69cfd89c00e517ce856d21fe9f9ab1014c40a6b7d9237740e75d756007b061d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c432a0b0dd88840a13aec51ffb48857ec3d4c744a84a48852deccb7fc8422ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostI
P\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2797dd065e70892b17073f3cf3b9be36a1e405a85a86b3f52b42d4805db80dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2797dd065e70892b17073f3cf3b9be36a1e405a85a86b3f52b42d4805db80dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:46Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.982495 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f93ba8c06281d82ec2584250e6e843d591d364356425831cd6a463234d0a7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:46Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.993821 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:46Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.996464 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.996602 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.996668 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.996687 4826 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:46 crc kubenswrapper[4826]: I0129 06:44:46.996699 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:46Z","lastTransitionTime":"2026-01-29T06:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.004666 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b69cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3bbd3c9-5a5b-4b55-9742-0fe17ceab252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1589556806e44a6795b7caf7695e584f4d9682e652350d2b4813197b62110119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d9qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b69cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:47Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.015564 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e7e9d69-49ad-4177-a388-7d7556ddd380\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c25a1a60e522b5c06e95368ba6cd2727c0b3bb0dc42a8ff78c6afea6d7d5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24eebf52ead4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd8cce464615767426dd78c948b03894eeccf28e4abd93cd66efcbaed2887b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e746868738216535af20d17aab43ad1a4a1228bd78ee2907d46755e361569b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:47Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.026869 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:47Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.039510 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca8a794-1985-4f1b-8651-03cfce7dd20c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://734f84e758ca06a23933bafe67941e9d76b6c40f31a2081ebbcf31aa2ab7ddea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0297
77389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e029777389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:10Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5nvkq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:47Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.099573 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.099610 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.099626 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.099647 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.099663 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:47Z","lastTransitionTime":"2026-01-29T06:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.202944 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.203022 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.203050 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.203083 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.203110 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:47Z","lastTransitionTime":"2026-01-29T06:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.305828 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.305926 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.305939 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.305957 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.305968 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:47Z","lastTransitionTime":"2026-01-29T06:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.408223 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.408271 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.408283 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.408325 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.408341 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:47Z","lastTransitionTime":"2026-01-29T06:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.511758 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.511812 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.511829 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.511852 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.511868 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:47Z","lastTransitionTime":"2026-01-29T06:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.613972 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.614025 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.614044 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.614067 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.614083 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:47Z","lastTransitionTime":"2026-01-29T06:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.716675 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.716759 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.716785 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.716817 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.716843 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:47Z","lastTransitionTime":"2026-01-29T06:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.777378 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 18:15:54.770153725 +0000 UTC Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.807965 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:44:47 crc kubenswrapper[4826]: E0129 06:44:47.808181 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.819688 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.819772 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.819782 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.819795 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.819834 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:47Z","lastTransitionTime":"2026-01-29T06:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.922735 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.922781 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.922799 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.922822 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:47 crc kubenswrapper[4826]: I0129 06:44:47.922841 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:47Z","lastTransitionTime":"2026-01-29T06:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.025835 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.025875 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.025885 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.025899 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.025907 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:48Z","lastTransitionTime":"2026-01-29T06:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.128471 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.128659 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.128793 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.128930 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.128957 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:48Z","lastTransitionTime":"2026-01-29T06:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.231024 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.231055 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.231063 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.231076 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.231086 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:48Z","lastTransitionTime":"2026-01-29T06:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.335163 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.335220 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.335236 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.335257 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.335270 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:48Z","lastTransitionTime":"2026-01-29T06:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.438446 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.438499 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.438514 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.438533 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.438547 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:48Z","lastTransitionTime":"2026-01-29T06:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.542272 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.542367 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.542385 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.542412 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.542431 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:48Z","lastTransitionTime":"2026-01-29T06:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.645436 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.645480 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.645489 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.645504 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.645514 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:48Z","lastTransitionTime":"2026-01-29T06:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.748125 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.748198 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.748217 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.748243 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.748262 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:48Z","lastTransitionTime":"2026-01-29T06:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.777756 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 04:08:08.184244954 +0000 UTC Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.808394 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6qxzb" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.808452 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:44:48 crc kubenswrapper[4826]: E0129 06:44:48.808537 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6qxzb" podUID="11d649f8-dcd0-4c52-96f1-f5c229546376" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.808402 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:44:48 crc kubenswrapper[4826]: E0129 06:44:48.808655 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:44:48 crc kubenswrapper[4826]: E0129 06:44:48.808817 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.850110 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.850173 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.850190 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.850214 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.850230 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:48Z","lastTransitionTime":"2026-01-29T06:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.952078 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.952115 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.952124 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.952139 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:48 crc kubenswrapper[4826]: I0129 06:44:48.952153 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:48Z","lastTransitionTime":"2026-01-29T06:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.055109 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.055188 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.055206 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.055235 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.055261 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:49Z","lastTransitionTime":"2026-01-29T06:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.158051 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.158111 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.158131 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.158159 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.158178 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:49Z","lastTransitionTime":"2026-01-29T06:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.261162 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.261240 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.261262 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.261285 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.261331 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:49Z","lastTransitionTime":"2026-01-29T06:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.364284 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.364384 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.364408 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.364439 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.364461 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:49Z","lastTransitionTime":"2026-01-29T06:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.467109 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.467154 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.467163 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.467178 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.467187 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:49Z","lastTransitionTime":"2026-01-29T06:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.569622 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.569670 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.569682 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.569698 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.569708 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:49Z","lastTransitionTime":"2026-01-29T06:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.672878 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.672936 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.672952 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.672973 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.672991 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:49Z","lastTransitionTime":"2026-01-29T06:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.776609 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.776676 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.776696 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.776724 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.776743 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:49Z","lastTransitionTime":"2026-01-29T06:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.778766 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 05:12:44.975974964 +0000 UTC Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.790761 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11d649f8-dcd0-4c52-96f1-f5c229546376-metrics-certs\") pod \"network-metrics-daemon-6qxzb\" (UID: \"11d649f8-dcd0-4c52-96f1-f5c229546376\") " pod="openshift-multus/network-metrics-daemon-6qxzb" Jan 29 06:44:49 crc kubenswrapper[4826]: E0129 06:44:49.790945 4826 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 06:44:49 crc kubenswrapper[4826]: E0129 06:44:49.791019 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11d649f8-dcd0-4c52-96f1-f5c229546376-metrics-certs podName:11d649f8-dcd0-4c52-96f1-f5c229546376 nodeName:}" failed. No retries permitted until 2026-01-29 06:45:21.791002529 +0000 UTC m=+105.652795658 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/11d649f8-dcd0-4c52-96f1-f5c229546376-metrics-certs") pod "network-metrics-daemon-6qxzb" (UID: "11d649f8-dcd0-4c52-96f1-f5c229546376") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.807926 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:44:49 crc kubenswrapper[4826]: E0129 06:44:49.808055 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.817593 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.879518 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.879568 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.879579 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.879596 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.879607 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:49Z","lastTransitionTime":"2026-01-29T06:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.982519 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.982578 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.982590 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.982611 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:49 crc kubenswrapper[4826]: I0129 06:44:49.982624 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:49Z","lastTransitionTime":"2026-01-29T06:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.085008 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.085055 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.085067 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.085082 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.085092 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:50Z","lastTransitionTime":"2026-01-29T06:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.188392 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.188456 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.188473 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.188497 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.188515 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:50Z","lastTransitionTime":"2026-01-29T06:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.291020 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.291079 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.291097 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.291122 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.291142 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:50Z","lastTransitionTime":"2026-01-29T06:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.394378 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.394473 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.394491 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.394515 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.394532 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:50Z","lastTransitionTime":"2026-01-29T06:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.498773 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.498847 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.498867 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.498895 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.498916 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:50Z","lastTransitionTime":"2026-01-29T06:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.601979 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.602052 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.602074 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.602104 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.602128 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:50Z","lastTransitionTime":"2026-01-29T06:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.704881 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.704929 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.704938 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.704954 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.704966 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:50Z","lastTransitionTime":"2026-01-29T06:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.779645 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 15:46:49.303744639 +0000 UTC Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.807726 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.807852 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:44:50 crc kubenswrapper[4826]: E0129 06:44:50.807913 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.807738 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6qxzb" Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.808107 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.808170 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.808189 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.808215 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:50 crc kubenswrapper[4826]: E0129 06:44:50.808207 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.808234 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:50Z","lastTransitionTime":"2026-01-29T06:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:50 crc kubenswrapper[4826]: E0129 06:44:50.808280 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6qxzb" podUID="11d649f8-dcd0-4c52-96f1-f5c229546376" Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.809542 4826 scope.go:117] "RemoveContainer" containerID="08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb" Jan 29 06:44:50 crc kubenswrapper[4826]: E0129 06:44:50.809790 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-s7xfk_openshift-ovn-kubernetes(6f0c380f-ebc1-482f-9a91-8b08033eadf2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.910544 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.910954 4826 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.910973 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.910998 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:50 crc kubenswrapper[4826]: I0129 06:44:50.911016 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:50Z","lastTransitionTime":"2026-01-29T06:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.013928 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.014000 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.014025 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.014062 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.014088 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:51Z","lastTransitionTime":"2026-01-29T06:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.116955 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.117026 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.117049 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.117085 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.117107 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:51Z","lastTransitionTime":"2026-01-29T06:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.221880 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.221959 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.221993 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.222026 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.222061 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:51Z","lastTransitionTime":"2026-01-29T06:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.288427 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kdv64_fa65a108-1826-4e74-8e8a-1eae605298f3/kube-multus/0.log" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.288533 4826 generic.go:334] "Generic (PLEG): container finished" podID="fa65a108-1826-4e74-8e8a-1eae605298f3" containerID="7e804e3c0c839d032b5f1c678a3b0646e1b4792bcc5fac6cbd49dd2cb5bc3209" exitCode=1 Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.288598 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kdv64" event={"ID":"fa65a108-1826-4e74-8e8a-1eae605298f3","Type":"ContainerDied","Data":"7e804e3c0c839d032b5f1c678a3b0646e1b4792bcc5fac6cbd49dd2cb5bc3209"} Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.289251 4826 scope.go:117] "RemoveContainer" containerID="7e804e3c0c839d032b5f1c678a3b0646e1b4792bcc5fac6cbd49dd2cb5bc3209" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.302707 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e7e9d69-49ad-4177-a388-7d7556ddd380\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c25a1a60e522b5c06e95368ba6cd2727c0b3bb0dc42a8ff78c6afea6d7d5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24eebf52ead4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd8cce464615767426dd78c948b03894eeccf28e4abd93cd66efcbaed2887b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e746868738216535af20d17aab43ad1a4a1228bd78ee2907d46755e361569b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:51Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.315642 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:51Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.326279 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.326430 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.326462 4826 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.326518 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.326547 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:51Z","lastTransitionTime":"2026-01-29T06:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.331341 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca8a794-1985-4f1b-8651-03cfce7dd20c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://734f84e758ca06a23933
bafe67941e9d76b6c40f31a2081ebbcf31aa2ab7ddea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64
d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e029777389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e029777389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5nvkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:51Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.346217 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6qxzb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11d649f8-dcd0-4c52-96f1-f5c229546376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4kx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4kx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6qxzb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:51Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:51 crc 
kubenswrapper[4826]: I0129 06:44:51.361446 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e2
0eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/k
ube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:51Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.381053 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:51Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.393670 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca127c5fe560e373e07f8aecf0e41b5f18be23bf64a56a99d1f5a8809a53471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T06:44:51Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.409906 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://262e4dafc3b9dfefff3c3f64bcbf34a713ed43c895ef8b11cc6ca240f9348420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adf6fcaa6a5cc342f8af2dbb14c7fbaf0c953f17ee6e8ef3156e57c3893b93f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-llzmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:51Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.434728 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0c380f-ebc1-482f-9a91-8b08033eadf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:44:35Z\\\",\\\"message\\\":\\\"-apiserver/apiserver retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{apiserver openshift-kube-apiserver 1c35c6c5-2bb9-4633-8ecb-881a1ff8d2fe 7887 0 2025-02-23 05:33:28 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[] 
map[operator.openshift.io/spec-hash:2787a90499aeabb4cf7acbefa3d43f6c763431fdc60904fdfa1fe74cd04203ee] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.4.93,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.93],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0129 06:44:35.852966 6525 services_controller.go:434] Service default/kubernetes retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kubernetes default 1fcaffea-cfe2-4295-9c2a-a3b3626fb3f1 259 0 2025-02-23 05:11:12 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[component:apiserver provider:kubernetes] map[] [] [] []},Spec:Servic\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-s7xfk_openshift-ovn-kubernetes(6f0c380f-ebc1-482f-9a91-8b08033eadf2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa
822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s7xfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:51Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.435058 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.435782 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.436445 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.436513 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.436537 4826 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:51Z","lastTransitionTime":"2026-01-29T06:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.451370 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5gq6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be51f861-02cd-4b43-8b55-eddc27a15272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a60cd6040fe32abede192b2672e7a3c58bf438bbf9289ada60031ac50d1dd1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd54979536f60b2ac8ad4e665f9d0fb7917eea17801ec620dca7e547e16bb2ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5gq6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:51Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.468081 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc033d51-31f2-4bfb-8b96-c2de7bc1e560\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1ef5698c2a7ab7f9271b23d26588313d4efdf9866dff2e5dfbe495de8ad6da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPa
th\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac5d94df68e0024eb8a1bce853c4f7c2c7da0b4e59b79b7ad1fefff060ac3c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac5d94df68e0024eb8a1bce853c4f7c2c7da0b4e59b79b7ad1fefff060ac3c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:51Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.483483 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:51Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.498958 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tdzw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"550bdc9c-0324-4f3c-98df-95fbf1029eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42e6b57814763e9d213b1dae98fbad4d36b326203d58b6542d27010170f6565c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcc9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tdzw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:51Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.516587 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kdv64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa65a108-1826-4e74-8e8a-1eae605298f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e804e3c0c839d032b5f1c678a3b0646e1b4792bcc5fac6cbd49dd2cb5bc3209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e804e3c0c839d032b5f1c678a3b0646e1b4792bcc5fac6cbd49dd2cb5bc3209\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:44:51Z\\\",\\\"message\\\":\\\"2026-01-29T06:44:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_56b8e8ae-ebdf-4490-8819-a93a4ba2080c\\\\n2026-01-29T06:44:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_56b8e8ae-ebdf-4490-8819-a93a4ba2080c to /host/opt/cni/bin/\\\\n2026-01-29T06:44:06Z [verbose] multus-daemon started\\\\n2026-01-29T06:44:06Z [verbose] Readiness Indicator file check\\\\n2026-01-29T06:44:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kdv64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:51Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.534546 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e40d650-bb6c-4813-9a2a-d59da7bf6e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bf3ca8147e4012c2cb95e4ce01e17baa83040615adfbca0a88c05d446efc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69cfd89c00e517ce856d21fe9f9ab1014c40a6b7d9237740e75d756007b061d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c432a0b0dd88840a13aec51ffb48857ec3d4c744a84a48852deccb7fc8422ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\"
:\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2797dd065e70892b17073f3cf3b9be36a1e405a85a86b3f52b42d4805db80dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2797dd065e70892b17073f3cf3b9be36a1e405a85a86b3f52b42d4805db80dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:51Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.544720 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.544814 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.544839 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.544876 4826 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.544902 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:51Z","lastTransitionTime":"2026-01-29T06:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.559231 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f93ba8c06281d82ec2584250e6e843d591d364356425831cd6a463234d0a7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:51Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.573988 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:51Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.589417 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b69cv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3bbd3c9-5a5b-4b55-9742-0fe17ceab252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1589556806e44a6795b7caf7695e584f4d9682e652350d2b4813197b62110119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d9qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b69cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:51Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.647866 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.647960 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.647985 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.648025 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.648049 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:51Z","lastTransitionTime":"2026-01-29T06:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.750866 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.751609 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.751723 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.751834 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.751922 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:51Z","lastTransitionTime":"2026-01-29T06:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.780508 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 15:15:48.097453986 +0000 UTC Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.808154 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:44:51 crc kubenswrapper[4826]: E0129 06:44:51.808373 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.855899 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.855972 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.855991 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.856018 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.856036 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:51Z","lastTransitionTime":"2026-01-29T06:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.959652 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.959770 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.959798 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.959838 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:51 crc kubenswrapper[4826]: I0129 06:44:51.959864 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:51Z","lastTransitionTime":"2026-01-29T06:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.062605 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.062670 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.062690 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.062719 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.062740 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:52Z","lastTransitionTime":"2026-01-29T06:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.165692 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.165756 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.165773 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.165794 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.165812 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:52Z","lastTransitionTime":"2026-01-29T06:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.269562 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.269670 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.269698 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.269728 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.269746 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:52Z","lastTransitionTime":"2026-01-29T06:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.296051 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kdv64_fa65a108-1826-4e74-8e8a-1eae605298f3/kube-multus/0.log" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.296196 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kdv64" event={"ID":"fa65a108-1826-4e74-8e8a-1eae605298f3","Type":"ContainerStarted","Data":"68861c1d0c499dea2e10366881f21ddfe8325202fa0e7a18c8162c45279ed5eb"} Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.312983 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:52Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.326340 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tdzw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"550bdc9c-0324-4f3c-98df-95fbf1029eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42e6b57814763e9d213b1dae98fbad4d36b326203d58b6542d27010170f6565c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcc9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tdzw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:52Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.346945 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kdv64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa65a108-1826-4e74-8e8a-1eae605298f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68861c1d0c499dea2e10366881f21ddfe8325202fa0e7a18c8162c45279ed5eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e804e3c0c839d032b5f1c678a3b0646e1b4792bcc5fac6cbd49dd2cb5bc3209\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:44:51Z\\\",\\\"message\\\":\\\"2026-01-29T06:44:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_56b8e8ae-ebdf-4490-8819-a93a4ba2080c\\\\n2026-01-29T06:44:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_56b8e8ae-ebdf-4490-8819-a93a4ba2080c to /host/opt/cni/bin/\\\\n2026-01-29T06:44:06Z [verbose] multus-daemon started\\\\n2026-01-29T06:44:06Z [verbose] Readiness Indicator file check\\\\n2026-01-29T06:44:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kdv64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:52Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.363652 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc033d51-31f2-4bfb-8b96-c2de7bc1e560\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1ef5698c2a7ab7f9271b23d26588313d4efdf9866dff2e5dfbe495de8ad6da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac5d94df68e0024eb8a1bce853c4f7c2c7da0b4e59b79b7ad1fefff060ac3c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac5d94df68e0024eb8a1bce853c4f7c2c7da0b4e59b79b7ad1fefff060ac3c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:52Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.374005 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.374221 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.374476 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.374694 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.374889 4826 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:52Z","lastTransitionTime":"2026-01-29T06:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.383119 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f93ba8c06281d82ec2584250e6e843d591d364356425831cd6a463234d0a7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:52Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.401272 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453
265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-29T06:44:52Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.419247 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b69cv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3bbd3c9-5a5b-4b55-9742-0fe17ceab252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1589556806e44a6795b7caf7695e584f4d9682e652350d2b4813197b62110119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/sec
rets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d9qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b69cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:52Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.439141 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e40d650-bb6c-4813-9a2a-d59da7bf6e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bf3ca8147e4012c2cb95e4ce01e17baa83040615adfbca0a88c05d446efc555\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69cfd89c00e517ce856d21fe9f9ab1014c40a6b7d9237740e75d756007b061d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c432a0b0dd88840a13aec51ffb48857ec3d4c744a84a48852deccb7fc8422ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2797dd065e70892b17073f3cf3b9be36a1e405a85a86b3f52b42d4805db80dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2797dd065e70892b17073f3cf3b9be36a1e405a85a86b3f52b42d4805db80dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:52Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.460221 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e7e9d69-49ad-4177-a388-7d7556ddd380\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c25a1a60e522b5c06e95368ba6cd2727c0b3bb0dc42a8ff78c6afea6d7d5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24eebf52ead4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd8cce464615767426dd78c948b03894eeccf28e4abd93cd66efcbaed2887b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e746868738216535af20d17aab43ad1a4a1228bd78ee2907d46755e361569b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:52Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.479175 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:52Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.480699 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.480775 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.480796 4826 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.480828 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.480847 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:52Z","lastTransitionTime":"2026-01-29T06:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.511657 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca8a794-1985-4f1b-8651-03cfce7dd20c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://734f84e758ca06a23933
bafe67941e9d76b6c40f31a2081ebbcf31aa2ab7ddea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64
d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e029777389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e029777389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5nvkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:52Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.532771 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:52Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.551152 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca127c5fe560e373e07f8aecf0e41b5f18be23bf64a56a99d1f5a8809a53471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T06:44:52Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.568730 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://262e4dafc3b9dfefff3c3f64bcbf34a713ed43c895ef8b11cc6ca240f9348420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adf6fcaa6a5cc342f8af2dbb14c7fbaf0c953f17ee6e8ef3156e57c3893b93f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-llzmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:52Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.585373 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 
06:44:52.585422 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.585440 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.585466 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.585486 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:52Z","lastTransitionTime":"2026-01-29T06:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.597749 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0c380f-ebc1-482f-9a91-8b08033eadf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:44:35Z\\\",\\\"message\\\":\\\"-apiserver/apiserver retrieved from lister for network=default: 
\\\\u0026Service{ObjectMeta:{apiserver openshift-kube-apiserver 1c35c6c5-2bb9-4633-8ecb-881a1ff8d2fe 7887 0 2025-02-23 05:33:28 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[] map[operator.openshift.io/spec-hash:2787a90499aeabb4cf7acbefa3d43f6c763431fdc60904fdfa1fe74cd04203ee] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.4.93,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.93],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0129 06:44:35.852966 6525 services_controller.go:434] Service default/kubernetes retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kubernetes default 1fcaffea-cfe2-4295-9c2a-a3b3626fb3f1 259 0 2025-02-23 05:11:12 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[component:apiserver provider:kubernetes] map[] [] [] []},Spec:Servic\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-s7xfk_openshift-ovn-kubernetes(6f0c380f-ebc1-482f-9a91-8b08033eadf2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa
822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s7xfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:52Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.612131 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5gq6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be51f861-02cd-4b43-8b55-eddc27a15272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a60cd6040fe32abede192b2672e7a3c58bf438bbf9289ada60031ac50d1dd1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd54979536f60b2ac8ad4e665f9d0fb7917ee
a17801ec620dca7e547e16bb2ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5gq6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:52Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.628747 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6qxzb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11d649f8-dcd0-4c52-96f1-f5c229546376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4kx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4kx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6qxzb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:52Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:52 crc 
kubenswrapper[4826]: I0129 06:44:52.649483 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e2
0eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/k
ube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:52Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.688638 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.688678 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.688694 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.688716 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.688733 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:52Z","lastTransitionTime":"2026-01-29T06:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.781419 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 11:48:40.339991001 +0000 UTC
Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.792841 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.792884 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.792900 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.792924 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.792941 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:52Z","lastTransitionTime":"2026-01-29T06:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.808149 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 06:44:52 crc kubenswrapper[4826]: E0129 06:44:52.808329 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.808563 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 06:44:52 crc kubenswrapper[4826]: E0129 06:44:52.808673 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.813000 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6qxzb"
Jan 29 06:44:52 crc kubenswrapper[4826]: E0129 06:44:52.813134 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6qxzb" podUID="11d649f8-dcd0-4c52-96f1-f5c229546376"
Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.895863 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.895903 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.895920 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.895939 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.895956 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:52Z","lastTransitionTime":"2026-01-29T06:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.999102 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.999145 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.999161 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.999181 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:52 crc kubenswrapper[4826]: I0129 06:44:52.999197 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:52Z","lastTransitionTime":"2026-01-29T06:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.102126 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.102193 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.102216 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.102245 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.102263 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:53Z","lastTransitionTime":"2026-01-29T06:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.205753 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.205816 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.205834 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.205858 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.205880 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:53Z","lastTransitionTime":"2026-01-29T06:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.308743 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.308821 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.308849 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.308882 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.308908 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:53Z","lastTransitionTime":"2026-01-29T06:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.411857 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.411924 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.411942 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.411967 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.411985 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:53Z","lastTransitionTime":"2026-01-29T06:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.514274 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.514356 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.514372 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.514397 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.514414 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:53Z","lastTransitionTime":"2026-01-29T06:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.617790 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.617854 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.617872 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.617899 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.617917 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:53Z","lastTransitionTime":"2026-01-29T06:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.721366 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.721430 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.721447 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.721472 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.721490 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:53Z","lastTransitionTime":"2026-01-29T06:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.782496 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 01:56:57.198974731 +0000 UTC
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.808137 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 06:44:53 crc kubenswrapper[4826]: E0129 06:44:53.808341 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.847715 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.847767 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.847784 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.847807 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.847827 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:53Z","lastTransitionTime":"2026-01-29T06:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.950494 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.950570 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.950594 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.950618 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:53 crc kubenswrapper[4826]: I0129 06:44:53.950639 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:53Z","lastTransitionTime":"2026-01-29T06:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.054113 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.054181 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.054202 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.054227 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.054246 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:54Z","lastTransitionTime":"2026-01-29T06:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.156962 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.157022 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.157039 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.157066 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.157083 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:54Z","lastTransitionTime":"2026-01-29T06:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.260336 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.260431 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.260449 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.260475 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.260493 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:54Z","lastTransitionTime":"2026-01-29T06:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.363990 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.364060 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.364079 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.364105 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.364123 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:54Z","lastTransitionTime":"2026-01-29T06:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.467515 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.467575 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.467592 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.467615 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.467632 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:54Z","lastTransitionTime":"2026-01-29T06:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.570595 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.570656 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.570674 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.570700 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.570717 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:54Z","lastTransitionTime":"2026-01-29T06:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.594003 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.594085 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.594103 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.594131 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.594150 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:54Z","lastTransitionTime":"2026-01-29T06:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 06:44:54 crc kubenswrapper[4826]: E0129 06:44:54.617107 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"905e3489-492d-4437-968b-82f79ce0edd7\\\",\\\"systemUUID\\\":\\\"8978a1d2-9b20-4f3c-a5b9-0aed7eb7584e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:54Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.622555 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.622613 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.622631 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.622657 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.622675 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:54Z","lastTransitionTime":"2026-01-29T06:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:54 crc kubenswrapper[4826]: E0129 06:44:54.644517 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"905e3489-492d-4437-968b-82f79ce0edd7\\\",\\\"systemUUID\\\":\\\"8978a1d2-9b20-4f3c-a5b9-0aed7eb7584e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:54Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.649589 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.649644 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.649661 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.649682 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.649698 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:54Z","lastTransitionTime":"2026-01-29T06:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:54 crc kubenswrapper[4826]: E0129 06:44:54.669657 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"905e3489-492d-4437-968b-82f79ce0edd7\\\",\\\"systemUUID\\\":\\\"8978a1d2-9b20-4f3c-a5b9-0aed7eb7584e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:54Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.675468 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.675539 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.675561 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.675590 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.675610 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:54Z","lastTransitionTime":"2026-01-29T06:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:54 crc kubenswrapper[4826]: E0129 06:44:54.695810 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"905e3489-492d-4437-968b-82f79ce0edd7\\\",\\\"systemUUID\\\":\\\"8978a1d2-9b20-4f3c-a5b9-0aed7eb7584e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:54Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.700383 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.700459 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.700483 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.700511 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.700530 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:54Z","lastTransitionTime":"2026-01-29T06:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:54 crc kubenswrapper[4826]: E0129 06:44:54.720343 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:44:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"905e3489-492d-4437-968b-82f79ce0edd7\\\",\\\"systemUUID\\\":\\\"8978a1d2-9b20-4f3c-a5b9-0aed7eb7584e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:54Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:54 crc kubenswrapper[4826]: E0129 06:44:54.720585 4826 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.722638 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.722707 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.722726 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.722754 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.722775 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:54Z","lastTransitionTime":"2026-01-29T06:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.783610 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 23:33:40.930959895 +0000 UTC Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.808384 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6qxzb" Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.808441 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.808511 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:44:54 crc kubenswrapper[4826]: E0129 06:44:54.808590 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6qxzb" podUID="11d649f8-dcd0-4c52-96f1-f5c229546376" Jan 29 06:44:54 crc kubenswrapper[4826]: E0129 06:44:54.808720 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:44:54 crc kubenswrapper[4826]: E0129 06:44:54.808847 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.826097 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.826164 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.826187 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.826219 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.826240 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:54Z","lastTransitionTime":"2026-01-29T06:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.928896 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.928973 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.928996 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.929026 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:54 crc kubenswrapper[4826]: I0129 06:44:54.929048 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:54Z","lastTransitionTime":"2026-01-29T06:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.032226 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.032346 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.032368 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.032402 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.032420 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:55Z","lastTransitionTime":"2026-01-29T06:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.135672 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.135733 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.135749 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.135775 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.135792 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:55Z","lastTransitionTime":"2026-01-29T06:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.238744 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.238839 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.238859 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.238883 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.238900 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:55Z","lastTransitionTime":"2026-01-29T06:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.341843 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.341912 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.341933 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.341959 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.341981 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:55Z","lastTransitionTime":"2026-01-29T06:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.444661 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.444729 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.444746 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.444770 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.444787 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:55Z","lastTransitionTime":"2026-01-29T06:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.547875 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.547970 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.547996 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.548032 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.548055 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:55Z","lastTransitionTime":"2026-01-29T06:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.651493 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.651579 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.651600 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.651625 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.651643 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:55Z","lastTransitionTime":"2026-01-29T06:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.754225 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.754328 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.754346 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.754375 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.754397 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:55Z","lastTransitionTime":"2026-01-29T06:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.784135 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 20:31:07.195106191 +0000 UTC Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.808583 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:44:55 crc kubenswrapper[4826]: E0129 06:44:55.808781 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.857108 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.857172 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.857189 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.857216 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.857236 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:55Z","lastTransitionTime":"2026-01-29T06:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.960712 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.960770 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.960787 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.960810 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:55 crc kubenswrapper[4826]: I0129 06:44:55.960827 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:55Z","lastTransitionTime":"2026-01-29T06:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.064844 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.064894 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.064912 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.064934 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.064951 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:56Z","lastTransitionTime":"2026-01-29T06:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.168046 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.168115 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.168140 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.168168 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.168189 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:56Z","lastTransitionTime":"2026-01-29T06:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.271814 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.271894 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.271913 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.271939 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.271958 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:56Z","lastTransitionTime":"2026-01-29T06:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.374946 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.374984 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.374993 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.375008 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.375020 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:56Z","lastTransitionTime":"2026-01-29T06:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.478248 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.478366 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.478390 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.478421 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.478443 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:56Z","lastTransitionTime":"2026-01-29T06:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.581530 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.581562 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.581571 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.581585 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.581593 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:56Z","lastTransitionTime":"2026-01-29T06:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.684824 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.684857 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.684865 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.684878 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.684888 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:56Z","lastTransitionTime":"2026-01-29T06:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.784919 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 02:30:54.41863313 +0000 UTC Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.787173 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.787233 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.787250 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.787274 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.787321 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:56Z","lastTransitionTime":"2026-01-29T06:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.808644 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.808652 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:44:56 crc kubenswrapper[4826]: E0129 06:44:56.808863 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:44:56 crc kubenswrapper[4826]: E0129 06:44:56.808955 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.809596 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6qxzb" Jan 29 06:44:56 crc kubenswrapper[4826]: E0129 06:44:56.810018 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6qxzb" podUID="11d649f8-dcd0-4c52-96f1-f5c229546376" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.826607 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc033d51-31f2-4bfb-8b96-c2de7bc1e560\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f1ef5698c2a7ab7f9271b23d26588313d4efdf9866dff2e5dfbe495de8ad6da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac5d94df68e0024eb8a1bce853c4f7c2c7da0b4e59b79b7ad1fefff060ac3c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac5d94df68e0024eb8a1bce853c4f7c2c7da0b4e59b79b7ad1fefff060ac3c8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:56Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.846795 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:56Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.861985 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tdzw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"550bdc9c-0324-4f3c-98df-95fbf1029eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42e6b57814763e9d213b1dae98fbad4d36b326203d58b6542d27010170f6565c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcc9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tdzw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:56Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.884500 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kdv64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa65a108-1826-4e74-8e8a-1eae605298f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68861c1d0c499dea2e10366881f21ddfe8325202fa0e7a18c8162c45279ed5eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e804e3c0c839d032b5f1c678a3b0646e1b4792bcc5fac6cbd49dd2cb5bc3209\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:44:51Z\\\",\\\"message\\\":\\\"2026-01-29T06:44:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_56b8e8ae-ebdf-4490-8819-a93a4ba2080c\\\\n2026-01-29T06:44:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_56b8e8ae-ebdf-4490-8819-a93a4ba2080c to /host/opt/cni/bin/\\\\n2026-01-29T06:44:06Z [verbose] multus-daemon started\\\\n2026-01-29T06:44:06Z [verbose] Readiness Indicator file check\\\\n2026-01-29T06:44:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfhbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kdv64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:56Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.889914 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.889996 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.890013 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 
06:44:56.890038 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.890057 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:56Z","lastTransitionTime":"2026-01-29T06:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.904167 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e40d650-bb6c-4813-9a2a-d59da7bf6e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bf3ca8147e4012c2cb95e4ce01e17baa83040615adfbca0a88c05d446efc555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69cfd89c00e517ce856d21fe9f9ab1014c40a6b7d9237740e75d756007b061d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c432a0b0dd88840a13aec51ffb48857ec3d4c744a84a48852deccb7fc8422ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes
/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2797dd065e70892b17073f3cf3b9be36a1e405a85a86b3f52b42d4805db80dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2797dd065e70892b17073f3cf3b9be36a1e405a85a86b3f52b42d4805db80dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:56Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.928209 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f93ba8c06281d82ec2584250e6e843d591d364356425831cd6a463234d0a7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:56Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.948594 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e568937c5e0ee46243b6cb16e14122e18272ccae66f1127cd74ce3041785733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://235ac7c60074600d9c3d1e573e4d72fa8819213a4f3076403df3c706454730a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:56Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.964112 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b69cv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3bbd3c9-5a5b-4b55-9742-0fe17ceab252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1589556806e44a6795b7caf7695e584f4d9682e652350d2b4813197b62110119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d9qn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b69cv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:56Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.983448 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e7e9d69-49ad-4177-a388-7d7556ddd380\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c25a1a60e522b5c06e95368ba6cd2727c0b3bb0dc42a8ff78c6afea6d7d5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24eebf52ead4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd8cce464615767426dd78c948b03894eeccf28e4abd93cd66efcbaed2887b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07e746868738216535af20d17aab43ad1a4a1228bd78ee2907d46755e361569b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:56Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.992831 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.992876 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.992887 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.992905 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:56 crc kubenswrapper[4826]: I0129 06:44:56.992918 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:56Z","lastTransitionTime":"2026-01-29T06:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.002990 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:57Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.025850 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca8a794-1985-4f1b-8651-03cfce7dd20c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://734f84e758ca06a23933bafe67941e9d76b6c40f31a2081ebbcf31aa2ab7ddea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1abfe385641cd3794001263cfd1cda2e40e2fdfdf792650b06e82b081371697\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30d017f45a89c47945cc63a3096b6327dc976e85c7a2dc092baa9f067629da9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://345e1ecd50d25aabf3ff2195ab94ac2e015c947632ba25b628f6d351056e0a8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0297
77389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e029777389dd78436a955daa5b5f295023715e9de9b281863f793eb09a34e6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://319f6e68c6fcd1c16db42c3a28159b5f5ef527030152de7a23a5c9f2a339e476\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:10Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1483b6f5667d69c06695c93a22ae5ccc683ef684fe007d9946bfddbdf0744323\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8rttd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5nvkq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:57Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.042685 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-6qxzb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11d649f8-dcd0-4c52-96f1-f5c229546376\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4kx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b4kx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-6qxzb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:57Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:57 crc 
kubenswrapper[4826]: I0129 06:44:57.064072 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66b74e7b-cd1e-4181-8ce6-eb41576c41e7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44fcb87a9ed7e2
0eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:43:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:43:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/k
ube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:43:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:57Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.084247 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:57Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.096422 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.096487 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.096509 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.096540 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.096564 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:57Z","lastTransitionTime":"2026-01-29T06:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.102910 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T06:43:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca127c5fe560e373e07f8aecf0e41b5f18be23bf64a56a99d1f5a8809a53471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:57Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.121034 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://262e4dafc3b9dfefff3c3f64bcbf34a713ed43c895ef8b11cc6ca240f9348420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef
318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0adf6fcaa6a5cc342f8af2dbb14c7fbaf0c953f17ee6e8ef3156e57c3893b93f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4ktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-llzmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:57Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.151725 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0c380f-ebc1-482f-9a91-8b08033eadf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T06:44:35Z\\\",\\\"message\\\":\\\"-apiserver/apiserver retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{apiserver openshift-kube-apiserver 1c35c6c5-2bb9-4633-8ecb-881a1ff8d2fe 7887 0 2025-02-23 05:33:28 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[] 
map[operator.openshift.io/spec-hash:2787a90499aeabb4cf7acbefa3d43f6c763431fdc60904fdfa1fe74cd04203ee] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.4.93,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.93],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0129 06:44:35.852966 6525 services_controller.go:434] Service default/kubernetes retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kubernetes default 1fcaffea-cfe2-4295-9c2a-a3b3626fb3f1 259 0 2025-02-23 05:11:12 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[component:apiserver provider:kubernetes] map[] [] [] []},Spec:Servic\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-s7xfk_openshift-ovn-kubernetes(6f0c380f-ebc1-482f-9a91-8b08033eadf2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6523445c1c399c40fa
822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T06:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T06:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8s9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-s7xfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:57Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.171179 4826 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5gq6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be51f861-02cd-4b43-8b55-eddc27a15272\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T06:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a60cd6040fe32abede192b2672e7a3c58bf438bbf9289ada60031ac50d1dd1dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd54979536f60b2ac8ad4e665f9d0fb7917ee
a17801ec620dca7e547e16bb2ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T06:44:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sjcd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T06:44:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5gq6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T06:44:57Z is after 2025-08-24T17:21:41Z" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.200237 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.200349 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.200374 4826 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.200406 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.200433 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:57Z","lastTransitionTime":"2026-01-29T06:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.303511 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.303586 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.303605 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.303631 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.303652 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:57Z","lastTransitionTime":"2026-01-29T06:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.407473 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.407556 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.407581 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.407611 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.407636 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:57Z","lastTransitionTime":"2026-01-29T06:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.510485 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.510551 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.510575 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.510604 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.510670 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:57Z","lastTransitionTime":"2026-01-29T06:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.614016 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.614086 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.614111 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.614144 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.614173 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:57Z","lastTransitionTime":"2026-01-29T06:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.717337 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.717405 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.717426 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.717455 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.717480 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:57Z","lastTransitionTime":"2026-01-29T06:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.786144 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 05:10:04.790198898 +0000 UTC Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.808634 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:44:57 crc kubenswrapper[4826]: E0129 06:44:57.809073 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.821012 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.821075 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.821092 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.821113 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.821131 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:57Z","lastTransitionTime":"2026-01-29T06:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.923983 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.924036 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.924050 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.924068 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:57 crc kubenswrapper[4826]: I0129 06:44:57.924086 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:57Z","lastTransitionTime":"2026-01-29T06:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.027396 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.027454 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.027463 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.027479 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.027492 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:58Z","lastTransitionTime":"2026-01-29T06:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.130391 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.130443 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.130457 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.130474 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.130487 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:58Z","lastTransitionTime":"2026-01-29T06:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.232924 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.232963 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.232996 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.233012 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.233024 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:58Z","lastTransitionTime":"2026-01-29T06:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.335381 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.335447 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.335460 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.335475 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.335490 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:58Z","lastTransitionTime":"2026-01-29T06:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.438215 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.438371 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.438394 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.438416 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.438433 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:58Z","lastTransitionTime":"2026-01-29T06:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.541929 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.541983 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.542000 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.542027 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.542045 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:58Z","lastTransitionTime":"2026-01-29T06:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.645280 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.645341 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.645355 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.645373 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.645386 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:58Z","lastTransitionTime":"2026-01-29T06:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.748858 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.748916 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.748933 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.748956 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.748973 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:58Z","lastTransitionTime":"2026-01-29T06:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.786744 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 16:04:36.416435278 +0000 UTC Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.808170 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.808183 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6qxzb" Jan 29 06:44:58 crc kubenswrapper[4826]: E0129 06:44:58.808361 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.808431 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:44:58 crc kubenswrapper[4826]: E0129 06:44:58.808550 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6qxzb" podUID="11d649f8-dcd0-4c52-96f1-f5c229546376" Jan 29 06:44:58 crc kubenswrapper[4826]: E0129 06:44:58.808641 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.851777 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.851831 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.851853 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.851882 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.851904 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:58Z","lastTransitionTime":"2026-01-29T06:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.955104 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.955165 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.955182 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.955208 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:58 crc kubenswrapper[4826]: I0129 06:44:58.955227 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:58Z","lastTransitionTime":"2026-01-29T06:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.058989 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.059056 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.059073 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.059098 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.059118 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:59Z","lastTransitionTime":"2026-01-29T06:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.162639 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.162693 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.162710 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.162733 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.162754 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:59Z","lastTransitionTime":"2026-01-29T06:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.265118 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.265186 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.265201 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.265222 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.265243 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:59Z","lastTransitionTime":"2026-01-29T06:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.368577 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.368653 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.368671 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.368697 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.368717 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:59Z","lastTransitionTime":"2026-01-29T06:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.471729 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.471794 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.471820 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.471848 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.471870 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:59Z","lastTransitionTime":"2026-01-29T06:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.574104 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.574152 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.574170 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.574193 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.574211 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:59Z","lastTransitionTime":"2026-01-29T06:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.677684 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.677750 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.677774 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.677801 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.677827 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:59Z","lastTransitionTime":"2026-01-29T06:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.780848 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.780908 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.780928 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.780957 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.780980 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:59Z","lastTransitionTime":"2026-01-29T06:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.786955 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 09:55:33.534694742 +0000 UTC Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.807966 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:44:59 crc kubenswrapper[4826]: E0129 06:44:59.808155 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.884840 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.885017 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.885036 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.885110 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.885168 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:59Z","lastTransitionTime":"2026-01-29T06:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.988725 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.988785 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.988831 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.988855 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:44:59 crc kubenswrapper[4826]: I0129 06:44:59.988873 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:44:59Z","lastTransitionTime":"2026-01-29T06:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.092520 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.092597 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.092622 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.092651 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.092671 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:00Z","lastTransitionTime":"2026-01-29T06:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.195068 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.195160 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.195176 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.195196 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.195212 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:00Z","lastTransitionTime":"2026-01-29T06:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.206754 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.206849 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.206889 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:45:00 crc kubenswrapper[4826]: E0129 06:45:00.207013 4826 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 06:45:00 crc kubenswrapper[4826]: E0129 06:45:00.207108 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:46:04.207018915 +0000 UTC m=+148.068812024 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:00 crc kubenswrapper[4826]: E0129 06:45:00.207135 4826 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 06:45:00 crc kubenswrapper[4826]: E0129 06:45:00.207173 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 06:46:04.207140468 +0000 UTC m=+148.068933577 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 06:45:00 crc kubenswrapper[4826]: E0129 06:45:00.207261 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 06:46:04.207231611 +0000 UTC m=+148.069024710 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.298549 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.298613 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.298634 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.298664 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.298685 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:00Z","lastTransitionTime":"2026-01-29T06:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.308420 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.308625 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:45:00 crc kubenswrapper[4826]: E0129 06:45:00.308651 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 06:45:00 crc kubenswrapper[4826]: E0129 06:45:00.308692 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 06:45:00 crc kubenswrapper[4826]: E0129 06:45:00.308727 4826 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:45:00 crc kubenswrapper[4826]: E0129 06:45:00.308821 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 06:46:04.308795682 +0000 UTC m=+148.170588791 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:45:00 crc kubenswrapper[4826]: E0129 06:45:00.308924 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 06:45:00 crc kubenswrapper[4826]: E0129 06:45:00.308962 4826 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 06:45:00 crc kubenswrapper[4826]: E0129 06:45:00.308982 4826 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:45:00 crc kubenswrapper[4826]: E0129 06:45:00.309074 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 06:46:04.309045958 +0000 UTC m=+148.170839067 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.401168 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.401238 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.401255 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.401364 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.401409 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:00Z","lastTransitionTime":"2026-01-29T06:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.504270 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.504374 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.504391 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.504415 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.504431 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:00Z","lastTransitionTime":"2026-01-29T06:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.607290 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.607388 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.607412 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.607442 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.607462 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:00Z","lastTransitionTime":"2026-01-29T06:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.710012 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.710075 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.710093 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.710259 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.710350 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:00Z","lastTransitionTime":"2026-01-29T06:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.788125 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 12:27:44.909031157 +0000 UTC Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.807847 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6qxzb" Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.807859 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.807922 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:45:00 crc kubenswrapper[4826]: E0129 06:45:00.808611 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6qxzb" podUID="11d649f8-dcd0-4c52-96f1-f5c229546376" Jan 29 06:45:00 crc kubenswrapper[4826]: E0129 06:45:00.808747 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:45:00 crc kubenswrapper[4826]: E0129 06:45:00.808965 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.813803 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.813853 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.813869 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.814659 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.814709 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:00Z","lastTransitionTime":"2026-01-29T06:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.833700 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.917822 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.917893 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.917930 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.917965 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:00 crc kubenswrapper[4826]: I0129 06:45:00.918011 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:00Z","lastTransitionTime":"2026-01-29T06:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.020754 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.020820 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.020838 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.020862 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.020882 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:01Z","lastTransitionTime":"2026-01-29T06:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.123741 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.123813 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.123830 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.123859 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.123877 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:01Z","lastTransitionTime":"2026-01-29T06:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.227368 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.227428 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.227442 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.227463 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.227475 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:01Z","lastTransitionTime":"2026-01-29T06:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.329572 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.329626 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.329643 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.329664 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.329683 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:01Z","lastTransitionTime":"2026-01-29T06:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.433477 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.433545 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.433567 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.433596 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.433618 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:01Z","lastTransitionTime":"2026-01-29T06:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.536164 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.536254 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.536288 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.536395 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.536423 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:01Z","lastTransitionTime":"2026-01-29T06:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.638719 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.639028 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.639153 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.639363 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.639497 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:01Z","lastTransitionTime":"2026-01-29T06:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.741595 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.741648 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.741659 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.741674 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.741686 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:01Z","lastTransitionTime":"2026-01-29T06:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.788977 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 12:54:59.787340047 +0000 UTC Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.808416 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:45:01 crc kubenswrapper[4826]: E0129 06:45:01.808773 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.848267 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.848402 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.848483 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.848522 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.848560 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:01Z","lastTransitionTime":"2026-01-29T06:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.952261 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.952674 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.952835 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.952988 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:01 crc kubenswrapper[4826]: I0129 06:45:01.953120 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:01Z","lastTransitionTime":"2026-01-29T06:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.056626 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.056689 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.056706 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.056730 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.056747 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:02Z","lastTransitionTime":"2026-01-29T06:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.159377 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.159436 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.159453 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.159478 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.159499 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:02Z","lastTransitionTime":"2026-01-29T06:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.262916 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.262974 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.262991 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.263013 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.263035 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:02Z","lastTransitionTime":"2026-01-29T06:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.366286 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.366375 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.366397 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.366426 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.366448 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:02Z","lastTransitionTime":"2026-01-29T06:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.469914 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.469956 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.469967 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.469983 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.469994 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:02Z","lastTransitionTime":"2026-01-29T06:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.573624 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.573988 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.574207 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.574450 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.574612 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:02Z","lastTransitionTime":"2026-01-29T06:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.678190 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.678269 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.678293 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.678349 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.678367 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:02Z","lastTransitionTime":"2026-01-29T06:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.781872 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.781939 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.781956 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.781982 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.782009 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:02Z","lastTransitionTime":"2026-01-29T06:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.789190 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 23:09:00.52156993 +0000 UTC Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.808720 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.808819 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.809529 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6qxzb" Jan 29 06:45:02 crc kubenswrapper[4826]: E0129 06:45:02.809743 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:45:02 crc kubenswrapper[4826]: E0129 06:45:02.809892 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6qxzb" podUID="11d649f8-dcd0-4c52-96f1-f5c229546376" Jan 29 06:45:02 crc kubenswrapper[4826]: E0129 06:45:02.810108 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.810850 4826 scope.go:117] "RemoveContainer" containerID="08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.884707 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.884752 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.884761 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.884777 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.884787 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:02Z","lastTransitionTime":"2026-01-29T06:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.988223 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.988656 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.988680 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.988746 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:02 crc kubenswrapper[4826]: I0129 06:45:02.988767 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:02Z","lastTransitionTime":"2026-01-29T06:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.091656 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.091717 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.091735 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.091760 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.091778 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:03Z","lastTransitionTime":"2026-01-29T06:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.196412 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.196472 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.196494 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.196522 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.196540 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:03Z","lastTransitionTime":"2026-01-29T06:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.299180 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.299226 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.299239 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.299256 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.299271 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:03Z","lastTransitionTime":"2026-01-29T06:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.342474 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s7xfk_6f0c380f-ebc1-482f-9a91-8b08033eadf2/ovnkube-controller/2.log" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.345771 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" event={"ID":"6f0c380f-ebc1-482f-9a91-8b08033eadf2","Type":"ContainerStarted","Data":"883ae177aed2c9f33e7e63846c8d0025ba4ea1a0d83a8811c00255e1f2ca6f21"} Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.346391 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.374018 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=3.373998982 podStartE2EDuration="3.373998982s" podCreationTimestamp="2026-01-29 06:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:03.373333475 +0000 UTC m=+87.235126554" watchObservedRunningTime="2026-01-29 06:45:03.373998982 +0000 UTC m=+87.235792061" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.390690 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=64.390670939 podStartE2EDuration="1m4.390670939s" podCreationTimestamp="2026-01-29 06:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:03.390473724 +0000 UTC m=+87.252266803" watchObservedRunningTime="2026-01-29 06:45:03.390670939 +0000 UTC m=+87.252464018" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.401932 4826 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.402002 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.402016 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.402035 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.402047 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:03Z","lastTransitionTime":"2026-01-29T06:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.454437 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-5nvkq" podStartSLOduration=60.454405671 podStartE2EDuration="1m0.454405671s" podCreationTimestamp="2026-01-29 06:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:03.436263576 +0000 UTC m=+87.298056655" watchObservedRunningTime="2026-01-29 06:45:03.454405671 +0000 UTC m=+87.316198780" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.466847 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5gq6p" podStartSLOduration=60.466820519 podStartE2EDuration="1m0.466820519s" podCreationTimestamp="2026-01-29 06:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:03.454043442 +0000 UTC m=+87.315836551" watchObservedRunningTime="2026-01-29 06:45:03.466820519 +0000 UTC m=+87.328613628" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.489581 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=69.489558541 podStartE2EDuration="1m9.489558541s" podCreationTimestamp="2026-01-29 06:43:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:03.489039038 +0000 UTC m=+87.350832117" watchObservedRunningTime="2026-01-29 06:45:03.489558541 +0000 UTC m=+87.351351630" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.504446 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:03 crc 
kubenswrapper[4826]: I0129 06:45:03.504495 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.504510 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.504529 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.504548 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:03Z","lastTransitionTime":"2026-01-29T06:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.561761 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podStartSLOduration=60.56174369 podStartE2EDuration="1m0.56174369s" podCreationTimestamp="2026-01-29 06:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:03.561021771 +0000 UTC m=+87.422814840" watchObservedRunningTime="2026-01-29 06:45:03.56174369 +0000 UTC m=+87.423536759" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.587201 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" podStartSLOduration=60.587178691 podStartE2EDuration="1m0.587178691s" podCreationTimestamp="2026-01-29 06:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-29 06:45:03.586844923 +0000 UTC m=+87.448637992" watchObservedRunningTime="2026-01-29 06:45:03.587178691 +0000 UTC m=+87.448971780" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.598259 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=14.598238685 podStartE2EDuration="14.598238685s" podCreationTimestamp="2026-01-29 06:44:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:03.597948687 +0000 UTC m=+87.459741756" watchObservedRunningTime="2026-01-29 06:45:03.598238685 +0000 UTC m=+87.460031754" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.607057 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.607204 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.607333 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.607431 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.607530 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:03Z","lastTransitionTime":"2026-01-29T06:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.636489 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-tdzw4" podStartSLOduration=61.636470444 podStartE2EDuration="1m1.636470444s" podCreationTimestamp="2026-01-29 06:44:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:03.636159276 +0000 UTC m=+87.497952345" watchObservedRunningTime="2026-01-29 06:45:03.636470444 +0000 UTC m=+87.498263533" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.672578 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-kdv64" podStartSLOduration=60.672561288 podStartE2EDuration="1m0.672561288s" podCreationTimestamp="2026-01-29 06:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:03.654748732 +0000 UTC m=+87.516541801" watchObservedRunningTime="2026-01-29 06:45:03.672561288 +0000 UTC m=+87.534354357" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.672873 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=37.672869496 podStartE2EDuration="37.672869496s" podCreationTimestamp="2026-01-29 06:44:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:03.672867116 +0000 UTC m=+87.534660185" watchObservedRunningTime="2026-01-29 06:45:03.672869496 +0000 UTC m=+87.534662565" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.709836 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.709868 4826 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.709876 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.709893 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.709904 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:03Z","lastTransitionTime":"2026-01-29T06:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.746889 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-b69cv" podStartSLOduration=61.746851791 podStartE2EDuration="1m1.746851791s" podCreationTimestamp="2026-01-29 06:44:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:03.719594513 +0000 UTC m=+87.581387582" watchObservedRunningTime="2026-01-29 06:45:03.746851791 +0000 UTC m=+87.608644850" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.747761 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-6qxzb"] Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.747856 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6qxzb" Jan 29 06:45:03 crc kubenswrapper[4826]: E0129 06:45:03.747951 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6qxzb" podUID="11d649f8-dcd0-4c52-96f1-f5c229546376" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.790163 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 04:23:47.579597788 +0000 UTC Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.808493 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:45:03 crc kubenswrapper[4826]: E0129 06:45:03.808594 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.812123 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.812141 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.812149 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.812160 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.812169 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:03Z","lastTransitionTime":"2026-01-29T06:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.914408 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.914469 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.914491 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.914522 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:03 crc kubenswrapper[4826]: I0129 06:45:03.914544 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:03Z","lastTransitionTime":"2026-01-29T06:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.017254 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.017310 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.017320 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.017333 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.017343 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:04Z","lastTransitionTime":"2026-01-29T06:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.120922 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.120984 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.121002 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.121027 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.121045 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:04Z","lastTransitionTime":"2026-01-29T06:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.224731 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.224826 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.224849 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.224881 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.224899 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:04Z","lastTransitionTime":"2026-01-29T06:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.328537 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.328603 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.328621 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.328648 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.328667 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:04Z","lastTransitionTime":"2026-01-29T06:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.432250 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.432360 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.432385 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.432416 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.432456 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:04Z","lastTransitionTime":"2026-01-29T06:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.535608 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.535673 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.535697 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.535728 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.535750 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:04Z","lastTransitionTime":"2026-01-29T06:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.639347 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.639406 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.639431 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.639456 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.639475 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:04Z","lastTransitionTime":"2026-01-29T06:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.743920 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.743999 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.744019 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.744050 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.744073 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:04Z","lastTransitionTime":"2026-01-29T06:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.790888 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 22:41:56.900973524 +0000 UTC Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.808420 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.808437 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:45:04 crc kubenswrapper[4826]: E0129 06:45:04.808801 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 06:45:04 crc kubenswrapper[4826]: E0129 06:45:04.809011 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.847956 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.848024 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.848047 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.848077 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.848097 4826 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:04Z","lastTransitionTime":"2026-01-29T06:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.952686 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.952742 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.952758 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.952779 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.952798 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:04Z","lastTransitionTime":"2026-01-29T06:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.972570 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.972629 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.972646 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.972671 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 06:45:04 crc kubenswrapper[4826]: I0129 06:45:04.972694 4826 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T06:45:04Z","lastTransitionTime":"2026-01-29T06:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 06:45:05 crc kubenswrapper[4826]: I0129 06:45:05.040100 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-tbg9r"] Jan 29 06:45:05 crc kubenswrapper[4826]: I0129 06:45:05.041602 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tbg9r"
Jan 29 06:45:05 crc kubenswrapper[4826]: I0129 06:45:05.044759 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 29 06:45:05 crc kubenswrapper[4826]: I0129 06:45:05.046483 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 29 06:45:05 crc kubenswrapper[4826]: I0129 06:45:05.046989 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 29 06:45:05 crc kubenswrapper[4826]: I0129 06:45:05.047238 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 29 06:45:05 crc kubenswrapper[4826]: I0129 06:45:05.065277 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3311098a-f814-4506-88b6-fdead54022ed-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-tbg9r\" (UID: \"3311098a-f814-4506-88b6-fdead54022ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tbg9r"
Jan 29 06:45:05 crc kubenswrapper[4826]: I0129 06:45:05.065437 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3311098a-f814-4506-88b6-fdead54022ed-service-ca\") pod \"cluster-version-operator-5c965bbfc6-tbg9r\" (UID: \"3311098a-f814-4506-88b6-fdead54022ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tbg9r"
Jan 29 06:45:05 crc kubenswrapper[4826]: I0129 06:45:05.065500 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3311098a-f814-4506-88b6-fdead54022ed-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-tbg9r\" (UID: \"3311098a-f814-4506-88b6-fdead54022ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tbg9r"
Jan 29 06:45:05 crc kubenswrapper[4826]: I0129 06:45:05.065555 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3311098a-f814-4506-88b6-fdead54022ed-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-tbg9r\" (UID: \"3311098a-f814-4506-88b6-fdead54022ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tbg9r"
Jan 29 06:45:05 crc kubenswrapper[4826]: I0129 06:45:05.065617 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3311098a-f814-4506-88b6-fdead54022ed-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-tbg9r\" (UID: \"3311098a-f814-4506-88b6-fdead54022ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tbg9r"
Jan 29 06:45:05 crc kubenswrapper[4826]: I0129 06:45:05.166942 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3311098a-f814-4506-88b6-fdead54022ed-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-tbg9r\" (UID: \"3311098a-f814-4506-88b6-fdead54022ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tbg9r"
Jan 29 06:45:05 crc kubenswrapper[4826]: I0129 06:45:05.167023 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3311098a-f814-4506-88b6-fdead54022ed-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-tbg9r\" (UID: \"3311098a-f814-4506-88b6-fdead54022ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tbg9r"
Jan 29 06:45:05 crc kubenswrapper[4826]: I0129 06:45:05.167076 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3311098a-f814-4506-88b6-fdead54022ed-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-tbg9r\" (UID: \"3311098a-f814-4506-88b6-fdead54022ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tbg9r"
Jan 29 06:45:05 crc kubenswrapper[4826]: I0129 06:45:05.167115 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3311098a-f814-4506-88b6-fdead54022ed-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-tbg9r\" (UID: \"3311098a-f814-4506-88b6-fdead54022ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tbg9r"
Jan 29 06:45:05 crc kubenswrapper[4826]: I0129 06:45:05.167193 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3311098a-f814-4506-88b6-fdead54022ed-service-ca\") pod \"cluster-version-operator-5c965bbfc6-tbg9r\" (UID: \"3311098a-f814-4506-88b6-fdead54022ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tbg9r"
Jan 29 06:45:05 crc kubenswrapper[4826]: I0129 06:45:05.167241 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3311098a-f814-4506-88b6-fdead54022ed-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-tbg9r\" (UID: \"3311098a-f814-4506-88b6-fdead54022ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tbg9r"
Jan 29 06:45:05 crc kubenswrapper[4826]: I0129 06:45:05.167430 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3311098a-f814-4506-88b6-fdead54022ed-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-tbg9r\" (UID: \"3311098a-f814-4506-88b6-fdead54022ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tbg9r"
Jan 29 06:45:05 crc kubenswrapper[4826]: I0129 06:45:05.168671 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3311098a-f814-4506-88b6-fdead54022ed-service-ca\") pod \"cluster-version-operator-5c965bbfc6-tbg9r\" (UID: \"3311098a-f814-4506-88b6-fdead54022ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tbg9r"
Jan 29 06:45:05 crc kubenswrapper[4826]: I0129 06:45:05.178974 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3311098a-f814-4506-88b6-fdead54022ed-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-tbg9r\" (UID: \"3311098a-f814-4506-88b6-fdead54022ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tbg9r"
Jan 29 06:45:05 crc kubenswrapper[4826]: I0129 06:45:05.200376 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3311098a-f814-4506-88b6-fdead54022ed-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-tbg9r\" (UID: \"3311098a-f814-4506-88b6-fdead54022ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tbg9r"
Jan 29 06:45:05 crc kubenswrapper[4826]: I0129 06:45:05.370468 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tbg9r"
Jan 29 06:45:05 crc kubenswrapper[4826]: W0129 06:45:05.385552 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3311098a_f814_4506_88b6_fdead54022ed.slice/crio-a606f69ed5fb923e168c5ddf00c5ea742bb937021bddf7be13d4ff6d5a3647e2 WatchSource:0}: Error finding container a606f69ed5fb923e168c5ddf00c5ea742bb937021bddf7be13d4ff6d5a3647e2: Status 404 returned error can't find the container with id a606f69ed5fb923e168c5ddf00c5ea742bb937021bddf7be13d4ff6d5a3647e2
Jan 29 06:45:05 crc kubenswrapper[4826]: I0129 06:45:05.792121 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 03:34:26.568305908 +0000 UTC
Jan 29 06:45:05 crc kubenswrapper[4826]: I0129 06:45:05.793484 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Jan 29 06:45:05 crc kubenswrapper[4826]: I0129 06:45:05.805028 4826 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 29 06:45:05 crc kubenswrapper[4826]: I0129 06:45:05.808114 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6qxzb"
Jan 29 06:45:05 crc kubenswrapper[4826]: I0129 06:45:05.808131 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 06:45:05 crc kubenswrapper[4826]: E0129 06:45:05.808659 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6qxzb" podUID="11d649f8-dcd0-4c52-96f1-f5c229546376"
Jan 29 06:45:05 crc kubenswrapper[4826]: E0129 06:45:05.808763 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.359676 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tbg9r" event={"ID":"3311098a-f814-4506-88b6-fdead54022ed","Type":"ContainerStarted","Data":"644049acad204b1143549520fa766548e99bb8bd463660a9b43e7ebb3dd90291"}
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.359756 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tbg9r" event={"ID":"3311098a-f814-4506-88b6-fdead54022ed","Type":"ContainerStarted","Data":"a606f69ed5fb923e168c5ddf00c5ea742bb937021bddf7be13d4ff6d5a3647e2"}
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.506489 4826 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.506660 4826 kubelet_node_status.go:538] "Fast updating node status as it just became ready"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.545600 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-tbg9r" podStartSLOduration=64.545578107 podStartE2EDuration="1m4.545578107s" podCreationTimestamp="2026-01-29 06:44:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:06.388268228 +0000 UTC m=+90.250061327" watchObservedRunningTime="2026-01-29 06:45:06.545578107 +0000 UTC m=+90.407371206"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.546546 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lzzb6"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.547320 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lzzb6"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.548269 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-djctr"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.549479 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-djctr"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.549678 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-578bc"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.550758 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-578bc"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.558744 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.558882 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.558921 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.559173 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.559199 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.559256 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.559360 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.559386 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.559473 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.559486 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.559498 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.559577 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.559818 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.560071 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.560592 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.561952 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.562060 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.566337 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.570127 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-gxgts"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.570809 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gxgts"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.572408 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sg4dq"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.572896 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sg4dq"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.579581 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.579687 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-hjdjf"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.579825 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.580026 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.580106 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tv2tp"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.580391 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.580505 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tv2tp"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.580670 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.580851 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjdjf"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.580938 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.581266 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.581386 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.581394 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2fdxd"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.581430 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.581560 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.581948 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.582063 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.585565 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2svg\" (UniqueName: \"kubernetes.io/projected/41546796-854f-46bf-9b24-e2b51d6890a5-kube-api-access-p2svg\") pod \"controller-manager-879f6c89f-lzzb6\" (UID: \"41546796-854f-46bf-9b24-e2b51d6890a5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lzzb6"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.585616 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/144d8289-078b-45cd-9539-901b6c72a980-images\") pod \"machine-api-operator-5694c8668f-djctr\" (UID: \"144d8289-078b-45cd-9539-901b6c72a980\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-djctr"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.585644 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbm55\" (UniqueName: \"kubernetes.io/projected/144d8289-078b-45cd-9539-901b6c72a980-kube-api-access-pbm55\") pod \"machine-api-operator-5694c8668f-djctr\" (UID: \"144d8289-078b-45cd-9539-901b6c72a980\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-djctr"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.585664 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgccc\" (UniqueName: \"kubernetes.io/projected/0f6370c6-e353-4dcc-916c-26406b8ff40a-kube-api-access-lgccc\") pod \"openshift-apiserver-operator-796bbdcf4f-578bc\" (UID: \"0f6370c6-e353-4dcc-916c-26406b8ff40a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-578bc"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.585682 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41546796-854f-46bf-9b24-e2b51d6890a5-serving-cert\") pod \"controller-manager-879f6c89f-lzzb6\" (UID: \"41546796-854f-46bf-9b24-e2b51d6890a5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lzzb6"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.585697 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41546796-854f-46bf-9b24-e2b51d6890a5-config\") pod \"controller-manager-879f6c89f-lzzb6\" (UID: \"41546796-854f-46bf-9b24-e2b51d6890a5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lzzb6"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.585719 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/144d8289-078b-45cd-9539-901b6c72a980-config\") pod \"machine-api-operator-5694c8668f-djctr\" (UID: \"144d8289-078b-45cd-9539-901b6c72a980\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-djctr"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.585736 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/144d8289-078b-45cd-9539-901b6c72a980-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-djctr\" (UID: \"144d8289-078b-45cd-9539-901b6c72a980\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-djctr"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.585752 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41546796-854f-46bf-9b24-e2b51d6890a5-client-ca\") pod \"controller-manager-879f6c89f-lzzb6\" (UID: \"41546796-854f-46bf-9b24-e2b51d6890a5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lzzb6"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.585768 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/41546796-854f-46bf-9b24-e2b51d6890a5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lzzb6\" (UID: \"41546796-854f-46bf-9b24-e2b51d6890a5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lzzb6"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.585790 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f6370c6-e353-4dcc-916c-26406b8ff40a-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-578bc\" (UID: \"0f6370c6-e353-4dcc-916c-26406b8ff40a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-578bc"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.585807 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f6370c6-e353-4dcc-916c-26406b8ff40a-config\") pod \"openshift-apiserver-operator-796bbdcf4f-578bc\" (UID: \"0f6370c6-e353-4dcc-916c-26406b8ff40a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-578bc"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.586056 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.586093 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.587430 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.587618 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.587784 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.587851 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.589552 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-hpvzg"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.589905 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.590815 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.591008 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.591170 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.591229 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.591282 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.591550 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.592083 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.604251 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.604788 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.606541 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-w4j5q"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.608800 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2qc9p"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.610466 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tpz64"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.612011 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.613707 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-g4h97"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.613764 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2qc9p"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.620567 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.620771 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-hpvzg"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.620806 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.620819 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.620907 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.620938 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.621192 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.621264 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w4j5q"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.621601 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.621946 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.623744 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-7wl5q"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.623996 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-djctr"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.624016 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lg7nk"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.624031 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-tpz64"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.624401 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ddt5t"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.624646 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cj5mb"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.624911 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-t4qwq"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.625202 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vp6lh"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.625411 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lg7nk"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.625518 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-vp6lh"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.625566 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cj5mb"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.625698 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-g4h97"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.625856 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-t4qwq"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.625993 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.625410 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.626574 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-nbsns"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.626861 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-7wl5q"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.626884 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5lrns"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.626941 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-nbsns"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.627965 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5lrns"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.628097 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fwx5q"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.628765 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fwx5q"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.629341 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7j48t"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.636331 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fk6kh"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.636626 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-2s49r"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.636966 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xnm5r"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.637251 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xnm5r"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.637430 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fk6kh"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.637578 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2s49r"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.637665 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7j48t"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.637678 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-m4cbp"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.638888 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m4cbp"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.642078 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.642672 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.642937 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.643315 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.643752 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.643854 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.644076 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.644320 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan
29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.644669 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.644751 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.644851 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.645101 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.645396 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.645421 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.649455 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.649598 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.650059 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.650138 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.650230 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2qc9p"] Jan 29 06:45:06 crc 
kubenswrapper[4826]: I0129 06:45:06.657412 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.657782 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.658030 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.658251 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.658612 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.658641 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.658850 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.664626 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.665785 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.666081 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.666459 4826 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.666671 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-7wl5q"] Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.666862 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.667014 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.667884 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.668144 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.692938 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.693062 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.693125 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.693063 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.694007 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ddt5t"] Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.694046 4826 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.694246 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.694863 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.694936 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.695441 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-t4qwq"] Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.695613 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20f5d736-918d-438f-a419-e37ca4242df9-serving-cert\") pod \"apiserver-7bbb656c7d-hjdjf\" (UID: \"20f5d736-918d-438f-a419-e37ca4242df9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjdjf" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.695654 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/20f5d736-918d-438f-a419-e37ca4242df9-encryption-config\") pod \"apiserver-7bbb656c7d-hjdjf\" (UID: \"20f5d736-918d-438f-a419-e37ca4242df9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjdjf" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.695683 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-trusted-ca-bundle\") pod \"console-f9d7485db-t4qwq\" (UID: \"9bc5b6b0-9626-4ae0-b053-1dae3c13dd47\") " 
pod="openshift-console/console-f9d7485db-t4qwq" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.695709 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/55aa777b-0339-400d-bcdb-63f1d464b03b-node-pullsecrets\") pod \"apiserver-76f77b778f-tpz64\" (UID: \"55aa777b-0339-400d-bcdb-63f1d464b03b\") " pod="openshift-apiserver/apiserver-76f77b778f-tpz64" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.695753 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/30bc5222-e3c7-4cad-8e68-d39368e9d00d-audit-policies\") pod \"oauth-openshift-558db77b4-2fdxd\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.695775 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/20f5d736-918d-438f-a419-e37ca4242df9-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-hjdjf\" (UID: \"20f5d736-918d-438f-a419-e37ca4242df9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjdjf" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.695792 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65m4t\" (UniqueName: \"kubernetes.io/projected/618ab45f-146f-4e1a-a92b-aaa531cede89-kube-api-access-65m4t\") pod \"router-default-5444994796-nbsns\" (UID: \"618ab45f-146f-4e1a-a92b-aaa531cede89\") " pod="openshift-ingress/router-default-5444994796-nbsns" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.695813 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64hpj\" (UniqueName: 
\"kubernetes.io/projected/2408cff0-08c6-4c02-8fe5-8a92a1ddb6fa-kube-api-access-64hpj\") pod \"dns-operator-744455d44c-2qc9p\" (UID: \"2408cff0-08c6-4c02-8fe5-8a92a1ddb6fa\") " pod="openshift-dns-operator/dns-operator-744455d44c-2qc9p" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.695840 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/144d8289-078b-45cd-9539-901b6c72a980-images\") pod \"machine-api-operator-5694c8668f-djctr\" (UID: \"144d8289-078b-45cd-9539-901b6c72a980\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-djctr" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.695920 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c1ec652-679e-4b4e-8f3a-83f39b7c9bec-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-g4h97\" (UID: \"8c1ec652-679e-4b4e-8f3a-83f39b7c9bec\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g4h97" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.695951 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/55aa777b-0339-400d-bcdb-63f1d464b03b-etcd-serving-ca\") pod \"apiserver-76f77b778f-tpz64\" (UID: \"55aa777b-0339-400d-bcdb-63f1d464b03b\") " pod="openshift-apiserver/apiserver-76f77b778f-tpz64" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.695965 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/55aa777b-0339-400d-bcdb-63f1d464b03b-audit-dir\") pod \"apiserver-76f77b778f-tpz64\" (UID: \"55aa777b-0339-400d-bcdb-63f1d464b03b\") " pod="openshift-apiserver/apiserver-76f77b778f-tpz64" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.695979 4826 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c1ec652-679e-4b4e-8f3a-83f39b7c9bec-service-ca-bundle\") pod \"authentication-operator-69f744f599-g4h97\" (UID: \"8c1ec652-679e-4b4e-8f3a-83f39b7c9bec\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g4h97" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.695994 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/55aa777b-0339-400d-bcdb-63f1d464b03b-audit\") pod \"apiserver-76f77b778f-tpz64\" (UID: \"55aa777b-0339-400d-bcdb-63f1d464b03b\") " pod="openshift-apiserver/apiserver-76f77b778f-tpz64" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.696099 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mwhf\" (UniqueName: \"kubernetes.io/projected/7da7e397-19dd-4eaa-86bc-44e555785978-kube-api-access-4mwhf\") pod \"control-plane-machine-set-operator-78cbb6b69f-xnm5r\" (UID: \"7da7e397-19dd-4eaa-86bc-44e555785978\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xnm5r" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.696748 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/144d8289-078b-45cd-9539-901b6c72a980-images\") pod \"machine-api-operator-5694c8668f-djctr\" (UID: \"144d8289-078b-45cd-9539-901b6c72a980\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-djctr" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.696865 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qrd8x"] Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.697349 4826 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"authentication-operator-config" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.696117 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/55aa777b-0339-400d-bcdb-63f1d464b03b-image-import-ca\") pod \"apiserver-76f77b778f-tpz64\" (UID: \"55aa777b-0339-400d-bcdb-63f1d464b03b\") " pod="openshift-apiserver/apiserver-76f77b778f-tpz64" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.697415 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qrd8x" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.697436 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-2fdxd\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.697467 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbm55\" (UniqueName: \"kubernetes.io/projected/144d8289-078b-45cd-9539-901b6c72a980-kube-api-access-pbm55\") pod \"machine-api-operator-5694c8668f-djctr\" (UID: \"144d8289-078b-45cd-9539-901b6c72a980\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-djctr" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.697488 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5384f8de-a07b-4bec-9366-61227201eb43-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-tv2tp\" (UID: \"5384f8de-a07b-4bec-9366-61227201eb43\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tv2tp" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.697504 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/30bc5222-e3c7-4cad-8e68-d39368e9d00d-audit-dir\") pod \"oauth-openshift-558db77b4-2fdxd\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.697526 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6613300-6034-4de0-923f-9ed7ac56ddfa-config\") pod \"machine-approver-56656f9798-gxgts\" (UID: \"c6613300-6034-4de0-923f-9ed7ac56ddfa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gxgts" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.697544 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrvc9\" (UniqueName: \"kubernetes.io/projected/c6613300-6034-4de0-923f-9ed7ac56ddfa-kube-api-access-wrvc9\") pod \"machine-approver-56656f9798-gxgts\" (UID: \"c6613300-6034-4de0-923f-9ed7ac56ddfa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gxgts" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.697564 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/98eff8b7-3e41-407f-880b-8cf237b9886f-proxy-tls\") pod \"machine-config-controller-84d6567774-m4cbp\" (UID: \"98eff8b7-3e41-407f-880b-8cf237b9886f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m4cbp" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.697580 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"console-config\" (UniqueName: \"kubernetes.io/configmap/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-console-config\") pod \"console-f9d7485db-t4qwq\" (UID: \"9bc5b6b0-9626-4ae0-b053-1dae3c13dd47\") " pod="openshift-console/console-f9d7485db-t4qwq" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.697602 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgccc\" (UniqueName: \"kubernetes.io/projected/0f6370c6-e353-4dcc-916c-26406b8ff40a-kube-api-access-lgccc\") pod \"openshift-apiserver-operator-796bbdcf4f-578bc\" (UID: \"0f6370c6-e353-4dcc-916c-26406b8ff40a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-578bc" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.697636 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b31f55be-1122-4a3a-b381-469b9363c0d6-config\") pod \"kube-controller-manager-operator-78b949d7b-fk6kh\" (UID: \"b31f55be-1122-4a3a-b381-469b9363c0d6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fk6kh" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.697661 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/df9bd5fb-3887-44e9-8b90-b5a8611ff50a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-cj5mb\" (UID: \"df9bd5fb-3887-44e9-8b90-b5a8611ff50a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cj5mb" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.697681 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/618ab45f-146f-4e1a-a92b-aaa531cede89-metrics-certs\") pod \"router-default-5444994796-nbsns\" (UID: \"618ab45f-146f-4e1a-a92b-aaa531cede89\") " 
pod="openshift-ingress/router-default-5444994796-nbsns" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.697701 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-2fdxd\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.697723 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41546796-854f-46bf-9b24-e2b51d6890a5-serving-cert\") pod \"controller-manager-879f6c89f-lzzb6\" (UID: \"41546796-854f-46bf-9b24-e2b51d6890a5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lzzb6" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.697744 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55aa777b-0339-400d-bcdb-63f1d464b03b-serving-cert\") pod \"apiserver-76f77b778f-tpz64\" (UID: \"55aa777b-0339-400d-bcdb-63f1d464b03b\") " pod="openshift-apiserver/apiserver-76f77b778f-tpz64" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.697765 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6j7w\" (UniqueName: \"kubernetes.io/projected/5384f8de-a07b-4bec-9366-61227201eb43-kube-api-access-p6j7w\") pod \"cluster-samples-operator-665b6dd947-tv2tp\" (UID: \"5384f8de-a07b-4bec-9366-61227201eb43\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tv2tp" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.697787 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/41546796-854f-46bf-9b24-e2b51d6890a5-config\") pod \"controller-manager-879f6c89f-lzzb6\" (UID: \"41546796-854f-46bf-9b24-e2b51d6890a5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lzzb6" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.697806 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/20f5d736-918d-438f-a419-e37ca4242df9-etcd-client\") pod \"apiserver-7bbb656c7d-hjdjf\" (UID: \"20f5d736-918d-438f-a419-e37ca4242df9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjdjf" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.697825 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwncc\" (UniqueName: \"kubernetes.io/projected/b59ae2a8-2233-43a3-a1ac-cd5ead659130-kube-api-access-wwncc\") pod \"openshift-config-operator-7777fb866f-w4j5q\" (UID: \"b59ae2a8-2233-43a3-a1ac-cd5ead659130\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w4j5q" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.697841 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxmfq\" (UniqueName: \"kubernetes.io/projected/b9b12f8e-c8d1-499f-99b5-a4c8970c55ab-kube-api-access-jxmfq\") pod \"openshift-controller-manager-operator-756b6f6bc6-fwx5q\" (UID: \"b9b12f8e-c8d1-499f-99b5-a4c8970c55ab\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fwx5q" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.697862 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/618ab45f-146f-4e1a-a92b-aaa531cede89-default-certificate\") pod \"router-default-5444994796-nbsns\" (UID: \"618ab45f-146f-4e1a-a92b-aaa531cede89\") 
" pod="openshift-ingress/router-default-5444994796-nbsns" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.697923 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20f5d736-918d-438f-a419-e37ca4242df9-audit-dir\") pod \"apiserver-7bbb656c7d-hjdjf\" (UID: \"20f5d736-918d-438f-a419-e37ca4242df9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjdjf" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.697952 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5k2v\" (UniqueName: \"kubernetes.io/projected/df9bd5fb-3887-44e9-8b90-b5a8611ff50a-kube-api-access-f5k2v\") pod \"cluster-image-registry-operator-dc59b4c8b-cj5mb\" (UID: \"df9bd5fb-3887-44e9-8b90-b5a8611ff50a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cj5mb" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.697978 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-console-serving-cert\") pod \"console-f9d7485db-t4qwq\" (UID: \"9bc5b6b0-9626-4ae0-b053-1dae3c13dd47\") " pod="openshift-console/console-f9d7485db-t4qwq" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.697996 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55aa777b-0339-400d-bcdb-63f1d464b03b-config\") pod \"apiserver-76f77b778f-tpz64\" (UID: \"55aa777b-0339-400d-bcdb-63f1d464b03b\") " pod="openshift-apiserver/apiserver-76f77b778f-tpz64" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.698013 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-2fdxd\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.698056 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/144d8289-078b-45cd-9539-901b6c72a980-config\") pod \"machine-api-operator-5694c8668f-djctr\" (UID: \"144d8289-078b-45cd-9539-901b6c72a980\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-djctr" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.698174 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/144d8289-078b-45cd-9539-901b6c72a980-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-djctr\" (UID: \"144d8289-078b-45cd-9539-901b6c72a980\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-djctr" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.698213 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r9qj\" (UniqueName: \"kubernetes.io/projected/1a23bf6a-86ab-4319-8a5a-e447509ac03f-kube-api-access-8r9qj\") pod \"route-controller-manager-6576b87f9c-sg4dq\" (UID: \"1a23bf6a-86ab-4319-8a5a-e447509ac03f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sg4dq" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.698232 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-console-oauth-config\") pod \"console-f9d7485db-t4qwq\" (UID: \"9bc5b6b0-9626-4ae0-b053-1dae3c13dd47\") " pod="openshift-console/console-f9d7485db-t4qwq" Jan 29 06:45:06 crc 
kubenswrapper[4826]: I0129 06:45:06.698257 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9b12f8e-c8d1-499f-99b5-a4c8970c55ab-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-fwx5q\" (UID: \"b9b12f8e-c8d1-499f-99b5-a4c8970c55ab\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fwx5q" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.698643 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/144d8289-078b-45cd-9539-901b6c72a980-config\") pod \"machine-api-operator-5694c8668f-djctr\" (UID: \"144d8289-078b-45cd-9539-901b6c72a980\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-djctr" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.698735 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-2fdxd\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.698764 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2bzc\" (UniqueName: \"kubernetes.io/projected/adcf399e-d9c8-4495-9d73-458228398c5c-kube-api-access-l2bzc\") pod \"downloads-7954f5f757-7wl5q\" (UID: \"adcf399e-d9c8-4495-9d73-458228398c5c\") " pod="openshift-console/downloads-7954f5f757-7wl5q" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.698792 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8pnl\" (UniqueName: 
\"kubernetes.io/projected/8c1ec652-679e-4b4e-8f3a-83f39b7c9bec-kube-api-access-l8pnl\") pod \"authentication-operator-69f744f599-g4h97\" (UID: \"8c1ec652-679e-4b4e-8f3a-83f39b7c9bec\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g4h97" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.698818 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41546796-854f-46bf-9b24-e2b51d6890a5-client-ca\") pod \"controller-manager-879f6c89f-lzzb6\" (UID: \"41546796-854f-46bf-9b24-e2b51d6890a5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lzzb6" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.698834 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a23bf6a-86ab-4319-8a5a-e447509ac03f-client-ca\") pod \"route-controller-manager-6576b87f9c-sg4dq\" (UID: \"1a23bf6a-86ab-4319-8a5a-e447509ac03f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sg4dq" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.698860 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c1ec652-679e-4b4e-8f3a-83f39b7c9bec-serving-cert\") pod \"authentication-operator-69f744f599-g4h97\" (UID: \"8c1ec652-679e-4b4e-8f3a-83f39b7c9bec\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g4h97" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.698972 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41546796-854f-46bf-9b24-e2b51d6890a5-config\") pod \"controller-manager-879f6c89f-lzzb6\" (UID: \"41546796-854f-46bf-9b24-e2b51d6890a5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lzzb6" Jan 29 06:45:06 crc 
kubenswrapper[4826]: I0129 06:45:06.699151 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c6613300-6034-4de0-923f-9ed7ac56ddfa-auth-proxy-config\") pod \"machine-approver-56656f9798-gxgts\" (UID: \"c6613300-6034-4de0-923f-9ed7ac56ddfa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gxgts" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.699184 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55aa777b-0339-400d-bcdb-63f1d464b03b-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tpz64\" (UID: \"55aa777b-0339-400d-bcdb-63f1d464b03b\") " pod="openshift-apiserver/apiserver-76f77b778f-tpz64" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.699201 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-service-ca\") pod \"console-f9d7485db-t4qwq\" (UID: \"9bc5b6b0-9626-4ae0-b053-1dae3c13dd47\") " pod="openshift-console/console-f9d7485db-t4qwq" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.699229 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b59ae2a8-2233-43a3-a1ac-cd5ead659130-available-featuregates\") pod \"openshift-config-operator-7777fb866f-w4j5q\" (UID: \"b59ae2a8-2233-43a3-a1ac-cd5ead659130\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w4j5q" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.699341 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a23bf6a-86ab-4319-8a5a-e447509ac03f-serving-cert\") pod 
\"route-controller-manager-6576b87f9c-sg4dq\" (UID: \"1a23bf6a-86ab-4319-8a5a-e447509ac03f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sg4dq" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.699395 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7da7e397-19dd-4eaa-86bc-44e555785978-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xnm5r\" (UID: \"7da7e397-19dd-4eaa-86bc-44e555785978\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xnm5r" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.699448 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-2fdxd\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.699476 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdprl\" (UniqueName: \"kubernetes.io/projected/20f5d736-918d-438f-a419-e37ca4242df9-kube-api-access-gdprl\") pod \"apiserver-7bbb656c7d-hjdjf\" (UID: \"20f5d736-918d-438f-a419-e37ca4242df9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjdjf" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.699493 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9b12f8e-c8d1-499f-99b5-a4c8970c55ab-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-fwx5q\" (UID: \"b9b12f8e-c8d1-499f-99b5-a4c8970c55ab\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fwx5q" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.699513 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-2fdxd\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.699547 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/41546796-854f-46bf-9b24-e2b51d6890a5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lzzb6\" (UID: \"41546796-854f-46bf-9b24-e2b51d6890a5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lzzb6" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.699577 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/53042c19-99ed-40d1-ac8c-7d07d03ec763-trusted-ca\") pod \"ingress-operator-5b745b69d9-lg7nk\" (UID: \"53042c19-99ed-40d1-ac8c-7d07d03ec763\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lg7nk" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.699622 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c1ec652-679e-4b4e-8f3a-83f39b7c9bec-config\") pod \"authentication-operator-69f744f599-g4h97\" (UID: \"8c1ec652-679e-4b4e-8f3a-83f39b7c9bec\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g4h97" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.701947 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/144d8289-078b-45cd-9539-901b6c72a980-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-djctr\" (UID: \"144d8289-078b-45cd-9539-901b6c72a980\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-djctr" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.702179 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/41546796-854f-46bf-9b24-e2b51d6890a5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lzzb6\" (UID: \"41546796-854f-46bf-9b24-e2b51d6890a5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lzzb6" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.702236 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/df9bd5fb-3887-44e9-8b90-b5a8611ff50a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-cj5mb\" (UID: \"df9bd5fb-3887-44e9-8b90-b5a8611ff50a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cj5mb" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.702264 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c6613300-6034-4de0-923f-9ed7ac56ddfa-machine-approver-tls\") pod \"machine-approver-56656f9798-gxgts\" (UID: \"c6613300-6034-4de0-923f-9ed7ac56ddfa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gxgts" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.702287 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-2fdxd\" 
(UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.702331 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-2fdxd\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.702351 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a23bf6a-86ab-4319-8a5a-e447509ac03f-config\") pod \"route-controller-manager-6576b87f9c-sg4dq\" (UID: \"1a23bf6a-86ab-4319-8a5a-e447509ac03f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sg4dq" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.702367 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b59ae2a8-2233-43a3-a1ac-cd5ead659130-serving-cert\") pod \"openshift-config-operator-7777fb866f-w4j5q\" (UID: \"b59ae2a8-2233-43a3-a1ac-cd5ead659130\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w4j5q" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.702388 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20f5d736-918d-438f-a419-e37ca4242df9-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-hjdjf\" (UID: \"20f5d736-918d-438f-a419-e37ca4242df9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjdjf" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.702402 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/618ab45f-146f-4e1a-a92b-aaa531cede89-service-ca-bundle\") pod \"router-default-5444994796-nbsns\" (UID: \"618ab45f-146f-4e1a-a92b-aaa531cede89\") " pod="openshift-ingress/router-default-5444994796-nbsns" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.702418 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-2fdxd\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.702457 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f6370c6-e353-4dcc-916c-26406b8ff40a-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-578bc\" (UID: \"0f6370c6-e353-4dcc-916c-26406b8ff40a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-578bc" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.702510 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20f5d736-918d-438f-a419-e37ca4242df9-audit-policies\") pod \"apiserver-7bbb656c7d-hjdjf\" (UID: \"20f5d736-918d-438f-a419-e37ca4242df9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjdjf" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.702529 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/55aa777b-0339-400d-bcdb-63f1d464b03b-encryption-config\") pod \"apiserver-76f77b778f-tpz64\" (UID: 
\"55aa777b-0339-400d-bcdb-63f1d464b03b\") " pod="openshift-apiserver/apiserver-76f77b778f-tpz64" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.702665 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f6370c6-e353-4dcc-916c-26406b8ff40a-config\") pod \"openshift-apiserver-operator-796bbdcf4f-578bc\" (UID: \"0f6370c6-e353-4dcc-916c-26406b8ff40a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-578bc" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.702694 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/53042c19-99ed-40d1-ac8c-7d07d03ec763-metrics-tls\") pod \"ingress-operator-5b745b69d9-lg7nk\" (UID: \"53042c19-99ed-40d1-ac8c-7d07d03ec763\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lg7nk" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.702732 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/98eff8b7-3e41-407f-880b-8cf237b9886f-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-m4cbp\" (UID: \"98eff8b7-3e41-407f-880b-8cf237b9886f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m4cbp" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.702740 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41546796-854f-46bf-9b24-e2b51d6890a5-client-ca\") pod \"controller-manager-879f6c89f-lzzb6\" (UID: \"41546796-854f-46bf-9b24-e2b51d6890a5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lzzb6" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.702780 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-2fdxd\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.702804 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvp59\" (UniqueName: \"kubernetes.io/projected/516d6698-0690-4e9b-93dc-a65c873eac43-kube-api-access-jvp59\") pod \"migrator-59844c95c7-2s49r\" (UID: \"516d6698-0690-4e9b-93dc-a65c873eac43\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2s49r" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.702835 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-oauth-serving-cert\") pod \"console-f9d7485db-t4qwq\" (UID: \"9bc5b6b0-9626-4ae0-b053-1dae3c13dd47\") " pod="openshift-console/console-f9d7485db-t4qwq" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.702873 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2svg\" (UniqueName: \"kubernetes.io/projected/41546796-854f-46bf-9b24-e2b51d6890a5-kube-api-access-p2svg\") pod \"controller-manager-879f6c89f-lzzb6\" (UID: \"41546796-854f-46bf-9b24-e2b51d6890a5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lzzb6" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.702891 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b31f55be-1122-4a3a-b381-469b9363c0d6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-fk6kh\" (UID: \"b31f55be-1122-4a3a-b381-469b9363c0d6\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fk6kh" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.702924 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b31f55be-1122-4a3a-b381-469b9363c0d6-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-fk6kh\" (UID: \"b31f55be-1122-4a3a-b381-469b9363c0d6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fk6kh" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.702932 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41546796-854f-46bf-9b24-e2b51d6890a5-serving-cert\") pod \"controller-manager-879f6c89f-lzzb6\" (UID: \"41546796-854f-46bf-9b24-e2b51d6890a5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lzzb6" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.702940 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwpck\" (UniqueName: \"kubernetes.io/projected/98eff8b7-3e41-407f-880b-8cf237b9886f-kube-api-access-fwpck\") pod \"machine-config-controller-84d6567774-m4cbp\" (UID: \"98eff8b7-3e41-407f-880b-8cf237b9886f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m4cbp" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.702959 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hdnd\" (UniqueName: \"kubernetes.io/projected/53042c19-99ed-40d1-ac8c-7d07d03ec763-kube-api-access-5hdnd\") pod \"ingress-operator-5b745b69d9-lg7nk\" (UID: \"53042c19-99ed-40d1-ac8c-7d07d03ec763\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lg7nk" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.703051 4826 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2408cff0-08c6-4c02-8fe5-8a92a1ddb6fa-metrics-tls\") pod \"dns-operator-744455d44c-2qc9p\" (UID: \"2408cff0-08c6-4c02-8fe5-8a92a1ddb6fa\") " pod="openshift-dns-operator/dns-operator-744455d44c-2qc9p" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.703263 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/53042c19-99ed-40d1-ac8c-7d07d03ec763-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lg7nk\" (UID: \"53042c19-99ed-40d1-ac8c-7d07d03ec763\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lg7nk" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.703323 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/df9bd5fb-3887-44e9-8b90-b5a8611ff50a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-cj5mb\" (UID: \"df9bd5fb-3887-44e9-8b90-b5a8611ff50a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cj5mb" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.703343 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmjvj\" (UniqueName: \"kubernetes.io/projected/55aa777b-0339-400d-bcdb-63f1d464b03b-kube-api-access-kmjvj\") pod \"apiserver-76f77b778f-tpz64\" (UID: \"55aa777b-0339-400d-bcdb-63f1d464b03b\") " pod="openshift-apiserver/apiserver-76f77b778f-tpz64" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.703357 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fht9n\" (UniqueName: \"kubernetes.io/projected/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-kube-api-access-fht9n\") pod \"console-f9d7485db-t4qwq\" (UID: 
\"9bc5b6b0-9626-4ae0-b053-1dae3c13dd47\") " pod="openshift-console/console-f9d7485db-t4qwq" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.703444 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/618ab45f-146f-4e1a-a92b-aaa531cede89-stats-auth\") pod \"router-default-5444994796-nbsns\" (UID: \"618ab45f-146f-4e1a-a92b-aaa531cede89\") " pod="openshift-ingress/router-default-5444994796-nbsns" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.703464 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/55aa777b-0339-400d-bcdb-63f1d464b03b-etcd-client\") pod \"apiserver-76f77b778f-tpz64\" (UID: \"55aa777b-0339-400d-bcdb-63f1d464b03b\") " pod="openshift-apiserver/apiserver-76f77b778f-tpz64" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.703480 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-2fdxd\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.705871 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f6370c6-e353-4dcc-916c-26406b8ff40a-config\") pod \"openshift-apiserver-operator-796bbdcf4f-578bc\" (UID: \"0f6370c6-e353-4dcc-916c-26406b8ff40a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-578bc" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.706119 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0f6370c6-e353-4dcc-916c-26406b8ff40a-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-578bc\" (UID: \"0f6370c6-e353-4dcc-916c-26406b8ff40a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-578bc" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.707697 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.708595 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.709020 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.710071 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.711850 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.714175 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.717829 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sg4dq"] Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.723404 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tv2tp"] Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.725932 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.727570 4826 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.732916 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.735758 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kh889"] Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.736735 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-kh889" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.739158 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-578bc"] Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.740493 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5lrns"] Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.743770 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9vgz7"] Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.744398 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9vgz7" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.745575 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hnfnz"] Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.746343 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hnfnz" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.748038 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cwg5c"] Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.754031 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.755401 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5nbf4"] Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.755890 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cwg5c" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.756036 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2fdxd"] Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.756056 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-hjdjf"] Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.756177 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5nbf4" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.757236 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fjvjz"] Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.758076 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fjvjz"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.762922 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bmqnb"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.763643 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494485-bfbgl"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.764185 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bmqnb"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.768388 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-h9bff"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.768778 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9hk7n"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.769093 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-h9bff"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.769136 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494485-bfbgl"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.769988 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-xq9wq"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.770362 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xq9wq"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.770518 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-9hk7n"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.771067 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.771887 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lg7nk"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.773262 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fk6kh"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.774379 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-r7682"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.774816 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-r7682"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.775546 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-g4h97"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.777022 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-4599s"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.777805 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4599s"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.779216 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-2957f"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.780276 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-2957f"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.782379 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-m4cbp"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.783485 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qrd8x"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.784584 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kh889"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.787517 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fjvjz"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.787566 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cj5mb"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.788371 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494485-bfbgl"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.788592 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.789520 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lzzb6"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.790965 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cwg5c"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.792613 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-hpvzg"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.793747 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-w4j5q"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.796624 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xnm5r"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.800472 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7j48t"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.803057 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-2s49r"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.804074 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tpz64"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.804499 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2408cff0-08c6-4c02-8fe5-8a92a1ddb6fa-metrics-tls\") pod \"dns-operator-744455d44c-2qc9p\" (UID: \"2408cff0-08c6-4c02-8fe5-8a92a1ddb6fa\") " pod="openshift-dns-operator/dns-operator-744455d44c-2qc9p"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.804528 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/53042c19-99ed-40d1-ac8c-7d07d03ec763-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lg7nk\" (UID: \"53042c19-99ed-40d1-ac8c-7d07d03ec763\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lg7nk"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.804549 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/df9bd5fb-3887-44e9-8b90-b5a8611ff50a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-cj5mb\" (UID: \"df9bd5fb-3887-44e9-8b90-b5a8611ff50a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cj5mb"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.804569 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmjvj\" (UniqueName: \"kubernetes.io/projected/55aa777b-0339-400d-bcdb-63f1d464b03b-kube-api-access-kmjvj\") pod \"apiserver-76f77b778f-tpz64\" (UID: \"55aa777b-0339-400d-bcdb-63f1d464b03b\") " pod="openshift-apiserver/apiserver-76f77b778f-tpz64"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.804587 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/55aa777b-0339-400d-bcdb-63f1d464b03b-etcd-client\") pod \"apiserver-76f77b778f-tpz64\" (UID: \"55aa777b-0339-400d-bcdb-63f1d464b03b\") " pod="openshift-apiserver/apiserver-76f77b778f-tpz64"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.804605 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-2fdxd\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.804626 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fht9n\" (UniqueName: \"kubernetes.io/projected/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-kube-api-access-fht9n\") pod \"console-f9d7485db-t4qwq\" (UID: \"9bc5b6b0-9626-4ae0-b053-1dae3c13dd47\") " pod="openshift-console/console-f9d7485db-t4qwq"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.804644 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/618ab45f-146f-4e1a-a92b-aaa531cede89-stats-auth\") pod \"router-default-5444994796-nbsns\" (UID: \"618ab45f-146f-4e1a-a92b-aaa531cede89\") " pod="openshift-ingress/router-default-5444994796-nbsns"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.804663 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/20f5d736-918d-438f-a419-e37ca4242df9-encryption-config\") pod \"apiserver-7bbb656c7d-hjdjf\" (UID: \"20f5d736-918d-438f-a419-e37ca4242df9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjdjf"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.804681 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-trusted-ca-bundle\") pod \"console-f9d7485db-t4qwq\" (UID: \"9bc5b6b0-9626-4ae0-b053-1dae3c13dd47\") " pod="openshift-console/console-f9d7485db-t4qwq"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.804700 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/55aa777b-0339-400d-bcdb-63f1d464b03b-node-pullsecrets\") pod \"apiserver-76f77b778f-tpz64\" (UID: \"55aa777b-0339-400d-bcdb-63f1d464b03b\") " pod="openshift-apiserver/apiserver-76f77b778f-tpz64"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.804723 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20f5d736-918d-438f-a419-e37ca4242df9-serving-cert\") pod \"apiserver-7bbb656c7d-hjdjf\" (UID: \"20f5d736-918d-438f-a419-e37ca4242df9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjdjf"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.804739 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/30bc5222-e3c7-4cad-8e68-d39368e9d00d-audit-policies\") pod \"oauth-openshift-558db77b4-2fdxd\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.804753 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65m4t\" (UniqueName: \"kubernetes.io/projected/618ab45f-146f-4e1a-a92b-aaa531cede89-kube-api-access-65m4t\") pod \"router-default-5444994796-nbsns\" (UID: \"618ab45f-146f-4e1a-a92b-aaa531cede89\") " pod="openshift-ingress/router-default-5444994796-nbsns"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.804769 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/20f5d736-918d-438f-a419-e37ca4242df9-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-hjdjf\" (UID: \"20f5d736-918d-438f-a419-e37ca4242df9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjdjf"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.804796 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64hpj\" (UniqueName: \"kubernetes.io/projected/2408cff0-08c6-4c02-8fe5-8a92a1ddb6fa-kube-api-access-64hpj\") pod \"dns-operator-744455d44c-2qc9p\" (UID: \"2408cff0-08c6-4c02-8fe5-8a92a1ddb6fa\") " pod="openshift-dns-operator/dns-operator-744455d44c-2qc9p"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.804815 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c1ec652-679e-4b4e-8f3a-83f39b7c9bec-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-g4h97\" (UID: \"8c1ec652-679e-4b4e-8f3a-83f39b7c9bec\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g4h97"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.804833 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/55aa777b-0339-400d-bcdb-63f1d464b03b-etcd-serving-ca\") pod \"apiserver-76f77b778f-tpz64\" (UID: \"55aa777b-0339-400d-bcdb-63f1d464b03b\") " pod="openshift-apiserver/apiserver-76f77b778f-tpz64"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.804848 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/55aa777b-0339-400d-bcdb-63f1d464b03b-audit-dir\") pod \"apiserver-76f77b778f-tpz64\" (UID: \"55aa777b-0339-400d-bcdb-63f1d464b03b\") " pod="openshift-apiserver/apiserver-76f77b778f-tpz64"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.804879 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8618dc83-b333-4237-b792-57114fffc127-trusted-ca\") pod \"console-operator-58897d9998-hpvzg\" (UID: \"8618dc83-b333-4237-b792-57114fffc127\") " pod="openshift-console-operator/console-operator-58897d9998-hpvzg"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.804896 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8618dc83-b333-4237-b792-57114fffc127-serving-cert\") pod \"console-operator-58897d9998-hpvzg\" (UID: \"8618dc83-b333-4237-b792-57114fffc127\") " pod="openshift-console-operator/console-operator-58897d9998-hpvzg"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.804920 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c1ec652-679e-4b4e-8f3a-83f39b7c9bec-service-ca-bundle\") pod \"authentication-operator-69f744f599-g4h97\" (UID: \"8c1ec652-679e-4b4e-8f3a-83f39b7c9bec\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g4h97"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.804940 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/55aa777b-0339-400d-bcdb-63f1d464b03b-audit\") pod \"apiserver-76f77b778f-tpz64\" (UID: \"55aa777b-0339-400d-bcdb-63f1d464b03b\") " pod="openshift-apiserver/apiserver-76f77b778f-tpz64"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.804970 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngg5l\" (UniqueName: \"kubernetes.io/projected/30bc5222-e3c7-4cad-8e68-d39368e9d00d-kube-api-access-ngg5l\") pod \"oauth-openshift-558db77b4-2fdxd\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.804995 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-2fdxd\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.805027 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mwhf\" (UniqueName: \"kubernetes.io/projected/7da7e397-19dd-4eaa-86bc-44e555785978-kube-api-access-4mwhf\") pod \"control-plane-machine-set-operator-78cbb6b69f-xnm5r\" (UID: \"7da7e397-19dd-4eaa-86bc-44e555785978\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xnm5r"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.805047 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/55aa777b-0339-400d-bcdb-63f1d464b03b-image-import-ca\") pod \"apiserver-76f77b778f-tpz64\" (UID: \"55aa777b-0339-400d-bcdb-63f1d464b03b\") " pod="openshift-apiserver/apiserver-76f77b778f-tpz64"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.805068 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6613300-6034-4de0-923f-9ed7ac56ddfa-config\") pod \"machine-approver-56656f9798-gxgts\" (UID: \"c6613300-6034-4de0-923f-9ed7ac56ddfa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gxgts"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.805092 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5384f8de-a07b-4bec-9366-61227201eb43-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-tv2tp\" (UID: \"5384f8de-a07b-4bec-9366-61227201eb43\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tv2tp"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.805113 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/30bc5222-e3c7-4cad-8e68-d39368e9d00d-audit-dir\") pod \"oauth-openshift-558db77b4-2fdxd\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.805134 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/98eff8b7-3e41-407f-880b-8cf237b9886f-proxy-tls\") pod \"machine-config-controller-84d6567774-m4cbp\" (UID: \"98eff8b7-3e41-407f-880b-8cf237b9886f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m4cbp"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.805154 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-console-config\") pod \"console-f9d7485db-t4qwq\" (UID: \"9bc5b6b0-9626-4ae0-b053-1dae3c13dd47\") " pod="openshift-console/console-f9d7485db-t4qwq"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.805176 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrvc9\" (UniqueName: \"kubernetes.io/projected/c6613300-6034-4de0-923f-9ed7ac56ddfa-kube-api-access-wrvc9\") pod \"machine-approver-56656f9798-gxgts\" (UID: \"c6613300-6034-4de0-923f-9ed7ac56ddfa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gxgts"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.805208 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b31f55be-1122-4a3a-b381-469b9363c0d6-config\") pod \"kube-controller-manager-operator-78b949d7b-fk6kh\" (UID: \"b31f55be-1122-4a3a-b381-469b9363c0d6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fk6kh"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.805233 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/df9bd5fb-3887-44e9-8b90-b5a8611ff50a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-cj5mb\" (UID: \"df9bd5fb-3887-44e9-8b90-b5a8611ff50a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cj5mb"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.805255 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/618ab45f-146f-4e1a-a92b-aaa531cede89-metrics-certs\") pod \"router-default-5444994796-nbsns\" (UID: \"618ab45f-146f-4e1a-a92b-aaa531cede89\") " pod="openshift-ingress/router-default-5444994796-nbsns"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.805278 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-2fdxd\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.805389 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bmqnb"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.805399 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55aa777b-0339-400d-bcdb-63f1d464b03b-serving-cert\") pod \"apiserver-76f77b778f-tpz64\" (UID: \"55aa777b-0339-400d-bcdb-63f1d464b03b\") " pod="openshift-apiserver/apiserver-76f77b778f-tpz64"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.805426 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6j7w\" (UniqueName: \"kubernetes.io/projected/5384f8de-a07b-4bec-9366-61227201eb43-kube-api-access-p6j7w\") pod \"cluster-samples-operator-665b6dd947-tv2tp\" (UID: \"5384f8de-a07b-4bec-9366-61227201eb43\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tv2tp"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.805448 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/20f5d736-918d-438f-a419-e37ca4242df9-etcd-client\") pod \"apiserver-7bbb656c7d-hjdjf\" (UID: \"20f5d736-918d-438f-a419-e37ca4242df9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjdjf"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.805471 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwncc\" (UniqueName: \"kubernetes.io/projected/b59ae2a8-2233-43a3-a1ac-cd5ead659130-kube-api-access-wwncc\") pod \"openshift-config-operator-7777fb866f-w4j5q\" (UID: \"b59ae2a8-2233-43a3-a1ac-cd5ead659130\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w4j5q"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.805497 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxmfq\" (UniqueName: \"kubernetes.io/projected/b9b12f8e-c8d1-499f-99b5-a4c8970c55ab-kube-api-access-jxmfq\") pod \"openshift-controller-manager-operator-756b6f6bc6-fwx5q\" (UID: \"b9b12f8e-c8d1-499f-99b5-a4c8970c55ab\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fwx5q"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.805534 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/618ab45f-146f-4e1a-a92b-aaa531cede89-default-certificate\") pod \"router-default-5444994796-nbsns\" (UID: \"618ab45f-146f-4e1a-a92b-aaa531cede89\") " pod="openshift-ingress/router-default-5444994796-nbsns"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.805557 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-console-serving-cert\") pod \"console-f9d7485db-t4qwq\" (UID: \"9bc5b6b0-9626-4ae0-b053-1dae3c13dd47\") " pod="openshift-console/console-f9d7485db-t4qwq"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.805584 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55aa777b-0339-400d-bcdb-63f1d464b03b-config\") pod \"apiserver-76f77b778f-tpz64\" (UID: \"55aa777b-0339-400d-bcdb-63f1d464b03b\") " pod="openshift-apiserver/apiserver-76f77b778f-tpz64"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.805610 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-2fdxd\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.805638 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20f5d736-918d-438f-a419-e37ca4242df9-audit-dir\") pod \"apiserver-7bbb656c7d-hjdjf\" (UID: \"20f5d736-918d-438f-a419-e37ca4242df9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjdjf"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.805660 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5k2v\" (UniqueName: \"kubernetes.io/projected/df9bd5fb-3887-44e9-8b90-b5a8611ff50a-kube-api-access-f5k2v\") pod \"cluster-image-registry-operator-dc59b4c8b-cj5mb\" (UID: \"df9bd5fb-3887-44e9-8b90-b5a8611ff50a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cj5mb"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.805692 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl2gv\" (UniqueName: \"kubernetes.io/projected/8618dc83-b333-4237-b792-57114fffc127-kube-api-access-dl2gv\") pod \"console-operator-58897d9998-hpvzg\" (UID: \"8618dc83-b333-4237-b792-57114fffc127\") " pod="openshift-console-operator/console-operator-58897d9998-hpvzg"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.805725 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r9qj\" (UniqueName: \"kubernetes.io/projected/1a23bf6a-86ab-4319-8a5a-e447509ac03f-kube-api-access-8r9qj\") pod \"route-controller-manager-6576b87f9c-sg4dq\" (UID: \"1a23bf6a-86ab-4319-8a5a-e447509ac03f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sg4dq"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.805749 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-console-oauth-config\") pod \"console-f9d7485db-t4qwq\" (UID: \"9bc5b6b0-9626-4ae0-b053-1dae3c13dd47\") " pod="openshift-console/console-f9d7485db-t4qwq"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.805777 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9b12f8e-c8d1-499f-99b5-a4c8970c55ab-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-fwx5q\" (UID: \"b9b12f8e-c8d1-499f-99b5-a4c8970c55ab\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fwx5q"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.805805 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-2fdxd\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.805835 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2bzc\" (UniqueName: \"kubernetes.io/projected/adcf399e-d9c8-4495-9d73-458228398c5c-kube-api-access-l2bzc\") pod \"downloads-7954f5f757-7wl5q\" (UID: \"adcf399e-d9c8-4495-9d73-458228398c5c\") " pod="openshift-console/downloads-7954f5f757-7wl5q"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.805867 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8pnl\" (UniqueName: \"kubernetes.io/projected/8c1ec652-679e-4b4e-8f3a-83f39b7c9bec-kube-api-access-l8pnl\") pod \"authentication-operator-69f744f599-g4h97\" (UID: \"8c1ec652-679e-4b4e-8f3a-83f39b7c9bec\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g4h97"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.805897 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a23bf6a-86ab-4319-8a5a-e447509ac03f-client-ca\") pod \"route-controller-manager-6576b87f9c-sg4dq\" (UID: \"1a23bf6a-86ab-4319-8a5a-e447509ac03f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sg4dq"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.805920 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c1ec652-679e-4b4e-8f3a-83f39b7c9bec-serving-cert\") pod \"authentication-operator-69f744f599-g4h97\" (UID: \"8c1ec652-679e-4b4e-8f3a-83f39b7c9bec\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g4h97"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.805948 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c6613300-6034-4de0-923f-9ed7ac56ddfa-auth-proxy-config\") pod \"machine-approver-56656f9798-gxgts\" (UID: \"c6613300-6034-4de0-923f-9ed7ac56ddfa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gxgts"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.805976 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55aa777b-0339-400d-bcdb-63f1d464b03b-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tpz64\" (UID: \"55aa777b-0339-400d-bcdb-63f1d464b03b\") " pod="openshift-apiserver/apiserver-76f77b778f-tpz64"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.805995 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/30bc5222-e3c7-4cad-8e68-d39368e9d00d-audit-policies\") pod \"oauth-openshift-558db77b4-2fdxd\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.806001 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b59ae2a8-2233-43a3-a1ac-cd5ead659130-available-featuregates\") pod \"openshift-config-operator-7777fb866f-w4j5q\" (UID: \"b59ae2a8-2233-43a3-a1ac-cd5ead659130\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w4j5q"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.806052 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-service-ca\") pod \"console-f9d7485db-t4qwq\" (UID: \"9bc5b6b0-9626-4ae0-b053-1dae3c13dd47\") " pod="openshift-console/console-f9d7485db-t4qwq"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.806074 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8618dc83-b333-4237-b792-57114fffc127-config\") pod \"console-operator-58897d9998-hpvzg\" (UID: \"8618dc83-b333-4237-b792-57114fffc127\") " pod="openshift-console-operator/console-operator-58897d9998-hpvzg"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.806095 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a23bf6a-86ab-4319-8a5a-e447509ac03f-serving-cert\") pod \"route-controller-manager-6576b87f9c-sg4dq\" (UID: \"1a23bf6a-86ab-4319-8a5a-e447509ac03f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sg4dq"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.806115 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7da7e397-19dd-4eaa-86bc-44e555785978-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xnm5r\" (UID: \"7da7e397-19dd-4eaa-86bc-44e555785978\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xnm5r"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.806139 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdprl\" (UniqueName: \"kubernetes.io/projected/20f5d736-918d-438f-a419-e37ca4242df9-kube-api-access-gdprl\") pod \"apiserver-7bbb656c7d-hjdjf\" (UID: \"20f5d736-918d-438f-a419-e37ca4242df9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjdjf"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.806145 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vp6lh"]
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.806157 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9b12f8e-c8d1-499f-99b5-a4c8970c55ab-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-fwx5q\" (UID: \"b9b12f8e-c8d1-499f-99b5-a4c8970c55ab\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fwx5q"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.806176 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-2fdxd\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.806194 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-2fdxd\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.806210 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/53042c19-99ed-40d1-ac8c-7d07d03ec763-trusted-ca\") pod \"ingress-operator-5b745b69d9-lg7nk\" (UID: \"53042c19-99ed-40d1-ac8c-7d07d03ec763\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lg7nk"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.806239 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c1ec652-679e-4b4e-8f3a-83f39b7c9bec-config\") pod \"authentication-operator-69f744f599-g4h97\" (UID: \"8c1ec652-679e-4b4e-8f3a-83f39b7c9bec\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g4h97"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.806254 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-2fdxd\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.806273 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-2fdxd\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.806324 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a23bf6a-86ab-4319-8a5a-e447509ac03f-config\") pod \"route-controller-manager-6576b87f9c-sg4dq\" (UID: \"1a23bf6a-86ab-4319-8a5a-e447509ac03f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sg4dq"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.806342 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b59ae2a8-2233-43a3-a1ac-cd5ead659130-serving-cert\") pod \"openshift-config-operator-7777fb866f-w4j5q\" (UID: \"b59ae2a8-2233-43a3-a1ac-cd5ead659130\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w4j5q"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.806367 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/df9bd5fb-3887-44e9-8b90-b5a8611ff50a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-cj5mb\" (UID: \"df9bd5fb-3887-44e9-8b90-b5a8611ff50a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cj5mb"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.806419 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c6613300-6034-4de0-923f-9ed7ac56ddfa-machine-approver-tls\") pod \"machine-approver-56656f9798-gxgts\" (UID: \"c6613300-6034-4de0-923f-9ed7ac56ddfa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gxgts"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.806433 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b59ae2a8-2233-43a3-a1ac-cd5ead659130-available-featuregates\") pod \"openshift-config-operator-7777fb866f-w4j5q\" (UID: \"b59ae2a8-2233-43a3-a1ac-cd5ead659130\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w4j5q"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.806438 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-2fdxd\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.806537 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20f5d736-918d-438f-a419-e37ca4242df9-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-hjdjf\" (UID: \"20f5d736-918d-438f-a419-e37ca4242df9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjdjf"
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.806571 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/618ab45f-146f-4e1a-a92b-aaa531cede89-service-ca-bundle\") pod \"router-default-5444994796-nbsns\" (UID: \"618ab45f-146f-4e1a-a92b-aaa531cede89\") " pod="openshift-ingress/router-default-5444994796-nbsns"
Jan 29 06:45:06 crc 
kubenswrapper[4826]: I0129 06:45:06.806594 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20f5d736-918d-438f-a419-e37ca4242df9-audit-policies\") pod \"apiserver-7bbb656c7d-hjdjf\" (UID: \"20f5d736-918d-438f-a419-e37ca4242df9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjdjf" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.806615 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/55aa777b-0339-400d-bcdb-63f1d464b03b-encryption-config\") pod \"apiserver-76f77b778f-tpz64\" (UID: \"55aa777b-0339-400d-bcdb-63f1d464b03b\") " pod="openshift-apiserver/apiserver-76f77b778f-tpz64" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.806635 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/53042c19-99ed-40d1-ac8c-7d07d03ec763-metrics-tls\") pod \"ingress-operator-5b745b69d9-lg7nk\" (UID: \"53042c19-99ed-40d1-ac8c-7d07d03ec763\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lg7nk" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.806661 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/98eff8b7-3e41-407f-880b-8cf237b9886f-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-m4cbp\" (UID: \"98eff8b7-3e41-407f-880b-8cf237b9886f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m4cbp" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.806685 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-2fdxd\" (UID: 
\"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.806720 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvp59\" (UniqueName: \"kubernetes.io/projected/516d6698-0690-4e9b-93dc-a65c873eac43-kube-api-access-jvp59\") pod \"migrator-59844c95c7-2s49r\" (UID: \"516d6698-0690-4e9b-93dc-a65c873eac43\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2s49r" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.806742 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-oauth-serving-cert\") pod \"console-f9d7485db-t4qwq\" (UID: \"9bc5b6b0-9626-4ae0-b053-1dae3c13dd47\") " pod="openshift-console/console-f9d7485db-t4qwq" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.806764 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwpck\" (UniqueName: \"kubernetes.io/projected/98eff8b7-3e41-407f-880b-8cf237b9886f-kube-api-access-fwpck\") pod \"machine-config-controller-84d6567774-m4cbp\" (UID: \"98eff8b7-3e41-407f-880b-8cf237b9886f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m4cbp" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.806790 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hdnd\" (UniqueName: \"kubernetes.io/projected/53042c19-99ed-40d1-ac8c-7d07d03ec763-kube-api-access-5hdnd\") pod \"ingress-operator-5b745b69d9-lg7nk\" (UID: \"53042c19-99ed-40d1-ac8c-7d07d03ec763\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lg7nk" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.808376 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b31f55be-1122-4a3a-b381-469b9363c0d6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-fk6kh\" (UID: \"b31f55be-1122-4a3a-b381-469b9363c0d6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fk6kh" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.808472 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b31f55be-1122-4a3a-b381-469b9363c0d6-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-fk6kh\" (UID: \"b31f55be-1122-4a3a-b381-469b9363c0d6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fk6kh" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.807466 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-2fdxd\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.809071 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/55aa777b-0339-400d-bcdb-63f1d464b03b-etcd-client\") pod \"apiserver-76f77b778f-tpz64\" (UID: \"55aa777b-0339-400d-bcdb-63f1d464b03b\") " pod="openshift-apiserver/apiserver-76f77b778f-tpz64" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.809505 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/20f5d736-918d-438f-a419-e37ca4242df9-encryption-config\") pod \"apiserver-7bbb656c7d-hjdjf\" (UID: \"20f5d736-918d-438f-a419-e37ca4242df9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjdjf" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 
06:45:06.809624 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/20f5d736-918d-438f-a419-e37ca4242df9-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-hjdjf\" (UID: \"20f5d736-918d-438f-a419-e37ca4242df9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjdjf" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.809666 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-2fdxd\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.809704 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/55aa777b-0339-400d-bcdb-63f1d464b03b-node-pullsecrets\") pod \"apiserver-76f77b778f-tpz64\" (UID: \"55aa777b-0339-400d-bcdb-63f1d464b03b\") " pod="openshift-apiserver/apiserver-76f77b778f-tpz64" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.809752 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20f5d736-918d-438f-a419-e37ca4242df9-audit-policies\") pod \"apiserver-7bbb656c7d-hjdjf\" (UID: \"20f5d736-918d-438f-a419-e37ca4242df9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjdjf" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.810667 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-2fdxd\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd" 
Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.810875 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-2fdxd\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.810993 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c6613300-6034-4de0-923f-9ed7ac56ddfa-auth-proxy-config\") pod \"machine-approver-56656f9798-gxgts\" (UID: \"c6613300-6034-4de0-923f-9ed7ac56ddfa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gxgts" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.811372 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55aa777b-0339-400d-bcdb-63f1d464b03b-config\") pod \"apiserver-76f77b778f-tpz64\" (UID: \"55aa777b-0339-400d-bcdb-63f1d464b03b\") " pod="openshift-apiserver/apiserver-76f77b778f-tpz64" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.811577 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.811588 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c1ec652-679e-4b4e-8f3a-83f39b7c9bec-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-g4h97\" (UID: \"8c1ec652-679e-4b4e-8f3a-83f39b7c9bec\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g4h97" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.811961 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55aa777b-0339-400d-bcdb-63f1d464b03b-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tpz64\" (UID: \"55aa777b-0339-400d-bcdb-63f1d464b03b\") " pod="openshift-apiserver/apiserver-76f77b778f-tpz64" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.812007 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/55aa777b-0339-400d-bcdb-63f1d464b03b-etcd-serving-ca\") pod \"apiserver-76f77b778f-tpz64\" (UID: \"55aa777b-0339-400d-bcdb-63f1d464b03b\") " pod="openshift-apiserver/apiserver-76f77b778f-tpz64" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.812033 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/55aa777b-0339-400d-bcdb-63f1d464b03b-audit-dir\") pod \"apiserver-76f77b778f-tpz64\" (UID: \"55aa777b-0339-400d-bcdb-63f1d464b03b\") " pod="openshift-apiserver/apiserver-76f77b778f-tpz64" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.812242 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/55aa777b-0339-400d-bcdb-63f1d464b03b-encryption-config\") pod \"apiserver-76f77b778f-tpz64\" (UID: 
\"55aa777b-0339-400d-bcdb-63f1d464b03b\") " pod="openshift-apiserver/apiserver-76f77b778f-tpz64" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.812245 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c1ec652-679e-4b4e-8f3a-83f39b7c9bec-serving-cert\") pod \"authentication-operator-69f744f599-g4h97\" (UID: \"8c1ec652-679e-4b4e-8f3a-83f39b7c9bec\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g4h97" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.812361 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20f5d736-918d-438f-a419-e37ca4242df9-audit-dir\") pod \"apiserver-7bbb656c7d-hjdjf\" (UID: \"20f5d736-918d-438f-a419-e37ca4242df9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjdjf" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.812598 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.812841 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c1ec652-679e-4b4e-8f3a-83f39b7c9bec-service-ca-bundle\") pod \"authentication-operator-69f744f599-g4h97\" (UID: \"8c1ec652-679e-4b4e-8f3a-83f39b7c9bec\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g4h97" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.812902 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.813771 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20f5d736-918d-438f-a419-e37ca4242df9-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-hjdjf\" (UID: \"20f5d736-918d-438f-a419-e37ca4242df9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjdjf" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.814034 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/98eff8b7-3e41-407f-880b-8cf237b9886f-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-m4cbp\" (UID: \"98eff8b7-3e41-407f-880b-8cf237b9886f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m4cbp" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.814046 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-2fdxd\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.806815 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/df9bd5fb-3887-44e9-8b90-b5a8611ff50a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-cj5mb\" (UID: \"df9bd5fb-3887-44e9-8b90-b5a8611ff50a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cj5mb" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.814571 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-service-ca\") pod \"console-f9d7485db-t4qwq\" (UID: \"9bc5b6b0-9626-4ae0-b053-1dae3c13dd47\") " pod="openshift-console/console-f9d7485db-t4qwq" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.806886 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-trusted-ca-bundle\") pod \"console-f9d7485db-t4qwq\" (UID: \"9bc5b6b0-9626-4ae0-b053-1dae3c13dd47\") " pod="openshift-console/console-f9d7485db-t4qwq" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.814833 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-2957f"] Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.814876 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hnfnz"] Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.814889 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9vgz7"] Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.814922 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-2fdxd\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.815509 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/20f5d736-918d-438f-a419-e37ca4242df9-etcd-client\") pod \"apiserver-7bbb656c7d-hjdjf\" (UID: \"20f5d736-918d-438f-a419-e37ca4242df9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjdjf" Jan 29 06:45:06 crc 
kubenswrapper[4826]: I0129 06:45:06.816318 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-2fdxd\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.816646 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/55aa777b-0339-400d-bcdb-63f1d464b03b-audit\") pod \"apiserver-76f77b778f-tpz64\" (UID: \"55aa777b-0339-400d-bcdb-63f1d464b03b\") " pod="openshift-apiserver/apiserver-76f77b778f-tpz64" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.816985 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6613300-6034-4de0-923f-9ed7ac56ddfa-config\") pod \"machine-approver-56656f9798-gxgts\" (UID: \"c6613300-6034-4de0-923f-9ed7ac56ddfa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gxgts" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.817047 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/30bc5222-e3c7-4cad-8e68-d39368e9d00d-audit-dir\") pod \"oauth-openshift-558db77b4-2fdxd\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.817136 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55aa777b-0339-400d-bcdb-63f1d464b03b-serving-cert\") pod \"apiserver-76f77b778f-tpz64\" (UID: \"55aa777b-0339-400d-bcdb-63f1d464b03b\") " pod="openshift-apiserver/apiserver-76f77b778f-tpz64" Jan 29 06:45:06 crc kubenswrapper[4826]: 
I0129 06:45:06.817458 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a23bf6a-86ab-4319-8a5a-e447509ac03f-client-ca\") pod \"route-controller-manager-6576b87f9c-sg4dq\" (UID: \"1a23bf6a-86ab-4319-8a5a-e447509ac03f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sg4dq" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.817470 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a23bf6a-86ab-4319-8a5a-e447509ac03f-serving-cert\") pod \"route-controller-manager-6576b87f9c-sg4dq\" (UID: \"1a23bf6a-86ab-4319-8a5a-e447509ac03f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sg4dq" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.817700 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2408cff0-08c6-4c02-8fe5-8a92a1ddb6fa-metrics-tls\") pod \"dns-operator-744455d44c-2qc9p\" (UID: \"2408cff0-08c6-4c02-8fe5-8a92a1ddb6fa\") " pod="openshift-dns-operator/dns-operator-744455d44c-2qc9p" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.817795 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-r7682"] Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.818095 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5384f8de-a07b-4bec-9366-61227201eb43-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-tv2tp\" (UID: \"5384f8de-a07b-4bec-9366-61227201eb43\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tv2tp" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.818251 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1a23bf6a-86ab-4319-8a5a-e447509ac03f-config\") pod \"route-controller-manager-6576b87f9c-sg4dq\" (UID: \"1a23bf6a-86ab-4319-8a5a-e447509ac03f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sg4dq" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.818355 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c1ec652-679e-4b4e-8f3a-83f39b7c9bec-config\") pod \"authentication-operator-69f744f599-g4h97\" (UID: \"8c1ec652-679e-4b4e-8f3a-83f39b7c9bec\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g4h97" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.819503 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-2fdxd\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.819566 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5nbf4"] Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.819591 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/df9bd5fb-3887-44e9-8b90-b5a8611ff50a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-cj5mb\" (UID: \"df9bd5fb-3887-44e9-8b90-b5a8611ff50a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cj5mb" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.820237 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/55aa777b-0339-400d-bcdb-63f1d464b03b-image-import-ca\") pod \"apiserver-76f77b778f-tpz64\" (UID: \"55aa777b-0339-400d-bcdb-63f1d464b03b\") " pod="openshift-apiserver/apiserver-76f77b778f-tpz64" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.820817 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/53042c19-99ed-40d1-ac8c-7d07d03ec763-trusted-ca\") pod \"ingress-operator-5b745b69d9-lg7nk\" (UID: \"53042c19-99ed-40d1-ac8c-7d07d03ec763\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lg7nk" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.820872 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4599s"] Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.821389 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-console-serving-cert\") pod \"console-f9d7485db-t4qwq\" (UID: \"9bc5b6b0-9626-4ae0-b053-1dae3c13dd47\") " pod="openshift-console/console-f9d7485db-t4qwq" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.821564 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20f5d736-918d-438f-a419-e37ca4242df9-serving-cert\") pod \"apiserver-7bbb656c7d-hjdjf\" (UID: \"20f5d736-918d-438f-a419-e37ca4242df9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjdjf" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.822447 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fwx5q"] Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.823483 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/c6613300-6034-4de0-923f-9ed7ac56ddfa-machine-approver-tls\") pod \"machine-approver-56656f9798-gxgts\" (UID: \"c6613300-6034-4de0-923f-9ed7ac56ddfa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gxgts" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.823686 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b59ae2a8-2233-43a3-a1ac-cd5ead659130-serving-cert\") pod \"openshift-config-operator-7777fb866f-w4j5q\" (UID: \"b59ae2a8-2233-43a3-a1ac-cd5ead659130\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w4j5q" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.824004 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/53042c19-99ed-40d1-ac8c-7d07d03ec763-metrics-tls\") pod \"ingress-operator-5b745b69d9-lg7nk\" (UID: \"53042c19-99ed-40d1-ac8c-7d07d03ec763\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lg7nk" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.824015 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-console-oauth-config\") pod \"console-f9d7485db-t4qwq\" (UID: \"9bc5b6b0-9626-4ae0-b053-1dae3c13dd47\") " pod="openshift-console/console-f9d7485db-t4qwq" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.825233 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-2fdxd\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.826158 4826 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-2fdxd\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.826331 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9hk7n"] Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.826510 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-2fdxd\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.827895 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-xq9wq"] Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.829861 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.850929 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.870376 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.878468 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-oauth-serving-cert\") pod \"console-f9d7485db-t4qwq\" (UID: 
\"9bc5b6b0-9626-4ae0-b053-1dae3c13dd47\") " pod="openshift-console/console-f9d7485db-t4qwq" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.890332 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.911488 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.912973 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngg5l\" (UniqueName: \"kubernetes.io/projected/30bc5222-e3c7-4cad-8e68-d39368e9d00d-kube-api-access-ngg5l\") pod \"oauth-openshift-558db77b4-2fdxd\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.913902 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dl2gv\" (UniqueName: \"kubernetes.io/projected/8618dc83-b333-4237-b792-57114fffc127-kube-api-access-dl2gv\") pod \"console-operator-58897d9998-hpvzg\" (UID: \"8618dc83-b333-4237-b792-57114fffc127\") " pod="openshift-console-operator/console-operator-58897d9998-hpvzg" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.917268 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8618dc83-b333-4237-b792-57114fffc127-config\") pod \"console-operator-58897d9998-hpvzg\" (UID: \"8618dc83-b333-4237-b792-57114fffc127\") " pod="openshift-console-operator/console-operator-58897d9998-hpvzg" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.917777 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8618dc83-b333-4237-b792-57114fffc127-trusted-ca\") pod 
\"console-operator-58897d9998-hpvzg\" (UID: \"8618dc83-b333-4237-b792-57114fffc127\") " pod="openshift-console-operator/console-operator-58897d9998-hpvzg" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.917937 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8618dc83-b333-4237-b792-57114fffc127-serving-cert\") pod \"console-operator-58897d9998-hpvzg\" (UID: \"8618dc83-b333-4237-b792-57114fffc127\") " pod="openshift-console-operator/console-operator-58897d9998-hpvzg" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.918641 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8618dc83-b333-4237-b792-57114fffc127-config\") pod \"console-operator-58897d9998-hpvzg\" (UID: \"8618dc83-b333-4237-b792-57114fffc127\") " pod="openshift-console-operator/console-operator-58897d9998-hpvzg" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.919036 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8618dc83-b333-4237-b792-57114fffc127-trusted-ca\") pod \"console-operator-58897d9998-hpvzg\" (UID: \"8618dc83-b333-4237-b792-57114fffc127\") " pod="openshift-console-operator/console-operator-58897d9998-hpvzg" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.922510 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8618dc83-b333-4237-b792-57114fffc127-serving-cert\") pod \"console-operator-58897d9998-hpvzg\" (UID: \"8618dc83-b333-4237-b792-57114fffc127\") " pod="openshift-console-operator/console-operator-58897d9998-hpvzg" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.931134 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.950098 
4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.968831 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 29 06:45:06 crc kubenswrapper[4826]: I0129 06:45:06.993964 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.005711 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/618ab45f-146f-4e1a-a92b-aaa531cede89-metrics-certs\") pod \"router-default-5444994796-nbsns\" (UID: \"618ab45f-146f-4e1a-a92b-aaa531cede89\") " pod="openshift-ingress/router-default-5444994796-nbsns" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.009071 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.018056 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/618ab45f-146f-4e1a-a92b-aaa531cede89-service-ca-bundle\") pod \"router-default-5444994796-nbsns\" (UID: \"618ab45f-146f-4e1a-a92b-aaa531cede89\") " pod="openshift-ingress/router-default-5444994796-nbsns" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.029190 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.051090 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.073511 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 29 06:45:07 crc 
kubenswrapper[4826]: I0129 06:45:07.079423 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-console-config\") pod \"console-f9d7485db-t4qwq\" (UID: \"9bc5b6b0-9626-4ae0-b053-1dae3c13dd47\") " pod="openshift-console/console-f9d7485db-t4qwq" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.090021 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.100630 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/618ab45f-146f-4e1a-a92b-aaa531cede89-default-certificate\") pod \"router-default-5444994796-nbsns\" (UID: \"618ab45f-146f-4e1a-a92b-aaa531cede89\") " pod="openshift-ingress/router-default-5444994796-nbsns" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.109785 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.117635 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/618ab45f-146f-4e1a-a92b-aaa531cede89-stats-auth\") pod \"router-default-5444994796-nbsns\" (UID: \"618ab45f-146f-4e1a-a92b-aaa531cede89\") " pod="openshift-ingress/router-default-5444994796-nbsns" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.130100 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.148804 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.168956 4826 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.189900 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.211235 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.230588 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.250455 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.264075 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9b12f8e-c8d1-499f-99b5-a4c8970c55ab-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-fwx5q\" (UID: \"b9b12f8e-c8d1-499f-99b5-a4c8970c55ab\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fwx5q" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.271123 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.289703 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.309957 4826 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.319287 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9b12f8e-c8d1-499f-99b5-a4c8970c55ab-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-fwx5q\" (UID: \"b9b12f8e-c8d1-499f-99b5-a4c8970c55ab\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fwx5q" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.331068 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.342353 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7da7e397-19dd-4eaa-86bc-44e555785978-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xnm5r\" (UID: \"7da7e397-19dd-4eaa-86bc-44e555785978\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xnm5r" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.351514 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.370292 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.390325 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.409873 4826 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.429293 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.439533 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b31f55be-1122-4a3a-b381-469b9363c0d6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-fk6kh\" (UID: \"b31f55be-1122-4a3a-b381-469b9363c0d6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fk6kh" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.449475 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.456981 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b31f55be-1122-4a3a-b381-469b9363c0d6-config\") pod \"kube-controller-manager-operator-78b949d7b-fk6kh\" (UID: \"b31f55be-1122-4a3a-b381-469b9363c0d6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fk6kh" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.470232 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.490505 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.510046 4826 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.529585 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.550259 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.569270 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.590339 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.611801 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.623429 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/98eff8b7-3e41-407f-880b-8cf237b9886f-proxy-tls\") pod \"machine-config-controller-84d6567774-m4cbp\" (UID: \"98eff8b7-3e41-407f-880b-8cf237b9886f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m4cbp" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.670127 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.690567 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.707740 4826 
request.go:700] Waited for 1.010093579s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpprof-cert&limit=500&resourceVersion=0 Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.709843 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.731734 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.750531 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.799396 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbm55\" (UniqueName: \"kubernetes.io/projected/144d8289-078b-45cd-9539-901b6c72a980-kube-api-access-pbm55\") pod \"machine-api-operator-5694c8668f-djctr\" (UID: \"144d8289-078b-45cd-9539-901b6c72a980\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-djctr" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.808630 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6qxzb" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.808721 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.819782 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgccc\" (UniqueName: \"kubernetes.io/projected/0f6370c6-e353-4dcc-916c-26406b8ff40a-kube-api-access-lgccc\") pod \"openshift-apiserver-operator-796bbdcf4f-578bc\" (UID: \"0f6370c6-e353-4dcc-916c-26406b8ff40a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-578bc" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.830222 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2svg\" (UniqueName: \"kubernetes.io/projected/41546796-854f-46bf-9b24-e2b51d6890a5-kube-api-access-p2svg\") pod \"controller-manager-879f6c89f-lzzb6\" (UID: \"41546796-854f-46bf-9b24-e2b51d6890a5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lzzb6" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.830565 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.849815 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.870201 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.891246 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.911667 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 
06:45:07.929641 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.950154 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.970383 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 29 06:45:07 crc kubenswrapper[4826]: I0129 06:45:07.990617 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.010562 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.030488 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.050107 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.070941 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.073660 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lzzb6" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.090158 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.091169 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-djctr" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.110881 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.113667 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-578bc" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.130278 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.150366 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.184716 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.191488 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.210314 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.230422 4826 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.257055 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.270040 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.290433 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.310227 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.325980 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-djctr"] Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.332170 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 29 06:45:08 crc kubenswrapper[4826]: W0129 06:45:08.333530 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod144d8289_078b_45cd_9539_901b6c72a980.slice/crio-d136df1d9124558aa3afc39ffbf3bf4baaf3db5364bd15cfcc4b57504ef29c13 WatchSource:0}: Error finding container d136df1d9124558aa3afc39ffbf3bf4baaf3db5364bd15cfcc4b57504ef29c13: Status 404 returned error can't find the container with id d136df1d9124558aa3afc39ffbf3bf4baaf3db5364bd15cfcc4b57504ef29c13 Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.350817 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 29 06:45:08 crc 
kubenswrapper[4826]: I0129 06:45:08.370037 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.370245 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-djctr" event={"ID":"144d8289-078b-45cd-9539-901b6c72a980","Type":"ContainerStarted","Data":"d136df1d9124558aa3afc39ffbf3bf4baaf3db5364bd15cfcc4b57504ef29c13"} Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.388819 4826 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.410304 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.430365 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.449995 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.470940 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.490376 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.510572 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.531826 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 
06:45:08.549457 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lzzb6"] Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.551672 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-578bc"] Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.552542 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.572257 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.590260 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.629759 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.634697 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.649985 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.671873 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.708863 4826 request.go:700] Waited for 1.904077494s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/serviceaccounts/ingress-operator/token Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.720886 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/df9bd5fb-3887-44e9-8b90-b5a8611ff50a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-cj5mb\" (UID: \"df9bd5fb-3887-44e9-8b90-b5a8611ff50a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cj5mb" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.726901 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/53042c19-99ed-40d1-ac8c-7d07d03ec763-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lg7nk\" (UID: \"53042c19-99ed-40d1-ac8c-7d07d03ec763\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lg7nk" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.743493 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmjvj\" (UniqueName: \"kubernetes.io/projected/55aa777b-0339-400d-bcdb-63f1d464b03b-kube-api-access-kmjvj\") pod \"apiserver-76f77b778f-tpz64\" (UID: \"55aa777b-0339-400d-bcdb-63f1d464b03b\") " pod="openshift-apiserver/apiserver-76f77b778f-tpz64" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.764822 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65m4t\" (UniqueName: \"kubernetes.io/projected/618ab45f-146f-4e1a-a92b-aaa531cede89-kube-api-access-65m4t\") pod \"router-default-5444994796-nbsns\" (UID: \"618ab45f-146f-4e1a-a92b-aaa531cede89\") " pod="openshift-ingress/router-default-5444994796-nbsns" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.792534 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fht9n\" (UniqueName: \"kubernetes.io/projected/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-kube-api-access-fht9n\") pod \"console-f9d7485db-t4qwq\" (UID: \"9bc5b6b0-9626-4ae0-b053-1dae3c13dd47\") " pod="openshift-console/console-f9d7485db-t4qwq" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.806566 4826 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mwhf\" (UniqueName: \"kubernetes.io/projected/7da7e397-19dd-4eaa-86bc-44e555785978-kube-api-access-4mwhf\") pod \"control-plane-machine-set-operator-78cbb6b69f-xnm5r\" (UID: \"7da7e397-19dd-4eaa-86bc-44e555785978\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xnm5r" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.828795 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64hpj\" (UniqueName: \"kubernetes.io/projected/2408cff0-08c6-4c02-8fe5-8a92a1ddb6fa-kube-api-access-64hpj\") pod \"dns-operator-744455d44c-2qc9p\" (UID: \"2408cff0-08c6-4c02-8fe5-8a92a1ddb6fa\") " pod="openshift-dns-operator/dns-operator-744455d44c-2qc9p" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.830841 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.854915 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-tpz64" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.865755 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2bzc\" (UniqueName: \"kubernetes.io/projected/adcf399e-d9c8-4495-9d73-458228398c5c-kube-api-access-l2bzc\") pod \"downloads-7954f5f757-7wl5q\" (UID: \"adcf399e-d9c8-4495-9d73-458228398c5c\") " pod="openshift-console/downloads-7954f5f757-7wl5q" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.866738 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2qc9p" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.889420 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8pnl\" (UniqueName: \"kubernetes.io/projected/8c1ec652-679e-4b4e-8f3a-83f39b7c9bec-kube-api-access-l8pnl\") pod \"authentication-operator-69f744f599-g4h97\" (UID: \"8c1ec652-679e-4b4e-8f3a-83f39b7c9bec\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g4h97" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.906042 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-g4h97" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.919255 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5k2v\" (UniqueName: \"kubernetes.io/projected/df9bd5fb-3887-44e9-8b90-b5a8611ff50a-kube-api-access-f5k2v\") pod \"cluster-image-registry-operator-dc59b4c8b-cj5mb\" (UID: \"df9bd5fb-3887-44e9-8b90-b5a8611ff50a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cj5mb" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.925755 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r9qj\" (UniqueName: \"kubernetes.io/projected/1a23bf6a-86ab-4319-8a5a-e447509ac03f-kube-api-access-8r9qj\") pod \"route-controller-manager-6576b87f9c-sg4dq\" (UID: \"1a23bf6a-86ab-4319-8a5a-e447509ac03f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sg4dq" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.950087 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.950337 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwpck\" 
(UniqueName: \"kubernetes.io/projected/98eff8b7-3e41-407f-880b-8cf237b9886f-kube-api-access-fwpck\") pod \"machine-config-controller-84d6567774-m4cbp\" (UID: \"98eff8b7-3e41-407f-880b-8cf237b9886f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m4cbp" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.960041 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-nbsns" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.978738 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-t4qwq" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.993689 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-7wl5q" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.994677 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 29 06:45:08 crc kubenswrapper[4826]: I0129 06:45:08.999614 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6j7w\" (UniqueName: \"kubernetes.io/projected/5384f8de-a07b-4bec-9366-61227201eb43-kube-api-access-p6j7w\") pod \"cluster-samples-operator-665b6dd947-tv2tp\" (UID: \"5384f8de-a07b-4bec-9366-61227201eb43\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tv2tp" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.011488 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.030206 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xnm5r" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.047365 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hdnd\" (UniqueName: \"kubernetes.io/projected/53042c19-99ed-40d1-ac8c-7d07d03ec763-kube-api-access-5hdnd\") pod \"ingress-operator-5b745b69d9-lg7nk\" (UID: \"53042c19-99ed-40d1-ac8c-7d07d03ec763\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lg7nk" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.061695 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m4cbp" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.064070 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sg4dq" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.066137 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvp59\" (UniqueName: \"kubernetes.io/projected/516d6698-0690-4e9b-93dc-a65c873eac43-kube-api-access-jvp59\") pod \"migrator-59844c95c7-2s49r\" (UID: \"516d6698-0690-4e9b-93dc-a65c873eac43\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2s49r" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.084816 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b31f55be-1122-4a3a-b381-469b9363c0d6-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-fk6kh\" (UID: \"b31f55be-1122-4a3a-b381-469b9363c0d6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fk6kh" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.097334 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tv2tp" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.106253 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrvc9\" (UniqueName: \"kubernetes.io/projected/c6613300-6034-4de0-923f-9ed7ac56ddfa-kube-api-access-wrvc9\") pod \"machine-approver-56656f9798-gxgts\" (UID: \"c6613300-6034-4de0-923f-9ed7ac56ddfa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gxgts" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.117763 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lg7nk" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.123988 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdprl\" (UniqueName: \"kubernetes.io/projected/20f5d736-918d-438f-a419-e37ca4242df9-kube-api-access-gdprl\") pod \"apiserver-7bbb656c7d-hjdjf\" (UID: \"20f5d736-918d-438f-a419-e37ca4242df9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjdjf" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.152239 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwncc\" (UniqueName: \"kubernetes.io/projected/b59ae2a8-2233-43a3-a1ac-cd5ead659130-kube-api-access-wwncc\") pod \"openshift-config-operator-7777fb866f-w4j5q\" (UID: \"b59ae2a8-2233-43a3-a1ac-cd5ead659130\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w4j5q" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.164071 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tpz64"] Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.170787 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxmfq\" (UniqueName: 
\"kubernetes.io/projected/b9b12f8e-c8d1-499f-99b5-a4c8970c55ab-kube-api-access-jxmfq\") pod \"openshift-controller-manager-operator-756b6f6bc6-fwx5q\" (UID: \"b9b12f8e-c8d1-499f-99b5-a4c8970c55ab\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fwx5q" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.187722 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-g4h97"] Jan 29 06:45:09 crc kubenswrapper[4826]: W0129 06:45:09.187926 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55aa777b_0339_400d_bcdb_63f1d464b03b.slice/crio-bde1487c0d53af4e83cf9c7f0a824188ff9169eeae6ca7c196bfe6d7ed2c5039 WatchSource:0}: Error finding container bde1487c0d53af4e83cf9c7f0a824188ff9169eeae6ca7c196bfe6d7ed2c5039: Status 404 returned error can't find the container with id bde1487c0d53af4e83cf9c7f0a824188ff9169eeae6ca7c196bfe6d7ed2c5039 Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.191103 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngg5l\" (UniqueName: \"kubernetes.io/projected/30bc5222-e3c7-4cad-8e68-d39368e9d00d-kube-api-access-ngg5l\") pod \"oauth-openshift-558db77b4-2fdxd\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.198570 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cj5mb" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.217651 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dl2gv\" (UniqueName: \"kubernetes.io/projected/8618dc83-b333-4237-b792-57114fffc127-kube-api-access-dl2gv\") pod \"console-operator-58897d9998-hpvzg\" (UID: \"8618dc83-b333-4237-b792-57114fffc127\") " pod="openshift-console-operator/console-operator-58897d9998-hpvzg" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.250737 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-trusted-ca\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.250998 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/228926c4-7c89-4225-b8e2-eb33491a90f4-config\") pod \"kube-apiserver-operator-766d6c64bb-5lrns\" (UID: \"228926c4-7c89-4225-b8e2-eb33491a90f4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5lrns" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.251025 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-registry-certificates\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.251043 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3fbd21b6-12eb-4b9e-96a3-e0b8f7219a97-etcd-service-ca\") pod \"etcd-operator-b45778765-vp6lh\" (UID: \"3fbd21b6-12eb-4b9e-96a3-e0b8f7219a97\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vp6lh" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.251060 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3fbd21b6-12eb-4b9e-96a3-e0b8f7219a97-etcd-ca\") pod \"etcd-operator-b45778765-vp6lh\" (UID: \"3fbd21b6-12eb-4b9e-96a3-e0b8f7219a97\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vp6lh" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.251078 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/228926c4-7c89-4225-b8e2-eb33491a90f4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-5lrns\" (UID: \"228926c4-7c89-4225-b8e2-eb33491a90f4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5lrns" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.251092 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68473dc9-0fc2-4fd7-a46b-63dd443cef79-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7j48t\" (UID: \"68473dc9-0fc2-4fd7-a46b-63dd443cef79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7j48t" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.251107 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-registry-tls\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.251122 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.251141 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fbd21b6-12eb-4b9e-96a3-e0b8f7219a97-config\") pod \"etcd-operator-b45778765-vp6lh\" (UID: \"3fbd21b6-12eb-4b9e-96a3-e0b8f7219a97\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vp6lh" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.251158 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-bound-sa-token\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.251194 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.251210 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg8qn\" (UniqueName: 
\"kubernetes.io/projected/3fbd21b6-12eb-4b9e-96a3-e0b8f7219a97-kube-api-access-hg8qn\") pod \"etcd-operator-b45778765-vp6lh\" (UID: \"3fbd21b6-12eb-4b9e-96a3-e0b8f7219a97\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vp6lh" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.251233 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/68473dc9-0fc2-4fd7-a46b-63dd443cef79-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7j48t\" (UID: \"68473dc9-0fc2-4fd7-a46b-63dd443cef79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7j48t" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.251248 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68473dc9-0fc2-4fd7-a46b-63dd443cef79-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7j48t\" (UID: \"68473dc9-0fc2-4fd7-a46b-63dd443cef79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7j48t" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.251272 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/228926c4-7c89-4225-b8e2-eb33491a90f4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-5lrns\" (UID: \"228926c4-7c89-4225-b8e2-eb33491a90f4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5lrns" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.251292 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5skd\" (UniqueName: \"kubernetes.io/projected/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-kube-api-access-n5skd\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: 
\"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.251330 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3fbd21b6-12eb-4b9e-96a3-e0b8f7219a97-etcd-client\") pod \"etcd-operator-b45778765-vp6lh\" (UID: \"3fbd21b6-12eb-4b9e-96a3-e0b8f7219a97\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vp6lh" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.251345 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.251373 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3fbd21b6-12eb-4b9e-96a3-e0b8f7219a97-serving-cert\") pod \"etcd-operator-b45778765-vp6lh\" (UID: \"3fbd21b6-12eb-4b9e-96a3-e0b8f7219a97\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vp6lh" Jan 29 06:45:09 crc kubenswrapper[4826]: E0129 06:45:09.251967 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:09.751954578 +0000 UTC m=+93.613747647 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.252603 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.270070 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.318041 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fwx5q" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.336965 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fk6kh" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.337076 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gxgts" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.337351 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2qc9p"] Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.345676 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2s49r" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.353283 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:45:09 crc kubenswrapper[4826]: E0129 06:45:09.353474 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:09.853443607 +0000 UTC m=+93.715236676 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.353705 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0cd7c9fb-ed3e-4b12-94f2-a450b9fad90f-certs\") pod \"machine-config-server-h9bff\" (UID: \"0cd7c9fb-ed3e-4b12-94f2-a450b9fad90f\") " pod="openshift-machine-config-operator/machine-config-server-h9bff" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.353736 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dkr5\" (UniqueName: 
\"kubernetes.io/projected/f11d7042-6756-418a-9e37-6d9d8e5ce14b-kube-api-access-7dkr5\") pod \"service-ca-operator-777779d784-xq9wq\" (UID: \"f11d7042-6756-418a-9e37-6d9d8e5ce14b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xq9wq" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.353812 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/97682c6d-729b-4a9c-9377-cd34e69487e5-metrics-tls\") pod \"dns-default-2957f\" (UID: \"97682c6d-729b-4a9c-9377-cd34e69487e5\") " pod="openshift-dns/dns-default-2957f" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.353833 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7ebf8342-e0f9-413f-811d-57ca9df94f2d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bmqnb\" (UID: \"7ebf8342-e0f9-413f-811d-57ca9df94f2d\") " pod="openshift-marketplace/marketplace-operator-79b997595-bmqnb" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.353882 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.353901 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg8qn\" (UniqueName: \"kubernetes.io/projected/3fbd21b6-12eb-4b9e-96a3-e0b8f7219a97-kube-api-access-hg8qn\") pod \"etcd-operator-b45778765-vp6lh\" (UID: \"3fbd21b6-12eb-4b9e-96a3-e0b8f7219a97\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vp6lh" Jan 29 06:45:09 crc 
kubenswrapper[4826]: I0129 06:45:09.355742 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8f8a7361-1ef3-4c88-9797-93c96329b8c1-srv-cert\") pod \"catalog-operator-68c6474976-qrd8x\" (UID: \"8f8a7361-1ef3-4c88-9797-93c96329b8c1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qrd8x" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.358194 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/68473dc9-0fc2-4fd7-a46b-63dd443cef79-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7j48t\" (UID: \"68473dc9-0fc2-4fd7-a46b-63dd443cef79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7j48t" Jan 29 06:45:09 crc kubenswrapper[4826]: E0129 06:45:09.360653 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:09.858945678 +0000 UTC m=+93.720738747 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.360763 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68473dc9-0fc2-4fd7-a46b-63dd443cef79-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7j48t\" (UID: \"68473dc9-0fc2-4fd7-a46b-63dd443cef79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7j48t" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.360808 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0cd7c9fb-ed3e-4b12-94f2-a450b9fad90f-node-bootstrap-token\") pod \"machine-config-server-h9bff\" (UID: \"0cd7c9fb-ed3e-4b12-94f2-a450b9fad90f\") " pod="openshift-machine-config-operator/machine-config-server-h9bff" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.360841 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6f396321-9263-49fa-9009-59a22725f914-tmpfs\") pod \"packageserver-d55dfcdfc-5nbf4\" (UID: \"6f396321-9263-49fa-9009-59a22725f914\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5nbf4" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.360892 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f11d7042-6756-418a-9e37-6d9d8e5ce14b-serving-cert\") pod \"service-ca-operator-777779d784-xq9wq\" (UID: \"f11d7042-6756-418a-9e37-6d9d8e5ce14b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xq9wq" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.360924 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d3a19180-46ec-4d1c-9656-699f910a0fb1-mountpoint-dir\") pod \"csi-hostpathplugin-9hk7n\" (UID: \"d3a19180-46ec-4d1c-9656-699f910a0fb1\") " pod="hostpath-provisioner/csi-hostpathplugin-9hk7n" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.360980 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfs42\" (UniqueName: \"kubernetes.io/projected/0cd7c9fb-ed3e-4b12-94f2-a450b9fad90f-kube-api-access-wfs42\") pod \"machine-config-server-h9bff\" (UID: \"0cd7c9fb-ed3e-4b12-94f2-a450b9fad90f\") " pod="openshift-machine-config-operator/machine-config-server-h9bff" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.360998 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh669\" (UniqueName: \"kubernetes.io/projected/18a5f5c9-454a-476e-9bfb-1fe5abcf95f9-kube-api-access-nh669\") pod \"collect-profiles-29494485-bfbgl\" (UID: \"18a5f5c9-454a-476e-9bfb-1fe5abcf95f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494485-bfbgl" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.361123 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/228926c4-7c89-4225-b8e2-eb33491a90f4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-5lrns\" (UID: \"228926c4-7c89-4225-b8e2-eb33491a90f4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5lrns" Jan 29 06:45:09 
crc kubenswrapper[4826]: I0129 06:45:09.361153 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6f396321-9263-49fa-9009-59a22725f914-webhook-cert\") pod \"packageserver-d55dfcdfc-5nbf4\" (UID: \"6f396321-9263-49fa-9009-59a22725f914\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5nbf4" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.361270 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7ba0dbb4-b520-4ff8-8235-f145b05583c7-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cwg5c\" (UID: \"7ba0dbb4-b520-4ff8-8235-f145b05583c7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cwg5c" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.361290 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxqrg\" (UniqueName: \"kubernetes.io/projected/97682c6d-729b-4a9c-9377-cd34e69487e5-kube-api-access-hxqrg\") pod \"dns-default-2957f\" (UID: \"97682c6d-729b-4a9c-9377-cd34e69487e5\") " pod="openshift-dns/dns-default-2957f" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.361393 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d3a19180-46ec-4d1c-9656-699f910a0fb1-plugins-dir\") pod \"csi-hostpathplugin-9hk7n\" (UID: \"d3a19180-46ec-4d1c-9656-699f910a0fb1\") " pod="hostpath-provisioner/csi-hostpathplugin-9hk7n" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.361410 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5glw\" (UniqueName: 
\"kubernetes.io/projected/12427837-cfa9-4d44-b78c-700aca3af676-kube-api-access-p5glw\") pod \"machine-config-operator-74547568cd-hnfnz\" (UID: \"12427837-cfa9-4d44-b78c-700aca3af676\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hnfnz" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.361430 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/38b3e02d-b3b2-4ed2-90d3-0f495bf4a384-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kh889\" (UID: \"38b3e02d-b3b2-4ed2-90d3-0f495bf4a384\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kh889" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.361467 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/12427837-cfa9-4d44-b78c-700aca3af676-auth-proxy-config\") pod \"machine-config-operator-74547568cd-hnfnz\" (UID: \"12427837-cfa9-4d44-b78c-700aca3af676\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hnfnz" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.361512 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5skd\" (UniqueName: \"kubernetes.io/projected/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-kube-api-access-n5skd\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.361541 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3fbd21b6-12eb-4b9e-96a3-e0b8f7219a97-etcd-client\") pod \"etcd-operator-b45778765-vp6lh\" (UID: \"3fbd21b6-12eb-4b9e-96a3-e0b8f7219a97\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vp6lh" Jan 29 
06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.361556 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6f396321-9263-49fa-9009-59a22725f914-apiservice-cert\") pod \"packageserver-d55dfcdfc-5nbf4\" (UID: \"6f396321-9263-49fa-9009-59a22725f914\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5nbf4" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.361573 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/12427837-cfa9-4d44-b78c-700aca3af676-proxy-tls\") pod \"machine-config-operator-74547568cd-hnfnz\" (UID: \"12427837-cfa9-4d44-b78c-700aca3af676\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hnfnz" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.361601 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3fbd21b6-12eb-4b9e-96a3-e0b8f7219a97-serving-cert\") pod \"etcd-operator-b45778765-vp6lh\" (UID: \"3fbd21b6-12eb-4b9e-96a3-e0b8f7219a97\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vp6lh" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.361620 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.365861 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spkj7\" (UniqueName: \"kubernetes.io/projected/badc3672-f89f-4bc5-bb4b-7422fffc3c0c-kube-api-access-spkj7\") pod 
\"olm-operator-6b444d44fb-9vgz7\" (UID: \"badc3672-f89f-4bc5-bb4b-7422fffc3c0c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9vgz7" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.365927 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r26f8\" (UniqueName: \"kubernetes.io/projected/c0b4c2f9-cb2b-4b2a-a2f5-f36346de0295-kube-api-access-r26f8\") pod \"service-ca-9c57cc56f-r7682\" (UID: \"c0b4c2f9-cb2b-4b2a-a2f5-f36346de0295\") " pod="openshift-service-ca/service-ca-9c57cc56f-r7682" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.365949 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/badc3672-f89f-4bc5-bb4b-7422fffc3c0c-srv-cert\") pod \"olm-operator-6b444d44fb-9vgz7\" (UID: \"badc3672-f89f-4bc5-bb4b-7422fffc3c0c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9vgz7" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.365970 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkjjg\" (UniqueName: \"kubernetes.io/projected/7ebf8342-e0f9-413f-811d-57ca9df94f2d-kube-api-access-rkjjg\") pod \"marketplace-operator-79b997595-bmqnb\" (UID: \"7ebf8342-e0f9-413f-811d-57ca9df94f2d\") " pod="openshift-marketplace/marketplace-operator-79b997595-bmqnb" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.366005 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-trusted-ca\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.366493 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c0b4c2f9-cb2b-4b2a-a2f5-f36346de0295-signing-key\") pod \"service-ca-9c57cc56f-r7682\" (UID: \"c0b4c2f9-cb2b-4b2a-a2f5-f36346de0295\") " pod="openshift-service-ca/service-ca-9c57cc56f-r7682" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.366556 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/228926c4-7c89-4225-b8e2-eb33491a90f4-config\") pod \"kube-apiserver-operator-766d6c64bb-5lrns\" (UID: \"228926c4-7c89-4225-b8e2-eb33491a90f4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5lrns" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.367683 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54cwv\" (UniqueName: \"kubernetes.io/projected/7ba0dbb4-b520-4ff8-8235-f145b05583c7-kube-api-access-54cwv\") pod \"package-server-manager-789f6589d5-cwg5c\" (UID: \"7ba0dbb4-b520-4ff8-8235-f145b05583c7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cwg5c" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.367709 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d400efde-7a55-4b6d-82ff-4d0f8bfbe86c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-fjvjz\" (UID: \"d400efde-7a55-4b6d-82ff-4d0f8bfbe86c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fjvjz" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.367778 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/12427837-cfa9-4d44-b78c-700aca3af676-images\") pod \"machine-config-operator-74547568cd-hnfnz\" 
(UID: \"12427837-cfa9-4d44-b78c-700aca3af676\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hnfnz" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.367809 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18a5f5c9-454a-476e-9bfb-1fe5abcf95f9-secret-volume\") pod \"collect-profiles-29494485-bfbgl\" (UID: \"18a5f5c9-454a-476e-9bfb-1fe5abcf95f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494485-bfbgl" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.367829 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjx9q\" (UniqueName: \"kubernetes.io/projected/6f396321-9263-49fa-9009-59a22725f914-kube-api-access-hjx9q\") pod \"packageserver-d55dfcdfc-5nbf4\" (UID: \"6f396321-9263-49fa-9009-59a22725f914\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5nbf4" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.367848 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97682c6d-729b-4a9c-9377-cd34e69487e5-config-volume\") pod \"dns-default-2957f\" (UID: \"97682c6d-729b-4a9c-9377-cd34e69487e5\") " pod="openshift-dns/dns-default-2957f" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.367865 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c0b4c2f9-cb2b-4b2a-a2f5-f36346de0295-signing-cabundle\") pod \"service-ca-9c57cc56f-r7682\" (UID: \"c0b4c2f9-cb2b-4b2a-a2f5-f36346de0295\") " pod="openshift-service-ca/service-ca-9c57cc56f-r7682" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.367882 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d3a19180-46ec-4d1c-9656-699f910a0fb1-socket-dir\") pod \"csi-hostpathplugin-9hk7n\" (UID: \"d3a19180-46ec-4d1c-9656-699f910a0fb1\") " pod="hostpath-provisioner/csi-hostpathplugin-9hk7n" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.367919 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt4vz\" (UniqueName: \"kubernetes.io/projected/d400efde-7a55-4b6d-82ff-4d0f8bfbe86c-kube-api-access-mt4vz\") pod \"kube-storage-version-migrator-operator-b67b599dd-fjvjz\" (UID: \"d400efde-7a55-4b6d-82ff-4d0f8bfbe86c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fjvjz" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.367937 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18a5f5c9-454a-476e-9bfb-1fe5abcf95f9-config-volume\") pod \"collect-profiles-29494485-bfbgl\" (UID: \"18a5f5c9-454a-476e-9bfb-1fe5abcf95f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494485-bfbgl" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.367953 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d3a19180-46ec-4d1c-9656-699f910a0fb1-csi-data-dir\") pod \"csi-hostpathplugin-9hk7n\" (UID: \"d3a19180-46ec-4d1c-9656-699f910a0fb1\") " pod="hostpath-provisioner/csi-hostpathplugin-9hk7n" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.368018 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs9cq\" (UniqueName: \"kubernetes.io/projected/d3a19180-46ec-4d1c-9656-699f910a0fb1-kube-api-access-gs9cq\") pod \"csi-hostpathplugin-9hk7n\" (UID: \"d3a19180-46ec-4d1c-9656-699f910a0fb1\") " 
pod="hostpath-provisioner/csi-hostpathplugin-9hk7n" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.368049 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8f8a7361-1ef3-4c88-9797-93c96329b8c1-profile-collector-cert\") pod \"catalog-operator-68c6474976-qrd8x\" (UID: \"8f8a7361-1ef3-4c88-9797-93c96329b8c1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qrd8x" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.368079 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-registry-certificates\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.368286 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3fbd21b6-12eb-4b9e-96a3-e0b8f7219a97-etcd-service-ca\") pod \"etcd-operator-b45778765-vp6lh\" (UID: \"3fbd21b6-12eb-4b9e-96a3-e0b8f7219a97\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vp6lh" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.368326 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7ebf8342-e0f9-413f-811d-57ca9df94f2d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bmqnb\" (UID: \"7ebf8342-e0f9-413f-811d-57ca9df94f2d\") " pod="openshift-marketplace/marketplace-operator-79b997595-bmqnb" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.368343 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvrjz\" (UniqueName: 
\"kubernetes.io/projected/bc92fe41-b674-45b2-900d-99a1aa16e9c2-kube-api-access-jvrjz\") pod \"ingress-canary-4599s\" (UID: \"bc92fe41-b674-45b2-900d-99a1aa16e9c2\") " pod="openshift-ingress-canary/ingress-canary-4599s" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.368364 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3fbd21b6-12eb-4b9e-96a3-e0b8f7219a97-etcd-ca\") pod \"etcd-operator-b45778765-vp6lh\" (UID: \"3fbd21b6-12eb-4b9e-96a3-e0b8f7219a97\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vp6lh" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.368381 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2tbp\" (UniqueName: \"kubernetes.io/projected/38b3e02d-b3b2-4ed2-90d3-0f495bf4a384-kube-api-access-s2tbp\") pod \"multus-admission-controller-857f4d67dd-kh889\" (UID: \"38b3e02d-b3b2-4ed2-90d3-0f495bf4a384\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kh889" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.368399 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bc92fe41-b674-45b2-900d-99a1aa16e9c2-cert\") pod \"ingress-canary-4599s\" (UID: \"bc92fe41-b674-45b2-900d-99a1aa16e9c2\") " pod="openshift-ingress-canary/ingress-canary-4599s" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.368417 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcnqr\" (UniqueName: \"kubernetes.io/projected/8f8a7361-1ef3-4c88-9797-93c96329b8c1-kube-api-access-pcnqr\") pod \"catalog-operator-68c6474976-qrd8x\" (UID: \"8f8a7361-1ef3-4c88-9797-93c96329b8c1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qrd8x" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.368803 4826 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/228926c4-7c89-4225-b8e2-eb33491a90f4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-5lrns\" (UID: \"228926c4-7c89-4225-b8e2-eb33491a90f4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5lrns" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.368831 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68473dc9-0fc2-4fd7-a46b-63dd443cef79-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7j48t\" (UID: \"68473dc9-0fc2-4fd7-a46b-63dd443cef79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7j48t" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.368850 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d400efde-7a55-4b6d-82ff-4d0f8bfbe86c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-fjvjz\" (UID: \"d400efde-7a55-4b6d-82ff-4d0f8bfbe86c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fjvjz" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.368867 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d3a19180-46ec-4d1c-9656-699f910a0fb1-registration-dir\") pod \"csi-hostpathplugin-9hk7n\" (UID: \"d3a19180-46ec-4d1c-9656-699f910a0fb1\") " pod="hostpath-provisioner/csi-hostpathplugin-9hk7n" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.368920 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-registry-tls\") pod 
\"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.369764 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.370364 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fbd21b6-12eb-4b9e-96a3-e0b8f7219a97-config\") pod \"etcd-operator-b45778765-vp6lh\" (UID: \"3fbd21b6-12eb-4b9e-96a3-e0b8f7219a97\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vp6lh" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.370549 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/badc3672-f89f-4bc5-bb4b-7422fffc3c0c-profile-collector-cert\") pod \"olm-operator-6b444d44fb-9vgz7\" (UID: \"badc3672-f89f-4bc5-bb4b-7422fffc3c0c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9vgz7" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.370619 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-bound-sa-token\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.370687 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/f11d7042-6756-418a-9e37-6d9d8e5ce14b-config\") pod \"service-ca-operator-777779d784-xq9wq\" (UID: \"f11d7042-6756-418a-9e37-6d9d8e5ce14b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xq9wq" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.371764 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68473dc9-0fc2-4fd7-a46b-63dd443cef79-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7j48t\" (UID: \"68473dc9-0fc2-4fd7-a46b-63dd443cef79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7j48t" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.382593 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.382975 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-trusted-ca\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.383729 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/228926c4-7c89-4225-b8e2-eb33491a90f4-config\") pod \"kube-apiserver-operator-766d6c64bb-5lrns\" (UID: \"228926c4-7c89-4225-b8e2-eb33491a90f4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5lrns" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.387961 4826 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3fbd21b6-12eb-4b9e-96a3-e0b8f7219a97-serving-cert\") pod \"etcd-operator-b45778765-vp6lh\" (UID: \"3fbd21b6-12eb-4b9e-96a3-e0b8f7219a97\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vp6lh" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.390541 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3fbd21b6-12eb-4b9e-96a3-e0b8f7219a97-etcd-client\") pod \"etcd-operator-b45778765-vp6lh\" (UID: \"3fbd21b6-12eb-4b9e-96a3-e0b8f7219a97\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vp6lh" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.397202 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.398412 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3fbd21b6-12eb-4b9e-96a3-e0b8f7219a97-etcd-service-ca\") pod \"etcd-operator-b45778765-vp6lh\" (UID: \"3fbd21b6-12eb-4b9e-96a3-e0b8f7219a97\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vp6lh" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.399718 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68473dc9-0fc2-4fd7-a46b-63dd443cef79-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7j48t\" (UID: \"68473dc9-0fc2-4fd7-a46b-63dd443cef79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7j48t" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 
06:45:09.399972 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3fbd21b6-12eb-4b9e-96a3-e0b8f7219a97-etcd-ca\") pod \"etcd-operator-b45778765-vp6lh\" (UID: \"3fbd21b6-12eb-4b9e-96a3-e0b8f7219a97\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vp6lh" Jan 29 06:45:09 crc kubenswrapper[4826]: W0129 06:45:09.401807 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2408cff0_08c6_4c02_8fe5_8a92a1ddb6fa.slice/crio-c71f4afeab662e2eb54d0e162e5d6fd9a4bfe0657d5058d6ea18db215a1d4e19 WatchSource:0}: Error finding container c71f4afeab662e2eb54d0e162e5d6fd9a4bfe0657d5058d6ea18db215a1d4e19: Status 404 returned error can't find the container with id c71f4afeab662e2eb54d0e162e5d6fd9a4bfe0657d5058d6ea18db215a1d4e19 Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.405339 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xnm5r"] Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.407204 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fbd21b6-12eb-4b9e-96a3-e0b8f7219a97-config\") pod \"etcd-operator-b45778765-vp6lh\" (UID: \"3fbd21b6-12eb-4b9e-96a3-e0b8f7219a97\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vp6lh" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.409326 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-registry-certificates\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.410462 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjdjf" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.410498 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/228926c4-7c89-4225-b8e2-eb33491a90f4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-5lrns\" (UID: \"228926c4-7c89-4225-b8e2-eb33491a90f4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5lrns" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.410605 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg8qn\" (UniqueName: \"kubernetes.io/projected/3fbd21b6-12eb-4b9e-96a3-e0b8f7219a97-kube-api-access-hg8qn\") pod \"etcd-operator-b45778765-vp6lh\" (UID: \"3fbd21b6-12eb-4b9e-96a3-e0b8f7219a97\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vp6lh" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.414132 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-registry-tls\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.428459 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.436524 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-hpvzg" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.445540 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/68473dc9-0fc2-4fd7-a46b-63dd443cef79-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7j48t\" (UID: \"68473dc9-0fc2-4fd7-a46b-63dd443cef79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7j48t" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.446618 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w4j5q" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.448327 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-djctr" event={"ID":"144d8289-078b-45cd-9539-901b6c72a980","Type":"ContainerStarted","Data":"a8e34ebe07072d3ad1007eda71b63a1aafc336175317ae53265b6b6ddaa0544f"} Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.448602 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-djctr" event={"ID":"144d8289-078b-45cd-9539-901b6c72a980","Type":"ContainerStarted","Data":"ad82f0a88ed2a694593d1fdd7304b619097ca74a19cccac8ed8f6e5847f0ebc4"} Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.451408 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lzzb6" event={"ID":"41546796-854f-46bf-9b24-e2b51d6890a5","Type":"ContainerStarted","Data":"1a508eba68d65eb48fd7e47fbff16b8c1b72ae13f3bca726438510453cc47a3b"} Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.451449 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lzzb6" 
event={"ID":"41546796-854f-46bf-9b24-e2b51d6890a5","Type":"ContainerStarted","Data":"f1b0ed153339c219a77124bca0ebce991432a946bd6fb917282bc677fbc5d7a2"} Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.451853 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-lzzb6" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.453457 4826 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-lzzb6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.453495 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-lzzb6" podUID="41546796-854f-46bf-9b24-e2b51d6890a5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.456098 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5skd\" (UniqueName: \"kubernetes.io/projected/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-kube-api-access-n5skd\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.463449 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-nbsns" event={"ID":"618ab45f-146f-4e1a-a92b-aaa531cede89","Type":"ContainerStarted","Data":"c33af9731bad02684d96c38424fd29230d861d058c8eeff4481065b1481931cf"} Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.463487 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress/router-default-5444994796-nbsns" event={"ID":"618ab45f-146f-4e1a-a92b-aaa531cede89","Type":"ContainerStarted","Data":"0d3bf267a91bc8030f9c4956e9cdd1cc2fdc596361b59062bbda789b92170592"} Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.464876 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-g4h97" event={"ID":"8c1ec652-679e-4b4e-8f3a-83f39b7c9bec","Type":"ContainerStarted","Data":"f1141588c0d2a5d120b1dcf762168ec6dabab6291e4ff2c11d5a60cba5af832b"} Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.464899 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-g4h97" event={"ID":"8c1ec652-679e-4b4e-8f3a-83f39b7c9bec","Type":"ContainerStarted","Data":"42bf79bf76245b6348d30ac37bb069b16ff6c4f949038c852f9fb7ad50dcf428"} Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.469421 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-578bc" event={"ID":"0f6370c6-e353-4dcc-916c-26406b8ff40a","Type":"ContainerStarted","Data":"b060fe6aa2f1f227e70b3cf983bde6afb41e7d16ab4cc0b8feec6378ce9f917b"} Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.469483 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-578bc" event={"ID":"0f6370c6-e353-4dcc-916c-26406b8ff40a","Type":"ContainerStarted","Data":"7d414a95d4aeec2d4941ce67187adb8bfeda28f19ee27a6c8b3815a097020c17"} Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.471706 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.471901 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c0b4c2f9-cb2b-4b2a-a2f5-f36346de0295-signing-key\") pod \"service-ca-9c57cc56f-r7682\" (UID: \"c0b4c2f9-cb2b-4b2a-a2f5-f36346de0295\") " pod="openshift-service-ca/service-ca-9c57cc56f-r7682" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.471942 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54cwv\" (UniqueName: \"kubernetes.io/projected/7ba0dbb4-b520-4ff8-8235-f145b05583c7-kube-api-access-54cwv\") pod \"package-server-manager-789f6589d5-cwg5c\" (UID: \"7ba0dbb4-b520-4ff8-8235-f145b05583c7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cwg5c" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.471969 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d400efde-7a55-4b6d-82ff-4d0f8bfbe86c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-fjvjz\" (UID: \"d400efde-7a55-4b6d-82ff-4d0f8bfbe86c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fjvjz" Jan 29 06:45:09 crc kubenswrapper[4826]: E0129 06:45:09.472006 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:09.971982333 +0000 UTC m=+93.833775402 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.472042 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18a5f5c9-454a-476e-9bfb-1fe5abcf95f9-secret-volume\") pod \"collect-profiles-29494485-bfbgl\" (UID: \"18a5f5c9-454a-476e-9bfb-1fe5abcf95f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494485-bfbgl" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.472082 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjx9q\" (UniqueName: \"kubernetes.io/projected/6f396321-9263-49fa-9009-59a22725f914-kube-api-access-hjx9q\") pod \"packageserver-d55dfcdfc-5nbf4\" (UID: \"6f396321-9263-49fa-9009-59a22725f914\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5nbf4" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.472104 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97682c6d-729b-4a9c-9377-cd34e69487e5-config-volume\") pod \"dns-default-2957f\" (UID: \"97682c6d-729b-4a9c-9377-cd34e69487e5\") " pod="openshift-dns/dns-default-2957f" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.472126 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/12427837-cfa9-4d44-b78c-700aca3af676-images\") pod \"machine-config-operator-74547568cd-hnfnz\" (UID: 
\"12427837-cfa9-4d44-b78c-700aca3af676\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hnfnz" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.472149 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c0b4c2f9-cb2b-4b2a-a2f5-f36346de0295-signing-cabundle\") pod \"service-ca-9c57cc56f-r7682\" (UID: \"c0b4c2f9-cb2b-4b2a-a2f5-f36346de0295\") " pod="openshift-service-ca/service-ca-9c57cc56f-r7682" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.472171 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d3a19180-46ec-4d1c-9656-699f910a0fb1-socket-dir\") pod \"csi-hostpathplugin-9hk7n\" (UID: \"d3a19180-46ec-4d1c-9656-699f910a0fb1\") " pod="hostpath-provisioner/csi-hostpathplugin-9hk7n" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.472196 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt4vz\" (UniqueName: \"kubernetes.io/projected/d400efde-7a55-4b6d-82ff-4d0f8bfbe86c-kube-api-access-mt4vz\") pod \"kube-storage-version-migrator-operator-b67b599dd-fjvjz\" (UID: \"d400efde-7a55-4b6d-82ff-4d0f8bfbe86c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fjvjz" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.472217 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d3a19180-46ec-4d1c-9656-699f910a0fb1-csi-data-dir\") pod \"csi-hostpathplugin-9hk7n\" (UID: \"d3a19180-46ec-4d1c-9656-699f910a0fb1\") " pod="hostpath-provisioner/csi-hostpathplugin-9hk7n" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.472240 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/18a5f5c9-454a-476e-9bfb-1fe5abcf95f9-config-volume\") pod \"collect-profiles-29494485-bfbgl\" (UID: \"18a5f5c9-454a-476e-9bfb-1fe5abcf95f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494485-bfbgl" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.472266 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs9cq\" (UniqueName: \"kubernetes.io/projected/d3a19180-46ec-4d1c-9656-699f910a0fb1-kube-api-access-gs9cq\") pod \"csi-hostpathplugin-9hk7n\" (UID: \"d3a19180-46ec-4d1c-9656-699f910a0fb1\") " pod="hostpath-provisioner/csi-hostpathplugin-9hk7n" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.472286 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8f8a7361-1ef3-4c88-9797-93c96329b8c1-profile-collector-cert\") pod \"catalog-operator-68c6474976-qrd8x\" (UID: \"8f8a7361-1ef3-4c88-9797-93c96329b8c1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qrd8x" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.472352 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7ebf8342-e0f9-413f-811d-57ca9df94f2d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bmqnb\" (UID: \"7ebf8342-e0f9-413f-811d-57ca9df94f2d\") " pod="openshift-marketplace/marketplace-operator-79b997595-bmqnb" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.472376 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2tbp\" (UniqueName: \"kubernetes.io/projected/38b3e02d-b3b2-4ed2-90d3-0f495bf4a384-kube-api-access-s2tbp\") pod \"multus-admission-controller-857f4d67dd-kh889\" (UID: \"38b3e02d-b3b2-4ed2-90d3-0f495bf4a384\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kh889" Jan 29 06:45:09 crc 
kubenswrapper[4826]: I0129 06:45:09.472397 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bc92fe41-b674-45b2-900d-99a1aa16e9c2-cert\") pod \"ingress-canary-4599s\" (UID: \"bc92fe41-b674-45b2-900d-99a1aa16e9c2\") " pod="openshift-ingress-canary/ingress-canary-4599s" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.472418 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvrjz\" (UniqueName: \"kubernetes.io/projected/bc92fe41-b674-45b2-900d-99a1aa16e9c2-kube-api-access-jvrjz\") pod \"ingress-canary-4599s\" (UID: \"bc92fe41-b674-45b2-900d-99a1aa16e9c2\") " pod="openshift-ingress-canary/ingress-canary-4599s" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.472454 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcnqr\" (UniqueName: \"kubernetes.io/projected/8f8a7361-1ef3-4c88-9797-93c96329b8c1-kube-api-access-pcnqr\") pod \"catalog-operator-68c6474976-qrd8x\" (UID: \"8f8a7361-1ef3-4c88-9797-93c96329b8c1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qrd8x" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.472475 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d3a19180-46ec-4d1c-9656-699f910a0fb1-registration-dir\") pod \"csi-hostpathplugin-9hk7n\" (UID: \"d3a19180-46ec-4d1c-9656-699f910a0fb1\") " pod="hostpath-provisioner/csi-hostpathplugin-9hk7n" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.472502 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d400efde-7a55-4b6d-82ff-4d0f8bfbe86c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-fjvjz\" (UID: \"d400efde-7a55-4b6d-82ff-4d0f8bfbe86c\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fjvjz" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.472559 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/badc3672-f89f-4bc5-bb4b-7422fffc3c0c-profile-collector-cert\") pod \"olm-operator-6b444d44fb-9vgz7\" (UID: \"badc3672-f89f-4bc5-bb4b-7422fffc3c0c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9vgz7" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.472584 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d400efde-7a55-4b6d-82ff-4d0f8bfbe86c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-fjvjz\" (UID: \"d400efde-7a55-4b6d-82ff-4d0f8bfbe86c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fjvjz" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.472600 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f11d7042-6756-418a-9e37-6d9d8e5ce14b-config\") pod \"service-ca-operator-777779d784-xq9wq\" (UID: \"f11d7042-6756-418a-9e37-6d9d8e5ce14b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xq9wq" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.472703 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0cd7c9fb-ed3e-4b12-94f2-a450b9fad90f-certs\") pod \"machine-config-server-h9bff\" (UID: \"0cd7c9fb-ed3e-4b12-94f2-a450b9fad90f\") " pod="openshift-machine-config-operator/machine-config-server-h9bff" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.472734 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dkr5\" (UniqueName: 
\"kubernetes.io/projected/f11d7042-6756-418a-9e37-6d9d8e5ce14b-kube-api-access-7dkr5\") pod \"service-ca-operator-777779d784-xq9wq\" (UID: \"f11d7042-6756-418a-9e37-6d9d8e5ce14b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xq9wq" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.472774 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7ebf8342-e0f9-413f-811d-57ca9df94f2d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bmqnb\" (UID: \"7ebf8342-e0f9-413f-811d-57ca9df94f2d\") " pod="openshift-marketplace/marketplace-operator-79b997595-bmqnb" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.472797 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/97682c6d-729b-4a9c-9377-cd34e69487e5-metrics-tls\") pod \"dns-default-2957f\" (UID: \"97682c6d-729b-4a9c-9377-cd34e69487e5\") " pod="openshift-dns/dns-default-2957f" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.472827 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.472873 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8f8a7361-1ef3-4c88-9797-93c96329b8c1-srv-cert\") pod \"catalog-operator-68c6474976-qrd8x\" (UID: \"8f8a7361-1ef3-4c88-9797-93c96329b8c1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qrd8x" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.472900 4826 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0cd7c9fb-ed3e-4b12-94f2-a450b9fad90f-node-bootstrap-token\") pod \"machine-config-server-h9bff\" (UID: \"0cd7c9fb-ed3e-4b12-94f2-a450b9fad90f\") " pod="openshift-machine-config-operator/machine-config-server-h9bff" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.472919 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6f396321-9263-49fa-9009-59a22725f914-tmpfs\") pod \"packageserver-d55dfcdfc-5nbf4\" (UID: \"6f396321-9263-49fa-9009-59a22725f914\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5nbf4" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.472939 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f11d7042-6756-418a-9e37-6d9d8e5ce14b-serving-cert\") pod \"service-ca-operator-777779d784-xq9wq\" (UID: \"f11d7042-6756-418a-9e37-6d9d8e5ce14b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xq9wq" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.472968 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfs42\" (UniqueName: \"kubernetes.io/projected/0cd7c9fb-ed3e-4b12-94f2-a450b9fad90f-kube-api-access-wfs42\") pod \"machine-config-server-h9bff\" (UID: \"0cd7c9fb-ed3e-4b12-94f2-a450b9fad90f\") " pod="openshift-machine-config-operator/machine-config-server-h9bff" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.472985 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d3a19180-46ec-4d1c-9656-699f910a0fb1-mountpoint-dir\") pod \"csi-hostpathplugin-9hk7n\" (UID: \"d3a19180-46ec-4d1c-9656-699f910a0fb1\") " pod="hostpath-provisioner/csi-hostpathplugin-9hk7n" Jan 29 
06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.473015 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nh669\" (UniqueName: \"kubernetes.io/projected/18a5f5c9-454a-476e-9bfb-1fe5abcf95f9-kube-api-access-nh669\") pod \"collect-profiles-29494485-bfbgl\" (UID: \"18a5f5c9-454a-476e-9bfb-1fe5abcf95f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494485-bfbgl" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.473052 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6f396321-9263-49fa-9009-59a22725f914-webhook-cert\") pod \"packageserver-d55dfcdfc-5nbf4\" (UID: \"6f396321-9263-49fa-9009-59a22725f914\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5nbf4" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.473079 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxqrg\" (UniqueName: \"kubernetes.io/projected/97682c6d-729b-4a9c-9377-cd34e69487e5-kube-api-access-hxqrg\") pod \"dns-default-2957f\" (UID: \"97682c6d-729b-4a9c-9377-cd34e69487e5\") " pod="openshift-dns/dns-default-2957f" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.473098 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d3a19180-46ec-4d1c-9656-699f910a0fb1-plugins-dir\") pod \"csi-hostpathplugin-9hk7n\" (UID: \"d3a19180-46ec-4d1c-9656-699f910a0fb1\") " pod="hostpath-provisioner/csi-hostpathplugin-9hk7n" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.473116 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5glw\" (UniqueName: \"kubernetes.io/projected/12427837-cfa9-4d44-b78c-700aca3af676-kube-api-access-p5glw\") pod \"machine-config-operator-74547568cd-hnfnz\" (UID: \"12427837-cfa9-4d44-b78c-700aca3af676\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hnfnz" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.473136 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/38b3e02d-b3b2-4ed2-90d3-0f495bf4a384-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kh889\" (UID: \"38b3e02d-b3b2-4ed2-90d3-0f495bf4a384\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kh889" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.473155 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7ba0dbb4-b520-4ff8-8235-f145b05583c7-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cwg5c\" (UID: \"7ba0dbb4-b520-4ff8-8235-f145b05583c7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cwg5c" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.473174 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/12427837-cfa9-4d44-b78c-700aca3af676-auth-proxy-config\") pod \"machine-config-operator-74547568cd-hnfnz\" (UID: \"12427837-cfa9-4d44-b78c-700aca3af676\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hnfnz" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.473194 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6f396321-9263-49fa-9009-59a22725f914-apiservice-cert\") pod \"packageserver-d55dfcdfc-5nbf4\" (UID: \"6f396321-9263-49fa-9009-59a22725f914\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5nbf4" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.473211 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" 
(UniqueName: \"kubernetes.io/secret/12427837-cfa9-4d44-b78c-700aca3af676-proxy-tls\") pod \"machine-config-operator-74547568cd-hnfnz\" (UID: \"12427837-cfa9-4d44-b78c-700aca3af676\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hnfnz" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.473242 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spkj7\" (UniqueName: \"kubernetes.io/projected/badc3672-f89f-4bc5-bb4b-7422fffc3c0c-kube-api-access-spkj7\") pod \"olm-operator-6b444d44fb-9vgz7\" (UID: \"badc3672-f89f-4bc5-bb4b-7422fffc3c0c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9vgz7" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.473272 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r26f8\" (UniqueName: \"kubernetes.io/projected/c0b4c2f9-cb2b-4b2a-a2f5-f36346de0295-kube-api-access-r26f8\") pod \"service-ca-9c57cc56f-r7682\" (UID: \"c0b4c2f9-cb2b-4b2a-a2f5-f36346de0295\") " pod="openshift-service-ca/service-ca-9c57cc56f-r7682" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.473319 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/badc3672-f89f-4bc5-bb4b-7422fffc3c0c-srv-cert\") pod \"olm-operator-6b444d44fb-9vgz7\" (UID: \"badc3672-f89f-4bc5-bb4b-7422fffc3c0c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9vgz7" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.473336 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkjjg\" (UniqueName: \"kubernetes.io/projected/7ebf8342-e0f9-413f-811d-57ca9df94f2d-kube-api-access-rkjjg\") pod \"marketplace-operator-79b997595-bmqnb\" (UID: \"7ebf8342-e0f9-413f-811d-57ca9df94f2d\") " pod="openshift-marketplace/marketplace-operator-79b997595-bmqnb" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 
06:45:09.474169 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tpz64" event={"ID":"55aa777b-0339-400d-bcdb-63f1d464b03b","Type":"ContainerStarted","Data":"bde1487c0d53af4e83cf9c7f0a824188ff9169eeae6ca7c196bfe6d7ed2c5039"} Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.477678 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7ebf8342-e0f9-413f-811d-57ca9df94f2d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bmqnb\" (UID: \"7ebf8342-e0f9-413f-811d-57ca9df94f2d\") " pod="openshift-marketplace/marketplace-operator-79b997595-bmqnb" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.478057 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d3a19180-46ec-4d1c-9656-699f910a0fb1-plugins-dir\") pod \"csi-hostpathplugin-9hk7n\" (UID: \"d3a19180-46ec-4d1c-9656-699f910a0fb1\") " pod="hostpath-provisioner/csi-hostpathplugin-9hk7n" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.480165 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/12427837-cfa9-4d44-b78c-700aca3af676-proxy-tls\") pod \"machine-config-operator-74547568cd-hnfnz\" (UID: \"12427837-cfa9-4d44-b78c-700aca3af676\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hnfnz" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.481425 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/badc3672-f89f-4bc5-bb4b-7422fffc3c0c-srv-cert\") pod \"olm-operator-6b444d44fb-9vgz7\" (UID: \"badc3672-f89f-4bc5-bb4b-7422fffc3c0c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9vgz7" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.481505 4826 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d3a19180-46ec-4d1c-9656-699f910a0fb1-registration-dir\") pod \"csi-hostpathplugin-9hk7n\" (UID: \"d3a19180-46ec-4d1c-9656-699f910a0fb1\") " pod="hostpath-provisioner/csi-hostpathplugin-9hk7n" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.481733 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7ebf8342-e0f9-413f-811d-57ca9df94f2d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bmqnb\" (UID: \"7ebf8342-e0f9-413f-811d-57ca9df94f2d\") " pod="openshift-marketplace/marketplace-operator-79b997595-bmqnb" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.482449 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97682c6d-729b-4a9c-9377-cd34e69487e5-config-volume\") pod \"dns-default-2957f\" (UID: \"97682c6d-729b-4a9c-9377-cd34e69487e5\") " pod="openshift-dns/dns-default-2957f" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.483316 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/12427837-cfa9-4d44-b78c-700aca3af676-images\") pod \"machine-config-operator-74547568cd-hnfnz\" (UID: \"12427837-cfa9-4d44-b78c-700aca3af676\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hnfnz" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.483888 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d3a19180-46ec-4d1c-9656-699f910a0fb1-mountpoint-dir\") pod \"csi-hostpathplugin-9hk7n\" (UID: \"d3a19180-46ec-4d1c-9656-699f910a0fb1\") " pod="hostpath-provisioner/csi-hostpathplugin-9hk7n" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.483998 4826 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/97682c6d-729b-4a9c-9377-cd34e69487e5-metrics-tls\") pod \"dns-default-2957f\" (UID: \"97682c6d-729b-4a9c-9377-cd34e69487e5\") " pod="openshift-dns/dns-default-2957f" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.484334 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c0b4c2f9-cb2b-4b2a-a2f5-f36346de0295-signing-cabundle\") pod \"service-ca-9c57cc56f-r7682\" (UID: \"c0b4c2f9-cb2b-4b2a-a2f5-f36346de0295\") " pod="openshift-service-ca/service-ca-9c57cc56f-r7682" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.484484 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d3a19180-46ec-4d1c-9656-699f910a0fb1-socket-dir\") pod \"csi-hostpathplugin-9hk7n\" (UID: \"d3a19180-46ec-4d1c-9656-699f910a0fb1\") " pod="hostpath-provisioner/csi-hostpathplugin-9hk7n" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.484640 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d3a19180-46ec-4d1c-9656-699f910a0fb1-csi-data-dir\") pod \"csi-hostpathplugin-9hk7n\" (UID: \"d3a19180-46ec-4d1c-9656-699f910a0fb1\") " pod="hostpath-provisioner/csi-hostpathplugin-9hk7n" Jan 29 06:45:09 crc kubenswrapper[4826]: E0129 06:45:09.484732 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:09.984640627 +0000 UTC m=+93.846433686 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.485224 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18a5f5c9-454a-476e-9bfb-1fe5abcf95f9-config-volume\") pod \"collect-profiles-29494485-bfbgl\" (UID: \"18a5f5c9-454a-476e-9bfb-1fe5abcf95f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494485-bfbgl" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.485596 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6f396321-9263-49fa-9009-59a22725f914-tmpfs\") pod \"packageserver-d55dfcdfc-5nbf4\" (UID: \"6f396321-9263-49fa-9009-59a22725f914\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5nbf4" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.486136 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/badc3672-f89f-4bc5-bb4b-7422fffc3c0c-profile-collector-cert\") pod \"olm-operator-6b444d44fb-9vgz7\" (UID: \"badc3672-f89f-4bc5-bb4b-7422fffc3c0c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9vgz7" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.486705 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bc92fe41-b674-45b2-900d-99a1aa16e9c2-cert\") pod \"ingress-canary-4599s\" (UID: \"bc92fe41-b674-45b2-900d-99a1aa16e9c2\") " 
pod="openshift-ingress-canary/ingress-canary-4599s" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.487115 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/38b3e02d-b3b2-4ed2-90d3-0f495bf4a384-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kh889\" (UID: \"38b3e02d-b3b2-4ed2-90d3-0f495bf4a384\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kh889" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.487688 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18a5f5c9-454a-476e-9bfb-1fe5abcf95f9-secret-volume\") pod \"collect-profiles-29494485-bfbgl\" (UID: \"18a5f5c9-454a-476e-9bfb-1fe5abcf95f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494485-bfbgl" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.487877 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/228926c4-7c89-4225-b8e2-eb33491a90f4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-5lrns\" (UID: \"228926c4-7c89-4225-b8e2-eb33491a90f4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5lrns" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.488411 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6f396321-9263-49fa-9009-59a22725f914-webhook-cert\") pod \"packageserver-d55dfcdfc-5nbf4\" (UID: \"6f396321-9263-49fa-9009-59a22725f914\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5nbf4" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.488427 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f11d7042-6756-418a-9e37-6d9d8e5ce14b-config\") pod \"service-ca-operator-777779d784-xq9wq\" (UID: 
\"f11d7042-6756-418a-9e37-6d9d8e5ce14b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xq9wq" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.488513 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8f8a7361-1ef3-4c88-9797-93c96329b8c1-profile-collector-cert\") pod \"catalog-operator-68c6474976-qrd8x\" (UID: \"8f8a7361-1ef3-4c88-9797-93c96329b8c1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qrd8x" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.488990 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c0b4c2f9-cb2b-4b2a-a2f5-f36346de0295-signing-key\") pod \"service-ca-9c57cc56f-r7682\" (UID: \"c0b4c2f9-cb2b-4b2a-a2f5-f36346de0295\") " pod="openshift-service-ca/service-ca-9c57cc56f-r7682" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.489670 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8f8a7361-1ef3-4c88-9797-93c96329b8c1-srv-cert\") pod \"catalog-operator-68c6474976-qrd8x\" (UID: \"8f8a7361-1ef3-4c88-9797-93c96329b8c1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qrd8x" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.490205 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7ba0dbb4-b520-4ff8-8235-f145b05583c7-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cwg5c\" (UID: \"7ba0dbb4-b520-4ff8-8235-f145b05583c7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cwg5c" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.490562 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d400efde-7a55-4b6d-82ff-4d0f8bfbe86c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-fjvjz\" (UID: \"d400efde-7a55-4b6d-82ff-4d0f8bfbe86c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fjvjz" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.491338 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-vp6lh" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.495613 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6f396321-9263-49fa-9009-59a22725f914-apiservice-cert\") pod \"packageserver-d55dfcdfc-5nbf4\" (UID: \"6f396321-9263-49fa-9009-59a22725f914\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5nbf4" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.495756 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0cd7c9fb-ed3e-4b12-94f2-a450b9fad90f-certs\") pod \"machine-config-server-h9bff\" (UID: \"0cd7c9fb-ed3e-4b12-94f2-a450b9fad90f\") " pod="openshift-machine-config-operator/machine-config-server-h9bff" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.497054 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0cd7c9fb-ed3e-4b12-94f2-a450b9fad90f-node-bootstrap-token\") pod \"machine-config-server-h9bff\" (UID: \"0cd7c9fb-ed3e-4b12-94f2-a450b9fad90f\") " pod="openshift-machine-config-operator/machine-config-server-h9bff" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.505975 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-bound-sa-token\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: 
\"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.541979 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkjjg\" (UniqueName: \"kubernetes.io/projected/7ebf8342-e0f9-413f-811d-57ca9df94f2d-kube-api-access-rkjjg\") pod \"marketplace-operator-79b997595-bmqnb\" (UID: \"7ebf8342-e0f9-413f-811d-57ca9df94f2d\") " pod="openshift-marketplace/marketplace-operator-79b997595-bmqnb" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.573933 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:45:09 crc kubenswrapper[4826]: E0129 06:45:09.574133 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:10.074100588 +0000 UTC m=+93.935893657 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.574665 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:09 crc kubenswrapper[4826]: E0129 06:45:09.578880 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:10.07887046 +0000 UTC m=+93.940663529 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.582442 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxqrg\" (UniqueName: \"kubernetes.io/projected/97682c6d-729b-4a9c-9377-cd34e69487e5-kube-api-access-hxqrg\") pod \"dns-default-2957f\" (UID: \"97682c6d-729b-4a9c-9377-cd34e69487e5\") " pod="openshift-dns/dns-default-2957f" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.600504 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spkj7\" (UniqueName: \"kubernetes.io/projected/badc3672-f89f-4bc5-bb4b-7422fffc3c0c-kube-api-access-spkj7\") pod \"olm-operator-6b444d44fb-9vgz7\" (UID: \"badc3672-f89f-4bc5-bb4b-7422fffc3c0c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9vgz7" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.607928 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5lrns" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.653656 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7j48t" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.676113 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:45:09 crc kubenswrapper[4826]: E0129 06:45:09.676598 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:10.176576153 +0000 UTC m=+94.038369222 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.697528 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9vgz7" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.731246 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bmqnb" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.777088 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:09 crc kubenswrapper[4826]: E0129 06:45:09.777597 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:10.277583869 +0000 UTC m=+94.139376938 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.824863 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-2957f" Jan 29 06:45:09 crc kubenswrapper[4826]: I0129 06:45:09.878410 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:45:09 crc kubenswrapper[4826]: E0129 06:45:09.879168 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:10.37914238 +0000 UTC m=+94.240935479 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.386025 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/12427837-cfa9-4d44-b78c-700aca3af676-auth-proxy-config\") pod \"machine-config-operator-74547568cd-hnfnz\" (UID: \"12427837-cfa9-4d44-b78c-700aca3af676\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hnfnz" Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.386389 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-nbsns" Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.387542 
4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:45:10 crc kubenswrapper[4826]: E0129 06:45:10.387636 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:11.387616292 +0000 UTC m=+95.249409361 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.387889 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:10 crc kubenswrapper[4826]: E0129 06:45:10.388638 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:10.888618647 +0000 UTC m=+94.750411746 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.394200 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f11d7042-6756-418a-9e37-6d9d8e5ce14b-serving-cert\") pod \"service-ca-operator-777779d784-xq9wq\" (UID: \"f11d7042-6756-418a-9e37-6d9d8e5ce14b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xq9wq" Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.397972 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-m4cbp"] Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.438558 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvrjz\" (UniqueName: \"kubernetes.io/projected/bc92fe41-b674-45b2-900d-99a1aa16e9c2-kube-api-access-jvrjz\") pod \"ingress-canary-4599s\" (UID: \"bc92fe41-b674-45b2-900d-99a1aa16e9c2\") " pod="openshift-ingress-canary/ingress-canary-4599s" Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.438662 4826 patch_prober.go:28] interesting pod/router-default-5444994796-nbsns container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.438699 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbsns" 
podUID="618ab45f-146f-4e1a-a92b-aaa531cede89" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.447633 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2tbp\" (UniqueName: \"kubernetes.io/projected/38b3e02d-b3b2-4ed2-90d3-0f495bf4a384-kube-api-access-s2tbp\") pod \"multus-admission-controller-857f4d67dd-kh889\" (UID: \"38b3e02d-b3b2-4ed2-90d3-0f495bf4a384\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kh889" Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.450334 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nh669\" (UniqueName: \"kubernetes.io/projected/18a5f5c9-454a-476e-9bfb-1fe5abcf95f9-kube-api-access-nh669\") pod \"collect-profiles-29494485-bfbgl\" (UID: \"18a5f5c9-454a-476e-9bfb-1fe5abcf95f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494485-bfbgl" Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.454846 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjx9q\" (UniqueName: \"kubernetes.io/projected/6f396321-9263-49fa-9009-59a22725f914-kube-api-access-hjx9q\") pod \"packageserver-d55dfcdfc-5nbf4\" (UID: \"6f396321-9263-49fa-9009-59a22725f914\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5nbf4" Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.460970 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs9cq\" (UniqueName: \"kubernetes.io/projected/d3a19180-46ec-4d1c-9656-699f910a0fb1-kube-api-access-gs9cq\") pod \"csi-hostpathplugin-9hk7n\" (UID: \"d3a19180-46ec-4d1c-9656-699f910a0fb1\") " pod="hostpath-provisioner/csi-hostpathplugin-9hk7n" Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.461062 4826 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-7dkr5\" (UniqueName: \"kubernetes.io/projected/f11d7042-6756-418a-9e37-6d9d8e5ce14b-kube-api-access-7dkr5\") pod \"service-ca-operator-777779d784-xq9wq\" (UID: \"f11d7042-6756-418a-9e37-6d9d8e5ce14b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xq9wq" Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.461401 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcnqr\" (UniqueName: \"kubernetes.io/projected/8f8a7361-1ef3-4c88-9797-93c96329b8c1-kube-api-access-pcnqr\") pod \"catalog-operator-68c6474976-qrd8x\" (UID: \"8f8a7361-1ef3-4c88-9797-93c96329b8c1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qrd8x" Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.461426 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r26f8\" (UniqueName: \"kubernetes.io/projected/c0b4c2f9-cb2b-4b2a-a2f5-f36346de0295-kube-api-access-r26f8\") pod \"service-ca-9c57cc56f-r7682\" (UID: \"c0b4c2f9-cb2b-4b2a-a2f5-f36346de0295\") " pod="openshift-service-ca/service-ca-9c57cc56f-r7682" Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.462470 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sg4dq"] Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.463503 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-t4qwq"] Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.470616 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfs42\" (UniqueName: \"kubernetes.io/projected/0cd7c9fb-ed3e-4b12-94f2-a450b9fad90f-kube-api-access-wfs42\") pod \"machine-config-server-h9bff\" (UID: \"0cd7c9fb-ed3e-4b12-94f2-a450b9fad90f\") " pod="openshift-machine-config-operator/machine-config-server-h9bff" Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.471672 
4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt4vz\" (UniqueName: \"kubernetes.io/projected/d400efde-7a55-4b6d-82ff-4d0f8bfbe86c-kube-api-access-mt4vz\") pod \"kube-storage-version-migrator-operator-b67b599dd-fjvjz\" (UID: \"d400efde-7a55-4b6d-82ff-4d0f8bfbe86c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fjvjz" Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.474706 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54cwv\" (UniqueName: \"kubernetes.io/projected/7ba0dbb4-b520-4ff8-8235-f145b05583c7-kube-api-access-54cwv\") pod \"package-server-manager-789f6589d5-cwg5c\" (UID: \"7ba0dbb4-b520-4ff8-8235-f145b05583c7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cwg5c" Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.476698 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5glw\" (UniqueName: \"kubernetes.io/projected/12427837-cfa9-4d44-b78c-700aca3af676-kube-api-access-p5glw\") pod \"machine-config-operator-74547568cd-hnfnz\" (UID: \"12427837-cfa9-4d44-b78c-700aca3af676\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hnfnz" Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.478793 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-7wl5q"] Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.488891 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:45:10 crc kubenswrapper[4826]: E0129 06:45:10.489149 4826 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:10.989125971 +0000 UTC m=+94.850919040 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.489794 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:10 crc kubenswrapper[4826]: E0129 06:45:10.495731 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:10.9957141 +0000 UTC m=+94.857507179 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.521530 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gxgts" event={"ID":"c6613300-6034-4de0-923f-9ed7ac56ddfa","Type":"ContainerStarted","Data":"4cad1e4dc7c3153f415c17c4a97a70372f5ee40e86f7c6732a8eb0126fdcd654"} Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.523371 4826 generic.go:334] "Generic (PLEG): container finished" podID="55aa777b-0339-400d-bcdb-63f1d464b03b" containerID="9cb88e0548f9b13478ba65e0df8a91121ab7e196879bd054e37b06359bcd85fb" exitCode=0 Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.523415 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tpz64" event={"ID":"55aa777b-0339-400d-bcdb-63f1d464b03b","Type":"ContainerDied","Data":"9cb88e0548f9b13478ba65e0df8a91121ab7e196879bd054e37b06359bcd85fb"} Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.531675 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xnm5r" event={"ID":"7da7e397-19dd-4eaa-86bc-44e555785978","Type":"ContainerStarted","Data":"d42f1a972080176a87de1df696dc83f77c305c7a30074e1c0e5292bc871860d9"} Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.533040 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2qc9p" 
event={"ID":"2408cff0-08c6-4c02-8fe5-8a92a1ddb6fa","Type":"ContainerStarted","Data":"c71f4afeab662e2eb54d0e162e5d6fd9a4bfe0657d5058d6ea18db215a1d4e19"} Jan 29 06:45:10 crc kubenswrapper[4826]: W0129 06:45:10.539898 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98eff8b7_3e41_407f_880b_8cf237b9886f.slice/crio-5bc891367599ad845cca7488c5acb76ce26c1c9f4647967772414c0b14fe3742 WatchSource:0}: Error finding container 5bc891367599ad845cca7488c5acb76ce26c1c9f4647967772414c0b14fe3742: Status 404 returned error can't find the container with id 5bc891367599ad845cca7488c5acb76ce26c1c9f4647967772414c0b14fe3742 Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.540162 4826 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-lzzb6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.540209 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-lzzb6" podUID="41546796-854f-46bf-9b24-e2b51d6890a5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.572778 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fwx5q"] Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.573057 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qrd8x" Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.580950 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-kh889" Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.591032 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:45:10 crc kubenswrapper[4826]: E0129 06:45:10.592417 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:11.092384836 +0000 UTC m=+94.954177905 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.617970 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hnfnz" Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.631489 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fjvjz" Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.647886 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-h9bff" Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.655925 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494485-bfbgl" Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.663602 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tv2tp"] Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.668400 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xq9wq" Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.680381 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lg7nk"] Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.681758 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5nbf4" Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.682331 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cwg5c" Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.682617 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cj5mb"] Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.704582 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.704790 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-9hk7n" Jan 29 06:45:10 crc kubenswrapper[4826]: E0129 06:45:10.705180 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:11.205164194 +0000 UTC m=+95.066957263 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.721625 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4599s"
Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.735071 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-r7682"
Jan 29 06:45:10 crc kubenswrapper[4826]: W0129 06:45:10.739696 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9b12f8e_c8d1_499f_99b5_a4c8970c55ab.slice/crio-f76369b212422831d896c8ded0451c603ce033bc743b39f3e3a57b94dab86659 WatchSource:0}: Error finding container f76369b212422831d896c8ded0451c603ce033bc743b39f3e3a57b94dab86659: Status 404 returned error can't find the container with id f76369b212422831d896c8ded0451c603ce033bc743b39f3e3a57b94dab86659
Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.769649 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-djctr" podStartSLOduration=67.769628625 podStartE2EDuration="1m7.769628625s" podCreationTimestamp="2026-01-29 06:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:10.769072741 +0000 UTC m=+94.630865810" watchObservedRunningTime="2026-01-29 06:45:10.769628625 +0000 UTC m=+94.631421694"
Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.805569 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 06:45:10 crc kubenswrapper[4826]: E0129 06:45:10.806581 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:11.306562171 +0000 UTC m=+95.168355230 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.832706 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-578bc" podStartSLOduration=68.83269177 podStartE2EDuration="1m8.83269177s" podCreationTimestamp="2026-01-29 06:44:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:10.831621973 +0000 UTC m=+94.693415042" watchObservedRunningTime="2026-01-29 06:45:10.83269177 +0000 UTC m=+94.694484839"
Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.872702 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-lzzb6" podStartSLOduration=67.872682645 podStartE2EDuration="1m7.872682645s" podCreationTimestamp="2026-01-29 06:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:10.872077269 +0000 UTC m=+94.733870338" watchObservedRunningTime="2026-01-29 06:45:10.872682645 +0000 UTC m=+94.734475714"
Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.880385 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fk6kh"]
Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.907207 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t"
Jan 29 06:45:10 crc kubenswrapper[4826]: E0129 06:45:10.907818 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:11.407800744 +0000 UTC m=+95.269593813 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.913945 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-2s49r"]
Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.935113 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-nbsns" podStartSLOduration=67.935091933 podStartE2EDuration="1m7.935091933s" podCreationTimestamp="2026-01-29 06:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:10.93499108 +0000 UTC m=+94.796784149" watchObservedRunningTime="2026-01-29 06:45:10.935091933 +0000 UTC m=+94.796885002"
Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.971482 4826 patch_prober.go:28] interesting pod/router-default-5444994796-nbsns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 06:45:10 crc kubenswrapper[4826]: [-]has-synced failed: reason withheld
Jan 29 06:45:10 crc kubenswrapper[4826]: [+]process-running ok
Jan 29 06:45:10 crc kubenswrapper[4826]: healthz check failed
Jan 29 06:45:10 crc kubenswrapper[4826]: I0129 06:45:10.971524 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbsns" podUID="618ab45f-146f-4e1a-a92b-aaa531cede89" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.007863 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 06:45:11 crc kubenswrapper[4826]: E0129 06:45:11.008179 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:11.508165834 +0000 UTC m=+95.369958903 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:45:11 crc kubenswrapper[4826]: W0129 06:45:11.022146 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb31f55be_1122_4a3a_b381_469b9363c0d6.slice/crio-700fcac554d5efbf8e6bd0c99e111e116832f04cfc81b14219de54e3d320e0ab WatchSource:0}: Error finding container 700fcac554d5efbf8e6bd0c99e111e116832f04cfc81b14219de54e3d320e0ab: Status 404 returned error can't find the container with id 700fcac554d5efbf8e6bd0c99e111e116832f04cfc81b14219de54e3d320e0ab
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.128786 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-hpvzg"]
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.140986 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-hjdjf"]
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.184737 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t"
Jan 29 06:45:11 crc kubenswrapper[4826]: E0129 06:45:11.185373 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:11.685353512 +0000 UTC m=+95.547146581 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.186814 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hnfnz"]
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.199320 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2fdxd"]
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.223652 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-w4j5q"]
Jan 29 06:45:11 crc kubenswrapper[4826]: W0129 06:45:11.272375 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20f5d736_918d_438f_a419_e37ca4242df9.slice/crio-81f81d48db05f7c3ee7b1ee338ac80b29ea50b85b770fc97f4b75fc6dc7bd1c1 WatchSource:0}: Error finding container 81f81d48db05f7c3ee7b1ee338ac80b29ea50b85b770fc97f4b75fc6dc7bd1c1: Status 404 returned error can't find the container with id 81f81d48db05f7c3ee7b1ee338ac80b29ea50b85b770fc97f4b75fc6dc7bd1c1
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.285607 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 06:45:11 crc kubenswrapper[4826]: E0129 06:45:11.285926 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:11.785903057 +0000 UTC m=+95.647696126 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.317803 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kh889"]
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.335452 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bmqnb"]
Jan 29 06:45:11 crc kubenswrapper[4826]: W0129 06:45:11.386736 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38b3e02d_b3b2_4ed2_90d3_0f495bf4a384.slice/crio-905f075d10f217863eaa3a59a1feddf424c6a3a12c37748dcc3d62369be8b16f WatchSource:0}: Error finding container 905f075d10f217863eaa3a59a1feddf424c6a3a12c37748dcc3d62369be8b16f: Status 404 returned error can't find the container with id 905f075d10f217863eaa3a59a1feddf424c6a3a12c37748dcc3d62369be8b16f
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.390974 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t"
Jan 29 06:45:11 crc kubenswrapper[4826]: E0129 06:45:11.391280 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:11.891267076 +0000 UTC m=+95.753060145 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:45:11 crc kubenswrapper[4826]: W0129 06:45:11.403957 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb59ae2a8_2233_43a3_a1ac_cd5ead659130.slice/crio-d1388577ed786590050f4efec15cbc7e8c190f6d3418dbf87f0b23d6930f3819 WatchSource:0}: Error finding container d1388577ed786590050f4efec15cbc7e8c190f6d3418dbf87f0b23d6930f3819: Status 404 returned error can't find the container with id d1388577ed786590050f4efec15cbc7e8c190f6d3418dbf87f0b23d6930f3819
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.414575 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9vgz7"]
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.417406 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-g4h97" podStartSLOduration=69.417390705 podStartE2EDuration="1m9.417390705s" podCreationTimestamp="2026-01-29 06:44:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:11.416372199 +0000 UTC m=+95.278165268" watchObservedRunningTime="2026-01-29 06:45:11.417390705 +0000 UTC m=+95.279183774"
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.421207 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7j48t"]
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.492778 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 06:45:11 crc kubenswrapper[4826]: E0129 06:45:11.493149 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:11.993132715 +0000 UTC m=+95.854925784 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.549809 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gxgts" event={"ID":"c6613300-6034-4de0-923f-9ed7ac56ddfa","Type":"ContainerStarted","Data":"1a292efb3f054160bde11551a8fdce49d7d233728f76d1516a56b3af4c429b02"}
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.570362 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd" event={"ID":"30bc5222-e3c7-4cad-8e68-d39368e9d00d","Type":"ContainerStarted","Data":"8c9710e152f6d8b54e77c47ad302512a731be9b517006fa241a898f24fc14fec"}
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.594963 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t"
Jan 29 06:45:11 crc kubenswrapper[4826]: E0129 06:45:11.595270 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:12.09525887 +0000 UTC m=+95.957051939 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.609026 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bmqnb" event={"ID":"7ebf8342-e0f9-413f-811d-57ca9df94f2d","Type":"ContainerStarted","Data":"e42832eec1ee0c0543eb84741388e15eb1fce8d0d5d12443978a7d970704b984"}
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.612782 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjdjf" event={"ID":"20f5d736-918d-438f-a419-e37ca4242df9","Type":"ContainerStarted","Data":"81f81d48db05f7c3ee7b1ee338ac80b29ea50b85b770fc97f4b75fc6dc7bd1c1"}
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.622493 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m4cbp" event={"ID":"98eff8b7-3e41-407f-880b-8cf237b9886f","Type":"ContainerStarted","Data":"5bc891367599ad845cca7488c5acb76ce26c1c9f4647967772414c0b14fe3742"}
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.635655 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2s49r" event={"ID":"516d6698-0690-4e9b-93dc-a65c873eac43","Type":"ContainerStarted","Data":"00171c0af3375db9f492476dbc8cab94bf64c12c34468c0b34ebd1a182c94967"}
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.667771 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xnm5r" event={"ID":"7da7e397-19dd-4eaa-86bc-44e555785978","Type":"ContainerStarted","Data":"1886b9ab436b41a4c99a513f31f2aae70d6323097c77c0d20c43675b6c91fec9"}
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.694614 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xnm5r" podStartSLOduration=68.694589024 podStartE2EDuration="1m8.694589024s" podCreationTimestamp="2026-01-29 06:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:11.692227633 +0000 UTC m=+95.554020712" watchObservedRunningTime="2026-01-29 06:45:11.694589024 +0000 UTC m=+95.556382093"
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.707580 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 06:45:11 crc kubenswrapper[4826]: E0129 06:45:11.708178 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:12.208151041 +0000 UTC m=+96.069944110 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.709030 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hnfnz" event={"ID":"12427837-cfa9-4d44-b78c-700aca3af676","Type":"ContainerStarted","Data":"826721532fcda025187142cc8faaf86b1b56301e101bb2e08f8cff0a06722933"}
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.710486 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t"
Jan 29 06:45:11 crc kubenswrapper[4826]: E0129 06:45:11.711448 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:12.211431725 +0000 UTC m=+96.073224794 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.736606 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vp6lh"]
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.819580 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 06:45:11 crc kubenswrapper[4826]: E0129 06:45:11.820258 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:12.320222942 +0000 UTC m=+96.182016001 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.823152 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5lrns"]
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.826335 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-2957f"]
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.826534 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w4j5q" event={"ID":"b59ae2a8-2233-43a3-a1ac-cd5ead659130","Type":"ContainerStarted","Data":"d1388577ed786590050f4efec15cbc7e8c190f6d3418dbf87f0b23d6930f3819"}
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.834201 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lg7nk" event={"ID":"53042c19-99ed-40d1-ac8c-7d07d03ec763","Type":"ContainerStarted","Data":"32cdfc84950426bb91277e95239489e3016a597a40a8eb69fdf32ac024da0b18"}
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.864759 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tv2tp" event={"ID":"5384f8de-a07b-4bec-9366-61227201eb43","Type":"ContainerStarted","Data":"d98c754be44723cd269830d6361f5b24cfbea03955a0670c943d52ea1d73c2e2"}
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.876653 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fk6kh" event={"ID":"b31f55be-1122-4a3a-b381-469b9363c0d6","Type":"ContainerStarted","Data":"700fcac554d5efbf8e6bd0c99e111e116832f04cfc81b14219de54e3d320e0ab"}
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.880218 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494485-bfbgl"]
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.881727 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fwx5q" event={"ID":"b9b12f8e-c8d1-499f-99b5-a4c8970c55ab","Type":"ContainerStarted","Data":"f76369b212422831d896c8ded0451c603ce033bc743b39f3e3a57b94dab86659"}
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.884291 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fjvjz"]
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.922901 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t"
Jan 29 06:45:11 crc kubenswrapper[4826]: E0129 06:45:11.923256 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:12.42324483 +0000 UTC m=+96.285037899 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.957782 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-7wl5q" event={"ID":"adcf399e-d9c8-4495-9d73-458228398c5c","Type":"ContainerStarted","Data":"e1741677d509efe9f3fb873b235c57cbdf40a039a04338277754ab04d289ea5b"}
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.979081 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-hpvzg" event={"ID":"8618dc83-b333-4237-b792-57114fffc127","Type":"ContainerStarted","Data":"fee3502f081c649c7a92f04113fd8a36e537df479de1b261d0bebedd8a29d872"}
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.985013 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5nbf4"]
Jan 29 06:45:11 crc kubenswrapper[4826]: I0129 06:45:11.992851 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cj5mb" event={"ID":"df9bd5fb-3887-44e9-8b90-b5a8611ff50a","Type":"ContainerStarted","Data":"ab0bfb095f302ec6a051eb09c7df51a451dfce87c73800d252713ff6f554fee3"}
Jan 29 06:45:12 crc kubenswrapper[4826]: I0129 06:45:12.024089 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 06:45:12 crc kubenswrapper[4826]: E0129 06:45:12.024490 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:12.524474482 +0000 UTC m=+96.386267551 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:45:12 crc kubenswrapper[4826]: I0129 06:45:12.030762 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sg4dq" event={"ID":"1a23bf6a-86ab-4319-8a5a-e447509ac03f","Type":"ContainerStarted","Data":"1e1db5e52c6027c8576d7a22ba92f9c3829cff619bb68c7c87560be852dfeabf"}
Jan 29 06:45:12 crc kubenswrapper[4826]: I0129 06:45:12.031220 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sg4dq"
Jan 29 06:45:12 crc kubenswrapper[4826]: I0129 06:45:12.035997 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-t4qwq" event={"ID":"9bc5b6b0-9626-4ae0-b053-1dae3c13dd47","Type":"ContainerStarted","Data":"ef5c124a1377eccc9cc02cd6a9365e50d5460c00116df17df61fa0d537e11254"}
Jan 29 06:45:12 crc kubenswrapper[4826]: I0129 06:45:12.041436 4826 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-sg4dq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Jan 29 06:45:12 crc kubenswrapper[4826]: I0129 06:45:12.041474 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sg4dq" podUID="1a23bf6a-86ab-4319-8a5a-e447509ac03f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused"
Jan 29 06:45:12 crc kubenswrapper[4826]: I0129 06:45:12.044651 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kh889" event={"ID":"38b3e02d-b3b2-4ed2-90d3-0f495bf4a384","Type":"ContainerStarted","Data":"905f075d10f217863eaa3a59a1feddf424c6a3a12c37748dcc3d62369be8b16f"}
Jan 29 06:45:12 crc kubenswrapper[4826]: I0129 06:45:12.057620 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cwg5c"]
Jan 29 06:45:12 crc kubenswrapper[4826]: I0129 06:45:12.080795 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sg4dq" podStartSLOduration=69.080778784 podStartE2EDuration="1m9.080778784s" podCreationTimestamp="2026-01-29 06:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:12.078866116 +0000 UTC m=+95.940659185" watchObservedRunningTime="2026-01-29 06:45:12.080778784 +0000 UTC m=+95.942571853"
Jan 29 06:45:12 crc kubenswrapper[4826]: I0129 06:45:12.101326 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-t4qwq" podStartSLOduration=70.10128956 podStartE2EDuration="1m10.10128956s" podCreationTimestamp="2026-01-29 06:44:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:12.099910814 +0000 UTC m=+95.961703883" watchObservedRunningTime="2026-01-29 06:45:12.10128956 +0000 UTC m=+95.963082629"
Jan 29 06:45:12 crc kubenswrapper[4826]: I0129 06:45:12.125735 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t"
Jan 29 06:45:12 crc kubenswrapper[4826]: E0129 06:45:12.126650 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:12.626632009 +0000 UTC m=+96.488425088 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:45:12 crc kubenswrapper[4826]: I0129 06:45:12.153230 4826 patch_prober.go:28] interesting pod/router-default-5444994796-nbsns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 06:45:12 crc kubenswrapper[4826]: [-]has-synced failed: reason withheld
Jan 29 06:45:12 crc kubenswrapper[4826]: [+]process-running ok
Jan 29 06:45:12 crc kubenswrapper[4826]: healthz check failed
Jan 29 06:45:12 crc kubenswrapper[4826]: I0129 06:45:12.153280 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbsns" podUID="618ab45f-146f-4e1a-a92b-aaa531cede89" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 06:45:12 crc kubenswrapper[4826]: I0129 06:45:12.235288 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 06:45:12 crc kubenswrapper[4826]: E0129 06:45:12.236759 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:12.736736849 +0000 UTC m=+96.598530018 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:45:12 crc kubenswrapper[4826]: I0129 06:45:12.337522 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t"
Jan 29 06:45:12 crc kubenswrapper[4826]: E0129 06:45:12.337947 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:12.83793257 +0000 UTC m=+96.699725629 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:12 crc kubenswrapper[4826]: I0129 06:45:12.401007 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qrd8x"] Jan 29 06:45:12 crc kubenswrapper[4826]: I0129 06:45:12.412140 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9hk7n"] Jan 29 06:45:12 crc kubenswrapper[4826]: I0129 06:45:12.439123 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:45:12 crc kubenswrapper[4826]: E0129 06:45:12.439686 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:12.939657926 +0000 UTC m=+96.801450995 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:12 crc kubenswrapper[4826]: I0129 06:45:12.484610 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4599s"] Jan 29 06:45:12 crc kubenswrapper[4826]: I0129 06:45:12.541662 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:12 crc kubenswrapper[4826]: E0129 06:45:12.542003 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:13.041991876 +0000 UTC m=+96.903784945 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:12 crc kubenswrapper[4826]: I0129 06:45:12.559201 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-r7682"] Jan 29 06:45:12 crc kubenswrapper[4826]: I0129 06:45:12.582822 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-xq9wq"] Jan 29 06:45:12 crc kubenswrapper[4826]: I0129 06:45:12.642597 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:45:12 crc kubenswrapper[4826]: E0129 06:45:12.643153 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:13.143130357 +0000 UTC m=+97.004923426 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:12 crc kubenswrapper[4826]: I0129 06:45:12.748551 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:12 crc kubenswrapper[4826]: E0129 06:45:12.749629 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:13.249604043 +0000 UTC m=+97.111397112 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:12 crc kubenswrapper[4826]: W0129 06:45:12.811359 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0b4c2f9_cb2b_4b2a_a2f5_f36346de0295.slice/crio-80f7777421f0f30ffd116edea3088189485f36634c7c41f74fb5a96c17a3cedb WatchSource:0}: Error finding container 80f7777421f0f30ffd116edea3088189485f36634c7c41f74fb5a96c17a3cedb: Status 404 returned error can't find the container with id 80f7777421f0f30ffd116edea3088189485f36634c7c41f74fb5a96c17a3cedb Jan 29 06:45:12 crc kubenswrapper[4826]: I0129 06:45:12.850601 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:45:12 crc kubenswrapper[4826]: E0129 06:45:12.850817 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:13.350775104 +0000 UTC m=+97.212568173 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:12 crc kubenswrapper[4826]: I0129 06:45:12.851661 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:12 crc kubenswrapper[4826]: E0129 06:45:12.852014 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:13.351996096 +0000 UTC m=+97.213789165 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:12 crc kubenswrapper[4826]: I0129 06:45:12.957354 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:45:12 crc kubenswrapper[4826]: E0129 06:45:12.957473 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:13.457448576 +0000 UTC m=+97.319241645 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:12 crc kubenswrapper[4826]: I0129 06:45:12.958484 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:12 crc kubenswrapper[4826]: E0129 06:45:12.958802 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:13.458794691 +0000 UTC m=+97.320587760 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:12 crc kubenswrapper[4826]: I0129 06:45:12.985177 4826 patch_prober.go:28] interesting pod/router-default-5444994796-nbsns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 06:45:12 crc kubenswrapper[4826]: [-]has-synced failed: reason withheld Jan 29 06:45:12 crc kubenswrapper[4826]: [+]process-running ok Jan 29 06:45:12 crc kubenswrapper[4826]: healthz check failed Jan 29 06:45:12 crc kubenswrapper[4826]: I0129 06:45:12.985245 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbsns" podUID="618ab45f-146f-4e1a-a92b-aaa531cede89" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.067249 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:45:13 crc kubenswrapper[4826]: E0129 06:45:13.067576 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-29 06:45:13.567549246 +0000 UTC m=+97.429342315 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.067790 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:13 crc kubenswrapper[4826]: E0129 06:45:13.068051 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:13.568038969 +0000 UTC m=+97.429832038 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.086008 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fwx5q" event={"ID":"b9b12f8e-c8d1-499f-99b5-a4c8970c55ab","Type":"ContainerStarted","Data":"d4e17a5956954b2b0fc890ed04375a5fce2d3d7b9824896eab72087c91021297"} Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.097925 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cwg5c" event={"ID":"7ba0dbb4-b520-4ff8-8235-f145b05583c7","Type":"ContainerStarted","Data":"679e186c7e9f5489985111eddc758a8b643756a9a829deb8b8ff59e9ce6d7437"} Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.110928 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fwx5q" podStartSLOduration=70.110910967 podStartE2EDuration="1m10.110910967s" podCreationTimestamp="2026-01-29 06:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:13.110214949 +0000 UTC m=+96.972008018" watchObservedRunningTime="2026-01-29 06:45:13.110910967 +0000 UTC m=+96.972704036" Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.139146 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-vp6lh" 
event={"ID":"3fbd21b6-12eb-4b9e-96a3-e0b8f7219a97","Type":"ContainerStarted","Data":"63af91bd396ad05b4c154b37f2831b3e27125ec8e61b7a3bcc2648b59550a730"} Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.170440 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:45:13 crc kubenswrapper[4826]: E0129 06:45:13.171048 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:13.671009696 +0000 UTC m=+97.532802765 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.202472 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lg7nk" event={"ID":"53042c19-99ed-40d1-ac8c-7d07d03ec763","Type":"ContainerStarted","Data":"dbf29a490a042f8f15c6dbc9a85809e44d4b0f1439f8bea7cacd51d77eae79c4"} Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.231663 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hnfnz" 
event={"ID":"12427837-cfa9-4d44-b78c-700aca3af676","Type":"ContainerStarted","Data":"a2c12e0dcd6e2139fff36267cd5d483fb1bb6a16f0d532dd0881ff1f10bf253f"} Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.238327 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fjvjz" event={"ID":"d400efde-7a55-4b6d-82ff-4d0f8bfbe86c","Type":"ContainerStarted","Data":"423c82fa5c8acf541eeff769e47ca0d7e845339021973ba416df2db87319e73c"} Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.253319 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5lrns" event={"ID":"228926c4-7c89-4225-b8e2-eb33491a90f4","Type":"ContainerStarted","Data":"0bdba2d60bbddf98d79b5f065286e752a50e6d39784648a79b3070ea475794b7"} Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.272098 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:13 crc kubenswrapper[4826]: E0129 06:45:13.273438 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:13.773424149 +0000 UTC m=+97.635217218 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.308525 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tv2tp" event={"ID":"5384f8de-a07b-4bec-9366-61227201eb43","Type":"ContainerStarted","Data":"94abc4bbdd3f08fd00c91db5d6a5ed50de6081f18c588c883c98b019c3eff9f0"} Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.349023 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-t4qwq" event={"ID":"9bc5b6b0-9626-4ae0-b053-1dae3c13dd47","Type":"ContainerStarted","Data":"ee0c45e48dea9fff2c053aee583d2a09c66c1df7556eda297189cb018ed5adcb"} Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.368517 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4599s" event={"ID":"bc92fe41-b674-45b2-900d-99a1aa16e9c2","Type":"ContainerStarted","Data":"3888e4a58518e9ac3dde35b1c39543eee03d27c047f97233e9ee8ea4437945e1"} Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.373728 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:45:13 crc kubenswrapper[4826]: E0129 06:45:13.374051 4826 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:13.874033725 +0000 UTC m=+97.735826794 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.392071 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9hk7n" event={"ID":"d3a19180-46ec-4d1c-9656-699f910a0fb1","Type":"ContainerStarted","Data":"1199c3315e2680f49a34c672117ce783681980e22e79baed692beeb32912f3d2"} Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.419989 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-hpvzg" event={"ID":"8618dc83-b333-4237-b792-57114fffc127","Type":"ContainerStarted","Data":"c09b08b6964bfb93c8cef89f199c152cb359de20d8a6fd6f94a1b8d5d8b12258"} Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.421060 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-hpvzg" Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.429429 4826 patch_prober.go:28] interesting pod/console-operator-58897d9998-hpvzg container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/readyz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 
06:45:13.429490 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-hpvzg" podUID="8618dc83-b333-4237-b792-57114fffc127" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/readyz\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.470921 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-hpvzg" podStartSLOduration=71.470905226 podStartE2EDuration="1m11.470905226s" podCreationTimestamp="2026-01-29 06:44:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:13.468337651 +0000 UTC m=+97.330130720" watchObservedRunningTime="2026-01-29 06:45:13.470905226 +0000 UTC m=+97.332698295" Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.479516 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:13 crc kubenswrapper[4826]: E0129 06:45:13.483346 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:13.983331004 +0000 UTC m=+97.845124063 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.501606 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494485-bfbgl" event={"ID":"18a5f5c9-454a-476e-9bfb-1fe5abcf95f9","Type":"ContainerStarted","Data":"16bd16fc43032b0d5b9890111f7c3bd26d421e1466bb2f177673fb7d7738723e"} Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.550994 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2qc9p" event={"ID":"2408cff0-08c6-4c02-8fe5-8a92a1ddb6fa","Type":"ContainerStarted","Data":"8887968244bc51a33e7ef412f207066a8524cea9577797a412bac424c7b556c3"} Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.569520 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2957f" event={"ID":"97682c6d-729b-4a9c-9377-cd34e69487e5","Type":"ContainerStarted","Data":"2adc7ac12ae58ede1015cf0fe807506cbebfa72b45f50641fd5978676ea77971"} Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.582850 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:45:13 crc kubenswrapper[4826]: E0129 06:45:13.584088 4826 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:14.084067644 +0000 UTC m=+97.945860713 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.626944 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cj5mb" event={"ID":"df9bd5fb-3887-44e9-8b90-b5a8611ff50a","Type":"ContainerStarted","Data":"8d6b5779a1260d68cad7c614b512cbc91bb009df5f340d738dc0f8c5fc6b800e"} Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.634102 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-r7682" event={"ID":"c0b4c2f9-cb2b-4b2a-a2f5-f36346de0295","Type":"ContainerStarted","Data":"80f7777421f0f30ffd116edea3088189485f36634c7c41f74fb5a96c17a3cedb"} Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.666614 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cj5mb" podStartSLOduration=70.666599038 podStartE2EDuration="1m10.666599038s" podCreationTimestamp="2026-01-29 06:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:13.665829308 +0000 UTC m=+97.527622387" watchObservedRunningTime="2026-01-29 06:45:13.666599038 +0000 UTC 
m=+97.528392107" Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.679264 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-h9bff" event={"ID":"0cd7c9fb-ed3e-4b12-94f2-a450b9fad90f","Type":"ContainerStarted","Data":"59cdd25deeda748b4cb03f685622ba355eb5a6d60d6bcbbaa924bebcb332f657"} Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.684154 4826 generic.go:334] "Generic (PLEG): container finished" podID="b59ae2a8-2233-43a3-a1ac-cd5ead659130" containerID="0c360d77a094a1485fbbdf0b677a04adc7e9872a304e9c2af860dfcb146b3bd8" exitCode=0 Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.684263 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w4j5q" event={"ID":"b59ae2a8-2233-43a3-a1ac-cd5ead659130","Type":"ContainerDied","Data":"0c360d77a094a1485fbbdf0b677a04adc7e9872a304e9c2af860dfcb146b3bd8"} Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.685600 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:13 crc kubenswrapper[4826]: E0129 06:45:13.686288 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:14.186271812 +0000 UTC m=+98.048064871 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.713673 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-h9bff" podStartSLOduration=7.713654603 podStartE2EDuration="7.713654603s" podCreationTimestamp="2026-01-29 06:45:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:13.712610157 +0000 UTC m=+97.574403226" watchObservedRunningTime="2026-01-29 06:45:13.713654603 +0000 UTC m=+97.575447672" Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.756728 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-7wl5q" event={"ID":"adcf399e-d9c8-4495-9d73-458228398c5c","Type":"ContainerStarted","Data":"3cbbce160269c0b069cbef42d1d4bf2b0fd6b9ec37ebac567aeb4704c6270920"} Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.757146 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-7wl5q" Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.759342 4826 patch_prober.go:28] interesting pod/downloads-7954f5f757-7wl5q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.759416 4826 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-7wl5q" podUID="adcf399e-d9c8-4495-9d73-458228398c5c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.779308 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qrd8x" event={"ID":"8f8a7361-1ef3-4c88-9797-93c96329b8c1","Type":"ContainerStarted","Data":"315e5c2ed77df08790a7eeea8bc810583438b427f8793d78bfc456d204971029"} Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.784905 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m4cbp" event={"ID":"98eff8b7-3e41-407f-880b-8cf237b9886f","Type":"ContainerStarted","Data":"bdc61e6f9480ecd18576d3806811d26c72c2077caff8a320d9a0602213ffa907"} Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.786643 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:45:13 crc kubenswrapper[4826]: E0129 06:45:13.788542 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:14.288510459 +0000 UTC m=+98.150303528 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.791423 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2s49r" event={"ID":"516d6698-0690-4e9b-93dc-a65c873eac43","Type":"ContainerStarted","Data":"33b78548564ea1be05f3fe010e1df8a37afd35fc739b773220812d379479f290"} Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.801395 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5nbf4" event={"ID":"6f396321-9263-49fa-9009-59a22725f914","Type":"ContainerStarted","Data":"b0c951bb216a28533d74180a70632cdf2f1d399c7d8bb582b4d904a6fdc42f26"} Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.809461 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7j48t" event={"ID":"68473dc9-0fc2-4fd7-a46b-63dd443cef79","Type":"ContainerStarted","Data":"09568f8fadf0eeb6c3df7dfaf16d6ddfcd7548a8d7a968c765eaeeff1d277bfe"} Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.810599 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-7wl5q" podStartSLOduration=70.810585185 podStartE2EDuration="1m10.810585185s" podCreationTimestamp="2026-01-29 06:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:13.779644662 +0000 UTC m=+97.641437731" 
watchObservedRunningTime="2026-01-29 06:45:13.810585185 +0000 UTC m=+97.672378254" Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.811601 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m4cbp" podStartSLOduration=70.811595891 podStartE2EDuration="1m10.811595891s" podCreationTimestamp="2026-01-29 06:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:13.809079056 +0000 UTC m=+97.670872125" watchObservedRunningTime="2026-01-29 06:45:13.811595891 +0000 UTC m=+97.673388960" Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.815094 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tpz64" event={"ID":"55aa777b-0339-400d-bcdb-63f1d464b03b","Type":"ContainerStarted","Data":"eff6bdcf407412ef5a3c4e663f5f2fa4cc10b41b8f1557c1c615d60d73aab4fe"} Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.819830 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bmqnb" event={"ID":"7ebf8342-e0f9-413f-811d-57ca9df94f2d","Type":"ContainerStarted","Data":"9d13a267560333a1d187c96aabdd7d32057bd22f83d5e3f44fb9d50b46b66260"} Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.820525 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-bmqnb" Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.821743 4826 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-bmqnb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.821782 4826 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-bmqnb" podUID="7ebf8342-e0f9-413f-811d-57ca9df94f2d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.830542 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kh889" event={"ID":"38b3e02d-b3b2-4ed2-90d3-0f495bf4a384","Type":"ContainerStarted","Data":"b539e89c718973acaeb57ad92f9a6494c0ff7b6e619e2e5bd1fb33b6eb4952d3"} Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.842581 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xq9wq" event={"ID":"f11d7042-6756-418a-9e37-6d9d8e5ce14b","Type":"ContainerStarted","Data":"7a1660d8e93711f496887bbcf0dbacfd9648db1ae62f07ffd254c0e0aaf14dfa"} Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.850824 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7j48t" podStartSLOduration=70.850778344 podStartE2EDuration="1m10.850778344s" podCreationTimestamp="2026-01-29 06:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:13.843022195 +0000 UTC m=+97.704815254" watchObservedRunningTime="2026-01-29 06:45:13.850778344 +0000 UTC m=+97.712571433" Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.856489 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fk6kh" event={"ID":"b31f55be-1122-4a3a-b381-469b9363c0d6","Type":"ContainerStarted","Data":"d341f0e4d256492aa8942a634b93bbaca90f9f11d0bf5d4a1e6d966405c3e6e6"} Jan 29 06:45:13 crc 
kubenswrapper[4826]: I0129 06:45:13.860196 4826 generic.go:334] "Generic (PLEG): container finished" podID="20f5d736-918d-438f-a419-e37ca4242df9" containerID="07eadc5877883127869bdd08f2b26c217437429a9c147e59051e383f0a8e7146" exitCode=0 Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.860258 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjdjf" event={"ID":"20f5d736-918d-438f-a419-e37ca4242df9","Type":"ContainerDied","Data":"07eadc5877883127869bdd08f2b26c217437429a9c147e59051e383f0a8e7146"} Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.863970 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-bmqnb" podStartSLOduration=70.863954041 podStartE2EDuration="1m10.863954041s" podCreationTimestamp="2026-01-29 06:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:13.86271202 +0000 UTC m=+97.724505089" watchObservedRunningTime="2026-01-29 06:45:13.863954041 +0000 UTC m=+97.725747110" Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.866039 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sg4dq" event={"ID":"1a23bf6a-86ab-4319-8a5a-e447509ac03f","Type":"ContainerStarted","Data":"09296d0e5163240a0291d332a10c8e35f6a1e0007ac4678615888527cd484ca4"} Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.874592 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9vgz7" event={"ID":"badc3672-f89f-4bc5-bb4b-7422fffc3c0c","Type":"ContainerStarted","Data":"8b0b477ffda2683aeca616dc29d3ea53101afb2b1a877627d20e3c868ee1b41d"} Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.881143 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sg4dq" Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.897958 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fk6kh" podStartSLOduration=70.89784638 podStartE2EDuration="1m10.89784638s" podCreationTimestamp="2026-01-29 06:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:13.893937749 +0000 UTC m=+97.755730818" watchObservedRunningTime="2026-01-29 06:45:13.89784638 +0000 UTC m=+97.759639469" Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.904755 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:13 crc kubenswrapper[4826]: E0129 06:45:13.905387 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:14.405373042 +0000 UTC m=+98.267166111 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.969164 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9vgz7" podStartSLOduration=70.969145915 podStartE2EDuration="1m10.969145915s" podCreationTimestamp="2026-01-29 06:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:13.968814197 +0000 UTC m=+97.830607266" watchObservedRunningTime="2026-01-29 06:45:13.969145915 +0000 UTC m=+97.830938984" Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.974812 4826 patch_prober.go:28] interesting pod/router-default-5444994796-nbsns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 06:45:13 crc kubenswrapper[4826]: [-]has-synced failed: reason withheld Jan 29 06:45:13 crc kubenswrapper[4826]: [+]process-running ok Jan 29 06:45:13 crc kubenswrapper[4826]: healthz check failed Jan 29 06:45:13 crc kubenswrapper[4826]: I0129 06:45:13.974861 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbsns" podUID="618ab45f-146f-4e1a-a92b-aaa531cede89" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.009497 4826 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:45:14 crc kubenswrapper[4826]: E0129 06:45:14.010854 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:14.510838253 +0000 UTC m=+98.372631322 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.114088 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:14 crc kubenswrapper[4826]: E0129 06:45:14.114409 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:14.614394085 +0000 UTC m=+98.476187154 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.218091 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:45:14 crc kubenswrapper[4826]: E0129 06:45:14.218894 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:14.718873361 +0000 UTC m=+98.580666430 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.320232 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:14 crc kubenswrapper[4826]: E0129 06:45:14.320757 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:14.82073026 +0000 UTC m=+98.682523329 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.422358 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:45:14 crc kubenswrapper[4826]: E0129 06:45:14.422530 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:14.922505486 +0000 UTC m=+98.784298555 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.422770 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:14 crc kubenswrapper[4826]: E0129 06:45:14.423051 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:14.92304514 +0000 UTC m=+98.784838199 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.523961 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:45:14 crc kubenswrapper[4826]: E0129 06:45:14.524242 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:15.024199771 +0000 UTC m=+98.885992840 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.524600 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:14 crc kubenswrapper[4826]: E0129 06:45:14.524871 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:15.024863718 +0000 UTC m=+98.886656787 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.625530 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:45:14 crc kubenswrapper[4826]: E0129 06:45:14.625668 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:15.125646229 +0000 UTC m=+98.987439298 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.625806 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:14 crc kubenswrapper[4826]: E0129 06:45:14.626200 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:15.126182353 +0000 UTC m=+98.987975422 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.727244 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:45:14 crc kubenswrapper[4826]: E0129 06:45:14.727927 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:15.227903438 +0000 UTC m=+99.089696507 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.829663 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t"
Jan 29 06:45:14 crc kubenswrapper[4826]: E0129 06:45:14.830128 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:15.330086425 +0000 UTC m=+99.191879494 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.881166 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2957f" event={"ID":"97682c6d-729b-4a9c-9377-cd34e69487e5","Type":"ContainerStarted","Data":"9dfb21e0f52f4df733211964eb6f5644e1cfc8c775b23e147f2d08cae9060f16"}
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.881209 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2957f" event={"ID":"97682c6d-729b-4a9c-9377-cd34e69487e5","Type":"ContainerStarted","Data":"410ce69ea45298d723b36dc6c1e6d406a5d6bc483e67ada10b39e9441b850c8a"}
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.881998 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-2957f"
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.883795 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xq9wq" event={"ID":"f11d7042-6756-418a-9e37-6d9d8e5ce14b","Type":"ContainerStarted","Data":"00dbb534d72c5348ac250a898722f32eaafe9a35077b74df4f34f5061aa8715a"}
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.885114 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fjvjz" event={"ID":"d400efde-7a55-4b6d-82ff-4d0f8bfbe86c","Type":"ContainerStarted","Data":"48500fe4d152e82eb771e2de96a7c9da2cf755c08bded444a8cee53f27a1d06c"}
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.887779 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gxgts" event={"ID":"c6613300-6034-4de0-923f-9ed7ac56ddfa","Type":"ContainerStarted","Data":"132c72aa53a0dab02d007f835ebfb8f44f2148ca57f49a4ca83d6884e42dbc9e"}
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.889580 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494485-bfbgl" event={"ID":"18a5f5c9-454a-476e-9bfb-1fe5abcf95f9","Type":"ContainerStarted","Data":"d3ad29132ec253628533207eb0006a9734d96a886c878edc8148e899ebeab089"}
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.894634 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2qc9p" event={"ID":"2408cff0-08c6-4c02-8fe5-8a92a1ddb6fa","Type":"ContainerStarted","Data":"0191005937ebf865407f59a5d48337fcd891ad039e9a9fedcc3cab193ad90943"}
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.898162 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2s49r" event={"ID":"516d6698-0690-4e9b-93dc-a65c873eac43","Type":"ContainerStarted","Data":"84804f675a2cda2a420c3cce79bae0865b2f7298d62568f014b735217cef3287"}
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.901036 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tpz64" event={"ID":"55aa777b-0339-400d-bcdb-63f1d464b03b","Type":"ContainerStarted","Data":"58591ae7b944b90fd99c8d6107433324cb4352128186cddaa021ce00ea9dc13b"}
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.904396 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjdjf" event={"ID":"20f5d736-918d-438f-a419-e37ca4242df9","Type":"ContainerStarted","Data":"614c2a34aaa4f2afc8fb59cfcab6486487e670377806215363c14fcd5d5c90c5"}
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.907967 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w4j5q" event={"ID":"b59ae2a8-2233-43a3-a1ac-cd5ead659130","Type":"ContainerStarted","Data":"dc66cbaaa3ea22792ae81366d7445d7981a0a67c8bc6fa308ba73d6bd54ea73d"}
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.908402 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w4j5q"
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.910762 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cwg5c" event={"ID":"7ba0dbb4-b520-4ff8-8235-f145b05583c7","Type":"ContainerStarted","Data":"7399d0eda6b2691cbc742bb8b165fb4cbf8e1e6fb9f19965e428105a8e645a18"}
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.910791 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cwg5c" event={"ID":"7ba0dbb4-b520-4ff8-8235-f145b05583c7","Type":"ContainerStarted","Data":"1d526dd20a4c02e00c483b3b0bf52882a172f065515af2e6f6a10db57549ef82"}
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.911138 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cwg5c"
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.912994 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9vgz7" event={"ID":"badc3672-f89f-4bc5-bb4b-7422fffc3c0c","Type":"ContainerStarted","Data":"f76d0c0df7a17e5d7c68ee468355a37a9702912c23d11a6f4de6613ef6cbb264"}
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.913777 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9vgz7"
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.918662 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5nbf4" event={"ID":"6f396321-9263-49fa-9009-59a22725f914","Type":"ContainerStarted","Data":"69921785154fd0b3d8445827e60cb1c5e9cb9d343b367f2e71213260884ded66"}
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.919733 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5nbf4"
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.920528 4826 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5nbf4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body=
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.920570 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5nbf4" podUID="6f396321-9263-49fa-9009-59a22725f914" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused"
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.920940 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-2957f" podStartSLOduration=8.920920551 podStartE2EDuration="8.920920551s" podCreationTimestamp="2026-01-29 06:45:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:14.917702669 +0000 UTC m=+98.779495738" watchObservedRunningTime="2026-01-29 06:45:14.920920551 +0000 UTC m=+98.782713630"
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.926547 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lg7nk" event={"ID":"53042c19-99ed-40d1-ac8c-7d07d03ec763","Type":"ContainerStarted","Data":"aa26bc2e2518a236d81dae45b8ca8afbb7352b0a7fb57399fa22db5d68dcbf94"}
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.930102 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9hk7n" event={"ID":"d3a19180-46ec-4d1c-9656-699f910a0fb1","Type":"ContainerStarted","Data":"e60477e9a7111d1faddfe02f7c5855b7836aa604046b5fac58af32182f6292a9"}
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.930957 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 06:45:14 crc kubenswrapper[4826]: E0129 06:45:14.931557 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:15.431537343 +0000 UTC m=+99.293330412 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.931666 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t"
Jan 29 06:45:14 crc kubenswrapper[4826]: E0129 06:45:14.931893 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:15.431887232 +0000 UTC m=+99.293680301 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.935903 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9vgz7"
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.936162 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qrd8x" event={"ID":"8f8a7361-1ef3-4c88-9797-93c96329b8c1","Type":"ContainerStarted","Data":"b26920f217c3eb61fcaf87ee579ebd754239ff49c30e435c076c9d599ec5a689"}
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.936828 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qrd8x"
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.938019 4826 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-qrd8x container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.938076 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qrd8x" podUID="8f8a7361-1ef3-4c88-9797-93c96329b8c1" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.941284 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-h9bff" event={"ID":"0cd7c9fb-ed3e-4b12-94f2-a450b9fad90f","Type":"ContainerStarted","Data":"4044f10d1ed4a16be0d175585cc918b09a6946b7afa9e230bdb85f6025e67c22"}
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.943381 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tv2tp" event={"ID":"5384f8de-a07b-4bec-9366-61227201eb43","Type":"ContainerStarted","Data":"c143c1307f3836a6555309d61355c163fe3602f04241a059c574090c887312dd"}
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.950726 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hnfnz" event={"ID":"12427837-cfa9-4d44-b78c-700aca3af676","Type":"ContainerStarted","Data":"6772f2654257984313275d9ce77be2a5fe64f11899edf7df77e01fca81870e5c"}
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.960639 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4599s" event={"ID":"bc92fe41-b674-45b2-900d-99a1aa16e9c2","Type":"ContainerStarted","Data":"cee3cea36ebb16e518453d691b8750bcf230ee13f6557b5a0a5939434c34ac9e"}
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.964124 4826 patch_prober.go:28] interesting pod/router-default-5444994796-nbsns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 06:45:14 crc kubenswrapper[4826]: [-]has-synced failed: reason withheld
Jan 29 06:45:14 crc kubenswrapper[4826]: [+]process-running ok
Jan 29 06:45:14 crc kubenswrapper[4826]: healthz check failed
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.964174 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbsns" podUID="618ab45f-146f-4e1a-a92b-aaa531cede89" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.976060 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjdjf" podStartSLOduration=71.976045903 podStartE2EDuration="1m11.976045903s" podCreationTimestamp="2026-01-29 06:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:14.955086106 +0000 UTC m=+98.816879175" watchObservedRunningTime="2026-01-29 06:45:14.976045903 +0000 UTC m=+98.837838972"
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.976785 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cwg5c" podStartSLOduration=71.976782012 podStartE2EDuration="1m11.976782012s" podCreationTimestamp="2026-01-29 06:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:14.974785491 +0000 UTC m=+98.836578560" watchObservedRunningTime="2026-01-29 06:45:14.976782012 +0000 UTC m=+98.838575081"
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.984093 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5lrns" event={"ID":"228926c4-7c89-4225-b8e2-eb33491a90f4","Type":"ContainerStarted","Data":"b8992262d7e5afc4684f36754acabf1bb5ac99eacb1fc6ce2ebace818ac104f3"}
Jan 29 06:45:14 crc kubenswrapper[4826]: I0129 06:45:14.993651 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7j48t" event={"ID":"68473dc9-0fc2-4fd7-a46b-63dd443cef79","Type":"ContainerStarted","Data":"ba41fee3f16713394710c9c88e0db3a4c14bcc67613d87b822ed184e14ff91a8"}
Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.001044 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gxgts" podStartSLOduration=73.001027103 podStartE2EDuration="1m13.001027103s" podCreationTimestamp="2026-01-29 06:44:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:15.000840838 +0000 UTC m=+98.862633907" watchObservedRunningTime="2026-01-29 06:45:15.001027103 +0000 UTC m=+98.862820172"
Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.013595 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-r7682" event={"ID":"c0b4c2f9-cb2b-4b2a-a2f5-f36346de0295","Type":"ContainerStarted","Data":"d554cf0b823bd7d7f7edc03017aa8cf26773b2a1238ecda07f0235cd1cc68322"}
Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.020274 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd" event={"ID":"30bc5222-e3c7-4cad-8e68-d39368e9d00d","Type":"ContainerStarted","Data":"1514f7bf0c18e0fa17f17ff3623544dd54f00df0d156ae5df25342ca9ce38396"}
Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.020922 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd"
Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.023640 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-vp6lh" event={"ID":"3fbd21b6-12eb-4b9e-96a3-e0b8f7219a97","Type":"ContainerStarted","Data":"03f5bd1c7e51b1fd5dbacffa2c9bdca85bb8fd60d2f6c82b2d8ca830ce9097e9"}
Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.034205 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 06:45:15 crc kubenswrapper[4826]: E0129 06:45:15.034368 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:15.534343876 +0000 UTC m=+99.396136945 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.034802 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t"
Jan 29 06:45:15 crc kubenswrapper[4826]: E0129 06:45:15.036489 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:15.53647741 +0000 UTC m=+99.398270469 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.043133 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kh889" event={"ID":"38b3e02d-b3b2-4ed2-90d3-0f495bf4a384","Type":"ContainerStarted","Data":"bb7dad361fd76b8a0e8d3de5f4bac32ce7167519a4ef48679bc16175266c7392"}
Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.046122 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2s49r" podStartSLOduration=72.046106957 podStartE2EDuration="1m12.046106957s" podCreationTimestamp="2026-01-29 06:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:15.044900696 +0000 UTC m=+98.906693765" watchObservedRunningTime="2026-01-29 06:45:15.046106957 +0000 UTC m=+98.907900036"
Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.075426 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-m4cbp" event={"ID":"98eff8b7-3e41-407f-880b-8cf237b9886f","Type":"ContainerStarted","Data":"cb33f093e5bbee194edc975976ec7eb0335e0706b57c21a4a43dfb2e598a9524"}
Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.077078 4826 patch_prober.go:28] interesting pod/downloads-7954f5f757-7wl5q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.077148 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7wl5q" podUID="adcf399e-d9c8-4495-9d73-458228398c5c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.077608 4826 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-bmqnb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body=
Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.077640 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-bmqnb" podUID="7ebf8342-e0f9-413f-811d-57ca9df94f2d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused"
Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.085731 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-tpz64" podStartSLOduration=73.085686761 podStartE2EDuration="1m13.085686761s" podCreationTimestamp="2026-01-29 06:44:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:15.079925203 +0000 UTC m=+98.941718272" watchObservedRunningTime="2026-01-29 06:45:15.085686761 +0000 UTC m=+98.947479830"
Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.092329 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-hpvzg"
Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.145903 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 06:45:15 crc kubenswrapper[4826]: E0129 06:45:15.147786 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:15.6477526 +0000 UTC m=+99.509545669 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.148489 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t"
Jan 29 06:45:15 crc kubenswrapper[4826]: E0129 06:45:15.150749 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:15.650738467 +0000 UTC m=+99.512531536 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.176804 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w4j5q" podStartSLOduration=73.176780904 podStartE2EDuration="1m13.176780904s" podCreationTimestamp="2026-01-29 06:44:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:15.124782992 +0000 UTC m=+98.986576061" watchObservedRunningTime="2026-01-29 06:45:15.176780904 +0000 UTC m=+99.038573973"
Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.219797 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xq9wq" podStartSLOduration=72.219782745 podStartE2EDuration="1m12.219782745s" podCreationTimestamp="2026-01-29 06:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:15.217717722 +0000 UTC m=+99.079510791" watchObservedRunningTime="2026-01-29 06:45:15.219782745 +0000 UTC m=+99.081575814"
Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.220235 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-2qc9p" podStartSLOduration=72.220229636 podStartE2EDuration="1m12.220229636s" podCreationTimestamp="2026-01-29 06:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:15.188614217 +0000 UTC m=+99.050407286" watchObservedRunningTime="2026-01-29 06:45:15.220229636 +0000 UTC m=+99.082022705"
Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.244769 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29494485-bfbgl" podStartSLOduration=15.244751474 podStartE2EDuration="15.244751474s" podCreationTimestamp="2026-01-29 06:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:15.243470602 +0000 UTC m=+99.105263671" watchObservedRunningTime="2026-01-29 06:45:15.244751474 +0000 UTC m=+99.106544543"
Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.258752 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 06:45:15 crc kubenswrapper[4826]: E0129 06:45:15.259136 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:15.759120972 +0000 UTC m=+99.620914041 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.272989 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fjvjz" podStartSLOduration=72.272971837 podStartE2EDuration="1m12.272971837s" podCreationTimestamp="2026-01-29 06:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:15.270645578 +0000 UTC m=+99.132438647" watchObservedRunningTime="2026-01-29 06:45:15.272971837 +0000 UTC m=+99.134764906"
Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.348768 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tv2tp" podStartSLOduration=73.348751058 podStartE2EDuration="1m13.348751058s" podCreationTimestamp="2026-01-29 06:44:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:15.305439859 +0000 UTC m=+99.167232928" watchObservedRunningTime="2026-01-29 06:45:15.348751058 +0000 UTC m=+99.210544127"
Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.362884 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t"
Jan 29 06:45:15 crc kubenswrapper[4826]: E0129 06:45:15.363218 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:15.863207528 +0000 UTC m=+99.725000587 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.387403 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hnfnz" podStartSLOduration=72.387385817 podStartE2EDuration="1m12.387385817s" podCreationTimestamp="2026-01-29 06:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:15.358509358 +0000 UTC m=+99.220302417" watchObservedRunningTime="2026-01-29 06:45:15.387385817 +0000 UTC m=+99.249178886"
Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.387643 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-r7682" podStartSLOduration=72.387638654 podStartE2EDuration="1m12.387638654s" podCreationTimestamp="2026-01-29 06:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:15.386571777 +0000 UTC m=+99.248364846" watchObservedRunningTime="2026-01-29 06:45:15.387638654 +0000 UTC m=+99.249431723"
Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.465834 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 06:45:15 crc kubenswrapper[4826]: E0129 06:45:15.466139 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:15.966123774 +0000 UTC m=+99.827916843 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.495994 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lg7nk" podStartSLOduration=72.495977039 podStartE2EDuration="1m12.495977039s" podCreationTimestamp="2026-01-29 06:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:15.495203449 +0000 UTC m=+99.356996518" watchObservedRunningTime="2026-01-29 06:45:15.495977039 +0000 UTC m=+99.357770108"
Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.571149 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t"
Jan 29 06:45:15 crc kubenswrapper[4826]: E0129 06:45:15.571797 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:16.07178564 +0000 UTC m=+99.933578709 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.672459 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 06:45:15 crc kubenswrapper[4826]: E0129 06:45:15.672922 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-29 06:45:16.17290087 +0000 UTC m=+100.034693939 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.688249 4826 csr.go:261] certificate signing request csr-dtqv2 is approved, waiting to be issued Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.706613 4826 csr.go:257] certificate signing request csr-dtqv2 is issued Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.711915 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-kh889" podStartSLOduration=72.711896428 podStartE2EDuration="1m12.711896428s" podCreationTimestamp="2026-01-29 06:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:15.598642898 +0000 UTC m=+99.460435967" watchObservedRunningTime="2026-01-29 06:45:15.711896428 +0000 UTC m=+99.573689497" Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.712271 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5lrns" podStartSLOduration=72.712266638 podStartE2EDuration="1m12.712266638s" podCreationTimestamp="2026-01-29 06:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:15.710459571 +0000 UTC m=+99.572252640" watchObservedRunningTime="2026-01-29 
06:45:15.712266638 +0000 UTC m=+99.574059707" Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.756037 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-4599s" podStartSLOduration=9.756022118 podStartE2EDuration="9.756022118s" podCreationTimestamp="2026-01-29 06:45:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:15.753657008 +0000 UTC m=+99.615450077" watchObservedRunningTime="2026-01-29 06:45:15.756022118 +0000 UTC m=+99.617815187" Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.778424 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:15 crc kubenswrapper[4826]: E0129 06:45:15.778745 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:16.27873375 +0000 UTC m=+100.140526819 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.840586 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qrd8x" podStartSLOduration=72.840570254 podStartE2EDuration="1m12.840570254s" podCreationTimestamp="2026-01-29 06:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:15.78731415 +0000 UTC m=+99.649107219" watchObservedRunningTime="2026-01-29 06:45:15.840570254 +0000 UTC m=+99.702363323" Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.842551 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5nbf4" podStartSLOduration=72.842544354 podStartE2EDuration="1m12.842544354s" podCreationTimestamp="2026-01-29 06:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:15.840129842 +0000 UTC m=+99.701922911" watchObservedRunningTime="2026-01-29 06:45:15.842544354 +0000 UTC m=+99.704337423" Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.879744 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:45:15 crc kubenswrapper[4826]: E0129 06:45:15.880139 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:16.380125347 +0000 UTC m=+100.241918416 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.917288 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd" podStartSLOduration=73.917275048 podStartE2EDuration="1m13.917275048s" podCreationTimestamp="2026-01-29 06:44:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:15.916509368 +0000 UTC m=+99.778302437" watchObservedRunningTime="2026-01-29 06:45:15.917275048 +0000 UTC m=+99.779068117" Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.944891 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-vp6lh" podStartSLOduration=72.944877125 podStartE2EDuration="1m12.944877125s" podCreationTimestamp="2026-01-29 06:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:15.943695395 
+0000 UTC m=+99.805488464" watchObservedRunningTime="2026-01-29 06:45:15.944877125 +0000 UTC m=+99.806670194" Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.966478 4826 patch_prober.go:28] interesting pod/router-default-5444994796-nbsns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 06:45:15 crc kubenswrapper[4826]: [-]has-synced failed: reason withheld Jan 29 06:45:15 crc kubenswrapper[4826]: [+]process-running ok Jan 29 06:45:15 crc kubenswrapper[4826]: healthz check failed Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.966531 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbsns" podUID="618ab45f-146f-4e1a-a92b-aaa531cede89" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 06:45:15 crc kubenswrapper[4826]: I0129 06:45:15.981672 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:15 crc kubenswrapper[4826]: E0129 06:45:15.981993 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:16.481982415 +0000 UTC m=+100.343775484 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:16 crc kubenswrapper[4826]: I0129 06:45:16.021149 4826 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-2fdxd container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 06:45:16 crc kubenswrapper[4826]: I0129 06:45:16.021217 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd" podUID="30bc5222-e3c7-4cad-8e68-d39368e9d00d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 06:45:16 crc kubenswrapper[4826]: I0129 06:45:16.080853 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9hk7n" event={"ID":"d3a19180-46ec-4d1c-9656-699f910a0fb1","Type":"ContainerStarted","Data":"8be2ed5a50fc419f51056ddf4031d0665380443adb4ed76a0c314cd2bd93e3b9"} Jan 29 06:45:16 crc kubenswrapper[4826]: I0129 06:45:16.081465 4826 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-bmqnb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Jan 29 06:45:16 crc 
kubenswrapper[4826]: I0129 06:45:16.081510 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-bmqnb" podUID="7ebf8342-e0f9-413f-811d-57ca9df94f2d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Jan 29 06:45:16 crc kubenswrapper[4826]: I0129 06:45:16.082234 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:45:16 crc kubenswrapper[4826]: E0129 06:45:16.082862 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:16.582846669 +0000 UTC m=+100.444639738 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:16 crc kubenswrapper[4826]: I0129 06:45:16.083019 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:16 crc kubenswrapper[4826]: E0129 06:45:16.083343 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:16.583333261 +0000 UTC m=+100.445126320 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:16 crc kubenswrapper[4826]: I0129 06:45:16.087545 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qrd8x" Jan 29 06:45:16 crc kubenswrapper[4826]: I0129 06:45:16.184894 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:45:16 crc kubenswrapper[4826]: E0129 06:45:16.186785 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:16.6867661 +0000 UTC m=+100.548559169 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:16 crc kubenswrapper[4826]: I0129 06:45:16.287397 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:16 crc kubenswrapper[4826]: E0129 06:45:16.287774 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:16.787761727 +0000 UTC m=+100.649554796 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:16 crc kubenswrapper[4826]: I0129 06:45:16.367284 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd" Jan 29 06:45:16 crc kubenswrapper[4826]: I0129 06:45:16.387999 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:45:16 crc kubenswrapper[4826]: E0129 06:45:16.388160 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:16.888136037 +0000 UTC m=+100.749929106 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:16 crc kubenswrapper[4826]: I0129 06:45:16.388249 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:16 crc kubenswrapper[4826]: E0129 06:45:16.388615 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:16.888604559 +0000 UTC m=+100.750397628 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:16 crc kubenswrapper[4826]: I0129 06:45:16.489075 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:45:16 crc kubenswrapper[4826]: E0129 06:45:16.489265 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:16.989239356 +0000 UTC m=+100.851032415 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:16 crc kubenswrapper[4826]: I0129 06:45:16.489449 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:16 crc kubenswrapper[4826]: E0129 06:45:16.489838 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:16.989823171 +0000 UTC m=+100.851616240 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:16 crc kubenswrapper[4826]: I0129 06:45:16.536195 4826 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 29 06:45:16 crc kubenswrapper[4826]: I0129 06:45:16.558765 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5nbf4" Jan 29 06:45:16 crc kubenswrapper[4826]: I0129 06:45:16.590960 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:45:16 crc kubenswrapper[4826]: E0129 06:45:16.591283 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:17.091254499 +0000 UTC m=+100.953047568 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:16 crc kubenswrapper[4826]: I0129 06:45:16.692964 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:16 crc kubenswrapper[4826]: E0129 06:45:16.693282 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:17.193269352 +0000 UTC m=+101.055062421 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:16 crc kubenswrapper[4826]: I0129 06:45:16.707951 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-29 06:40:15 +0000 UTC, rotation deadline is 2026-11-04 15:09:02.018281113 +0000 UTC Jan 29 06:45:16 crc kubenswrapper[4826]: I0129 06:45:16.707979 4826 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6704h23m45.310304395s for next certificate rotation Jan 29 06:45:16 crc kubenswrapper[4826]: I0129 06:45:16.794034 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:45:16 crc kubenswrapper[4826]: E0129 06:45:16.794255 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:17.294215427 +0000 UTC m=+101.156008496 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:16 crc kubenswrapper[4826]: I0129 06:45:16.794359 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:16 crc kubenswrapper[4826]: E0129 06:45:16.794661 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 06:45:17.294646438 +0000 UTC m=+101.156439507 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddt5t" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:16 crc kubenswrapper[4826]: I0129 06:45:16.896333 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:45:16 crc kubenswrapper[4826]: E0129 06:45:16.896685 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 06:45:17.396668271 +0000 UTC m=+101.258461340 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 06:45:16 crc kubenswrapper[4826]: I0129 06:45:16.953417 4826 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-29T06:45:16.53622217Z","Handler":null,"Name":""} Jan 29 06:45:16 crc kubenswrapper[4826]: I0129 06:45:16.957739 4826 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 29 06:45:16 crc kubenswrapper[4826]: I0129 06:45:16.957773 4826 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 29 06:45:16 crc kubenswrapper[4826]: I0129 06:45:16.965481 4826 patch_prober.go:28] interesting pod/router-default-5444994796-nbsns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 06:45:16 crc kubenswrapper[4826]: [-]has-synced failed: reason withheld Jan 29 06:45:16 crc kubenswrapper[4826]: [+]process-running ok Jan 29 06:45:16 crc kubenswrapper[4826]: healthz check failed Jan 29 06:45:16 crc kubenswrapper[4826]: I0129 06:45:16.965582 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbsns" podUID="618ab45f-146f-4e1a-a92b-aaa531cede89" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 06:45:16 crc kubenswrapper[4826]: I0129 06:45:16.980892 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dmn6q"] Jan 29 06:45:16 crc kubenswrapper[4826]: I0129 06:45:16.982320 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dmn6q" Jan 29 06:45:16 crc kubenswrapper[4826]: I0129 06:45:16.985203 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 29 06:45:16 crc kubenswrapper[4826]: I0129 06:45:16.992234 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dmn6q"] Jan 29 06:45:16 crc kubenswrapper[4826]: I0129 06:45:16.998465 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.032154 4826 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.032213 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.073312 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddt5t\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.097290 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9hk7n" event={"ID":"d3a19180-46ec-4d1c-9656-699f910a0fb1","Type":"ContainerStarted","Data":"56e5bb59d2fae42edd2984af077898933106c39d815e0aaa2b688a6bb39ee472"} Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.097361 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9hk7n" event={"ID":"d3a19180-46ec-4d1c-9656-699f910a0fb1","Type":"ContainerStarted","Data":"4c20e903fbfac3d9ce398614cf22d41d9a3e02887dae0227e5bb04d0cccfade6"} Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.099197 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.099432 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d84323bb-bf74-4538-8cff-b507cb9b261d-utilities\") pod \"community-operators-dmn6q\" (UID: \"d84323bb-bf74-4538-8cff-b507cb9b261d\") " pod="openshift-marketplace/community-operators-dmn6q" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.099499 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d84323bb-bf74-4538-8cff-b507cb9b261d-catalog-content\") pod \"community-operators-dmn6q\" (UID: \"d84323bb-bf74-4538-8cff-b507cb9b261d\") " pod="openshift-marketplace/community-operators-dmn6q" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.099545 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djcrp\" (UniqueName: \"kubernetes.io/projected/d84323bb-bf74-4538-8cff-b507cb9b261d-kube-api-access-djcrp\") pod \"community-operators-dmn6q\" (UID: \"d84323bb-bf74-4538-8cff-b507cb9b261d\") " pod="openshift-marketplace/community-operators-dmn6q" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.103127 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w4j5q" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.108508 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" 
(UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.119457 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-9hk7n" podStartSLOduration=11.119440336 podStartE2EDuration="11.119440336s" podCreationTimestamp="2026-01-29 06:45:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:17.117865956 +0000 UTC m=+100.979659025" watchObservedRunningTime="2026-01-29 06:45:17.119440336 +0000 UTC m=+100.981233405" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.177446 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-brqmx"] Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.178343 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-brqmx" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.182803 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.197668 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-brqmx"] Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.201417 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb065b7c-e4d6-4607-aa51-e8acf00117fa-utilities\") pod \"certified-operators-brqmx\" (UID: \"eb065b7c-e4d6-4607-aa51-e8acf00117fa\") " pod="openshift-marketplace/certified-operators-brqmx" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.201522 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d84323bb-bf74-4538-8cff-b507cb9b261d-catalog-content\") pod \"community-operators-dmn6q\" (UID: \"d84323bb-bf74-4538-8cff-b507cb9b261d\") " pod="openshift-marketplace/community-operators-dmn6q" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.201569 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crv66\" (UniqueName: \"kubernetes.io/projected/eb065b7c-e4d6-4607-aa51-e8acf00117fa-kube-api-access-crv66\") pod \"certified-operators-brqmx\" (UID: \"eb065b7c-e4d6-4607-aa51-e8acf00117fa\") " pod="openshift-marketplace/certified-operators-brqmx" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.201797 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djcrp\" (UniqueName: \"kubernetes.io/projected/d84323bb-bf74-4538-8cff-b507cb9b261d-kube-api-access-djcrp\") pod \"community-operators-dmn6q\" (UID: 
\"d84323bb-bf74-4538-8cff-b507cb9b261d\") " pod="openshift-marketplace/community-operators-dmn6q" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.202028 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d84323bb-bf74-4538-8cff-b507cb9b261d-utilities\") pod \"community-operators-dmn6q\" (UID: \"d84323bb-bf74-4538-8cff-b507cb9b261d\") " pod="openshift-marketplace/community-operators-dmn6q" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.202102 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb065b7c-e4d6-4607-aa51-e8acf00117fa-catalog-content\") pod \"certified-operators-brqmx\" (UID: \"eb065b7c-e4d6-4607-aa51-e8acf00117fa\") " pod="openshift-marketplace/certified-operators-brqmx" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.203894 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d84323bb-bf74-4538-8cff-b507cb9b261d-catalog-content\") pod \"community-operators-dmn6q\" (UID: \"d84323bb-bf74-4538-8cff-b507cb9b261d\") " pod="openshift-marketplace/community-operators-dmn6q" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.203941 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d84323bb-bf74-4538-8cff-b507cb9b261d-utilities\") pod \"community-operators-dmn6q\" (UID: \"d84323bb-bf74-4538-8cff-b507cb9b261d\") " pod="openshift-marketplace/community-operators-dmn6q" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.231413 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djcrp\" (UniqueName: \"kubernetes.io/projected/d84323bb-bf74-4538-8cff-b507cb9b261d-kube-api-access-djcrp\") pod \"community-operators-dmn6q\" (UID: 
\"d84323bb-bf74-4538-8cff-b507cb9b261d\") " pod="openshift-marketplace/community-operators-dmn6q" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.303818 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb065b7c-e4d6-4607-aa51-e8acf00117fa-catalog-content\") pod \"certified-operators-brqmx\" (UID: \"eb065b7c-e4d6-4607-aa51-e8acf00117fa\") " pod="openshift-marketplace/certified-operators-brqmx" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.303902 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb065b7c-e4d6-4607-aa51-e8acf00117fa-utilities\") pod \"certified-operators-brqmx\" (UID: \"eb065b7c-e4d6-4607-aa51-e8acf00117fa\") " pod="openshift-marketplace/certified-operators-brqmx" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.303946 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crv66\" (UniqueName: \"kubernetes.io/projected/eb065b7c-e4d6-4607-aa51-e8acf00117fa-kube-api-access-crv66\") pod \"certified-operators-brqmx\" (UID: \"eb065b7c-e4d6-4607-aa51-e8acf00117fa\") " pod="openshift-marketplace/certified-operators-brqmx" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.304650 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb065b7c-e4d6-4607-aa51-e8acf00117fa-catalog-content\") pod \"certified-operators-brqmx\" (UID: \"eb065b7c-e4d6-4607-aa51-e8acf00117fa\") " pod="openshift-marketplace/certified-operators-brqmx" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.304705 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb065b7c-e4d6-4607-aa51-e8acf00117fa-utilities\") pod \"certified-operators-brqmx\" (UID: \"eb065b7c-e4d6-4607-aa51-e8acf00117fa\") 
" pod="openshift-marketplace/certified-operators-brqmx" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.344465 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dmn6q" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.347368 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crv66\" (UniqueName: \"kubernetes.io/projected/eb065b7c-e4d6-4607-aa51-e8acf00117fa-kube-api-access-crv66\") pod \"certified-operators-brqmx\" (UID: \"eb065b7c-e4d6-4607-aa51-e8acf00117fa\") " pod="openshift-marketplace/certified-operators-brqmx" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.374572 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.399354 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dbqfz"] Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.400216 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dbqfz" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.400943 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dbqfz"] Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.490757 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-brqmx" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.509685 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7087f9c0-baee-4f8f-a24d-14e337c4f32e-utilities\") pod \"community-operators-dbqfz\" (UID: \"7087f9c0-baee-4f8f-a24d-14e337c4f32e\") " pod="openshift-marketplace/community-operators-dbqfz" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.509731 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7087f9c0-baee-4f8f-a24d-14e337c4f32e-catalog-content\") pod \"community-operators-dbqfz\" (UID: \"7087f9c0-baee-4f8f-a24d-14e337c4f32e\") " pod="openshift-marketplace/community-operators-dbqfz" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.509785 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlh5t\" (UniqueName: \"kubernetes.io/projected/7087f9c0-baee-4f8f-a24d-14e337c4f32e-kube-api-access-qlh5t\") pod \"community-operators-dbqfz\" (UID: \"7087f9c0-baee-4f8f-a24d-14e337c4f32e\") " pod="openshift-marketplace/community-operators-dbqfz" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.586152 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sfmlt"] Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.587377 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sfmlt" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.594113 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sfmlt"] Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.625775 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7087f9c0-baee-4f8f-a24d-14e337c4f32e-utilities\") pod \"community-operators-dbqfz\" (UID: \"7087f9c0-baee-4f8f-a24d-14e337c4f32e\") " pod="openshift-marketplace/community-operators-dbqfz" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.625835 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7087f9c0-baee-4f8f-a24d-14e337c4f32e-catalog-content\") pod \"community-operators-dbqfz\" (UID: \"7087f9c0-baee-4f8f-a24d-14e337c4f32e\") " pod="openshift-marketplace/community-operators-dbqfz" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.625886 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlh5t\" (UniqueName: \"kubernetes.io/projected/7087f9c0-baee-4f8f-a24d-14e337c4f32e-kube-api-access-qlh5t\") pod \"community-operators-dbqfz\" (UID: \"7087f9c0-baee-4f8f-a24d-14e337c4f32e\") " pod="openshift-marketplace/community-operators-dbqfz" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.626797 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7087f9c0-baee-4f8f-a24d-14e337c4f32e-utilities\") pod \"community-operators-dbqfz\" (UID: \"7087f9c0-baee-4f8f-a24d-14e337c4f32e\") " pod="openshift-marketplace/community-operators-dbqfz" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.627050 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7087f9c0-baee-4f8f-a24d-14e337c4f32e-catalog-content\") pod \"community-operators-dbqfz\" (UID: \"7087f9c0-baee-4f8f-a24d-14e337c4f32e\") " pod="openshift-marketplace/community-operators-dbqfz" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.656588 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlh5t\" (UniqueName: \"kubernetes.io/projected/7087f9c0-baee-4f8f-a24d-14e337c4f32e-kube-api-access-qlh5t\") pod \"community-operators-dbqfz\" (UID: \"7087f9c0-baee-4f8f-a24d-14e337c4f32e\") " pod="openshift-marketplace/community-operators-dbqfz" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.721331 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ddt5t"] Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.727224 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqh8x\" (UniqueName: \"kubernetes.io/projected/87b64822-a98d-4129-946c-b073cffe4f6c-kube-api-access-kqh8x\") pod \"certified-operators-sfmlt\" (UID: \"87b64822-a98d-4129-946c-b073cffe4f6c\") " pod="openshift-marketplace/certified-operators-sfmlt" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.727329 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b64822-a98d-4129-946c-b073cffe4f6c-catalog-content\") pod \"certified-operators-sfmlt\" (UID: \"87b64822-a98d-4129-946c-b073cffe4f6c\") " pod="openshift-marketplace/certified-operators-sfmlt" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.727378 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b64822-a98d-4129-946c-b073cffe4f6c-utilities\") pod \"certified-operators-sfmlt\" (UID: \"87b64822-a98d-4129-946c-b073cffe4f6c\") " 
pod="openshift-marketplace/certified-operators-sfmlt" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.756285 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dbqfz" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.830083 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b64822-a98d-4129-946c-b073cffe4f6c-catalog-content\") pod \"certified-operators-sfmlt\" (UID: \"87b64822-a98d-4129-946c-b073cffe4f6c\") " pod="openshift-marketplace/certified-operators-sfmlt" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.830158 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b64822-a98d-4129-946c-b073cffe4f6c-utilities\") pod \"certified-operators-sfmlt\" (UID: \"87b64822-a98d-4129-946c-b073cffe4f6c\") " pod="openshift-marketplace/certified-operators-sfmlt" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.830214 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqh8x\" (UniqueName: \"kubernetes.io/projected/87b64822-a98d-4129-946c-b073cffe4f6c-kube-api-access-kqh8x\") pod \"certified-operators-sfmlt\" (UID: \"87b64822-a98d-4129-946c-b073cffe4f6c\") " pod="openshift-marketplace/certified-operators-sfmlt" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.830974 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b64822-a98d-4129-946c-b073cffe4f6c-catalog-content\") pod \"certified-operators-sfmlt\" (UID: \"87b64822-a98d-4129-946c-b073cffe4f6c\") " pod="openshift-marketplace/certified-operators-sfmlt" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.831567 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/87b64822-a98d-4129-946c-b073cffe4f6c-utilities\") pod \"certified-operators-sfmlt\" (UID: \"87b64822-a98d-4129-946c-b073cffe4f6c\") " pod="openshift-marketplace/certified-operators-sfmlt" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.833829 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dmn6q"] Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.856256 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqh8x\" (UniqueName: \"kubernetes.io/projected/87b64822-a98d-4129-946c-b073cffe4f6c-kube-api-access-kqh8x\") pod \"certified-operators-sfmlt\" (UID: \"87b64822-a98d-4129-946c-b073cffe4f6c\") " pod="openshift-marketplace/certified-operators-sfmlt" Jan 29 06:45:17 crc kubenswrapper[4826]: W0129 06:45:17.867881 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd84323bb_bf74_4538_8cff_b507cb9b261d.slice/crio-c5a5916dc5412383ba1b1fb5534ee99b9ebeb7a738b3ec00020ccaa0e876b3ea WatchSource:0}: Error finding container c5a5916dc5412383ba1b1fb5534ee99b9ebeb7a738b3ec00020ccaa0e876b3ea: Status 404 returned error can't find the container with id c5a5916dc5412383ba1b1fb5534ee99b9ebeb7a738b3ec00020ccaa0e876b3ea Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.939935 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sfmlt" Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.968790 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-brqmx"] Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.968897 4826 patch_prober.go:28] interesting pod/router-default-5444994796-nbsns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 06:45:17 crc kubenswrapper[4826]: [-]has-synced failed: reason withheld Jan 29 06:45:17 crc kubenswrapper[4826]: [+]process-running ok Jan 29 06:45:17 crc kubenswrapper[4826]: healthz check failed Jan 29 06:45:17 crc kubenswrapper[4826]: I0129 06:45:17.968972 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbsns" podUID="618ab45f-146f-4e1a-a92b-aaa531cede89" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 06:45:17 crc kubenswrapper[4826]: W0129 06:45:17.986723 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb065b7c_e4d6_4607_aa51_e8acf00117fa.slice/crio-82f3de28f4e602bc6bb07666df61a67355bc9d8820b62606c2eef062d2776766 WatchSource:0}: Error finding container 82f3de28f4e602bc6bb07666df61a67355bc9d8820b62606c2eef062d2776766: Status 404 returned error can't find the container with id 82f3de28f4e602bc6bb07666df61a67355bc9d8820b62606c2eef062d2776766 Jan 29 06:45:18 crc kubenswrapper[4826]: I0129 06:45:18.086585 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-lzzb6" Jan 29 06:45:18 crc kubenswrapper[4826]: I0129 06:45:18.115185 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/community-operators-dbqfz"] Jan 29 06:45:18 crc kubenswrapper[4826]: I0129 06:45:18.135408 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmn6q" event={"ID":"d84323bb-bf74-4538-8cff-b507cb9b261d","Type":"ContainerStarted","Data":"9382e8aa609a0a4b952fd37c5764f24f87a328c8fb04035fe80cd225ee14f2ab"} Jan 29 06:45:18 crc kubenswrapper[4826]: I0129 06:45:18.135450 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmn6q" event={"ID":"d84323bb-bf74-4538-8cff-b507cb9b261d","Type":"ContainerStarted","Data":"c5a5916dc5412383ba1b1fb5534ee99b9ebeb7a738b3ec00020ccaa0e876b3ea"} Jan 29 06:45:18 crc kubenswrapper[4826]: I0129 06:45:18.138034 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" event={"ID":"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1","Type":"ContainerStarted","Data":"4cd3af2bccdd7658508f9d8bb94ab6156559fb89c7a83b3d1a5535d79018b9fc"} Jan 29 06:45:18 crc kubenswrapper[4826]: I0129 06:45:18.138056 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" event={"ID":"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1","Type":"ContainerStarted","Data":"b9a4fdee7181884fd7de12b0fd4f3b11ae2a0d024a96d5c82f7b349686274d48"} Jan 29 06:45:18 crc kubenswrapper[4826]: I0129 06:45:18.138182 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:18 crc kubenswrapper[4826]: I0129 06:45:18.138943 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-brqmx" event={"ID":"eb065b7c-e4d6-4607-aa51-e8acf00117fa","Type":"ContainerStarted","Data":"82f3de28f4e602bc6bb07666df61a67355bc9d8820b62606c2eef062d2776766"} Jan 29 06:45:18 crc kubenswrapper[4826]: I0129 06:45:18.179197 4826 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" podStartSLOduration=75.179179976 podStartE2EDuration="1m15.179179976s" podCreationTimestamp="2026-01-29 06:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:18.175885171 +0000 UTC m=+102.037678240" watchObservedRunningTime="2026-01-29 06:45:18.179179976 +0000 UTC m=+102.040973055" Jan 29 06:45:18 crc kubenswrapper[4826]: W0129 06:45:18.194541 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7087f9c0_baee_4f8f_a24d_14e337c4f32e.slice/crio-756641ebd5cea20c8173f30dadf2dc9ecddb9d493b96159e19a2a86ab9dd6114 WatchSource:0}: Error finding container 756641ebd5cea20c8173f30dadf2dc9ecddb9d493b96159e19a2a86ab9dd6114: Status 404 returned error can't find the container with id 756641ebd5cea20c8173f30dadf2dc9ecddb9d493b96159e19a2a86ab9dd6114 Jan 29 06:45:18 crc kubenswrapper[4826]: I0129 06:45:18.293271 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sfmlt"] Jan 29 06:45:18 crc kubenswrapper[4826]: I0129 06:45:18.312522 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 06:45:18 crc kubenswrapper[4826]: I0129 06:45:18.313202 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 06:45:18 crc kubenswrapper[4826]: I0129 06:45:18.316471 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 06:45:18 crc kubenswrapper[4826]: I0129 06:45:18.316505 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 29 06:45:18 crc kubenswrapper[4826]: I0129 06:45:18.319402 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 29 06:45:18 crc kubenswrapper[4826]: W0129 06:45:18.348149 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87b64822_a98d_4129_946c_b073cffe4f6c.slice/crio-d7177835ae17005e0f7ccad7d8a9ab6466ef052514f24eaf0a2758a46175091c WatchSource:0}: Error finding container d7177835ae17005e0f7ccad7d8a9ab6466ef052514f24eaf0a2758a46175091c: Status 404 returned error can't find the container with id d7177835ae17005e0f7ccad7d8a9ab6466ef052514f24eaf0a2758a46175091c Jan 29 06:45:18 crc kubenswrapper[4826]: I0129 06:45:18.442486 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2853e091-b8ae-45ca-9a9a-ed30f2916d98-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2853e091-b8ae-45ca-9a9a-ed30f2916d98\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 06:45:18 crc kubenswrapper[4826]: I0129 06:45:18.442687 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2853e091-b8ae-45ca-9a9a-ed30f2916d98-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2853e091-b8ae-45ca-9a9a-ed30f2916d98\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 06:45:18 crc kubenswrapper[4826]: I0129 06:45:18.544327 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2853e091-b8ae-45ca-9a9a-ed30f2916d98-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2853e091-b8ae-45ca-9a9a-ed30f2916d98\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 06:45:18 crc kubenswrapper[4826]: I0129 06:45:18.545011 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2853e091-b8ae-45ca-9a9a-ed30f2916d98-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2853e091-b8ae-45ca-9a9a-ed30f2916d98\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 06:45:18 crc kubenswrapper[4826]: I0129 06:45:18.545256 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2853e091-b8ae-45ca-9a9a-ed30f2916d98-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2853e091-b8ae-45ca-9a9a-ed30f2916d98\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 06:45:18 crc kubenswrapper[4826]: I0129 06:45:18.584817 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2853e091-b8ae-45ca-9a9a-ed30f2916d98-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2853e091-b8ae-45ca-9a9a-ed30f2916d98\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 06:45:18 crc kubenswrapper[4826]: I0129 06:45:18.823933 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 06:45:18 crc kubenswrapper[4826]: I0129 06:45:18.827431 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 29 06:45:18 crc kubenswrapper[4826]: I0129 06:45:18.855985 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-tpz64" Jan 29 06:45:18 crc kubenswrapper[4826]: I0129 06:45:18.856050 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-tpz64" Jan 29 06:45:18 crc kubenswrapper[4826]: I0129 06:45:18.867264 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-tpz64" Jan 29 06:45:18 crc kubenswrapper[4826]: I0129 06:45:18.961574 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-nbsns" Jan 29 06:45:18 crc kubenswrapper[4826]: I0129 06:45:18.965354 4826 patch_prober.go:28] interesting pod/router-default-5444994796-nbsns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 06:45:18 crc kubenswrapper[4826]: [-]has-synced failed: reason withheld Jan 29 06:45:18 crc kubenswrapper[4826]: [+]process-running ok Jan 29 06:45:18 crc kubenswrapper[4826]: healthz check failed Jan 29 06:45:18 crc kubenswrapper[4826]: I0129 06:45:18.965761 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbsns" podUID="618ab45f-146f-4e1a-a92b-aaa531cede89" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 06:45:18 crc kubenswrapper[4826]: I0129 06:45:18.980902 4826 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-t4qwq" Jan 29 06:45:18 crc kubenswrapper[4826]: I0129 06:45:18.980971 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-t4qwq" Jan 29 06:45:18 crc kubenswrapper[4826]: I0129 06:45:18.983043 4826 patch_prober.go:28] interesting pod/console-f9d7485db-t4qwq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 29 06:45:18 crc kubenswrapper[4826]: I0129 06:45:18.983114 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-t4qwq" podUID="9bc5b6b0-9626-4ae0-b053-1dae3c13dd47" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 29 06:45:18 crc kubenswrapper[4826]: I0129 06:45:18.983481 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vvbnx"] Jan 29 06:45:18 crc kubenswrapper[4826]: I0129 06:45:18.985105 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vvbnx" Jan 29 06:45:18 crc kubenswrapper[4826]: I0129 06:45:18.987269 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 06:45:18 crc kubenswrapper[4826]: I0129 06:45:18.994988 4826 patch_prober.go:28] interesting pod/downloads-7954f5f757-7wl5q container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 29 06:45:18 crc kubenswrapper[4826]: I0129 06:45:18.995053 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-7wl5q" podUID="adcf399e-d9c8-4495-9d73-458228398c5c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 29 06:45:18 crc kubenswrapper[4826]: I0129 06:45:18.995547 4826 patch_prober.go:28] interesting pod/downloads-7954f5f757-7wl5q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 29 06:45:18 crc kubenswrapper[4826]: I0129 06:45:18.995625 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7wl5q" podUID="adcf399e-d9c8-4495-9d73-458228398c5c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.000840 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vvbnx"] Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.053144 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/775bf475-9e63-49e0-9bde-bef34dce79c9-utilities\") pod \"redhat-marketplace-vvbnx\" (UID: \"775bf475-9e63-49e0-9bde-bef34dce79c9\") " pod="openshift-marketplace/redhat-marketplace-vvbnx" Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.053251 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/775bf475-9e63-49e0-9bde-bef34dce79c9-catalog-content\") pod \"redhat-marketplace-vvbnx\" (UID: \"775bf475-9e63-49e0-9bde-bef34dce79c9\") " pod="openshift-marketplace/redhat-marketplace-vvbnx" Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.053364 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjtzd\" (UniqueName: \"kubernetes.io/projected/775bf475-9e63-49e0-9bde-bef34dce79c9-kube-api-access-gjtzd\") pod \"redhat-marketplace-vvbnx\" (UID: \"775bf475-9e63-49e0-9bde-bef34dce79c9\") " pod="openshift-marketplace/redhat-marketplace-vvbnx" Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.089460 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.153918 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjtzd\" (UniqueName: \"kubernetes.io/projected/775bf475-9e63-49e0-9bde-bef34dce79c9-kube-api-access-gjtzd\") pod \"redhat-marketplace-vvbnx\" (UID: \"775bf475-9e63-49e0-9bde-bef34dce79c9\") " pod="openshift-marketplace/redhat-marketplace-vvbnx" Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.153984 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/775bf475-9e63-49e0-9bde-bef34dce79c9-utilities\") pod \"redhat-marketplace-vvbnx\" (UID: \"775bf475-9e63-49e0-9bde-bef34dce79c9\") " 
pod="openshift-marketplace/redhat-marketplace-vvbnx" Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.154032 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/775bf475-9e63-49e0-9bde-bef34dce79c9-catalog-content\") pod \"redhat-marketplace-vvbnx\" (UID: \"775bf475-9e63-49e0-9bde-bef34dce79c9\") " pod="openshift-marketplace/redhat-marketplace-vvbnx" Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.154947 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/775bf475-9e63-49e0-9bde-bef34dce79c9-catalog-content\") pod \"redhat-marketplace-vvbnx\" (UID: \"775bf475-9e63-49e0-9bde-bef34dce79c9\") " pod="openshift-marketplace/redhat-marketplace-vvbnx" Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.155142 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/775bf475-9e63-49e0-9bde-bef34dce79c9-utilities\") pod \"redhat-marketplace-vvbnx\" (UID: \"775bf475-9e63-49e0-9bde-bef34dce79c9\") " pod="openshift-marketplace/redhat-marketplace-vvbnx" Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.186621 4826 generic.go:334] "Generic (PLEG): container finished" podID="18a5f5c9-454a-476e-9bfb-1fe5abcf95f9" containerID="d3ad29132ec253628533207eb0006a9734d96a886c878edc8148e899ebeab089" exitCode=0 Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.186704 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494485-bfbgl" event={"ID":"18a5f5c9-454a-476e-9bfb-1fe5abcf95f9","Type":"ContainerDied","Data":"d3ad29132ec253628533207eb0006a9734d96a886c878edc8148e899ebeab089"} Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.189269 4826 generic.go:334] "Generic (PLEG): container finished" podID="eb065b7c-e4d6-4607-aa51-e8acf00117fa" 
containerID="c55cff804a612afac504c9d39496fadbef4898ffcb86b0e1c7800a3ae82d9c5d" exitCode=0 Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.189346 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-brqmx" event={"ID":"eb065b7c-e4d6-4607-aa51-e8acf00117fa","Type":"ContainerDied","Data":"c55cff804a612afac504c9d39496fadbef4898ffcb86b0e1c7800a3ae82d9c5d"} Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.193349 4826 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.193358 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2853e091-b8ae-45ca-9a9a-ed30f2916d98","Type":"ContainerStarted","Data":"a3456477cad715ac6685d7077da996402379cc1cd2c9fec63c99760b7b788b36"} Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.197718 4826 generic.go:334] "Generic (PLEG): container finished" podID="87b64822-a98d-4129-946c-b073cffe4f6c" containerID="fb5a11be0225127a995d30d63e988d63a493795f05d4bc9c94a8bdca101092e2" exitCode=0 Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.197781 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfmlt" event={"ID":"87b64822-a98d-4129-946c-b073cffe4f6c","Type":"ContainerDied","Data":"fb5a11be0225127a995d30d63e988d63a493795f05d4bc9c94a8bdca101092e2"} Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.197808 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfmlt" event={"ID":"87b64822-a98d-4129-946c-b073cffe4f6c","Type":"ContainerStarted","Data":"d7177835ae17005e0f7ccad7d8a9ab6466ef052514f24eaf0a2758a46175091c"} Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.206603 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjtzd\" (UniqueName: 
\"kubernetes.io/projected/775bf475-9e63-49e0-9bde-bef34dce79c9-kube-api-access-gjtzd\") pod \"redhat-marketplace-vvbnx\" (UID: \"775bf475-9e63-49e0-9bde-bef34dce79c9\") " pod="openshift-marketplace/redhat-marketplace-vvbnx" Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.214731 4826 generic.go:334] "Generic (PLEG): container finished" podID="7087f9c0-baee-4f8f-a24d-14e337c4f32e" containerID="a49bfc02e453a1f533c1a1ac1ae34fb60a1f1ed85bfc173bf8413e6bfa448a19" exitCode=0 Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.214783 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dbqfz" event={"ID":"7087f9c0-baee-4f8f-a24d-14e337c4f32e","Type":"ContainerDied","Data":"a49bfc02e453a1f533c1a1ac1ae34fb60a1f1ed85bfc173bf8413e6bfa448a19"} Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.214832 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dbqfz" event={"ID":"7087f9c0-baee-4f8f-a24d-14e337c4f32e","Type":"ContainerStarted","Data":"756641ebd5cea20c8173f30dadf2dc9ecddb9d493b96159e19a2a86ab9dd6114"} Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.217056 4826 generic.go:334] "Generic (PLEG): container finished" podID="d84323bb-bf74-4538-8cff-b507cb9b261d" containerID="9382e8aa609a0a4b952fd37c5764f24f87a328c8fb04035fe80cd225ee14f2ab" exitCode=0 Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.217196 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmn6q" event={"ID":"d84323bb-bf74-4538-8cff-b507cb9b261d","Type":"ContainerDied","Data":"9382e8aa609a0a4b952fd37c5764f24f87a328c8fb04035fe80cd225ee14f2ab"} Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.223226 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-tpz64" Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.307669 4826 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vvbnx" Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.380593 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rsfgw"] Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.382653 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rsfgw" Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.396509 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rsfgw"] Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.411145 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjdjf" Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.411195 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjdjf" Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.426108 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjdjf" Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.459465 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/737f9828-15a3-401e-aa29-e9467773637f-utilities\") pod \"redhat-marketplace-rsfgw\" (UID: \"737f9828-15a3-401e-aa29-e9467773637f\") " pod="openshift-marketplace/redhat-marketplace-rsfgw" Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.459568 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j79v\" (UniqueName: \"kubernetes.io/projected/737f9828-15a3-401e-aa29-e9467773637f-kube-api-access-2j79v\") pod \"redhat-marketplace-rsfgw\" (UID: \"737f9828-15a3-401e-aa29-e9467773637f\") " 
pod="openshift-marketplace/redhat-marketplace-rsfgw" Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.460128 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/737f9828-15a3-401e-aa29-e9467773637f-catalog-content\") pod \"redhat-marketplace-rsfgw\" (UID: \"737f9828-15a3-401e-aa29-e9467773637f\") " pod="openshift-marketplace/redhat-marketplace-rsfgw" Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.561259 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j79v\" (UniqueName: \"kubernetes.io/projected/737f9828-15a3-401e-aa29-e9467773637f-kube-api-access-2j79v\") pod \"redhat-marketplace-rsfgw\" (UID: \"737f9828-15a3-401e-aa29-e9467773637f\") " pod="openshift-marketplace/redhat-marketplace-rsfgw" Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.561351 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/737f9828-15a3-401e-aa29-e9467773637f-catalog-content\") pod \"redhat-marketplace-rsfgw\" (UID: \"737f9828-15a3-401e-aa29-e9467773637f\") " pod="openshift-marketplace/redhat-marketplace-rsfgw" Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.561463 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/737f9828-15a3-401e-aa29-e9467773637f-utilities\") pod \"redhat-marketplace-rsfgw\" (UID: \"737f9828-15a3-401e-aa29-e9467773637f\") " pod="openshift-marketplace/redhat-marketplace-rsfgw" Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.562242 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/737f9828-15a3-401e-aa29-e9467773637f-utilities\") pod \"redhat-marketplace-rsfgw\" (UID: \"737f9828-15a3-401e-aa29-e9467773637f\") " 
pod="openshift-marketplace/redhat-marketplace-rsfgw" Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.562611 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/737f9828-15a3-401e-aa29-e9467773637f-catalog-content\") pod \"redhat-marketplace-rsfgw\" (UID: \"737f9828-15a3-401e-aa29-e9467773637f\") " pod="openshift-marketplace/redhat-marketplace-rsfgw" Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.579426 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j79v\" (UniqueName: \"kubernetes.io/projected/737f9828-15a3-401e-aa29-e9467773637f-kube-api-access-2j79v\") pod \"redhat-marketplace-rsfgw\" (UID: \"737f9828-15a3-401e-aa29-e9467773637f\") " pod="openshift-marketplace/redhat-marketplace-rsfgw" Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.709542 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rsfgw" Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.738387 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-bmqnb" Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.822018 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vvbnx"] Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.952234 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rsfgw"] Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.964470 4826 patch_prober.go:28] interesting pod/router-default-5444994796-nbsns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 06:45:19 crc kubenswrapper[4826]: [-]has-synced failed: reason withheld Jan 29 06:45:19 crc 
kubenswrapper[4826]: [+]process-running ok Jan 29 06:45:19 crc kubenswrapper[4826]: healthz check failed Jan 29 06:45:19 crc kubenswrapper[4826]: I0129 06:45:19.964527 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbsns" podUID="618ab45f-146f-4e1a-a92b-aaa531cede89" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.181184 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xt89w"] Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.182725 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xt89w" Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.188467 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.196328 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xt89w"] Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.224136 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rsfgw" event={"ID":"737f9828-15a3-401e-aa29-e9467773637f","Type":"ContainerStarted","Data":"c0259a3ef32ec3d52105bb14d5ccd2878dc174e6630f0496243cd7aa7078a161"} Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.226410 4826 generic.go:334] "Generic (PLEG): container finished" podID="775bf475-9e63-49e0-9bde-bef34dce79c9" containerID="898ac7973139c03c3de08a31213d18800bb6c9037147321302205d22532e7482" exitCode=0 Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.226484 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vvbnx" 
event={"ID":"775bf475-9e63-49e0-9bde-bef34dce79c9","Type":"ContainerDied","Data":"898ac7973139c03c3de08a31213d18800bb6c9037147321302205d22532e7482"} Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.226541 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vvbnx" event={"ID":"775bf475-9e63-49e0-9bde-bef34dce79c9","Type":"ContainerStarted","Data":"5c22df299b9076d31cec4b7cd353ba23386b074977940ccced5af207703f1646"} Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.229920 4826 generic.go:334] "Generic (PLEG): container finished" podID="2853e091-b8ae-45ca-9a9a-ed30f2916d98" containerID="b3f2db46681b8f5051181f40e5a6069793399067f950e5d82fb341335d820178" exitCode=0 Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.230374 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2853e091-b8ae-45ca-9a9a-ed30f2916d98","Type":"ContainerDied","Data":"b3f2db46681b8f5051181f40e5a6069793399067f950e5d82fb341335d820178"} Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.240737 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hjdjf" Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.280138 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b7e32bd-0e0a-49fd-a29e-4c8087218b7a-catalog-content\") pod \"redhat-operators-xt89w\" (UID: \"2b7e32bd-0e0a-49fd-a29e-4c8087218b7a\") " pod="openshift-marketplace/redhat-operators-xt89w" Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.280232 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2r6w\" (UniqueName: \"kubernetes.io/projected/2b7e32bd-0e0a-49fd-a29e-4c8087218b7a-kube-api-access-f2r6w\") pod \"redhat-operators-xt89w\" (UID: 
\"2b7e32bd-0e0a-49fd-a29e-4c8087218b7a\") " pod="openshift-marketplace/redhat-operators-xt89w" Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.280255 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b7e32bd-0e0a-49fd-a29e-4c8087218b7a-utilities\") pod \"redhat-operators-xt89w\" (UID: \"2b7e32bd-0e0a-49fd-a29e-4c8087218b7a\") " pod="openshift-marketplace/redhat-operators-xt89w" Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.384891 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b7e32bd-0e0a-49fd-a29e-4c8087218b7a-catalog-content\") pod \"redhat-operators-xt89w\" (UID: \"2b7e32bd-0e0a-49fd-a29e-4c8087218b7a\") " pod="openshift-marketplace/redhat-operators-xt89w" Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.384944 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2r6w\" (UniqueName: \"kubernetes.io/projected/2b7e32bd-0e0a-49fd-a29e-4c8087218b7a-kube-api-access-f2r6w\") pod \"redhat-operators-xt89w\" (UID: \"2b7e32bd-0e0a-49fd-a29e-4c8087218b7a\") " pod="openshift-marketplace/redhat-operators-xt89w" Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.384964 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b7e32bd-0e0a-49fd-a29e-4c8087218b7a-utilities\") pod \"redhat-operators-xt89w\" (UID: \"2b7e32bd-0e0a-49fd-a29e-4c8087218b7a\") " pod="openshift-marketplace/redhat-operators-xt89w" Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.385488 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b7e32bd-0e0a-49fd-a29e-4c8087218b7a-utilities\") pod \"redhat-operators-xt89w\" (UID: \"2b7e32bd-0e0a-49fd-a29e-4c8087218b7a\") " 
pod="openshift-marketplace/redhat-operators-xt89w" Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.385706 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b7e32bd-0e0a-49fd-a29e-4c8087218b7a-catalog-content\") pod \"redhat-operators-xt89w\" (UID: \"2b7e32bd-0e0a-49fd-a29e-4c8087218b7a\") " pod="openshift-marketplace/redhat-operators-xt89w" Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.433211 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2r6w\" (UniqueName: \"kubernetes.io/projected/2b7e32bd-0e0a-49fd-a29e-4c8087218b7a-kube-api-access-f2r6w\") pod \"redhat-operators-xt89w\" (UID: \"2b7e32bd-0e0a-49fd-a29e-4c8087218b7a\") " pod="openshift-marketplace/redhat-operators-xt89w" Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.568412 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xt89w" Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.605571 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qb44z"] Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.606767 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qb44z" Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.606840 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qb44z"] Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.690677 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/832cf5b0-2b5a-4975-b147-8a1f08f08456-utilities\") pod \"redhat-operators-qb44z\" (UID: \"832cf5b0-2b5a-4975-b147-8a1f08f08456\") " pod="openshift-marketplace/redhat-operators-qb44z" Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.690728 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/832cf5b0-2b5a-4975-b147-8a1f08f08456-catalog-content\") pod \"redhat-operators-qb44z\" (UID: \"832cf5b0-2b5a-4975-b147-8a1f08f08456\") " pod="openshift-marketplace/redhat-operators-qb44z" Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.690744 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjf7n\" (UniqueName: \"kubernetes.io/projected/832cf5b0-2b5a-4975-b147-8a1f08f08456-kube-api-access-pjf7n\") pod \"redhat-operators-qb44z\" (UID: \"832cf5b0-2b5a-4975-b147-8a1f08f08456\") " pod="openshift-marketplace/redhat-operators-qb44z" Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.734047 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494485-bfbgl" Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.791849 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18a5f5c9-454a-476e-9bfb-1fe5abcf95f9-secret-volume\") pod \"18a5f5c9-454a-476e-9bfb-1fe5abcf95f9\" (UID: \"18a5f5c9-454a-476e-9bfb-1fe5abcf95f9\") " Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.791954 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18a5f5c9-454a-476e-9bfb-1fe5abcf95f9-config-volume\") pod \"18a5f5c9-454a-476e-9bfb-1fe5abcf95f9\" (UID: \"18a5f5c9-454a-476e-9bfb-1fe5abcf95f9\") " Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.791997 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nh669\" (UniqueName: \"kubernetes.io/projected/18a5f5c9-454a-476e-9bfb-1fe5abcf95f9-kube-api-access-nh669\") pod \"18a5f5c9-454a-476e-9bfb-1fe5abcf95f9\" (UID: \"18a5f5c9-454a-476e-9bfb-1fe5abcf95f9\") " Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.792195 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/832cf5b0-2b5a-4975-b147-8a1f08f08456-utilities\") pod \"redhat-operators-qb44z\" (UID: \"832cf5b0-2b5a-4975-b147-8a1f08f08456\") " pod="openshift-marketplace/redhat-operators-qb44z" Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.792235 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/832cf5b0-2b5a-4975-b147-8a1f08f08456-catalog-content\") pod \"redhat-operators-qb44z\" (UID: \"832cf5b0-2b5a-4975-b147-8a1f08f08456\") " pod="openshift-marketplace/redhat-operators-qb44z" Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 
06:45:20.792250 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjf7n\" (UniqueName: \"kubernetes.io/projected/832cf5b0-2b5a-4975-b147-8a1f08f08456-kube-api-access-pjf7n\") pod \"redhat-operators-qb44z\" (UID: \"832cf5b0-2b5a-4975-b147-8a1f08f08456\") " pod="openshift-marketplace/redhat-operators-qb44z" Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.793051 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18a5f5c9-454a-476e-9bfb-1fe5abcf95f9-config-volume" (OuterVolumeSpecName: "config-volume") pod "18a5f5c9-454a-476e-9bfb-1fe5abcf95f9" (UID: "18a5f5c9-454a-476e-9bfb-1fe5abcf95f9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.793220 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/832cf5b0-2b5a-4975-b147-8a1f08f08456-utilities\") pod \"redhat-operators-qb44z\" (UID: \"832cf5b0-2b5a-4975-b147-8a1f08f08456\") " pod="openshift-marketplace/redhat-operators-qb44z" Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.793259 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/832cf5b0-2b5a-4975-b147-8a1f08f08456-catalog-content\") pod \"redhat-operators-qb44z\" (UID: \"832cf5b0-2b5a-4975-b147-8a1f08f08456\") " pod="openshift-marketplace/redhat-operators-qb44z" Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.820453 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18a5f5c9-454a-476e-9bfb-1fe5abcf95f9-kube-api-access-nh669" (OuterVolumeSpecName: "kube-api-access-nh669") pod "18a5f5c9-454a-476e-9bfb-1fe5abcf95f9" (UID: "18a5f5c9-454a-476e-9bfb-1fe5abcf95f9"). InnerVolumeSpecName "kube-api-access-nh669". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.820566 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjf7n\" (UniqueName: \"kubernetes.io/projected/832cf5b0-2b5a-4975-b147-8a1f08f08456-kube-api-access-pjf7n\") pod \"redhat-operators-qb44z\" (UID: \"832cf5b0-2b5a-4975-b147-8a1f08f08456\") " pod="openshift-marketplace/redhat-operators-qb44z" Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.824968 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18a5f5c9-454a-476e-9bfb-1fe5abcf95f9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "18a5f5c9-454a-476e-9bfb-1fe5abcf95f9" (UID: "18a5f5c9-454a-476e-9bfb-1fe5abcf95f9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.894041 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nh669\" (UniqueName: \"kubernetes.io/projected/18a5f5c9-454a-476e-9bfb-1fe5abcf95f9-kube-api-access-nh669\") on node \"crc\" DevicePath \"\"" Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.894075 4826 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18a5f5c9-454a-476e-9bfb-1fe5abcf95f9-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.894086 4826 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18a5f5c9-454a-476e-9bfb-1fe5abcf95f9-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.951560 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qb44z" Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.967100 4826 patch_prober.go:28] interesting pod/router-default-5444994796-nbsns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 06:45:20 crc kubenswrapper[4826]: [-]has-synced failed: reason withheld Jan 29 06:45:20 crc kubenswrapper[4826]: [+]process-running ok Jan 29 06:45:20 crc kubenswrapper[4826]: healthz check failed Jan 29 06:45:20 crc kubenswrapper[4826]: I0129 06:45:20.967193 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbsns" podUID="618ab45f-146f-4e1a-a92b-aaa531cede89" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 06:45:21 crc kubenswrapper[4826]: I0129 06:45:21.140752 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xt89w"] Jan 29 06:45:21 crc kubenswrapper[4826]: W0129 06:45:21.170683 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b7e32bd_0e0a_49fd_a29e_4c8087218b7a.slice/crio-b720454e3f0bbda6945fcfa0dd7596f225953bdbd979ebb31dbbb1d426288bc0 WatchSource:0}: Error finding container b720454e3f0bbda6945fcfa0dd7596f225953bdbd979ebb31dbbb1d426288bc0: Status 404 returned error can't find the container with id b720454e3f0bbda6945fcfa0dd7596f225953bdbd979ebb31dbbb1d426288bc0 Jan 29 06:45:21 crc kubenswrapper[4826]: I0129 06:45:21.280414 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494485-bfbgl" Jan 29 06:45:21 crc kubenswrapper[4826]: I0129 06:45:21.280916 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494485-bfbgl" event={"ID":"18a5f5c9-454a-476e-9bfb-1fe5abcf95f9","Type":"ContainerDied","Data":"16bd16fc43032b0d5b9890111f7c3bd26d421e1466bb2f177673fb7d7738723e"} Jan 29 06:45:21 crc kubenswrapper[4826]: I0129 06:45:21.281154 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16bd16fc43032b0d5b9890111f7c3bd26d421e1466bb2f177673fb7d7738723e" Jan 29 06:45:21 crc kubenswrapper[4826]: I0129 06:45:21.286740 4826 generic.go:334] "Generic (PLEG): container finished" podID="737f9828-15a3-401e-aa29-e9467773637f" containerID="c4ece3877797c7a29cb8ce3a8da7e6a22d1f80a63d7e8cd6cba2cb07223f2a46" exitCode=0 Jan 29 06:45:21 crc kubenswrapper[4826]: I0129 06:45:21.287425 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rsfgw" event={"ID":"737f9828-15a3-401e-aa29-e9467773637f","Type":"ContainerDied","Data":"c4ece3877797c7a29cb8ce3a8da7e6a22d1f80a63d7e8cd6cba2cb07223f2a46"} Jan 29 06:45:21 crc kubenswrapper[4826]: I0129 06:45:21.299173 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xt89w" event={"ID":"2b7e32bd-0e0a-49fd-a29e-4c8087218b7a","Type":"ContainerStarted","Data":"b720454e3f0bbda6945fcfa0dd7596f225953bdbd979ebb31dbbb1d426288bc0"} Jan 29 06:45:21 crc kubenswrapper[4826]: I0129 06:45:21.401942 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qb44z"] Jan 29 06:45:21 crc kubenswrapper[4826]: W0129 06:45:21.416586 4826 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod832cf5b0_2b5a_4975_b147_8a1f08f08456.slice/crio-28db21333d28ca42ce84cdd16cba78d9d5e29a16f4c00c9231fe8bb51c621b05 WatchSource:0}: Error finding container 28db21333d28ca42ce84cdd16cba78d9d5e29a16f4c00c9231fe8bb51c621b05: Status 404 returned error can't find the container with id 28db21333d28ca42ce84cdd16cba78d9d5e29a16f4c00c9231fe8bb51c621b05 Jan 29 06:45:21 crc kubenswrapper[4826]: I0129 06:45:21.597138 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 06:45:21 crc kubenswrapper[4826]: I0129 06:45:21.718123 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2853e091-b8ae-45ca-9a9a-ed30f2916d98-kubelet-dir\") pod \"2853e091-b8ae-45ca-9a9a-ed30f2916d98\" (UID: \"2853e091-b8ae-45ca-9a9a-ed30f2916d98\") " Jan 29 06:45:21 crc kubenswrapper[4826]: I0129 06:45:21.718244 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2853e091-b8ae-45ca-9a9a-ed30f2916d98-kube-api-access\") pod \"2853e091-b8ae-45ca-9a9a-ed30f2916d98\" (UID: \"2853e091-b8ae-45ca-9a9a-ed30f2916d98\") " Jan 29 06:45:21 crc kubenswrapper[4826]: I0129 06:45:21.719625 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2853e091-b8ae-45ca-9a9a-ed30f2916d98-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2853e091-b8ae-45ca-9a9a-ed30f2916d98" (UID: "2853e091-b8ae-45ca-9a9a-ed30f2916d98"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:45:21 crc kubenswrapper[4826]: I0129 06:45:21.720208 4826 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2853e091-b8ae-45ca-9a9a-ed30f2916d98-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 06:45:21 crc kubenswrapper[4826]: I0129 06:45:21.728911 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2853e091-b8ae-45ca-9a9a-ed30f2916d98-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2853e091-b8ae-45ca-9a9a-ed30f2916d98" (UID: "2853e091-b8ae-45ca-9a9a-ed30f2916d98"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:45:21 crc kubenswrapper[4826]: I0129 06:45:21.824064 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11d649f8-dcd0-4c52-96f1-f5c229546376-metrics-certs\") pod \"network-metrics-daemon-6qxzb\" (UID: \"11d649f8-dcd0-4c52-96f1-f5c229546376\") " pod="openshift-multus/network-metrics-daemon-6qxzb" Jan 29 06:45:21 crc kubenswrapper[4826]: I0129 06:45:21.830127 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2853e091-b8ae-45ca-9a9a-ed30f2916d98-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 06:45:21 crc kubenswrapper[4826]: I0129 06:45:21.831936 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11d649f8-dcd0-4c52-96f1-f5c229546376-metrics-certs\") pod \"network-metrics-daemon-6qxzb\" (UID: \"11d649f8-dcd0-4c52-96f1-f5c229546376\") " pod="openshift-multus/network-metrics-daemon-6qxzb" Jan 29 06:45:21 crc kubenswrapper[4826]: I0129 06:45:21.949208 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6qxzb" Jan 29 06:45:21 crc kubenswrapper[4826]: I0129 06:45:21.966377 4826 patch_prober.go:28] interesting pod/router-default-5444994796-nbsns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 06:45:21 crc kubenswrapper[4826]: [-]has-synced failed: reason withheld Jan 29 06:45:21 crc kubenswrapper[4826]: [+]process-running ok Jan 29 06:45:21 crc kubenswrapper[4826]: healthz check failed Jan 29 06:45:21 crc kubenswrapper[4826]: I0129 06:45:21.966474 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbsns" podUID="618ab45f-146f-4e1a-a92b-aaa531cede89" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 06:45:22 crc kubenswrapper[4826]: I0129 06:45:22.251746 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-6qxzb"] Jan 29 06:45:22 crc kubenswrapper[4826]: I0129 06:45:22.332692 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-6qxzb" event={"ID":"11d649f8-dcd0-4c52-96f1-f5c229546376","Type":"ContainerStarted","Data":"287db1531f428cb513f95e131e17f1fe287b1c4baf8685f3d70012b8ed2f14b1"} Jan 29 06:45:22 crc kubenswrapper[4826]: I0129 06:45:22.335494 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qb44z" event={"ID":"832cf5b0-2b5a-4975-b147-8a1f08f08456","Type":"ContainerDied","Data":"1f80edadedbb32c8fb5132c32eef7b9c460e928ced345bdd987c1cc167d3326c"} Jan 29 06:45:22 crc kubenswrapper[4826]: I0129 06:45:22.335352 4826 generic.go:334] "Generic (PLEG): container finished" podID="832cf5b0-2b5a-4975-b147-8a1f08f08456" containerID="1f80edadedbb32c8fb5132c32eef7b9c460e928ced345bdd987c1cc167d3326c" exitCode=0 Jan 29 06:45:22 crc 
kubenswrapper[4826]: I0129 06:45:22.335952 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qb44z" event={"ID":"832cf5b0-2b5a-4975-b147-8a1f08f08456","Type":"ContainerStarted","Data":"28db21333d28ca42ce84cdd16cba78d9d5e29a16f4c00c9231fe8bb51c621b05"} Jan 29 06:45:22 crc kubenswrapper[4826]: I0129 06:45:22.339779 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 06:45:22 crc kubenswrapper[4826]: I0129 06:45:22.339788 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2853e091-b8ae-45ca-9a9a-ed30f2916d98","Type":"ContainerDied","Data":"a3456477cad715ac6685d7077da996402379cc1cd2c9fec63c99760b7b788b36"} Jan 29 06:45:22 crc kubenswrapper[4826]: I0129 06:45:22.339918 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3456477cad715ac6685d7077da996402379cc1cd2c9fec63c99760b7b788b36" Jan 29 06:45:22 crc kubenswrapper[4826]: I0129 06:45:22.349032 4826 generic.go:334] "Generic (PLEG): container finished" podID="2b7e32bd-0e0a-49fd-a29e-4c8087218b7a" containerID="b8a10173bc4e7821db0e0fb791583089acb8b1c14a3d19219aa89c620b683a17" exitCode=0 Jan 29 06:45:22 crc kubenswrapper[4826]: I0129 06:45:22.349074 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xt89w" event={"ID":"2b7e32bd-0e0a-49fd-a29e-4c8087218b7a","Type":"ContainerDied","Data":"b8a10173bc4e7821db0e0fb791583089acb8b1c14a3d19219aa89c620b683a17"} Jan 29 06:45:22 crc kubenswrapper[4826]: I0129 06:45:22.393899 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 06:45:22 crc kubenswrapper[4826]: E0129 06:45:22.394154 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2853e091-b8ae-45ca-9a9a-ed30f2916d98" containerName="pruner" Jan 
29 06:45:22 crc kubenswrapper[4826]: I0129 06:45:22.394170 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="2853e091-b8ae-45ca-9a9a-ed30f2916d98" containerName="pruner" Jan 29 06:45:22 crc kubenswrapper[4826]: E0129 06:45:22.394202 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a5f5c9-454a-476e-9bfb-1fe5abcf95f9" containerName="collect-profiles" Jan 29 06:45:22 crc kubenswrapper[4826]: I0129 06:45:22.394209 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a5f5c9-454a-476e-9bfb-1fe5abcf95f9" containerName="collect-profiles" Jan 29 06:45:22 crc kubenswrapper[4826]: I0129 06:45:22.394341 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="2853e091-b8ae-45ca-9a9a-ed30f2916d98" containerName="pruner" Jan 29 06:45:22 crc kubenswrapper[4826]: I0129 06:45:22.394362 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a5f5c9-454a-476e-9bfb-1fe5abcf95f9" containerName="collect-profiles" Jan 29 06:45:22 crc kubenswrapper[4826]: I0129 06:45:22.396216 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 06:45:22 crc kubenswrapper[4826]: I0129 06:45:22.397731 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 29 06:45:22 crc kubenswrapper[4826]: I0129 06:45:22.398201 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 29 06:45:22 crc kubenswrapper[4826]: I0129 06:45:22.411380 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 06:45:22 crc kubenswrapper[4826]: I0129 06:45:22.441425 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e27ccc28-dd7d-4763-a4e2-136a863e875a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e27ccc28-dd7d-4763-a4e2-136a863e875a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 06:45:22 crc kubenswrapper[4826]: I0129 06:45:22.441704 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e27ccc28-dd7d-4763-a4e2-136a863e875a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e27ccc28-dd7d-4763-a4e2-136a863e875a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 06:45:22 crc kubenswrapper[4826]: I0129 06:45:22.542430 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e27ccc28-dd7d-4763-a4e2-136a863e875a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e27ccc28-dd7d-4763-a4e2-136a863e875a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 06:45:22 crc kubenswrapper[4826]: I0129 06:45:22.542481 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e27ccc28-dd7d-4763-a4e2-136a863e875a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e27ccc28-dd7d-4763-a4e2-136a863e875a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 06:45:22 crc kubenswrapper[4826]: I0129 06:45:22.542743 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e27ccc28-dd7d-4763-a4e2-136a863e875a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e27ccc28-dd7d-4763-a4e2-136a863e875a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 06:45:22 crc kubenswrapper[4826]: I0129 06:45:22.560923 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e27ccc28-dd7d-4763-a4e2-136a863e875a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e27ccc28-dd7d-4763-a4e2-136a863e875a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 06:45:22 crc kubenswrapper[4826]: I0129 06:45:22.724035 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 06:45:22 crc kubenswrapper[4826]: I0129 06:45:22.964955 4826 patch_prober.go:28] interesting pod/router-default-5444994796-nbsns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 06:45:22 crc kubenswrapper[4826]: [-]has-synced failed: reason withheld Jan 29 06:45:22 crc kubenswrapper[4826]: [+]process-running ok Jan 29 06:45:22 crc kubenswrapper[4826]: healthz check failed Jan 29 06:45:22 crc kubenswrapper[4826]: I0129 06:45:22.965359 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbsns" podUID="618ab45f-146f-4e1a-a92b-aaa531cede89" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 06:45:23 crc kubenswrapper[4826]: I0129 06:45:23.361939 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 06:45:23 crc kubenswrapper[4826]: I0129 06:45:23.385797 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-6qxzb" event={"ID":"11d649f8-dcd0-4c52-96f1-f5c229546376","Type":"ContainerStarted","Data":"f2ae22d9aa7e81150deca622763399d1ac13d3cb977b66179963b337046cf2c3"} Jan 29 06:45:23 crc kubenswrapper[4826]: W0129 06:45:23.400706 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode27ccc28_dd7d_4763_a4e2_136a863e875a.slice/crio-48c988e8ef449f5ecffe76c1f36a130e51576e65d29b69e9afbd339a5ba435bb WatchSource:0}: Error finding container 48c988e8ef449f5ecffe76c1f36a130e51576e65d29b69e9afbd339a5ba435bb: Status 404 returned error can't find the container with id 48c988e8ef449f5ecffe76c1f36a130e51576e65d29b69e9afbd339a5ba435bb Jan 29 06:45:23 crc kubenswrapper[4826]: I0129 06:45:23.597699 4826 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:45:23 crc kubenswrapper[4826]: I0129 06:45:23.965074 4826 patch_prober.go:28] interesting pod/router-default-5444994796-nbsns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 06:45:23 crc kubenswrapper[4826]: [-]has-synced failed: reason withheld Jan 29 06:45:23 crc kubenswrapper[4826]: [+]process-running ok Jan 29 06:45:23 crc kubenswrapper[4826]: healthz check failed Jan 29 06:45:23 crc kubenswrapper[4826]: I0129 06:45:23.965474 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbsns" podUID="618ab45f-146f-4e1a-a92b-aaa531cede89" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 06:45:24 crc kubenswrapper[4826]: I0129 06:45:24.419525 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-6qxzb" event={"ID":"11d649f8-dcd0-4c52-96f1-f5c229546376","Type":"ContainerStarted","Data":"e4666d3b6266a8e73acec4b885373851cfb27c6e174d1e0898718c0304ca1f45"} Jan 29 06:45:24 crc kubenswrapper[4826]: I0129 06:45:24.453039 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e27ccc28-dd7d-4763-a4e2-136a863e875a","Type":"ContainerStarted","Data":"a086d594cfb75cb410a6abc001d814a1d5efe6b85072a0895ba9464cf6fd86f8"} Jan 29 06:45:24 crc kubenswrapper[4826]: I0129 06:45:24.453089 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e27ccc28-dd7d-4763-a4e2-136a863e875a","Type":"ContainerStarted","Data":"48c988e8ef449f5ecffe76c1f36a130e51576e65d29b69e9afbd339a5ba435bb"} Jan 29 06:45:24 crc kubenswrapper[4826]: I0129 06:45:24.480557 4826 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-6qxzb" podStartSLOduration=81.480532405 podStartE2EDuration="1m21.480532405s" podCreationTimestamp="2026-01-29 06:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:24.445795556 +0000 UTC m=+108.307588625" watchObservedRunningTime="2026-01-29 06:45:24.480532405 +0000 UTC m=+108.342325474" Jan 29 06:45:24 crc kubenswrapper[4826]: I0129 06:45:24.830192 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-2957f" Jan 29 06:45:24 crc kubenswrapper[4826]: I0129 06:45:24.853679 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.853654921 podStartE2EDuration="2.853654921s" podCreationTimestamp="2026-01-29 06:45:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:24.483334737 +0000 UTC m=+108.345127806" watchObservedRunningTime="2026-01-29 06:45:24.853654921 +0000 UTC m=+108.715447990" Jan 29 06:45:24 crc kubenswrapper[4826]: I0129 06:45:24.964371 4826 patch_prober.go:28] interesting pod/router-default-5444994796-nbsns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 06:45:24 crc kubenswrapper[4826]: [-]has-synced failed: reason withheld Jan 29 06:45:24 crc kubenswrapper[4826]: [+]process-running ok Jan 29 06:45:24 crc kubenswrapper[4826]: healthz check failed Jan 29 06:45:24 crc kubenswrapper[4826]: I0129 06:45:24.964446 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbsns" podUID="618ab45f-146f-4e1a-a92b-aaa531cede89" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 06:45:25 crc kubenswrapper[4826]: I0129 06:45:25.467248 4826 generic.go:334] "Generic (PLEG): container finished" podID="e27ccc28-dd7d-4763-a4e2-136a863e875a" containerID="a086d594cfb75cb410a6abc001d814a1d5efe6b85072a0895ba9464cf6fd86f8" exitCode=0 Jan 29 06:45:25 crc kubenswrapper[4826]: I0129 06:45:25.467359 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e27ccc28-dd7d-4763-a4e2-136a863e875a","Type":"ContainerDied","Data":"a086d594cfb75cb410a6abc001d814a1d5efe6b85072a0895ba9464cf6fd86f8"} Jan 29 06:45:25 crc kubenswrapper[4826]: I0129 06:45:25.965095 4826 patch_prober.go:28] interesting pod/router-default-5444994796-nbsns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 06:45:25 crc kubenswrapper[4826]: [-]has-synced failed: reason withheld Jan 29 06:45:25 crc kubenswrapper[4826]: [+]process-running ok Jan 29 06:45:25 crc kubenswrapper[4826]: healthz check failed Jan 29 06:45:25 crc kubenswrapper[4826]: I0129 06:45:25.965158 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbsns" podUID="618ab45f-146f-4e1a-a92b-aaa531cede89" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 06:45:26 crc kubenswrapper[4826]: I0129 06:45:26.965512 4826 patch_prober.go:28] interesting pod/router-default-5444994796-nbsns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 06:45:26 crc kubenswrapper[4826]: [-]has-synced failed: reason withheld Jan 29 06:45:26 crc kubenswrapper[4826]: [+]process-running ok Jan 29 06:45:26 crc kubenswrapper[4826]: 
healthz check failed Jan 29 06:45:26 crc kubenswrapper[4826]: I0129 06:45:26.966144 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbsns" podUID="618ab45f-146f-4e1a-a92b-aaa531cede89" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 06:45:27 crc kubenswrapper[4826]: I0129 06:45:27.965331 4826 patch_prober.go:28] interesting pod/router-default-5444994796-nbsns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 06:45:27 crc kubenswrapper[4826]: [-]has-synced failed: reason withheld Jan 29 06:45:27 crc kubenswrapper[4826]: [+]process-running ok Jan 29 06:45:27 crc kubenswrapper[4826]: healthz check failed Jan 29 06:45:27 crc kubenswrapper[4826]: I0129 06:45:27.966442 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbsns" podUID="618ab45f-146f-4e1a-a92b-aaa531cede89" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 06:45:28 crc kubenswrapper[4826]: I0129 06:45:28.969157 4826 patch_prober.go:28] interesting pod/router-default-5444994796-nbsns container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 06:45:28 crc kubenswrapper[4826]: [-]has-synced failed: reason withheld Jan 29 06:45:28 crc kubenswrapper[4826]: [+]process-running ok Jan 29 06:45:28 crc kubenswrapper[4826]: healthz check failed Jan 29 06:45:28 crc kubenswrapper[4826]: I0129 06:45:28.969610 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nbsns" podUID="618ab45f-146f-4e1a-a92b-aaa531cede89" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 
29 06:45:28 crc kubenswrapper[4826]: I0129 06:45:28.981569 4826 patch_prober.go:28] interesting pod/console-f9d7485db-t4qwq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 29 06:45:28 crc kubenswrapper[4826]: I0129 06:45:28.981649 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-t4qwq" podUID="9bc5b6b0-9626-4ae0-b053-1dae3c13dd47" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 29 06:45:29 crc kubenswrapper[4826]: I0129 06:45:29.001641 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-7wl5q" Jan 29 06:45:29 crc kubenswrapper[4826]: I0129 06:45:29.964154 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-nbsns" Jan 29 06:45:29 crc kubenswrapper[4826]: I0129 06:45:29.971870 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-nbsns" Jan 29 06:45:34 crc kubenswrapper[4826]: I0129 06:45:34.047042 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 06:45:34 crc kubenswrapper[4826]: I0129 06:45:34.206722 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e27ccc28-dd7d-4763-a4e2-136a863e875a-kube-api-access\") pod \"e27ccc28-dd7d-4763-a4e2-136a863e875a\" (UID: \"e27ccc28-dd7d-4763-a4e2-136a863e875a\") " Jan 29 06:45:34 crc kubenswrapper[4826]: I0129 06:45:34.206852 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e27ccc28-dd7d-4763-a4e2-136a863e875a-kubelet-dir\") pod \"e27ccc28-dd7d-4763-a4e2-136a863e875a\" (UID: \"e27ccc28-dd7d-4763-a4e2-136a863e875a\") " Jan 29 06:45:34 crc kubenswrapper[4826]: I0129 06:45:34.207051 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e27ccc28-dd7d-4763-a4e2-136a863e875a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e27ccc28-dd7d-4763-a4e2-136a863e875a" (UID: "e27ccc28-dd7d-4763-a4e2-136a863e875a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:45:34 crc kubenswrapper[4826]: I0129 06:45:34.207879 4826 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e27ccc28-dd7d-4763-a4e2-136a863e875a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 06:45:34 crc kubenswrapper[4826]: I0129 06:45:34.214435 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e27ccc28-dd7d-4763-a4e2-136a863e875a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e27ccc28-dd7d-4763-a4e2-136a863e875a" (UID: "e27ccc28-dd7d-4763-a4e2-136a863e875a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:45:34 crc kubenswrapper[4826]: I0129 06:45:34.309116 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e27ccc28-dd7d-4763-a4e2-136a863e875a-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 06:45:34 crc kubenswrapper[4826]: I0129 06:45:34.633385 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e27ccc28-dd7d-4763-a4e2-136a863e875a","Type":"ContainerDied","Data":"48c988e8ef449f5ecffe76c1f36a130e51576e65d29b69e9afbd339a5ba435bb"} Jan 29 06:45:34 crc kubenswrapper[4826]: I0129 06:45:34.633821 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48c988e8ef449f5ecffe76c1f36a130e51576e65d29b69e9afbd339a5ba435bb" Jan 29 06:45:34 crc kubenswrapper[4826]: I0129 06:45:34.633468 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 06:45:36 crc kubenswrapper[4826]: I0129 06:45:36.162152 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lzzb6"] Jan 29 06:45:36 crc kubenswrapper[4826]: I0129 06:45:36.162507 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-lzzb6" podUID="41546796-854f-46bf-9b24-e2b51d6890a5" containerName="controller-manager" containerID="cri-o://1a508eba68d65eb48fd7e47fbff16b8c1b72ae13f3bca726438510453cc47a3b" gracePeriod=30 Jan 29 06:45:36 crc kubenswrapper[4826]: I0129 06:45:36.206426 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sg4dq"] Jan 29 06:45:36 crc kubenswrapper[4826]: I0129 06:45:36.206682 4826 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sg4dq" podUID="1a23bf6a-86ab-4319-8a5a-e447509ac03f" containerName="route-controller-manager" containerID="cri-o://09296d0e5163240a0291d332a10c8e35f6a1e0007ac4678615888527cd484ca4" gracePeriod=30 Jan 29 06:45:37 crc kubenswrapper[4826]: I0129 06:45:37.387378 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:45:37 crc kubenswrapper[4826]: I0129 06:45:37.655930 4826 generic.go:334] "Generic (PLEG): container finished" podID="41546796-854f-46bf-9b24-e2b51d6890a5" containerID="1a508eba68d65eb48fd7e47fbff16b8c1b72ae13f3bca726438510453cc47a3b" exitCode=0 Jan 29 06:45:37 crc kubenswrapper[4826]: I0129 06:45:37.656006 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lzzb6" event={"ID":"41546796-854f-46bf-9b24-e2b51d6890a5","Type":"ContainerDied","Data":"1a508eba68d65eb48fd7e47fbff16b8c1b72ae13f3bca726438510453cc47a3b"} Jan 29 06:45:37 crc kubenswrapper[4826]: I0129 06:45:37.657956 4826 generic.go:334] "Generic (PLEG): container finished" podID="1a23bf6a-86ab-4319-8a5a-e447509ac03f" containerID="09296d0e5163240a0291d332a10c8e35f6a1e0007ac4678615888527cd484ca4" exitCode=0 Jan 29 06:45:37 crc kubenswrapper[4826]: I0129 06:45:37.657993 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sg4dq" event={"ID":"1a23bf6a-86ab-4319-8a5a-e447509ac03f","Type":"ContainerDied","Data":"09296d0e5163240a0291d332a10c8e35f6a1e0007ac4678615888527cd484ca4"} Jan 29 06:45:38 crc kubenswrapper[4826]: I0129 06:45:38.082450 4826 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-lzzb6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: 
connection refused" start-of-body= Jan 29 06:45:38 crc kubenswrapper[4826]: I0129 06:45:38.082533 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-lzzb6" podUID="41546796-854f-46bf-9b24-e2b51d6890a5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 29 06:45:38 crc kubenswrapper[4826]: I0129 06:45:38.993801 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-t4qwq" Jan 29 06:45:39 crc kubenswrapper[4826]: I0129 06:45:39.005437 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-t4qwq" Jan 29 06:45:40 crc kubenswrapper[4826]: I0129 06:45:40.065345 4826 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-sg4dq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 06:45:40 crc kubenswrapper[4826]: I0129 06:45:40.065407 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sg4dq" podUID="1a23bf6a-86ab-4319-8a5a-e447509ac03f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.153921 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lzzb6" Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.208704 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7c58f68dfb-swjwl"] Jan 29 06:45:43 crc kubenswrapper[4826]: E0129 06:45:43.209818 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41546796-854f-46bf-9b24-e2b51d6890a5" containerName="controller-manager" Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.209856 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="41546796-854f-46bf-9b24-e2b51d6890a5" containerName="controller-manager" Jan 29 06:45:43 crc kubenswrapper[4826]: E0129 06:45:43.209888 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e27ccc28-dd7d-4763-a4e2-136a863e875a" containerName="pruner" Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.209902 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="e27ccc28-dd7d-4763-a4e2-136a863e875a" containerName="pruner" Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.210085 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="e27ccc28-dd7d-4763-a4e2-136a863e875a" containerName="pruner" Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.210244 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="41546796-854f-46bf-9b24-e2b51d6890a5" containerName="controller-manager" Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.210943 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c58f68dfb-swjwl" Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.211488 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c58f68dfb-swjwl"] Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.287600 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2svg\" (UniqueName: \"kubernetes.io/projected/41546796-854f-46bf-9b24-e2b51d6890a5-kube-api-access-p2svg\") pod \"41546796-854f-46bf-9b24-e2b51d6890a5\" (UID: \"41546796-854f-46bf-9b24-e2b51d6890a5\") " Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.287675 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41546796-854f-46bf-9b24-e2b51d6890a5-serving-cert\") pod \"41546796-854f-46bf-9b24-e2b51d6890a5\" (UID: \"41546796-854f-46bf-9b24-e2b51d6890a5\") " Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.287724 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41546796-854f-46bf-9b24-e2b51d6890a5-config\") pod \"41546796-854f-46bf-9b24-e2b51d6890a5\" (UID: \"41546796-854f-46bf-9b24-e2b51d6890a5\") " Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.287787 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41546796-854f-46bf-9b24-e2b51d6890a5-client-ca\") pod \"41546796-854f-46bf-9b24-e2b51d6890a5\" (UID: \"41546796-854f-46bf-9b24-e2b51d6890a5\") " Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.287881 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/41546796-854f-46bf-9b24-e2b51d6890a5-proxy-ca-bundles\") pod \"41546796-854f-46bf-9b24-e2b51d6890a5\" (UID: 
\"41546796-854f-46bf-9b24-e2b51d6890a5\") " Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.289157 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41546796-854f-46bf-9b24-e2b51d6890a5-client-ca" (OuterVolumeSpecName: "client-ca") pod "41546796-854f-46bf-9b24-e2b51d6890a5" (UID: "41546796-854f-46bf-9b24-e2b51d6890a5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.289814 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41546796-854f-46bf-9b24-e2b51d6890a5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "41546796-854f-46bf-9b24-e2b51d6890a5" (UID: "41546796-854f-46bf-9b24-e2b51d6890a5"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.289626 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41546796-854f-46bf-9b24-e2b51d6890a5-config" (OuterVolumeSpecName: "config") pod "41546796-854f-46bf-9b24-e2b51d6890a5" (UID: "41546796-854f-46bf-9b24-e2b51d6890a5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.296615 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41546796-854f-46bf-9b24-e2b51d6890a5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "41546796-854f-46bf-9b24-e2b51d6890a5" (UID: "41546796-854f-46bf-9b24-e2b51d6890a5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.318735 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41546796-854f-46bf-9b24-e2b51d6890a5-kube-api-access-p2svg" (OuterVolumeSpecName: "kube-api-access-p2svg") pod "41546796-854f-46bf-9b24-e2b51d6890a5" (UID: "41546796-854f-46bf-9b24-e2b51d6890a5"). InnerVolumeSpecName "kube-api-access-p2svg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.390119 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75-client-ca\") pod \"controller-manager-7c58f68dfb-swjwl\" (UID: \"1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75\") " pod="openshift-controller-manager/controller-manager-7c58f68dfb-swjwl" Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.390246 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75-proxy-ca-bundles\") pod \"controller-manager-7c58f68dfb-swjwl\" (UID: \"1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75\") " pod="openshift-controller-manager/controller-manager-7c58f68dfb-swjwl" Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.390422 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75-config\") pod \"controller-manager-7c58f68dfb-swjwl\" (UID: \"1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75\") " pod="openshift-controller-manager/controller-manager-7c58f68dfb-swjwl" Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.390476 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-9tgzc\" (UniqueName: \"kubernetes.io/projected/1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75-kube-api-access-9tgzc\") pod \"controller-manager-7c58f68dfb-swjwl\" (UID: \"1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75\") " pod="openshift-controller-manager/controller-manager-7c58f68dfb-swjwl" Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.390565 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75-serving-cert\") pod \"controller-manager-7c58f68dfb-swjwl\" (UID: \"1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75\") " pod="openshift-controller-manager/controller-manager-7c58f68dfb-swjwl" Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.390653 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2svg\" (UniqueName: \"kubernetes.io/projected/41546796-854f-46bf-9b24-e2b51d6890a5-kube-api-access-p2svg\") on node \"crc\" DevicePath \"\"" Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.390670 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41546796-854f-46bf-9b24-e2b51d6890a5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.390685 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41546796-854f-46bf-9b24-e2b51d6890a5-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.390698 4826 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41546796-854f-46bf-9b24-e2b51d6890a5-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.390710 4826 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/41546796-854f-46bf-9b24-e2b51d6890a5-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.492047 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75-config\") pod \"controller-manager-7c58f68dfb-swjwl\" (UID: \"1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75\") " pod="openshift-controller-manager/controller-manager-7c58f68dfb-swjwl" Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.492129 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tgzc\" (UniqueName: \"kubernetes.io/projected/1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75-kube-api-access-9tgzc\") pod \"controller-manager-7c58f68dfb-swjwl\" (UID: \"1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75\") " pod="openshift-controller-manager/controller-manager-7c58f68dfb-swjwl" Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.492243 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75-serving-cert\") pod \"controller-manager-7c58f68dfb-swjwl\" (UID: \"1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75\") " pod="openshift-controller-manager/controller-manager-7c58f68dfb-swjwl" Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.492393 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75-client-ca\") pod \"controller-manager-7c58f68dfb-swjwl\" (UID: \"1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75\") " pod="openshift-controller-manager/controller-manager-7c58f68dfb-swjwl" Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.492439 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75-proxy-ca-bundles\") pod \"controller-manager-7c58f68dfb-swjwl\" (UID: \"1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75\") " pod="openshift-controller-manager/controller-manager-7c58f68dfb-swjwl" Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.493956 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75-proxy-ca-bundles\") pod \"controller-manager-7c58f68dfb-swjwl\" (UID: \"1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75\") " pod="openshift-controller-manager/controller-manager-7c58f68dfb-swjwl" Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.494159 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75-client-ca\") pod \"controller-manager-7c58f68dfb-swjwl\" (UID: \"1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75\") " pod="openshift-controller-manager/controller-manager-7c58f68dfb-swjwl" Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.494544 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75-config\") pod \"controller-manager-7c58f68dfb-swjwl\" (UID: \"1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75\") " pod="openshift-controller-manager/controller-manager-7c58f68dfb-swjwl" Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.512991 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75-serving-cert\") pod \"controller-manager-7c58f68dfb-swjwl\" (UID: \"1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75\") " pod="openshift-controller-manager/controller-manager-7c58f68dfb-swjwl" Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.520787 4826 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-9tgzc\" (UniqueName: \"kubernetes.io/projected/1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75-kube-api-access-9tgzc\") pod \"controller-manager-7c58f68dfb-swjwl\" (UID: \"1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75\") " pod="openshift-controller-manager/controller-manager-7c58f68dfb-swjwl" Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.528348 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c58f68dfb-swjwl" Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.702574 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lzzb6" event={"ID":"41546796-854f-46bf-9b24-e2b51d6890a5","Type":"ContainerDied","Data":"f1b0ed153339c219a77124bca0ebce991432a946bd6fb917282bc677fbc5d7a2"} Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.702689 4826 scope.go:117] "RemoveContainer" containerID="1a508eba68d65eb48fd7e47fbff16b8c1b72ae13f3bca726438510453cc47a3b" Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.702709 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lzzb6" Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.765528 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lzzb6"] Jan 29 06:45:43 crc kubenswrapper[4826]: I0129 06:45:43.775599 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lzzb6"] Jan 29 06:45:44 crc kubenswrapper[4826]: I0129 06:45:44.817835 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41546796-854f-46bf-9b24-e2b51d6890a5" path="/var/lib/kubelet/pods/41546796-854f-46bf-9b24-e2b51d6890a5/volumes" Jan 29 06:45:50 crc kubenswrapper[4826]: I0129 06:45:50.065807 4826 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-sg4dq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 06:45:50 crc kubenswrapper[4826]: I0129 06:45:50.066340 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sg4dq" podUID="1a23bf6a-86ab-4319-8a5a-e447509ac03f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 06:45:50 crc kubenswrapper[4826]: E0129 06:45:50.117427 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 06:45:50 crc kubenswrapper[4826]: E0129 06:45:50.117647 4826 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqh8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-sfmlt_openshift-marketplace(87b64822-a98d-4129-946c-b073cffe4f6c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 06:45:50 crc kubenswrapper[4826]: E0129 06:45:50.118858 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-sfmlt" podUID="87b64822-a98d-4129-946c-b073cffe4f6c" Jan 29 06:45:50 crc kubenswrapper[4826]: I0129 06:45:50.692125 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cwg5c" Jan 29 06:45:54 crc kubenswrapper[4826]: E0129 06:45:54.131788 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-sfmlt" podUID="87b64822-a98d-4129-946c-b073cffe4f6c" Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.203117 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sg4dq" Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.245595 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ff8755c47-p7z4k"] Jan 29 06:45:54 crc kubenswrapper[4826]: E0129 06:45:54.246030 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a23bf6a-86ab-4319-8a5a-e447509ac03f" containerName="route-controller-manager" Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.246056 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a23bf6a-86ab-4319-8a5a-e447509ac03f" containerName="route-controller-manager" Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.246217 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a23bf6a-86ab-4319-8a5a-e447509ac03f" containerName="route-controller-manager" Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.246839 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-p7z4k" Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.263888 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ff8755c47-p7z4k"] Jan 29 06:45:54 crc kubenswrapper[4826]: E0129 06:45:54.364328 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 06:45:54 crc kubenswrapper[4826]: E0129 06:45:54.364873 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pjf7n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-qb44z_openshift-marketplace(832cf5b0-2b5a-4975-b147-8a1f08f08456): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 06:45:54 crc kubenswrapper[4826]: E0129 06:45:54.366251 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-qb44z" podUID="832cf5b0-2b5a-4975-b147-8a1f08f08456" Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.370590 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a23bf6a-86ab-4319-8a5a-e447509ac03f-client-ca\") pod \"1a23bf6a-86ab-4319-8a5a-e447509ac03f\" (UID: \"1a23bf6a-86ab-4319-8a5a-e447509ac03f\") " Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.370662 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a23bf6a-86ab-4319-8a5a-e447509ac03f-serving-cert\") pod \"1a23bf6a-86ab-4319-8a5a-e447509ac03f\" (UID: \"1a23bf6a-86ab-4319-8a5a-e447509ac03f\") " Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.370772 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8r9qj\" (UniqueName: \"kubernetes.io/projected/1a23bf6a-86ab-4319-8a5a-e447509ac03f-kube-api-access-8r9qj\") pod \"1a23bf6a-86ab-4319-8a5a-e447509ac03f\" (UID: \"1a23bf6a-86ab-4319-8a5a-e447509ac03f\") " Jan 29 06:45:54 
crc kubenswrapper[4826]: I0129 06:45:54.370843 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a23bf6a-86ab-4319-8a5a-e447509ac03f-config\") pod \"1a23bf6a-86ab-4319-8a5a-e447509ac03f\" (UID: \"1a23bf6a-86ab-4319-8a5a-e447509ac03f\") " Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.371137 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36b5569b-b83d-4395-b5ac-ee4a97bc913a-serving-cert\") pod \"route-controller-manager-5ff8755c47-p7z4k\" (UID: \"36b5569b-b83d-4395-b5ac-ee4a97bc913a\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-p7z4k" Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.371216 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/36b5569b-b83d-4395-b5ac-ee4a97bc913a-client-ca\") pod \"route-controller-manager-5ff8755c47-p7z4k\" (UID: \"36b5569b-b83d-4395-b5ac-ee4a97bc913a\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-p7z4k" Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.371269 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzxtc\" (UniqueName: \"kubernetes.io/projected/36b5569b-b83d-4395-b5ac-ee4a97bc913a-kube-api-access-fzxtc\") pod \"route-controller-manager-5ff8755c47-p7z4k\" (UID: \"36b5569b-b83d-4395-b5ac-ee4a97bc913a\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-p7z4k" Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.371351 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b5569b-b83d-4395-b5ac-ee4a97bc913a-config\") pod 
\"route-controller-manager-5ff8755c47-p7z4k\" (UID: \"36b5569b-b83d-4395-b5ac-ee4a97bc913a\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-p7z4k" Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.371987 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a23bf6a-86ab-4319-8a5a-e447509ac03f-client-ca" (OuterVolumeSpecName: "client-ca") pod "1a23bf6a-86ab-4319-8a5a-e447509ac03f" (UID: "1a23bf6a-86ab-4319-8a5a-e447509ac03f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.372815 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a23bf6a-86ab-4319-8a5a-e447509ac03f-config" (OuterVolumeSpecName: "config") pod "1a23bf6a-86ab-4319-8a5a-e447509ac03f" (UID: "1a23bf6a-86ab-4319-8a5a-e447509ac03f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:45:54 crc kubenswrapper[4826]: E0129 06:45:54.374509 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 06:45:54 crc kubenswrapper[4826]: E0129 06:45:54.374711 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f2r6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-xt89w_openshift-marketplace(2b7e32bd-0e0a-49fd-a29e-4c8087218b7a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 06:45:54 crc kubenswrapper[4826]: E0129 06:45:54.375964 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-xt89w" podUID="2b7e32bd-0e0a-49fd-a29e-4c8087218b7a" Jan 29 06:45:54 crc 
kubenswrapper[4826]: I0129 06:45:54.379819 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a23bf6a-86ab-4319-8a5a-e447509ac03f-kube-api-access-8r9qj" (OuterVolumeSpecName: "kube-api-access-8r9qj") pod "1a23bf6a-86ab-4319-8a5a-e447509ac03f" (UID: "1a23bf6a-86ab-4319-8a5a-e447509ac03f"). InnerVolumeSpecName "kube-api-access-8r9qj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.380254 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a23bf6a-86ab-4319-8a5a-e447509ac03f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1a23bf6a-86ab-4319-8a5a-e447509ac03f" (UID: "1a23bf6a-86ab-4319-8a5a-e447509ac03f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:45:54 crc kubenswrapper[4826]: E0129 06:45:54.381315 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 06:45:54 crc kubenswrapper[4826]: E0129 06:45:54.381476 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2j79v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-rsfgw_openshift-marketplace(737f9828-15a3-401e-aa29-e9467773637f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 06:45:54 crc kubenswrapper[4826]: E0129 06:45:54.383818 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-rsfgw" podUID="737f9828-15a3-401e-aa29-e9467773637f" Jan 29 06:45:54 crc 
kubenswrapper[4826]: E0129 06:45:54.399515 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 06:45:54 crc kubenswrapper[4826]: E0129 06:45:54.399698 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-crv66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-brqmx_openshift-marketplace(eb065b7c-e4d6-4607-aa51-e8acf00117fa): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 06:45:54 crc kubenswrapper[4826]: E0129 06:45:54.400923 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-brqmx" podUID="eb065b7c-e4d6-4607-aa51-e8acf00117fa" Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.472367 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/36b5569b-b83d-4395-b5ac-ee4a97bc913a-client-ca\") pod \"route-controller-manager-5ff8755c47-p7z4k\" (UID: \"36b5569b-b83d-4395-b5ac-ee4a97bc913a\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-p7z4k" Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.472450 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzxtc\" (UniqueName: \"kubernetes.io/projected/36b5569b-b83d-4395-b5ac-ee4a97bc913a-kube-api-access-fzxtc\") pod \"route-controller-manager-5ff8755c47-p7z4k\" (UID: \"36b5569b-b83d-4395-b5ac-ee4a97bc913a\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-p7z4k" Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.472514 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b5569b-b83d-4395-b5ac-ee4a97bc913a-config\") pod \"route-controller-manager-5ff8755c47-p7z4k\" (UID: \"36b5569b-b83d-4395-b5ac-ee4a97bc913a\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-p7z4k" Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 
06:45:54.472549 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36b5569b-b83d-4395-b5ac-ee4a97bc913a-serving-cert\") pod \"route-controller-manager-5ff8755c47-p7z4k\" (UID: \"36b5569b-b83d-4395-b5ac-ee4a97bc913a\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-p7z4k" Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.472610 4826 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a23bf6a-86ab-4319-8a5a-e447509ac03f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.473810 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b5569b-b83d-4395-b5ac-ee4a97bc913a-config\") pod \"route-controller-manager-5ff8755c47-p7z4k\" (UID: \"36b5569b-b83d-4395-b5ac-ee4a97bc913a\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-p7z4k" Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.474033 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/36b5569b-b83d-4395-b5ac-ee4a97bc913a-client-ca\") pod \"route-controller-manager-5ff8755c47-p7z4k\" (UID: \"36b5569b-b83d-4395-b5ac-ee4a97bc913a\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-p7z4k" Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.474500 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a23bf6a-86ab-4319-8a5a-e447509ac03f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.474958 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8r9qj\" (UniqueName: \"kubernetes.io/projected/1a23bf6a-86ab-4319-8a5a-e447509ac03f-kube-api-access-8r9qj\") on node 
\"crc\" DevicePath \"\"" Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.474987 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a23bf6a-86ab-4319-8a5a-e447509ac03f-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.481717 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36b5569b-b83d-4395-b5ac-ee4a97bc913a-serving-cert\") pod \"route-controller-manager-5ff8755c47-p7z4k\" (UID: \"36b5569b-b83d-4395-b5ac-ee4a97bc913a\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-p7z4k" Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.492927 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzxtc\" (UniqueName: \"kubernetes.io/projected/36b5569b-b83d-4395-b5ac-ee4a97bc913a-kube-api-access-fzxtc\") pod \"route-controller-manager-5ff8755c47-p7z4k\" (UID: \"36b5569b-b83d-4395-b5ac-ee4a97bc913a\") " pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-p7z4k" Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.577222 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-p7z4k" Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.674258 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c58f68dfb-swjwl"] Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.761725 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ff8755c47-p7z4k"] Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.816892 4826 generic.go:334] "Generic (PLEG): container finished" podID="7087f9c0-baee-4f8f-a24d-14e337c4f32e" containerID="831cee523f1372dc992301b96ce7d9f5811b99907e982923d6fa4bb33941dbea" exitCode=0 Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.830658 4826 generic.go:334] "Generic (PLEG): container finished" podID="d84323bb-bf74-4538-8cff-b507cb9b261d" containerID="ef83680496d460896de72721893d3b78d6d9ca46519d0631cc6a3f2f70a719ed" exitCode=0 Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.832488 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dbqfz" event={"ID":"7087f9c0-baee-4f8f-a24d-14e337c4f32e","Type":"ContainerDied","Data":"831cee523f1372dc992301b96ce7d9f5811b99907e982923d6fa4bb33941dbea"} Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.832513 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmn6q" event={"ID":"d84323bb-bf74-4538-8cff-b507cb9b261d","Type":"ContainerDied","Data":"ef83680496d460896de72721893d3b78d6d9ca46519d0631cc6a3f2f70a719ed"} Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.848975 4826 generic.go:334] "Generic (PLEG): container finished" podID="775bf475-9e63-49e0-9bde-bef34dce79c9" containerID="c9ef883ef9db777685b949347e7f0bc07c24255c673366692dc014d0673ec2b9" exitCode=0 Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.849023 4826 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vvbnx" event={"ID":"775bf475-9e63-49e0-9bde-bef34dce79c9","Type":"ContainerDied","Data":"c9ef883ef9db777685b949347e7f0bc07c24255c673366692dc014d0673ec2b9"} Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.862331 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sg4dq" event={"ID":"1a23bf6a-86ab-4319-8a5a-e447509ac03f","Type":"ContainerDied","Data":"1e1db5e52c6027c8576d7a22ba92f9c3829cff619bb68c7c87560be852dfeabf"} Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.862923 4826 scope.go:117] "RemoveContainer" containerID="09296d0e5163240a0291d332a10c8e35f6a1e0007ac4678615888527cd484ca4" Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.862757 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sg4dq" Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.880490 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c58f68dfb-swjwl" event={"ID":"1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75","Type":"ContainerStarted","Data":"ba5324b04eaa919cc3056cc60240f38be2837a9cff41ea8ad29b7be2333b2490"} Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.880550 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7c58f68dfb-swjwl" Jan 29 06:45:54 crc kubenswrapper[4826]: E0129 06:45:54.881954 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-rsfgw" podUID="737f9828-15a3-401e-aa29-e9467773637f" Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.882901 4826 
patch_prober.go:28] interesting pod/controller-manager-7c58f68dfb-swjwl container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body= Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.882936 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7c58f68dfb-swjwl" podUID="1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" Jan 29 06:45:54 crc kubenswrapper[4826]: E0129 06:45:54.884269 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-xt89w" podUID="2b7e32bd-0e0a-49fd-a29e-4c8087218b7a" Jan 29 06:45:54 crc kubenswrapper[4826]: E0129 06:45:54.885358 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-brqmx" podUID="eb065b7c-e4d6-4607-aa51-e8acf00117fa" Jan 29 06:45:54 crc kubenswrapper[4826]: E0129 06:45:54.885538 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-qb44z" podUID="832cf5b0-2b5a-4975-b147-8a1f08f08456" Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.963598 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sg4dq"] Jan 29 06:45:54 crc kubenswrapper[4826]: I0129 06:45:54.966894 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sg4dq"] Jan 29 06:45:55 crc kubenswrapper[4826]: I0129 06:45:55.003858 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7c58f68dfb-swjwl" podStartSLOduration=19.003833755 podStartE2EDuration="19.003833755s" podCreationTimestamp="2026-01-29 06:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:54.995540718 +0000 UTC m=+138.857333787" watchObservedRunningTime="2026-01-29 06:45:55.003833755 +0000 UTC m=+138.865626824" Jan 29 06:45:55 crc kubenswrapper[4826]: I0129 06:45:55.886917 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c58f68dfb-swjwl" event={"ID":"1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75","Type":"ContainerStarted","Data":"e896a5da1fa3cdef35225909e0d6a508c5441a4d72b41cda43b316ece9139412"} Jan 29 06:45:55 crc kubenswrapper[4826]: I0129 06:45:55.889112 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dbqfz" event={"ID":"7087f9c0-baee-4f8f-a24d-14e337c4f32e","Type":"ContainerStarted","Data":"7dba4dd068334c8f3f4f9c22e8d107fae776e7a95bd29ef94282c44ae828dc8c"} Jan 29 06:45:55 crc kubenswrapper[4826]: I0129 06:45:55.891922 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7c58f68dfb-swjwl" Jan 29 06:45:55 crc kubenswrapper[4826]: I0129 06:45:55.893238 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-p7z4k" 
event={"ID":"36b5569b-b83d-4395-b5ac-ee4a97bc913a","Type":"ContainerStarted","Data":"6888e3c4f1d20a776d99aa34ed4e0c8ff20148c4a4c8d77c85e39d70d5308ccb"} Jan 29 06:45:55 crc kubenswrapper[4826]: I0129 06:45:55.893290 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-p7z4k" event={"ID":"36b5569b-b83d-4395-b5ac-ee4a97bc913a","Type":"ContainerStarted","Data":"c3258dfefe2bf4f057f7a17bf4c2f45c4167a50ec14499556b1b68173bd98082"} Jan 29 06:45:55 crc kubenswrapper[4826]: I0129 06:45:55.893461 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-p7z4k" Jan 29 06:45:55 crc kubenswrapper[4826]: I0129 06:45:55.895565 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmn6q" event={"ID":"d84323bb-bf74-4538-8cff-b507cb9b261d","Type":"ContainerStarted","Data":"78b8cbdb5ea73d6bbd4b1371a4d7f2a056ce489f4cda0047214a0673ccadc7c6"} Jan 29 06:45:55 crc kubenswrapper[4826]: I0129 06:45:55.897871 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vvbnx" event={"ID":"775bf475-9e63-49e0-9bde-bef34dce79c9","Type":"ContainerStarted","Data":"4a9f2e7beae972051dca2ae472c02d8a51bbb70ccee034b2e3b68cae66197bfd"} Jan 29 06:45:55 crc kubenswrapper[4826]: I0129 06:45:55.908258 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-p7z4k" Jan 29 06:45:55 crc kubenswrapper[4826]: I0129 06:45:55.909896 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dbqfz" podStartSLOduration=2.859451558 podStartE2EDuration="38.90987398s" podCreationTimestamp="2026-01-29 06:45:17 +0000 UTC" firstStartedPulling="2026-01-29 06:45:19.217009765 +0000 UTC m=+103.078802834" 
lastFinishedPulling="2026-01-29 06:45:55.267432187 +0000 UTC m=+139.129225256" observedRunningTime="2026-01-29 06:45:55.904614876 +0000 UTC m=+139.766407945" watchObservedRunningTime="2026-01-29 06:45:55.90987398 +0000 UTC m=+139.771667049" Jan 29 06:45:55 crc kubenswrapper[4826]: I0129 06:45:55.928053 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vvbnx" podStartSLOduration=2.837561495 podStartE2EDuration="37.928028232s" podCreationTimestamp="2026-01-29 06:45:18 +0000 UTC" firstStartedPulling="2026-01-29 06:45:20.236633038 +0000 UTC m=+104.098426107" lastFinishedPulling="2026-01-29 06:45:55.327099775 +0000 UTC m=+139.188892844" observedRunningTime="2026-01-29 06:45:55.924945889 +0000 UTC m=+139.786738958" watchObservedRunningTime="2026-01-29 06:45:55.928028232 +0000 UTC m=+139.789821301" Jan 29 06:45:55 crc kubenswrapper[4826]: I0129 06:45:55.963834 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-p7z4k" podStartSLOduration=19.963810152 podStartE2EDuration="19.963810152s" podCreationTimestamp="2026-01-29 06:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:55.958203919 +0000 UTC m=+139.819996988" watchObservedRunningTime="2026-01-29 06:45:55.963810152 +0000 UTC m=+139.825603221" Jan 29 06:45:55 crc kubenswrapper[4826]: I0129 06:45:55.985637 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dmn6q" podStartSLOduration=3.887275765 podStartE2EDuration="39.98561579s" podCreationTimestamp="2026-01-29 06:45:16 +0000 UTC" firstStartedPulling="2026-01-29 06:45:19.219680164 +0000 UTC m=+103.081473233" lastFinishedPulling="2026-01-29 06:45:55.318020189 +0000 UTC m=+139.179813258" observedRunningTime="2026-01-29 06:45:55.984207396 
+0000 UTC m=+139.846000465" watchObservedRunningTime="2026-01-29 06:45:55.98561579 +0000 UTC m=+139.847408859" Jan 29 06:45:56 crc kubenswrapper[4826]: I0129 06:45:56.138239 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c58f68dfb-swjwl"] Jan 29 06:45:56 crc kubenswrapper[4826]: I0129 06:45:56.246704 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ff8755c47-p7z4k"] Jan 29 06:45:56 crc kubenswrapper[4826]: I0129 06:45:56.817478 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a23bf6a-86ab-4319-8a5a-e447509ac03f" path="/var/lib/kubelet/pods/1a23bf6a-86ab-4319-8a5a-e447509ac03f/volumes" Jan 29 06:45:57 crc kubenswrapper[4826]: I0129 06:45:57.345734 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dmn6q" Jan 29 06:45:57 crc kubenswrapper[4826]: I0129 06:45:57.345799 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dmn6q" Jan 29 06:45:57 crc kubenswrapper[4826]: I0129 06:45:57.757669 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dbqfz" Jan 29 06:45:57 crc kubenswrapper[4826]: I0129 06:45:57.758065 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dbqfz" Jan 29 06:45:57 crc kubenswrapper[4826]: I0129 06:45:57.909770 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7c58f68dfb-swjwl" podUID="1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75" containerName="controller-manager" containerID="cri-o://e896a5da1fa3cdef35225909e0d6a508c5441a4d72b41cda43b316ece9139412" gracePeriod=30 Jan 29 06:45:57 crc kubenswrapper[4826]: I0129 06:45:57.912412 4826 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-p7z4k" podUID="36b5569b-b83d-4395-b5ac-ee4a97bc913a" containerName="route-controller-manager" containerID="cri-o://6888e3c4f1d20a776d99aa34ed4e0c8ff20148c4a4c8d77c85e39d70d5308ccb" gracePeriod=30 Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.382437 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.383675 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.384766 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.385503 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.385907 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.393953 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-p7z4k" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.418275 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c58f68dfb-swjwl" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.432563 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77bf6848c-4tntr"] Jan 29 06:45:58 crc kubenswrapper[4826]: E0129 06:45:58.432908 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75" containerName="controller-manager" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.432921 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75" containerName="controller-manager" Jan 29 06:45:58 crc kubenswrapper[4826]: E0129 06:45:58.432940 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36b5569b-b83d-4395-b5ac-ee4a97bc913a" containerName="route-controller-manager" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.432947 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="36b5569b-b83d-4395-b5ac-ee4a97bc913a" containerName="route-controller-manager" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.433067 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75" containerName="controller-manager" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.433077 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="36b5569b-b83d-4395-b5ac-ee4a97bc913a" containerName="route-controller-manager" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.434325 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77bf6848c-4tntr" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.448688 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77bf6848c-4tntr"] Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.495598 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-dmn6q" podUID="d84323bb-bf74-4538-8cff-b507cb9b261d" containerName="registry-server" probeResult="failure" output=< Jan 29 06:45:58 crc kubenswrapper[4826]: timeout: failed to connect service ":50051" within 1s Jan 29 06:45:58 crc kubenswrapper[4826]: > Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.549138 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tgzc\" (UniqueName: \"kubernetes.io/projected/1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75-kube-api-access-9tgzc\") pod \"1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75\" (UID: \"1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75\") " Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.549179 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/36b5569b-b83d-4395-b5ac-ee4a97bc913a-client-ca\") pod \"36b5569b-b83d-4395-b5ac-ee4a97bc913a\" (UID: \"36b5569b-b83d-4395-b5ac-ee4a97bc913a\") " Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.549204 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzxtc\" (UniqueName: \"kubernetes.io/projected/36b5569b-b83d-4395-b5ac-ee4a97bc913a-kube-api-access-fzxtc\") pod \"36b5569b-b83d-4395-b5ac-ee4a97bc913a\" (UID: \"36b5569b-b83d-4395-b5ac-ee4a97bc913a\") " Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.549236 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75-serving-cert\") pod \"1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75\" (UID: \"1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75\") " Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.549267 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75-proxy-ca-bundles\") pod \"1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75\" (UID: \"1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75\") " Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.549351 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75-client-ca\") pod \"1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75\" (UID: \"1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75\") " Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.549379 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b5569b-b83d-4395-b5ac-ee4a97bc913a-config\") pod \"36b5569b-b83d-4395-b5ac-ee4a97bc913a\" (UID: \"36b5569b-b83d-4395-b5ac-ee4a97bc913a\") " Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.549399 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36b5569b-b83d-4395-b5ac-ee4a97bc913a-serving-cert\") pod \"36b5569b-b83d-4395-b5ac-ee4a97bc913a\" (UID: \"36b5569b-b83d-4395-b5ac-ee4a97bc913a\") " Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.549419 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75-config\") pod \"1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75\" (UID: \"1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75\") " Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.549533 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9aae8684-210d-4247-bdb7-483ec7bd6684-client-ca\") pod \"route-controller-manager-77bf6848c-4tntr\" (UID: \"9aae8684-210d-4247-bdb7-483ec7bd6684\") " pod="openshift-route-controller-manager/route-controller-manager-77bf6848c-4tntr" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.549557 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aae8684-210d-4247-bdb7-483ec7bd6684-config\") pod \"route-controller-manager-77bf6848c-4tntr\" (UID: \"9aae8684-210d-4247-bdb7-483ec7bd6684\") " pod="openshift-route-controller-manager/route-controller-manager-77bf6848c-4tntr" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.549578 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxsc8\" (UniqueName: \"kubernetes.io/projected/9aae8684-210d-4247-bdb7-483ec7bd6684-kube-api-access-cxsc8\") pod \"route-controller-manager-77bf6848c-4tntr\" (UID: \"9aae8684-210d-4247-bdb7-483ec7bd6684\") " pod="openshift-route-controller-manager/route-controller-manager-77bf6848c-4tntr" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.549600 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/401827cd-ae09-49af-a66e-2c94c7d00d01-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"401827cd-ae09-49af-a66e-2c94c7d00d01\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.549638 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aae8684-210d-4247-bdb7-483ec7bd6684-serving-cert\") pod \"route-controller-manager-77bf6848c-4tntr\" (UID: 
\"9aae8684-210d-4247-bdb7-483ec7bd6684\") " pod="openshift-route-controller-manager/route-controller-manager-77bf6848c-4tntr" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.549656 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/401827cd-ae09-49af-a66e-2c94c7d00d01-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"401827cd-ae09-49af-a66e-2c94c7d00d01\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.550204 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75-client-ca" (OuterVolumeSpecName: "client-ca") pod "1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75" (UID: "1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.550353 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36b5569b-b83d-4395-b5ac-ee4a97bc913a-config" (OuterVolumeSpecName: "config") pod "36b5569b-b83d-4395-b5ac-ee4a97bc913a" (UID: "36b5569b-b83d-4395-b5ac-ee4a97bc913a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.550719 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75" (UID: "1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.550963 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75-config" (OuterVolumeSpecName: "config") pod "1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75" (UID: "1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.551099 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36b5569b-b83d-4395-b5ac-ee4a97bc913a-client-ca" (OuterVolumeSpecName: "client-ca") pod "36b5569b-b83d-4395-b5ac-ee4a97bc913a" (UID: "36b5569b-b83d-4395-b5ac-ee4a97bc913a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.555379 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36b5569b-b83d-4395-b5ac-ee4a97bc913a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "36b5569b-b83d-4395-b5ac-ee4a97bc913a" (UID: "36b5569b-b83d-4395-b5ac-ee4a97bc913a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.555636 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75-kube-api-access-9tgzc" (OuterVolumeSpecName: "kube-api-access-9tgzc") pod "1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75" (UID: "1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75"). InnerVolumeSpecName "kube-api-access-9tgzc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.556411 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36b5569b-b83d-4395-b5ac-ee4a97bc913a-kube-api-access-fzxtc" (OuterVolumeSpecName: "kube-api-access-fzxtc") pod "36b5569b-b83d-4395-b5ac-ee4a97bc913a" (UID: "36b5569b-b83d-4395-b5ac-ee4a97bc913a"). InnerVolumeSpecName "kube-api-access-fzxtc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.556758 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75" (UID: "1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.651498 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9aae8684-210d-4247-bdb7-483ec7bd6684-client-ca\") pod \"route-controller-manager-77bf6848c-4tntr\" (UID: \"9aae8684-210d-4247-bdb7-483ec7bd6684\") " pod="openshift-route-controller-manager/route-controller-manager-77bf6848c-4tntr" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.651567 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aae8684-210d-4247-bdb7-483ec7bd6684-config\") pod \"route-controller-manager-77bf6848c-4tntr\" (UID: \"9aae8684-210d-4247-bdb7-483ec7bd6684\") " pod="openshift-route-controller-manager/route-controller-manager-77bf6848c-4tntr" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.651587 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxsc8\" (UniqueName: 
\"kubernetes.io/projected/9aae8684-210d-4247-bdb7-483ec7bd6684-kube-api-access-cxsc8\") pod \"route-controller-manager-77bf6848c-4tntr\" (UID: \"9aae8684-210d-4247-bdb7-483ec7bd6684\") " pod="openshift-route-controller-manager/route-controller-manager-77bf6848c-4tntr" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.651613 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/401827cd-ae09-49af-a66e-2c94c7d00d01-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"401827cd-ae09-49af-a66e-2c94c7d00d01\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.651641 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aae8684-210d-4247-bdb7-483ec7bd6684-serving-cert\") pod \"route-controller-manager-77bf6848c-4tntr\" (UID: \"9aae8684-210d-4247-bdb7-483ec7bd6684\") " pod="openshift-route-controller-manager/route-controller-manager-77bf6848c-4tntr" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.651657 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/401827cd-ae09-49af-a66e-2c94c7d00d01-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"401827cd-ae09-49af-a66e-2c94c7d00d01\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.651723 4826 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/36b5569b-b83d-4395-b5ac-ee4a97bc913a-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.651734 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzxtc\" (UniqueName: \"kubernetes.io/projected/36b5569b-b83d-4395-b5ac-ee4a97bc913a-kube-api-access-fzxtc\") on node \"crc\" 
DevicePath \"\"" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.651745 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.651754 4826 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.651766 4826 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.651775 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b5569b-b83d-4395-b5ac-ee4a97bc913a-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.651782 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36b5569b-b83d-4395-b5ac-ee4a97bc913a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.651790 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.651801 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tgzc\" (UniqueName: \"kubernetes.io/projected/1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75-kube-api-access-9tgzc\") on node \"crc\" DevicePath \"\"" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.651847 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/401827cd-ae09-49af-a66e-2c94c7d00d01-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"401827cd-ae09-49af-a66e-2c94c7d00d01\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.654133 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aae8684-210d-4247-bdb7-483ec7bd6684-config\") pod \"route-controller-manager-77bf6848c-4tntr\" (UID: \"9aae8684-210d-4247-bdb7-483ec7bd6684\") " pod="openshift-route-controller-manager/route-controller-manager-77bf6848c-4tntr" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.654240 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9aae8684-210d-4247-bdb7-483ec7bd6684-client-ca\") pod \"route-controller-manager-77bf6848c-4tntr\" (UID: \"9aae8684-210d-4247-bdb7-483ec7bd6684\") " pod="openshift-route-controller-manager/route-controller-manager-77bf6848c-4tntr" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.666246 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aae8684-210d-4247-bdb7-483ec7bd6684-serving-cert\") pod \"route-controller-manager-77bf6848c-4tntr\" (UID: \"9aae8684-210d-4247-bdb7-483ec7bd6684\") " pod="openshift-route-controller-manager/route-controller-manager-77bf6848c-4tntr" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.669148 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/401827cd-ae09-49af-a66e-2c94c7d00d01-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"401827cd-ae09-49af-a66e-2c94c7d00d01\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.680012 4826 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-cxsc8\" (UniqueName: \"kubernetes.io/projected/9aae8684-210d-4247-bdb7-483ec7bd6684-kube-api-access-cxsc8\") pod \"route-controller-manager-77bf6848c-4tntr\" (UID: \"9aae8684-210d-4247-bdb7-483ec7bd6684\") " pod="openshift-route-controller-manager/route-controller-manager-77bf6848c-4tntr" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.733918 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.749250 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77bf6848c-4tntr" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.805627 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-dbqfz" podUID="7087f9c0-baee-4f8f-a24d-14e337c4f32e" containerName="registry-server" probeResult="failure" output=< Jan 29 06:45:58 crc kubenswrapper[4826]: timeout: failed to connect service ":50051" within 1s Jan 29 06:45:58 crc kubenswrapper[4826]: > Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.918641 4826 generic.go:334] "Generic (PLEG): container finished" podID="1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75" containerID="e896a5da1fa3cdef35225909e0d6a508c5441a4d72b41cda43b316ece9139412" exitCode=0 Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.918739 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c58f68dfb-swjwl" event={"ID":"1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75","Type":"ContainerDied","Data":"e896a5da1fa3cdef35225909e0d6a508c5441a4d72b41cda43b316ece9139412"} Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.919092 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c58f68dfb-swjwl" 
event={"ID":"1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75","Type":"ContainerDied","Data":"ba5324b04eaa919cc3056cc60240f38be2837a9cff41ea8ad29b7be2333b2490"} Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.919134 4826 scope.go:117] "RemoveContainer" containerID="e896a5da1fa3cdef35225909e0d6a508c5441a4d72b41cda43b316ece9139412" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.918764 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c58f68dfb-swjwl" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.927467 4826 generic.go:334] "Generic (PLEG): container finished" podID="36b5569b-b83d-4395-b5ac-ee4a97bc913a" containerID="6888e3c4f1d20a776d99aa34ed4e0c8ff20148c4a4c8d77c85e39d70d5308ccb" exitCode=0 Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.927512 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-p7z4k" event={"ID":"36b5569b-b83d-4395-b5ac-ee4a97bc913a","Type":"ContainerDied","Data":"6888e3c4f1d20a776d99aa34ed4e0c8ff20148c4a4c8d77c85e39d70d5308ccb"} Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.927541 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-p7z4k" event={"ID":"36b5569b-b83d-4395-b5ac-ee4a97bc913a","Type":"ContainerDied","Data":"c3258dfefe2bf4f057f7a17bf4c2f45c4167a50ec14499556b1b68173bd98082"} Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.927643 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5ff8755c47-p7z4k" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.949713 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c58f68dfb-swjwl"] Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.951924 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7c58f68dfb-swjwl"] Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.957445 4826 scope.go:117] "RemoveContainer" containerID="e896a5da1fa3cdef35225909e0d6a508c5441a4d72b41cda43b316ece9139412" Jan 29 06:45:58 crc kubenswrapper[4826]: E0129 06:45:58.958015 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e896a5da1fa3cdef35225909e0d6a508c5441a4d72b41cda43b316ece9139412\": container with ID starting with e896a5da1fa3cdef35225909e0d6a508c5441a4d72b41cda43b316ece9139412 not found: ID does not exist" containerID="e896a5da1fa3cdef35225909e0d6a508c5441a4d72b41cda43b316ece9139412" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.958090 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e896a5da1fa3cdef35225909e0d6a508c5441a4d72b41cda43b316ece9139412"} err="failed to get container status \"e896a5da1fa3cdef35225909e0d6a508c5441a4d72b41cda43b316ece9139412\": rpc error: code = NotFound desc = could not find container \"e896a5da1fa3cdef35225909e0d6a508c5441a4d72b41cda43b316ece9139412\": container with ID starting with e896a5da1fa3cdef35225909e0d6a508c5441a4d72b41cda43b316ece9139412 not found: ID does not exist" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.958178 4826 scope.go:117] "RemoveContainer" containerID="6888e3c4f1d20a776d99aa34ed4e0c8ff20148c4a4c8d77c85e39d70d5308ccb" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.967031 4826 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ff8755c47-p7z4k"] Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.972904 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ff8755c47-p7z4k"] Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.990061 4826 scope.go:117] "RemoveContainer" containerID="6888e3c4f1d20a776d99aa34ed4e0c8ff20148c4a4c8d77c85e39d70d5308ccb" Jan 29 06:45:58 crc kubenswrapper[4826]: E0129 06:45:58.991970 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6888e3c4f1d20a776d99aa34ed4e0c8ff20148c4a4c8d77c85e39d70d5308ccb\": container with ID starting with 6888e3c4f1d20a776d99aa34ed4e0c8ff20148c4a4c8d77c85e39d70d5308ccb not found: ID does not exist" containerID="6888e3c4f1d20a776d99aa34ed4e0c8ff20148c4a4c8d77c85e39d70d5308ccb" Jan 29 06:45:58 crc kubenswrapper[4826]: I0129 06:45:58.992010 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6888e3c4f1d20a776d99aa34ed4e0c8ff20148c4a4c8d77c85e39d70d5308ccb"} err="failed to get container status \"6888e3c4f1d20a776d99aa34ed4e0c8ff20148c4a4c8d77c85e39d70d5308ccb\": rpc error: code = NotFound desc = could not find container \"6888e3c4f1d20a776d99aa34ed4e0c8ff20148c4a4c8d77c85e39d70d5308ccb\": container with ID starting with 6888e3c4f1d20a776d99aa34ed4e0c8ff20148c4a4c8d77c85e39d70d5308ccb not found: ID does not exist" Jan 29 06:45:59 crc kubenswrapper[4826]: I0129 06:45:59.059249 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 06:45:59 crc kubenswrapper[4826]: W0129 06:45:59.072125 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod401827cd_ae09_49af_a66e_2c94c7d00d01.slice/crio-4301c571b5b59a54f6428eec9720110489c65f156c91b949e1a70a98338f483c 
WatchSource:0}: Error finding container 4301c571b5b59a54f6428eec9720110489c65f156c91b949e1a70a98338f483c: Status 404 returned error can't find the container with id 4301c571b5b59a54f6428eec9720110489c65f156c91b949e1a70a98338f483c Jan 29 06:45:59 crc kubenswrapper[4826]: I0129 06:45:59.248865 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77bf6848c-4tntr"] Jan 29 06:45:59 crc kubenswrapper[4826]: I0129 06:45:59.309949 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vvbnx" Jan 29 06:45:59 crc kubenswrapper[4826]: I0129 06:45:59.310323 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vvbnx" Jan 29 06:45:59 crc kubenswrapper[4826]: I0129 06:45:59.364202 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vvbnx" Jan 29 06:45:59 crc kubenswrapper[4826]: I0129 06:45:59.935866 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"401827cd-ae09-49af-a66e-2c94c7d00d01","Type":"ContainerStarted","Data":"265c4f70e469ccc22d7ac45e8314e6a3709f54679e161a2c1ef75172d3e73d7a"} Jan 29 06:45:59 crc kubenswrapper[4826]: I0129 06:45:59.935916 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"401827cd-ae09-49af-a66e-2c94c7d00d01","Type":"ContainerStarted","Data":"4301c571b5b59a54f6428eec9720110489c65f156c91b949e1a70a98338f483c"} Jan 29 06:45:59 crc kubenswrapper[4826]: I0129 06:45:59.938227 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77bf6848c-4tntr" event={"ID":"9aae8684-210d-4247-bdb7-483ec7bd6684","Type":"ContainerStarted","Data":"cd5592cee0ea7a2ac49ab51855975e59f99f723ec45ea72d0cf5861eac4d5eb9"} Jan 
29 06:45:59 crc kubenswrapper[4826]: I0129 06:45:59.938423 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-77bf6848c-4tntr" Jan 29 06:45:59 crc kubenswrapper[4826]: I0129 06:45:59.938445 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77bf6848c-4tntr" event={"ID":"9aae8684-210d-4247-bdb7-483ec7bd6684","Type":"ContainerStarted","Data":"614942fe570c79e9b8013394191267ee07f2d8bd0d9d3db4f81cf29e436ad95d"} Jan 29 06:45:59 crc kubenswrapper[4826]: I0129 06:45:59.944215 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-77bf6848c-4tntr" Jan 29 06:45:59 crc kubenswrapper[4826]: I0129 06:45:59.964242 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-77bf6848c-4tntr" podStartSLOduration=3.964228813 podStartE2EDuration="3.964228813s" podCreationTimestamp="2026-01-29 06:45:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:45:59.962579863 +0000 UTC m=+143.824372952" watchObservedRunningTime="2026-01-29 06:45:59.964228813 +0000 UTC m=+143.826021882" Jan 29 06:46:00 crc kubenswrapper[4826]: I0129 06:46:00.820327 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75" path="/var/lib/kubelet/pods/1d6ebb9e-2e38-4c95-bd62-7bf1c1afea75/volumes" Jan 29 06:46:00 crc kubenswrapper[4826]: I0129 06:46:00.821618 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36b5569b-b83d-4395-b5ac-ee4a97bc913a" path="/var/lib/kubelet/pods/36b5569b-b83d-4395-b5ac-ee4a97bc913a/volumes" Jan 29 06:46:00 crc kubenswrapper[4826]: I0129 06:46:00.953003 4826 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"401827cd-ae09-49af-a66e-2c94c7d00d01","Type":"ContainerDied","Data":"265c4f70e469ccc22d7ac45e8314e6a3709f54679e161a2c1ef75172d3e73d7a"} Jan 29 06:46:00 crc kubenswrapper[4826]: I0129 06:46:00.952811 4826 generic.go:334] "Generic (PLEG): container finished" podID="401827cd-ae09-49af-a66e-2c94c7d00d01" containerID="265c4f70e469ccc22d7ac45e8314e6a3709f54679e161a2c1ef75172d3e73d7a" exitCode=0 Jan 29 06:46:01 crc kubenswrapper[4826]: I0129 06:46:01.424591 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8688b58bd8-t8l2l"] Jan 29 06:46:01 crc kubenswrapper[4826]: I0129 06:46:01.425902 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8688b58bd8-t8l2l" Jan 29 06:46:01 crc kubenswrapper[4826]: I0129 06:46:01.430732 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 06:46:01 crc kubenswrapper[4826]: I0129 06:46:01.430923 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 06:46:01 crc kubenswrapper[4826]: I0129 06:46:01.430945 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 06:46:01 crc kubenswrapper[4826]: I0129 06:46:01.431425 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 06:46:01 crc kubenswrapper[4826]: I0129 06:46:01.435354 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 06:46:01 crc kubenswrapper[4826]: I0129 06:46:01.435368 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 06:46:01 crc 
kubenswrapper[4826]: I0129 06:46:01.445830 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 06:46:01 crc kubenswrapper[4826]: I0129 06:46:01.447592 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8688b58bd8-t8l2l"] Jan 29 06:46:01 crc kubenswrapper[4826]: I0129 06:46:01.504372 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpbkr\" (UniqueName: \"kubernetes.io/projected/a29767ca-fa74-4744-a422-12d6faa87740-kube-api-access-dpbkr\") pod \"controller-manager-8688b58bd8-t8l2l\" (UID: \"a29767ca-fa74-4744-a422-12d6faa87740\") " pod="openshift-controller-manager/controller-manager-8688b58bd8-t8l2l" Jan 29 06:46:01 crc kubenswrapper[4826]: I0129 06:46:01.504670 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a29767ca-fa74-4744-a422-12d6faa87740-serving-cert\") pod \"controller-manager-8688b58bd8-t8l2l\" (UID: \"a29767ca-fa74-4744-a422-12d6faa87740\") " pod="openshift-controller-manager/controller-manager-8688b58bd8-t8l2l" Jan 29 06:46:01 crc kubenswrapper[4826]: I0129 06:46:01.504762 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a29767ca-fa74-4744-a422-12d6faa87740-config\") pod \"controller-manager-8688b58bd8-t8l2l\" (UID: \"a29767ca-fa74-4744-a422-12d6faa87740\") " pod="openshift-controller-manager/controller-manager-8688b58bd8-t8l2l" Jan 29 06:46:01 crc kubenswrapper[4826]: I0129 06:46:01.504833 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a29767ca-fa74-4744-a422-12d6faa87740-client-ca\") pod \"controller-manager-8688b58bd8-t8l2l\" (UID: 
\"a29767ca-fa74-4744-a422-12d6faa87740\") " pod="openshift-controller-manager/controller-manager-8688b58bd8-t8l2l" Jan 29 06:46:01 crc kubenswrapper[4826]: I0129 06:46:01.504912 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a29767ca-fa74-4744-a422-12d6faa87740-proxy-ca-bundles\") pod \"controller-manager-8688b58bd8-t8l2l\" (UID: \"a29767ca-fa74-4744-a422-12d6faa87740\") " pod="openshift-controller-manager/controller-manager-8688b58bd8-t8l2l" Jan 29 06:46:01 crc kubenswrapper[4826]: I0129 06:46:01.606629 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a29767ca-fa74-4744-a422-12d6faa87740-serving-cert\") pod \"controller-manager-8688b58bd8-t8l2l\" (UID: \"a29767ca-fa74-4744-a422-12d6faa87740\") " pod="openshift-controller-manager/controller-manager-8688b58bd8-t8l2l" Jan 29 06:46:01 crc kubenswrapper[4826]: I0129 06:46:01.606709 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a29767ca-fa74-4744-a422-12d6faa87740-config\") pod \"controller-manager-8688b58bd8-t8l2l\" (UID: \"a29767ca-fa74-4744-a422-12d6faa87740\") " pod="openshift-controller-manager/controller-manager-8688b58bd8-t8l2l" Jan 29 06:46:01 crc kubenswrapper[4826]: I0129 06:46:01.606746 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a29767ca-fa74-4744-a422-12d6faa87740-client-ca\") pod \"controller-manager-8688b58bd8-t8l2l\" (UID: \"a29767ca-fa74-4744-a422-12d6faa87740\") " pod="openshift-controller-manager/controller-manager-8688b58bd8-t8l2l" Jan 29 06:46:01 crc kubenswrapper[4826]: I0129 06:46:01.606797 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/a29767ca-fa74-4744-a422-12d6faa87740-proxy-ca-bundles\") pod \"controller-manager-8688b58bd8-t8l2l\" (UID: \"a29767ca-fa74-4744-a422-12d6faa87740\") " pod="openshift-controller-manager/controller-manager-8688b58bd8-t8l2l" Jan 29 06:46:01 crc kubenswrapper[4826]: I0129 06:46:01.606910 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpbkr\" (UniqueName: \"kubernetes.io/projected/a29767ca-fa74-4744-a422-12d6faa87740-kube-api-access-dpbkr\") pod \"controller-manager-8688b58bd8-t8l2l\" (UID: \"a29767ca-fa74-4744-a422-12d6faa87740\") " pod="openshift-controller-manager/controller-manager-8688b58bd8-t8l2l" Jan 29 06:46:01 crc kubenswrapper[4826]: I0129 06:46:01.607963 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a29767ca-fa74-4744-a422-12d6faa87740-client-ca\") pod \"controller-manager-8688b58bd8-t8l2l\" (UID: \"a29767ca-fa74-4744-a422-12d6faa87740\") " pod="openshift-controller-manager/controller-manager-8688b58bd8-t8l2l" Jan 29 06:46:01 crc kubenswrapper[4826]: I0129 06:46:01.608642 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a29767ca-fa74-4744-a422-12d6faa87740-proxy-ca-bundles\") pod \"controller-manager-8688b58bd8-t8l2l\" (UID: \"a29767ca-fa74-4744-a422-12d6faa87740\") " pod="openshift-controller-manager/controller-manager-8688b58bd8-t8l2l" Jan 29 06:46:01 crc kubenswrapper[4826]: I0129 06:46:01.609933 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a29767ca-fa74-4744-a422-12d6faa87740-config\") pod \"controller-manager-8688b58bd8-t8l2l\" (UID: \"a29767ca-fa74-4744-a422-12d6faa87740\") " pod="openshift-controller-manager/controller-manager-8688b58bd8-t8l2l" Jan 29 06:46:01 crc kubenswrapper[4826]: I0129 06:46:01.614558 4826 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a29767ca-fa74-4744-a422-12d6faa87740-serving-cert\") pod \"controller-manager-8688b58bd8-t8l2l\" (UID: \"a29767ca-fa74-4744-a422-12d6faa87740\") " pod="openshift-controller-manager/controller-manager-8688b58bd8-t8l2l" Jan 29 06:46:01 crc kubenswrapper[4826]: I0129 06:46:01.629433 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpbkr\" (UniqueName: \"kubernetes.io/projected/a29767ca-fa74-4744-a422-12d6faa87740-kube-api-access-dpbkr\") pod \"controller-manager-8688b58bd8-t8l2l\" (UID: \"a29767ca-fa74-4744-a422-12d6faa87740\") " pod="openshift-controller-manager/controller-manager-8688b58bd8-t8l2l" Jan 29 06:46:01 crc kubenswrapper[4826]: I0129 06:46:01.743458 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8688b58bd8-t8l2l" Jan 29 06:46:01 crc kubenswrapper[4826]: I0129 06:46:01.986074 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8688b58bd8-t8l2l"] Jan 29 06:46:01 crc kubenswrapper[4826]: W0129 06:46:01.991005 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda29767ca_fa74_4744_a422_12d6faa87740.slice/crio-c6247ba6544581d10c653c33f23f2592e83ac54db615e5dabd35d9f438f5057d WatchSource:0}: Error finding container c6247ba6544581d10c653c33f23f2592e83ac54db615e5dabd35d9f438f5057d: Status 404 returned error can't find the container with id c6247ba6544581d10c653c33f23f2592e83ac54db615e5dabd35d9f438f5057d Jan 29 06:46:02 crc kubenswrapper[4826]: I0129 06:46:02.208562 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 06:46:02 crc kubenswrapper[4826]: I0129 06:46:02.221876 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/401827cd-ae09-49af-a66e-2c94c7d00d01-kubelet-dir\") pod \"401827cd-ae09-49af-a66e-2c94c7d00d01\" (UID: \"401827cd-ae09-49af-a66e-2c94c7d00d01\") " Jan 29 06:46:02 crc kubenswrapper[4826]: I0129 06:46:02.222075 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/401827cd-ae09-49af-a66e-2c94c7d00d01-kube-api-access\") pod \"401827cd-ae09-49af-a66e-2c94c7d00d01\" (UID: \"401827cd-ae09-49af-a66e-2c94c7d00d01\") " Jan 29 06:46:02 crc kubenswrapper[4826]: I0129 06:46:02.224115 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/401827cd-ae09-49af-a66e-2c94c7d00d01-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "401827cd-ae09-49af-a66e-2c94c7d00d01" (UID: "401827cd-ae09-49af-a66e-2c94c7d00d01"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:46:02 crc kubenswrapper[4826]: I0129 06:46:02.233479 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/401827cd-ae09-49af-a66e-2c94c7d00d01-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "401827cd-ae09-49af-a66e-2c94c7d00d01" (UID: "401827cd-ae09-49af-a66e-2c94c7d00d01"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:46:02 crc kubenswrapper[4826]: I0129 06:46:02.324628 4826 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/401827cd-ae09-49af-a66e-2c94c7d00d01-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:02 crc kubenswrapper[4826]: I0129 06:46:02.324663 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/401827cd-ae09-49af-a66e-2c94c7d00d01-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:02 crc kubenswrapper[4826]: I0129 06:46:02.972797 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"401827cd-ae09-49af-a66e-2c94c7d00d01","Type":"ContainerDied","Data":"4301c571b5b59a54f6428eec9720110489c65f156c91b949e1a70a98338f483c"} Jan 29 06:46:02 crc kubenswrapper[4826]: I0129 06:46:02.972864 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4301c571b5b59a54f6428eec9720110489c65f156c91b949e1a70a98338f483c" Jan 29 06:46:02 crc kubenswrapper[4826]: I0129 06:46:02.972817 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 06:46:02 crc kubenswrapper[4826]: I0129 06:46:02.975798 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8688b58bd8-t8l2l" event={"ID":"a29767ca-fa74-4744-a422-12d6faa87740","Type":"ContainerStarted","Data":"e6e5f2ef0d8686dcf48f85742b4737123e636dbb25c525d938018ab4a6074510"} Jan 29 06:46:02 crc kubenswrapper[4826]: I0129 06:46:02.975827 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8688b58bd8-t8l2l" event={"ID":"a29767ca-fa74-4744-a422-12d6faa87740","Type":"ContainerStarted","Data":"c6247ba6544581d10c653c33f23f2592e83ac54db615e5dabd35d9f438f5057d"} Jan 29 06:46:02 crc kubenswrapper[4826]: I0129 06:46:02.976597 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-8688b58bd8-t8l2l" Jan 29 06:46:02 crc kubenswrapper[4826]: I0129 06:46:02.984123 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8688b58bd8-t8l2l" Jan 29 06:46:02 crc kubenswrapper[4826]: I0129 06:46:02.996001 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-8688b58bd8-t8l2l" podStartSLOduration=6.99597799 podStartE2EDuration="6.99597799s" podCreationTimestamp="2026-01-29 06:45:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:46:02.993838449 +0000 UTC m=+146.855631518" watchObservedRunningTime="2026-01-29 06:46:02.99597799 +0000 UTC m=+146.857771059" Jan 29 06:46:04 crc kubenswrapper[4826]: I0129 06:46:04.255217 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:46:04 crc kubenswrapper[4826]: I0129 06:46:04.255766 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:46:04 crc kubenswrapper[4826]: I0129 06:46:04.258408 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 29 06:46:04 crc kubenswrapper[4826]: I0129 06:46:04.258935 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 29 06:46:04 crc kubenswrapper[4826]: I0129 06:46:04.267780 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:46:04 crc kubenswrapper[4826]: I0129 06:46:04.277506 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:46:04 crc kubenswrapper[4826]: I0129 
06:46:04.357843 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:46:04 crc kubenswrapper[4826]: I0129 06:46:04.357945 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:46:04 crc kubenswrapper[4826]: I0129 06:46:04.361291 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 29 06:46:04 crc kubenswrapper[4826]: I0129 06:46:04.370084 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 29 06:46:04 crc kubenswrapper[4826]: I0129 06:46:04.383764 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:46:04 crc kubenswrapper[4826]: I0129 06:46:04.386123 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:46:04 crc kubenswrapper[4826]: I0129 06:46:04.437076 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 06:46:04 crc kubenswrapper[4826]: I0129 06:46:04.445276 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:46:04 crc kubenswrapper[4826]: I0129 06:46:04.557125 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 06:46:04 crc kubenswrapper[4826]: W0129 06:46:04.686272 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-de4fae2062051045260feba0b77e9ea58d1113342b1a7854afd68200f624331e WatchSource:0}: Error finding container de4fae2062051045260feba0b77e9ea58d1113342b1a7854afd68200f624331e: Status 404 returned error can't find the container with id de4fae2062051045260feba0b77e9ea58d1113342b1a7854afd68200f624331e Jan 29 06:46:04 crc kubenswrapper[4826]: W0129 06:46:04.754200 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-a3af5c42397564af16ecdfecdae47a3f8489653f49b8811fccb1623457d9be74 WatchSource:0}: Error finding container a3af5c42397564af16ecdfecdae47a3f8489653f49b8811fccb1623457d9be74: Status 404 returned error can't find the container with id a3af5c42397564af16ecdfecdae47a3f8489653f49b8811fccb1623457d9be74 Jan 29 06:46:04 crc kubenswrapper[4826]: I0129 06:46:04.993020 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"d68ed7212e361d4e28b9de50d4362174cf80efd1816dd8c7aedf6567927353b2"} Jan 29 06:46:04 crc kubenswrapper[4826]: I0129 06:46:04.993066 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"de4fae2062051045260feba0b77e9ea58d1113342b1a7854afd68200f624331e"} Jan 29 06:46:04 crc kubenswrapper[4826]: I0129 06:46:04.995106 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"c645ebeade5b51352c5e926492047cc04709869660a4fd7955f2674ed8d3e21f"} Jan 29 06:46:04 crc kubenswrapper[4826]: I0129 06:46:04.995135 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"a3af5c42397564af16ecdfecdae47a3f8489653f49b8811fccb1623457d9be74"} Jan 29 06:46:04 crc kubenswrapper[4826]: I0129 06:46:04.995535 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:46:05 crc kubenswrapper[4826]: I0129 06:46:05.001113 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"e4bb7ab1b49a920531e4cffa385dd455f94bb6bdd02c165da3ca3586952261b3"} Jan 29 06:46:05 crc kubenswrapper[4826]: I0129 06:46:05.001229 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"64515a38a9dc0f8b1f32c8ed3702ffa30801d4e85de830c58a8b7a3b0d9d4a3b"} Jan 29 06:46:05 crc kubenswrapper[4826]: I0129 06:46:05.587741 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 06:46:05 crc kubenswrapper[4826]: E0129 06:46:05.588539 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="401827cd-ae09-49af-a66e-2c94c7d00d01" containerName="pruner" Jan 29 06:46:05 crc kubenswrapper[4826]: I0129 06:46:05.588557 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="401827cd-ae09-49af-a66e-2c94c7d00d01" containerName="pruner" Jan 29 06:46:05 crc kubenswrapper[4826]: I0129 06:46:05.588720 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="401827cd-ae09-49af-a66e-2c94c7d00d01" containerName="pruner" Jan 29 06:46:05 crc kubenswrapper[4826]: I0129 06:46:05.589393 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 06:46:05 crc kubenswrapper[4826]: I0129 06:46:05.591654 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 29 06:46:05 crc kubenswrapper[4826]: I0129 06:46:05.591930 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 29 06:46:05 crc kubenswrapper[4826]: I0129 06:46:05.593148 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 06:46:05 crc kubenswrapper[4826]: I0129 06:46:05.656967 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 06:46:05 crc kubenswrapper[4826]: I0129 06:46:05.657045 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 06:46:05 crc kubenswrapper[4826]: I0129 06:46:05.682715 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/da70e922-2407-45cd-a1d2-6f3d9129de21-kubelet-dir\") pod \"installer-9-crc\" (UID: \"da70e922-2407-45cd-a1d2-6f3d9129de21\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 06:46:05 crc kubenswrapper[4826]: I0129 06:46:05.683156 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/da70e922-2407-45cd-a1d2-6f3d9129de21-var-lock\") pod \"installer-9-crc\" (UID: \"da70e922-2407-45cd-a1d2-6f3d9129de21\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 06:46:05 crc kubenswrapper[4826]: I0129 06:46:05.683353 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/da70e922-2407-45cd-a1d2-6f3d9129de21-kube-api-access\") pod \"installer-9-crc\" (UID: \"da70e922-2407-45cd-a1d2-6f3d9129de21\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 06:46:05 crc kubenswrapper[4826]: I0129 06:46:05.784667 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/da70e922-2407-45cd-a1d2-6f3d9129de21-var-lock\") pod \"installer-9-crc\" (UID: \"da70e922-2407-45cd-a1d2-6f3d9129de21\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 06:46:05 crc kubenswrapper[4826]: I0129 06:46:05.784783 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/da70e922-2407-45cd-a1d2-6f3d9129de21-kube-api-access\") pod \"installer-9-crc\" (UID: \"da70e922-2407-45cd-a1d2-6f3d9129de21\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 06:46:05 crc kubenswrapper[4826]: I0129 06:46:05.784838 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/da70e922-2407-45cd-a1d2-6f3d9129de21-kubelet-dir\") pod \"installer-9-crc\" (UID: \"da70e922-2407-45cd-a1d2-6f3d9129de21\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 06:46:05 crc kubenswrapper[4826]: I0129 06:46:05.784871 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/da70e922-2407-45cd-a1d2-6f3d9129de21-var-lock\") pod \"installer-9-crc\" (UID: 
\"da70e922-2407-45cd-a1d2-6f3d9129de21\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 06:46:05 crc kubenswrapper[4826]: I0129 06:46:05.784943 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/da70e922-2407-45cd-a1d2-6f3d9129de21-kubelet-dir\") pod \"installer-9-crc\" (UID: \"da70e922-2407-45cd-a1d2-6f3d9129de21\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 06:46:05 crc kubenswrapper[4826]: I0129 06:46:05.807285 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/da70e922-2407-45cd-a1d2-6f3d9129de21-kube-api-access\") pod \"installer-9-crc\" (UID: \"da70e922-2407-45cd-a1d2-6f3d9129de21\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 06:46:05 crc kubenswrapper[4826]: I0129 06:46:05.936955 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 06:46:06 crc kubenswrapper[4826]: I0129 06:46:06.379241 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 06:46:06 crc kubenswrapper[4826]: I0129 06:46:06.592238 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2fdxd"] Jan 29 06:46:07 crc kubenswrapper[4826]: I0129 06:46:07.009127 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"da70e922-2407-45cd-a1d2-6f3d9129de21","Type":"ContainerStarted","Data":"d31d53fd81fcdd67cade79b1e6a45ee0ba7abdbcc3bb2d5fa4b3484691e577c2"} Jan 29 06:46:07 crc kubenswrapper[4826]: I0129 06:46:07.398826 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dmn6q" Jan 29 06:46:07 crc kubenswrapper[4826]: I0129 06:46:07.454762 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/community-operators-dmn6q" Jan 29 06:46:07 crc kubenswrapper[4826]: I0129 06:46:07.798643 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dbqfz" Jan 29 06:46:07 crc kubenswrapper[4826]: I0129 06:46:07.859148 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dbqfz" Jan 29 06:46:08 crc kubenswrapper[4826]: I0129 06:46:08.018653 4826 generic.go:334] "Generic (PLEG): container finished" podID="eb065b7c-e4d6-4607-aa51-e8acf00117fa" containerID="0a938e388c8c25ec14c47bf893ba63529c38b0c2acff9e379bab2a5505fc5bf3" exitCode=0 Jan 29 06:46:08 crc kubenswrapper[4826]: I0129 06:46:08.018700 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-brqmx" event={"ID":"eb065b7c-e4d6-4607-aa51-e8acf00117fa","Type":"ContainerDied","Data":"0a938e388c8c25ec14c47bf893ba63529c38b0c2acff9e379bab2a5505fc5bf3"} Jan 29 06:46:08 crc kubenswrapper[4826]: I0129 06:46:08.022041 4826 generic.go:334] "Generic (PLEG): container finished" podID="832cf5b0-2b5a-4975-b147-8a1f08f08456" containerID="056329e99688daf07a6a5c6a9bf7d8e8df10a4bb5ed8896364009e8f49a92062" exitCode=0 Jan 29 06:46:08 crc kubenswrapper[4826]: I0129 06:46:08.022126 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qb44z" event={"ID":"832cf5b0-2b5a-4975-b147-8a1f08f08456","Type":"ContainerDied","Data":"056329e99688daf07a6a5c6a9bf7d8e8df10a4bb5ed8896364009e8f49a92062"} Jan 29 06:46:08 crc kubenswrapper[4826]: I0129 06:46:08.028318 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"da70e922-2407-45cd-a1d2-6f3d9129de21","Type":"ContainerStarted","Data":"046a494a51071db8912277eee22adf85896be4e6f5bbcb9648dfd3c6837efd63"} Jan 29 06:46:08 crc kubenswrapper[4826]: I0129 06:46:08.067190 4826 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.067171479 podStartE2EDuration="3.067171479s" podCreationTimestamp="2026-01-29 06:46:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:46:08.049763805 +0000 UTC m=+151.911556874" watchObservedRunningTime="2026-01-29 06:46:08.067171479 +0000 UTC m=+151.928964538" Jan 29 06:46:09 crc kubenswrapper[4826]: I0129 06:46:09.039002 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-brqmx" event={"ID":"eb065b7c-e4d6-4607-aa51-e8acf00117fa","Type":"ContainerStarted","Data":"c0bc02934618b56670e02c41ece2d294ccddb2ef03d666340615b38efcc32cc5"} Jan 29 06:46:09 crc kubenswrapper[4826]: I0129 06:46:09.361160 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vvbnx" Jan 29 06:46:10 crc kubenswrapper[4826]: I0129 06:46:10.047318 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qb44z" event={"ID":"832cf5b0-2b5a-4975-b147-8a1f08f08456","Type":"ContainerStarted","Data":"5265233da520fc41c05f0a788a63a097be3248223d4b7ac01ac28754e033e4c1"} Jan 29 06:46:10 crc kubenswrapper[4826]: I0129 06:46:10.060175 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dbqfz"] Jan 29 06:46:10 crc kubenswrapper[4826]: I0129 06:46:10.060556 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dbqfz" podUID="7087f9c0-baee-4f8f-a24d-14e337c4f32e" containerName="registry-server" containerID="cri-o://7dba4dd068334c8f3f4f9c22e8d107fae776e7a95bd29ef94282c44ae828dc8c" gracePeriod=2 Jan 29 06:46:10 crc kubenswrapper[4826]: I0129 06:46:10.071248 4826 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/certified-operators-brqmx" podStartSLOduration=3.8039110320000002 podStartE2EDuration="53.071217151s" podCreationTimestamp="2026-01-29 06:45:17 +0000 UTC" firstStartedPulling="2026-01-29 06:45:19.193072912 +0000 UTC m=+103.054865981" lastFinishedPulling="2026-01-29 06:46:08.460379031 +0000 UTC m=+152.322172100" observedRunningTime="2026-01-29 06:46:10.068449305 +0000 UTC m=+153.930242374" watchObservedRunningTime="2026-01-29 06:46:10.071217151 +0000 UTC m=+153.933010220" Jan 29 06:46:10 crc kubenswrapper[4826]: I0129 06:46:10.092183 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qb44z" podStartSLOduration=3.0625053859999998 podStartE2EDuration="50.092150318s" podCreationTimestamp="2026-01-29 06:45:20 +0000 UTC" firstStartedPulling="2026-01-29 06:45:22.337513882 +0000 UTC m=+106.199306951" lastFinishedPulling="2026-01-29 06:46:09.367158814 +0000 UTC m=+153.228951883" observedRunningTime="2026-01-29 06:46:10.089090846 +0000 UTC m=+153.950883915" watchObservedRunningTime="2026-01-29 06:46:10.092150318 +0000 UTC m=+153.953943377" Jan 29 06:46:10 crc kubenswrapper[4826]: I0129 06:46:10.954443 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qb44z" Jan 29 06:46:10 crc kubenswrapper[4826]: I0129 06:46:10.954552 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qb44z" Jan 29 06:46:12 crc kubenswrapper[4826]: I0129 06:46:12.011774 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qb44z" podUID="832cf5b0-2b5a-4975-b147-8a1f08f08456" containerName="registry-server" probeResult="failure" output=< Jan 29 06:46:12 crc kubenswrapper[4826]: timeout: failed to connect service ":50051" within 1s Jan 29 06:46:12 crc kubenswrapper[4826]: > Jan 29 06:46:12 crc kubenswrapper[4826]: I0129 
06:46:12.080860 4826 generic.go:334] "Generic (PLEG): container finished" podID="7087f9c0-baee-4f8f-a24d-14e337c4f32e" containerID="7dba4dd068334c8f3f4f9c22e8d107fae776e7a95bd29ef94282c44ae828dc8c" exitCode=0 Jan 29 06:46:12 crc kubenswrapper[4826]: I0129 06:46:12.080921 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dbqfz" event={"ID":"7087f9c0-baee-4f8f-a24d-14e337c4f32e","Type":"ContainerDied","Data":"7dba4dd068334c8f3f4f9c22e8d107fae776e7a95bd29ef94282c44ae828dc8c"} Jan 29 06:46:12 crc kubenswrapper[4826]: I0129 06:46:12.307956 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dbqfz" Jan 29 06:46:12 crc kubenswrapper[4826]: I0129 06:46:12.393122 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7087f9c0-baee-4f8f-a24d-14e337c4f32e-utilities\") pod \"7087f9c0-baee-4f8f-a24d-14e337c4f32e\" (UID: \"7087f9c0-baee-4f8f-a24d-14e337c4f32e\") " Jan 29 06:46:12 crc kubenswrapper[4826]: I0129 06:46:12.393226 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7087f9c0-baee-4f8f-a24d-14e337c4f32e-catalog-content\") pod \"7087f9c0-baee-4f8f-a24d-14e337c4f32e\" (UID: \"7087f9c0-baee-4f8f-a24d-14e337c4f32e\") " Jan 29 06:46:12 crc kubenswrapper[4826]: I0129 06:46:12.393362 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlh5t\" (UniqueName: \"kubernetes.io/projected/7087f9c0-baee-4f8f-a24d-14e337c4f32e-kube-api-access-qlh5t\") pod \"7087f9c0-baee-4f8f-a24d-14e337c4f32e\" (UID: \"7087f9c0-baee-4f8f-a24d-14e337c4f32e\") " Jan 29 06:46:12 crc kubenswrapper[4826]: I0129 06:46:12.394306 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/7087f9c0-baee-4f8f-a24d-14e337c4f32e-utilities" (OuterVolumeSpecName: "utilities") pod "7087f9c0-baee-4f8f-a24d-14e337c4f32e" (UID: "7087f9c0-baee-4f8f-a24d-14e337c4f32e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:46:12 crc kubenswrapper[4826]: I0129 06:46:12.400559 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7087f9c0-baee-4f8f-a24d-14e337c4f32e-kube-api-access-qlh5t" (OuterVolumeSpecName: "kube-api-access-qlh5t") pod "7087f9c0-baee-4f8f-a24d-14e337c4f32e" (UID: "7087f9c0-baee-4f8f-a24d-14e337c4f32e"). InnerVolumeSpecName "kube-api-access-qlh5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:46:12 crc kubenswrapper[4826]: I0129 06:46:12.452396 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7087f9c0-baee-4f8f-a24d-14e337c4f32e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7087f9c0-baee-4f8f-a24d-14e337c4f32e" (UID: "7087f9c0-baee-4f8f-a24d-14e337c4f32e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:46:12 crc kubenswrapper[4826]: I0129 06:46:12.495329 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7087f9c0-baee-4f8f-a24d-14e337c4f32e-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:12 crc kubenswrapper[4826]: I0129 06:46:12.495382 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7087f9c0-baee-4f8f-a24d-14e337c4f32e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:12 crc kubenswrapper[4826]: I0129 06:46:12.495400 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlh5t\" (UniqueName: \"kubernetes.io/projected/7087f9c0-baee-4f8f-a24d-14e337c4f32e-kube-api-access-qlh5t\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:13 crc kubenswrapper[4826]: I0129 06:46:13.107735 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dbqfz" event={"ID":"7087f9c0-baee-4f8f-a24d-14e337c4f32e","Type":"ContainerDied","Data":"756641ebd5cea20c8173f30dadf2dc9ecddb9d493b96159e19a2a86ab9dd6114"} Jan 29 06:46:13 crc kubenswrapper[4826]: I0129 06:46:13.107832 4826 scope.go:117] "RemoveContainer" containerID="7dba4dd068334c8f3f4f9c22e8d107fae776e7a95bd29ef94282c44ae828dc8c" Jan 29 06:46:13 crc kubenswrapper[4826]: I0129 06:46:13.107970 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dbqfz" Jan 29 06:46:13 crc kubenswrapper[4826]: I0129 06:46:13.137208 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dbqfz"] Jan 29 06:46:13 crc kubenswrapper[4826]: I0129 06:46:13.139570 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dbqfz"] Jan 29 06:46:13 crc kubenswrapper[4826]: I0129 06:46:13.891289 4826 scope.go:117] "RemoveContainer" containerID="831cee523f1372dc992301b96ce7d9f5811b99907e982923d6fa4bb33941dbea" Jan 29 06:46:14 crc kubenswrapper[4826]: I0129 06:46:14.115120 4826 scope.go:117] "RemoveContainer" containerID="a49bfc02e453a1f533c1a1ac1ae34fb60a1f1ed85bfc173bf8413e6bfa448a19" Jan 29 06:46:14 crc kubenswrapper[4826]: I0129 06:46:14.821850 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7087f9c0-baee-4f8f-a24d-14e337c4f32e" path="/var/lib/kubelet/pods/7087f9c0-baee-4f8f-a24d-14e337c4f32e/volumes" Jan 29 06:46:16 crc kubenswrapper[4826]: I0129 06:46:16.115140 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8688b58bd8-t8l2l"] Jan 29 06:46:16 crc kubenswrapper[4826]: I0129 06:46:16.115953 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-8688b58bd8-t8l2l" podUID="a29767ca-fa74-4744-a422-12d6faa87740" containerName="controller-manager" containerID="cri-o://e6e5f2ef0d8686dcf48f85742b4737123e636dbb25c525d938018ab4a6074510" gracePeriod=30 Jan 29 06:46:16 crc kubenswrapper[4826]: I0129 06:46:16.136374 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77bf6848c-4tntr"] Jan 29 06:46:16 crc kubenswrapper[4826]: I0129 06:46:16.136700 4826 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-77bf6848c-4tntr" podUID="9aae8684-210d-4247-bdb7-483ec7bd6684" containerName="route-controller-manager" containerID="cri-o://cd5592cee0ea7a2ac49ab51855975e59f99f723ec45ea72d0cf5861eac4d5eb9" gracePeriod=30 Jan 29 06:46:16 crc kubenswrapper[4826]: I0129 06:46:16.136743 4826 generic.go:334] "Generic (PLEG): container finished" podID="737f9828-15a3-401e-aa29-e9467773637f" containerID="b3ae2002e115149a8d69e5c57f5cf4b96847e97174841301ed61c5f7b726443c" exitCode=0 Jan 29 06:46:16 crc kubenswrapper[4826]: I0129 06:46:16.136843 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rsfgw" event={"ID":"737f9828-15a3-401e-aa29-e9467773637f","Type":"ContainerDied","Data":"b3ae2002e115149a8d69e5c57f5cf4b96847e97174841301ed61c5f7b726443c"} Jan 29 06:46:16 crc kubenswrapper[4826]: I0129 06:46:16.145260 4826 generic.go:334] "Generic (PLEG): container finished" podID="2b7e32bd-0e0a-49fd-a29e-4c8087218b7a" containerID="b06d4a57a2fdc6d072a97c3e703a28492b72a415b68fa6d36b0745a41643ca2e" exitCode=0 Jan 29 06:46:16 crc kubenswrapper[4826]: I0129 06:46:16.145432 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xt89w" event={"ID":"2b7e32bd-0e0a-49fd-a29e-4c8087218b7a","Type":"ContainerDied","Data":"b06d4a57a2fdc6d072a97c3e703a28492b72a415b68fa6d36b0745a41643ca2e"} Jan 29 06:46:16 crc kubenswrapper[4826]: I0129 06:46:16.156434 4826 generic.go:334] "Generic (PLEG): container finished" podID="87b64822-a98d-4129-946c-b073cffe4f6c" containerID="1c52936c59779c76a4ac1a0c1dce5e0f842fe3ba6cb25ecca0d495b9ecdbff02" exitCode=0 Jan 29 06:46:16 crc kubenswrapper[4826]: I0129 06:46:16.156505 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfmlt" event={"ID":"87b64822-a98d-4129-946c-b073cffe4f6c","Type":"ContainerDied","Data":"1c52936c59779c76a4ac1a0c1dce5e0f842fe3ba6cb25ecca0d495b9ecdbff02"} 
Jan 29 06:46:16 crc kubenswrapper[4826]: I0129 06:46:16.782441 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77bf6848c-4tntr" Jan 29 06:46:16 crc kubenswrapper[4826]: I0129 06:46:16.788121 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8688b58bd8-t8l2l" Jan 29 06:46:16 crc kubenswrapper[4826]: I0129 06:46:16.897969 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a29767ca-fa74-4744-a422-12d6faa87740-client-ca\") pod \"a29767ca-fa74-4744-a422-12d6faa87740\" (UID: \"a29767ca-fa74-4744-a422-12d6faa87740\") " Jan 29 06:46:16 crc kubenswrapper[4826]: I0129 06:46:16.898033 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a29767ca-fa74-4744-a422-12d6faa87740-proxy-ca-bundles\") pod \"a29767ca-fa74-4744-a422-12d6faa87740\" (UID: \"a29767ca-fa74-4744-a422-12d6faa87740\") " Jan 29 06:46:16 crc kubenswrapper[4826]: I0129 06:46:16.898095 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a29767ca-fa74-4744-a422-12d6faa87740-serving-cert\") pod \"a29767ca-fa74-4744-a422-12d6faa87740\" (UID: \"a29767ca-fa74-4744-a422-12d6faa87740\") " Jan 29 06:46:16 crc kubenswrapper[4826]: I0129 06:46:16.898189 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpbkr\" (UniqueName: \"kubernetes.io/projected/a29767ca-fa74-4744-a422-12d6faa87740-kube-api-access-dpbkr\") pod \"a29767ca-fa74-4744-a422-12d6faa87740\" (UID: \"a29767ca-fa74-4744-a422-12d6faa87740\") " Jan 29 06:46:16 crc kubenswrapper[4826]: I0129 06:46:16.898232 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aae8684-210d-4247-bdb7-483ec7bd6684-serving-cert\") pod \"9aae8684-210d-4247-bdb7-483ec7bd6684\" (UID: \"9aae8684-210d-4247-bdb7-483ec7bd6684\") " Jan 29 06:46:16 crc kubenswrapper[4826]: I0129 06:46:16.898278 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a29767ca-fa74-4744-a422-12d6faa87740-config\") pod \"a29767ca-fa74-4744-a422-12d6faa87740\" (UID: \"a29767ca-fa74-4744-a422-12d6faa87740\") " Jan 29 06:46:16 crc kubenswrapper[4826]: I0129 06:46:16.898393 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aae8684-210d-4247-bdb7-483ec7bd6684-config\") pod \"9aae8684-210d-4247-bdb7-483ec7bd6684\" (UID: \"9aae8684-210d-4247-bdb7-483ec7bd6684\") " Jan 29 06:46:16 crc kubenswrapper[4826]: I0129 06:46:16.898454 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxsc8\" (UniqueName: \"kubernetes.io/projected/9aae8684-210d-4247-bdb7-483ec7bd6684-kube-api-access-cxsc8\") pod \"9aae8684-210d-4247-bdb7-483ec7bd6684\" (UID: \"9aae8684-210d-4247-bdb7-483ec7bd6684\") " Jan 29 06:46:16 crc kubenswrapper[4826]: I0129 06:46:16.898517 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9aae8684-210d-4247-bdb7-483ec7bd6684-client-ca\") pod \"9aae8684-210d-4247-bdb7-483ec7bd6684\" (UID: \"9aae8684-210d-4247-bdb7-483ec7bd6684\") " Jan 29 06:46:16 crc kubenswrapper[4826]: I0129 06:46:16.898996 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a29767ca-fa74-4744-a422-12d6faa87740-client-ca" (OuterVolumeSpecName: "client-ca") pod "a29767ca-fa74-4744-a422-12d6faa87740" (UID: "a29767ca-fa74-4744-a422-12d6faa87740"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:46:16 crc kubenswrapper[4826]: I0129 06:46:16.899555 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a29767ca-fa74-4744-a422-12d6faa87740-config" (OuterVolumeSpecName: "config") pod "a29767ca-fa74-4744-a422-12d6faa87740" (UID: "a29767ca-fa74-4744-a422-12d6faa87740"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:46:16 crc kubenswrapper[4826]: I0129 06:46:16.900160 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9aae8684-210d-4247-bdb7-483ec7bd6684-client-ca" (OuterVolumeSpecName: "client-ca") pod "9aae8684-210d-4247-bdb7-483ec7bd6684" (UID: "9aae8684-210d-4247-bdb7-483ec7bd6684"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:46:16 crc kubenswrapper[4826]: I0129 06:46:16.900572 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a29767ca-fa74-4744-a422-12d6faa87740-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a29767ca-fa74-4744-a422-12d6faa87740" (UID: "a29767ca-fa74-4744-a422-12d6faa87740"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:46:16 crc kubenswrapper[4826]: I0129 06:46:16.900639 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9aae8684-210d-4247-bdb7-483ec7bd6684-config" (OuterVolumeSpecName: "config") pod "9aae8684-210d-4247-bdb7-483ec7bd6684" (UID: "9aae8684-210d-4247-bdb7-483ec7bd6684"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:46:16 crc kubenswrapper[4826]: I0129 06:46:16.908654 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9aae8684-210d-4247-bdb7-483ec7bd6684-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9aae8684-210d-4247-bdb7-483ec7bd6684" (UID: "9aae8684-210d-4247-bdb7-483ec7bd6684"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:46:16 crc kubenswrapper[4826]: I0129 06:46:16.912524 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9aae8684-210d-4247-bdb7-483ec7bd6684-kube-api-access-cxsc8" (OuterVolumeSpecName: "kube-api-access-cxsc8") pod "9aae8684-210d-4247-bdb7-483ec7bd6684" (UID: "9aae8684-210d-4247-bdb7-483ec7bd6684"). InnerVolumeSpecName "kube-api-access-cxsc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:46:16 crc kubenswrapper[4826]: I0129 06:46:16.912572 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a29767ca-fa74-4744-a422-12d6faa87740-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a29767ca-fa74-4744-a422-12d6faa87740" (UID: "a29767ca-fa74-4744-a422-12d6faa87740"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:46:16 crc kubenswrapper[4826]: I0129 06:46:16.912597 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a29767ca-fa74-4744-a422-12d6faa87740-kube-api-access-dpbkr" (OuterVolumeSpecName: "kube-api-access-dpbkr") pod "a29767ca-fa74-4744-a422-12d6faa87740" (UID: "a29767ca-fa74-4744-a422-12d6faa87740"). InnerVolumeSpecName "kube-api-access-dpbkr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.000145 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aae8684-210d-4247-bdb7-483ec7bd6684-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.000664 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxsc8\" (UniqueName: \"kubernetes.io/projected/9aae8684-210d-4247-bdb7-483ec7bd6684-kube-api-access-cxsc8\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.000681 4826 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9aae8684-210d-4247-bdb7-483ec7bd6684-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.000695 4826 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a29767ca-fa74-4744-a422-12d6faa87740-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.000705 4826 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a29767ca-fa74-4744-a422-12d6faa87740-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.000715 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a29767ca-fa74-4744-a422-12d6faa87740-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.000724 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpbkr\" (UniqueName: \"kubernetes.io/projected/a29767ca-fa74-4744-a422-12d6faa87740-kube-api-access-dpbkr\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.000734 4826 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aae8684-210d-4247-bdb7-483ec7bd6684-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.000744 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a29767ca-fa74-4744-a422-12d6faa87740-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.165066 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rsfgw" event={"ID":"737f9828-15a3-401e-aa29-e9467773637f","Type":"ContainerStarted","Data":"e0e6b099c6cb9f1fef6f62f7763a45a8485c335ef23f5803fc15ea941e20ec51"} Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.168313 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xt89w" event={"ID":"2b7e32bd-0e0a-49fd-a29e-4c8087218b7a","Type":"ContainerStarted","Data":"3d895f90b224c8fe8be2213ad186be47562803ea80d2eac16cf0c25ec2d4f99c"} Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.170238 4826 generic.go:334] "Generic (PLEG): container finished" podID="9aae8684-210d-4247-bdb7-483ec7bd6684" containerID="cd5592cee0ea7a2ac49ab51855975e59f99f723ec45ea72d0cf5861eac4d5eb9" exitCode=0 Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.170351 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77bf6848c-4tntr" event={"ID":"9aae8684-210d-4247-bdb7-483ec7bd6684","Type":"ContainerDied","Data":"cd5592cee0ea7a2ac49ab51855975e59f99f723ec45ea72d0cf5861eac4d5eb9"} Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.170444 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77bf6848c-4tntr" 
event={"ID":"9aae8684-210d-4247-bdb7-483ec7bd6684","Type":"ContainerDied","Data":"614942fe570c79e9b8013394191267ee07f2d8bd0d9d3db4f81cf29e436ad95d"} Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.170472 4826 scope.go:117] "RemoveContainer" containerID="cd5592cee0ea7a2ac49ab51855975e59f99f723ec45ea72d0cf5861eac4d5eb9" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.170304 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77bf6848c-4tntr" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.173419 4826 generic.go:334] "Generic (PLEG): container finished" podID="a29767ca-fa74-4744-a422-12d6faa87740" containerID="e6e5f2ef0d8686dcf48f85742b4737123e636dbb25c525d938018ab4a6074510" exitCode=0 Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.173524 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8688b58bd8-t8l2l" event={"ID":"a29767ca-fa74-4744-a422-12d6faa87740","Type":"ContainerDied","Data":"e6e5f2ef0d8686dcf48f85742b4737123e636dbb25c525d938018ab4a6074510"} Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.173559 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8688b58bd8-t8l2l" event={"ID":"a29767ca-fa74-4744-a422-12d6faa87740","Type":"ContainerDied","Data":"c6247ba6544581d10c653c33f23f2592e83ac54db615e5dabd35d9f438f5057d"} Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.173724 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8688b58bd8-t8l2l" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.175969 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfmlt" event={"ID":"87b64822-a98d-4129-946c-b073cffe4f6c","Type":"ContainerStarted","Data":"269df3dbb3eb18fbdf76d41511df8f9f5f5ace37a61755d649d2c6d7d448ed0a"} Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.190506 4826 scope.go:117] "RemoveContainer" containerID="cd5592cee0ea7a2ac49ab51855975e59f99f723ec45ea72d0cf5861eac4d5eb9" Jan 29 06:46:17 crc kubenswrapper[4826]: E0129 06:46:17.190974 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd5592cee0ea7a2ac49ab51855975e59f99f723ec45ea72d0cf5861eac4d5eb9\": container with ID starting with cd5592cee0ea7a2ac49ab51855975e59f99f723ec45ea72d0cf5861eac4d5eb9 not found: ID does not exist" containerID="cd5592cee0ea7a2ac49ab51855975e59f99f723ec45ea72d0cf5861eac4d5eb9" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.191021 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd5592cee0ea7a2ac49ab51855975e59f99f723ec45ea72d0cf5861eac4d5eb9"} err="failed to get container status \"cd5592cee0ea7a2ac49ab51855975e59f99f723ec45ea72d0cf5861eac4d5eb9\": rpc error: code = NotFound desc = could not find container \"cd5592cee0ea7a2ac49ab51855975e59f99f723ec45ea72d0cf5861eac4d5eb9\": container with ID starting with cd5592cee0ea7a2ac49ab51855975e59f99f723ec45ea72d0cf5861eac4d5eb9 not found: ID does not exist" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.191049 4826 scope.go:117] "RemoveContainer" containerID="e6e5f2ef0d8686dcf48f85742b4737123e636dbb25c525d938018ab4a6074510" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.210408 4826 scope.go:117] "RemoveContainer" 
containerID="e6e5f2ef0d8686dcf48f85742b4737123e636dbb25c525d938018ab4a6074510" Jan 29 06:46:17 crc kubenswrapper[4826]: E0129 06:46:17.210922 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6e5f2ef0d8686dcf48f85742b4737123e636dbb25c525d938018ab4a6074510\": container with ID starting with e6e5f2ef0d8686dcf48f85742b4737123e636dbb25c525d938018ab4a6074510 not found: ID does not exist" containerID="e6e5f2ef0d8686dcf48f85742b4737123e636dbb25c525d938018ab4a6074510" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.210977 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6e5f2ef0d8686dcf48f85742b4737123e636dbb25c525d938018ab4a6074510"} err="failed to get container status \"e6e5f2ef0d8686dcf48f85742b4737123e636dbb25c525d938018ab4a6074510\": rpc error: code = NotFound desc = could not find container \"e6e5f2ef0d8686dcf48f85742b4737123e636dbb25c525d938018ab4a6074510\": container with ID starting with e6e5f2ef0d8686dcf48f85742b4737123e636dbb25c525d938018ab4a6074510 not found: ID does not exist" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.212763 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xt89w" podStartSLOduration=2.977312315 podStartE2EDuration="57.212744567s" podCreationTimestamp="2026-01-29 06:45:20 +0000 UTC" firstStartedPulling="2026-01-29 06:45:22.35033394 +0000 UTC m=+106.212127009" lastFinishedPulling="2026-01-29 06:46:16.585766202 +0000 UTC m=+160.447559261" observedRunningTime="2026-01-29 06:46:17.212152753 +0000 UTC m=+161.073945832" watchObservedRunningTime="2026-01-29 06:46:17.212744567 +0000 UTC m=+161.074537636" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.214722 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rsfgw" podStartSLOduration=2.857521294 
podStartE2EDuration="58.214715794s" podCreationTimestamp="2026-01-29 06:45:19 +0000 UTC" firstStartedPulling="2026-01-29 06:45:21.2933368 +0000 UTC m=+105.155129869" lastFinishedPulling="2026-01-29 06:46:16.6505313 +0000 UTC m=+160.512324369" observedRunningTime="2026-01-29 06:46:17.190523039 +0000 UTC m=+161.052316098" watchObservedRunningTime="2026-01-29 06:46:17.214715794 +0000 UTC m=+161.076508863" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.237744 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sfmlt" podStartSLOduration=2.622725703 podStartE2EDuration="1m0.237735741s" podCreationTimestamp="2026-01-29 06:45:17 +0000 UTC" firstStartedPulling="2026-01-29 06:45:19.20548003 +0000 UTC m=+103.067273099" lastFinishedPulling="2026-01-29 06:46:16.820490068 +0000 UTC m=+160.682283137" observedRunningTime="2026-01-29 06:46:17.235814875 +0000 UTC m=+161.097607954" watchObservedRunningTime="2026-01-29 06:46:17.237735741 +0000 UTC m=+161.099528810" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.255660 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77bf6848c-4tntr"] Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.258139 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77bf6848c-4tntr"] Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.266721 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8688b58bd8-t8l2l"] Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.269684 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-8688b58bd8-t8l2l"] Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.436868 4826 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-controller-manager/controller-manager-67f47f66d7-p2plj"] Jan 29 06:46:17 crc kubenswrapper[4826]: E0129 06:46:17.437252 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7087f9c0-baee-4f8f-a24d-14e337c4f32e" containerName="registry-server" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.437278 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="7087f9c0-baee-4f8f-a24d-14e337c4f32e" containerName="registry-server" Jan 29 06:46:17 crc kubenswrapper[4826]: E0129 06:46:17.437325 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a29767ca-fa74-4744-a422-12d6faa87740" containerName="controller-manager" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.437339 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="a29767ca-fa74-4744-a422-12d6faa87740" containerName="controller-manager" Jan 29 06:46:17 crc kubenswrapper[4826]: E0129 06:46:17.437356 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7087f9c0-baee-4f8f-a24d-14e337c4f32e" containerName="extract-utilities" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.437366 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="7087f9c0-baee-4f8f-a24d-14e337c4f32e" containerName="extract-utilities" Jan 29 06:46:17 crc kubenswrapper[4826]: E0129 06:46:17.437379 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aae8684-210d-4247-bdb7-483ec7bd6684" containerName="route-controller-manager" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.437388 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aae8684-210d-4247-bdb7-483ec7bd6684" containerName="route-controller-manager" Jan 29 06:46:17 crc kubenswrapper[4826]: E0129 06:46:17.437398 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7087f9c0-baee-4f8f-a24d-14e337c4f32e" containerName="extract-content" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.437408 4826 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="7087f9c0-baee-4f8f-a24d-14e337c4f32e" containerName="extract-content" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.437539 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="a29767ca-fa74-4744-a422-12d6faa87740" containerName="controller-manager" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.437553 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="7087f9c0-baee-4f8f-a24d-14e337c4f32e" containerName="registry-server" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.437569 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="9aae8684-210d-4247-bdb7-483ec7bd6684" containerName="route-controller-manager" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.438112 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67f47f66d7-p2plj" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.439431 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84c68cb56-78tzw"] Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.440058 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84c68cb56-78tzw" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.442525 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.442566 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.442826 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.443102 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.443230 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.443532 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.443667 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.443782 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.444327 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.444462 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 06:46:17 crc kubenswrapper[4826]: 
I0129 06:46:17.446715 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.448597 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.458493 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84c68cb56-78tzw"] Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.459677 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.461011 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67f47f66d7-p2plj"] Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.491115 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-brqmx" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.491176 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-brqmx" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.538227 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-brqmx" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.608051 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhbnp\" (UniqueName: \"kubernetes.io/projected/86f3ea64-461e-47d1-bf4e-ec9018c2e384-kube-api-access-vhbnp\") pod \"controller-manager-67f47f66d7-p2plj\" (UID: \"86f3ea64-461e-47d1-bf4e-ec9018c2e384\") " pod="openshift-controller-manager/controller-manager-67f47f66d7-p2plj" Jan 29 06:46:17 crc 
kubenswrapper[4826]: I0129 06:46:17.608126 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61692392-7bde-4832-ba20-3729bd0473c0-config\") pod \"route-controller-manager-84c68cb56-78tzw\" (UID: \"61692392-7bde-4832-ba20-3729bd0473c0\") " pod="openshift-route-controller-manager/route-controller-manager-84c68cb56-78tzw" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.608199 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86f3ea64-461e-47d1-bf4e-ec9018c2e384-config\") pod \"controller-manager-67f47f66d7-p2plj\" (UID: \"86f3ea64-461e-47d1-bf4e-ec9018c2e384\") " pod="openshift-controller-manager/controller-manager-67f47f66d7-p2plj" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.608706 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhgb6\" (UniqueName: \"kubernetes.io/projected/61692392-7bde-4832-ba20-3729bd0473c0-kube-api-access-rhgb6\") pod \"route-controller-manager-84c68cb56-78tzw\" (UID: \"61692392-7bde-4832-ba20-3729bd0473c0\") " pod="openshift-route-controller-manager/route-controller-manager-84c68cb56-78tzw" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.608774 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61692392-7bde-4832-ba20-3729bd0473c0-serving-cert\") pod \"route-controller-manager-84c68cb56-78tzw\" (UID: \"61692392-7bde-4832-ba20-3729bd0473c0\") " pod="openshift-route-controller-manager/route-controller-manager-84c68cb56-78tzw" Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.608917 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/86f3ea64-461e-47d1-bf4e-ec9018c2e384-proxy-ca-bundles\") pod \"controller-manager-67f47f66d7-p2plj\" (UID: \"86f3ea64-461e-47d1-bf4e-ec9018c2e384\") " pod="openshift-controller-manager/controller-manager-67f47f66d7-p2plj"
Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.608968 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86f3ea64-461e-47d1-bf4e-ec9018c2e384-serving-cert\") pod \"controller-manager-67f47f66d7-p2plj\" (UID: \"86f3ea64-461e-47d1-bf4e-ec9018c2e384\") " pod="openshift-controller-manager/controller-manager-67f47f66d7-p2plj"
Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.609001 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61692392-7bde-4832-ba20-3729bd0473c0-client-ca\") pod \"route-controller-manager-84c68cb56-78tzw\" (UID: \"61692392-7bde-4832-ba20-3729bd0473c0\") " pod="openshift-route-controller-manager/route-controller-manager-84c68cb56-78tzw"
Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.609022 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86f3ea64-461e-47d1-bf4e-ec9018c2e384-client-ca\") pod \"controller-manager-67f47f66d7-p2plj\" (UID: \"86f3ea64-461e-47d1-bf4e-ec9018c2e384\") " pod="openshift-controller-manager/controller-manager-67f47f66d7-p2plj"
Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.709919 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61692392-7bde-4832-ba20-3729bd0473c0-client-ca\") pod \"route-controller-manager-84c68cb56-78tzw\" (UID: \"61692392-7bde-4832-ba20-3729bd0473c0\") " pod="openshift-route-controller-manager/route-controller-manager-84c68cb56-78tzw"
Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.709971 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86f3ea64-461e-47d1-bf4e-ec9018c2e384-client-ca\") pod \"controller-manager-67f47f66d7-p2plj\" (UID: \"86f3ea64-461e-47d1-bf4e-ec9018c2e384\") " pod="openshift-controller-manager/controller-manager-67f47f66d7-p2plj"
Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.709995 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhbnp\" (UniqueName: \"kubernetes.io/projected/86f3ea64-461e-47d1-bf4e-ec9018c2e384-kube-api-access-vhbnp\") pod \"controller-manager-67f47f66d7-p2plj\" (UID: \"86f3ea64-461e-47d1-bf4e-ec9018c2e384\") " pod="openshift-controller-manager/controller-manager-67f47f66d7-p2plj"
Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.710016 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61692392-7bde-4832-ba20-3729bd0473c0-config\") pod \"route-controller-manager-84c68cb56-78tzw\" (UID: \"61692392-7bde-4832-ba20-3729bd0473c0\") " pod="openshift-route-controller-manager/route-controller-manager-84c68cb56-78tzw"
Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.710056 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86f3ea64-461e-47d1-bf4e-ec9018c2e384-config\") pod \"controller-manager-67f47f66d7-p2plj\" (UID: \"86f3ea64-461e-47d1-bf4e-ec9018c2e384\") " pod="openshift-controller-manager/controller-manager-67f47f66d7-p2plj"
Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.710072 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhgb6\" (UniqueName: \"kubernetes.io/projected/61692392-7bde-4832-ba20-3729bd0473c0-kube-api-access-rhgb6\") pod \"route-controller-manager-84c68cb56-78tzw\" (UID: \"61692392-7bde-4832-ba20-3729bd0473c0\") " pod="openshift-route-controller-manager/route-controller-manager-84c68cb56-78tzw"
Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.710090 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61692392-7bde-4832-ba20-3729bd0473c0-serving-cert\") pod \"route-controller-manager-84c68cb56-78tzw\" (UID: \"61692392-7bde-4832-ba20-3729bd0473c0\") " pod="openshift-route-controller-manager/route-controller-manager-84c68cb56-78tzw"
Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.710130 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/86f3ea64-461e-47d1-bf4e-ec9018c2e384-proxy-ca-bundles\") pod \"controller-manager-67f47f66d7-p2plj\" (UID: \"86f3ea64-461e-47d1-bf4e-ec9018c2e384\") " pod="openshift-controller-manager/controller-manager-67f47f66d7-p2plj"
Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.710153 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86f3ea64-461e-47d1-bf4e-ec9018c2e384-serving-cert\") pod \"controller-manager-67f47f66d7-p2plj\" (UID: \"86f3ea64-461e-47d1-bf4e-ec9018c2e384\") " pod="openshift-controller-manager/controller-manager-67f47f66d7-p2plj"
Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.711086 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86f3ea64-461e-47d1-bf4e-ec9018c2e384-client-ca\") pod \"controller-manager-67f47f66d7-p2plj\" (UID: \"86f3ea64-461e-47d1-bf4e-ec9018c2e384\") " pod="openshift-controller-manager/controller-manager-67f47f66d7-p2plj"
Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.711638 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61692392-7bde-4832-ba20-3729bd0473c0-client-ca\") pod \"route-controller-manager-84c68cb56-78tzw\" (UID: \"61692392-7bde-4832-ba20-3729bd0473c0\") " pod="openshift-route-controller-manager/route-controller-manager-84c68cb56-78tzw"
Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.711849 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86f3ea64-461e-47d1-bf4e-ec9018c2e384-config\") pod \"controller-manager-67f47f66d7-p2plj\" (UID: \"86f3ea64-461e-47d1-bf4e-ec9018c2e384\") " pod="openshift-controller-manager/controller-manager-67f47f66d7-p2plj"
Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.711924 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61692392-7bde-4832-ba20-3729bd0473c0-config\") pod \"route-controller-manager-84c68cb56-78tzw\" (UID: \"61692392-7bde-4832-ba20-3729bd0473c0\") " pod="openshift-route-controller-manager/route-controller-manager-84c68cb56-78tzw"
Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.712019 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/86f3ea64-461e-47d1-bf4e-ec9018c2e384-proxy-ca-bundles\") pod \"controller-manager-67f47f66d7-p2plj\" (UID: \"86f3ea64-461e-47d1-bf4e-ec9018c2e384\") " pod="openshift-controller-manager/controller-manager-67f47f66d7-p2plj"
Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.714647 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61692392-7bde-4832-ba20-3729bd0473c0-serving-cert\") pod \"route-controller-manager-84c68cb56-78tzw\" (UID: \"61692392-7bde-4832-ba20-3729bd0473c0\") " pod="openshift-route-controller-manager/route-controller-manager-84c68cb56-78tzw"
Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.715189 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86f3ea64-461e-47d1-bf4e-ec9018c2e384-serving-cert\") pod \"controller-manager-67f47f66d7-p2plj\" (UID: \"86f3ea64-461e-47d1-bf4e-ec9018c2e384\") " pod="openshift-controller-manager/controller-manager-67f47f66d7-p2plj"
Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.727991 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhbnp\" (UniqueName: \"kubernetes.io/projected/86f3ea64-461e-47d1-bf4e-ec9018c2e384-kube-api-access-vhbnp\") pod \"controller-manager-67f47f66d7-p2plj\" (UID: \"86f3ea64-461e-47d1-bf4e-ec9018c2e384\") " pod="openshift-controller-manager/controller-manager-67f47f66d7-p2plj"
Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.732119 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhgb6\" (UniqueName: \"kubernetes.io/projected/61692392-7bde-4832-ba20-3729bd0473c0-kube-api-access-rhgb6\") pod \"route-controller-manager-84c68cb56-78tzw\" (UID: \"61692392-7bde-4832-ba20-3729bd0473c0\") " pod="openshift-route-controller-manager/route-controller-manager-84c68cb56-78tzw"
Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.758156 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67f47f66d7-p2plj"
Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.767324 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84c68cb56-78tzw"
Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.942595 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sfmlt"
Jan 29 06:46:17 crc kubenswrapper[4826]: I0129 06:46:17.942968 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sfmlt"
Jan 29 06:46:18 crc kubenswrapper[4826]: I0129 06:46:18.017831 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84c68cb56-78tzw"]
Jan 29 06:46:18 crc kubenswrapper[4826]: I0129 06:46:18.183746 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84c68cb56-78tzw" event={"ID":"61692392-7bde-4832-ba20-3729bd0473c0","Type":"ContainerStarted","Data":"5ffca51764ef2f509e0147f7704122f8e4225ef2291996573165bd9a03590518"}
Jan 29 06:46:18 crc kubenswrapper[4826]: I0129 06:46:18.183792 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84c68cb56-78tzw" event={"ID":"61692392-7bde-4832-ba20-3729bd0473c0","Type":"ContainerStarted","Data":"f087438f1bbc888c3f1263a7aa7e8f4317add7fc9a8b2593c8f24cc6b449834e"}
Jan 29 06:46:18 crc kubenswrapper[4826]: I0129 06:46:18.183936 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-84c68cb56-78tzw"
Jan 29 06:46:18 crc kubenswrapper[4826]: I0129 06:46:18.186333 4826 patch_prober.go:28] interesting pod/route-controller-manager-84c68cb56-78tzw container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body=
Jan 29 06:46:18 crc kubenswrapper[4826]: I0129 06:46:18.186377 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-84c68cb56-78tzw" podUID="61692392-7bde-4832-ba20-3729bd0473c0" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused"
Jan 29 06:46:18 crc kubenswrapper[4826]: I0129 06:46:18.251346 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-brqmx"
Jan 29 06:46:18 crc kubenswrapper[4826]: I0129 06:46:18.260897 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-84c68cb56-78tzw" podStartSLOduration=2.260882648 podStartE2EDuration="2.260882648s" podCreationTimestamp="2026-01-29 06:46:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:46:18.206620589 +0000 UTC m=+162.068413658" watchObservedRunningTime="2026-01-29 06:46:18.260882648 +0000 UTC m=+162.122675717"
Jan 29 06:46:18 crc kubenswrapper[4826]: I0129 06:46:18.264850 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67f47f66d7-p2plj"]
Jan 29 06:46:18 crc kubenswrapper[4826]: W0129 06:46:18.268342 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86f3ea64_461e_47d1_bf4e_ec9018c2e384.slice/crio-b2f8f20a7e951dd1af55248bcf4b8f976fc099fe1ce5dafbbe121816548f0f76 WatchSource:0}: Error finding container b2f8f20a7e951dd1af55248bcf4b8f976fc099fe1ce5dafbbe121816548f0f76: Status 404 returned error can't find the container with id b2f8f20a7e951dd1af55248bcf4b8f976fc099fe1ce5dafbbe121816548f0f76
Jan 29 06:46:18 crc kubenswrapper[4826]: I0129 06:46:18.823580 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9aae8684-210d-4247-bdb7-483ec7bd6684" path="/var/lib/kubelet/pods/9aae8684-210d-4247-bdb7-483ec7bd6684/volumes"
Jan 29 06:46:18 crc kubenswrapper[4826]: I0129 06:46:18.824126 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a29767ca-fa74-4744-a422-12d6faa87740" path="/var/lib/kubelet/pods/a29767ca-fa74-4744-a422-12d6faa87740/volumes"
Jan 29 06:46:18 crc kubenswrapper[4826]: I0129 06:46:18.990149 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-sfmlt" podUID="87b64822-a98d-4129-946c-b073cffe4f6c" containerName="registry-server" probeResult="failure" output=<
Jan 29 06:46:18 crc kubenswrapper[4826]: timeout: failed to connect service ":50051" within 1s
Jan 29 06:46:18 crc kubenswrapper[4826]: >
Jan 29 06:46:19 crc kubenswrapper[4826]: I0129 06:46:19.189850 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67f47f66d7-p2plj" event={"ID":"86f3ea64-461e-47d1-bf4e-ec9018c2e384","Type":"ContainerStarted","Data":"ebd23a58ec635eb08189a4eca7f19ad4eb9efcd48b45b0d2e2102afaf70ab7e0"}
Jan 29 06:46:19 crc kubenswrapper[4826]: I0129 06:46:19.190245 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67f47f66d7-p2plj" event={"ID":"86f3ea64-461e-47d1-bf4e-ec9018c2e384","Type":"ContainerStarted","Data":"b2f8f20a7e951dd1af55248bcf4b8f976fc099fe1ce5dafbbe121816548f0f76"}
Jan 29 06:46:19 crc kubenswrapper[4826]: I0129 06:46:19.190512 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-67f47f66d7-p2plj"
Jan 29 06:46:19 crc kubenswrapper[4826]: I0129 06:46:19.197375 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-67f47f66d7-p2plj"
Jan 29 06:46:19 crc kubenswrapper[4826]: I0129 06:46:19.198403 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-84c68cb56-78tzw"
Jan 29 06:46:19 crc kubenswrapper[4826]: I0129 06:46:19.210638 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-67f47f66d7-p2plj" podStartSLOduration=3.210619422 podStartE2EDuration="3.210619422s" podCreationTimestamp="2026-01-29 06:46:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:46:19.206088524 +0000 UTC m=+163.067881593" watchObservedRunningTime="2026-01-29 06:46:19.210619422 +0000 UTC m=+163.072412521"
Jan 29 06:46:19 crc kubenswrapper[4826]: I0129 06:46:19.710009 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rsfgw"
Jan 29 06:46:19 crc kubenswrapper[4826]: I0129 06:46:19.710879 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rsfgw"
Jan 29 06:46:19 crc kubenswrapper[4826]: I0129 06:46:19.780564 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rsfgw"
Jan 29 06:46:20 crc kubenswrapper[4826]: I0129 06:46:20.569186 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xt89w"
Jan 29 06:46:20 crc kubenswrapper[4826]: I0129 06:46:20.569495 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xt89w"
Jan 29 06:46:21 crc kubenswrapper[4826]: I0129 06:46:21.010723 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qb44z"
Jan 29 06:46:21 crc kubenswrapper[4826]: I0129 06:46:21.079819 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qb44z"
Jan 29 06:46:21 crc kubenswrapper[4826]: I0129 06:46:21.633489 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xt89w" podUID="2b7e32bd-0e0a-49fd-a29e-4c8087218b7a" containerName="registry-server" probeResult="failure" output=<
Jan 29 06:46:21 crc kubenswrapper[4826]: timeout: failed to connect service ":50051" within 1s
Jan 29 06:46:21 crc kubenswrapper[4826]: >
Jan 29 06:46:24 crc kubenswrapper[4826]: I0129 06:46:24.263290 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qb44z"]
Jan 29 06:46:24 crc kubenswrapper[4826]: I0129 06:46:24.264052 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qb44z" podUID="832cf5b0-2b5a-4975-b147-8a1f08f08456" containerName="registry-server" containerID="cri-o://5265233da520fc41c05f0a788a63a097be3248223d4b7ac01ac28754e033e4c1" gracePeriod=2
Jan 29 06:46:24 crc kubenswrapper[4826]: I0129 06:46:24.825259 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qb44z"
Jan 29 06:46:24 crc kubenswrapper[4826]: I0129 06:46:24.928731 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/832cf5b0-2b5a-4975-b147-8a1f08f08456-catalog-content\") pod \"832cf5b0-2b5a-4975-b147-8a1f08f08456\" (UID: \"832cf5b0-2b5a-4975-b147-8a1f08f08456\") "
Jan 29 06:46:24 crc kubenswrapper[4826]: I0129 06:46:24.928846 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/832cf5b0-2b5a-4975-b147-8a1f08f08456-utilities\") pod \"832cf5b0-2b5a-4975-b147-8a1f08f08456\" (UID: \"832cf5b0-2b5a-4975-b147-8a1f08f08456\") "
Jan 29 06:46:24 crc kubenswrapper[4826]: I0129 06:46:24.929011 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjf7n\" (UniqueName: \"kubernetes.io/projected/832cf5b0-2b5a-4975-b147-8a1f08f08456-kube-api-access-pjf7n\") pod \"832cf5b0-2b5a-4975-b147-8a1f08f08456\" (UID: \"832cf5b0-2b5a-4975-b147-8a1f08f08456\") "
Jan 29 06:46:24 crc kubenswrapper[4826]: I0129 06:46:24.930935 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/832cf5b0-2b5a-4975-b147-8a1f08f08456-utilities" (OuterVolumeSpecName: "utilities") pod "832cf5b0-2b5a-4975-b147-8a1f08f08456" (UID: "832cf5b0-2b5a-4975-b147-8a1f08f08456"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 06:46:24 crc kubenswrapper[4826]: I0129 06:46:24.945575 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/832cf5b0-2b5a-4975-b147-8a1f08f08456-kube-api-access-pjf7n" (OuterVolumeSpecName: "kube-api-access-pjf7n") pod "832cf5b0-2b5a-4975-b147-8a1f08f08456" (UID: "832cf5b0-2b5a-4975-b147-8a1f08f08456"). InnerVolumeSpecName "kube-api-access-pjf7n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 06:46:25 crc kubenswrapper[4826]: I0129 06:46:25.030261 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/832cf5b0-2b5a-4975-b147-8a1f08f08456-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 06:46:25 crc kubenswrapper[4826]: I0129 06:46:25.030339 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjf7n\" (UniqueName: \"kubernetes.io/projected/832cf5b0-2b5a-4975-b147-8a1f08f08456-kube-api-access-pjf7n\") on node \"crc\" DevicePath \"\""
Jan 29 06:46:25 crc kubenswrapper[4826]: I0129 06:46:25.114243 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/832cf5b0-2b5a-4975-b147-8a1f08f08456-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "832cf5b0-2b5a-4975-b147-8a1f08f08456" (UID: "832cf5b0-2b5a-4975-b147-8a1f08f08456"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 06:46:25 crc kubenswrapper[4826]: I0129 06:46:25.132104 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/832cf5b0-2b5a-4975-b147-8a1f08f08456-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 06:46:25 crc kubenswrapper[4826]: I0129 06:46:25.245168 4826 generic.go:334] "Generic (PLEG): container finished" podID="832cf5b0-2b5a-4975-b147-8a1f08f08456" containerID="5265233da520fc41c05f0a788a63a097be3248223d4b7ac01ac28754e033e4c1" exitCode=0
Jan 29 06:46:25 crc kubenswrapper[4826]: I0129 06:46:25.245235 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qb44z" event={"ID":"832cf5b0-2b5a-4975-b147-8a1f08f08456","Type":"ContainerDied","Data":"5265233da520fc41c05f0a788a63a097be3248223d4b7ac01ac28754e033e4c1"}
Jan 29 06:46:25 crc kubenswrapper[4826]: I0129 06:46:25.245279 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qb44z" event={"ID":"832cf5b0-2b5a-4975-b147-8a1f08f08456","Type":"ContainerDied","Data":"28db21333d28ca42ce84cdd16cba78d9d5e29a16f4c00c9231fe8bb51c621b05"}
Jan 29 06:46:25 crc kubenswrapper[4826]: I0129 06:46:25.245334 4826 scope.go:117] "RemoveContainer" containerID="5265233da520fc41c05f0a788a63a097be3248223d4b7ac01ac28754e033e4c1"
Jan 29 06:46:25 crc kubenswrapper[4826]: I0129 06:46:25.245340 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qb44z"
Jan 29 06:46:25 crc kubenswrapper[4826]: I0129 06:46:25.290817 4826 scope.go:117] "RemoveContainer" containerID="056329e99688daf07a6a5c6a9bf7d8e8df10a4bb5ed8896364009e8f49a92062"
Jan 29 06:46:25 crc kubenswrapper[4826]: I0129 06:46:25.297833 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qb44z"]
Jan 29 06:46:25 crc kubenswrapper[4826]: I0129 06:46:25.303386 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qb44z"]
Jan 29 06:46:25 crc kubenswrapper[4826]: I0129 06:46:25.321855 4826 scope.go:117] "RemoveContainer" containerID="1f80edadedbb32c8fb5132c32eef7b9c460e928ced345bdd987c1cc167d3326c"
Jan 29 06:46:25 crc kubenswrapper[4826]: I0129 06:46:25.354022 4826 scope.go:117] "RemoveContainer" containerID="5265233da520fc41c05f0a788a63a097be3248223d4b7ac01ac28754e033e4c1"
Jan 29 06:46:25 crc kubenswrapper[4826]: E0129 06:46:25.354970 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5265233da520fc41c05f0a788a63a097be3248223d4b7ac01ac28754e033e4c1\": container with ID starting with 5265233da520fc41c05f0a788a63a097be3248223d4b7ac01ac28754e033e4c1 not found: ID does not exist" containerID="5265233da520fc41c05f0a788a63a097be3248223d4b7ac01ac28754e033e4c1"
Jan 29 06:46:25 crc kubenswrapper[4826]: I0129 06:46:25.355039 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5265233da520fc41c05f0a788a63a097be3248223d4b7ac01ac28754e033e4c1"} err="failed to get container status \"5265233da520fc41c05f0a788a63a097be3248223d4b7ac01ac28754e033e4c1\": rpc error: code = NotFound desc = could not find container \"5265233da520fc41c05f0a788a63a097be3248223d4b7ac01ac28754e033e4c1\": container with ID starting with 5265233da520fc41c05f0a788a63a097be3248223d4b7ac01ac28754e033e4c1 not found: ID does not exist"
Jan 29 06:46:25 crc kubenswrapper[4826]: I0129 06:46:25.355084 4826 scope.go:117] "RemoveContainer" containerID="056329e99688daf07a6a5c6a9bf7d8e8df10a4bb5ed8896364009e8f49a92062"
Jan 29 06:46:25 crc kubenswrapper[4826]: E0129 06:46:25.356081 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"056329e99688daf07a6a5c6a9bf7d8e8df10a4bb5ed8896364009e8f49a92062\": container with ID starting with 056329e99688daf07a6a5c6a9bf7d8e8df10a4bb5ed8896364009e8f49a92062 not found: ID does not exist" containerID="056329e99688daf07a6a5c6a9bf7d8e8df10a4bb5ed8896364009e8f49a92062"
Jan 29 06:46:25 crc kubenswrapper[4826]: I0129 06:46:25.356148 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"056329e99688daf07a6a5c6a9bf7d8e8df10a4bb5ed8896364009e8f49a92062"} err="failed to get container status \"056329e99688daf07a6a5c6a9bf7d8e8df10a4bb5ed8896364009e8f49a92062\": rpc error: code = NotFound desc = could not find container \"056329e99688daf07a6a5c6a9bf7d8e8df10a4bb5ed8896364009e8f49a92062\": container with ID starting with 056329e99688daf07a6a5c6a9bf7d8e8df10a4bb5ed8896364009e8f49a92062 not found: ID does not exist"
Jan 29 06:46:25 crc kubenswrapper[4826]: I0129 06:46:25.356196 4826 scope.go:117] "RemoveContainer" containerID="1f80edadedbb32c8fb5132c32eef7b9c460e928ced345bdd987c1cc167d3326c"
Jan 29 06:46:25 crc kubenswrapper[4826]: E0129 06:46:25.356842 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f80edadedbb32c8fb5132c32eef7b9c460e928ced345bdd987c1cc167d3326c\": container with ID starting with 1f80edadedbb32c8fb5132c32eef7b9c460e928ced345bdd987c1cc167d3326c not found: ID does not exist" containerID="1f80edadedbb32c8fb5132c32eef7b9c460e928ced345bdd987c1cc167d3326c"
Jan 29 06:46:25 crc kubenswrapper[4826]: I0129 06:46:25.356915 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f80edadedbb32c8fb5132c32eef7b9c460e928ced345bdd987c1cc167d3326c"} err="failed to get container status \"1f80edadedbb32c8fb5132c32eef7b9c460e928ced345bdd987c1cc167d3326c\": rpc error: code = NotFound desc = could not find container \"1f80edadedbb32c8fb5132c32eef7b9c460e928ced345bdd987c1cc167d3326c\": container with ID starting with 1f80edadedbb32c8fb5132c32eef7b9c460e928ced345bdd987c1cc167d3326c not found: ID does not exist"
Jan 29 06:46:26 crc kubenswrapper[4826]: I0129 06:46:26.820113 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="832cf5b0-2b5a-4975-b147-8a1f08f08456" path="/var/lib/kubelet/pods/832cf5b0-2b5a-4975-b147-8a1f08f08456/volumes"
Jan 29 06:46:28 crc kubenswrapper[4826]: I0129 06:46:28.008797 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sfmlt"
Jan 29 06:46:28 crc kubenswrapper[4826]: I0129 06:46:28.072267 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sfmlt"
Jan 29 06:46:28 crc kubenswrapper[4826]: I0129 06:46:28.459335 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sfmlt"]
Jan 29 06:46:29 crc kubenswrapper[4826]: I0129 06:46:29.281544 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sfmlt" podUID="87b64822-a98d-4129-946c-b073cffe4f6c" containerName="registry-server" containerID="cri-o://269df3dbb3eb18fbdf76d41511df8f9f5f5ace37a61755d649d2c6d7d448ed0a" gracePeriod=2
Jan 29 06:46:29 crc kubenswrapper[4826]: I0129 06:46:29.778230 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rsfgw"
Jan 29 06:46:29 crc kubenswrapper[4826]: I0129 06:46:29.952546 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sfmlt"
Jan 29 06:46:30 crc kubenswrapper[4826]: I0129 06:46:30.115679 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqh8x\" (UniqueName: \"kubernetes.io/projected/87b64822-a98d-4129-946c-b073cffe4f6c-kube-api-access-kqh8x\") pod \"87b64822-a98d-4129-946c-b073cffe4f6c\" (UID: \"87b64822-a98d-4129-946c-b073cffe4f6c\") "
Jan 29 06:46:30 crc kubenswrapper[4826]: I0129 06:46:30.115828 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b64822-a98d-4129-946c-b073cffe4f6c-catalog-content\") pod \"87b64822-a98d-4129-946c-b073cffe4f6c\" (UID: \"87b64822-a98d-4129-946c-b073cffe4f6c\") "
Jan 29 06:46:30 crc kubenswrapper[4826]: I0129 06:46:30.116072 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b64822-a98d-4129-946c-b073cffe4f6c-utilities\") pod \"87b64822-a98d-4129-946c-b073cffe4f6c\" (UID: \"87b64822-a98d-4129-946c-b073cffe4f6c\") "
Jan 29 06:46:30 crc kubenswrapper[4826]: I0129 06:46:30.117782 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87b64822-a98d-4129-946c-b073cffe4f6c-utilities" (OuterVolumeSpecName: "utilities") pod "87b64822-a98d-4129-946c-b073cffe4f6c" (UID: "87b64822-a98d-4129-946c-b073cffe4f6c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 06:46:30 crc kubenswrapper[4826]: I0129 06:46:30.133056 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87b64822-a98d-4129-946c-b073cffe4f6c-kube-api-access-kqh8x" (OuterVolumeSpecName: "kube-api-access-kqh8x") pod "87b64822-a98d-4129-946c-b073cffe4f6c" (UID: "87b64822-a98d-4129-946c-b073cffe4f6c"). InnerVolumeSpecName "kube-api-access-kqh8x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 06:46:30 crc kubenswrapper[4826]: I0129 06:46:30.212420 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87b64822-a98d-4129-946c-b073cffe4f6c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87b64822-a98d-4129-946c-b073cffe4f6c" (UID: "87b64822-a98d-4129-946c-b073cffe4f6c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 06:46:30 crc kubenswrapper[4826]: I0129 06:46:30.217950 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b64822-a98d-4129-946c-b073cffe4f6c-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 06:46:30 crc kubenswrapper[4826]: I0129 06:46:30.218017 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqh8x\" (UniqueName: \"kubernetes.io/projected/87b64822-a98d-4129-946c-b073cffe4f6c-kube-api-access-kqh8x\") on node \"crc\" DevicePath \"\""
Jan 29 06:46:30 crc kubenswrapper[4826]: I0129 06:46:30.218043 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b64822-a98d-4129-946c-b073cffe4f6c-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 06:46:30 crc kubenswrapper[4826]: I0129 06:46:30.306799 4826 generic.go:334] "Generic (PLEG): container finished" podID="87b64822-a98d-4129-946c-b073cffe4f6c" containerID="269df3dbb3eb18fbdf76d41511df8f9f5f5ace37a61755d649d2c6d7d448ed0a" exitCode=0
Jan 29 06:46:30 crc kubenswrapper[4826]: I0129 06:46:30.306970 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sfmlt"
Jan 29 06:46:30 crc kubenswrapper[4826]: I0129 06:46:30.306898 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfmlt" event={"ID":"87b64822-a98d-4129-946c-b073cffe4f6c","Type":"ContainerDied","Data":"269df3dbb3eb18fbdf76d41511df8f9f5f5ace37a61755d649d2c6d7d448ed0a"}
Jan 29 06:46:30 crc kubenswrapper[4826]: I0129 06:46:30.307202 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfmlt" event={"ID":"87b64822-a98d-4129-946c-b073cffe4f6c","Type":"ContainerDied","Data":"d7177835ae17005e0f7ccad7d8a9ab6466ef052514f24eaf0a2758a46175091c"}
Jan 29 06:46:30 crc kubenswrapper[4826]: I0129 06:46:30.307251 4826 scope.go:117] "RemoveContainer" containerID="269df3dbb3eb18fbdf76d41511df8f9f5f5ace37a61755d649d2c6d7d448ed0a"
Jan 29 06:46:30 crc kubenswrapper[4826]: I0129 06:46:30.341388 4826 scope.go:117] "RemoveContainer" containerID="1c52936c59779c76a4ac1a0c1dce5e0f842fe3ba6cb25ecca0d495b9ecdbff02"
Jan 29 06:46:30 crc kubenswrapper[4826]: I0129 06:46:30.357670 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sfmlt"]
Jan 29 06:46:30 crc kubenswrapper[4826]: I0129 06:46:30.369386 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sfmlt"]
Jan 29 06:46:30 crc kubenswrapper[4826]: I0129 06:46:30.381084 4826 scope.go:117] "RemoveContainer" containerID="fb5a11be0225127a995d30d63e988d63a493795f05d4bc9c94a8bdca101092e2"
Jan 29 06:46:30 crc kubenswrapper[4826]: I0129 06:46:30.413635 4826 scope.go:117] "RemoveContainer" containerID="269df3dbb3eb18fbdf76d41511df8f9f5f5ace37a61755d649d2c6d7d448ed0a"
Jan 29 06:46:30 crc kubenswrapper[4826]: E0129 06:46:30.414272 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"269df3dbb3eb18fbdf76d41511df8f9f5f5ace37a61755d649d2c6d7d448ed0a\": container with ID starting with 269df3dbb3eb18fbdf76d41511df8f9f5f5ace37a61755d649d2c6d7d448ed0a not found: ID does not exist" containerID="269df3dbb3eb18fbdf76d41511df8f9f5f5ace37a61755d649d2c6d7d448ed0a"
Jan 29 06:46:30 crc kubenswrapper[4826]: I0129 06:46:30.414398 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"269df3dbb3eb18fbdf76d41511df8f9f5f5ace37a61755d649d2c6d7d448ed0a"} err="failed to get container status \"269df3dbb3eb18fbdf76d41511df8f9f5f5ace37a61755d649d2c6d7d448ed0a\": rpc error: code = NotFound desc = could not find container \"269df3dbb3eb18fbdf76d41511df8f9f5f5ace37a61755d649d2c6d7d448ed0a\": container with ID starting with 269df3dbb3eb18fbdf76d41511df8f9f5f5ace37a61755d649d2c6d7d448ed0a not found: ID does not exist"
Jan 29 06:46:30 crc kubenswrapper[4826]: I0129 06:46:30.414444 4826 scope.go:117] "RemoveContainer" containerID="1c52936c59779c76a4ac1a0c1dce5e0f842fe3ba6cb25ecca0d495b9ecdbff02"
Jan 29 06:46:30 crc kubenswrapper[4826]: E0129 06:46:30.414903 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c52936c59779c76a4ac1a0c1dce5e0f842fe3ba6cb25ecca0d495b9ecdbff02\": container with ID starting with 1c52936c59779c76a4ac1a0c1dce5e0f842fe3ba6cb25ecca0d495b9ecdbff02 not found: ID does not exist" containerID="1c52936c59779c76a4ac1a0c1dce5e0f842fe3ba6cb25ecca0d495b9ecdbff02"
Jan 29 06:46:30 crc kubenswrapper[4826]: I0129 06:46:30.414951 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c52936c59779c76a4ac1a0c1dce5e0f842fe3ba6cb25ecca0d495b9ecdbff02"} err="failed to get container status \"1c52936c59779c76a4ac1a0c1dce5e0f842fe3ba6cb25ecca0d495b9ecdbff02\": rpc error: code = NotFound desc = could not find container \"1c52936c59779c76a4ac1a0c1dce5e0f842fe3ba6cb25ecca0d495b9ecdbff02\": container with ID starting with 1c52936c59779c76a4ac1a0c1dce5e0f842fe3ba6cb25ecca0d495b9ecdbff02 not found: ID does not exist"
Jan 29 06:46:30 crc kubenswrapper[4826]: I0129 06:46:30.414976 4826 scope.go:117] "RemoveContainer" containerID="fb5a11be0225127a995d30d63e988d63a493795f05d4bc9c94a8bdca101092e2"
Jan 29 06:46:30 crc kubenswrapper[4826]: E0129 06:46:30.415556 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb5a11be0225127a995d30d63e988d63a493795f05d4bc9c94a8bdca101092e2\": container with ID starting with fb5a11be0225127a995d30d63e988d63a493795f05d4bc9c94a8bdca101092e2 not found: ID does not exist" containerID="fb5a11be0225127a995d30d63e988d63a493795f05d4bc9c94a8bdca101092e2"
Jan 29 06:46:30 crc kubenswrapper[4826]: I0129 06:46:30.415604 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb5a11be0225127a995d30d63e988d63a493795f05d4bc9c94a8bdca101092e2"} err="failed to get container status \"fb5a11be0225127a995d30d63e988d63a493795f05d4bc9c94a8bdca101092e2\": rpc error: code = NotFound desc = could not find container \"fb5a11be0225127a995d30d63e988d63a493795f05d4bc9c94a8bdca101092e2\": container with ID starting with fb5a11be0225127a995d30d63e988d63a493795f05d4bc9c94a8bdca101092e2 not found: ID does not exist"
Jan 29 06:46:30 crc kubenswrapper[4826]: I0129 06:46:30.645917 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xt89w"
Jan 29 06:46:30 crc kubenswrapper[4826]: I0129 06:46:30.694915 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xt89w"
Jan 29 06:46:30 crc kubenswrapper[4826]: I0129 06:46:30.820719 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87b64822-a98d-4129-946c-b073cffe4f6c" path="/var/lib/kubelet/pods/87b64822-a98d-4129-946c-b073cffe4f6c/volumes"
Jan 29 06:46:31 crc kubenswrapper[4826]: I0129 06:46:31.642814 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd" podUID="30bc5222-e3c7-4cad-8e68-d39368e9d00d" containerName="oauth-openshift" containerID="cri-o://1514f7bf0c18e0fa17f17ff3623544dd54f00df0d156ae5df25342ca9ce38396" gracePeriod=15
Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.063475 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rsfgw"]
Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.064232 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rsfgw" podUID="737f9828-15a3-401e-aa29-e9467773637f" containerName="registry-server" containerID="cri-o://e0e6b099c6cb9f1fef6f62f7763a45a8485c335ef23f5803fc15ea941e20ec51" gracePeriod=2
Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.294460 4826 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd" Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.336849 4826 generic.go:334] "Generic (PLEG): container finished" podID="737f9828-15a3-401e-aa29-e9467773637f" containerID="e0e6b099c6cb9f1fef6f62f7763a45a8485c335ef23f5803fc15ea941e20ec51" exitCode=0 Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.336927 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rsfgw" event={"ID":"737f9828-15a3-401e-aa29-e9467773637f","Type":"ContainerDied","Data":"e0e6b099c6cb9f1fef6f62f7763a45a8485c335ef23f5803fc15ea941e20ec51"} Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.338349 4826 generic.go:334] "Generic (PLEG): container finished" podID="30bc5222-e3c7-4cad-8e68-d39368e9d00d" containerID="1514f7bf0c18e0fa17f17ff3623544dd54f00df0d156ae5df25342ca9ce38396" exitCode=0 Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.338384 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd" event={"ID":"30bc5222-e3c7-4cad-8e68-d39368e9d00d","Type":"ContainerDied","Data":"1514f7bf0c18e0fa17f17ff3623544dd54f00df0d156ae5df25342ca9ce38396"} Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.338404 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd" event={"ID":"30bc5222-e3c7-4cad-8e68-d39368e9d00d","Type":"ContainerDied","Data":"8c9710e152f6d8b54e77c47ad302512a731be9b517006fa241a898f24fc14fec"} Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.338423 4826 scope.go:117] "RemoveContainer" containerID="1514f7bf0c18e0fa17f17ff3623544dd54f00df0d156ae5df25342ca9ce38396" Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.338525 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2fdxd" Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.371143 4826 scope.go:117] "RemoveContainer" containerID="1514f7bf0c18e0fa17f17ff3623544dd54f00df0d156ae5df25342ca9ce38396" Jan 29 06:46:32 crc kubenswrapper[4826]: E0129 06:46:32.371680 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1514f7bf0c18e0fa17f17ff3623544dd54f00df0d156ae5df25342ca9ce38396\": container with ID starting with 1514f7bf0c18e0fa17f17ff3623544dd54f00df0d156ae5df25342ca9ce38396 not found: ID does not exist" containerID="1514f7bf0c18e0fa17f17ff3623544dd54f00df0d156ae5df25342ca9ce38396" Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.371733 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1514f7bf0c18e0fa17f17ff3623544dd54f00df0d156ae5df25342ca9ce38396"} err="failed to get container status \"1514f7bf0c18e0fa17f17ff3623544dd54f00df0d156ae5df25342ca9ce38396\": rpc error: code = NotFound desc = could not find container \"1514f7bf0c18e0fa17f17ff3623544dd54f00df0d156ae5df25342ca9ce38396\": container with ID starting with 1514f7bf0c18e0fa17f17ff3623544dd54f00df0d156ae5df25342ca9ce38396 not found: ID does not exist" Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.456047 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngg5l\" (UniqueName: \"kubernetes.io/projected/30bc5222-e3c7-4cad-8e68-d39368e9d00d-kube-api-access-ngg5l\") pod \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.456113 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-serving-cert\") pod 
\"30bc5222-e3c7-4cad-8e68-d39368e9d00d\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.456177 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/30bc5222-e3c7-4cad-8e68-d39368e9d00d-audit-dir\") pod \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.456214 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-service-ca\") pod \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.456239 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-cliconfig\") pod \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.456277 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-user-idp-0-file-data\") pod \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.456289 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30bc5222-e3c7-4cad-8e68-d39368e9d00d-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "30bc5222-e3c7-4cad-8e68-d39368e9d00d" (UID: "30bc5222-e3c7-4cad-8e68-d39368e9d00d"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.456338 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-ocp-branding-template\") pod \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.456366 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-router-certs\") pod \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.456402 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-user-template-error\") pod \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.457916 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "30bc5222-e3c7-4cad-8e68-d39368e9d00d" (UID: "30bc5222-e3c7-4cad-8e68-d39368e9d00d"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.457949 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "30bc5222-e3c7-4cad-8e68-d39368e9d00d" (UID: "30bc5222-e3c7-4cad-8e68-d39368e9d00d"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.466492 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-trusted-ca-bundle\") pod \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.466539 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/30bc5222-e3c7-4cad-8e68-d39368e9d00d-audit-policies\") pod \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.466581 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-session\") pod \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.466665 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-user-template-login\") pod 
\"30bc5222-e3c7-4cad-8e68-d39368e9d00d\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.466700 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-user-template-provider-selection\") pod \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\" (UID: \"30bc5222-e3c7-4cad-8e68-d39368e9d00d\") " Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.467518 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30bc5222-e3c7-4cad-8e68-d39368e9d00d-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "30bc5222-e3c7-4cad-8e68-d39368e9d00d" (UID: "30bc5222-e3c7-4cad-8e68-d39368e9d00d"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.467710 4826 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/30bc5222-e3c7-4cad-8e68-d39368e9d00d-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.467738 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.467752 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.467766 4826 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/30bc5222-e3c7-4cad-8e68-d39368e9d00d-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.467938 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "30bc5222-e3c7-4cad-8e68-d39368e9d00d" (UID: "30bc5222-e3c7-4cad-8e68-d39368e9d00d"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.468482 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "30bc5222-e3c7-4cad-8e68-d39368e9d00d" (UID: "30bc5222-e3c7-4cad-8e68-d39368e9d00d"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.469812 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "30bc5222-e3c7-4cad-8e68-d39368e9d00d" (UID: "30bc5222-e3c7-4cad-8e68-d39368e9d00d"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.471911 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30bc5222-e3c7-4cad-8e68-d39368e9d00d-kube-api-access-ngg5l" (OuterVolumeSpecName: "kube-api-access-ngg5l") pod "30bc5222-e3c7-4cad-8e68-d39368e9d00d" (UID: "30bc5222-e3c7-4cad-8e68-d39368e9d00d"). 
InnerVolumeSpecName "kube-api-access-ngg5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.471901 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "30bc5222-e3c7-4cad-8e68-d39368e9d00d" (UID: "30bc5222-e3c7-4cad-8e68-d39368e9d00d"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.473176 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "30bc5222-e3c7-4cad-8e68-d39368e9d00d" (UID: "30bc5222-e3c7-4cad-8e68-d39368e9d00d"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.474139 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "30bc5222-e3c7-4cad-8e68-d39368e9d00d" (UID: "30bc5222-e3c7-4cad-8e68-d39368e9d00d"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.475143 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "30bc5222-e3c7-4cad-8e68-d39368e9d00d" (UID: "30bc5222-e3c7-4cad-8e68-d39368e9d00d"). 
InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.476100 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "30bc5222-e3c7-4cad-8e68-d39368e9d00d" (UID: "30bc5222-e3c7-4cad-8e68-d39368e9d00d"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.477006 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "30bc5222-e3c7-4cad-8e68-d39368e9d00d" (UID: "30bc5222-e3c7-4cad-8e68-d39368e9d00d"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.569034 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.569066 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.569079 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.569088 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.569097 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.569108 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.569117 4826 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.569127 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.569139 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngg5l\" (UniqueName: \"kubernetes.io/projected/30bc5222-e3c7-4cad-8e68-d39368e9d00d-kube-api-access-ngg5l\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.569147 4826 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/30bc5222-e3c7-4cad-8e68-d39368e9d00d-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.581995 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rsfgw" Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.669808 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/737f9828-15a3-401e-aa29-e9467773637f-utilities\") pod \"737f9828-15a3-401e-aa29-e9467773637f\" (UID: \"737f9828-15a3-401e-aa29-e9467773637f\") " Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.669915 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/737f9828-15a3-401e-aa29-e9467773637f-catalog-content\") pod \"737f9828-15a3-401e-aa29-e9467773637f\" (UID: \"737f9828-15a3-401e-aa29-e9467773637f\") " Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.670003 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2j79v\" (UniqueName: \"kubernetes.io/projected/737f9828-15a3-401e-aa29-e9467773637f-kube-api-access-2j79v\") pod \"737f9828-15a3-401e-aa29-e9467773637f\" (UID: \"737f9828-15a3-401e-aa29-e9467773637f\") " Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.675579 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/737f9828-15a3-401e-aa29-e9467773637f-utilities" (OuterVolumeSpecName: "utilities") pod "737f9828-15a3-401e-aa29-e9467773637f" (UID: "737f9828-15a3-401e-aa29-e9467773637f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.680402 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2fdxd"] Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.682498 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/737f9828-15a3-401e-aa29-e9467773637f-kube-api-access-2j79v" (OuterVolumeSpecName: "kube-api-access-2j79v") pod "737f9828-15a3-401e-aa29-e9467773637f" (UID: "737f9828-15a3-401e-aa29-e9467773637f"). InnerVolumeSpecName "kube-api-access-2j79v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.684646 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2fdxd"] Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.710383 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/737f9828-15a3-401e-aa29-e9467773637f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "737f9828-15a3-401e-aa29-e9467773637f" (UID: "737f9828-15a3-401e-aa29-e9467773637f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.772054 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/737f9828-15a3-401e-aa29-e9467773637f-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.772109 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/737f9828-15a3-401e-aa29-e9467773637f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.772130 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2j79v\" (UniqueName: \"kubernetes.io/projected/737f9828-15a3-401e-aa29-e9467773637f-kube-api-access-2j79v\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:32 crc kubenswrapper[4826]: I0129 06:46:32.821664 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30bc5222-e3c7-4cad-8e68-d39368e9d00d" path="/var/lib/kubelet/pods/30bc5222-e3c7-4cad-8e68-d39368e9d00d/volumes" Jan 29 06:46:33 crc kubenswrapper[4826]: I0129 06:46:33.347113 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rsfgw" Jan 29 06:46:33 crc kubenswrapper[4826]: I0129 06:46:33.347287 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rsfgw" event={"ID":"737f9828-15a3-401e-aa29-e9467773637f","Type":"ContainerDied","Data":"c0259a3ef32ec3d52105bb14d5ccd2878dc174e6630f0496243cd7aa7078a161"} Jan 29 06:46:33 crc kubenswrapper[4826]: I0129 06:46:33.347389 4826 scope.go:117] "RemoveContainer" containerID="e0e6b099c6cb9f1fef6f62f7763a45a8485c335ef23f5803fc15ea941e20ec51" Jan 29 06:46:33 crc kubenswrapper[4826]: I0129 06:46:33.369587 4826 scope.go:117] "RemoveContainer" containerID="b3ae2002e115149a8d69e5c57f5cf4b96847e97174841301ed61c5f7b726443c" Jan 29 06:46:33 crc kubenswrapper[4826]: I0129 06:46:33.371053 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rsfgw"] Jan 29 06:46:33 crc kubenswrapper[4826]: I0129 06:46:33.374225 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rsfgw"] Jan 29 06:46:33 crc kubenswrapper[4826]: I0129 06:46:33.394507 4826 scope.go:117] "RemoveContainer" containerID="c4ece3877797c7a29cb8ce3a8da7e6a22d1f80a63d7e8cd6cba2cb07223f2a46" Jan 29 06:46:34 crc kubenswrapper[4826]: I0129 06:46:34.482149 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 06:46:34 crc kubenswrapper[4826]: I0129 06:46:34.816601 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="737f9828-15a3-401e-aa29-e9467773637f" path="/var/lib/kubelet/pods/737f9828-15a3-401e-aa29-e9467773637f/volumes" Jan 29 06:46:35 crc kubenswrapper[4826]: I0129 06:46:35.657061 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 06:46:35 crc kubenswrapper[4826]: I0129 06:46:35.657162 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 06:46:36 crc kubenswrapper[4826]: I0129 06:46:36.111378 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-67f47f66d7-p2plj"] Jan 29 06:46:36 crc kubenswrapper[4826]: I0129 06:46:36.111634 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-67f47f66d7-p2plj" podUID="86f3ea64-461e-47d1-bf4e-ec9018c2e384" containerName="controller-manager" containerID="cri-o://ebd23a58ec635eb08189a4eca7f19ad4eb9efcd48b45b0d2e2102afaf70ab7e0" gracePeriod=30 Jan 29 06:46:36 crc kubenswrapper[4826]: I0129 06:46:36.221406 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84c68cb56-78tzw"] Jan 29 06:46:36 crc kubenswrapper[4826]: I0129 06:46:36.222238 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-84c68cb56-78tzw" podUID="61692392-7bde-4832-ba20-3729bd0473c0" containerName="route-controller-manager" containerID="cri-o://5ffca51764ef2f509e0147f7704122f8e4225ef2291996573165bd9a03590518" gracePeriod=30 Jan 29 06:46:36 crc kubenswrapper[4826]: I0129 06:46:36.369505 4826 generic.go:334] "Generic (PLEG): container finished" podID="86f3ea64-461e-47d1-bf4e-ec9018c2e384" containerID="ebd23a58ec635eb08189a4eca7f19ad4eb9efcd48b45b0d2e2102afaf70ab7e0" exitCode=0 Jan 29 06:46:36 crc kubenswrapper[4826]: I0129 
06:46:36.369569 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67f47f66d7-p2plj" event={"ID":"86f3ea64-461e-47d1-bf4e-ec9018c2e384","Type":"ContainerDied","Data":"ebd23a58ec635eb08189a4eca7f19ad4eb9efcd48b45b0d2e2102afaf70ab7e0"} Jan 29 06:46:36 crc kubenswrapper[4826]: I0129 06:46:36.371489 4826 generic.go:334] "Generic (PLEG): container finished" podID="61692392-7bde-4832-ba20-3729bd0473c0" containerID="5ffca51764ef2f509e0147f7704122f8e4225ef2291996573165bd9a03590518" exitCode=0 Jan 29 06:46:36 crc kubenswrapper[4826]: I0129 06:46:36.371509 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84c68cb56-78tzw" event={"ID":"61692392-7bde-4832-ba20-3729bd0473c0","Type":"ContainerDied","Data":"5ffca51764ef2f509e0147f7704122f8e4225ef2291996573165bd9a03590518"} Jan 29 06:46:36 crc kubenswrapper[4826]: I0129 06:46:36.766154 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84c68cb56-78tzw" Jan 29 06:46:36 crc kubenswrapper[4826]: I0129 06:46:36.821721 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-67f47f66d7-p2plj" Jan 29 06:46:36 crc kubenswrapper[4826]: I0129 06:46:36.846715 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61692392-7bde-4832-ba20-3729bd0473c0-config\") pod \"61692392-7bde-4832-ba20-3729bd0473c0\" (UID: \"61692392-7bde-4832-ba20-3729bd0473c0\") " Jan 29 06:46:36 crc kubenswrapper[4826]: I0129 06:46:36.846778 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61692392-7bde-4832-ba20-3729bd0473c0-serving-cert\") pod \"61692392-7bde-4832-ba20-3729bd0473c0\" (UID: \"61692392-7bde-4832-ba20-3729bd0473c0\") " Jan 29 06:46:36 crc kubenswrapper[4826]: I0129 06:46:36.846861 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61692392-7bde-4832-ba20-3729bd0473c0-client-ca\") pod \"61692392-7bde-4832-ba20-3729bd0473c0\" (UID: \"61692392-7bde-4832-ba20-3729bd0473c0\") " Jan 29 06:46:36 crc kubenswrapper[4826]: I0129 06:46:36.846888 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhgb6\" (UniqueName: \"kubernetes.io/projected/61692392-7bde-4832-ba20-3729bd0473c0-kube-api-access-rhgb6\") pod \"61692392-7bde-4832-ba20-3729bd0473c0\" (UID: \"61692392-7bde-4832-ba20-3729bd0473c0\") " Jan 29 06:46:36 crc kubenswrapper[4826]: I0129 06:46:36.847470 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61692392-7bde-4832-ba20-3729bd0473c0-config" (OuterVolumeSpecName: "config") pod "61692392-7bde-4832-ba20-3729bd0473c0" (UID: "61692392-7bde-4832-ba20-3729bd0473c0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:46:36 crc kubenswrapper[4826]: I0129 06:46:36.847792 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61692392-7bde-4832-ba20-3729bd0473c0-client-ca" (OuterVolumeSpecName: "client-ca") pod "61692392-7bde-4832-ba20-3729bd0473c0" (UID: "61692392-7bde-4832-ba20-3729bd0473c0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:46:36 crc kubenswrapper[4826]: I0129 06:46:36.854209 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61692392-7bde-4832-ba20-3729bd0473c0-kube-api-access-rhgb6" (OuterVolumeSpecName: "kube-api-access-rhgb6") pod "61692392-7bde-4832-ba20-3729bd0473c0" (UID: "61692392-7bde-4832-ba20-3729bd0473c0"). InnerVolumeSpecName "kube-api-access-rhgb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:46:36 crc kubenswrapper[4826]: I0129 06:46:36.857445 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61692392-7bde-4832-ba20-3729bd0473c0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "61692392-7bde-4832-ba20-3729bd0473c0" (UID: "61692392-7bde-4832-ba20-3729bd0473c0"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:46:36 crc kubenswrapper[4826]: I0129 06:46:36.947811 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86f3ea64-461e-47d1-bf4e-ec9018c2e384-client-ca\") pod \"86f3ea64-461e-47d1-bf4e-ec9018c2e384\" (UID: \"86f3ea64-461e-47d1-bf4e-ec9018c2e384\") " Jan 29 06:46:36 crc kubenswrapper[4826]: I0129 06:46:36.947855 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86f3ea64-461e-47d1-bf4e-ec9018c2e384-config\") pod \"86f3ea64-461e-47d1-bf4e-ec9018c2e384\" (UID: \"86f3ea64-461e-47d1-bf4e-ec9018c2e384\") " Jan 29 06:46:36 crc kubenswrapper[4826]: I0129 06:46:36.947892 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/86f3ea64-461e-47d1-bf4e-ec9018c2e384-proxy-ca-bundles\") pod \"86f3ea64-461e-47d1-bf4e-ec9018c2e384\" (UID: \"86f3ea64-461e-47d1-bf4e-ec9018c2e384\") " Jan 29 06:46:36 crc kubenswrapper[4826]: I0129 06:46:36.947976 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86f3ea64-461e-47d1-bf4e-ec9018c2e384-serving-cert\") pod \"86f3ea64-461e-47d1-bf4e-ec9018c2e384\" (UID: \"86f3ea64-461e-47d1-bf4e-ec9018c2e384\") " Jan 29 06:46:36 crc kubenswrapper[4826]: I0129 06:46:36.948002 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhbnp\" (UniqueName: \"kubernetes.io/projected/86f3ea64-461e-47d1-bf4e-ec9018c2e384-kube-api-access-vhbnp\") pod \"86f3ea64-461e-47d1-bf4e-ec9018c2e384\" (UID: \"86f3ea64-461e-47d1-bf4e-ec9018c2e384\") " Jan 29 06:46:36 crc kubenswrapper[4826]: I0129 06:46:36.948203 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/61692392-7bde-4832-ba20-3729bd0473c0-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:36 crc kubenswrapper[4826]: I0129 06:46:36.948219 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61692392-7bde-4832-ba20-3729bd0473c0-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:36 crc kubenswrapper[4826]: I0129 06:46:36.948227 4826 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61692392-7bde-4832-ba20-3729bd0473c0-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:36 crc kubenswrapper[4826]: I0129 06:46:36.948237 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhgb6\" (UniqueName: \"kubernetes.io/projected/61692392-7bde-4832-ba20-3729bd0473c0-kube-api-access-rhgb6\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:36 crc kubenswrapper[4826]: I0129 06:46:36.949319 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86f3ea64-461e-47d1-bf4e-ec9018c2e384-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "86f3ea64-461e-47d1-bf4e-ec9018c2e384" (UID: "86f3ea64-461e-47d1-bf4e-ec9018c2e384"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:46:36 crc kubenswrapper[4826]: I0129 06:46:36.949458 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86f3ea64-461e-47d1-bf4e-ec9018c2e384-config" (OuterVolumeSpecName: "config") pod "86f3ea64-461e-47d1-bf4e-ec9018c2e384" (UID: "86f3ea64-461e-47d1-bf4e-ec9018c2e384"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:46:36 crc kubenswrapper[4826]: I0129 06:46:36.949817 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86f3ea64-461e-47d1-bf4e-ec9018c2e384-client-ca" (OuterVolumeSpecName: "client-ca") pod "86f3ea64-461e-47d1-bf4e-ec9018c2e384" (UID: "86f3ea64-461e-47d1-bf4e-ec9018c2e384"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:46:36 crc kubenswrapper[4826]: I0129 06:46:36.952082 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86f3ea64-461e-47d1-bf4e-ec9018c2e384-kube-api-access-vhbnp" (OuterVolumeSpecName: "kube-api-access-vhbnp") pod "86f3ea64-461e-47d1-bf4e-ec9018c2e384" (UID: "86f3ea64-461e-47d1-bf4e-ec9018c2e384"). InnerVolumeSpecName "kube-api-access-vhbnp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:46:36 crc kubenswrapper[4826]: I0129 06:46:36.953125 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86f3ea64-461e-47d1-bf4e-ec9018c2e384-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "86f3ea64-461e-47d1-bf4e-ec9018c2e384" (UID: "86f3ea64-461e-47d1-bf4e-ec9018c2e384"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.049584 4826 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86f3ea64-461e-47d1-bf4e-ec9018c2e384-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.049862 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhbnp\" (UniqueName: \"kubernetes.io/projected/86f3ea64-461e-47d1-bf4e-ec9018c2e384-kube-api-access-vhbnp\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.049952 4826 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86f3ea64-461e-47d1-bf4e-ec9018c2e384-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.050029 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86f3ea64-461e-47d1-bf4e-ec9018c2e384-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.050152 4826 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/86f3ea64-461e-47d1-bf4e-ec9018c2e384-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.381630 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67f47f66d7-p2plj" event={"ID":"86f3ea64-461e-47d1-bf4e-ec9018c2e384","Type":"ContainerDied","Data":"b2f8f20a7e951dd1af55248bcf4b8f976fc099fe1ce5dafbbe121816548f0f76"} Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.381680 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-67f47f66d7-p2plj" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.381730 4826 scope.go:117] "RemoveContainer" containerID="ebd23a58ec635eb08189a4eca7f19ad4eb9efcd48b45b0d2e2102afaf70ab7e0" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.389145 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84c68cb56-78tzw" event={"ID":"61692392-7bde-4832-ba20-3729bd0473c0","Type":"ContainerDied","Data":"f087438f1bbc888c3f1263a7aa7e8f4317add7fc9a8b2593c8f24cc6b449834e"} Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.389293 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84c68cb56-78tzw" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.420008 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84c68cb56-78tzw"] Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.420321 4826 scope.go:117] "RemoveContainer" containerID="5ffca51764ef2f509e0147f7704122f8e4225ef2291996573165bd9a03590518" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.425991 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84c68cb56-78tzw"] Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.439939 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-67f47f66d7-p2plj"] Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.453987 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-67f47f66d7-p2plj"] Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.460261 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7546777c46-kr4ll"] 
Jan 29 06:46:37 crc kubenswrapper[4826]: E0129 06:46:37.460647 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="832cf5b0-2b5a-4975-b147-8a1f08f08456" containerName="extract-utilities" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.460681 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="832cf5b0-2b5a-4975-b147-8a1f08f08456" containerName="extract-utilities" Jan 29 06:46:37 crc kubenswrapper[4826]: E0129 06:46:37.460702 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="832cf5b0-2b5a-4975-b147-8a1f08f08456" containerName="registry-server" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.460715 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="832cf5b0-2b5a-4975-b147-8a1f08f08456" containerName="registry-server" Jan 29 06:46:37 crc kubenswrapper[4826]: E0129 06:46:37.460741 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="737f9828-15a3-401e-aa29-e9467773637f" containerName="registry-server" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.460753 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="737f9828-15a3-401e-aa29-e9467773637f" containerName="registry-server" Jan 29 06:46:37 crc kubenswrapper[4826]: E0129 06:46:37.460767 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86f3ea64-461e-47d1-bf4e-ec9018c2e384" containerName="controller-manager" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.460780 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="86f3ea64-461e-47d1-bf4e-ec9018c2e384" containerName="controller-manager" Jan 29 06:46:37 crc kubenswrapper[4826]: E0129 06:46:37.460800 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="832cf5b0-2b5a-4975-b147-8a1f08f08456" containerName="extract-content" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.460814 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="832cf5b0-2b5a-4975-b147-8a1f08f08456" containerName="extract-content" Jan 
29 06:46:37 crc kubenswrapper[4826]: E0129 06:46:37.460830 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61692392-7bde-4832-ba20-3729bd0473c0" containerName="route-controller-manager" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.460843 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="61692392-7bde-4832-ba20-3729bd0473c0" containerName="route-controller-manager" Jan 29 06:46:37 crc kubenswrapper[4826]: E0129 06:46:37.460864 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87b64822-a98d-4129-946c-b073cffe4f6c" containerName="extract-utilities" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.460876 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="87b64822-a98d-4129-946c-b073cffe4f6c" containerName="extract-utilities" Jan 29 06:46:37 crc kubenswrapper[4826]: E0129 06:46:37.460899 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87b64822-a98d-4129-946c-b073cffe4f6c" containerName="registry-server" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.460912 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="87b64822-a98d-4129-946c-b073cffe4f6c" containerName="registry-server" Jan 29 06:46:37 crc kubenswrapper[4826]: E0129 06:46:37.460927 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="737f9828-15a3-401e-aa29-e9467773637f" containerName="extract-content" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.460939 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="737f9828-15a3-401e-aa29-e9467773637f" containerName="extract-content" Jan 29 06:46:37 crc kubenswrapper[4826]: E0129 06:46:37.460956 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30bc5222-e3c7-4cad-8e68-d39368e9d00d" containerName="oauth-openshift" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.460969 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="30bc5222-e3c7-4cad-8e68-d39368e9d00d" 
containerName="oauth-openshift" Jan 29 06:46:37 crc kubenswrapper[4826]: E0129 06:46:37.460987 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="737f9828-15a3-401e-aa29-e9467773637f" containerName="extract-utilities" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.460999 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="737f9828-15a3-401e-aa29-e9467773637f" containerName="extract-utilities" Jan 29 06:46:37 crc kubenswrapper[4826]: E0129 06:46:37.461039 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87b64822-a98d-4129-946c-b073cffe4f6c" containerName="extract-content" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.461051 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="87b64822-a98d-4129-946c-b073cffe4f6c" containerName="extract-content" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.461253 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="87b64822-a98d-4129-946c-b073cffe4f6c" containerName="registry-server" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.461277 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="61692392-7bde-4832-ba20-3729bd0473c0" containerName="route-controller-manager" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.461319 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="30bc5222-e3c7-4cad-8e68-d39368e9d00d" containerName="oauth-openshift" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.461345 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="832cf5b0-2b5a-4975-b147-8a1f08f08456" containerName="registry-server" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.461363 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="86f3ea64-461e-47d1-bf4e-ec9018c2e384" containerName="controller-manager" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.461380 4826 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="737f9828-15a3-401e-aa29-e9467773637f" containerName="registry-server" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.462008 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7546777c46-kr4ll" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.466259 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7684b7ccb7-l8bvv"] Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.467200 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.467509 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.467963 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.468220 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.469082 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.469801 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.476860 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.482878 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7684b7ccb7-l8bvv" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.487573 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7684b7ccb7-l8bvv"] Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.489084 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.489403 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.489668 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.489748 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.489976 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.490068 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.495877 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7546777c46-kr4ll"] Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.556344 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96kk7\" (UniqueName: \"kubernetes.io/projected/c3bda865-4d5e-41ee-8500-994ae72d221b-kube-api-access-96kk7\") pod \"controller-manager-7546777c46-kr4ll\" (UID: 
\"c3bda865-4d5e-41ee-8500-994ae72d221b\") " pod="openshift-controller-manager/controller-manager-7546777c46-kr4ll" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.556398 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3bda865-4d5e-41ee-8500-994ae72d221b-config\") pod \"controller-manager-7546777c46-kr4ll\" (UID: \"c3bda865-4d5e-41ee-8500-994ae72d221b\") " pod="openshift-controller-manager/controller-manager-7546777c46-kr4ll" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.556432 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95dlp\" (UniqueName: \"kubernetes.io/projected/645b4d85-a3c2-42d9-b11c-80017f2f01ab-kube-api-access-95dlp\") pod \"route-controller-manager-7684b7ccb7-l8bvv\" (UID: \"645b4d85-a3c2-42d9-b11c-80017f2f01ab\") " pod="openshift-route-controller-manager/route-controller-manager-7684b7ccb7-l8bvv" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.556489 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c3bda865-4d5e-41ee-8500-994ae72d221b-proxy-ca-bundles\") pod \"controller-manager-7546777c46-kr4ll\" (UID: \"c3bda865-4d5e-41ee-8500-994ae72d221b\") " pod="openshift-controller-manager/controller-manager-7546777c46-kr4ll" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.556528 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/645b4d85-a3c2-42d9-b11c-80017f2f01ab-config\") pod \"route-controller-manager-7684b7ccb7-l8bvv\" (UID: \"645b4d85-a3c2-42d9-b11c-80017f2f01ab\") " pod="openshift-route-controller-manager/route-controller-manager-7684b7ccb7-l8bvv" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.556550 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3bda865-4d5e-41ee-8500-994ae72d221b-serving-cert\") pod \"controller-manager-7546777c46-kr4ll\" (UID: \"c3bda865-4d5e-41ee-8500-994ae72d221b\") " pod="openshift-controller-manager/controller-manager-7546777c46-kr4ll" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.556574 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/645b4d85-a3c2-42d9-b11c-80017f2f01ab-serving-cert\") pod \"route-controller-manager-7684b7ccb7-l8bvv\" (UID: \"645b4d85-a3c2-42d9-b11c-80017f2f01ab\") " pod="openshift-route-controller-manager/route-controller-manager-7684b7ccb7-l8bvv" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.556613 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/645b4d85-a3c2-42d9-b11c-80017f2f01ab-client-ca\") pod \"route-controller-manager-7684b7ccb7-l8bvv\" (UID: \"645b4d85-a3c2-42d9-b11c-80017f2f01ab\") " pod="openshift-route-controller-manager/route-controller-manager-7684b7ccb7-l8bvv" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.556634 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c3bda865-4d5e-41ee-8500-994ae72d221b-client-ca\") pod \"controller-manager-7546777c46-kr4ll\" (UID: \"c3bda865-4d5e-41ee-8500-994ae72d221b\") " pod="openshift-controller-manager/controller-manager-7546777c46-kr4ll" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.658024 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3bda865-4d5e-41ee-8500-994ae72d221b-config\") pod \"controller-manager-7546777c46-kr4ll\" (UID: 
\"c3bda865-4d5e-41ee-8500-994ae72d221b\") " pod="openshift-controller-manager/controller-manager-7546777c46-kr4ll" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.658086 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95dlp\" (UniqueName: \"kubernetes.io/projected/645b4d85-a3c2-42d9-b11c-80017f2f01ab-kube-api-access-95dlp\") pod \"route-controller-manager-7684b7ccb7-l8bvv\" (UID: \"645b4d85-a3c2-42d9-b11c-80017f2f01ab\") " pod="openshift-route-controller-manager/route-controller-manager-7684b7ccb7-l8bvv" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.658150 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c3bda865-4d5e-41ee-8500-994ae72d221b-proxy-ca-bundles\") pod \"controller-manager-7546777c46-kr4ll\" (UID: \"c3bda865-4d5e-41ee-8500-994ae72d221b\") " pod="openshift-controller-manager/controller-manager-7546777c46-kr4ll" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.658189 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/645b4d85-a3c2-42d9-b11c-80017f2f01ab-config\") pod \"route-controller-manager-7684b7ccb7-l8bvv\" (UID: \"645b4d85-a3c2-42d9-b11c-80017f2f01ab\") " pod="openshift-route-controller-manager/route-controller-manager-7684b7ccb7-l8bvv" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.658213 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3bda865-4d5e-41ee-8500-994ae72d221b-serving-cert\") pod \"controller-manager-7546777c46-kr4ll\" (UID: \"c3bda865-4d5e-41ee-8500-994ae72d221b\") " pod="openshift-controller-manager/controller-manager-7546777c46-kr4ll" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.658237 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/645b4d85-a3c2-42d9-b11c-80017f2f01ab-serving-cert\") pod \"route-controller-manager-7684b7ccb7-l8bvv\" (UID: \"645b4d85-a3c2-42d9-b11c-80017f2f01ab\") " pod="openshift-route-controller-manager/route-controller-manager-7684b7ccb7-l8bvv" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.658276 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/645b4d85-a3c2-42d9-b11c-80017f2f01ab-client-ca\") pod \"route-controller-manager-7684b7ccb7-l8bvv\" (UID: \"645b4d85-a3c2-42d9-b11c-80017f2f01ab\") " pod="openshift-route-controller-manager/route-controller-manager-7684b7ccb7-l8bvv" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.658344 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c3bda865-4d5e-41ee-8500-994ae72d221b-client-ca\") pod \"controller-manager-7546777c46-kr4ll\" (UID: \"c3bda865-4d5e-41ee-8500-994ae72d221b\") " pod="openshift-controller-manager/controller-manager-7546777c46-kr4ll" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.658382 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96kk7\" (UniqueName: \"kubernetes.io/projected/c3bda865-4d5e-41ee-8500-994ae72d221b-kube-api-access-96kk7\") pod \"controller-manager-7546777c46-kr4ll\" (UID: \"c3bda865-4d5e-41ee-8500-994ae72d221b\") " pod="openshift-controller-manager/controller-manager-7546777c46-kr4ll" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.660173 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3bda865-4d5e-41ee-8500-994ae72d221b-config\") pod \"controller-manager-7546777c46-kr4ll\" (UID: \"c3bda865-4d5e-41ee-8500-994ae72d221b\") " pod="openshift-controller-manager/controller-manager-7546777c46-kr4ll" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 
06:46:37.661819 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c3bda865-4d5e-41ee-8500-994ae72d221b-proxy-ca-bundles\") pod \"controller-manager-7546777c46-kr4ll\" (UID: \"c3bda865-4d5e-41ee-8500-994ae72d221b\") " pod="openshift-controller-manager/controller-manager-7546777c46-kr4ll" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.663241 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/645b4d85-a3c2-42d9-b11c-80017f2f01ab-client-ca\") pod \"route-controller-manager-7684b7ccb7-l8bvv\" (UID: \"645b4d85-a3c2-42d9-b11c-80017f2f01ab\") " pod="openshift-route-controller-manager/route-controller-manager-7684b7ccb7-l8bvv" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.663395 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c3bda865-4d5e-41ee-8500-994ae72d221b-client-ca\") pod \"controller-manager-7546777c46-kr4ll\" (UID: \"c3bda865-4d5e-41ee-8500-994ae72d221b\") " pod="openshift-controller-manager/controller-manager-7546777c46-kr4ll" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.663661 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/645b4d85-a3c2-42d9-b11c-80017f2f01ab-config\") pod \"route-controller-manager-7684b7ccb7-l8bvv\" (UID: \"645b4d85-a3c2-42d9-b11c-80017f2f01ab\") " pod="openshift-route-controller-manager/route-controller-manager-7684b7ccb7-l8bvv" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.665110 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/645b4d85-a3c2-42d9-b11c-80017f2f01ab-serving-cert\") pod \"route-controller-manager-7684b7ccb7-l8bvv\" (UID: \"645b4d85-a3c2-42d9-b11c-80017f2f01ab\") " 
pod="openshift-route-controller-manager/route-controller-manager-7684b7ccb7-l8bvv" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.665902 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3bda865-4d5e-41ee-8500-994ae72d221b-serving-cert\") pod \"controller-manager-7546777c46-kr4ll\" (UID: \"c3bda865-4d5e-41ee-8500-994ae72d221b\") " pod="openshift-controller-manager/controller-manager-7546777c46-kr4ll" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.683670 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95dlp\" (UniqueName: \"kubernetes.io/projected/645b4d85-a3c2-42d9-b11c-80017f2f01ab-kube-api-access-95dlp\") pod \"route-controller-manager-7684b7ccb7-l8bvv\" (UID: \"645b4d85-a3c2-42d9-b11c-80017f2f01ab\") " pod="openshift-route-controller-manager/route-controller-manager-7684b7ccb7-l8bvv" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.683875 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96kk7\" (UniqueName: \"kubernetes.io/projected/c3bda865-4d5e-41ee-8500-994ae72d221b-kube-api-access-96kk7\") pod \"controller-manager-7546777c46-kr4ll\" (UID: \"c3bda865-4d5e-41ee-8500-994ae72d221b\") " pod="openshift-controller-manager/controller-manager-7546777c46-kr4ll" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.789639 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7546777c46-kr4ll" Jan 29 06:46:37 crc kubenswrapper[4826]: I0129 06:46:37.824782 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7684b7ccb7-l8bvv" Jan 29 06:46:38 crc kubenswrapper[4826]: I0129 06:46:38.171461 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7684b7ccb7-l8bvv"] Jan 29 06:46:38 crc kubenswrapper[4826]: W0129 06:46:38.178972 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod645b4d85_a3c2_42d9_b11c_80017f2f01ab.slice/crio-bf7882819b015e6ce3c097042a510c46ff164e536c003e28a0c18f6815885d47 WatchSource:0}: Error finding container bf7882819b015e6ce3c097042a510c46ff164e536c003e28a0c18f6815885d47: Status 404 returned error can't find the container with id bf7882819b015e6ce3c097042a510c46ff164e536c003e28a0c18f6815885d47 Jan 29 06:46:38 crc kubenswrapper[4826]: I0129 06:46:38.308501 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7546777c46-kr4ll"] Jan 29 06:46:38 crc kubenswrapper[4826]: W0129 06:46:38.315969 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3bda865_4d5e_41ee_8500_994ae72d221b.slice/crio-73280dbb20b83805876627a8dc9ba788bf52cbf26f8eaf40ffc66cdf7514b8d4 WatchSource:0}: Error finding container 73280dbb20b83805876627a8dc9ba788bf52cbf26f8eaf40ffc66cdf7514b8d4: Status 404 returned error can't find the container with id 73280dbb20b83805876627a8dc9ba788bf52cbf26f8eaf40ffc66cdf7514b8d4 Jan 29 06:46:38 crc kubenswrapper[4826]: I0129 06:46:38.396658 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7684b7ccb7-l8bvv" event={"ID":"645b4d85-a3c2-42d9-b11c-80017f2f01ab","Type":"ContainerStarted","Data":"8874d9f9e09957315f6a96be9abeeb96a8dc853f07925f0ba1265c8b39d82643"} Jan 29 06:46:38 crc kubenswrapper[4826]: I0129 06:46:38.397084 4826 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7684b7ccb7-l8bvv" Jan 29 06:46:38 crc kubenswrapper[4826]: I0129 06:46:38.397102 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7684b7ccb7-l8bvv" event={"ID":"645b4d85-a3c2-42d9-b11c-80017f2f01ab","Type":"ContainerStarted","Data":"bf7882819b015e6ce3c097042a510c46ff164e536c003e28a0c18f6815885d47"} Jan 29 06:46:38 crc kubenswrapper[4826]: I0129 06:46:38.402104 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7546777c46-kr4ll" event={"ID":"c3bda865-4d5e-41ee-8500-994ae72d221b","Type":"ContainerStarted","Data":"73280dbb20b83805876627a8dc9ba788bf52cbf26f8eaf40ffc66cdf7514b8d4"} Jan 29 06:46:38 crc kubenswrapper[4826]: I0129 06:46:38.426420 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7684b7ccb7-l8bvv" podStartSLOduration=2.426398107 podStartE2EDuration="2.426398107s" podCreationTimestamp="2026-01-29 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:46:38.415195111 +0000 UTC m=+182.276988180" watchObservedRunningTime="2026-01-29 06:46:38.426398107 +0000 UTC m=+182.288191176" Jan 29 06:46:38 crc kubenswrapper[4826]: I0129 06:46:38.601791 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7684b7ccb7-l8bvv" Jan 29 06:46:38 crc kubenswrapper[4826]: I0129 06:46:38.814262 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61692392-7bde-4832-ba20-3729bd0473c0" path="/var/lib/kubelet/pods/61692392-7bde-4832-ba20-3729bd0473c0/volumes" Jan 29 06:46:38 crc kubenswrapper[4826]: I0129 06:46:38.814793 4826 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86f3ea64-461e-47d1-bf4e-ec9018c2e384" path="/var/lib/kubelet/pods/86f3ea64-461e-47d1-bf4e-ec9018c2e384/volumes" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.415055 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7546777c46-kr4ll" event={"ID":"c3bda865-4d5e-41ee-8500-994ae72d221b","Type":"ContainerStarted","Data":"1895ef7049d0979d162cf46836fff1faa5d883982598a572bcb6846f3a5f5515"} Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.450407 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7546777c46-kr4ll" podStartSLOduration=3.450369714 podStartE2EDuration="3.450369714s" podCreationTimestamp="2026-01-29 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:46:39.450049067 +0000 UTC m=+183.311842176" watchObservedRunningTime="2026-01-29 06:46:39.450369714 +0000 UTC m=+183.312162853" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.462923 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-77c8c5f65c-8c78x"] Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.463845 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.467969 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.468001 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.468387 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.468629 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.468666 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.468723 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.469068 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.469192 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.469288 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.469359 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 29 06:46:39 crc 
kubenswrapper[4826]: I0129 06:46:39.471397 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.477410 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.488283 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8630064c-87da-46c4-9dd3-e146d84f2b65-v4-0-config-system-session\") pod \"oauth-openshift-77c8c5f65c-8c78x\" (UID: \"8630064c-87da-46c4-9dd3-e146d84f2b65\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.488612 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8630064c-87da-46c4-9dd3-e146d84f2b65-v4-0-config-system-cliconfig\") pod \"oauth-openshift-77c8c5f65c-8c78x\" (UID: \"8630064c-87da-46c4-9dd3-e146d84f2b65\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.488808 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8630064c-87da-46c4-9dd3-e146d84f2b65-v4-0-config-system-serving-cert\") pod \"oauth-openshift-77c8c5f65c-8c78x\" (UID: \"8630064c-87da-46c4-9dd3-e146d84f2b65\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.489002 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/8630064c-87da-46c4-9dd3-e146d84f2b65-v4-0-config-system-service-ca\") pod \"oauth-openshift-77c8c5f65c-8c78x\" (UID: \"8630064c-87da-46c4-9dd3-e146d84f2b65\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.489170 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8630064c-87da-46c4-9dd3-e146d84f2b65-audit-policies\") pod \"oauth-openshift-77c8c5f65c-8c78x\" (UID: \"8630064c-87da-46c4-9dd3-e146d84f2b65\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.489335 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8630064c-87da-46c4-9dd3-e146d84f2b65-v4-0-config-user-template-login\") pod \"oauth-openshift-77c8c5f65c-8c78x\" (UID: \"8630064c-87da-46c4-9dd3-e146d84f2b65\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.489521 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8630064c-87da-46c4-9dd3-e146d84f2b65-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-77c8c5f65c-8c78x\" (UID: \"8630064c-87da-46c4-9dd3-e146d84f2b65\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.489680 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8630064c-87da-46c4-9dd3-e146d84f2b65-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-77c8c5f65c-8c78x\" (UID: 
\"8630064c-87da-46c4-9dd3-e146d84f2b65\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.489239 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-77c8c5f65c-8c78x"] Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.489859 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8630064c-87da-46c4-9dd3-e146d84f2b65-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-77c8c5f65c-8c78x\" (UID: \"8630064c-87da-46c4-9dd3-e146d84f2b65\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.490447 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8630064c-87da-46c4-9dd3-e146d84f2b65-audit-dir\") pod \"oauth-openshift-77c8c5f65c-8c78x\" (UID: \"8630064c-87da-46c4-9dd3-e146d84f2b65\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.490555 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvnm2\" (UniqueName: \"kubernetes.io/projected/8630064c-87da-46c4-9dd3-e146d84f2b65-kube-api-access-bvnm2\") pod \"oauth-openshift-77c8c5f65c-8c78x\" (UID: \"8630064c-87da-46c4-9dd3-e146d84f2b65\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.490592 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8630064c-87da-46c4-9dd3-e146d84f2b65-v4-0-config-user-template-provider-selection\") pod 
\"oauth-openshift-77c8c5f65c-8c78x\" (UID: \"8630064c-87da-46c4-9dd3-e146d84f2b65\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.490623 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8630064c-87da-46c4-9dd3-e146d84f2b65-v4-0-config-system-router-certs\") pod \"oauth-openshift-77c8c5f65c-8c78x\" (UID: \"8630064c-87da-46c4-9dd3-e146d84f2b65\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.490646 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8630064c-87da-46c4-9dd3-e146d84f2b65-v4-0-config-user-template-error\") pod \"oauth-openshift-77c8c5f65c-8c78x\" (UID: \"8630064c-87da-46c4-9dd3-e146d84f2b65\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.495636 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.495730 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.500637 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.592194 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8630064c-87da-46c4-9dd3-e146d84f2b65-v4-0-config-system-ocp-branding-template\") pod 
\"oauth-openshift-77c8c5f65c-8c78x\" (UID: \"8630064c-87da-46c4-9dd3-e146d84f2b65\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.592270 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8630064c-87da-46c4-9dd3-e146d84f2b65-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-77c8c5f65c-8c78x\" (UID: \"8630064c-87da-46c4-9dd3-e146d84f2b65\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.592376 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8630064c-87da-46c4-9dd3-e146d84f2b65-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-77c8c5f65c-8c78x\" (UID: \"8630064c-87da-46c4-9dd3-e146d84f2b65\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.592425 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8630064c-87da-46c4-9dd3-e146d84f2b65-audit-dir\") pod \"oauth-openshift-77c8c5f65c-8c78x\" (UID: \"8630064c-87da-46c4-9dd3-e146d84f2b65\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.592473 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvnm2\" (UniqueName: \"kubernetes.io/projected/8630064c-87da-46c4-9dd3-e146d84f2b65-kube-api-access-bvnm2\") pod \"oauth-openshift-77c8c5f65c-8c78x\" (UID: \"8630064c-87da-46c4-9dd3-e146d84f2b65\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.592519 4826 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8630064c-87da-46c4-9dd3-e146d84f2b65-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-77c8c5f65c-8c78x\" (UID: \"8630064c-87da-46c4-9dd3-e146d84f2b65\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.592555 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8630064c-87da-46c4-9dd3-e146d84f2b65-v4-0-config-system-router-certs\") pod \"oauth-openshift-77c8c5f65c-8c78x\" (UID: \"8630064c-87da-46c4-9dd3-e146d84f2b65\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.592592 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8630064c-87da-46c4-9dd3-e146d84f2b65-v4-0-config-user-template-error\") pod \"oauth-openshift-77c8c5f65c-8c78x\" (UID: \"8630064c-87da-46c4-9dd3-e146d84f2b65\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.592638 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8630064c-87da-46c4-9dd3-e146d84f2b65-v4-0-config-system-session\") pod \"oauth-openshift-77c8c5f65c-8c78x\" (UID: \"8630064c-87da-46c4-9dd3-e146d84f2b65\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.592684 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8630064c-87da-46c4-9dd3-e146d84f2b65-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-77c8c5f65c-8c78x\" (UID: \"8630064c-87da-46c4-9dd3-e146d84f2b65\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.592749 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8630064c-87da-46c4-9dd3-e146d84f2b65-v4-0-config-system-serving-cert\") pod \"oauth-openshift-77c8c5f65c-8c78x\" (UID: \"8630064c-87da-46c4-9dd3-e146d84f2b65\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.592804 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8630064c-87da-46c4-9dd3-e146d84f2b65-v4-0-config-system-service-ca\") pod \"oauth-openshift-77c8c5f65c-8c78x\" (UID: \"8630064c-87da-46c4-9dd3-e146d84f2b65\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.592843 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8630064c-87da-46c4-9dd3-e146d84f2b65-audit-policies\") pod \"oauth-openshift-77c8c5f65c-8c78x\" (UID: \"8630064c-87da-46c4-9dd3-e146d84f2b65\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.592882 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8630064c-87da-46c4-9dd3-e146d84f2b65-v4-0-config-user-template-login\") pod \"oauth-openshift-77c8c5f65c-8c78x\" (UID: \"8630064c-87da-46c4-9dd3-e146d84f2b65\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.594582 4826 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8630064c-87da-46c4-9dd3-e146d84f2b65-audit-dir\") pod \"oauth-openshift-77c8c5f65c-8c78x\" (UID: \"8630064c-87da-46c4-9dd3-e146d84f2b65\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.594753 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8630064c-87da-46c4-9dd3-e146d84f2b65-v4-0-config-system-cliconfig\") pod \"oauth-openshift-77c8c5f65c-8c78x\" (UID: \"8630064c-87da-46c4-9dd3-e146d84f2b65\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.595139 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8630064c-87da-46c4-9dd3-e146d84f2b65-v4-0-config-system-service-ca\") pod \"oauth-openshift-77c8c5f65c-8c78x\" (UID: \"8630064c-87da-46c4-9dd3-e146d84f2b65\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.595685 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8630064c-87da-46c4-9dd3-e146d84f2b65-audit-policies\") pod \"oauth-openshift-77c8c5f65c-8c78x\" (UID: \"8630064c-87da-46c4-9dd3-e146d84f2b65\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.596782 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8630064c-87da-46c4-9dd3-e146d84f2b65-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-77c8c5f65c-8c78x\" (UID: \"8630064c-87da-46c4-9dd3-e146d84f2b65\") " 
pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.601781 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8630064c-87da-46c4-9dd3-e146d84f2b65-v4-0-config-system-router-certs\") pod \"oauth-openshift-77c8c5f65c-8c78x\" (UID: \"8630064c-87da-46c4-9dd3-e146d84f2b65\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.601988 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8630064c-87da-46c4-9dd3-e146d84f2b65-v4-0-config-system-serving-cert\") pod \"oauth-openshift-77c8c5f65c-8c78x\" (UID: \"8630064c-87da-46c4-9dd3-e146d84f2b65\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.602561 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8630064c-87da-46c4-9dd3-e146d84f2b65-v4-0-config-system-session\") pod \"oauth-openshift-77c8c5f65c-8c78x\" (UID: \"8630064c-87da-46c4-9dd3-e146d84f2b65\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.602864 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8630064c-87da-46c4-9dd3-e146d84f2b65-v4-0-config-user-template-error\") pod \"oauth-openshift-77c8c5f65c-8c78x\" (UID: \"8630064c-87da-46c4-9dd3-e146d84f2b65\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.603905 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/8630064c-87da-46c4-9dd3-e146d84f2b65-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-77c8c5f65c-8c78x\" (UID: \"8630064c-87da-46c4-9dd3-e146d84f2b65\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.604375 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8630064c-87da-46c4-9dd3-e146d84f2b65-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-77c8c5f65c-8c78x\" (UID: \"8630064c-87da-46c4-9dd3-e146d84f2b65\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.604845 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8630064c-87da-46c4-9dd3-e146d84f2b65-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-77c8c5f65c-8c78x\" (UID: \"8630064c-87da-46c4-9dd3-e146d84f2b65\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.607212 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8630064c-87da-46c4-9dd3-e146d84f2b65-v4-0-config-user-template-login\") pod \"oauth-openshift-77c8c5f65c-8c78x\" (UID: \"8630064c-87da-46c4-9dd3-e146d84f2b65\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.612085 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvnm2\" (UniqueName: \"kubernetes.io/projected/8630064c-87da-46c4-9dd3-e146d84f2b65-kube-api-access-bvnm2\") pod \"oauth-openshift-77c8c5f65c-8c78x\" (UID: \"8630064c-87da-46c4-9dd3-e146d84f2b65\") " 
pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:39 crc kubenswrapper[4826]: I0129 06:46:39.808328 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:40 crc kubenswrapper[4826]: I0129 06:46:40.124088 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-77c8c5f65c-8c78x"] Jan 29 06:46:40 crc kubenswrapper[4826]: I0129 06:46:40.430658 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" event={"ID":"8630064c-87da-46c4-9dd3-e146d84f2b65","Type":"ContainerStarted","Data":"e9a7eb93e86f769a64b12c196df26f25ecefcfb485b5b342babc9f748df5ac70"} Jan 29 06:46:40 crc kubenswrapper[4826]: I0129 06:46:40.433614 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7546777c46-kr4ll" Jan 29 06:46:40 crc kubenswrapper[4826]: I0129 06:46:40.441669 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7546777c46-kr4ll" Jan 29 06:46:41 crc kubenswrapper[4826]: I0129 06:46:41.441321 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" event={"ID":"8630064c-87da-46c4-9dd3-e146d84f2b65","Type":"ContainerStarted","Data":"e76d284e60bc4c5389c5afd12a830b56ff9d8e2c939b9acb4d2ed66a1c3f3336"} Jan 29 06:46:41 crc kubenswrapper[4826]: I0129 06:46:41.442176 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:41 crc kubenswrapper[4826]: I0129 06:46:41.451342 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" Jan 29 06:46:41 crc kubenswrapper[4826]: I0129 06:46:41.473443 4826 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-77c8c5f65c-8c78x" podStartSLOduration=35.473421827 podStartE2EDuration="35.473421827s" podCreationTimestamp="2026-01-29 06:46:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:46:41.467197499 +0000 UTC m=+185.328990608" watchObservedRunningTime="2026-01-29 06:46:41.473421827 +0000 UTC m=+185.335214926" Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.006089 4826 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.008106 4826 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.008348 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.008815 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add" gracePeriod=15 Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.008869 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de" gracePeriod=15 Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.009142 4826 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0" gracePeriod=15 Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.009084 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488" gracePeriod=15 Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.009351 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6" gracePeriod=15 Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.015971 4826 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 06:46:45 crc kubenswrapper[4826]: E0129 06:46:45.016702 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.017718 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 06:46:45 crc kubenswrapper[4826]: E0129 06:46:45.018124 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.023246 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-insecure-readyz" Jan 29 06:46:45 crc kubenswrapper[4826]: E0129 06:46:45.023913 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.025825 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 06:46:45 crc kubenswrapper[4826]: E0129 06:46:45.025978 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.027186 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 06:46:45 crc kubenswrapper[4826]: E0129 06:46:45.027428 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.027587 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 29 06:46:45 crc kubenswrapper[4826]: E0129 06:46:45.027711 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.027849 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.029078 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.029346 4826 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.030055 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.030255 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.030571 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.089123 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.089598 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.089744 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 
06:46:45.090398 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.090811 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.090930 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.091060 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.091502 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.195717 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.195788 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.195828 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.195852 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.195887 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 
06:46:45.195915 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.195932 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.195947 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.196007 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.196053 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.196078 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.196091 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.196118 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.196144 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.196170 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.196197 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.484408 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.485769 4826 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de" exitCode=0 Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.485816 4826 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0" exitCode=0 Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.485834 4826 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488" exitCode=0 Jan 29 06:46:45 crc kubenswrapper[4826]: I0129 06:46:45.485853 4826 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6" exitCode=2 Jan 29 06:46:46 crc kubenswrapper[4826]: E0129 06:46:46.631364 4826 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" Jan 29 06:46:46 crc kubenswrapper[4826]: E0129 06:46:46.633657 4826 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" Jan 29 06:46:46 crc kubenswrapper[4826]: E0129 06:46:46.634836 4826 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" Jan 29 06:46:46 crc kubenswrapper[4826]: E0129 06:46:46.635440 4826 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" Jan 29 06:46:46 crc kubenswrapper[4826]: E0129 06:46:46.636280 4826 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" Jan 29 06:46:46 crc kubenswrapper[4826]: I0129 06:46:46.636408 4826 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 29 06:46:46 crc kubenswrapper[4826]: E0129 06:46:46.637200 4826 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="200ms" Jan 29 06:46:46 crc kubenswrapper[4826]: I0129 06:46:46.814950 4826 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Jan 29 06:46:46 crc 
kubenswrapper[4826]: E0129 06:46:46.838876 4826 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="400ms" Jan 29 06:46:47 crc kubenswrapper[4826]: E0129 06:46:47.240151 4826 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="800ms" Jan 29 06:46:47 crc kubenswrapper[4826]: I0129 06:46:47.495412 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 06:46:47 crc kubenswrapper[4826]: I0129 06:46:47.498712 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:46:47 crc kubenswrapper[4826]: I0129 06:46:47.501538 4826 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Jan 29 06:46:47 crc kubenswrapper[4826]: I0129 06:46:47.505148 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 06:46:47 crc kubenswrapper[4826]: I0129 06:46:47.506231 4826 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add" exitCode=0 Jan 29 06:46:47 crc kubenswrapper[4826]: I0129 
06:46:47.506369 4826 scope.go:117] "RemoveContainer" containerID="45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de" Jan 29 06:46:47 crc kubenswrapper[4826]: I0129 06:46:47.506434 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:46:47 crc kubenswrapper[4826]: I0129 06:46:47.528679 4826 scope.go:117] "RemoveContainer" containerID="0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0" Jan 29 06:46:47 crc kubenswrapper[4826]: I0129 06:46:47.533658 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 06:46:47 crc kubenswrapper[4826]: I0129 06:46:47.533735 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 06:46:47 crc kubenswrapper[4826]: I0129 06:46:47.533789 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:46:47 crc kubenswrapper[4826]: I0129 06:46:47.533851 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 06:46:47 crc kubenswrapper[4826]: I0129 06:46:47.533919 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:46:47 crc kubenswrapper[4826]: I0129 06:46:47.533995 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:46:47 crc kubenswrapper[4826]: I0129 06:46:47.534245 4826 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:47 crc kubenswrapper[4826]: I0129 06:46:47.534275 4826 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:47 crc kubenswrapper[4826]: I0129 06:46:47.534294 4826 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:47 crc kubenswrapper[4826]: I0129 06:46:47.548864 4826 scope.go:117] "RemoveContainer" containerID="44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488" Jan 29 06:46:47 crc kubenswrapper[4826]: I0129 06:46:47.574786 4826 scope.go:117] "RemoveContainer" containerID="31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6" Jan 29 06:46:47 crc kubenswrapper[4826]: I0129 06:46:47.599922 4826 scope.go:117] "RemoveContainer" containerID="40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add" Jan 29 06:46:47 crc kubenswrapper[4826]: I0129 06:46:47.619763 4826 scope.go:117] "RemoveContainer" containerID="45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b" Jan 29 06:46:47 crc kubenswrapper[4826]: I0129 06:46:47.648392 4826 scope.go:117] "RemoveContainer" containerID="45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de" Jan 29 06:46:47 crc kubenswrapper[4826]: E0129 06:46:47.649039 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\": container with ID 
starting with 45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de not found: ID does not exist" containerID="45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de" Jan 29 06:46:47 crc kubenswrapper[4826]: I0129 06:46:47.649093 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de"} err="failed to get container status \"45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\": rpc error: code = NotFound desc = could not find container \"45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de\": container with ID starting with 45d883a4ea759e22fa21239ae2921ef04ecccfa89d497da49e3c679b327324de not found: ID does not exist" Jan 29 06:46:47 crc kubenswrapper[4826]: I0129 06:46:47.649136 4826 scope.go:117] "RemoveContainer" containerID="0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0" Jan 29 06:46:47 crc kubenswrapper[4826]: E0129 06:46:47.657705 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\": container with ID starting with 0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0 not found: ID does not exist" containerID="0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0" Jan 29 06:46:47 crc kubenswrapper[4826]: I0129 06:46:47.657798 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0"} err="failed to get container status \"0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\": rpc error: code = NotFound desc = could not find container \"0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0\": container with ID starting with 0d37a2f9924c7f61c2dc23553c6888ff3d003c1f214fc27451dc39cb2904beb0 not found: 
ID does not exist" Jan 29 06:46:47 crc kubenswrapper[4826]: I0129 06:46:47.657852 4826 scope.go:117] "RemoveContainer" containerID="44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488" Jan 29 06:46:47 crc kubenswrapper[4826]: E0129 06:46:47.658468 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\": container with ID starting with 44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488 not found: ID does not exist" containerID="44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488" Jan 29 06:46:47 crc kubenswrapper[4826]: I0129 06:46:47.658538 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488"} err="failed to get container status \"44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\": rpc error: code = NotFound desc = could not find container \"44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488\": container with ID starting with 44fcb87a9ed7e20eed3477821f91ab9e64ffa55c3d164bea6ebba4efe4559488 not found: ID does not exist" Jan 29 06:46:47 crc kubenswrapper[4826]: I0129 06:46:47.658582 4826 scope.go:117] "RemoveContainer" containerID="31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6" Jan 29 06:46:47 crc kubenswrapper[4826]: E0129 06:46:47.659211 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\": container with ID starting with 31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6 not found: ID does not exist" containerID="31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6" Jan 29 06:46:47 crc kubenswrapper[4826]: I0129 06:46:47.659235 4826 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6"} err="failed to get container status \"31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\": rpc error: code = NotFound desc = could not find container \"31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6\": container with ID starting with 31d84079dc3eb633cabeb599d5ab31b5dc97899e72354ef0df2b02d077b6f2c6 not found: ID does not exist" Jan 29 06:46:47 crc kubenswrapper[4826]: I0129 06:46:47.659255 4826 scope.go:117] "RemoveContainer" containerID="40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add" Jan 29 06:46:47 crc kubenswrapper[4826]: E0129 06:46:47.659853 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\": container with ID starting with 40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add not found: ID does not exist" containerID="40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add" Jan 29 06:46:47 crc kubenswrapper[4826]: I0129 06:46:47.659885 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add"} err="failed to get container status \"40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\": rpc error: code = NotFound desc = could not find container \"40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add\": container with ID starting with 40e8c258188b3be59563080446325a4fb12bdd0cb3fea94331bfcbfaec5d7add not found: ID does not exist" Jan 29 06:46:47 crc kubenswrapper[4826]: I0129 06:46:47.659899 4826 scope.go:117] "RemoveContainer" containerID="45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b" Jan 29 06:46:47 crc kubenswrapper[4826]: E0129 06:46:47.660195 4826 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\": container with ID starting with 45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b not found: ID does not exist" containerID="45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b" Jan 29 06:46:47 crc kubenswrapper[4826]: I0129 06:46:47.660221 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b"} err="failed to get container status \"45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\": rpc error: code = NotFound desc = could not find container \"45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b\": container with ID starting with 45d106c386b3f1952f6a2dccb3aed5ac7b18e66bd1150c9aa51d6739f75a3f3b not found: ID does not exist" Jan 29 06:46:47 crc kubenswrapper[4826]: E0129 06:46:47.831671 4826 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.173:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" volumeName="registry-storage" Jan 29 06:46:47 crc kubenswrapper[4826]: I0129 06:46:47.833420 4826 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Jan 29 06:46:48 crc kubenswrapper[4826]: E0129 06:46:48.042863 4826 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="1.6s" Jan 29 06:46:48 crc kubenswrapper[4826]: I0129 06:46:48.824076 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 29 06:46:49 crc kubenswrapper[4826]: E0129 06:46:49.644725 4826 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="3.2s" Jan 29 06:46:50 crc kubenswrapper[4826]: E0129 06:46:50.080993 4826 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.173:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 06:46:50 crc kubenswrapper[4826]: I0129 06:46:50.081914 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 06:46:50 crc kubenswrapper[4826]: E0129 06:46:50.124771 4826 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.173:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f20c53f6377f3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 06:46:50.124023795 +0000 UTC m=+193.985816904,LastTimestamp:2026-01-29 06:46:50.124023795 +0000 UTC m=+193.985816904,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 06:46:50 crc kubenswrapper[4826]: I0129 06:46:50.554729 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ff83b7823356c12661cf8c0d32bce080f96d367605c073abc70b950a384f0795"} Jan 29 06:46:50 crc kubenswrapper[4826]: I0129 06:46:50.555294 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"628c877ed751ede4de2fc8f56df6ae95e120a5bc9ff8f6c2dedc7eac10d33c42"} Jan 29 06:46:50 crc 
kubenswrapper[4826]: E0129 06:46:50.556561 4826 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.173:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 06:46:50 crc kubenswrapper[4826]: I0129 06:46:50.562860 4826 generic.go:334] "Generic (PLEG): container finished" podID="da70e922-2407-45cd-a1d2-6f3d9129de21" containerID="046a494a51071db8912277eee22adf85896be4e6f5bbcb9648dfd3c6837efd63" exitCode=0 Jan 29 06:46:50 crc kubenswrapper[4826]: I0129 06:46:50.562939 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"da70e922-2407-45cd-a1d2-6f3d9129de21","Type":"ContainerDied","Data":"046a494a51071db8912277eee22adf85896be4e6f5bbcb9648dfd3c6837efd63"} Jan 29 06:46:50 crc kubenswrapper[4826]: I0129 06:46:50.563863 4826 status_manager.go:851] "Failed to get status for pod" podUID="da70e922-2407-45cd-a1d2-6f3d9129de21" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Jan 29 06:46:52 crc kubenswrapper[4826]: I0129 06:46:52.111442 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 06:46:52 crc kubenswrapper[4826]: I0129 06:46:52.116990 4826 status_manager.go:851] "Failed to get status for pod" podUID="da70e922-2407-45cd-a1d2-6f3d9129de21" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Jan 29 06:46:52 crc kubenswrapper[4826]: I0129 06:46:52.166240 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/da70e922-2407-45cd-a1d2-6f3d9129de21-kube-api-access\") pod \"da70e922-2407-45cd-a1d2-6f3d9129de21\" (UID: \"da70e922-2407-45cd-a1d2-6f3d9129de21\") " Jan 29 06:46:52 crc kubenswrapper[4826]: I0129 06:46:52.166354 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/da70e922-2407-45cd-a1d2-6f3d9129de21-kubelet-dir\") pod \"da70e922-2407-45cd-a1d2-6f3d9129de21\" (UID: \"da70e922-2407-45cd-a1d2-6f3d9129de21\") " Jan 29 06:46:52 crc kubenswrapper[4826]: I0129 06:46:52.166403 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/da70e922-2407-45cd-a1d2-6f3d9129de21-var-lock\") pod \"da70e922-2407-45cd-a1d2-6f3d9129de21\" (UID: \"da70e922-2407-45cd-a1d2-6f3d9129de21\") " Jan 29 06:46:52 crc kubenswrapper[4826]: I0129 06:46:52.166615 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da70e922-2407-45cd-a1d2-6f3d9129de21-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "da70e922-2407-45cd-a1d2-6f3d9129de21" (UID: "da70e922-2407-45cd-a1d2-6f3d9129de21"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:46:52 crc kubenswrapper[4826]: I0129 06:46:52.166737 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da70e922-2407-45cd-a1d2-6f3d9129de21-var-lock" (OuterVolumeSpecName: "var-lock") pod "da70e922-2407-45cd-a1d2-6f3d9129de21" (UID: "da70e922-2407-45cd-a1d2-6f3d9129de21"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:46:52 crc kubenswrapper[4826]: I0129 06:46:52.166960 4826 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/da70e922-2407-45cd-a1d2-6f3d9129de21-var-lock\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:52 crc kubenswrapper[4826]: I0129 06:46:52.166984 4826 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/da70e922-2407-45cd-a1d2-6f3d9129de21-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:52 crc kubenswrapper[4826]: I0129 06:46:52.174825 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da70e922-2407-45cd-a1d2-6f3d9129de21-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "da70e922-2407-45cd-a1d2-6f3d9129de21" (UID: "da70e922-2407-45cd-a1d2-6f3d9129de21"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:46:52 crc kubenswrapper[4826]: I0129 06:46:52.268722 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/da70e922-2407-45cd-a1d2-6f3d9129de21-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 06:46:52 crc kubenswrapper[4826]: I0129 06:46:52.581694 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"da70e922-2407-45cd-a1d2-6f3d9129de21","Type":"ContainerDied","Data":"d31d53fd81fcdd67cade79b1e6a45ee0ba7abdbcc3bb2d5fa4b3484691e577c2"} Jan 29 06:46:52 crc kubenswrapper[4826]: I0129 06:46:52.582134 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d31d53fd81fcdd67cade79b1e6a45ee0ba7abdbcc3bb2d5fa4b3484691e577c2" Jan 29 06:46:52 crc kubenswrapper[4826]: I0129 06:46:52.581818 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 06:46:52 crc kubenswrapper[4826]: I0129 06:46:52.596670 4826 status_manager.go:851] "Failed to get status for pod" podUID="da70e922-2407-45cd-a1d2-6f3d9129de21" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Jan 29 06:46:52 crc kubenswrapper[4826]: E0129 06:46:52.846971 4826 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="6.4s" Jan 29 06:46:53 crc kubenswrapper[4826]: E0129 06:46:53.845092 4826 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial 
tcp 38.102.83.173:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f20c53f6377f3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 06:46:50.124023795 +0000 UTC m=+193.985816904,LastTimestamp:2026-01-29 06:46:50.124023795 +0000 UTC m=+193.985816904,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 06:46:56 crc kubenswrapper[4826]: I0129 06:46:56.821334 4826 status_manager.go:851] "Failed to get status for pod" podUID="da70e922-2407-45cd-a1d2-6f3d9129de21" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Jan 29 06:46:57 crc kubenswrapper[4826]: E0129 06:46:57.512218 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:46:57Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:46:57Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:46:57Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T06:46:57Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" Jan 29 06:46:57 crc kubenswrapper[4826]: E0129 06:46:57.512837 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" Jan 29 06:46:57 crc kubenswrapper[4826]: E0129 06:46:57.513548 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" Jan 29 06:46:57 crc kubenswrapper[4826]: E0129 06:46:57.514601 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" Jan 29 
06:46:57 crc kubenswrapper[4826]: E0129 06:46:57.514995 4826 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" Jan 29 06:46:57 crc kubenswrapper[4826]: E0129 06:46:57.515022 4826 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 06:46:57 crc kubenswrapper[4826]: I0129 06:46:57.807796 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:46:57 crc kubenswrapper[4826]: I0129 06:46:57.808851 4826 status_manager.go:851] "Failed to get status for pod" podUID="da70e922-2407-45cd-a1d2-6f3d9129de21" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Jan 29 06:46:57 crc kubenswrapper[4826]: I0129 06:46:57.834126 4826 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="66b74e7b-cd1e-4181-8ce6-eb41576c41e7" Jan 29 06:46:57 crc kubenswrapper[4826]: I0129 06:46:57.834187 4826 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="66b74e7b-cd1e-4181-8ce6-eb41576c41e7" Jan 29 06:46:57 crc kubenswrapper[4826]: E0129 06:46:57.834801 4826 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:46:57 crc kubenswrapper[4826]: I0129 06:46:57.835658 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:46:57 crc kubenswrapper[4826]: W0129 06:46:57.870936 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-758a2a0dc674b1d5bba8be0f8f8f98411ab6c548cbf8a61bc22b0028e944e5a6 WatchSource:0}: Error finding container 758a2a0dc674b1d5bba8be0f8f8f98411ab6c548cbf8a61bc22b0028e944e5a6: Status 404 returned error can't find the container with id 758a2a0dc674b1d5bba8be0f8f8f98411ab6c548cbf8a61bc22b0028e944e5a6 Jan 29 06:46:58 crc kubenswrapper[4826]: I0129 06:46:58.633724 4826 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="3087dc6849a2f2da869130d9c7367f689437904f732e58bef48b66e17ac8dfcc" exitCode=0 Jan 29 06:46:58 crc kubenswrapper[4826]: I0129 06:46:58.633852 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"3087dc6849a2f2da869130d9c7367f689437904f732e58bef48b66e17ac8dfcc"} Jan 29 06:46:58 crc kubenswrapper[4826]: I0129 06:46:58.634371 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"758a2a0dc674b1d5bba8be0f8f8f98411ab6c548cbf8a61bc22b0028e944e5a6"} Jan 29 06:46:58 crc kubenswrapper[4826]: I0129 06:46:58.634846 4826 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="66b74e7b-cd1e-4181-8ce6-eb41576c41e7" Jan 29 06:46:58 crc kubenswrapper[4826]: I0129 06:46:58.634876 4826 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="66b74e7b-cd1e-4181-8ce6-eb41576c41e7" Jan 29 06:46:58 crc kubenswrapper[4826]: E0129 06:46:58.635535 4826 mirror_client.go:138] 
"Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:46:58 crc kubenswrapper[4826]: I0129 06:46:58.635558 4826 status_manager.go:851] "Failed to get status for pod" podUID="da70e922-2407-45cd-a1d2-6f3d9129de21" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Jan 29 06:46:59 crc kubenswrapper[4826]: I0129 06:46:59.644229 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"820effc52050b835815d3bf2091971930f5a91335366f69afbb806924d219010"} Jan 29 06:46:59 crc kubenswrapper[4826]: I0129 06:46:59.644813 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"30e88bbd8b836af40b675a8dacfa92f2d38a04542795818e13e77c78e5c68910"} Jan 29 06:46:59 crc kubenswrapper[4826]: I0129 06:46:59.644835 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c4d240365284d3ff4d9e0b51c7acf8b9ef9b6c0e4d21a12686473ffd0fe6586b"} Jan 29 06:47:00 crc kubenswrapper[4826]: I0129 06:47:00.652439 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 29 06:47:00 crc kubenswrapper[4826]: I0129 06:47:00.652708 4826 generic.go:334] "Generic (PLEG): container finished" 
podID="f614b9022728cf315e60c057852e563e" containerID="24eebf52ead4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f" exitCode=1 Jan 29 06:47:00 crc kubenswrapper[4826]: I0129 06:47:00.652782 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"24eebf52ead4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f"} Jan 29 06:47:00 crc kubenswrapper[4826]: I0129 06:47:00.653266 4826 scope.go:117] "RemoveContainer" containerID="24eebf52ead4277f7dc3d546d26a6d1e645deb748e283fd10e00df41ff65f10f" Jan 29 06:47:00 crc kubenswrapper[4826]: I0129 06:47:00.656639 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"71cee75a15ce75c708ed9a611116e43285a8a9be75a7508a448e145a416a63e0"} Jan 29 06:47:00 crc kubenswrapper[4826]: I0129 06:47:00.656681 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d286095f5a04c87d396a6eba7e74cedeffbe2952987840242725d1d770afc905"} Jan 29 06:47:00 crc kubenswrapper[4826]: I0129 06:47:00.656821 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:47:00 crc kubenswrapper[4826]: I0129 06:47:00.656974 4826 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="66b74e7b-cd1e-4181-8ce6-eb41576c41e7" Jan 29 06:47:00 crc kubenswrapper[4826]: I0129 06:47:00.657012 4826 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="66b74e7b-cd1e-4181-8ce6-eb41576c41e7" Jan 29 06:47:01 crc kubenswrapper[4826]: I0129 06:47:01.537594 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 06:47:01 crc kubenswrapper[4826]: I0129 06:47:01.664969 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 29 06:47:01 crc kubenswrapper[4826]: I0129 06:47:01.665041 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ba7cf45d2a64c300278213a2894da325b3810348880173f5daced01bc4bbbc6e"} Jan 29 06:47:02 crc kubenswrapper[4826]: I0129 06:47:02.836459 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:47:02 crc kubenswrapper[4826]: I0129 06:47:02.836958 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:47:02 crc kubenswrapper[4826]: I0129 06:47:02.849273 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:47:03 crc kubenswrapper[4826]: I0129 06:47:03.917826 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 06:47:05 crc kubenswrapper[4826]: I0129 06:47:05.656634 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 06:47:05 crc kubenswrapper[4826]: I0129 06:47:05.657122 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" 
podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 06:47:05 crc kubenswrapper[4826]: I0129 06:47:05.657209 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" Jan 29 06:47:05 crc kubenswrapper[4826]: I0129 06:47:05.658245 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0adf6fcaa6a5cc342f8af2dbb14c7fbaf0c953f17ee6e8ef3156e57c3893b93f"} pod="openshift-machine-config-operator/machine-config-daemon-llzmh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 06:47:05 crc kubenswrapper[4826]: I0129 06:47:05.658424 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" containerID="cri-o://0adf6fcaa6a5cc342f8af2dbb14c7fbaf0c953f17ee6e8ef3156e57c3893b93f" gracePeriod=600 Jan 29 06:47:05 crc kubenswrapper[4826]: I0129 06:47:05.670667 4826 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:47:06 crc kubenswrapper[4826]: I0129 06:47:06.700945 4826 generic.go:334] "Generic (PLEG): container finished" podID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerID="0adf6fcaa6a5cc342f8af2dbb14c7fbaf0c953f17ee6e8ef3156e57c3893b93f" exitCode=0 Jan 29 06:47:06 crc kubenswrapper[4826]: I0129 06:47:06.701073 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" 
event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerDied","Data":"0adf6fcaa6a5cc342f8af2dbb14c7fbaf0c953f17ee6e8ef3156e57c3893b93f"} Jan 29 06:47:06 crc kubenswrapper[4826]: I0129 06:47:06.703532 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerStarted","Data":"49712b74d21f3c3377a3574a3bd33dea7533ff74de9557e188300ceb42aaf015"} Jan 29 06:47:06 crc kubenswrapper[4826]: I0129 06:47:06.704344 4826 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="66b74e7b-cd1e-4181-8ce6-eb41576c41e7" Jan 29 06:47:06 crc kubenswrapper[4826]: I0129 06:47:06.704544 4826 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="66b74e7b-cd1e-4181-8ce6-eb41576c41e7" Jan 29 06:47:06 crc kubenswrapper[4826]: I0129 06:47:06.710338 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 06:47:06 crc kubenswrapper[4826]: I0129 06:47:06.849192 4826 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="4556c57b-15ff-42ec-865e-02584a018a29" Jan 29 06:47:07 crc kubenswrapper[4826]: I0129 06:47:07.710930 4826 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="66b74e7b-cd1e-4181-8ce6-eb41576c41e7" Jan 29 06:47:07 crc kubenswrapper[4826]: I0129 06:47:07.710978 4826 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="66b74e7b-cd1e-4181-8ce6-eb41576c41e7" Jan 29 06:47:07 crc kubenswrapper[4826]: I0129 06:47:07.715778 4826 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" 
oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="4556c57b-15ff-42ec-865e-02584a018a29" Jan 29 06:47:10 crc kubenswrapper[4826]: I0129 06:47:10.060618 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 06:47:10 crc kubenswrapper[4826]: I0129 06:47:10.061059 4826 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 29 06:47:10 crc kubenswrapper[4826]: I0129 06:47:10.061676 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 29 06:47:16 crc kubenswrapper[4826]: I0129 06:47:16.402734 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 29 06:47:16 crc kubenswrapper[4826]: I0129 06:47:16.762398 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 29 06:47:16 crc kubenswrapper[4826]: I0129 06:47:16.943972 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 29 06:47:17 crc kubenswrapper[4826]: I0129 06:47:17.061547 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 29 06:47:17 crc kubenswrapper[4826]: I0129 06:47:17.263388 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 29 06:47:17 crc 
kubenswrapper[4826]: I0129 06:47:17.301590 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 29 06:47:17 crc kubenswrapper[4826]: I0129 06:47:17.425119 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 29 06:47:17 crc kubenswrapper[4826]: I0129 06:47:17.664626 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 29 06:47:17 crc kubenswrapper[4826]: I0129 06:47:17.813563 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 29 06:47:17 crc kubenswrapper[4826]: I0129 06:47:17.879793 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 29 06:47:18 crc kubenswrapper[4826]: I0129 06:47:18.154754 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 29 06:47:18 crc kubenswrapper[4826]: I0129 06:47:18.252122 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 29 06:47:18 crc kubenswrapper[4826]: I0129 06:47:18.269963 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 29 06:47:18 crc kubenswrapper[4826]: I0129 06:47:18.456210 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 29 06:47:18 crc kubenswrapper[4826]: I0129 06:47:18.475063 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 29 06:47:18 crc kubenswrapper[4826]: I0129 06:47:18.530038 4826 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 29 06:47:18 crc kubenswrapper[4826]: I0129 06:47:18.545227 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 29 06:47:18 crc kubenswrapper[4826]: I0129 06:47:18.608480 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 29 06:47:18 crc kubenswrapper[4826]: I0129 06:47:18.718287 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 29 06:47:18 crc kubenswrapper[4826]: I0129 06:47:18.725467 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 29 06:47:18 crc kubenswrapper[4826]: I0129 06:47:18.752064 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 29 06:47:18 crc kubenswrapper[4826]: I0129 06:47:18.846902 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 29 06:47:18 crc kubenswrapper[4826]: I0129 06:47:18.934249 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 29 06:47:18 crc kubenswrapper[4826]: I0129 06:47:18.955330 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 29 06:47:18 crc kubenswrapper[4826]: I0129 06:47:18.994154 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 29 06:47:19 crc kubenswrapper[4826]: I0129 06:47:19.153222 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 29 
06:47:19 crc kubenswrapper[4826]: I0129 06:47:19.224521 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 29 06:47:19 crc kubenswrapper[4826]: I0129 06:47:19.249357 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 29 06:47:19 crc kubenswrapper[4826]: I0129 06:47:19.283702 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 29 06:47:19 crc kubenswrapper[4826]: I0129 06:47:19.296445 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 29 06:47:19 crc kubenswrapper[4826]: I0129 06:47:19.587437 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 29 06:47:19 crc kubenswrapper[4826]: I0129 06:47:19.911776 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 29 06:47:20 crc kubenswrapper[4826]: I0129 06:47:20.060842 4826 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 29 06:47:20 crc kubenswrapper[4826]: I0129 06:47:20.060970 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 29 06:47:20 crc kubenswrapper[4826]: I0129 06:47:20.119254 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 29 06:47:20 crc kubenswrapper[4826]: I0129 06:47:20.123388 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 29 06:47:20 crc kubenswrapper[4826]: I0129 06:47:20.219444 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 29 06:47:20 crc kubenswrapper[4826]: I0129 06:47:20.286261 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 29 06:47:20 crc kubenswrapper[4826]: I0129 06:47:20.293762 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 29 06:47:20 crc kubenswrapper[4826]: I0129 06:47:20.316449 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 29 06:47:20 crc kubenswrapper[4826]: I0129 06:47:20.371886 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 29 06:47:20 crc kubenswrapper[4826]: I0129 06:47:20.406519 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 29 06:47:20 crc kubenswrapper[4826]: I0129 06:47:20.408172 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 29 06:47:20 crc kubenswrapper[4826]: I0129 06:47:20.409795 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 29 06:47:20 crc kubenswrapper[4826]: I0129 06:47:20.470744 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 29 06:47:20 crc kubenswrapper[4826]: I0129 06:47:20.482385 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 29 06:47:20 crc kubenswrapper[4826]: I0129 06:47:20.484261 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 29 06:47:20 crc kubenswrapper[4826]: I0129 06:47:20.541393 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 29 06:47:20 crc kubenswrapper[4826]: I0129 06:47:20.595097 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 29 06:47:20 crc kubenswrapper[4826]: I0129 06:47:20.631562 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 29 06:47:20 crc kubenswrapper[4826]: I0129 06:47:20.650092 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 29 06:47:20 crc kubenswrapper[4826]: I0129 06:47:20.727833 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 29 06:47:20 crc kubenswrapper[4826]: I0129 06:47:20.796189 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 29 06:47:20 crc kubenswrapper[4826]: I0129 06:47:20.825195 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 29 06:47:20 crc kubenswrapper[4826]: I0129 06:47:20.842854 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 29 06:47:20 crc kubenswrapper[4826]: I0129 06:47:20.862196 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 29 06:47:20 crc kubenswrapper[4826]: I0129 06:47:20.908726 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 29 06:47:20 crc kubenswrapper[4826]: I0129 06:47:20.982144 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 29 06:47:21 crc kubenswrapper[4826]: I0129 06:47:21.015230 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 29 06:47:21 crc kubenswrapper[4826]: I0129 06:47:21.140856 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 29 06:47:21 crc kubenswrapper[4826]: I0129 06:47:21.149543 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 29 06:47:21 crc kubenswrapper[4826]: I0129 06:47:21.161249 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 29 06:47:21 crc kubenswrapper[4826]: I0129 06:47:21.175647 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 29 06:47:21 crc kubenswrapper[4826]: I0129 06:47:21.245963 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 29 06:47:21 crc kubenswrapper[4826]: I0129 06:47:21.300525 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 29 06:47:21 crc kubenswrapper[4826]: I0129 06:47:21.400340 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 29 06:47:21 crc kubenswrapper[4826]: I0129 06:47:21.465434 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 29 06:47:21 crc kubenswrapper[4826]: I0129 06:47:21.514469 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 29 06:47:21 crc kubenswrapper[4826]: I0129 06:47:21.676608 4826 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 29 06:47:21 crc kubenswrapper[4826]: I0129 06:47:21.879273 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 29 06:47:21 crc kubenswrapper[4826]: I0129 06:47:21.907786 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 29 06:47:21 crc kubenswrapper[4826]: I0129 06:47:21.975801 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 29 06:47:22 crc kubenswrapper[4826]: I0129 06:47:22.019664 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 29 06:47:22 crc kubenswrapper[4826]: I0129 06:47:22.093820 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 29 06:47:22 crc kubenswrapper[4826]: I0129 06:47:22.095189 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 29 06:47:22 crc kubenswrapper[4826]: I0129 06:47:22.127400 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 29 06:47:22 crc kubenswrapper[4826]: I0129 06:47:22.211178 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 29 06:47:22 crc kubenswrapper[4826]: I0129 06:47:22.213532 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 29 06:47:22 crc kubenswrapper[4826]: I0129 06:47:22.240251 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 29 06:47:22 crc kubenswrapper[4826]: I0129 06:47:22.357131 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 29 06:47:22 crc kubenswrapper[4826]: I0129 06:47:22.473834 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 29 06:47:22 crc kubenswrapper[4826]: I0129 06:47:22.580914 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 29 06:47:22 crc kubenswrapper[4826]: I0129 06:47:22.580936 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 29 06:47:22 crc kubenswrapper[4826]: I0129 06:47:22.637775 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 29 06:47:22 crc kubenswrapper[4826]: I0129 06:47:22.654973 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 29 06:47:22 crc kubenswrapper[4826]: I0129 06:47:22.657817 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 29 06:47:22 crc kubenswrapper[4826]: I0129 06:47:22.661466 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 29 06:47:22 crc kubenswrapper[4826]: I0129 06:47:22.891644 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 29 06:47:22 crc kubenswrapper[4826]: I0129 06:47:22.987677 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 29 06:47:23 crc kubenswrapper[4826]: I0129 06:47:23.033092 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 29 06:47:23 crc kubenswrapper[4826]: I0129 06:47:23.109038 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 29 06:47:23 crc kubenswrapper[4826]: I0129 06:47:23.147407 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 29 06:47:23 crc kubenswrapper[4826]: I0129 06:47:23.175939 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 29 06:47:23 crc kubenswrapper[4826]: I0129 06:47:23.284184 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 29 06:47:23 crc kubenswrapper[4826]: I0129 06:47:23.310582 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 29 06:47:23 crc kubenswrapper[4826]: I0129 06:47:23.472058 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 29 06:47:23 crc kubenswrapper[4826]: I0129 06:47:23.536888 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 29 06:47:23 crc kubenswrapper[4826]: I0129 06:47:23.586082 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 29 06:47:23 crc kubenswrapper[4826]: I0129 06:47:23.620815 4826 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 29 06:47:23 crc kubenswrapper[4826]: I0129 06:47:23.637133 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 29 06:47:23 crc kubenswrapper[4826]: I0129 06:47:23.641895 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 29 06:47:23 crc kubenswrapper[4826]: I0129 06:47:23.641986 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 29 06:47:23 crc kubenswrapper[4826]: I0129 06:47:23.646999 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 06:47:23 crc kubenswrapper[4826]: I0129 06:47:23.668570 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=18.668544513 podStartE2EDuration="18.668544513s" podCreationTimestamp="2026-01-29 06:47:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:47:23.667664427 +0000 UTC m=+227.529457536" watchObservedRunningTime="2026-01-29 06:47:23.668544513 +0000 UTC m=+227.530337622"
Jan 29 06:47:23 crc kubenswrapper[4826]: I0129 06:47:23.685045 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 29 06:47:23 crc kubenswrapper[4826]: I0129 06:47:23.764704 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 29 06:47:23 crc kubenswrapper[4826]: I0129 06:47:23.779084 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 29 06:47:23 crc kubenswrapper[4826]: I0129 06:47:23.900627 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 29 06:47:23 crc kubenswrapper[4826]: I0129 06:47:23.946609 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 29 06:47:23 crc kubenswrapper[4826]: I0129 06:47:23.953238 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 29 06:47:23 crc kubenswrapper[4826]: I0129 06:47:23.958987 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 29 06:47:24 crc kubenswrapper[4826]: I0129 06:47:24.015804 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 29 06:47:24 crc kubenswrapper[4826]: I0129 06:47:24.034971 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 29 06:47:24 crc kubenswrapper[4826]: I0129 06:47:24.064852 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 29 06:47:24 crc kubenswrapper[4826]: I0129 06:47:24.141997 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 29 06:47:24 crc kubenswrapper[4826]: I0129 06:47:24.153937 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 29 06:47:24 crc kubenswrapper[4826]: I0129 06:47:24.223539 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 29 06:47:24 crc kubenswrapper[4826]: I0129 06:47:24.235652 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 29 06:47:24 crc kubenswrapper[4826]: I0129 06:47:24.328461 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 29 06:47:24 crc kubenswrapper[4826]: I0129 06:47:24.502347 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 29 06:47:24 crc kubenswrapper[4826]: I0129 06:47:24.617200 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 29 06:47:24 crc kubenswrapper[4826]: I0129 06:47:24.627617 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 29 06:47:24 crc kubenswrapper[4826]: I0129 06:47:24.634798 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 29 06:47:24 crc kubenswrapper[4826]: I0129 06:47:24.661947 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 29 06:47:24 crc kubenswrapper[4826]: I0129 06:47:24.736386 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 29 06:47:24 crc kubenswrapper[4826]: I0129 06:47:24.748095 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 29 06:47:24 crc kubenswrapper[4826]: I0129 06:47:24.750046 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 29 06:47:24 crc kubenswrapper[4826]: I0129 06:47:24.753133 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 29 06:47:24 crc kubenswrapper[4826]: I0129 06:47:24.774910 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 29 06:47:24 crc kubenswrapper[4826]: I0129 06:47:24.923398 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 29 06:47:24 crc kubenswrapper[4826]: I0129 06:47:24.963392 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 29 06:47:24 crc kubenswrapper[4826]: I0129 06:47:24.991709 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 29 06:47:25 crc kubenswrapper[4826]: I0129 06:47:25.005630 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 29 06:47:25 crc kubenswrapper[4826]: I0129 06:47:25.055709 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 29 06:47:25 crc kubenswrapper[4826]: I0129 06:47:25.114203 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 29 06:47:25 crc kubenswrapper[4826]: I0129 06:47:25.125958 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 29 06:47:25 crc kubenswrapper[4826]: I0129 06:47:25.132829 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 29 06:47:25 crc kubenswrapper[4826]: I0129 06:47:25.132886 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 29 06:47:25 crc kubenswrapper[4826]: I0129 06:47:25.190876 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 29 06:47:25 crc kubenswrapper[4826]: I0129 06:47:25.275878 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 29 06:47:25 crc kubenswrapper[4826]: I0129 06:47:25.316496 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 29 06:47:25 crc kubenswrapper[4826]: I0129 06:47:25.451219 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 29 06:47:25 crc kubenswrapper[4826]: I0129 06:47:25.543033 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 29 06:47:25 crc kubenswrapper[4826]: I0129 06:47:25.577209 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 29 06:47:25 crc kubenswrapper[4826]: I0129 06:47:25.594469 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 29 06:47:25 crc kubenswrapper[4826]: I0129 06:47:25.602899 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 29 06:47:25 crc kubenswrapper[4826]: I0129 06:47:25.608537 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 29 06:47:25 crc kubenswrapper[4826]: I0129 06:47:25.619404 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 29 06:47:25 crc kubenswrapper[4826]: I0129 06:47:25.736103 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 29 06:47:25 crc kubenswrapper[4826]: I0129 06:47:25.738279 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 29 06:47:25 crc kubenswrapper[4826]: I0129 06:47:25.754228 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 29 06:47:25 crc kubenswrapper[4826]: I0129 06:47:25.770239 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 29 06:47:25 crc kubenswrapper[4826]: I0129 06:47:25.898938 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 29 06:47:25 crc kubenswrapper[4826]: I0129 06:47:25.926677 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 29 06:47:25 crc kubenswrapper[4826]: I0129 06:47:25.926686 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 29 06:47:25 crc kubenswrapper[4826]: I0129 06:47:25.969520 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 29 06:47:26 crc kubenswrapper[4826]: I0129 06:47:26.037862 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 29 06:47:26 crc kubenswrapper[4826]: I0129 06:47:26.139022 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 29 06:47:26 crc kubenswrapper[4826]: I0129 06:47:26.167762 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 29 06:47:26 crc kubenswrapper[4826]: I0129 06:47:26.171093 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 29 06:47:26 crc kubenswrapper[4826]: I0129 06:47:26.195978 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 29 06:47:26 crc kubenswrapper[4826]: I0129 06:47:26.321026 4826 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 29 06:47:26 crc kubenswrapper[4826]: I0129 06:47:26.358282 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 29 06:47:26 crc kubenswrapper[4826]: I0129 06:47:26.381270 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 29 06:47:26 crc kubenswrapper[4826]: I0129 06:47:26.521038 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 29 06:47:26 crc kubenswrapper[4826]: I0129 06:47:26.527371 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 29 06:47:26 crc kubenswrapper[4826]: I0129 06:47:26.603563 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 29 06:47:26 crc kubenswrapper[4826]: I0129 06:47:26.683947 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 29 06:47:26 crc kubenswrapper[4826]: I0129 06:47:26.839022 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 29 06:47:26 crc kubenswrapper[4826]: I0129 06:47:26.870885 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 29 06:47:26 crc kubenswrapper[4826]: I0129 06:47:26.882750 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 29 06:47:26 crc kubenswrapper[4826]: I0129 06:47:26.906836 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 29 06:47:27 crc kubenswrapper[4826]: I0129 06:47:27.047946 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 29 06:47:27 crc kubenswrapper[4826]: I0129 06:47:27.061519 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 29 06:47:27 crc kubenswrapper[4826]: I0129 06:47:27.095270 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 29 06:47:27 crc kubenswrapper[4826]: I0129 06:47:27.116848 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 29 06:47:27 crc kubenswrapper[4826]: I0129 06:47:27.157219 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 29 06:47:27 crc kubenswrapper[4826]: I0129 06:47:27.171455 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 29 06:47:27 crc kubenswrapper[4826]: I0129 06:47:27.202489 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 29 06:47:27 crc kubenswrapper[4826]: I0129 06:47:27.207128 4826 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 29 06:47:27 crc kubenswrapper[4826]: I0129 06:47:27.253968 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 29 06:47:27 crc kubenswrapper[4826]: I0129 06:47:27.285629 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 29 06:47:27 crc kubenswrapper[4826]: I0129 06:47:27.335900 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 29 06:47:27 crc kubenswrapper[4826]: I0129 06:47:27.353148 4826 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 29 06:47:27 crc kubenswrapper[4826]: I0129 06:47:27.354170 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 29 06:47:27 crc kubenswrapper[4826]: I0129 06:47:27.389759 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 29 06:47:27 crc kubenswrapper[4826]: I0129 06:47:27.392177 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 29 06:47:27 crc kubenswrapper[4826]: I0129 06:47:27.404243 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 29 06:47:27 crc kubenswrapper[4826]: I0129 06:47:27.418094 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 29 06:47:27 crc kubenswrapper[4826]: I0129 06:47:27.467243 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 29 06:47:27 crc kubenswrapper[4826]: I0129 06:47:27.594104 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 29 06:47:27 crc kubenswrapper[4826]: I0129 06:47:27.605807 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 29 06:47:27 crc kubenswrapper[4826]: I0129 06:47:27.675336 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 29 06:47:27 crc kubenswrapper[4826]: I0129 06:47:27.692359 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 29 06:47:27 crc kubenswrapper[4826]: I0129 06:47:27.704755 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 29 06:47:27 crc kubenswrapper[4826]: I0129 06:47:27.716169 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 29 06:47:27 crc kubenswrapper[4826]: I0129 06:47:27.758525 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 29 06:47:27 crc kubenswrapper[4826]: I0129 06:47:27.784846 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 29 06:47:27 crc kubenswrapper[4826]: I0129 06:47:27.883274 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 29 06:47:27 crc kubenswrapper[4826]: I0129 06:47:27.933029 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 29 06:47:27 crc kubenswrapper[4826]: I0129 06:47:27.949167 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 29 06:47:28 crc kubenswrapper[4826]: I0129 06:47:28.006251 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 29 06:47:28 crc kubenswrapper[4826]: I0129 06:47:28.102938 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 29 06:47:28 crc kubenswrapper[4826]: I0129 06:47:28.170973 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 29 06:47:28 crc kubenswrapper[4826]: I0129 06:47:28.198224 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 29 06:47:28 crc kubenswrapper[4826]: I0129 06:47:28.250137 4826 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 29 06:47:28 crc kubenswrapper[4826]: I0129 06:47:28.250475 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://ff83b7823356c12661cf8c0d32bce080f96d367605c073abc70b950a384f0795" gracePeriod=5
Jan 29 06:47:28 crc kubenswrapper[4826]: I0129 06:47:28.298641 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 29 06:47:28 crc kubenswrapper[4826]: I0129 06:47:28.344933 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 29 06:47:28 crc kubenswrapper[4826]: I0129 06:47:28.372715 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 29 06:47:28 crc kubenswrapper[4826]: I0129 06:47:28.379003 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 29 06:47:28 crc kubenswrapper[4826]: I0129 06:47:28.434928 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 29 06:47:28 crc kubenswrapper[4826]: I0129 06:47:28.500939 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 29 06:47:28 crc kubenswrapper[4826]: I0129 06:47:28.535005 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 29 06:47:28 crc kubenswrapper[4826]: I0129 06:47:28.639269 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 29 06:47:28 crc kubenswrapper[4826]: I0129 06:47:28.708120 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 29 06:47:28 crc kubenswrapper[4826]: I0129 06:47:28.718030 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 29 06:47:28 crc kubenswrapper[4826]: I0129 06:47:28.788585 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 29 06:47:28 crc kubenswrapper[4826]: I0129 06:47:28.885032 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 29 06:47:28 crc kubenswrapper[4826]: I0129 06:47:28.926198 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 29 06:47:29 crc kubenswrapper[4826]: I0129 06:47:29.039364 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 29 06:47:29 crc kubenswrapper[4826]: I0129 06:47:29.220354 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 29 06:47:29 crc kubenswrapper[4826]: I0129 06:47:29.239276 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 29 06:47:29 crc kubenswrapper[4826]: I0129 06:47:29.246600 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 29 06:47:29 crc kubenswrapper[4826]: I0129 06:47:29.260244 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 29 06:47:29 crc kubenswrapper[4826]: I0129 06:47:29.314060 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 29 06:47:29 crc kubenswrapper[4826]: I0129 06:47:29.374290 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 29 06:47:29 crc kubenswrapper[4826]: I0129 06:47:29.427402 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 29 06:47:29 crc kubenswrapper[4826]: I0129 06:47:29.490517 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 29 06:47:29 crc kubenswrapper[4826]: I0129 06:47:29.556118 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 29 06:47:29 crc kubenswrapper[4826]: I0129 06:47:29.645036 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 29 06:47:29 crc kubenswrapper[4826]: I0129 06:47:29.651687 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 29 06:47:29 crc kubenswrapper[4826]: I0129 06:47:29.672916 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 29 06:47:29 crc kubenswrapper[4826]: I0129 06:47:29.741982 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 29 06:47:29 crc kubenswrapper[4826]: I0129 06:47:29.845762 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 29 06:47:29 crc kubenswrapper[4826]: I0129 06:47:29.945704 4826 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 29 06:47:30 crc kubenswrapper[4826]: I0129 06:47:30.068552 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 06:47:30 crc kubenswrapper[4826]: I0129 06:47:30.077294 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 06:47:30 crc kubenswrapper[4826]: I0129 06:47:30.078767 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 29 06:47:30 crc kubenswrapper[4826]: I0129 06:47:30.135226 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 29 06:47:30 crc kubenswrapper[4826]: I0129 06:47:30.172292 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 29 06:47:30 crc kubenswrapper[4826]: I0129 06:47:30.432157 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 29 06:47:30 crc kubenswrapper[4826]: I0129 06:47:30.432954 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 29 06:47:30 crc kubenswrapper[4826]: I0129 06:47:30.438382 4826 reflector.go:368] Caches populated for *v1.Secret from
object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 29 06:47:30 crc kubenswrapper[4826]: I0129 06:47:30.636323 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 29 06:47:30 crc kubenswrapper[4826]: I0129 06:47:30.702691 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 29 06:47:31 crc kubenswrapper[4826]: I0129 06:47:31.204385 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 29 06:47:31 crc kubenswrapper[4826]: I0129 06:47:31.415783 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 29 06:47:31 crc kubenswrapper[4826]: I0129 06:47:31.450869 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 29 06:47:31 crc kubenswrapper[4826]: I0129 06:47:31.654557 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 29 06:47:31 crc kubenswrapper[4826]: I0129 06:47:31.832202 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 29 06:47:31 crc kubenswrapper[4826]: I0129 06:47:31.920575 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 29 06:47:32 crc kubenswrapper[4826]: I0129 06:47:32.040272 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 29 06:47:32 crc kubenswrapper[4826]: I0129 06:47:32.040718 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 29 06:47:32 crc kubenswrapper[4826]: I0129 06:47:32.096252 4826 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 06:47:32 crc kubenswrapper[4826]: I0129 06:47:32.319529 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 29 06:47:32 crc kubenswrapper[4826]: I0129 06:47:32.581248 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 29 06:47:32 crc kubenswrapper[4826]: I0129 06:47:32.863233 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 29 06:47:32 crc kubenswrapper[4826]: I0129 06:47:32.899627 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 29 06:47:33 crc kubenswrapper[4826]: I0129 06:47:33.429601 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 29 06:47:33 crc kubenswrapper[4826]: I0129 06:47:33.461089 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 29 06:47:33 crc kubenswrapper[4826]: I0129 06:47:33.861932 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 29 06:47:33 crc kubenswrapper[4826]: I0129 06:47:33.862046 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 06:47:33 crc kubenswrapper[4826]: I0129 06:47:33.897108 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 29 06:47:33 crc kubenswrapper[4826]: I0129 06:47:33.897180 4826 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="ff83b7823356c12661cf8c0d32bce080f96d367605c073abc70b950a384f0795" exitCode=137 Jan 29 06:47:33 crc kubenswrapper[4826]: I0129 06:47:33.897353 4826 scope.go:117] "RemoveContainer" containerID="ff83b7823356c12661cf8c0d32bce080f96d367605c073abc70b950a384f0795" Jan 29 06:47:33 crc kubenswrapper[4826]: I0129 06:47:33.897547 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 06:47:33 crc kubenswrapper[4826]: I0129 06:47:33.929815 4826 scope.go:117] "RemoveContainer" containerID="ff83b7823356c12661cf8c0d32bce080f96d367605c073abc70b950a384f0795" Jan 29 06:47:33 crc kubenswrapper[4826]: E0129 06:47:33.930567 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff83b7823356c12661cf8c0d32bce080f96d367605c073abc70b950a384f0795\": container with ID starting with ff83b7823356c12661cf8c0d32bce080f96d367605c073abc70b950a384f0795 not found: ID does not exist" containerID="ff83b7823356c12661cf8c0d32bce080f96d367605c073abc70b950a384f0795" Jan 29 06:47:33 crc kubenswrapper[4826]: I0129 06:47:33.930625 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff83b7823356c12661cf8c0d32bce080f96d367605c073abc70b950a384f0795"} err="failed to get container status \"ff83b7823356c12661cf8c0d32bce080f96d367605c073abc70b950a384f0795\": rpc error: code = NotFound desc = could 
not find container \"ff83b7823356c12661cf8c0d32bce080f96d367605c073abc70b950a384f0795\": container with ID starting with ff83b7823356c12661cf8c0d32bce080f96d367605c073abc70b950a384f0795 not found: ID does not exist" Jan 29 06:47:34 crc kubenswrapper[4826]: I0129 06:47:34.012895 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 06:47:34 crc kubenswrapper[4826]: I0129 06:47:34.013003 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:47:34 crc kubenswrapper[4826]: I0129 06:47:34.013241 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 06:47:34 crc kubenswrapper[4826]: I0129 06:47:34.013276 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 06:47:34 crc kubenswrapper[4826]: I0129 06:47:34.013293 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:47:34 crc kubenswrapper[4826]: I0129 06:47:34.013370 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 06:47:34 crc kubenswrapper[4826]: I0129 06:47:34.013412 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 06:47:34 crc kubenswrapper[4826]: I0129 06:47:34.013502 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:47:34 crc kubenswrapper[4826]: I0129 06:47:34.013720 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:47:34 crc kubenswrapper[4826]: I0129 06:47:34.014165 4826 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 29 06:47:34 crc kubenswrapper[4826]: I0129 06:47:34.014189 4826 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 29 06:47:34 crc kubenswrapper[4826]: I0129 06:47:34.014205 4826 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 06:47:34 crc kubenswrapper[4826]: I0129 06:47:34.014222 4826 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 29 06:47:34 crc kubenswrapper[4826]: I0129 06:47:34.025924 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:47:34 crc kubenswrapper[4826]: I0129 06:47:34.116087 4826 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 06:47:34 crc kubenswrapper[4826]: I0129 06:47:34.819569 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 29 06:47:47 crc kubenswrapper[4826]: I0129 06:47:47.015259 4826 generic.go:334] "Generic (PLEG): container finished" podID="7ebf8342-e0f9-413f-811d-57ca9df94f2d" containerID="9d13a267560333a1d187c96aabdd7d32057bd22f83d5e3f44fb9d50b46b66260" exitCode=0 Jan 29 06:47:47 crc kubenswrapper[4826]: I0129 06:47:47.015983 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bmqnb" event={"ID":"7ebf8342-e0f9-413f-811d-57ca9df94f2d","Type":"ContainerDied","Data":"9d13a267560333a1d187c96aabdd7d32057bd22f83d5e3f44fb9d50b46b66260"} Jan 29 06:47:47 crc kubenswrapper[4826]: I0129 06:47:47.016680 4826 scope.go:117] "RemoveContainer" containerID="9d13a267560333a1d187c96aabdd7d32057bd22f83d5e3f44fb9d50b46b66260" Jan 29 06:47:48 crc kubenswrapper[4826]: I0129 06:47:48.023134 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bmqnb" event={"ID":"7ebf8342-e0f9-413f-811d-57ca9df94f2d","Type":"ContainerStarted","Data":"306b63cc65010e89b7909e7be3d24c83a459eb1133b5d64d4490a4a5498010e5"} Jan 29 06:47:48 crc kubenswrapper[4826]: I0129 06:47:48.023793 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-bmqnb" Jan 29 06:47:48 crc kubenswrapper[4826]: I0129 06:47:48.025999 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-bmqnb" Jan 29 06:47:51 crc kubenswrapper[4826]: I0129 06:47:51.662876 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 29 06:48:35 crc kubenswrapper[4826]: I0129 06:48:35.940080 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vz5vb"] Jan 29 06:48:35 crc kubenswrapper[4826]: E0129 06:48:35.941394 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da70e922-2407-45cd-a1d2-6f3d9129de21" containerName="installer" Jan 29 06:48:35 crc kubenswrapper[4826]: I0129 06:48:35.941429 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="da70e922-2407-45cd-a1d2-6f3d9129de21" containerName="installer" Jan 29 06:48:35 crc kubenswrapper[4826]: E0129 06:48:35.941473 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 06:48:35 crc kubenswrapper[4826]: I0129 06:48:35.941491 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 06:48:35 crc kubenswrapper[4826]: I0129 06:48:35.941760 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 06:48:35 crc kubenswrapper[4826]: I0129 06:48:35.941802 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="da70e922-2407-45cd-a1d2-6f3d9129de21" containerName="installer" Jan 29 06:48:35 crc kubenswrapper[4826]: I0129 06:48:35.942768 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-vz5vb" Jan 29 06:48:35 crc kubenswrapper[4826]: I0129 06:48:35.960902 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vz5vb"] Jan 29 06:48:36 crc kubenswrapper[4826]: I0129 06:48:36.041473 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6edde1ee-ebe7-40e2-be1a-995bfba0a61d-registry-certificates\") pod \"image-registry-66df7c8f76-vz5vb\" (UID: \"6edde1ee-ebe7-40e2-be1a-995bfba0a61d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vz5vb" Jan 29 06:48:36 crc kubenswrapper[4826]: I0129 06:48:36.041532 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6edde1ee-ebe7-40e2-be1a-995bfba0a61d-trusted-ca\") pod \"image-registry-66df7c8f76-vz5vb\" (UID: \"6edde1ee-ebe7-40e2-be1a-995bfba0a61d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vz5vb" Jan 29 06:48:36 crc kubenswrapper[4826]: I0129 06:48:36.041550 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6edde1ee-ebe7-40e2-be1a-995bfba0a61d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vz5vb\" (UID: \"6edde1ee-ebe7-40e2-be1a-995bfba0a61d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vz5vb" Jan 29 06:48:36 crc kubenswrapper[4826]: I0129 06:48:36.041572 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6edde1ee-ebe7-40e2-be1a-995bfba0a61d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vz5vb\" (UID: \"6edde1ee-ebe7-40e2-be1a-995bfba0a61d\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-vz5vb" Jan 29 06:48:36 crc kubenswrapper[4826]: I0129 06:48:36.041731 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6edde1ee-ebe7-40e2-be1a-995bfba0a61d-registry-tls\") pod \"image-registry-66df7c8f76-vz5vb\" (UID: \"6edde1ee-ebe7-40e2-be1a-995bfba0a61d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vz5vb" Jan 29 06:48:36 crc kubenswrapper[4826]: I0129 06:48:36.041798 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6edde1ee-ebe7-40e2-be1a-995bfba0a61d-bound-sa-token\") pod \"image-registry-66df7c8f76-vz5vb\" (UID: \"6edde1ee-ebe7-40e2-be1a-995bfba0a61d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vz5vb" Jan 29 06:48:36 crc kubenswrapper[4826]: I0129 06:48:36.042051 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-vz5vb\" (UID: \"6edde1ee-ebe7-40e2-be1a-995bfba0a61d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vz5vb" Jan 29 06:48:36 crc kubenswrapper[4826]: I0129 06:48:36.042141 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6rp9\" (UniqueName: \"kubernetes.io/projected/6edde1ee-ebe7-40e2-be1a-995bfba0a61d-kube-api-access-h6rp9\") pod \"image-registry-66df7c8f76-vz5vb\" (UID: \"6edde1ee-ebe7-40e2-be1a-995bfba0a61d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vz5vb" Jan 29 06:48:36 crc kubenswrapper[4826]: I0129 06:48:36.076782 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-vz5vb\" (UID: \"6edde1ee-ebe7-40e2-be1a-995bfba0a61d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vz5vb" Jan 29 06:48:36 crc kubenswrapper[4826]: I0129 06:48:36.144383 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6rp9\" (UniqueName: \"kubernetes.io/projected/6edde1ee-ebe7-40e2-be1a-995bfba0a61d-kube-api-access-h6rp9\") pod \"image-registry-66df7c8f76-vz5vb\" (UID: \"6edde1ee-ebe7-40e2-be1a-995bfba0a61d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vz5vb" Jan 29 06:48:36 crc kubenswrapper[4826]: I0129 06:48:36.144486 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6edde1ee-ebe7-40e2-be1a-995bfba0a61d-registry-certificates\") pod \"image-registry-66df7c8f76-vz5vb\" (UID: \"6edde1ee-ebe7-40e2-be1a-995bfba0a61d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vz5vb" Jan 29 06:48:36 crc kubenswrapper[4826]: I0129 06:48:36.144534 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6edde1ee-ebe7-40e2-be1a-995bfba0a61d-trusted-ca\") pod \"image-registry-66df7c8f76-vz5vb\" (UID: \"6edde1ee-ebe7-40e2-be1a-995bfba0a61d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vz5vb" Jan 29 06:48:36 crc kubenswrapper[4826]: I0129 06:48:36.144568 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6edde1ee-ebe7-40e2-be1a-995bfba0a61d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vz5vb\" (UID: \"6edde1ee-ebe7-40e2-be1a-995bfba0a61d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vz5vb" Jan 29 06:48:36 crc kubenswrapper[4826]: I0129 
06:48:36.144609 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6edde1ee-ebe7-40e2-be1a-995bfba0a61d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vz5vb\" (UID: \"6edde1ee-ebe7-40e2-be1a-995bfba0a61d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vz5vb" Jan 29 06:48:36 crc kubenswrapper[4826]: I0129 06:48:36.144644 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6edde1ee-ebe7-40e2-be1a-995bfba0a61d-registry-tls\") pod \"image-registry-66df7c8f76-vz5vb\" (UID: \"6edde1ee-ebe7-40e2-be1a-995bfba0a61d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vz5vb" Jan 29 06:48:36 crc kubenswrapper[4826]: I0129 06:48:36.144681 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6edde1ee-ebe7-40e2-be1a-995bfba0a61d-bound-sa-token\") pod \"image-registry-66df7c8f76-vz5vb\" (UID: \"6edde1ee-ebe7-40e2-be1a-995bfba0a61d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vz5vb" Jan 29 06:48:36 crc kubenswrapper[4826]: I0129 06:48:36.145377 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6edde1ee-ebe7-40e2-be1a-995bfba0a61d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vz5vb\" (UID: \"6edde1ee-ebe7-40e2-be1a-995bfba0a61d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vz5vb" Jan 29 06:48:36 crc kubenswrapper[4826]: I0129 06:48:36.146437 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6edde1ee-ebe7-40e2-be1a-995bfba0a61d-trusted-ca\") pod \"image-registry-66df7c8f76-vz5vb\" (UID: \"6edde1ee-ebe7-40e2-be1a-995bfba0a61d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vz5vb" Jan 29 
06:48:36 crc kubenswrapper[4826]: I0129 06:48:36.146947 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6edde1ee-ebe7-40e2-be1a-995bfba0a61d-registry-certificates\") pod \"image-registry-66df7c8f76-vz5vb\" (UID: \"6edde1ee-ebe7-40e2-be1a-995bfba0a61d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vz5vb" Jan 29 06:48:36 crc kubenswrapper[4826]: I0129 06:48:36.155817 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6edde1ee-ebe7-40e2-be1a-995bfba0a61d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vz5vb\" (UID: \"6edde1ee-ebe7-40e2-be1a-995bfba0a61d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vz5vb" Jan 29 06:48:36 crc kubenswrapper[4826]: I0129 06:48:36.155898 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6edde1ee-ebe7-40e2-be1a-995bfba0a61d-registry-tls\") pod \"image-registry-66df7c8f76-vz5vb\" (UID: \"6edde1ee-ebe7-40e2-be1a-995bfba0a61d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vz5vb" Jan 29 06:48:36 crc kubenswrapper[4826]: I0129 06:48:36.163509 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6rp9\" (UniqueName: \"kubernetes.io/projected/6edde1ee-ebe7-40e2-be1a-995bfba0a61d-kube-api-access-h6rp9\") pod \"image-registry-66df7c8f76-vz5vb\" (UID: \"6edde1ee-ebe7-40e2-be1a-995bfba0a61d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vz5vb" Jan 29 06:48:36 crc kubenswrapper[4826]: I0129 06:48:36.174200 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6edde1ee-ebe7-40e2-be1a-995bfba0a61d-bound-sa-token\") pod \"image-registry-66df7c8f76-vz5vb\" (UID: \"6edde1ee-ebe7-40e2-be1a-995bfba0a61d\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-vz5vb" Jan 29 06:48:36 crc kubenswrapper[4826]: I0129 06:48:36.266172 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-vz5vb" Jan 29 06:48:36 crc kubenswrapper[4826]: I0129 06:48:36.566790 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vz5vb"] Jan 29 06:48:36 crc kubenswrapper[4826]: I0129 06:48:36.569424 4826 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 29 06:48:37 crc kubenswrapper[4826]: I0129 06:48:37.400824 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-vz5vb" event={"ID":"6edde1ee-ebe7-40e2-be1a-995bfba0a61d","Type":"ContainerStarted","Data":"2fea05fcf0fd95d0cc05b5635dc992cd619a6d29aa00ff289c4c5040d7f0fe6c"} Jan 29 06:48:37 crc kubenswrapper[4826]: I0129 06:48:37.401426 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-vz5vb" event={"ID":"6edde1ee-ebe7-40e2-be1a-995bfba0a61d","Type":"ContainerStarted","Data":"5cdcc82f7a87f00776937373976ec6024cafd52ebea1c6f21c19e8364b309ce2"} Jan 29 06:48:37 crc kubenswrapper[4826]: I0129 06:48:37.401605 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-vz5vb" Jan 29 06:48:37 crc kubenswrapper[4826]: I0129 06:48:37.442792 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-vz5vb" podStartSLOduration=2.442762573 podStartE2EDuration="2.442762573s" podCreationTimestamp="2026-01-29 06:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:48:37.436802566 +0000 UTC m=+301.298595645" 
watchObservedRunningTime="2026-01-29 06:48:37.442762573 +0000 UTC m=+301.304555672" Jan 29 06:48:56 crc kubenswrapper[4826]: I0129 06:48:56.275493 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-vz5vb" Jan 29 06:48:56 crc kubenswrapper[4826]: I0129 06:48:56.422710 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ddt5t"] Jan 29 06:49:04 crc kubenswrapper[4826]: I0129 06:49:04.841154 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-brqmx"] Jan 29 06:49:04 crc kubenswrapper[4826]: I0129 06:49:04.842084 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-brqmx" podUID="eb065b7c-e4d6-4607-aa51-e8acf00117fa" containerName="registry-server" containerID="cri-o://c0bc02934618b56670e02c41ece2d294ccddb2ef03d666340615b38efcc32cc5" gracePeriod=30 Jan 29 06:49:04 crc kubenswrapper[4826]: I0129 06:49:04.849107 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dmn6q"] Jan 29 06:49:04 crc kubenswrapper[4826]: I0129 06:49:04.849457 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dmn6q" podUID="d84323bb-bf74-4538-8cff-b507cb9b261d" containerName="registry-server" containerID="cri-o://78b8cbdb5ea73d6bbd4b1371a4d7f2a056ce489f4cda0047214a0673ccadc7c6" gracePeriod=30 Jan 29 06:49:04 crc kubenswrapper[4826]: I0129 06:49:04.864337 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bmqnb"] Jan 29 06:49:04 crc kubenswrapper[4826]: I0129 06:49:04.864595 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-bmqnb" podUID="7ebf8342-e0f9-413f-811d-57ca9df94f2d" 
containerName="marketplace-operator" containerID="cri-o://306b63cc65010e89b7909e7be3d24c83a459eb1133b5d64d4490a4a5498010e5" gracePeriod=30 Jan 29 06:49:04 crc kubenswrapper[4826]: I0129 06:49:04.869213 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vvbnx"] Jan 29 06:49:04 crc kubenswrapper[4826]: I0129 06:49:04.869464 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vvbnx" podUID="775bf475-9e63-49e0-9bde-bef34dce79c9" containerName="registry-server" containerID="cri-o://4a9f2e7beae972051dca2ae472c02d8a51bbb70ccee034b2e3b68cae66197bfd" gracePeriod=30 Jan 29 06:49:04 crc kubenswrapper[4826]: I0129 06:49:04.873829 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8g6tv"] Jan 29 06:49:04 crc kubenswrapper[4826]: I0129 06:49:04.874965 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8g6tv" Jan 29 06:49:04 crc kubenswrapper[4826]: I0129 06:49:04.876645 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xt89w"] Jan 29 06:49:04 crc kubenswrapper[4826]: I0129 06:49:04.876868 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xt89w" podUID="2b7e32bd-0e0a-49fd-a29e-4c8087218b7a" containerName="registry-server" containerID="cri-o://3d895f90b224c8fe8be2213ad186be47562803ea80d2eac16cf0c25ec2d4f99c" gracePeriod=30 Jan 29 06:49:04 crc kubenswrapper[4826]: I0129 06:49:04.887137 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8g6tv"] Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.006131 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/720ff77a-51ee-49c7-8678-4d6d9f179942-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8g6tv\" (UID: \"720ff77a-51ee-49c7-8678-4d6d9f179942\") " pod="openshift-marketplace/marketplace-operator-79b997595-8g6tv" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.006214 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xf5k\" (UniqueName: \"kubernetes.io/projected/720ff77a-51ee-49c7-8678-4d6d9f179942-kube-api-access-9xf5k\") pod \"marketplace-operator-79b997595-8g6tv\" (UID: \"720ff77a-51ee-49c7-8678-4d6d9f179942\") " pod="openshift-marketplace/marketplace-operator-79b997595-8g6tv" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.006240 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/720ff77a-51ee-49c7-8678-4d6d9f179942-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8g6tv\" (UID: \"720ff77a-51ee-49c7-8678-4d6d9f179942\") " pod="openshift-marketplace/marketplace-operator-79b997595-8g6tv" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.107049 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xf5k\" (UniqueName: \"kubernetes.io/projected/720ff77a-51ee-49c7-8678-4d6d9f179942-kube-api-access-9xf5k\") pod \"marketplace-operator-79b997595-8g6tv\" (UID: \"720ff77a-51ee-49c7-8678-4d6d9f179942\") " pod="openshift-marketplace/marketplace-operator-79b997595-8g6tv" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.107406 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/720ff77a-51ee-49c7-8678-4d6d9f179942-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8g6tv\" (UID: \"720ff77a-51ee-49c7-8678-4d6d9f179942\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-8g6tv" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.107459 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/720ff77a-51ee-49c7-8678-4d6d9f179942-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8g6tv\" (UID: \"720ff77a-51ee-49c7-8678-4d6d9f179942\") " pod="openshift-marketplace/marketplace-operator-79b997595-8g6tv" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.109965 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/720ff77a-51ee-49c7-8678-4d6d9f179942-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8g6tv\" (UID: \"720ff77a-51ee-49c7-8678-4d6d9f179942\") " pod="openshift-marketplace/marketplace-operator-79b997595-8g6tv" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.113140 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/720ff77a-51ee-49c7-8678-4d6d9f179942-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8g6tv\" (UID: \"720ff77a-51ee-49c7-8678-4d6d9f179942\") " pod="openshift-marketplace/marketplace-operator-79b997595-8g6tv" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.124772 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xf5k\" (UniqueName: \"kubernetes.io/projected/720ff77a-51ee-49c7-8678-4d6d9f179942-kube-api-access-9xf5k\") pod \"marketplace-operator-79b997595-8g6tv\" (UID: \"720ff77a-51ee-49c7-8678-4d6d9f179942\") " pod="openshift-marketplace/marketplace-operator-79b997595-8g6tv" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.191926 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8g6tv" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.268336 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-brqmx" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.344833 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bmqnb" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.351588 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vvbnx" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.352822 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dmn6q" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.357077 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xt89w" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.410881 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb065b7c-e4d6-4607-aa51-e8acf00117fa-catalog-content\") pod \"eb065b7c-e4d6-4607-aa51-e8acf00117fa\" (UID: \"eb065b7c-e4d6-4607-aa51-e8acf00117fa\") " Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.410952 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crv66\" (UniqueName: \"kubernetes.io/projected/eb065b7c-e4d6-4607-aa51-e8acf00117fa-kube-api-access-crv66\") pod \"eb065b7c-e4d6-4607-aa51-e8acf00117fa\" (UID: \"eb065b7c-e4d6-4607-aa51-e8acf00117fa\") " Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.410992 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb065b7c-e4d6-4607-aa51-e8acf00117fa-utilities\") pod \"eb065b7c-e4d6-4607-aa51-e8acf00117fa\" (UID: \"eb065b7c-e4d6-4607-aa51-e8acf00117fa\") " Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.415997 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb065b7c-e4d6-4607-aa51-e8acf00117fa-utilities" (OuterVolumeSpecName: "utilities") pod "eb065b7c-e4d6-4607-aa51-e8acf00117fa" (UID: "eb065b7c-e4d6-4607-aa51-e8acf00117fa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.417010 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb065b7c-e4d6-4607-aa51-e8acf00117fa-kube-api-access-crv66" (OuterVolumeSpecName: "kube-api-access-crv66") pod "eb065b7c-e4d6-4607-aa51-e8acf00117fa" (UID: "eb065b7c-e4d6-4607-aa51-e8acf00117fa"). InnerVolumeSpecName "kube-api-access-crv66". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.448810 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8g6tv"] Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.493790 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb065b7c-e4d6-4607-aa51-e8acf00117fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb065b7c-e4d6-4607-aa51-e8acf00117fa" (UID: "eb065b7c-e4d6-4607-aa51-e8acf00117fa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.511638 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7ebf8342-e0f9-413f-811d-57ca9df94f2d-marketplace-trusted-ca\") pod \"7ebf8342-e0f9-413f-811d-57ca9df94f2d\" (UID: \"7ebf8342-e0f9-413f-811d-57ca9df94f2d\") " Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.511873 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d84323bb-bf74-4538-8cff-b507cb9b261d-catalog-content\") pod \"d84323bb-bf74-4538-8cff-b507cb9b261d\" (UID: \"d84323bb-bf74-4538-8cff-b507cb9b261d\") " Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.512078 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkjjg\" (UniqueName: \"kubernetes.io/projected/7ebf8342-e0f9-413f-811d-57ca9df94f2d-kube-api-access-rkjjg\") pod \"7ebf8342-e0f9-413f-811d-57ca9df94f2d\" (UID: \"7ebf8342-e0f9-413f-811d-57ca9df94f2d\") " Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.512143 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d84323bb-bf74-4538-8cff-b507cb9b261d-utilities\") pod \"d84323bb-bf74-4538-8cff-b507cb9b261d\" (UID: \"d84323bb-bf74-4538-8cff-b507cb9b261d\") " Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.512164 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b7e32bd-0e0a-49fd-a29e-4c8087218b7a-catalog-content\") pod \"2b7e32bd-0e0a-49fd-a29e-4c8087218b7a\" (UID: \"2b7e32bd-0e0a-49fd-a29e-4c8087218b7a\") " Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.512211 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2r6w\" (UniqueName: \"kubernetes.io/projected/2b7e32bd-0e0a-49fd-a29e-4c8087218b7a-kube-api-access-f2r6w\") pod \"2b7e32bd-0e0a-49fd-a29e-4c8087218b7a\" (UID: \"2b7e32bd-0e0a-49fd-a29e-4c8087218b7a\") " Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.512253 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7ebf8342-e0f9-413f-811d-57ca9df94f2d-marketplace-operator-metrics\") pod \"7ebf8342-e0f9-413f-811d-57ca9df94f2d\" (UID: \"7ebf8342-e0f9-413f-811d-57ca9df94f2d\") " Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.512281 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjtzd\" (UniqueName: \"kubernetes.io/projected/775bf475-9e63-49e0-9bde-bef34dce79c9-kube-api-access-gjtzd\") pod \"775bf475-9e63-49e0-9bde-bef34dce79c9\" (UID: \"775bf475-9e63-49e0-9bde-bef34dce79c9\") " Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.512314 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b7e32bd-0e0a-49fd-a29e-4c8087218b7a-utilities\") pod \"2b7e32bd-0e0a-49fd-a29e-4c8087218b7a\" (UID: \"2b7e32bd-0e0a-49fd-a29e-4c8087218b7a\") " Jan 29 
06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.512331 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djcrp\" (UniqueName: \"kubernetes.io/projected/d84323bb-bf74-4538-8cff-b507cb9b261d-kube-api-access-djcrp\") pod \"d84323bb-bf74-4538-8cff-b507cb9b261d\" (UID: \"d84323bb-bf74-4538-8cff-b507cb9b261d\") " Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.512353 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/775bf475-9e63-49e0-9bde-bef34dce79c9-utilities\") pod \"775bf475-9e63-49e0-9bde-bef34dce79c9\" (UID: \"775bf475-9e63-49e0-9bde-bef34dce79c9\") " Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.512380 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/775bf475-9e63-49e0-9bde-bef34dce79c9-catalog-content\") pod \"775bf475-9e63-49e0-9bde-bef34dce79c9\" (UID: \"775bf475-9e63-49e0-9bde-bef34dce79c9\") " Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.512716 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb065b7c-e4d6-4607-aa51-e8acf00117fa-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.512728 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crv66\" (UniqueName: \"kubernetes.io/projected/eb065b7c-e4d6-4607-aa51-e8acf00117fa-kube-api-access-crv66\") on node \"crc\" DevicePath \"\"" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.512742 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb065b7c-e4d6-4607-aa51-e8acf00117fa-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.513622 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/configmap/7ebf8342-e0f9-413f-811d-57ca9df94f2d-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "7ebf8342-e0f9-413f-811d-57ca9df94f2d" (UID: "7ebf8342-e0f9-413f-811d-57ca9df94f2d"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.514978 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b7e32bd-0e0a-49fd-a29e-4c8087218b7a-utilities" (OuterVolumeSpecName: "utilities") pod "2b7e32bd-0e0a-49fd-a29e-4c8087218b7a" (UID: "2b7e32bd-0e0a-49fd-a29e-4c8087218b7a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.516091 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d84323bb-bf74-4538-8cff-b507cb9b261d-utilities" (OuterVolumeSpecName: "utilities") pod "d84323bb-bf74-4538-8cff-b507cb9b261d" (UID: "d84323bb-bf74-4538-8cff-b507cb9b261d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.517486 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/775bf475-9e63-49e0-9bde-bef34dce79c9-utilities" (OuterVolumeSpecName: "utilities") pod "775bf475-9e63-49e0-9bde-bef34dce79c9" (UID: "775bf475-9e63-49e0-9bde-bef34dce79c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.517823 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ebf8342-e0f9-413f-811d-57ca9df94f2d-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "7ebf8342-e0f9-413f-811d-57ca9df94f2d" (UID: "7ebf8342-e0f9-413f-811d-57ca9df94f2d"). 
InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.518445 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/775bf475-9e63-49e0-9bde-bef34dce79c9-kube-api-access-gjtzd" (OuterVolumeSpecName: "kube-api-access-gjtzd") pod "775bf475-9e63-49e0-9bde-bef34dce79c9" (UID: "775bf475-9e63-49e0-9bde-bef34dce79c9"). InnerVolumeSpecName "kube-api-access-gjtzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.519237 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d84323bb-bf74-4538-8cff-b507cb9b261d-kube-api-access-djcrp" (OuterVolumeSpecName: "kube-api-access-djcrp") pod "d84323bb-bf74-4538-8cff-b507cb9b261d" (UID: "d84323bb-bf74-4538-8cff-b507cb9b261d"). InnerVolumeSpecName "kube-api-access-djcrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.519283 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b7e32bd-0e0a-49fd-a29e-4c8087218b7a-kube-api-access-f2r6w" (OuterVolumeSpecName: "kube-api-access-f2r6w") pod "2b7e32bd-0e0a-49fd-a29e-4c8087218b7a" (UID: "2b7e32bd-0e0a-49fd-a29e-4c8087218b7a"). InnerVolumeSpecName "kube-api-access-f2r6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.519981 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ebf8342-e0f9-413f-811d-57ca9df94f2d-kube-api-access-rkjjg" (OuterVolumeSpecName: "kube-api-access-rkjjg") pod "7ebf8342-e0f9-413f-811d-57ca9df94f2d" (UID: "7ebf8342-e0f9-413f-811d-57ca9df94f2d"). InnerVolumeSpecName "kube-api-access-rkjjg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.541765 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/775bf475-9e63-49e0-9bde-bef34dce79c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "775bf475-9e63-49e0-9bde-bef34dce79c9" (UID: "775bf475-9e63-49e0-9bde-bef34dce79c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.576683 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d84323bb-bf74-4538-8cff-b507cb9b261d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d84323bb-bf74-4538-8cff-b507cb9b261d" (UID: "d84323bb-bf74-4538-8cff-b507cb9b261d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.603317 4826 generic.go:334] "Generic (PLEG): container finished" podID="2b7e32bd-0e0a-49fd-a29e-4c8087218b7a" containerID="3d895f90b224c8fe8be2213ad186be47562803ea80d2eac16cf0c25ec2d4f99c" exitCode=0 Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.603404 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xt89w" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.603418 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xt89w" event={"ID":"2b7e32bd-0e0a-49fd-a29e-4c8087218b7a","Type":"ContainerDied","Data":"3d895f90b224c8fe8be2213ad186be47562803ea80d2eac16cf0c25ec2d4f99c"} Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.603453 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xt89w" event={"ID":"2b7e32bd-0e0a-49fd-a29e-4c8087218b7a","Type":"ContainerDied","Data":"b720454e3f0bbda6945fcfa0dd7596f225953bdbd979ebb31dbbb1d426288bc0"} Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.603471 4826 scope.go:117] "RemoveContainer" containerID="3d895f90b224c8fe8be2213ad186be47562803ea80d2eac16cf0c25ec2d4f99c" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.606631 4826 generic.go:334] "Generic (PLEG): container finished" podID="eb065b7c-e4d6-4607-aa51-e8acf00117fa" containerID="c0bc02934618b56670e02c41ece2d294ccddb2ef03d666340615b38efcc32cc5" exitCode=0 Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.606699 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-brqmx" event={"ID":"eb065b7c-e4d6-4607-aa51-e8acf00117fa","Type":"ContainerDied","Data":"c0bc02934618b56670e02c41ece2d294ccddb2ef03d666340615b38efcc32cc5"} Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.606729 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-brqmx" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.606735 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-brqmx" event={"ID":"eb065b7c-e4d6-4607-aa51-e8acf00117fa","Type":"ContainerDied","Data":"82f3de28f4e602bc6bb07666df61a67355bc9d8820b62606c2eef062d2776766"} Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.610363 4826 generic.go:334] "Generic (PLEG): container finished" podID="7ebf8342-e0f9-413f-811d-57ca9df94f2d" containerID="306b63cc65010e89b7909e7be3d24c83a459eb1133b5d64d4490a4a5498010e5" exitCode=0 Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.610442 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bmqnb" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.610432 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bmqnb" event={"ID":"7ebf8342-e0f9-413f-811d-57ca9df94f2d","Type":"ContainerDied","Data":"306b63cc65010e89b7909e7be3d24c83a459eb1133b5d64d4490a4a5498010e5"} Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.610632 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bmqnb" event={"ID":"7ebf8342-e0f9-413f-811d-57ca9df94f2d","Type":"ContainerDied","Data":"e42832eec1ee0c0543eb84741388e15eb1fce8d0d5d12443978a7d970704b984"} Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.613699 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkjjg\" (UniqueName: \"kubernetes.io/projected/7ebf8342-e0f9-413f-811d-57ca9df94f2d-kube-api-access-rkjjg\") on node \"crc\" DevicePath \"\"" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.613724 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d84323bb-bf74-4538-8cff-b507cb9b261d-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.613734 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2r6w\" (UniqueName: \"kubernetes.io/projected/2b7e32bd-0e0a-49fd-a29e-4c8087218b7a-kube-api-access-f2r6w\") on node \"crc\" DevicePath \"\"" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.613742 4826 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7ebf8342-e0f9-413f-811d-57ca9df94f2d-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.613751 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjtzd\" (UniqueName: \"kubernetes.io/projected/775bf475-9e63-49e0-9bde-bef34dce79c9-kube-api-access-gjtzd\") on node \"crc\" DevicePath \"\"" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.613760 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b7e32bd-0e0a-49fd-a29e-4c8087218b7a-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.613769 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djcrp\" (UniqueName: \"kubernetes.io/projected/d84323bb-bf74-4538-8cff-b507cb9b261d-kube-api-access-djcrp\") on node \"crc\" DevicePath \"\"" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.613777 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/775bf475-9e63-49e0-9bde-bef34dce79c9-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.613785 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/775bf475-9e63-49e0-9bde-bef34dce79c9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.613818 4826 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7ebf8342-e0f9-413f-811d-57ca9df94f2d-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.613828 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d84323bb-bf74-4538-8cff-b507cb9b261d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.617327 4826 scope.go:117] "RemoveContainer" containerID="b06d4a57a2fdc6d072a97c3e703a28492b72a415b68fa6d36b0745a41643ca2e" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.617632 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8g6tv" event={"ID":"720ff77a-51ee-49c7-8678-4d6d9f179942","Type":"ContainerStarted","Data":"aa8eb4e2f8659c2cc43d82167128b4d9cf27407a4e485787604e547e58af9360"} Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.617676 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8g6tv" event={"ID":"720ff77a-51ee-49c7-8678-4d6d9f179942","Type":"ContainerStarted","Data":"4cc2705b4213b947c694d2761a5de5749f024bf96c4e1de6f010b4a59a4bfb63"} Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.617902 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-8g6tv" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.619163 4826 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-8g6tv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.67:8080/healthz\": 
dial tcp 10.217.0.67:8080: connect: connection refused" start-of-body= Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.619412 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-8g6tv" podUID="720ff77a-51ee-49c7-8678-4d6d9f179942" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.67:8080/healthz\": dial tcp 10.217.0.67:8080: connect: connection refused" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.637943 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-8g6tv" podStartSLOduration=1.637920849 podStartE2EDuration="1.637920849s" podCreationTimestamp="2026-01-29 06:49:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:49:05.630007015 +0000 UTC m=+329.491800104" watchObservedRunningTime="2026-01-29 06:49:05.637920849 +0000 UTC m=+329.499713918" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.640432 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dmn6q" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.640688 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmn6q" event={"ID":"d84323bb-bf74-4538-8cff-b507cb9b261d","Type":"ContainerDied","Data":"78b8cbdb5ea73d6bbd4b1371a4d7f2a056ce489f4cda0047214a0673ccadc7c6"} Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.645394 4826 generic.go:334] "Generic (PLEG): container finished" podID="d84323bb-bf74-4538-8cff-b507cb9b261d" containerID="78b8cbdb5ea73d6bbd4b1371a4d7f2a056ce489f4cda0047214a0673ccadc7c6" exitCode=0 Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.645547 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmn6q" event={"ID":"d84323bb-bf74-4538-8cff-b507cb9b261d","Type":"ContainerDied","Data":"c5a5916dc5412383ba1b1fb5534ee99b9ebeb7a738b3ec00020ccaa0e876b3ea"} Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.647639 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bmqnb"] Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.650574 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bmqnb"] Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.652988 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vvbnx" event={"ID":"775bf475-9e63-49e0-9bde-bef34dce79c9","Type":"ContainerDied","Data":"4a9f2e7beae972051dca2ae472c02d8a51bbb70ccee034b2e3b68cae66197bfd"} Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.653010 4826 generic.go:334] "Generic (PLEG): container finished" podID="775bf475-9e63-49e0-9bde-bef34dce79c9" containerID="4a9f2e7beae972051dca2ae472c02d8a51bbb70ccee034b2e3b68cae66197bfd" exitCode=0 Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.653199 4826 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vvbnx" event={"ID":"775bf475-9e63-49e0-9bde-bef34dce79c9","Type":"ContainerDied","Data":"5c22df299b9076d31cec4b7cd353ba23386b074977940ccced5af207703f1646"} Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.653816 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vvbnx" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.661803 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b7e32bd-0e0a-49fd-a29e-4c8087218b7a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b7e32bd-0e0a-49fd-a29e-4c8087218b7a" (UID: "2b7e32bd-0e0a-49fd-a29e-4c8087218b7a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.676985 4826 scope.go:117] "RemoveContainer" containerID="b8a10173bc4e7821db0e0fb791583089acb8b1c14a3d19219aa89c620b683a17" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.678020 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-brqmx"] Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.682851 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-brqmx"] Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.696708 4826 scope.go:117] "RemoveContainer" containerID="3d895f90b224c8fe8be2213ad186be47562803ea80d2eac16cf0c25ec2d4f99c" Jan 29 06:49:05 crc kubenswrapper[4826]: E0129 06:49:05.701210 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d895f90b224c8fe8be2213ad186be47562803ea80d2eac16cf0c25ec2d4f99c\": container with ID starting with 3d895f90b224c8fe8be2213ad186be47562803ea80d2eac16cf0c25ec2d4f99c not found: ID does not exist" 
containerID="3d895f90b224c8fe8be2213ad186be47562803ea80d2eac16cf0c25ec2d4f99c" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.701257 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d895f90b224c8fe8be2213ad186be47562803ea80d2eac16cf0c25ec2d4f99c"} err="failed to get container status \"3d895f90b224c8fe8be2213ad186be47562803ea80d2eac16cf0c25ec2d4f99c\": rpc error: code = NotFound desc = could not find container \"3d895f90b224c8fe8be2213ad186be47562803ea80d2eac16cf0c25ec2d4f99c\": container with ID starting with 3d895f90b224c8fe8be2213ad186be47562803ea80d2eac16cf0c25ec2d4f99c not found: ID does not exist" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.701286 4826 scope.go:117] "RemoveContainer" containerID="b06d4a57a2fdc6d072a97c3e703a28492b72a415b68fa6d36b0745a41643ca2e" Jan 29 06:49:05 crc kubenswrapper[4826]: E0129 06:49:05.701638 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b06d4a57a2fdc6d072a97c3e703a28492b72a415b68fa6d36b0745a41643ca2e\": container with ID starting with b06d4a57a2fdc6d072a97c3e703a28492b72a415b68fa6d36b0745a41643ca2e not found: ID does not exist" containerID="b06d4a57a2fdc6d072a97c3e703a28492b72a415b68fa6d36b0745a41643ca2e" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.701677 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b06d4a57a2fdc6d072a97c3e703a28492b72a415b68fa6d36b0745a41643ca2e"} err="failed to get container status \"b06d4a57a2fdc6d072a97c3e703a28492b72a415b68fa6d36b0745a41643ca2e\": rpc error: code = NotFound desc = could not find container \"b06d4a57a2fdc6d072a97c3e703a28492b72a415b68fa6d36b0745a41643ca2e\": container with ID starting with b06d4a57a2fdc6d072a97c3e703a28492b72a415b68fa6d36b0745a41643ca2e not found: ID does not exist" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.701727 4826 scope.go:117] 
"RemoveContainer" containerID="b8a10173bc4e7821db0e0fb791583089acb8b1c14a3d19219aa89c620b683a17" Jan 29 06:49:05 crc kubenswrapper[4826]: E0129 06:49:05.702002 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8a10173bc4e7821db0e0fb791583089acb8b1c14a3d19219aa89c620b683a17\": container with ID starting with b8a10173bc4e7821db0e0fb791583089acb8b1c14a3d19219aa89c620b683a17 not found: ID does not exist" containerID="b8a10173bc4e7821db0e0fb791583089acb8b1c14a3d19219aa89c620b683a17" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.702047 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8a10173bc4e7821db0e0fb791583089acb8b1c14a3d19219aa89c620b683a17"} err="failed to get container status \"b8a10173bc4e7821db0e0fb791583089acb8b1c14a3d19219aa89c620b683a17\": rpc error: code = NotFound desc = could not find container \"b8a10173bc4e7821db0e0fb791583089acb8b1c14a3d19219aa89c620b683a17\": container with ID starting with b8a10173bc4e7821db0e0fb791583089acb8b1c14a3d19219aa89c620b683a17 not found: ID does not exist" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.702068 4826 scope.go:117] "RemoveContainer" containerID="c0bc02934618b56670e02c41ece2d294ccddb2ef03d666340615b38efcc32cc5" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.714689 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b7e32bd-0e0a-49fd-a29e-4c8087218b7a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.715864 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dmn6q"] Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.721489 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dmn6q"] Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 
06:49:05.723637 4826 scope.go:117] "RemoveContainer" containerID="0a938e388c8c25ec14c47bf893ba63529c38b0c2acff9e379bab2a5505fc5bf3" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.726379 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vvbnx"] Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.729152 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vvbnx"] Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.741997 4826 scope.go:117] "RemoveContainer" containerID="c55cff804a612afac504c9d39496fadbef4898ffcb86b0e1c7800a3ae82d9c5d" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.752732 4826 scope.go:117] "RemoveContainer" containerID="c0bc02934618b56670e02c41ece2d294ccddb2ef03d666340615b38efcc32cc5" Jan 29 06:49:05 crc kubenswrapper[4826]: E0129 06:49:05.753050 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0bc02934618b56670e02c41ece2d294ccddb2ef03d666340615b38efcc32cc5\": container with ID starting with c0bc02934618b56670e02c41ece2d294ccddb2ef03d666340615b38efcc32cc5 not found: ID does not exist" containerID="c0bc02934618b56670e02c41ece2d294ccddb2ef03d666340615b38efcc32cc5" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.753080 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0bc02934618b56670e02c41ece2d294ccddb2ef03d666340615b38efcc32cc5"} err="failed to get container status \"c0bc02934618b56670e02c41ece2d294ccddb2ef03d666340615b38efcc32cc5\": rpc error: code = NotFound desc = could not find container \"c0bc02934618b56670e02c41ece2d294ccddb2ef03d666340615b38efcc32cc5\": container with ID starting with c0bc02934618b56670e02c41ece2d294ccddb2ef03d666340615b38efcc32cc5 not found: ID does not exist" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.753101 4826 scope.go:117] "RemoveContainer" 
containerID="0a938e388c8c25ec14c47bf893ba63529c38b0c2acff9e379bab2a5505fc5bf3" Jan 29 06:49:05 crc kubenswrapper[4826]: E0129 06:49:05.753425 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a938e388c8c25ec14c47bf893ba63529c38b0c2acff9e379bab2a5505fc5bf3\": container with ID starting with 0a938e388c8c25ec14c47bf893ba63529c38b0c2acff9e379bab2a5505fc5bf3 not found: ID does not exist" containerID="0a938e388c8c25ec14c47bf893ba63529c38b0c2acff9e379bab2a5505fc5bf3" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.753466 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a938e388c8c25ec14c47bf893ba63529c38b0c2acff9e379bab2a5505fc5bf3"} err="failed to get container status \"0a938e388c8c25ec14c47bf893ba63529c38b0c2acff9e379bab2a5505fc5bf3\": rpc error: code = NotFound desc = could not find container \"0a938e388c8c25ec14c47bf893ba63529c38b0c2acff9e379bab2a5505fc5bf3\": container with ID starting with 0a938e388c8c25ec14c47bf893ba63529c38b0c2acff9e379bab2a5505fc5bf3 not found: ID does not exist" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.753490 4826 scope.go:117] "RemoveContainer" containerID="c55cff804a612afac504c9d39496fadbef4898ffcb86b0e1c7800a3ae82d9c5d" Jan 29 06:49:05 crc kubenswrapper[4826]: E0129 06:49:05.753709 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c55cff804a612afac504c9d39496fadbef4898ffcb86b0e1c7800a3ae82d9c5d\": container with ID starting with c55cff804a612afac504c9d39496fadbef4898ffcb86b0e1c7800a3ae82d9c5d not found: ID does not exist" containerID="c55cff804a612afac504c9d39496fadbef4898ffcb86b0e1c7800a3ae82d9c5d" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.753751 4826 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c55cff804a612afac504c9d39496fadbef4898ffcb86b0e1c7800a3ae82d9c5d"} err="failed to get container status \"c55cff804a612afac504c9d39496fadbef4898ffcb86b0e1c7800a3ae82d9c5d\": rpc error: code = NotFound desc = could not find container \"c55cff804a612afac504c9d39496fadbef4898ffcb86b0e1c7800a3ae82d9c5d\": container with ID starting with c55cff804a612afac504c9d39496fadbef4898ffcb86b0e1c7800a3ae82d9c5d not found: ID does not exist" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.753765 4826 scope.go:117] "RemoveContainer" containerID="306b63cc65010e89b7909e7be3d24c83a459eb1133b5d64d4490a4a5498010e5" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.763024 4826 scope.go:117] "RemoveContainer" containerID="9d13a267560333a1d187c96aabdd7d32057bd22f83d5e3f44fb9d50b46b66260" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.775045 4826 scope.go:117] "RemoveContainer" containerID="306b63cc65010e89b7909e7be3d24c83a459eb1133b5d64d4490a4a5498010e5" Jan 29 06:49:05 crc kubenswrapper[4826]: E0129 06:49:05.775428 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"306b63cc65010e89b7909e7be3d24c83a459eb1133b5d64d4490a4a5498010e5\": container with ID starting with 306b63cc65010e89b7909e7be3d24c83a459eb1133b5d64d4490a4a5498010e5 not found: ID does not exist" containerID="306b63cc65010e89b7909e7be3d24c83a459eb1133b5d64d4490a4a5498010e5" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.775466 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"306b63cc65010e89b7909e7be3d24c83a459eb1133b5d64d4490a4a5498010e5"} err="failed to get container status \"306b63cc65010e89b7909e7be3d24c83a459eb1133b5d64d4490a4a5498010e5\": rpc error: code = NotFound desc = could not find container \"306b63cc65010e89b7909e7be3d24c83a459eb1133b5d64d4490a4a5498010e5\": container with ID starting with 
306b63cc65010e89b7909e7be3d24c83a459eb1133b5d64d4490a4a5498010e5 not found: ID does not exist" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.775509 4826 scope.go:117] "RemoveContainer" containerID="9d13a267560333a1d187c96aabdd7d32057bd22f83d5e3f44fb9d50b46b66260" Jan 29 06:49:05 crc kubenswrapper[4826]: E0129 06:49:05.775790 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d13a267560333a1d187c96aabdd7d32057bd22f83d5e3f44fb9d50b46b66260\": container with ID starting with 9d13a267560333a1d187c96aabdd7d32057bd22f83d5e3f44fb9d50b46b66260 not found: ID does not exist" containerID="9d13a267560333a1d187c96aabdd7d32057bd22f83d5e3f44fb9d50b46b66260" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.775830 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d13a267560333a1d187c96aabdd7d32057bd22f83d5e3f44fb9d50b46b66260"} err="failed to get container status \"9d13a267560333a1d187c96aabdd7d32057bd22f83d5e3f44fb9d50b46b66260\": rpc error: code = NotFound desc = could not find container \"9d13a267560333a1d187c96aabdd7d32057bd22f83d5e3f44fb9d50b46b66260\": container with ID starting with 9d13a267560333a1d187c96aabdd7d32057bd22f83d5e3f44fb9d50b46b66260 not found: ID does not exist" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.775851 4826 scope.go:117] "RemoveContainer" containerID="78b8cbdb5ea73d6bbd4b1371a4d7f2a056ce489f4cda0047214a0673ccadc7c6" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.790164 4826 scope.go:117] "RemoveContainer" containerID="ef83680496d460896de72721893d3b78d6d9ca46519d0631cc6a3f2f70a719ed" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.806150 4826 scope.go:117] "RemoveContainer" containerID="9382e8aa609a0a4b952fd37c5764f24f87a328c8fb04035fe80cd225ee14f2ab" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.818495 4826 scope.go:117] "RemoveContainer" 
containerID="78b8cbdb5ea73d6bbd4b1371a4d7f2a056ce489f4cda0047214a0673ccadc7c6" Jan 29 06:49:05 crc kubenswrapper[4826]: E0129 06:49:05.818771 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78b8cbdb5ea73d6bbd4b1371a4d7f2a056ce489f4cda0047214a0673ccadc7c6\": container with ID starting with 78b8cbdb5ea73d6bbd4b1371a4d7f2a056ce489f4cda0047214a0673ccadc7c6 not found: ID does not exist" containerID="78b8cbdb5ea73d6bbd4b1371a4d7f2a056ce489f4cda0047214a0673ccadc7c6" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.818813 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78b8cbdb5ea73d6bbd4b1371a4d7f2a056ce489f4cda0047214a0673ccadc7c6"} err="failed to get container status \"78b8cbdb5ea73d6bbd4b1371a4d7f2a056ce489f4cda0047214a0673ccadc7c6\": rpc error: code = NotFound desc = could not find container \"78b8cbdb5ea73d6bbd4b1371a4d7f2a056ce489f4cda0047214a0673ccadc7c6\": container with ID starting with 78b8cbdb5ea73d6bbd4b1371a4d7f2a056ce489f4cda0047214a0673ccadc7c6 not found: ID does not exist" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.818832 4826 scope.go:117] "RemoveContainer" containerID="ef83680496d460896de72721893d3b78d6d9ca46519d0631cc6a3f2f70a719ed" Jan 29 06:49:05 crc kubenswrapper[4826]: E0129 06:49:05.819128 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef83680496d460896de72721893d3b78d6d9ca46519d0631cc6a3f2f70a719ed\": container with ID starting with ef83680496d460896de72721893d3b78d6d9ca46519d0631cc6a3f2f70a719ed not found: ID does not exist" containerID="ef83680496d460896de72721893d3b78d6d9ca46519d0631cc6a3f2f70a719ed" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.819165 4826 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ef83680496d460896de72721893d3b78d6d9ca46519d0631cc6a3f2f70a719ed"} err="failed to get container status \"ef83680496d460896de72721893d3b78d6d9ca46519d0631cc6a3f2f70a719ed\": rpc error: code = NotFound desc = could not find container \"ef83680496d460896de72721893d3b78d6d9ca46519d0631cc6a3f2f70a719ed\": container with ID starting with ef83680496d460896de72721893d3b78d6d9ca46519d0631cc6a3f2f70a719ed not found: ID does not exist" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.819180 4826 scope.go:117] "RemoveContainer" containerID="9382e8aa609a0a4b952fd37c5764f24f87a328c8fb04035fe80cd225ee14f2ab" Jan 29 06:49:05 crc kubenswrapper[4826]: E0129 06:49:05.819492 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9382e8aa609a0a4b952fd37c5764f24f87a328c8fb04035fe80cd225ee14f2ab\": container with ID starting with 9382e8aa609a0a4b952fd37c5764f24f87a328c8fb04035fe80cd225ee14f2ab not found: ID does not exist" containerID="9382e8aa609a0a4b952fd37c5764f24f87a328c8fb04035fe80cd225ee14f2ab" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.819531 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9382e8aa609a0a4b952fd37c5764f24f87a328c8fb04035fe80cd225ee14f2ab"} err="failed to get container status \"9382e8aa609a0a4b952fd37c5764f24f87a328c8fb04035fe80cd225ee14f2ab\": rpc error: code = NotFound desc = could not find container \"9382e8aa609a0a4b952fd37c5764f24f87a328c8fb04035fe80cd225ee14f2ab\": container with ID starting with 9382e8aa609a0a4b952fd37c5764f24f87a328c8fb04035fe80cd225ee14f2ab not found: ID does not exist" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.819556 4826 scope.go:117] "RemoveContainer" containerID="4a9f2e7beae972051dca2ae472c02d8a51bbb70ccee034b2e3b68cae66197bfd" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.831025 4826 scope.go:117] "RemoveContainer" 
containerID="c9ef883ef9db777685b949347e7f0bc07c24255c673366692dc014d0673ec2b9" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.843155 4826 scope.go:117] "RemoveContainer" containerID="898ac7973139c03c3de08a31213d18800bb6c9037147321302205d22532e7482" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.859891 4826 scope.go:117] "RemoveContainer" containerID="4a9f2e7beae972051dca2ae472c02d8a51bbb70ccee034b2e3b68cae66197bfd" Jan 29 06:49:05 crc kubenswrapper[4826]: E0129 06:49:05.860600 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a9f2e7beae972051dca2ae472c02d8a51bbb70ccee034b2e3b68cae66197bfd\": container with ID starting with 4a9f2e7beae972051dca2ae472c02d8a51bbb70ccee034b2e3b68cae66197bfd not found: ID does not exist" containerID="4a9f2e7beae972051dca2ae472c02d8a51bbb70ccee034b2e3b68cae66197bfd" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.860638 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a9f2e7beae972051dca2ae472c02d8a51bbb70ccee034b2e3b68cae66197bfd"} err="failed to get container status \"4a9f2e7beae972051dca2ae472c02d8a51bbb70ccee034b2e3b68cae66197bfd\": rpc error: code = NotFound desc = could not find container \"4a9f2e7beae972051dca2ae472c02d8a51bbb70ccee034b2e3b68cae66197bfd\": container with ID starting with 4a9f2e7beae972051dca2ae472c02d8a51bbb70ccee034b2e3b68cae66197bfd not found: ID does not exist" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.860662 4826 scope.go:117] "RemoveContainer" containerID="c9ef883ef9db777685b949347e7f0bc07c24255c673366692dc014d0673ec2b9" Jan 29 06:49:05 crc kubenswrapper[4826]: E0129 06:49:05.861021 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9ef883ef9db777685b949347e7f0bc07c24255c673366692dc014d0673ec2b9\": container with ID starting with 
c9ef883ef9db777685b949347e7f0bc07c24255c673366692dc014d0673ec2b9 not found: ID does not exist" containerID="c9ef883ef9db777685b949347e7f0bc07c24255c673366692dc014d0673ec2b9" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.861050 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9ef883ef9db777685b949347e7f0bc07c24255c673366692dc014d0673ec2b9"} err="failed to get container status \"c9ef883ef9db777685b949347e7f0bc07c24255c673366692dc014d0673ec2b9\": rpc error: code = NotFound desc = could not find container \"c9ef883ef9db777685b949347e7f0bc07c24255c673366692dc014d0673ec2b9\": container with ID starting with c9ef883ef9db777685b949347e7f0bc07c24255c673366692dc014d0673ec2b9 not found: ID does not exist" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.861065 4826 scope.go:117] "RemoveContainer" containerID="898ac7973139c03c3de08a31213d18800bb6c9037147321302205d22532e7482" Jan 29 06:49:05 crc kubenswrapper[4826]: E0129 06:49:05.861414 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"898ac7973139c03c3de08a31213d18800bb6c9037147321302205d22532e7482\": container with ID starting with 898ac7973139c03c3de08a31213d18800bb6c9037147321302205d22532e7482 not found: ID does not exist" containerID="898ac7973139c03c3de08a31213d18800bb6c9037147321302205d22532e7482" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.861471 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"898ac7973139c03c3de08a31213d18800bb6c9037147321302205d22532e7482"} err="failed to get container status \"898ac7973139c03c3de08a31213d18800bb6c9037147321302205d22532e7482\": rpc error: code = NotFound desc = could not find container \"898ac7973139c03c3de08a31213d18800bb6c9037147321302205d22532e7482\": container with ID starting with 898ac7973139c03c3de08a31213d18800bb6c9037147321302205d22532e7482 not found: ID does not 
exist" Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.931188 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xt89w"] Jan 29 06:49:05 crc kubenswrapper[4826]: I0129 06:49:05.935144 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xt89w"] Jan 29 06:49:06 crc kubenswrapper[4826]: I0129 06:49:06.362389 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gdm6z"] Jan 29 06:49:06 crc kubenswrapper[4826]: E0129 06:49:06.363040 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b7e32bd-0e0a-49fd-a29e-4c8087218b7a" containerName="extract-utilities" Jan 29 06:49:06 crc kubenswrapper[4826]: I0129 06:49:06.363520 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b7e32bd-0e0a-49fd-a29e-4c8087218b7a" containerName="extract-utilities" Jan 29 06:49:06 crc kubenswrapper[4826]: E0129 06:49:06.363655 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ebf8342-e0f9-413f-811d-57ca9df94f2d" containerName="marketplace-operator" Jan 29 06:49:06 crc kubenswrapper[4826]: I0129 06:49:06.363822 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ebf8342-e0f9-413f-811d-57ca9df94f2d" containerName="marketplace-operator" Jan 29 06:49:06 crc kubenswrapper[4826]: E0129 06:49:06.363970 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ebf8342-e0f9-413f-811d-57ca9df94f2d" containerName="marketplace-operator" Jan 29 06:49:06 crc kubenswrapper[4826]: I0129 06:49:06.364095 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ebf8342-e0f9-413f-811d-57ca9df94f2d" containerName="marketplace-operator" Jan 29 06:49:06 crc kubenswrapper[4826]: E0129 06:49:06.364234 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="775bf475-9e63-49e0-9bde-bef34dce79c9" containerName="registry-server" Jan 29 06:49:06 crc kubenswrapper[4826]: I0129 06:49:06.364414 
4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="775bf475-9e63-49e0-9bde-bef34dce79c9" containerName="registry-server" Jan 29 06:49:06 crc kubenswrapper[4826]: E0129 06:49:06.364553 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb065b7c-e4d6-4607-aa51-e8acf00117fa" containerName="registry-server" Jan 29 06:49:06 crc kubenswrapper[4826]: I0129 06:49:06.364683 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb065b7c-e4d6-4607-aa51-e8acf00117fa" containerName="registry-server" Jan 29 06:49:06 crc kubenswrapper[4826]: E0129 06:49:06.364820 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb065b7c-e4d6-4607-aa51-e8acf00117fa" containerName="extract-utilities" Jan 29 06:49:06 crc kubenswrapper[4826]: I0129 06:49:06.364957 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb065b7c-e4d6-4607-aa51-e8acf00117fa" containerName="extract-utilities" Jan 29 06:49:06 crc kubenswrapper[4826]: E0129 06:49:06.365091 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb065b7c-e4d6-4607-aa51-e8acf00117fa" containerName="extract-content" Jan 29 06:49:06 crc kubenswrapper[4826]: I0129 06:49:06.365203 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb065b7c-e4d6-4607-aa51-e8acf00117fa" containerName="extract-content" Jan 29 06:49:06 crc kubenswrapper[4826]: E0129 06:49:06.365355 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b7e32bd-0e0a-49fd-a29e-4c8087218b7a" containerName="registry-server" Jan 29 06:49:06 crc kubenswrapper[4826]: I0129 06:49:06.365475 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b7e32bd-0e0a-49fd-a29e-4c8087218b7a" containerName="registry-server" Jan 29 06:49:06 crc kubenswrapper[4826]: E0129 06:49:06.365705 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d84323bb-bf74-4538-8cff-b507cb9b261d" containerName="extract-utilities" Jan 29 06:49:06 crc kubenswrapper[4826]: I0129 06:49:06.365823 4826 
state_mem.go:107] "Deleted CPUSet assignment" podUID="d84323bb-bf74-4538-8cff-b507cb9b261d" containerName="extract-utilities" Jan 29 06:49:06 crc kubenswrapper[4826]: E0129 06:49:06.365944 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="775bf475-9e63-49e0-9bde-bef34dce79c9" containerName="extract-content" Jan 29 06:49:06 crc kubenswrapper[4826]: I0129 06:49:06.366110 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="775bf475-9e63-49e0-9bde-bef34dce79c9" containerName="extract-content" Jan 29 06:49:06 crc kubenswrapper[4826]: E0129 06:49:06.366278 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b7e32bd-0e0a-49fd-a29e-4c8087218b7a" containerName="extract-content" Jan 29 06:49:06 crc kubenswrapper[4826]: I0129 06:49:06.366433 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b7e32bd-0e0a-49fd-a29e-4c8087218b7a" containerName="extract-content" Jan 29 06:49:06 crc kubenswrapper[4826]: E0129 06:49:06.366576 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d84323bb-bf74-4538-8cff-b507cb9b261d" containerName="registry-server" Jan 29 06:49:06 crc kubenswrapper[4826]: I0129 06:49:06.366694 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="d84323bb-bf74-4538-8cff-b507cb9b261d" containerName="registry-server" Jan 29 06:49:06 crc kubenswrapper[4826]: E0129 06:49:06.366826 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="775bf475-9e63-49e0-9bde-bef34dce79c9" containerName="extract-utilities" Jan 29 06:49:06 crc kubenswrapper[4826]: I0129 06:49:06.366949 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="775bf475-9e63-49e0-9bde-bef34dce79c9" containerName="extract-utilities" Jan 29 06:49:06 crc kubenswrapper[4826]: E0129 06:49:06.367067 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d84323bb-bf74-4538-8cff-b507cb9b261d" containerName="extract-content" Jan 29 06:49:06 crc kubenswrapper[4826]: I0129 06:49:06.367180 4826 
state_mem.go:107] "Deleted CPUSet assignment" podUID="d84323bb-bf74-4538-8cff-b507cb9b261d" containerName="extract-content" Jan 29 06:49:06 crc kubenswrapper[4826]: I0129 06:49:06.367508 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ebf8342-e0f9-413f-811d-57ca9df94f2d" containerName="marketplace-operator" Jan 29 06:49:06 crc kubenswrapper[4826]: I0129 06:49:06.367701 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb065b7c-e4d6-4607-aa51-e8acf00117fa" containerName="registry-server" Jan 29 06:49:06 crc kubenswrapper[4826]: I0129 06:49:06.367830 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ebf8342-e0f9-413f-811d-57ca9df94f2d" containerName="marketplace-operator" Jan 29 06:49:06 crc kubenswrapper[4826]: I0129 06:49:06.367962 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="775bf475-9e63-49e0-9bde-bef34dce79c9" containerName="registry-server" Jan 29 06:49:06 crc kubenswrapper[4826]: I0129 06:49:06.368099 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b7e32bd-0e0a-49fd-a29e-4c8087218b7a" containerName="registry-server" Jan 29 06:49:06 crc kubenswrapper[4826]: I0129 06:49:06.368223 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="d84323bb-bf74-4538-8cff-b507cb9b261d" containerName="registry-server" Jan 29 06:49:06 crc kubenswrapper[4826]: I0129 06:49:06.379407 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gdm6z"] Jan 29 06:49:06 crc kubenswrapper[4826]: I0129 06:49:06.406803 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gdm6z" Jan 29 06:49:06 crc kubenswrapper[4826]: I0129 06:49:06.411699 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 29 06:49:06 crc kubenswrapper[4826]: I0129 06:49:06.526685 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26778232-3c9d-4c90-9f32-a7a0dc0e87b4-catalog-content\") pod \"community-operators-gdm6z\" (UID: \"26778232-3c9d-4c90-9f32-a7a0dc0e87b4\") " pod="openshift-marketplace/community-operators-gdm6z" Jan 29 06:49:06 crc kubenswrapper[4826]: I0129 06:49:06.526768 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26778232-3c9d-4c90-9f32-a7a0dc0e87b4-utilities\") pod \"community-operators-gdm6z\" (UID: \"26778232-3c9d-4c90-9f32-a7a0dc0e87b4\") " pod="openshift-marketplace/community-operators-gdm6z" Jan 29 06:49:06 crc kubenswrapper[4826]: I0129 06:49:06.526795 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqxdg\" (UniqueName: \"kubernetes.io/projected/26778232-3c9d-4c90-9f32-a7a0dc0e87b4-kube-api-access-xqxdg\") pod \"community-operators-gdm6z\" (UID: \"26778232-3c9d-4c90-9f32-a7a0dc0e87b4\") " pod="openshift-marketplace/community-operators-gdm6z" Jan 29 06:49:06 crc kubenswrapper[4826]: I0129 06:49:06.628749 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26778232-3c9d-4c90-9f32-a7a0dc0e87b4-catalog-content\") pod \"community-operators-gdm6z\" (UID: \"26778232-3c9d-4c90-9f32-a7a0dc0e87b4\") " pod="openshift-marketplace/community-operators-gdm6z" Jan 29 06:49:06 crc kubenswrapper[4826]: I0129 06:49:06.629263 4826 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26778232-3c9d-4c90-9f32-a7a0dc0e87b4-utilities\") pod \"community-operators-gdm6z\" (UID: \"26778232-3c9d-4c90-9f32-a7a0dc0e87b4\") " pod="openshift-marketplace/community-operators-gdm6z" Jan 29 06:49:06 crc kubenswrapper[4826]: I0129 06:49:06.629489 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqxdg\" (UniqueName: \"kubernetes.io/projected/26778232-3c9d-4c90-9f32-a7a0dc0e87b4-kube-api-access-xqxdg\") pod \"community-operators-gdm6z\" (UID: \"26778232-3c9d-4c90-9f32-a7a0dc0e87b4\") " pod="openshift-marketplace/community-operators-gdm6z" Jan 29 06:49:06 crc kubenswrapper[4826]: I0129 06:49:06.629810 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26778232-3c9d-4c90-9f32-a7a0dc0e87b4-catalog-content\") pod \"community-operators-gdm6z\" (UID: \"26778232-3c9d-4c90-9f32-a7a0dc0e87b4\") " pod="openshift-marketplace/community-operators-gdm6z" Jan 29 06:49:06 crc kubenswrapper[4826]: I0129 06:49:06.630070 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26778232-3c9d-4c90-9f32-a7a0dc0e87b4-utilities\") pod \"community-operators-gdm6z\" (UID: \"26778232-3c9d-4c90-9f32-a7a0dc0e87b4\") " pod="openshift-marketplace/community-operators-gdm6z" Jan 29 06:49:06 crc kubenswrapper[4826]: I0129 06:49:06.650280 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqxdg\" (UniqueName: \"kubernetes.io/projected/26778232-3c9d-4c90-9f32-a7a0dc0e87b4-kube-api-access-xqxdg\") pod \"community-operators-gdm6z\" (UID: \"26778232-3c9d-4c90-9f32-a7a0dc0e87b4\") " pod="openshift-marketplace/community-operators-gdm6z" Jan 29 06:49:06 crc kubenswrapper[4826]: I0129 06:49:06.671001 4826 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-8g6tv" Jan 29 06:49:06 crc kubenswrapper[4826]: I0129 06:49:06.732107 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gdm6z" Jan 29 06:49:06 crc kubenswrapper[4826]: I0129 06:49:06.818705 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b7e32bd-0e0a-49fd-a29e-4c8087218b7a" path="/var/lib/kubelet/pods/2b7e32bd-0e0a-49fd-a29e-4c8087218b7a/volumes" Jan 29 06:49:06 crc kubenswrapper[4826]: I0129 06:49:06.820753 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="775bf475-9e63-49e0-9bde-bef34dce79c9" path="/var/lib/kubelet/pods/775bf475-9e63-49e0-9bde-bef34dce79c9/volumes" Jan 29 06:49:06 crc kubenswrapper[4826]: I0129 06:49:06.823171 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ebf8342-e0f9-413f-811d-57ca9df94f2d" path="/var/lib/kubelet/pods/7ebf8342-e0f9-413f-811d-57ca9df94f2d/volumes" Jan 29 06:49:06 crc kubenswrapper[4826]: I0129 06:49:06.838029 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d84323bb-bf74-4538-8cff-b507cb9b261d" path="/var/lib/kubelet/pods/d84323bb-bf74-4538-8cff-b507cb9b261d/volumes" Jan 29 06:49:06 crc kubenswrapper[4826]: I0129 06:49:06.838848 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb065b7c-e4d6-4607-aa51-e8acf00117fa" path="/var/lib/kubelet/pods/eb065b7c-e4d6-4607-aa51-e8acf00117fa/volumes" Jan 29 06:49:07 crc kubenswrapper[4826]: I0129 06:49:07.144458 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gdm6z"] Jan 29 06:49:07 crc kubenswrapper[4826]: I0129 06:49:07.676080 4826 generic.go:334] "Generic (PLEG): container finished" podID="26778232-3c9d-4c90-9f32-a7a0dc0e87b4" containerID="435eb9836e1d1fef285787c2f9600fb82ffbd5d867cd5dcbb632a1b16c6a7100" exitCode=0 Jan 29 06:49:07 crc 
kubenswrapper[4826]: I0129 06:49:07.678820 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdm6z" event={"ID":"26778232-3c9d-4c90-9f32-a7a0dc0e87b4","Type":"ContainerDied","Data":"435eb9836e1d1fef285787c2f9600fb82ffbd5d867cd5dcbb632a1b16c6a7100"} Jan 29 06:49:07 crc kubenswrapper[4826]: I0129 06:49:07.678868 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdm6z" event={"ID":"26778232-3c9d-4c90-9f32-a7a0dc0e87b4","Type":"ContainerStarted","Data":"c7c16612bed59252d2f926219d51624c125b3d208ff1676761e0a6e39e13c04d"} Jan 29 06:49:07 crc kubenswrapper[4826]: I0129 06:49:07.763409 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kpmf6"] Jan 29 06:49:07 crc kubenswrapper[4826]: I0129 06:49:07.764655 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kpmf6" Jan 29 06:49:07 crc kubenswrapper[4826]: I0129 06:49:07.771792 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 06:49:07 crc kubenswrapper[4826]: I0129 06:49:07.812093 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kpmf6"] Jan 29 06:49:07 crc kubenswrapper[4826]: I0129 06:49:07.849779 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/083c6f06-4fdf-41b8-9fbd-c6076fe0d6e6-utilities\") pod \"redhat-marketplace-kpmf6\" (UID: \"083c6f06-4fdf-41b8-9fbd-c6076fe0d6e6\") " pod="openshift-marketplace/redhat-marketplace-kpmf6" Jan 29 06:49:07 crc kubenswrapper[4826]: I0129 06:49:07.849875 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgfkq\" (UniqueName: 
\"kubernetes.io/projected/083c6f06-4fdf-41b8-9fbd-c6076fe0d6e6-kube-api-access-tgfkq\") pod \"redhat-marketplace-kpmf6\" (UID: \"083c6f06-4fdf-41b8-9fbd-c6076fe0d6e6\") " pod="openshift-marketplace/redhat-marketplace-kpmf6" Jan 29 06:49:07 crc kubenswrapper[4826]: I0129 06:49:07.849988 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/083c6f06-4fdf-41b8-9fbd-c6076fe0d6e6-catalog-content\") pod \"redhat-marketplace-kpmf6\" (UID: \"083c6f06-4fdf-41b8-9fbd-c6076fe0d6e6\") " pod="openshift-marketplace/redhat-marketplace-kpmf6" Jan 29 06:49:07 crc kubenswrapper[4826]: I0129 06:49:07.951211 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/083c6f06-4fdf-41b8-9fbd-c6076fe0d6e6-utilities\") pod \"redhat-marketplace-kpmf6\" (UID: \"083c6f06-4fdf-41b8-9fbd-c6076fe0d6e6\") " pod="openshift-marketplace/redhat-marketplace-kpmf6" Jan 29 06:49:07 crc kubenswrapper[4826]: I0129 06:49:07.951268 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgfkq\" (UniqueName: \"kubernetes.io/projected/083c6f06-4fdf-41b8-9fbd-c6076fe0d6e6-kube-api-access-tgfkq\") pod \"redhat-marketplace-kpmf6\" (UID: \"083c6f06-4fdf-41b8-9fbd-c6076fe0d6e6\") " pod="openshift-marketplace/redhat-marketplace-kpmf6" Jan 29 06:49:07 crc kubenswrapper[4826]: I0129 06:49:07.951342 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/083c6f06-4fdf-41b8-9fbd-c6076fe0d6e6-catalog-content\") pod \"redhat-marketplace-kpmf6\" (UID: \"083c6f06-4fdf-41b8-9fbd-c6076fe0d6e6\") " pod="openshift-marketplace/redhat-marketplace-kpmf6" Jan 29 06:49:07 crc kubenswrapper[4826]: I0129 06:49:07.951883 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/083c6f06-4fdf-41b8-9fbd-c6076fe0d6e6-utilities\") pod \"redhat-marketplace-kpmf6\" (UID: \"083c6f06-4fdf-41b8-9fbd-c6076fe0d6e6\") " pod="openshift-marketplace/redhat-marketplace-kpmf6" Jan 29 06:49:07 crc kubenswrapper[4826]: I0129 06:49:07.953332 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/083c6f06-4fdf-41b8-9fbd-c6076fe0d6e6-catalog-content\") pod \"redhat-marketplace-kpmf6\" (UID: \"083c6f06-4fdf-41b8-9fbd-c6076fe0d6e6\") " pod="openshift-marketplace/redhat-marketplace-kpmf6" Jan 29 06:49:07 crc kubenswrapper[4826]: I0129 06:49:07.972125 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgfkq\" (UniqueName: \"kubernetes.io/projected/083c6f06-4fdf-41b8-9fbd-c6076fe0d6e6-kube-api-access-tgfkq\") pod \"redhat-marketplace-kpmf6\" (UID: \"083c6f06-4fdf-41b8-9fbd-c6076fe0d6e6\") " pod="openshift-marketplace/redhat-marketplace-kpmf6" Jan 29 06:49:08 crc kubenswrapper[4826]: I0129 06:49:08.082406 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kpmf6" Jan 29 06:49:08 crc kubenswrapper[4826]: I0129 06:49:08.527991 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kpmf6"] Jan 29 06:49:08 crc kubenswrapper[4826]: I0129 06:49:08.689112 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kpmf6" event={"ID":"083c6f06-4fdf-41b8-9fbd-c6076fe0d6e6","Type":"ContainerStarted","Data":"392dadf34c18606d986e639db24e7917c797b72b46af0dbdc86672984b133c27"} Jan 29 06:49:08 crc kubenswrapper[4826]: I0129 06:49:08.757683 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-577mf"] Jan 29 06:49:08 crc kubenswrapper[4826]: I0129 06:49:08.759707 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-577mf" Jan 29 06:49:08 crc kubenswrapper[4826]: I0129 06:49:08.764436 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 06:49:08 crc kubenswrapper[4826]: I0129 06:49:08.770088 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-577mf"] Jan 29 06:49:08 crc kubenswrapper[4826]: I0129 06:49:08.866928 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58fea2da-3284-4b38-883f-665355002814-utilities\") pod \"redhat-operators-577mf\" (UID: \"58fea2da-3284-4b38-883f-665355002814\") " pod="openshift-marketplace/redhat-operators-577mf" Jan 29 06:49:08 crc kubenswrapper[4826]: I0129 06:49:08.866986 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58fea2da-3284-4b38-883f-665355002814-catalog-content\") pod \"redhat-operators-577mf\" (UID: \"58fea2da-3284-4b38-883f-665355002814\") " pod="openshift-marketplace/redhat-operators-577mf" Jan 29 06:49:08 crc kubenswrapper[4826]: I0129 06:49:08.867063 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tp2j\" (UniqueName: \"kubernetes.io/projected/58fea2da-3284-4b38-883f-665355002814-kube-api-access-7tp2j\") pod \"redhat-operators-577mf\" (UID: \"58fea2da-3284-4b38-883f-665355002814\") " pod="openshift-marketplace/redhat-operators-577mf" Jan 29 06:49:08 crc kubenswrapper[4826]: I0129 06:49:08.968032 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58fea2da-3284-4b38-883f-665355002814-utilities\") pod \"redhat-operators-577mf\" (UID: \"58fea2da-3284-4b38-883f-665355002814\") " 
pod="openshift-marketplace/redhat-operators-577mf" Jan 29 06:49:08 crc kubenswrapper[4826]: I0129 06:49:08.968129 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58fea2da-3284-4b38-883f-665355002814-catalog-content\") pod \"redhat-operators-577mf\" (UID: \"58fea2da-3284-4b38-883f-665355002814\") " pod="openshift-marketplace/redhat-operators-577mf" Jan 29 06:49:08 crc kubenswrapper[4826]: I0129 06:49:08.968247 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tp2j\" (UniqueName: \"kubernetes.io/projected/58fea2da-3284-4b38-883f-665355002814-kube-api-access-7tp2j\") pod \"redhat-operators-577mf\" (UID: \"58fea2da-3284-4b38-883f-665355002814\") " pod="openshift-marketplace/redhat-operators-577mf" Jan 29 06:49:08 crc kubenswrapper[4826]: I0129 06:49:08.968600 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58fea2da-3284-4b38-883f-665355002814-utilities\") pod \"redhat-operators-577mf\" (UID: \"58fea2da-3284-4b38-883f-665355002814\") " pod="openshift-marketplace/redhat-operators-577mf" Jan 29 06:49:08 crc kubenswrapper[4826]: I0129 06:49:08.968656 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58fea2da-3284-4b38-883f-665355002814-catalog-content\") pod \"redhat-operators-577mf\" (UID: \"58fea2da-3284-4b38-883f-665355002814\") " pod="openshift-marketplace/redhat-operators-577mf" Jan 29 06:49:08 crc kubenswrapper[4826]: I0129 06:49:08.991024 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tp2j\" (UniqueName: \"kubernetes.io/projected/58fea2da-3284-4b38-883f-665355002814-kube-api-access-7tp2j\") pod \"redhat-operators-577mf\" (UID: \"58fea2da-3284-4b38-883f-665355002814\") " pod="openshift-marketplace/redhat-operators-577mf" Jan 
29 06:49:09 crc kubenswrapper[4826]: I0129 06:49:09.150618 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-577mf" Jan 29 06:49:09 crc kubenswrapper[4826]: I0129 06:49:09.379711 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-577mf"] Jan 29 06:49:09 crc kubenswrapper[4826]: I0129 06:49:09.696921 4826 generic.go:334] "Generic (PLEG): container finished" podID="083c6f06-4fdf-41b8-9fbd-c6076fe0d6e6" containerID="0b8dccbb14f797d940e9a9efbc858a2b5ebbe389f1edd6e2c5ec9e16eb89adad" exitCode=0 Jan 29 06:49:09 crc kubenswrapper[4826]: I0129 06:49:09.697007 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kpmf6" event={"ID":"083c6f06-4fdf-41b8-9fbd-c6076fe0d6e6","Type":"ContainerDied","Data":"0b8dccbb14f797d940e9a9efbc858a2b5ebbe389f1edd6e2c5ec9e16eb89adad"} Jan 29 06:49:09 crc kubenswrapper[4826]: I0129 06:49:09.704267 4826 generic.go:334] "Generic (PLEG): container finished" podID="58fea2da-3284-4b38-883f-665355002814" containerID="fcf17731d208f0b126ee36728671b5a6bfdd47b23dd539d5a1ebc5949028627f" exitCode=0 Jan 29 06:49:09 crc kubenswrapper[4826]: I0129 06:49:09.704384 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-577mf" event={"ID":"58fea2da-3284-4b38-883f-665355002814","Type":"ContainerDied","Data":"fcf17731d208f0b126ee36728671b5a6bfdd47b23dd539d5a1ebc5949028627f"} Jan 29 06:49:09 crc kubenswrapper[4826]: I0129 06:49:09.704418 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-577mf" event={"ID":"58fea2da-3284-4b38-883f-665355002814","Type":"ContainerStarted","Data":"bcd67a75e7f35ecc2dbbffd201d5b4d3f215fd6e1271a773a65ee08f67ee8c6b"} Jan 29 06:49:09 crc kubenswrapper[4826]: I0129 06:49:09.707679 4826 generic.go:334] "Generic (PLEG): container finished" podID="26778232-3c9d-4c90-9f32-a7a0dc0e87b4" 
containerID="fb3695f7ab6e1d4ad2e116e702c3a0f5e955e57bf2dfb40adb15b656132f628b" exitCode=0 Jan 29 06:49:09 crc kubenswrapper[4826]: I0129 06:49:09.707714 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdm6z" event={"ID":"26778232-3c9d-4c90-9f32-a7a0dc0e87b4","Type":"ContainerDied","Data":"fb3695f7ab6e1d4ad2e116e702c3a0f5e955e57bf2dfb40adb15b656132f628b"} Jan 29 06:49:10 crc kubenswrapper[4826]: I0129 06:49:10.156680 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5xd8c"] Jan 29 06:49:10 crc kubenswrapper[4826]: I0129 06:49:10.158734 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5xd8c" Jan 29 06:49:10 crc kubenswrapper[4826]: I0129 06:49:10.165542 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 06:49:10 crc kubenswrapper[4826]: I0129 06:49:10.176085 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5xd8c"] Jan 29 06:49:10 crc kubenswrapper[4826]: I0129 06:49:10.289860 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ece9f354-f8c6-4108-af2b-9fc51ea418a3-utilities\") pod \"certified-operators-5xd8c\" (UID: \"ece9f354-f8c6-4108-af2b-9fc51ea418a3\") " pod="openshift-marketplace/certified-operators-5xd8c" Jan 29 06:49:10 crc kubenswrapper[4826]: I0129 06:49:10.289954 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ece9f354-f8c6-4108-af2b-9fc51ea418a3-catalog-content\") pod \"certified-operators-5xd8c\" (UID: \"ece9f354-f8c6-4108-af2b-9fc51ea418a3\") " pod="openshift-marketplace/certified-operators-5xd8c" Jan 29 06:49:10 crc kubenswrapper[4826]: 
I0129 06:49:10.289986 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc2bd\" (UniqueName: \"kubernetes.io/projected/ece9f354-f8c6-4108-af2b-9fc51ea418a3-kube-api-access-bc2bd\") pod \"certified-operators-5xd8c\" (UID: \"ece9f354-f8c6-4108-af2b-9fc51ea418a3\") " pod="openshift-marketplace/certified-operators-5xd8c" Jan 29 06:49:10 crc kubenswrapper[4826]: I0129 06:49:10.392215 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ece9f354-f8c6-4108-af2b-9fc51ea418a3-utilities\") pod \"certified-operators-5xd8c\" (UID: \"ece9f354-f8c6-4108-af2b-9fc51ea418a3\") " pod="openshift-marketplace/certified-operators-5xd8c" Jan 29 06:49:10 crc kubenswrapper[4826]: I0129 06:49:10.392527 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ece9f354-f8c6-4108-af2b-9fc51ea418a3-catalog-content\") pod \"certified-operators-5xd8c\" (UID: \"ece9f354-f8c6-4108-af2b-9fc51ea418a3\") " pod="openshift-marketplace/certified-operators-5xd8c" Jan 29 06:49:10 crc kubenswrapper[4826]: I0129 06:49:10.392705 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc2bd\" (UniqueName: \"kubernetes.io/projected/ece9f354-f8c6-4108-af2b-9fc51ea418a3-kube-api-access-bc2bd\") pod \"certified-operators-5xd8c\" (UID: \"ece9f354-f8c6-4108-af2b-9fc51ea418a3\") " pod="openshift-marketplace/certified-operators-5xd8c" Jan 29 06:49:10 crc kubenswrapper[4826]: I0129 06:49:10.392949 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ece9f354-f8c6-4108-af2b-9fc51ea418a3-utilities\") pod \"certified-operators-5xd8c\" (UID: \"ece9f354-f8c6-4108-af2b-9fc51ea418a3\") " pod="openshift-marketplace/certified-operators-5xd8c" Jan 29 06:49:10 crc kubenswrapper[4826]: 
I0129 06:49:10.393275 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ece9f354-f8c6-4108-af2b-9fc51ea418a3-catalog-content\") pod \"certified-operators-5xd8c\" (UID: \"ece9f354-f8c6-4108-af2b-9fc51ea418a3\") " pod="openshift-marketplace/certified-operators-5xd8c" Jan 29 06:49:10 crc kubenswrapper[4826]: I0129 06:49:10.421674 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc2bd\" (UniqueName: \"kubernetes.io/projected/ece9f354-f8c6-4108-af2b-9fc51ea418a3-kube-api-access-bc2bd\") pod \"certified-operators-5xd8c\" (UID: \"ece9f354-f8c6-4108-af2b-9fc51ea418a3\") " pod="openshift-marketplace/certified-operators-5xd8c" Jan 29 06:49:10 crc kubenswrapper[4826]: I0129 06:49:10.487586 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5xd8c" Jan 29 06:49:10 crc kubenswrapper[4826]: I0129 06:49:10.703504 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5xd8c"] Jan 29 06:49:10 crc kubenswrapper[4826]: W0129 06:49:10.711502 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podece9f354_f8c6_4108_af2b_9fc51ea418a3.slice/crio-ad4cc99e29d65b069e37c5ec71b050d53a650293f04ca4aba7aa967b3bc75c84 WatchSource:0}: Error finding container ad4cc99e29d65b069e37c5ec71b050d53a650293f04ca4aba7aa967b3bc75c84: Status 404 returned error can't find the container with id ad4cc99e29d65b069e37c5ec71b050d53a650293f04ca4aba7aa967b3bc75c84 Jan 29 06:49:10 crc kubenswrapper[4826]: I0129 06:49:10.717829 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdm6z" event={"ID":"26778232-3c9d-4c90-9f32-a7a0dc0e87b4","Type":"ContainerStarted","Data":"a63e8a9c8d03e33a383095fbd904f4c9d1b782cddafcc7551618add67a88357e"} Jan 29 06:49:10 crc 
kubenswrapper[4826]: I0129 06:49:10.719928 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kpmf6" event={"ID":"083c6f06-4fdf-41b8-9fbd-c6076fe0d6e6","Type":"ContainerStarted","Data":"fb3526125d1045b6c77c4be22ae6503e192d6fa13f0627571a15d6bf627d4938"} Jan 29 06:49:10 crc kubenswrapper[4826]: I0129 06:49:10.721421 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-577mf" event={"ID":"58fea2da-3284-4b38-883f-665355002814","Type":"ContainerStarted","Data":"db031c51c1bfc54f90978c99f48b0fe2deb4b39dec4abae233c69652ea841a33"} Jan 29 06:49:10 crc kubenswrapper[4826]: I0129 06:49:10.741369 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gdm6z" podStartSLOduration=2.253804131 podStartE2EDuration="4.741348687s" podCreationTimestamp="2026-01-29 06:49:06 +0000 UTC" firstStartedPulling="2026-01-29 06:49:07.725769717 +0000 UTC m=+331.587562806" lastFinishedPulling="2026-01-29 06:49:10.213314283 +0000 UTC m=+334.075107362" observedRunningTime="2026-01-29 06:49:10.738216123 +0000 UTC m=+334.600009192" watchObservedRunningTime="2026-01-29 06:49:10.741348687 +0000 UTC m=+334.603141756" Jan 29 06:49:11 crc kubenswrapper[4826]: I0129 06:49:11.728503 4826 generic.go:334] "Generic (PLEG): container finished" podID="ece9f354-f8c6-4108-af2b-9fc51ea418a3" containerID="09ad6f08d7191ab076f73ba31265b23b6baac0957bdef5ed4cdb58759020e7d8" exitCode=0 Jan 29 06:49:11 crc kubenswrapper[4826]: I0129 06:49:11.728545 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5xd8c" event={"ID":"ece9f354-f8c6-4108-af2b-9fc51ea418a3","Type":"ContainerDied","Data":"09ad6f08d7191ab076f73ba31265b23b6baac0957bdef5ed4cdb58759020e7d8"} Jan 29 06:49:11 crc kubenswrapper[4826]: I0129 06:49:11.728965 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-5xd8c" event={"ID":"ece9f354-f8c6-4108-af2b-9fc51ea418a3","Type":"ContainerStarted","Data":"ad4cc99e29d65b069e37c5ec71b050d53a650293f04ca4aba7aa967b3bc75c84"} Jan 29 06:49:11 crc kubenswrapper[4826]: I0129 06:49:11.732456 4826 generic.go:334] "Generic (PLEG): container finished" podID="083c6f06-4fdf-41b8-9fbd-c6076fe0d6e6" containerID="fb3526125d1045b6c77c4be22ae6503e192d6fa13f0627571a15d6bf627d4938" exitCode=0 Jan 29 06:49:11 crc kubenswrapper[4826]: I0129 06:49:11.732506 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kpmf6" event={"ID":"083c6f06-4fdf-41b8-9fbd-c6076fe0d6e6","Type":"ContainerDied","Data":"fb3526125d1045b6c77c4be22ae6503e192d6fa13f0627571a15d6bf627d4938"} Jan 29 06:49:11 crc kubenswrapper[4826]: I0129 06:49:11.742041 4826 generic.go:334] "Generic (PLEG): container finished" podID="58fea2da-3284-4b38-883f-665355002814" containerID="db031c51c1bfc54f90978c99f48b0fe2deb4b39dec4abae233c69652ea841a33" exitCode=0 Jan 29 06:49:11 crc kubenswrapper[4826]: I0129 06:49:11.744046 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-577mf" event={"ID":"58fea2da-3284-4b38-883f-665355002814","Type":"ContainerDied","Data":"db031c51c1bfc54f90978c99f48b0fe2deb4b39dec4abae233c69652ea841a33"} Jan 29 06:49:12 crc kubenswrapper[4826]: I0129 06:49:12.750281 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kpmf6" event={"ID":"083c6f06-4fdf-41b8-9fbd-c6076fe0d6e6","Type":"ContainerStarted","Data":"9d4a321ace327570564a01ae871ce69dd20f100acc28c36ea358780272876155"} Jan 29 06:49:12 crc kubenswrapper[4826]: I0129 06:49:12.755274 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-577mf" event={"ID":"58fea2da-3284-4b38-883f-665355002814","Type":"ContainerStarted","Data":"e2c493a918f4e5a7cee6e0f4f2b27307c38960aad4cc38645759faeee9018181"} 
Jan 29 06:49:12 crc kubenswrapper[4826]: I0129 06:49:12.758292 4826 generic.go:334] "Generic (PLEG): container finished" podID="ece9f354-f8c6-4108-af2b-9fc51ea418a3" containerID="0e0a6b7092da0c6471e4f5c673e89210bfdef6862fdf6337d8e8e2368dc183e3" exitCode=0 Jan 29 06:49:12 crc kubenswrapper[4826]: I0129 06:49:12.758351 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5xd8c" event={"ID":"ece9f354-f8c6-4108-af2b-9fc51ea418a3","Type":"ContainerDied","Data":"0e0a6b7092da0c6471e4f5c673e89210bfdef6862fdf6337d8e8e2368dc183e3"} Jan 29 06:49:12 crc kubenswrapper[4826]: I0129 06:49:12.780706 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kpmf6" podStartSLOduration=3.348251379 podStartE2EDuration="5.780672906s" podCreationTimestamp="2026-01-29 06:49:07 +0000 UTC" firstStartedPulling="2026-01-29 06:49:09.699825253 +0000 UTC m=+333.561618312" lastFinishedPulling="2026-01-29 06:49:12.13224677 +0000 UTC m=+335.994039839" observedRunningTime="2026-01-29 06:49:12.777996914 +0000 UTC m=+336.639789993" watchObservedRunningTime="2026-01-29 06:49:12.780672906 +0000 UTC m=+336.642465975" Jan 29 06:49:13 crc kubenswrapper[4826]: I0129 06:49:13.766996 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5xd8c" event={"ID":"ece9f354-f8c6-4108-af2b-9fc51ea418a3","Type":"ContainerStarted","Data":"2ffc73d46ea2b794b661ba835cddef0fcb314575bbe6a219fb4c1156660ef82c"} Jan 29 06:49:13 crc kubenswrapper[4826]: I0129 06:49:13.803201 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-577mf" podStartSLOduration=3.120059178 podStartE2EDuration="5.803171626s" podCreationTimestamp="2026-01-29 06:49:08 +0000 UTC" firstStartedPulling="2026-01-29 06:49:09.705846676 +0000 UTC m=+333.567639755" lastFinishedPulling="2026-01-29 06:49:12.388959134 +0000 UTC m=+336.250752203" 
observedRunningTime="2026-01-29 06:49:12.832690891 +0000 UTC m=+336.694483970" watchObservedRunningTime="2026-01-29 06:49:13.803171626 +0000 UTC m=+337.664964695" Jan 29 06:49:13 crc kubenswrapper[4826]: I0129 06:49:13.804139 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5xd8c" podStartSLOduration=2.371702127 podStartE2EDuration="3.804131782s" podCreationTimestamp="2026-01-29 06:49:10 +0000 UTC" firstStartedPulling="2026-01-29 06:49:11.731234377 +0000 UTC m=+335.593027446" lastFinishedPulling="2026-01-29 06:49:13.163664032 +0000 UTC m=+337.025457101" observedRunningTime="2026-01-29 06:49:13.795203211 +0000 UTC m=+337.656996280" watchObservedRunningTime="2026-01-29 06:49:13.804131782 +0000 UTC m=+337.665924851" Jan 29 06:49:16 crc kubenswrapper[4826]: I0129 06:49:16.733365 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gdm6z" Jan 29 06:49:16 crc kubenswrapper[4826]: I0129 06:49:16.734521 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gdm6z" Jan 29 06:49:16 crc kubenswrapper[4826]: I0129 06:49:16.801433 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gdm6z" Jan 29 06:49:16 crc kubenswrapper[4826]: I0129 06:49:16.864000 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gdm6z" Jan 29 06:49:18 crc kubenswrapper[4826]: I0129 06:49:18.083646 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kpmf6" Jan 29 06:49:18 crc kubenswrapper[4826]: I0129 06:49:18.084204 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kpmf6" Jan 29 06:49:18 crc kubenswrapper[4826]: I0129 06:49:18.159431 4826 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kpmf6" Jan 29 06:49:18 crc kubenswrapper[4826]: I0129 06:49:18.867264 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kpmf6" Jan 29 06:49:19 crc kubenswrapper[4826]: I0129 06:49:19.150965 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-577mf" Jan 29 06:49:19 crc kubenswrapper[4826]: I0129 06:49:19.151078 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-577mf" Jan 29 06:49:20 crc kubenswrapper[4826]: I0129 06:49:20.207859 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-577mf" podUID="58fea2da-3284-4b38-883f-665355002814" containerName="registry-server" probeResult="failure" output=< Jan 29 06:49:20 crc kubenswrapper[4826]: timeout: failed to connect service ":50051" within 1s Jan 29 06:49:20 crc kubenswrapper[4826]: > Jan 29 06:49:20 crc kubenswrapper[4826]: I0129 06:49:20.488596 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5xd8c" Jan 29 06:49:20 crc kubenswrapper[4826]: I0129 06:49:20.488657 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5xd8c" Jan 29 06:49:20 crc kubenswrapper[4826]: I0129 06:49:20.535402 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5xd8c" Jan 29 06:49:20 crc kubenswrapper[4826]: I0129 06:49:20.850111 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5xd8c" Jan 29 06:49:21 crc kubenswrapper[4826]: I0129 06:49:21.467083 4826 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" podUID="cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1" containerName="registry" containerID="cri-o://4cd3af2bccdd7658508f9d8bb94ab6156559fb89c7a83b3d1a5535d79018b9fc" gracePeriod=30 Jan 29 06:49:21 crc kubenswrapper[4826]: I0129 06:49:21.814428 4826 generic.go:334] "Generic (PLEG): container finished" podID="cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1" containerID="4cd3af2bccdd7658508f9d8bb94ab6156559fb89c7a83b3d1a5535d79018b9fc" exitCode=0 Jan 29 06:49:21 crc kubenswrapper[4826]: I0129 06:49:21.814542 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" event={"ID":"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1","Type":"ContainerDied","Data":"4cd3af2bccdd7658508f9d8bb94ab6156559fb89c7a83b3d1a5535d79018b9fc"} Jan 29 06:49:22 crc kubenswrapper[4826]: I0129 06:49:22.249328 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" Jan 29 06:49:22 crc kubenswrapper[4826]: I0129 06:49:22.398830 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-registry-certificates\") pod \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " Jan 29 06:49:22 crc kubenswrapper[4826]: I0129 06:49:22.398876 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-installation-pull-secrets\") pod \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") " Jan 29 06:49:22 crc kubenswrapper[4826]: I0129 06:49:22.398904 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5skd\" (UniqueName: 
\"kubernetes.io/projected/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-kube-api-access-n5skd\") pod \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") "
Jan 29 06:49:22 crc kubenswrapper[4826]: I0129 06:49:22.399086 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") "
Jan 29 06:49:22 crc kubenswrapper[4826]: I0129 06:49:22.399145 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-ca-trust-extracted\") pod \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") "
Jan 29 06:49:22 crc kubenswrapper[4826]: I0129 06:49:22.399175 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-trusted-ca\") pod \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") "
Jan 29 06:49:22 crc kubenswrapper[4826]: I0129 06:49:22.399201 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-registry-tls\") pod \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") "
Jan 29 06:49:22 crc kubenswrapper[4826]: I0129 06:49:22.399231 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-bound-sa-token\") pod \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\" (UID: \"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1\") "
Jan 29 06:49:22 crc kubenswrapper[4826]: I0129 06:49:22.400156 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 06:49:22 crc kubenswrapper[4826]: I0129 06:49:22.400376 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 06:49:22 crc kubenswrapper[4826]: I0129 06:49:22.405087 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 06:49:22 crc kubenswrapper[4826]: I0129 06:49:22.407749 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-kube-api-access-n5skd" (OuterVolumeSpecName: "kube-api-access-n5skd") pod "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1"). InnerVolumeSpecName "kube-api-access-n5skd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 06:49:22 crc kubenswrapper[4826]: I0129 06:49:22.408246 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 06:49:22 crc kubenswrapper[4826]: I0129 06:49:22.408442 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 06:49:22 crc kubenswrapper[4826]: I0129 06:49:22.415579 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 29 06:49:22 crc kubenswrapper[4826]: I0129 06:49:22.420322 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1" (UID: "cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 06:49:22 crc kubenswrapper[4826]: I0129 06:49:22.500205 4826 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 29 06:49:22 crc kubenswrapper[4826]: I0129 06:49:22.500655 4826 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 29 06:49:22 crc kubenswrapper[4826]: I0129 06:49:22.500667 4826 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 29 06:49:22 crc kubenswrapper[4826]: I0129 06:49:22.500678 4826 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 29 06:49:22 crc kubenswrapper[4826]: I0129 06:49:22.500690 4826 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 29 06:49:22 crc kubenswrapper[4826]: I0129 06:49:22.500704 4826 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 29 06:49:22 crc kubenswrapper[4826]: I0129 06:49:22.500712 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5skd\" (UniqueName: \"kubernetes.io/projected/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1-kube-api-access-n5skd\") on node \"crc\" DevicePath \"\""
Jan 29 06:49:22 crc kubenswrapper[4826]: I0129 06:49:22.825269 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t" event={"ID":"cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1","Type":"ContainerDied","Data":"b9a4fdee7181884fd7de12b0fd4f3b11ae2a0d024a96d5c82f7b349686274d48"}
Jan 29 06:49:22 crc kubenswrapper[4826]: I0129 06:49:22.825371 4826 scope.go:117] "RemoveContainer" containerID="4cd3af2bccdd7658508f9d8bb94ab6156559fb89c7a83b3d1a5535d79018b9fc"
Jan 29 06:49:22 crc kubenswrapper[4826]: I0129 06:49:22.825653 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ddt5t"
Jan 29 06:49:22 crc kubenswrapper[4826]: I0129 06:49:22.886074 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ddt5t"]
Jan 29 06:49:22 crc kubenswrapper[4826]: I0129 06:49:22.890173 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ddt5t"]
Jan 29 06:49:24 crc kubenswrapper[4826]: I0129 06:49:24.816506 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1" path="/var/lib/kubelet/pods/cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1/volumes"
Jan 29 06:49:29 crc kubenswrapper[4826]: I0129 06:49:29.217284 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-577mf"
Jan 29 06:49:29 crc kubenswrapper[4826]: I0129 06:49:29.286017 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-577mf"
Jan 29 06:49:35 crc kubenswrapper[4826]: I0129 06:49:35.657052 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 06:49:35 crc kubenswrapper[4826]: I0129 06:49:35.657615 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 06:50:05 crc kubenswrapper[4826]: I0129 06:50:05.656513 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 06:50:05 crc kubenswrapper[4826]: I0129 06:50:05.659131 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 06:50:35 crc kubenswrapper[4826]: I0129 06:50:35.656972 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 06:50:35 crc kubenswrapper[4826]: I0129 06:50:35.658002 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 06:50:35 crc kubenswrapper[4826]: I0129 06:50:35.658098 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-llzmh"
Jan 29 06:50:35 crc kubenswrapper[4826]: I0129 06:50:35.659358 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"49712b74d21f3c3377a3574a3bd33dea7533ff74de9557e188300ceb42aaf015"} pod="openshift-machine-config-operator/machine-config-daemon-llzmh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 29 06:50:35 crc kubenswrapper[4826]: I0129 06:50:35.659519 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" containerID="cri-o://49712b74d21f3c3377a3574a3bd33dea7533ff74de9557e188300ceb42aaf015" gracePeriod=600
Jan 29 06:50:36 crc kubenswrapper[4826]: I0129 06:50:36.343858 4826 generic.go:334] "Generic (PLEG): container finished" podID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerID="49712b74d21f3c3377a3574a3bd33dea7533ff74de9557e188300ceb42aaf015" exitCode=0
Jan 29 06:50:36 crc kubenswrapper[4826]: I0129 06:50:36.343995 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerDied","Data":"49712b74d21f3c3377a3574a3bd33dea7533ff74de9557e188300ceb42aaf015"}
Jan 29 06:50:36 crc kubenswrapper[4826]: I0129 06:50:36.344268 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerStarted","Data":"fcf1ea2f7e5c5fbf4091e579b4a862769cc5342172862a00f7331d08867ff6a2"}
Jan 29 06:50:36 crc kubenswrapper[4826]: I0129 06:50:36.344316 4826 scope.go:117] "RemoveContainer" containerID="0adf6fcaa6a5cc342f8af2dbb14c7fbaf0c953f17ee6e8ef3156e57c3893b93f"
Jan 29 06:53:05 crc kubenswrapper[4826]: I0129 06:53:05.659960 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 06:53:05 crc kubenswrapper[4826]: I0129 06:53:05.660628 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 06:53:35 crc kubenswrapper[4826]: I0129 06:53:35.656138 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 06:53:35 crc kubenswrapper[4826]: I0129 06:53:35.656945 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 06:54:05 crc kubenswrapper[4826]: I0129 06:54:05.656703 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 06:54:05 crc kubenswrapper[4826]: I0129 06:54:05.659835 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 06:54:05 crc kubenswrapper[4826]: I0129 06:54:05.660265 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-llzmh"
Jan 29 06:54:05 crc kubenswrapper[4826]: I0129 06:54:05.661375 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fcf1ea2f7e5c5fbf4091e579b4a862769cc5342172862a00f7331d08867ff6a2"} pod="openshift-machine-config-operator/machine-config-daemon-llzmh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 29 06:54:05 crc kubenswrapper[4826]: I0129 06:54:05.661574 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" containerID="cri-o://fcf1ea2f7e5c5fbf4091e579b4a862769cc5342172862a00f7331d08867ff6a2" gracePeriod=600
Jan 29 06:54:05 crc kubenswrapper[4826]: I0129 06:54:05.862181 4826 generic.go:334] "Generic (PLEG): container finished" podID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerID="fcf1ea2f7e5c5fbf4091e579b4a862769cc5342172862a00f7331d08867ff6a2" exitCode=0
Jan 29 06:54:05 crc kubenswrapper[4826]: I0129 06:54:05.862220 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerDied","Data":"fcf1ea2f7e5c5fbf4091e579b4a862769cc5342172862a00f7331d08867ff6a2"}
Jan 29 06:54:05 crc kubenswrapper[4826]: I0129 06:54:05.862260 4826 scope.go:117] "RemoveContainer" containerID="49712b74d21f3c3377a3574a3bd33dea7533ff74de9557e188300ceb42aaf015"
Jan 29 06:54:06 crc kubenswrapper[4826]: I0129 06:54:06.872989 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerStarted","Data":"a3e78bc337b51dad14a09b0691d31de9770dca1238f79d864ee283914dbce58e"}
Jan 29 06:55:17 crc kubenswrapper[4826]: I0129 06:55:17.581054 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-s7xfk"]
Jan 29 06:55:17 crc kubenswrapper[4826]: I0129 06:55:17.582512 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="ovn-controller" containerID="cri-o://82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b" gracePeriod=30
Jan 29 06:55:17 crc kubenswrapper[4826]: I0129 06:55:17.582568 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="sbdb" containerID="cri-o://4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2" gracePeriod=30
Jan 29 06:55:17 crc kubenswrapper[4826]: I0129 06:55:17.582678 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="nbdb" containerID="cri-o://ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317" gracePeriod=30
Jan 29 06:55:17 crc kubenswrapper[4826]: I0129 06:55:17.582762 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="northd" containerID="cri-o://57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24" gracePeriod=30
Jan 29 06:55:17 crc kubenswrapper[4826]: I0129 06:55:17.582824 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a" gracePeriod=30
Jan 29 06:55:17 crc kubenswrapper[4826]: I0129 06:55:17.582884 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="kube-rbac-proxy-node" containerID="cri-o://a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a" gracePeriod=30
Jan 29 06:55:17 crc kubenswrapper[4826]: I0129 06:55:17.582945 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="ovn-acl-logging" containerID="cri-o://8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220" gracePeriod=30
Jan 29 06:55:17 crc kubenswrapper[4826]: I0129 06:55:17.628190 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="ovnkube-controller" containerID="cri-o://883ae177aed2c9f33e7e63846c8d0025ba4ea1a0d83a8811c00255e1f2ca6f21" gracePeriod=30
Jan 29 06:55:17 crc kubenswrapper[4826]: I0129 06:55:17.971395 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s7xfk_6f0c380f-ebc1-482f-9a91-8b08033eadf2/ovnkube-controller/2.log"
Jan 29 06:55:17 crc kubenswrapper[4826]: I0129 06:55:17.974080 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s7xfk_6f0c380f-ebc1-482f-9a91-8b08033eadf2/ovn-acl-logging/0.log"
Jan 29 06:55:17 crc kubenswrapper[4826]: I0129 06:55:17.974612 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s7xfk_6f0c380f-ebc1-482f-9a91-8b08033eadf2/ovn-controller/0.log"
Jan 29 06:55:17 crc kubenswrapper[4826]: I0129 06:55:17.975094 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk"
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.030076 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nc95s"]
Jan 29 06:55:18 crc kubenswrapper[4826]: E0129 06:55:18.030288 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="ovnkube-controller"
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.030302 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="ovnkube-controller"
Jan 29 06:55:18 crc kubenswrapper[4826]: E0129 06:55:18.030315 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="ovnkube-controller"
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.030321 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="ovnkube-controller"
Jan 29 06:55:18 crc kubenswrapper[4826]: E0129 06:55:18.030344 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1" containerName="registry"
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.030350 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1" containerName="registry"
Jan 29 06:55:18 crc kubenswrapper[4826]: E0129 06:55:18.030360 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="kube-rbac-proxy-node"
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.030366 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="kube-rbac-proxy-node"
Jan 29 06:55:18 crc kubenswrapper[4826]: E0129 06:55:18.030376 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="ovnkube-controller"
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.030382 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="ovnkube-controller"
Jan 29 06:55:18 crc kubenswrapper[4826]: E0129 06:55:18.030390 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="northd"
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.030396 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="northd"
Jan 29 06:55:18 crc kubenswrapper[4826]: E0129 06:55:18.030410 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="ovn-controller"
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.030416 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="ovn-controller"
Jan 29 06:55:18 crc kubenswrapper[4826]: E0129 06:55:18.030422 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="sbdb"
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.030428 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="sbdb"
Jan 29 06:55:18 crc kubenswrapper[4826]: E0129 06:55:18.030436 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="kubecfg-setup"
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.030442 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="kubecfg-setup"
Jan 29 06:55:18 crc kubenswrapper[4826]: E0129 06:55:18.030447 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="kube-rbac-proxy-ovn-metrics"
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.030453 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="kube-rbac-proxy-ovn-metrics"
Jan 29 06:55:18 crc kubenswrapper[4826]: E0129 06:55:18.030461 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="ovn-acl-logging"
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.030467 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="ovn-acl-logging"
Jan 29 06:55:18 crc kubenswrapper[4826]: E0129 06:55:18.030474 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="nbdb"
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.030480 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="nbdb"
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.030570 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="ovnkube-controller"
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.030579 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="ovn-controller"
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.030590 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="northd"
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.030597 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="ovn-acl-logging"
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.030603 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="ovnkube-controller"
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.030611 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd5cd3bd-f492-44e4-9e3f-30bf2211e3f1" containerName="registry"
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.030620 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="kube-rbac-proxy-node"
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.030627 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="nbdb"
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.030633 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="sbdb"
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.030640 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="ovnkube-controller"
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.030648 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="kube-rbac-proxy-ovn-metrics"
Jan 29 06:55:18 crc kubenswrapper[4826]: E0129 06:55:18.030732 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="ovnkube-controller"
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.030741 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="ovnkube-controller"
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.030822 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerName="ovnkube-controller"
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.032283 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nc95s"
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.123055 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-cni-bin\") pod \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") "
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.123139 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6f0c380f-ebc1-482f-9a91-8b08033eadf2-ovnkube-script-lib\") pod \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") "
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.123160 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-run-ovn-kubernetes\") pod \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") "
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.123186 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-var-lib-openvswitch\") pod \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") "
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.123218 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-run-ovn\") pod \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") "
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.123239 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6f0c380f-ebc1-482f-9a91-8b08033eadf2-ovnkube-config\") pod \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") "
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.123261 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-cni-netd\") pod \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") "
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.123285 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dp8s9\" (UniqueName: \"kubernetes.io/projected/6f0c380f-ebc1-482f-9a91-8b08033eadf2-kube-api-access-dp8s9\") pod \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") "
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.123309 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-kubelet\") pod \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") "
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.123371 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-node-log\") pod \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") "
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.123394 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6f0c380f-ebc1-482f-9a91-8b08033eadf2-ovn-node-metrics-cert\") pod \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") "
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.123417 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-systemd-units\") pod \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") "
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.123436 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-run-systemd\") pod \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") "
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.123458 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-slash\") pod \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") "
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.123500 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-run-netns\") pod \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") "
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.123530 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-run-openvswitch\") pod \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") "
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.123556 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-etc-openvswitch\") pod \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") "
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.123575 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-log-socket\") pod \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") "
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.123600 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6f0c380f-ebc1-482f-9a91-8b08033eadf2-env-overrides\") pod \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") "
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.123621 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\" (UID: \"6f0c380f-ebc1-482f-9a91-8b08033eadf2\") "
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.123805 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-run-openvswitch\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s"
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.123840 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/25d04361-6620-4379-90ae-b88e5a8cd03d-ovnkube-script-lib\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s"
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.123871 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s"
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.123902 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-host-run-netns\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s"
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.123946 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/25d04361-6620-4379-90ae-b88e5a8cd03d-ovnkube-config\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s"
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.123981 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-node-log\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s"
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.124014 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmp5r\" (UniqueName: \"kubernetes.io/projected/25d04361-6620-4379-90ae-b88e5a8cd03d-kube-api-access-vmp5r\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s"
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.124039 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-host-run-ovn-kubernetes\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s"
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.124064 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-host-slash\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s"
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.124106 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-run-ovn\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s"
Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.124134 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName:
\"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-var-lib-openvswitch\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.124159 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-host-cni-bin\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.124185 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-host-kubelet\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.124218 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-etc-openvswitch\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.124248 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-run-systemd\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.124274 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" 
(UniqueName: \"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-log-socket\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.124315 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-systemd-units\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.124366 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/25d04361-6620-4379-90ae-b88e5a8cd03d-env-overrides\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.124399 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/25d04361-6620-4379-90ae-b88e5a8cd03d-ovn-node-metrics-cert\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.124431 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-host-cni-netd\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.124546 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "6f0c380f-ebc1-482f-9a91-8b08033eadf2" (UID: "6f0c380f-ebc1-482f-9a91-8b08033eadf2"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.124908 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "6f0c380f-ebc1-482f-9a91-8b08033eadf2" (UID: "6f0c380f-ebc1-482f-9a91-8b08033eadf2"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.124957 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "6f0c380f-ebc1-482f-9a91-8b08033eadf2" (UID: "6f0c380f-ebc1-482f-9a91-8b08033eadf2"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.124977 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "6f0c380f-ebc1-482f-9a91-8b08033eadf2" (UID: "6f0c380f-ebc1-482f-9a91-8b08033eadf2"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.124994 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "6f0c380f-ebc1-482f-9a91-8b08033eadf2" (UID: "6f0c380f-ebc1-482f-9a91-8b08033eadf2"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.124990 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "6f0c380f-ebc1-482f-9a91-8b08033eadf2" (UID: "6f0c380f-ebc1-482f-9a91-8b08033eadf2"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.125030 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "6f0c380f-ebc1-482f-9a91-8b08033eadf2" (UID: "6f0c380f-ebc1-482f-9a91-8b08033eadf2"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.125064 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "6f0c380f-ebc1-482f-9a91-8b08033eadf2" (UID: "6f0c380f-ebc1-482f-9a91-8b08033eadf2"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.125076 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-log-socket" (OuterVolumeSpecName: "log-socket") pod "6f0c380f-ebc1-482f-9a91-8b08033eadf2" (UID: "6f0c380f-ebc1-482f-9a91-8b08033eadf2"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.125095 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "6f0c380f-ebc1-482f-9a91-8b08033eadf2" (UID: "6f0c380f-ebc1-482f-9a91-8b08033eadf2"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.125169 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-node-log" (OuterVolumeSpecName: "node-log") pod "6f0c380f-ebc1-482f-9a91-8b08033eadf2" (UID: "6f0c380f-ebc1-482f-9a91-8b08033eadf2"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.125213 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "6f0c380f-ebc1-482f-9a91-8b08033eadf2" (UID: "6f0c380f-ebc1-482f-9a91-8b08033eadf2"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.125402 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f0c380f-ebc1-482f-9a91-8b08033eadf2-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6f0c380f-ebc1-482f-9a91-8b08033eadf2" (UID: "6f0c380f-ebc1-482f-9a91-8b08033eadf2"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.125437 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-slash" (OuterVolumeSpecName: "host-slash") pod "6f0c380f-ebc1-482f-9a91-8b08033eadf2" (UID: "6f0c380f-ebc1-482f-9a91-8b08033eadf2"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.125461 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "6f0c380f-ebc1-482f-9a91-8b08033eadf2" (UID: "6f0c380f-ebc1-482f-9a91-8b08033eadf2"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.125465 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f0c380f-ebc1-482f-9a91-8b08033eadf2-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6f0c380f-ebc1-482f-9a91-8b08033eadf2" (UID: "6f0c380f-ebc1-482f-9a91-8b08033eadf2"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.126467 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f0c380f-ebc1-482f-9a91-8b08033eadf2-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6f0c380f-ebc1-482f-9a91-8b08033eadf2" (UID: "6f0c380f-ebc1-482f-9a91-8b08033eadf2"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.133180 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f0c380f-ebc1-482f-9a91-8b08033eadf2-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6f0c380f-ebc1-482f-9a91-8b08033eadf2" (UID: "6f0c380f-ebc1-482f-9a91-8b08033eadf2"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.134474 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f0c380f-ebc1-482f-9a91-8b08033eadf2-kube-api-access-dp8s9" (OuterVolumeSpecName: "kube-api-access-dp8s9") pod "6f0c380f-ebc1-482f-9a91-8b08033eadf2" (UID: "6f0c380f-ebc1-482f-9a91-8b08033eadf2"). InnerVolumeSpecName "kube-api-access-dp8s9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.143007 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "6f0c380f-ebc1-482f-9a91-8b08033eadf2" (UID: "6f0c380f-ebc1-482f-9a91-8b08033eadf2"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.225238 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/25d04361-6620-4379-90ae-b88e5a8cd03d-ovnkube-config\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.225357 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-node-log\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.225414 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmp5r\" (UniqueName: \"kubernetes.io/projected/25d04361-6620-4379-90ae-b88e5a8cd03d-kube-api-access-vmp5r\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.225451 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-host-run-ovn-kubernetes\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.225482 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-host-slash\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 
06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.225543 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-var-lib-openvswitch\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.225575 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-run-ovn\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.225604 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-host-cni-bin\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.225635 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-host-kubelet\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.225664 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-etc-openvswitch\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.225695 4826 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-run-systemd\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.225721 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-systemd-units\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.225748 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-log-socket\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.225781 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/25d04361-6620-4379-90ae-b88e5a8cd03d-env-overrides\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.225809 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/25d04361-6620-4379-90ae-b88e5a8cd03d-ovn-node-metrics-cert\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.225844 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-host-cni-netd\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.225894 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-run-openvswitch\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.225924 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/25d04361-6620-4379-90ae-b88e5a8cd03d-ovnkube-script-lib\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.225963 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.225993 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-host-run-netns\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.226137 4826 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.226161 4826 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6f0c380f-ebc1-482f-9a91-8b08033eadf2-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.226182 4826 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.226204 4826 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.226230 4826 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.226247 4826 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6f0c380f-ebc1-482f-9a91-8b08033eadf2-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.226269 4826 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.226287 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dp8s9\" (UniqueName: \"kubernetes.io/projected/6f0c380f-ebc1-482f-9a91-8b08033eadf2-kube-api-access-dp8s9\") on node 
\"crc\" DevicePath \"\"" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.226308 4826 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.226361 4826 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-node-log\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.226386 4826 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6f0c380f-ebc1-482f-9a91-8b08033eadf2-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.226403 4826 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.226420 4826 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.226436 4826 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-slash\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.226454 4826 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.226471 4826 reconciler_common.go:293] "Volume detached for 
volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.226488 4826 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.226506 4826 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-log-socket\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.226522 4826 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6f0c380f-ebc1-482f-9a91-8b08033eadf2-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.226540 4826 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6f0c380f-ebc1-482f-9a91-8b08033eadf2-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.226606 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-host-run-netns\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.226666 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-node-log\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.226961 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/25d04361-6620-4379-90ae-b88e5a8cd03d-ovnkube-config\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.227099 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-systemd-units\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.227098 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-run-systemd\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.227157 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-host-run-ovn-kubernetes\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.227164 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-log-socket\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.227267 4826 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-host-slash\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.227389 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-var-lib-openvswitch\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.227469 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-run-ovn\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.227537 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-host-cni-bin\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.227602 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-host-kubelet\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.227675 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-etc-openvswitch\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.227745 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-run-openvswitch\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.227892 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/25d04361-6620-4379-90ae-b88e5a8cd03d-env-overrides\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.228741 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.228795 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/25d04361-6620-4379-90ae-b88e5a8cd03d-ovnkube-script-lib\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.228841 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/25d04361-6620-4379-90ae-b88e5a8cd03d-host-cni-netd\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.232759 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/25d04361-6620-4379-90ae-b88e5a8cd03d-ovn-node-metrics-cert\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.246019 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmp5r\" (UniqueName: \"kubernetes.io/projected/25d04361-6620-4379-90ae-b88e5a8cd03d-kube-api-access-vmp5r\") pod \"ovnkube-node-nc95s\" (UID: \"25d04361-6620-4379-90ae-b88e5a8cd03d\") " pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.349901 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.358095 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kdv64_fa65a108-1826-4e74-8e8a-1eae605298f3/kube-multus/1.log" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.358611 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kdv64_fa65a108-1826-4e74-8e8a-1eae605298f3/kube-multus/0.log" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.358661 4826 generic.go:334] "Generic (PLEG): container finished" podID="fa65a108-1826-4e74-8e8a-1eae605298f3" containerID="68861c1d0c499dea2e10366881f21ddfe8325202fa0e7a18c8162c45279ed5eb" exitCode=2 Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.358733 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kdv64" event={"ID":"fa65a108-1826-4e74-8e8a-1eae605298f3","Type":"ContainerDied","Data":"68861c1d0c499dea2e10366881f21ddfe8325202fa0e7a18c8162c45279ed5eb"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.358779 4826 scope.go:117] "RemoveContainer" containerID="7e804e3c0c839d032b5f1c678a3b0646e1b4792bcc5fac6cbd49dd2cb5bc3209" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.359385 4826 scope.go:117] "RemoveContainer" containerID="68861c1d0c499dea2e10366881f21ddfe8325202fa0e7a18c8162c45279ed5eb" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.364529 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s7xfk_6f0c380f-ebc1-482f-9a91-8b08033eadf2/ovnkube-controller/2.log" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.367725 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s7xfk_6f0c380f-ebc1-482f-9a91-8b08033eadf2/ovn-acl-logging/0.log" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.368461 4826 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-s7xfk_6f0c380f-ebc1-482f-9a91-8b08033eadf2/ovn-controller/0.log" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369013 4826 generic.go:334] "Generic (PLEG): container finished" podID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerID="883ae177aed2c9f33e7e63846c8d0025ba4ea1a0d83a8811c00255e1f2ca6f21" exitCode=0 Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369041 4826 generic.go:334] "Generic (PLEG): container finished" podID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerID="4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2" exitCode=0 Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369054 4826 generic.go:334] "Generic (PLEG): container finished" podID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerID="ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317" exitCode=0 Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369066 4826 generic.go:334] "Generic (PLEG): container finished" podID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerID="57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24" exitCode=0 Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369108 4826 generic.go:334] "Generic (PLEG): container finished" podID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerID="436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a" exitCode=0 Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369116 4826 generic.go:334] "Generic (PLEG): container finished" podID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerID="a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a" exitCode=0 Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369125 4826 generic.go:334] "Generic (PLEG): container finished" podID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerID="8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220" exitCode=143 Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369133 4826 
generic.go:334] "Generic (PLEG): container finished" podID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" containerID="82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b" exitCode=143 Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369181 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" event={"ID":"6f0c380f-ebc1-482f-9a91-8b08033eadf2","Type":"ContainerDied","Data":"883ae177aed2c9f33e7e63846c8d0025ba4ea1a0d83a8811c00255e1f2ca6f21"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369216 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" event={"ID":"6f0c380f-ebc1-482f-9a91-8b08033eadf2","Type":"ContainerDied","Data":"4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369233 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" event={"ID":"6f0c380f-ebc1-482f-9a91-8b08033eadf2","Type":"ContainerDied","Data":"ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369274 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" event={"ID":"6f0c380f-ebc1-482f-9a91-8b08033eadf2","Type":"ContainerDied","Data":"57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369377 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" event={"ID":"6f0c380f-ebc1-482f-9a91-8b08033eadf2","Type":"ContainerDied","Data":"436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369429 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" 
event={"ID":"6f0c380f-ebc1-482f-9a91-8b08033eadf2","Type":"ContainerDied","Data":"a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369446 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"883ae177aed2c9f33e7e63846c8d0025ba4ea1a0d83a8811c00255e1f2ca6f21"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369459 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369469 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369476 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369512 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369521 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369528 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369535 4826 pod_container_deletor.go:114] "Failed 
to issue the request to remove container" containerID={"Type":"cri-o","ID":"8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369543 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369550 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369561 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" event={"ID":"6f0c380f-ebc1-482f-9a91-8b08033eadf2","Type":"ContainerDied","Data":"8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369600 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"883ae177aed2c9f33e7e63846c8d0025ba4ea1a0d83a8811c00255e1f2ca6f21"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369611 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369618 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369625 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317"} Jan 29 06:55:18 crc kubenswrapper[4826]: 
I0129 06:55:18.369632 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369638 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369645 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369656 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369689 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369695 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369706 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" event={"ID":"6f0c380f-ebc1-482f-9a91-8b08033eadf2","Type":"ContainerDied","Data":"82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369719 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"883ae177aed2c9f33e7e63846c8d0025ba4ea1a0d83a8811c00255e1f2ca6f21"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369727 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369734 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369770 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369779 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369787 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369794 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369803 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369811 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369818 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369853 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" event={"ID":"6f0c380f-ebc1-482f-9a91-8b08033eadf2","Type":"ContainerDied","Data":"a84227bc1252087022d1aaf8e51d27e3d58ecd26a7c7ca6ff62b401818131963"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369869 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"883ae177aed2c9f33e7e63846c8d0025ba4ea1a0d83a8811c00255e1f2ca6f21"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369878 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369885 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369894 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369901 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369908 4826 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369943 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369951 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369958 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.369965 4826 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7"} Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.370141 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-s7xfk" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.422095 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-s7xfk"] Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.427763 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-s7xfk"] Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.475772 4826 scope.go:117] "RemoveContainer" containerID="883ae177aed2c9f33e7e63846c8d0025ba4ea1a0d83a8811c00255e1f2ca6f21" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.493278 4826 scope.go:117] "RemoveContainer" containerID="08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.524303 4826 scope.go:117] "RemoveContainer" containerID="4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.544744 4826 scope.go:117] "RemoveContainer" containerID="ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.561727 4826 scope.go:117] "RemoveContainer" containerID="57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.573696 4826 scope.go:117] "RemoveContainer" containerID="436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.587736 4826 scope.go:117] "RemoveContainer" containerID="a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.603832 4826 scope.go:117] "RemoveContainer" containerID="8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.617031 4826 scope.go:117] "RemoveContainer" 
containerID="82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.636019 4826 scope.go:117] "RemoveContainer" containerID="6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.652513 4826 scope.go:117] "RemoveContainer" containerID="883ae177aed2c9f33e7e63846c8d0025ba4ea1a0d83a8811c00255e1f2ca6f21" Jan 29 06:55:18 crc kubenswrapper[4826]: E0129 06:55:18.653030 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"883ae177aed2c9f33e7e63846c8d0025ba4ea1a0d83a8811c00255e1f2ca6f21\": container with ID starting with 883ae177aed2c9f33e7e63846c8d0025ba4ea1a0d83a8811c00255e1f2ca6f21 not found: ID does not exist" containerID="883ae177aed2c9f33e7e63846c8d0025ba4ea1a0d83a8811c00255e1f2ca6f21" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.653073 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"883ae177aed2c9f33e7e63846c8d0025ba4ea1a0d83a8811c00255e1f2ca6f21"} err="failed to get container status \"883ae177aed2c9f33e7e63846c8d0025ba4ea1a0d83a8811c00255e1f2ca6f21\": rpc error: code = NotFound desc = could not find container \"883ae177aed2c9f33e7e63846c8d0025ba4ea1a0d83a8811c00255e1f2ca6f21\": container with ID starting with 883ae177aed2c9f33e7e63846c8d0025ba4ea1a0d83a8811c00255e1f2ca6f21 not found: ID does not exist" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.653100 4826 scope.go:117] "RemoveContainer" containerID="08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb" Jan 29 06:55:18 crc kubenswrapper[4826]: E0129 06:55:18.653412 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb\": container with ID starting with 
08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb not found: ID does not exist" containerID="08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.653443 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb"} err="failed to get container status \"08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb\": rpc error: code = NotFound desc = could not find container \"08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb\": container with ID starting with 08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb not found: ID does not exist" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.653462 4826 scope.go:117] "RemoveContainer" containerID="4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2" Jan 29 06:55:18 crc kubenswrapper[4826]: E0129 06:55:18.653714 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2\": container with ID starting with 4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2 not found: ID does not exist" containerID="4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.653740 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2"} err="failed to get container status \"4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2\": rpc error: code = NotFound desc = could not find container \"4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2\": container with ID starting with 4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2 not found: ID does not 
exist" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.653757 4826 scope.go:117] "RemoveContainer" containerID="ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317" Jan 29 06:55:18 crc kubenswrapper[4826]: E0129 06:55:18.653985 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317\": container with ID starting with ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317 not found: ID does not exist" containerID="ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.654013 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317"} err="failed to get container status \"ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317\": rpc error: code = NotFound desc = could not find container \"ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317\": container with ID starting with ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317 not found: ID does not exist" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.654028 4826 scope.go:117] "RemoveContainer" containerID="57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24" Jan 29 06:55:18 crc kubenswrapper[4826]: E0129 06:55:18.654323 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24\": container with ID starting with 57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24 not found: ID does not exist" containerID="57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.654366 4826 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24"} err="failed to get container status \"57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24\": rpc error: code = NotFound desc = could not find container \"57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24\": container with ID starting with 57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24 not found: ID does not exist" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.654382 4826 scope.go:117] "RemoveContainer" containerID="436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a" Jan 29 06:55:18 crc kubenswrapper[4826]: E0129 06:55:18.654589 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a\": container with ID starting with 436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a not found: ID does not exist" containerID="436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.654616 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a"} err="failed to get container status \"436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a\": rpc error: code = NotFound desc = could not find container \"436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a\": container with ID starting with 436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a not found: ID does not exist" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.654631 4826 scope.go:117] "RemoveContainer" containerID="a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a" Jan 29 06:55:18 crc kubenswrapper[4826]: E0129 06:55:18.654853 4826 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a\": container with ID starting with a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a not found: ID does not exist" containerID="a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.654879 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a"} err="failed to get container status \"a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a\": rpc error: code = NotFound desc = could not find container \"a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a\": container with ID starting with a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a not found: ID does not exist" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.654898 4826 scope.go:117] "RemoveContainer" containerID="8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220" Jan 29 06:55:18 crc kubenswrapper[4826]: E0129 06:55:18.655123 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220\": container with ID starting with 8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220 not found: ID does not exist" containerID="8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.655149 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220"} err="failed to get container status \"8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220\": rpc error: code = NotFound desc = could 
not find container \"8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220\": container with ID starting with 8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220 not found: ID does not exist" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.655163 4826 scope.go:117] "RemoveContainer" containerID="82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b" Jan 29 06:55:18 crc kubenswrapper[4826]: E0129 06:55:18.655625 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b\": container with ID starting with 82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b not found: ID does not exist" containerID="82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.655645 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b"} err="failed to get container status \"82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b\": rpc error: code = NotFound desc = could not find container \"82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b\": container with ID starting with 82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b not found: ID does not exist" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.655662 4826 scope.go:117] "RemoveContainer" containerID="6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7" Jan 29 06:55:18 crc kubenswrapper[4826]: E0129 06:55:18.656171 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\": container with ID starting with 6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7 not found: 
ID does not exist" containerID="6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.656203 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7"} err="failed to get container status \"6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\": rpc error: code = NotFound desc = could not find container \"6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\": container with ID starting with 6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7 not found: ID does not exist" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.656237 4826 scope.go:117] "RemoveContainer" containerID="883ae177aed2c9f33e7e63846c8d0025ba4ea1a0d83a8811c00255e1f2ca6f21" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.656569 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"883ae177aed2c9f33e7e63846c8d0025ba4ea1a0d83a8811c00255e1f2ca6f21"} err="failed to get container status \"883ae177aed2c9f33e7e63846c8d0025ba4ea1a0d83a8811c00255e1f2ca6f21\": rpc error: code = NotFound desc = could not find container \"883ae177aed2c9f33e7e63846c8d0025ba4ea1a0d83a8811c00255e1f2ca6f21\": container with ID starting with 883ae177aed2c9f33e7e63846c8d0025ba4ea1a0d83a8811c00255e1f2ca6f21 not found: ID does not exist" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.656598 4826 scope.go:117] "RemoveContainer" containerID="08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.656806 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb"} err="failed to get container status \"08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb\": rpc error: code = 
NotFound desc = could not find container \"08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb\": container with ID starting with 08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb not found: ID does not exist" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.656830 4826 scope.go:117] "RemoveContainer" containerID="4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.657014 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2"} err="failed to get container status \"4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2\": rpc error: code = NotFound desc = could not find container \"4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2\": container with ID starting with 4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2 not found: ID does not exist" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.657037 4826 scope.go:117] "RemoveContainer" containerID="ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.657227 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317"} err="failed to get container status \"ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317\": rpc error: code = NotFound desc = could not find container \"ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317\": container with ID starting with ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317 not found: ID does not exist" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.657251 4826 scope.go:117] "RemoveContainer" containerID="57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24" Jan 29 06:55:18 crc 
kubenswrapper[4826]: I0129 06:55:18.657474 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24"} err="failed to get container status \"57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24\": rpc error: code = NotFound desc = could not find container \"57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24\": container with ID starting with 57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24 not found: ID does not exist" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.657493 4826 scope.go:117] "RemoveContainer" containerID="436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.657755 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a"} err="failed to get container status \"436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a\": rpc error: code = NotFound desc = could not find container \"436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a\": container with ID starting with 436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a not found: ID does not exist" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.657779 4826 scope.go:117] "RemoveContainer" containerID="a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.658610 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a"} err="failed to get container status \"a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a\": rpc error: code = NotFound desc = could not find container \"a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a\": container 
with ID starting with a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a not found: ID does not exist" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.658638 4826 scope.go:117] "RemoveContainer" containerID="8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.658916 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220"} err="failed to get container status \"8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220\": rpc error: code = NotFound desc = could not find container \"8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220\": container with ID starting with 8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220 not found: ID does not exist" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.658940 4826 scope.go:117] "RemoveContainer" containerID="82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.659353 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b"} err="failed to get container status \"82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b\": rpc error: code = NotFound desc = could not find container \"82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b\": container with ID starting with 82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b not found: ID does not exist" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.659378 4826 scope.go:117] "RemoveContainer" containerID="6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.659845 4826 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7"} err="failed to get container status \"6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\": rpc error: code = NotFound desc = could not find container \"6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\": container with ID starting with 6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7 not found: ID does not exist" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.659872 4826 scope.go:117] "RemoveContainer" containerID="883ae177aed2c9f33e7e63846c8d0025ba4ea1a0d83a8811c00255e1f2ca6f21" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.660165 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"883ae177aed2c9f33e7e63846c8d0025ba4ea1a0d83a8811c00255e1f2ca6f21"} err="failed to get container status \"883ae177aed2c9f33e7e63846c8d0025ba4ea1a0d83a8811c00255e1f2ca6f21\": rpc error: code = NotFound desc = could not find container \"883ae177aed2c9f33e7e63846c8d0025ba4ea1a0d83a8811c00255e1f2ca6f21\": container with ID starting with 883ae177aed2c9f33e7e63846c8d0025ba4ea1a0d83a8811c00255e1f2ca6f21 not found: ID does not exist" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.660190 4826 scope.go:117] "RemoveContainer" containerID="08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.660497 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb"} err="failed to get container status \"08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb\": rpc error: code = NotFound desc = could not find container \"08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb\": container with ID starting with 08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb not found: ID does not 
exist" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.660535 4826 scope.go:117] "RemoveContainer" containerID="4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.660938 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2"} err="failed to get container status \"4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2\": rpc error: code = NotFound desc = could not find container \"4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2\": container with ID starting with 4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2 not found: ID does not exist" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.660967 4826 scope.go:117] "RemoveContainer" containerID="ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.661253 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317"} err="failed to get container status \"ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317\": rpc error: code = NotFound desc = could not find container \"ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317\": container with ID starting with ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317 not found: ID does not exist" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.661280 4826 scope.go:117] "RemoveContainer" containerID="57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.661635 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24"} err="failed to get container status 
\"57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24\": rpc error: code = NotFound desc = could not find container \"57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24\": container with ID starting with 57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24 not found: ID does not exist" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.661697 4826 scope.go:117] "RemoveContainer" containerID="436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.662039 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a"} err="failed to get container status \"436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a\": rpc error: code = NotFound desc = could not find container \"436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a\": container with ID starting with 436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a not found: ID does not exist" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.662069 4826 scope.go:117] "RemoveContainer" containerID="a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.662556 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a"} err="failed to get container status \"a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a\": rpc error: code = NotFound desc = could not find container \"a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a\": container with ID starting with a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a not found: ID does not exist" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.662586 4826 scope.go:117] "RemoveContainer" 
containerID="8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.662851 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220"} err="failed to get container status \"8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220\": rpc error: code = NotFound desc = could not find container \"8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220\": container with ID starting with 8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220 not found: ID does not exist" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.662875 4826 scope.go:117] "RemoveContainer" containerID="82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.663101 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b"} err="failed to get container status \"82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b\": rpc error: code = NotFound desc = could not find container \"82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b\": container with ID starting with 82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b not found: ID does not exist" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.663125 4826 scope.go:117] "RemoveContainer" containerID="6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.663360 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7"} err="failed to get container status \"6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\": rpc error: code = NotFound desc = could 
not find container \"6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\": container with ID starting with 6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7 not found: ID does not exist" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.663382 4826 scope.go:117] "RemoveContainer" containerID="883ae177aed2c9f33e7e63846c8d0025ba4ea1a0d83a8811c00255e1f2ca6f21" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.663655 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"883ae177aed2c9f33e7e63846c8d0025ba4ea1a0d83a8811c00255e1f2ca6f21"} err="failed to get container status \"883ae177aed2c9f33e7e63846c8d0025ba4ea1a0d83a8811c00255e1f2ca6f21\": rpc error: code = NotFound desc = could not find container \"883ae177aed2c9f33e7e63846c8d0025ba4ea1a0d83a8811c00255e1f2ca6f21\": container with ID starting with 883ae177aed2c9f33e7e63846c8d0025ba4ea1a0d83a8811c00255e1f2ca6f21 not found: ID does not exist" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.663680 4826 scope.go:117] "RemoveContainer" containerID="08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.663917 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb"} err="failed to get container status \"08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb\": rpc error: code = NotFound desc = could not find container \"08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb\": container with ID starting with 08e66465840e0a56e33acff5c55b688dab7bb71907a4ae4ece0b53cfa96616eb not found: ID does not exist" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.663946 4826 scope.go:117] "RemoveContainer" containerID="4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 
06:55:18.664249 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2"} err="failed to get container status \"4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2\": rpc error: code = NotFound desc = could not find container \"4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2\": container with ID starting with 4a4feb66b17c90238b3267c4acd5cee5d21b54f54e336e80dcb0683396a425e2 not found: ID does not exist" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.664273 4826 scope.go:117] "RemoveContainer" containerID="ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.664540 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317"} err="failed to get container status \"ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317\": rpc error: code = NotFound desc = could not find container \"ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317\": container with ID starting with ebb3dd0a57544911c8a58708f2dccfe24fe350ac881de56cd4f1589585bb2317 not found: ID does not exist" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.664566 4826 scope.go:117] "RemoveContainer" containerID="57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.664846 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24"} err="failed to get container status \"57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24\": rpc error: code = NotFound desc = could not find container \"57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24\": container with ID starting with 
57374315c5a6925f3b00bc50b0d1aabf3e9a5b5b97eef5e9c7dfa21e2e49de24 not found: ID does not exist" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.664900 4826 scope.go:117] "RemoveContainer" containerID="436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.665249 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a"} err="failed to get container status \"436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a\": rpc error: code = NotFound desc = could not find container \"436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a\": container with ID starting with 436177a60ace473e756dc96e38493993ccc5c12f2bc2521867891ea04e42581a not found: ID does not exist" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.665303 4826 scope.go:117] "RemoveContainer" containerID="a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.665582 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a"} err="failed to get container status \"a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a\": rpc error: code = NotFound desc = could not find container \"a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a\": container with ID starting with a927f147fe052966ae72d59630c98e96f69c793ed81c285cdecfb2da142cd15a not found: ID does not exist" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.665608 4826 scope.go:117] "RemoveContainer" containerID="8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.665902 4826 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220"} err="failed to get container status \"8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220\": rpc error: code = NotFound desc = could not find container \"8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220\": container with ID starting with 8370fad2ba5e698bcbf91fcb077f88e08f585181b6d42cbcd77cbd2043bcd220 not found: ID does not exist" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.665927 4826 scope.go:117] "RemoveContainer" containerID="82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.666106 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b"} err="failed to get container status \"82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b\": rpc error: code = NotFound desc = could not find container \"82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b\": container with ID starting with 82eb2ed52b81d9f8342a87ea3c527307b40d08cf1783255025a0ddfc6998ee9b not found: ID does not exist" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.666129 4826 scope.go:117] "RemoveContainer" containerID="6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.666380 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7"} err="failed to get container status \"6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\": rpc error: code = NotFound desc = could not find container \"6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7\": container with ID starting with 6523445c1c399c40fa822846abb46a43d3fd2e0507b078be3dfd8f0676f695f7 not found: ID does not 
exist" Jan 29 06:55:18 crc kubenswrapper[4826]: I0129 06:55:18.816810 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f0c380f-ebc1-482f-9a91-8b08033eadf2" path="/var/lib/kubelet/pods/6f0c380f-ebc1-482f-9a91-8b08033eadf2/volumes" Jan 29 06:55:19 crc kubenswrapper[4826]: I0129 06:55:19.380682 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kdv64_fa65a108-1826-4e74-8e8a-1eae605298f3/kube-multus/1.log" Jan 29 06:55:19 crc kubenswrapper[4826]: I0129 06:55:19.381096 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kdv64" event={"ID":"fa65a108-1826-4e74-8e8a-1eae605298f3","Type":"ContainerStarted","Data":"ce24a5ae6a49ea15703391a1a11c7e850fe12c0a207d236bf56871bf80f39f52"} Jan 29 06:55:19 crc kubenswrapper[4826]: I0129 06:55:19.383794 4826 generic.go:334] "Generic (PLEG): container finished" podID="25d04361-6620-4379-90ae-b88e5a8cd03d" containerID="62a5fec2593c8a4c5f67d9ba2e1692d4b031b8fcf9c853e283a91a6ac0b0b7e2" exitCode=0 Jan 29 06:55:19 crc kubenswrapper[4826]: I0129 06:55:19.383844 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" event={"ID":"25d04361-6620-4379-90ae-b88e5a8cd03d","Type":"ContainerDied","Data":"62a5fec2593c8a4c5f67d9ba2e1692d4b031b8fcf9c853e283a91a6ac0b0b7e2"} Jan 29 06:55:19 crc kubenswrapper[4826]: I0129 06:55:19.383915 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" event={"ID":"25d04361-6620-4379-90ae-b88e5a8cd03d","Type":"ContainerStarted","Data":"9e8cea47ed9da9fc9b58a61e5b0b7892f6844db337cc13484d4f156acef7c877"} Jan 29 06:55:20 crc kubenswrapper[4826]: I0129 06:55:20.393160 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" event={"ID":"25d04361-6620-4379-90ae-b88e5a8cd03d","Type":"ContainerStarted","Data":"5b5b697e48c26b0e09d6118e660b3c17026521b47fb219f70073a6fed8f9d544"} Jan 29 
06:55:20 crc kubenswrapper[4826]: I0129 06:55:20.393838 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" event={"ID":"25d04361-6620-4379-90ae-b88e5a8cd03d","Type":"ContainerStarted","Data":"836af170c42f6be67aec1d23457c5993f2f17a02459f0e1086b1561dbb1c5928"} Jan 29 06:55:20 crc kubenswrapper[4826]: I0129 06:55:20.393850 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" event={"ID":"25d04361-6620-4379-90ae-b88e5a8cd03d","Type":"ContainerStarted","Data":"1f7ef4a837ca7252cb0e35805270d7850de01fc695cc47733900b5ebe4a811ac"} Jan 29 06:55:20 crc kubenswrapper[4826]: I0129 06:55:20.393883 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" event={"ID":"25d04361-6620-4379-90ae-b88e5a8cd03d","Type":"ContainerStarted","Data":"0e84bf3a27a7c0b37bab015af79ce54ee365ab47eff2a9681895444ed0a89d03"} Jan 29 06:55:20 crc kubenswrapper[4826]: I0129 06:55:20.393920 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" event={"ID":"25d04361-6620-4379-90ae-b88e5a8cd03d","Type":"ContainerStarted","Data":"5df8d67dfbe6a709fd5ea0c3532198cb5135d3866f71639f69a1e79c9a6d39ef"} Jan 29 06:55:20 crc kubenswrapper[4826]: I0129 06:55:20.393930 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" event={"ID":"25d04361-6620-4379-90ae-b88e5a8cd03d","Type":"ContainerStarted","Data":"92c9899e06d50491005f991cd28fba14dcda97287393ba317ad7110367e9264a"} Jan 29 06:55:22 crc kubenswrapper[4826]: I0129 06:55:22.143343 4826 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 29 06:55:23 crc kubenswrapper[4826]: I0129 06:55:23.419556 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" 
event={"ID":"25d04361-6620-4379-90ae-b88e5a8cd03d","Type":"ContainerStarted","Data":"96b07410d8243fad44eccde60f48d902dbc23323459bab515a26c7ff782844f8"} Jan 29 06:55:24 crc kubenswrapper[4826]: I0129 06:55:24.633577 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-z4wvn"] Jan 29 06:55:24 crc kubenswrapper[4826]: I0129 06:55:24.634741 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-z4wvn" Jan 29 06:55:24 crc kubenswrapper[4826]: I0129 06:55:24.638038 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 29 06:55:24 crc kubenswrapper[4826]: I0129 06:55:24.638637 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 29 06:55:24 crc kubenswrapper[4826]: I0129 06:55:24.640163 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 29 06:55:24 crc kubenswrapper[4826]: I0129 06:55:24.640230 4826 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-9wdl9" Jan 29 06:55:24 crc kubenswrapper[4826]: I0129 06:55:24.723605 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kss58\" (UniqueName: \"kubernetes.io/projected/1d160df0-c2ef-4694-a83f-bc8fef8a272f-kube-api-access-kss58\") pod \"crc-storage-crc-z4wvn\" (UID: \"1d160df0-c2ef-4694-a83f-bc8fef8a272f\") " pod="crc-storage/crc-storage-crc-z4wvn" Jan 29 06:55:24 crc kubenswrapper[4826]: I0129 06:55:24.723700 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/1d160df0-c2ef-4694-a83f-bc8fef8a272f-crc-storage\") pod \"crc-storage-crc-z4wvn\" (UID: \"1d160df0-c2ef-4694-a83f-bc8fef8a272f\") " pod="crc-storage/crc-storage-crc-z4wvn" Jan 29 06:55:24 crc 
kubenswrapper[4826]: I0129 06:55:24.723788 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/1d160df0-c2ef-4694-a83f-bc8fef8a272f-node-mnt\") pod \"crc-storage-crc-z4wvn\" (UID: \"1d160df0-c2ef-4694-a83f-bc8fef8a272f\") " pod="crc-storage/crc-storage-crc-z4wvn"
Jan 29 06:55:24 crc kubenswrapper[4826]: I0129 06:55:24.824871 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/1d160df0-c2ef-4694-a83f-bc8fef8a272f-node-mnt\") pod \"crc-storage-crc-z4wvn\" (UID: \"1d160df0-c2ef-4694-a83f-bc8fef8a272f\") " pod="crc-storage/crc-storage-crc-z4wvn"
Jan 29 06:55:24 crc kubenswrapper[4826]: I0129 06:55:24.824966 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kss58\" (UniqueName: \"kubernetes.io/projected/1d160df0-c2ef-4694-a83f-bc8fef8a272f-kube-api-access-kss58\") pod \"crc-storage-crc-z4wvn\" (UID: \"1d160df0-c2ef-4694-a83f-bc8fef8a272f\") " pod="crc-storage/crc-storage-crc-z4wvn"
Jan 29 06:55:24 crc kubenswrapper[4826]: I0129 06:55:24.824996 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/1d160df0-c2ef-4694-a83f-bc8fef8a272f-crc-storage\") pod \"crc-storage-crc-z4wvn\" (UID: \"1d160df0-c2ef-4694-a83f-bc8fef8a272f\") " pod="crc-storage/crc-storage-crc-z4wvn"
Jan 29 06:55:24 crc kubenswrapper[4826]: I0129 06:55:24.825186 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/1d160df0-c2ef-4694-a83f-bc8fef8a272f-node-mnt\") pod \"crc-storage-crc-z4wvn\" (UID: \"1d160df0-c2ef-4694-a83f-bc8fef8a272f\") " pod="crc-storage/crc-storage-crc-z4wvn"
Jan 29 06:55:24 crc kubenswrapper[4826]: I0129 06:55:24.825900 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/1d160df0-c2ef-4694-a83f-bc8fef8a272f-crc-storage\") pod \"crc-storage-crc-z4wvn\" (UID: \"1d160df0-c2ef-4694-a83f-bc8fef8a272f\") " pod="crc-storage/crc-storage-crc-z4wvn"
Jan 29 06:55:24 crc kubenswrapper[4826]: I0129 06:55:24.856915 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kss58\" (UniqueName: \"kubernetes.io/projected/1d160df0-c2ef-4694-a83f-bc8fef8a272f-kube-api-access-kss58\") pod \"crc-storage-crc-z4wvn\" (UID: \"1d160df0-c2ef-4694-a83f-bc8fef8a272f\") " pod="crc-storage/crc-storage-crc-z4wvn"
Jan 29 06:55:24 crc kubenswrapper[4826]: I0129 06:55:24.974450 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-z4wvn"
Jan 29 06:55:25 crc kubenswrapper[4826]: E0129 06:55:25.005208 4826 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-z4wvn_crc-storage_1d160df0-c2ef-4694-a83f-bc8fef8a272f_0(28ace9c78b8846100021002261599f8a51f923002b6bd662968cecbce0bf4cee): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 29 06:55:25 crc kubenswrapper[4826]: E0129 06:55:25.005352 4826 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-z4wvn_crc-storage_1d160df0-c2ef-4694-a83f-bc8fef8a272f_0(28ace9c78b8846100021002261599f8a51f923002b6bd662968cecbce0bf4cee): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-z4wvn"
Jan 29 06:55:25 crc kubenswrapper[4826]: E0129 06:55:25.005394 4826 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-z4wvn_crc-storage_1d160df0-c2ef-4694-a83f-bc8fef8a272f_0(28ace9c78b8846100021002261599f8a51f923002b6bd662968cecbce0bf4cee): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-z4wvn"
Jan 29 06:55:25 crc kubenswrapper[4826]: E0129 06:55:25.005481 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-z4wvn_crc-storage(1d160df0-c2ef-4694-a83f-bc8fef8a272f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-z4wvn_crc-storage(1d160df0-c2ef-4694-a83f-bc8fef8a272f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-z4wvn_crc-storage_1d160df0-c2ef-4694-a83f-bc8fef8a272f_0(28ace9c78b8846100021002261599f8a51f923002b6bd662968cecbce0bf4cee): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-z4wvn" podUID="1d160df0-c2ef-4694-a83f-bc8fef8a272f"
Jan 29 06:55:25 crc kubenswrapper[4826]: I0129 06:55:25.441392 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" event={"ID":"25d04361-6620-4379-90ae-b88e5a8cd03d","Type":"ContainerStarted","Data":"6cfe89015989ef955771cc381d10f29e88cc018c93d8bf08781ba836002db55f"}
Jan 29 06:55:25 crc kubenswrapper[4826]: I0129 06:55:25.441946 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nc95s"
Jan 29 06:55:25 crc kubenswrapper[4826]: I0129 06:55:25.442028 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nc95s"
Jan 29 06:55:25 crc kubenswrapper[4826]: I0129 06:55:25.442060 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nc95s"
Jan 29 06:55:25 crc kubenswrapper[4826]: I0129 06:55:25.483787 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nc95s"
Jan 29 06:55:25 crc kubenswrapper[4826]: I0129 06:55:25.500698 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-nc95s" podStartSLOduration=7.500668445 podStartE2EDuration="7.500668445s" podCreationTimestamp="2026-01-29 06:55:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:55:25.489071285 +0000 UTC m=+709.350864394" watchObservedRunningTime="2026-01-29 06:55:25.500668445 +0000 UTC m=+709.362461554"
Jan 29 06:55:25 crc kubenswrapper[4826]: I0129 06:55:25.507519 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nc95s"
Jan 29 06:55:26 crc kubenswrapper[4826]: I0129 06:55:26.775336 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-z4wvn"]
Jan 29 06:55:26 crc kubenswrapper[4826]: I0129 06:55:26.775783 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-z4wvn"
Jan 29 06:55:26 crc kubenswrapper[4826]: I0129 06:55:26.776362 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-z4wvn"
Jan 29 06:55:26 crc kubenswrapper[4826]: E0129 06:55:26.833605 4826 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-z4wvn_crc-storage_1d160df0-c2ef-4694-a83f-bc8fef8a272f_0(ff1055827bc54e60fbbb032de11c07c88272ca2c67c554ffedfbbcce5a0604cc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 29 06:55:26 crc kubenswrapper[4826]: E0129 06:55:26.833677 4826 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-z4wvn_crc-storage_1d160df0-c2ef-4694-a83f-bc8fef8a272f_0(ff1055827bc54e60fbbb032de11c07c88272ca2c67c554ffedfbbcce5a0604cc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-z4wvn"
Jan 29 06:55:26 crc kubenswrapper[4826]: E0129 06:55:26.833702 4826 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-z4wvn_crc-storage_1d160df0-c2ef-4694-a83f-bc8fef8a272f_0(ff1055827bc54e60fbbb032de11c07c88272ca2c67c554ffedfbbcce5a0604cc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-z4wvn"
Jan 29 06:55:26 crc kubenswrapper[4826]: E0129 06:55:26.833750 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-z4wvn_crc-storage(1d160df0-c2ef-4694-a83f-bc8fef8a272f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-z4wvn_crc-storage(1d160df0-c2ef-4694-a83f-bc8fef8a272f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-z4wvn_crc-storage_1d160df0-c2ef-4694-a83f-bc8fef8a272f_0(ff1055827bc54e60fbbb032de11c07c88272ca2c67c554ffedfbbcce5a0604cc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-z4wvn" podUID="1d160df0-c2ef-4694-a83f-bc8fef8a272f"
Jan 29 06:55:39 crc kubenswrapper[4826]: I0129 06:55:39.808122 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-z4wvn"
Jan 29 06:55:39 crc kubenswrapper[4826]: I0129 06:55:39.810754 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-z4wvn"
Jan 29 06:55:40 crc kubenswrapper[4826]: I0129 06:55:40.099119 4826 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 29 06:55:40 crc kubenswrapper[4826]: I0129 06:55:40.101788 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-z4wvn"]
Jan 29 06:55:40 crc kubenswrapper[4826]: I0129 06:55:40.560850 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-z4wvn" event={"ID":"1d160df0-c2ef-4694-a83f-bc8fef8a272f","Type":"ContainerStarted","Data":"bd840d53f259f4c1dba84a50b0987459f7da3455b3cc0e89cfde2b56c4af5bc0"}
Jan 29 06:55:42 crc kubenswrapper[4826]: I0129 06:55:42.577627 4826 generic.go:334] "Generic (PLEG): container finished" podID="1d160df0-c2ef-4694-a83f-bc8fef8a272f" containerID="6c05d6f827d0d2f9a0b07269fc670a11f080cc6df58f7041b420ccc38ecbd9d8" exitCode=0
Jan 29 06:55:42 crc kubenswrapper[4826]: I0129 06:55:42.577740 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-z4wvn" event={"ID":"1d160df0-c2ef-4694-a83f-bc8fef8a272f","Type":"ContainerDied","Data":"6c05d6f827d0d2f9a0b07269fc670a11f080cc6df58f7041b420ccc38ecbd9d8"}
Jan 29 06:55:43 crc kubenswrapper[4826]: I0129 06:55:43.860128 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-z4wvn"
Jan 29 06:55:43 crc kubenswrapper[4826]: I0129 06:55:43.956138 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/1d160df0-c2ef-4694-a83f-bc8fef8a272f-crc-storage\") pod \"1d160df0-c2ef-4694-a83f-bc8fef8a272f\" (UID: \"1d160df0-c2ef-4694-a83f-bc8fef8a272f\") "
Jan 29 06:55:43 crc kubenswrapper[4826]: I0129 06:55:43.956838 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kss58\" (UniqueName: \"kubernetes.io/projected/1d160df0-c2ef-4694-a83f-bc8fef8a272f-kube-api-access-kss58\") pod \"1d160df0-c2ef-4694-a83f-bc8fef8a272f\" (UID: \"1d160df0-c2ef-4694-a83f-bc8fef8a272f\") "
Jan 29 06:55:43 crc kubenswrapper[4826]: I0129 06:55:43.956995 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/1d160df0-c2ef-4694-a83f-bc8fef8a272f-node-mnt\") pod \"1d160df0-c2ef-4694-a83f-bc8fef8a272f\" (UID: \"1d160df0-c2ef-4694-a83f-bc8fef8a272f\") "
Jan 29 06:55:43 crc kubenswrapper[4826]: I0129 06:55:43.957070 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d160df0-c2ef-4694-a83f-bc8fef8a272f-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "1d160df0-c2ef-4694-a83f-bc8fef8a272f" (UID: "1d160df0-c2ef-4694-a83f-bc8fef8a272f"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 06:55:43 crc kubenswrapper[4826]: I0129 06:55:43.957586 4826 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/1d160df0-c2ef-4694-a83f-bc8fef8a272f-node-mnt\") on node \"crc\" DevicePath \"\""
Jan 29 06:55:43 crc kubenswrapper[4826]: I0129 06:55:43.965510 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d160df0-c2ef-4694-a83f-bc8fef8a272f-kube-api-access-kss58" (OuterVolumeSpecName: "kube-api-access-kss58") pod "1d160df0-c2ef-4694-a83f-bc8fef8a272f" (UID: "1d160df0-c2ef-4694-a83f-bc8fef8a272f"). InnerVolumeSpecName "kube-api-access-kss58". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 06:55:43 crc kubenswrapper[4826]: I0129 06:55:43.968819 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d160df0-c2ef-4694-a83f-bc8fef8a272f-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "1d160df0-c2ef-4694-a83f-bc8fef8a272f" (UID: "1d160df0-c2ef-4694-a83f-bc8fef8a272f"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 06:55:44 crc kubenswrapper[4826]: I0129 06:55:44.059430 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kss58\" (UniqueName: \"kubernetes.io/projected/1d160df0-c2ef-4694-a83f-bc8fef8a272f-kube-api-access-kss58\") on node \"crc\" DevicePath \"\""
Jan 29 06:55:44 crc kubenswrapper[4826]: I0129 06:55:44.059492 4826 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/1d160df0-c2ef-4694-a83f-bc8fef8a272f-crc-storage\") on node \"crc\" DevicePath \"\""
Jan 29 06:55:44 crc kubenswrapper[4826]: I0129 06:55:44.593583 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-z4wvn" event={"ID":"1d160df0-c2ef-4694-a83f-bc8fef8a272f","Type":"ContainerDied","Data":"bd840d53f259f4c1dba84a50b0987459f7da3455b3cc0e89cfde2b56c4af5bc0"}
Jan 29 06:55:44 crc kubenswrapper[4826]: I0129 06:55:44.593642 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd840d53f259f4c1dba84a50b0987459f7da3455b3cc0e89cfde2b56c4af5bc0"
Jan 29 06:55:44 crc kubenswrapper[4826]: I0129 06:55:44.593641 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-z4wvn"
Jan 29 06:55:48 crc kubenswrapper[4826]: I0129 06:55:48.389696 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nc95s"
Jan 29 06:55:51 crc kubenswrapper[4826]: I0129 06:55:51.862581 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lmbx5"]
Jan 29 06:55:51 crc kubenswrapper[4826]: E0129 06:55:51.863201 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d160df0-c2ef-4694-a83f-bc8fef8a272f" containerName="storage"
Jan 29 06:55:51 crc kubenswrapper[4826]: I0129 06:55:51.863219 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d160df0-c2ef-4694-a83f-bc8fef8a272f" containerName="storage"
Jan 29 06:55:51 crc kubenswrapper[4826]: I0129 06:55:51.863386 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d160df0-c2ef-4694-a83f-bc8fef8a272f" containerName="storage"
Jan 29 06:55:51 crc kubenswrapper[4826]: I0129 06:55:51.864555 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lmbx5"
Jan 29 06:55:51 crc kubenswrapper[4826]: I0129 06:55:51.869963 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 29 06:55:51 crc kubenswrapper[4826]: I0129 06:55:51.884097 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lmbx5"]
Jan 29 06:55:51 crc kubenswrapper[4826]: I0129 06:55:51.985566 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssmkc\" (UniqueName: \"kubernetes.io/projected/615a47fa-75b7-4c22-9e69-cc70f9e3a132-kube-api-access-ssmkc\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lmbx5\" (UID: \"615a47fa-75b7-4c22-9e69-cc70f9e3a132\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lmbx5"
Jan 29 06:55:51 crc kubenswrapper[4826]: I0129 06:55:51.985617 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/615a47fa-75b7-4c22-9e69-cc70f9e3a132-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lmbx5\" (UID: \"615a47fa-75b7-4c22-9e69-cc70f9e3a132\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lmbx5"
Jan 29 06:55:51 crc kubenswrapper[4826]: I0129 06:55:51.985639 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/615a47fa-75b7-4c22-9e69-cc70f9e3a132-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lmbx5\" (UID: \"615a47fa-75b7-4c22-9e69-cc70f9e3a132\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lmbx5"
Jan 29 06:55:52 crc kubenswrapper[4826]: I0129 06:55:52.086629 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssmkc\" (UniqueName: \"kubernetes.io/projected/615a47fa-75b7-4c22-9e69-cc70f9e3a132-kube-api-access-ssmkc\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lmbx5\" (UID: \"615a47fa-75b7-4c22-9e69-cc70f9e3a132\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lmbx5"
Jan 29 06:55:52 crc kubenswrapper[4826]: I0129 06:55:52.086702 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/615a47fa-75b7-4c22-9e69-cc70f9e3a132-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lmbx5\" (UID: \"615a47fa-75b7-4c22-9e69-cc70f9e3a132\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lmbx5"
Jan 29 06:55:52 crc kubenswrapper[4826]: I0129 06:55:52.086735 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/615a47fa-75b7-4c22-9e69-cc70f9e3a132-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lmbx5\" (UID: \"615a47fa-75b7-4c22-9e69-cc70f9e3a132\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lmbx5"
Jan 29 06:55:52 crc kubenswrapper[4826]: I0129 06:55:52.087364 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/615a47fa-75b7-4c22-9e69-cc70f9e3a132-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lmbx5\" (UID: \"615a47fa-75b7-4c22-9e69-cc70f9e3a132\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lmbx5"
Jan 29 06:55:52 crc kubenswrapper[4826]: I0129 06:55:52.087605 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/615a47fa-75b7-4c22-9e69-cc70f9e3a132-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lmbx5\" (UID: \"615a47fa-75b7-4c22-9e69-cc70f9e3a132\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lmbx5"
Jan 29 06:55:52 crc kubenswrapper[4826]: I0129 06:55:52.121254 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssmkc\" (UniqueName: \"kubernetes.io/projected/615a47fa-75b7-4c22-9e69-cc70f9e3a132-kube-api-access-ssmkc\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lmbx5\" (UID: \"615a47fa-75b7-4c22-9e69-cc70f9e3a132\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lmbx5"
Jan 29 06:55:52 crc kubenswrapper[4826]: I0129 06:55:52.179890 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lmbx5"
Jan 29 06:55:52 crc kubenswrapper[4826]: I0129 06:55:52.419489 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lmbx5"]
Jan 29 06:55:52 crc kubenswrapper[4826]: I0129 06:55:52.648660 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lmbx5" event={"ID":"615a47fa-75b7-4c22-9e69-cc70f9e3a132","Type":"ContainerStarted","Data":"2dea28dc531dad1503a45de2da8b4704ce30f3aea0590f25fa50bc080c5aed2d"}
Jan 29 06:55:52 crc kubenswrapper[4826]: I0129 06:55:52.648755 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lmbx5" event={"ID":"615a47fa-75b7-4c22-9e69-cc70f9e3a132","Type":"ContainerStarted","Data":"fd8c4adb772b81e9ee974dbd37c4aa260f90e835a6d6e229ca6ea378a400311e"}
Jan 29 06:55:53 crc kubenswrapper[4826]: I0129 06:55:53.659261 4826 generic.go:334] "Generic (PLEG): container finished" podID="615a47fa-75b7-4c22-9e69-cc70f9e3a132" containerID="2dea28dc531dad1503a45de2da8b4704ce30f3aea0590f25fa50bc080c5aed2d" exitCode=0
Jan 29 06:55:53 crc kubenswrapper[4826]: I0129 06:55:53.659383 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lmbx5" event={"ID":"615a47fa-75b7-4c22-9e69-cc70f9e3a132","Type":"ContainerDied","Data":"2dea28dc531dad1503a45de2da8b4704ce30f3aea0590f25fa50bc080c5aed2d"}
Jan 29 06:55:54 crc kubenswrapper[4826]: I0129 06:55:54.040357 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x2w8g"]
Jan 29 06:55:54 crc kubenswrapper[4826]: I0129 06:55:54.042334 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x2w8g"
Jan 29 06:55:54 crc kubenswrapper[4826]: I0129 06:55:54.052362 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x2w8g"]
Jan 29 06:55:54 crc kubenswrapper[4826]: I0129 06:55:54.122808 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztkjv\" (UniqueName: \"kubernetes.io/projected/4c71fb9d-1000-4eb9-8418-7959457d051f-kube-api-access-ztkjv\") pod \"redhat-operators-x2w8g\" (UID: \"4c71fb9d-1000-4eb9-8418-7959457d051f\") " pod="openshift-marketplace/redhat-operators-x2w8g"
Jan 29 06:55:54 crc kubenswrapper[4826]: I0129 06:55:54.122993 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c71fb9d-1000-4eb9-8418-7959457d051f-utilities\") pod \"redhat-operators-x2w8g\" (UID: \"4c71fb9d-1000-4eb9-8418-7959457d051f\") " pod="openshift-marketplace/redhat-operators-x2w8g"
Jan 29 06:55:54 crc kubenswrapper[4826]: I0129 06:55:54.123035 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c71fb9d-1000-4eb9-8418-7959457d051f-catalog-content\") pod \"redhat-operators-x2w8g\" (UID: \"4c71fb9d-1000-4eb9-8418-7959457d051f\") " pod="openshift-marketplace/redhat-operators-x2w8g"
Jan 29 06:55:54 crc kubenswrapper[4826]: I0129 06:55:54.225006 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c71fb9d-1000-4eb9-8418-7959457d051f-utilities\") pod \"redhat-operators-x2w8g\" (UID: \"4c71fb9d-1000-4eb9-8418-7959457d051f\") " pod="openshift-marketplace/redhat-operators-x2w8g"
Jan 29 06:55:54 crc kubenswrapper[4826]: I0129 06:55:54.225179 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c71fb9d-1000-4eb9-8418-7959457d051f-catalog-content\") pod \"redhat-operators-x2w8g\" (UID: \"4c71fb9d-1000-4eb9-8418-7959457d051f\") " pod="openshift-marketplace/redhat-operators-x2w8g"
Jan 29 06:55:54 crc kubenswrapper[4826]: I0129 06:55:54.225405 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztkjv\" (UniqueName: \"kubernetes.io/projected/4c71fb9d-1000-4eb9-8418-7959457d051f-kube-api-access-ztkjv\") pod \"redhat-operators-x2w8g\" (UID: \"4c71fb9d-1000-4eb9-8418-7959457d051f\") " pod="openshift-marketplace/redhat-operators-x2w8g"
Jan 29 06:55:54 crc kubenswrapper[4826]: I0129 06:55:54.227022 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c71fb9d-1000-4eb9-8418-7959457d051f-utilities\") pod \"redhat-operators-x2w8g\" (UID: \"4c71fb9d-1000-4eb9-8418-7959457d051f\") " pod="openshift-marketplace/redhat-operators-x2w8g"
Jan 29 06:55:54 crc kubenswrapper[4826]: I0129 06:55:54.227531 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c71fb9d-1000-4eb9-8418-7959457d051f-catalog-content\") pod \"redhat-operators-x2w8g\" (UID: \"4c71fb9d-1000-4eb9-8418-7959457d051f\") " pod="openshift-marketplace/redhat-operators-x2w8g"
Jan 29 06:55:54 crc kubenswrapper[4826]: I0129 06:55:54.274157 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztkjv\" (UniqueName: \"kubernetes.io/projected/4c71fb9d-1000-4eb9-8418-7959457d051f-kube-api-access-ztkjv\") pod \"redhat-operators-x2w8g\" (UID: \"4c71fb9d-1000-4eb9-8418-7959457d051f\") " pod="openshift-marketplace/redhat-operators-x2w8g"
Jan 29 06:55:54 crc kubenswrapper[4826]: I0129 06:55:54.384126 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x2w8g"
Jan 29 06:55:54 crc kubenswrapper[4826]: I0129 06:55:54.574580 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x2w8g"]
Jan 29 06:55:54 crc kubenswrapper[4826]: I0129 06:55:54.665069 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x2w8g" event={"ID":"4c71fb9d-1000-4eb9-8418-7959457d051f","Type":"ContainerStarted","Data":"19e6561727ac1a16f7264af7e86cd6cf066ca1d7b63a714b8c686906e664424a"}
Jan 29 06:55:55 crc kubenswrapper[4826]: I0129 06:55:55.674393 4826 generic.go:334] "Generic (PLEG): container finished" podID="4c71fb9d-1000-4eb9-8418-7959457d051f" containerID="bfe65fd6638a12202519c6fee2eb639f68d9a5ac4864d36affaa3d793158b03d" exitCode=0
Jan 29 06:55:55 crc kubenswrapper[4826]: I0129 06:55:55.674513 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x2w8g" event={"ID":"4c71fb9d-1000-4eb9-8418-7959457d051f","Type":"ContainerDied","Data":"bfe65fd6638a12202519c6fee2eb639f68d9a5ac4864d36affaa3d793158b03d"}
Jan 29 06:55:55 crc kubenswrapper[4826]: I0129 06:55:55.678586 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lmbx5" event={"ID":"615a47fa-75b7-4c22-9e69-cc70f9e3a132","Type":"ContainerStarted","Data":"2f56dfdebdc1e14f274b62b9216ec36ec415f50187f1e3b47a2743d1a3d7770d"}
Jan 29 06:55:56 crc kubenswrapper[4826]: I0129 06:55:56.688147 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x2w8g" event={"ID":"4c71fb9d-1000-4eb9-8418-7959457d051f","Type":"ContainerStarted","Data":"134feb682c8faa0059b9ae6fb1a52181abde925fd97d40e6379b3bf3f4fb3d70"}
Jan 29 06:55:56 crc kubenswrapper[4826]: I0129 06:55:56.691469 4826 generic.go:334] "Generic (PLEG): container finished" podID="615a47fa-75b7-4c22-9e69-cc70f9e3a132" containerID="2f56dfdebdc1e14f274b62b9216ec36ec415f50187f1e3b47a2743d1a3d7770d" exitCode=0
Jan 29 06:55:56 crc kubenswrapper[4826]: I0129 06:55:56.691536 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lmbx5" event={"ID":"615a47fa-75b7-4c22-9e69-cc70f9e3a132","Type":"ContainerDied","Data":"2f56dfdebdc1e14f274b62b9216ec36ec415f50187f1e3b47a2743d1a3d7770d"}
Jan 29 06:55:57 crc kubenswrapper[4826]: I0129 06:55:57.699520 4826 generic.go:334] "Generic (PLEG): container finished" podID="4c71fb9d-1000-4eb9-8418-7959457d051f" containerID="134feb682c8faa0059b9ae6fb1a52181abde925fd97d40e6379b3bf3f4fb3d70" exitCode=0
Jan 29 06:55:57 crc kubenswrapper[4826]: I0129 06:55:57.699610 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x2w8g" event={"ID":"4c71fb9d-1000-4eb9-8418-7959457d051f","Type":"ContainerDied","Data":"134feb682c8faa0059b9ae6fb1a52181abde925fd97d40e6379b3bf3f4fb3d70"}
Jan 29 06:55:57 crc kubenswrapper[4826]: I0129 06:55:57.703546 4826 generic.go:334] "Generic (PLEG): container finished" podID="615a47fa-75b7-4c22-9e69-cc70f9e3a132" containerID="6aa46a498c78131b44fc9ea8bc9f7bc6e2be0abb7269650c08da0838a720468a" exitCode=0
Jan 29 06:55:57 crc kubenswrapper[4826]: I0129 06:55:57.703621 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lmbx5" event={"ID":"615a47fa-75b7-4c22-9e69-cc70f9e3a132","Type":"ContainerDied","Data":"6aa46a498c78131b44fc9ea8bc9f7bc6e2be0abb7269650c08da0838a720468a"}
Jan 29 06:55:58 crc kubenswrapper[4826]: I0129 06:55:58.715882 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x2w8g" event={"ID":"4c71fb9d-1000-4eb9-8418-7959457d051f","Type":"ContainerStarted","Data":"02a74db8385a661e7956d24873e805aaa5adbdf4a1674cad403d27b34cb245c6"}
Jan 29 06:55:58 crc kubenswrapper[4826]: I0129 06:55:58.746574 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x2w8g" podStartSLOduration=2.101408182 podStartE2EDuration="4.746545802s" podCreationTimestamp="2026-01-29 06:55:54 +0000 UTC" firstStartedPulling="2026-01-29 06:55:55.67636061 +0000 UTC m=+739.538153679" lastFinishedPulling="2026-01-29 06:55:58.3214982 +0000 UTC m=+742.183291299" observedRunningTime="2026-01-29 06:55:58.743506503 +0000 UTC m=+742.605299632" watchObservedRunningTime="2026-01-29 06:55:58.746545802 +0000 UTC m=+742.608338911"
Jan 29 06:55:59 crc kubenswrapper[4826]: I0129 06:55:59.084702 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lmbx5"
Jan 29 06:55:59 crc kubenswrapper[4826]: I0129 06:55:59.215130 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/615a47fa-75b7-4c22-9e69-cc70f9e3a132-bundle\") pod \"615a47fa-75b7-4c22-9e69-cc70f9e3a132\" (UID: \"615a47fa-75b7-4c22-9e69-cc70f9e3a132\") "
Jan 29 06:55:59 crc kubenswrapper[4826]: I0129 06:55:59.215570 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/615a47fa-75b7-4c22-9e69-cc70f9e3a132-util\") pod \"615a47fa-75b7-4c22-9e69-cc70f9e3a132\" (UID: \"615a47fa-75b7-4c22-9e69-cc70f9e3a132\") "
Jan 29 06:55:59 crc kubenswrapper[4826]: I0129 06:55:59.215653 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/615a47fa-75b7-4c22-9e69-cc70f9e3a132-bundle" (OuterVolumeSpecName: "bundle") pod "615a47fa-75b7-4c22-9e69-cc70f9e3a132" (UID: "615a47fa-75b7-4c22-9e69-cc70f9e3a132"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 06:55:59 crc kubenswrapper[4826]: I0129 06:55:59.215690 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssmkc\" (UniqueName: \"kubernetes.io/projected/615a47fa-75b7-4c22-9e69-cc70f9e3a132-kube-api-access-ssmkc\") pod \"615a47fa-75b7-4c22-9e69-cc70f9e3a132\" (UID: \"615a47fa-75b7-4c22-9e69-cc70f9e3a132\") "
Jan 29 06:55:59 crc kubenswrapper[4826]: I0129 06:55:59.215991 4826 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/615a47fa-75b7-4c22-9e69-cc70f9e3a132-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 06:55:59 crc kubenswrapper[4826]: I0129 06:55:59.223764 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/615a47fa-75b7-4c22-9e69-cc70f9e3a132-kube-api-access-ssmkc" (OuterVolumeSpecName: "kube-api-access-ssmkc") pod "615a47fa-75b7-4c22-9e69-cc70f9e3a132" (UID: "615a47fa-75b7-4c22-9e69-cc70f9e3a132"). InnerVolumeSpecName "kube-api-access-ssmkc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 06:55:59 crc kubenswrapper[4826]: I0129 06:55:59.232821 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/615a47fa-75b7-4c22-9e69-cc70f9e3a132-util" (OuterVolumeSpecName: "util") pod "615a47fa-75b7-4c22-9e69-cc70f9e3a132" (UID: "615a47fa-75b7-4c22-9e69-cc70f9e3a132"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 06:55:59 crc kubenswrapper[4826]: I0129 06:55:59.317746 4826 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/615a47fa-75b7-4c22-9e69-cc70f9e3a132-util\") on node \"crc\" DevicePath \"\""
Jan 29 06:55:59 crc kubenswrapper[4826]: I0129 06:55:59.317790 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssmkc\" (UniqueName: \"kubernetes.io/projected/615a47fa-75b7-4c22-9e69-cc70f9e3a132-kube-api-access-ssmkc\") on node \"crc\" DevicePath \"\""
Jan 29 06:55:59 crc kubenswrapper[4826]: I0129 06:55:59.722645 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lmbx5"
Jan 29 06:55:59 crc kubenswrapper[4826]: I0129 06:55:59.722692 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lmbx5" event={"ID":"615a47fa-75b7-4c22-9e69-cc70f9e3a132","Type":"ContainerDied","Data":"fd8c4adb772b81e9ee974dbd37c4aa260f90e835a6d6e229ca6ea378a400311e"}
Jan 29 06:55:59 crc kubenswrapper[4826]: I0129 06:55:59.722717 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd8c4adb772b81e9ee974dbd37c4aa260f90e835a6d6e229ca6ea378a400311e"
Jan 29 06:56:02 crc kubenswrapper[4826]: I0129 06:56:02.362522 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-kvqn2"]
Jan 29 06:56:02 crc kubenswrapper[4826]: E0129 06:56:02.362768 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="615a47fa-75b7-4c22-9e69-cc70f9e3a132" containerName="pull"
Jan 29 06:56:02 crc kubenswrapper[4826]: I0129 06:56:02.362783 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="615a47fa-75b7-4c22-9e69-cc70f9e3a132" containerName="pull"
Jan 29 06:56:02 crc kubenswrapper[4826]: E0129 06:56:02.362797 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="615a47fa-75b7-4c22-9e69-cc70f9e3a132" containerName="extract"
Jan 29 06:56:02 crc kubenswrapper[4826]: I0129 06:56:02.362806 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="615a47fa-75b7-4c22-9e69-cc70f9e3a132" containerName="extract"
Jan 29 06:56:02 crc kubenswrapper[4826]: E0129 06:56:02.362832 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="615a47fa-75b7-4c22-9e69-cc70f9e3a132" containerName="util"
Jan 29 06:56:02 crc kubenswrapper[4826]: I0129 06:56:02.362839 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="615a47fa-75b7-4c22-9e69-cc70f9e3a132" containerName="util"
Jan 29 06:56:02 crc kubenswrapper[4826]: I0129 06:56:02.362965 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="615a47fa-75b7-4c22-9e69-cc70f9e3a132" containerName="extract"
Jan 29 06:56:02 crc kubenswrapper[4826]: I0129 06:56:02.363515 4826 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-kvqn2" Jan 29 06:56:02 crc kubenswrapper[4826]: I0129 06:56:02.366498 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-n2dc5" Jan 29 06:56:02 crc kubenswrapper[4826]: I0129 06:56:02.366608 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 29 06:56:02 crc kubenswrapper[4826]: I0129 06:56:02.367071 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 29 06:56:02 crc kubenswrapper[4826]: I0129 06:56:02.380112 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-kvqn2"] Jan 29 06:56:02 crc kubenswrapper[4826]: I0129 06:56:02.462514 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw2bp\" (UniqueName: \"kubernetes.io/projected/a80a15e8-5d05-41b2-b558-b0e73d1114c9-kube-api-access-dw2bp\") pod \"nmstate-operator-646758c888-kvqn2\" (UID: \"a80a15e8-5d05-41b2-b558-b0e73d1114c9\") " pod="openshift-nmstate/nmstate-operator-646758c888-kvqn2" Jan 29 06:56:02 crc kubenswrapper[4826]: I0129 06:56:02.563518 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw2bp\" (UniqueName: \"kubernetes.io/projected/a80a15e8-5d05-41b2-b558-b0e73d1114c9-kube-api-access-dw2bp\") pod \"nmstate-operator-646758c888-kvqn2\" (UID: \"a80a15e8-5d05-41b2-b558-b0e73d1114c9\") " pod="openshift-nmstate/nmstate-operator-646758c888-kvqn2" Jan 29 06:56:02 crc kubenswrapper[4826]: I0129 06:56:02.589446 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw2bp\" (UniqueName: \"kubernetes.io/projected/a80a15e8-5d05-41b2-b558-b0e73d1114c9-kube-api-access-dw2bp\") pod \"nmstate-operator-646758c888-kvqn2\" (UID: 
\"a80a15e8-5d05-41b2-b558-b0e73d1114c9\") " pod="openshift-nmstate/nmstate-operator-646758c888-kvqn2" Jan 29 06:56:02 crc kubenswrapper[4826]: I0129 06:56:02.681452 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-kvqn2" Jan 29 06:56:02 crc kubenswrapper[4826]: I0129 06:56:02.914807 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-kvqn2"] Jan 29 06:56:02 crc kubenswrapper[4826]: W0129 06:56:02.917742 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda80a15e8_5d05_41b2_b558_b0e73d1114c9.slice/crio-e0e2a0000e9790c68f549ff666459144b880d8f75518137f2c9e38609444ccac WatchSource:0}: Error finding container e0e2a0000e9790c68f549ff666459144b880d8f75518137f2c9e38609444ccac: Status 404 returned error can't find the container with id e0e2a0000e9790c68f549ff666459144b880d8f75518137f2c9e38609444ccac Jan 29 06:56:03 crc kubenswrapper[4826]: I0129 06:56:03.744665 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-kvqn2" event={"ID":"a80a15e8-5d05-41b2-b558-b0e73d1114c9","Type":"ContainerStarted","Data":"e0e2a0000e9790c68f549ff666459144b880d8f75518137f2c9e38609444ccac"} Jan 29 06:56:04 crc kubenswrapper[4826]: I0129 06:56:04.384844 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x2w8g" Jan 29 06:56:04 crc kubenswrapper[4826]: I0129 06:56:04.384938 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x2w8g" Jan 29 06:56:05 crc kubenswrapper[4826]: I0129 06:56:05.453173 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x2w8g" podUID="4c71fb9d-1000-4eb9-8418-7959457d051f" containerName="registry-server" probeResult="failure" output=< Jan 
29 06:56:05 crc kubenswrapper[4826]: timeout: failed to connect service ":50051" within 1s Jan 29 06:56:05 crc kubenswrapper[4826]: > Jan 29 06:56:05 crc kubenswrapper[4826]: I0129 06:56:05.656880 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 06:56:05 crc kubenswrapper[4826]: I0129 06:56:05.656993 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 06:56:05 crc kubenswrapper[4826]: I0129 06:56:05.757923 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-kvqn2" event={"ID":"a80a15e8-5d05-41b2-b558-b0e73d1114c9","Type":"ContainerStarted","Data":"78d949a78bd2badcaabac3af23ee9a62983c17d68314af7c2169b64f1cc0bfd2"} Jan 29 06:56:05 crc kubenswrapper[4826]: I0129 06:56:05.795085 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-kvqn2" podStartSLOduration=1.454747318 podStartE2EDuration="3.795062061s" podCreationTimestamp="2026-01-29 06:56:02 +0000 UTC" firstStartedPulling="2026-01-29 06:56:02.919055158 +0000 UTC m=+746.780848237" lastFinishedPulling="2026-01-29 06:56:05.259369911 +0000 UTC m=+749.121162980" observedRunningTime="2026-01-29 06:56:05.78543438 +0000 UTC m=+749.647227499" watchObservedRunningTime="2026-01-29 06:56:05.795062061 +0000 UTC m=+749.656855160" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.061544 4826 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-nmstate/nmstate-metrics-54757c584b-pbp9h"] Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.063508 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-pbp9h" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.073422 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-d4gxj" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.074045 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-pbp9h"] Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.092493 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-5dnl7"] Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.093504 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-5dnl7" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.102844 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-tb8l6"] Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.104142 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tb8l6" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.107271 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.125475 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-tb8l6"] Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.208913 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-cmm6v"] Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.209452 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfbkx\" (UniqueName: \"kubernetes.io/projected/7acf9634-de8f-42e2-b40b-7b7fb4f354c9-kube-api-access-wfbkx\") pod \"nmstate-handler-5dnl7\" (UID: \"7acf9634-de8f-42e2-b40b-7b7fb4f354c9\") " pod="openshift-nmstate/nmstate-handler-5dnl7" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.209562 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d9m8\" (UniqueName: \"kubernetes.io/projected/4297d4dd-0999-4ab5-87a0-f0190de20a82-kube-api-access-2d9m8\") pod \"nmstate-metrics-54757c584b-pbp9h\" (UID: \"4297d4dd-0999-4ab5-87a0-f0190de20a82\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-pbp9h" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.209725 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/57edd5c1-a09e-4c1d-a6dc-ae07d9e5ea8d-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-tb8l6\" (UID: \"57edd5c1-a09e-4c1d-a6dc-ae07d9e5ea8d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tb8l6" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.209785 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/7acf9634-de8f-42e2-b40b-7b7fb4f354c9-ovs-socket\") pod \"nmstate-handler-5dnl7\" (UID: \"7acf9634-de8f-42e2-b40b-7b7fb4f354c9\") " pod="openshift-nmstate/nmstate-handler-5dnl7" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.209732 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cmm6v" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.209818 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhk49\" (UniqueName: \"kubernetes.io/projected/57edd5c1-a09e-4c1d-a6dc-ae07d9e5ea8d-kube-api-access-qhk49\") pod \"nmstate-webhook-8474b5b9d8-tb8l6\" (UID: \"57edd5c1-a09e-4c1d-a6dc-ae07d9e5ea8d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tb8l6" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.209859 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/7acf9634-de8f-42e2-b40b-7b7fb4f354c9-nmstate-lock\") pod \"nmstate-handler-5dnl7\" (UID: \"7acf9634-de8f-42e2-b40b-7b7fb4f354c9\") " pod="openshift-nmstate/nmstate-handler-5dnl7" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.209933 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/7acf9634-de8f-42e2-b40b-7b7fb4f354c9-dbus-socket\") pod \"nmstate-handler-5dnl7\" (UID: \"7acf9634-de8f-42e2-b40b-7b7fb4f354c9\") " pod="openshift-nmstate/nmstate-handler-5dnl7" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.213395 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-mnmzw" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.213660 4826 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.216636 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.224868 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-cmm6v"] Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.312287 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfbkx\" (UniqueName: \"kubernetes.io/projected/7acf9634-de8f-42e2-b40b-7b7fb4f354c9-kube-api-access-wfbkx\") pod \"nmstate-handler-5dnl7\" (UID: \"7acf9634-de8f-42e2-b40b-7b7fb4f354c9\") " pod="openshift-nmstate/nmstate-handler-5dnl7" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.312378 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d9m8\" (UniqueName: \"kubernetes.io/projected/4297d4dd-0999-4ab5-87a0-f0190de20a82-kube-api-access-2d9m8\") pod \"nmstate-metrics-54757c584b-pbp9h\" (UID: \"4297d4dd-0999-4ab5-87a0-f0190de20a82\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-pbp9h" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.312448 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh8mh\" (UniqueName: \"kubernetes.io/projected/03cc87ed-54ec-46aa-8ec1-457a68eeaf8a-kube-api-access-jh8mh\") pod \"nmstate-console-plugin-7754f76f8b-cmm6v\" (UID: \"03cc87ed-54ec-46aa-8ec1-457a68eeaf8a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cmm6v" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.312491 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/57edd5c1-a09e-4c1d-a6dc-ae07d9e5ea8d-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-tb8l6\" (UID: 
\"57edd5c1-a09e-4c1d-a6dc-ae07d9e5ea8d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tb8l6" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.312534 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/03cc87ed-54ec-46aa-8ec1-457a68eeaf8a-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-cmm6v\" (UID: \"03cc87ed-54ec-46aa-8ec1-457a68eeaf8a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cmm6v" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.312568 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/7acf9634-de8f-42e2-b40b-7b7fb4f354c9-ovs-socket\") pod \"nmstate-handler-5dnl7\" (UID: \"7acf9634-de8f-42e2-b40b-7b7fb4f354c9\") " pod="openshift-nmstate/nmstate-handler-5dnl7" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.312628 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhk49\" (UniqueName: \"kubernetes.io/projected/57edd5c1-a09e-4c1d-a6dc-ae07d9e5ea8d-kube-api-access-qhk49\") pod \"nmstate-webhook-8474b5b9d8-tb8l6\" (UID: \"57edd5c1-a09e-4c1d-a6dc-ae07d9e5ea8d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tb8l6" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.312656 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/7acf9634-de8f-42e2-b40b-7b7fb4f354c9-nmstate-lock\") pod \"nmstate-handler-5dnl7\" (UID: \"7acf9634-de8f-42e2-b40b-7b7fb4f354c9\") " pod="openshift-nmstate/nmstate-handler-5dnl7" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.312696 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/03cc87ed-54ec-46aa-8ec1-457a68eeaf8a-plugin-serving-cert\") pod 
\"nmstate-console-plugin-7754f76f8b-cmm6v\" (UID: \"03cc87ed-54ec-46aa-8ec1-457a68eeaf8a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cmm6v" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.312765 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/7acf9634-de8f-42e2-b40b-7b7fb4f354c9-dbus-socket\") pod \"nmstate-handler-5dnl7\" (UID: \"7acf9634-de8f-42e2-b40b-7b7fb4f354c9\") " pod="openshift-nmstate/nmstate-handler-5dnl7" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.313014 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/7acf9634-de8f-42e2-b40b-7b7fb4f354c9-dbus-socket\") pod \"nmstate-handler-5dnl7\" (UID: \"7acf9634-de8f-42e2-b40b-7b7fb4f354c9\") " pod="openshift-nmstate/nmstate-handler-5dnl7" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.313085 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/7acf9634-de8f-42e2-b40b-7b7fb4f354c9-ovs-socket\") pod \"nmstate-handler-5dnl7\" (UID: \"7acf9634-de8f-42e2-b40b-7b7fb4f354c9\") " pod="openshift-nmstate/nmstate-handler-5dnl7" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.313424 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/7acf9634-de8f-42e2-b40b-7b7fb4f354c9-nmstate-lock\") pod \"nmstate-handler-5dnl7\" (UID: \"7acf9634-de8f-42e2-b40b-7b7fb4f354c9\") " pod="openshift-nmstate/nmstate-handler-5dnl7" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.329162 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/57edd5c1-a09e-4c1d-a6dc-ae07d9e5ea8d-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-tb8l6\" (UID: \"57edd5c1-a09e-4c1d-a6dc-ae07d9e5ea8d\") " 
pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tb8l6" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.333783 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfbkx\" (UniqueName: \"kubernetes.io/projected/7acf9634-de8f-42e2-b40b-7b7fb4f354c9-kube-api-access-wfbkx\") pod \"nmstate-handler-5dnl7\" (UID: \"7acf9634-de8f-42e2-b40b-7b7fb4f354c9\") " pod="openshift-nmstate/nmstate-handler-5dnl7" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.336883 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhk49\" (UniqueName: \"kubernetes.io/projected/57edd5c1-a09e-4c1d-a6dc-ae07d9e5ea8d-kube-api-access-qhk49\") pod \"nmstate-webhook-8474b5b9d8-tb8l6\" (UID: \"57edd5c1-a09e-4c1d-a6dc-ae07d9e5ea8d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tb8l6" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.339901 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d9m8\" (UniqueName: \"kubernetes.io/projected/4297d4dd-0999-4ab5-87a0-f0190de20a82-kube-api-access-2d9m8\") pod \"nmstate-metrics-54757c584b-pbp9h\" (UID: \"4297d4dd-0999-4ab5-87a0-f0190de20a82\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-pbp9h" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.393047 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7d5877b44b-s47xp"] Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.393243 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-pbp9h" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.393815 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7d5877b44b-s47xp" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.415631 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh8mh\" (UniqueName: \"kubernetes.io/projected/03cc87ed-54ec-46aa-8ec1-457a68eeaf8a-kube-api-access-jh8mh\") pod \"nmstate-console-plugin-7754f76f8b-cmm6v\" (UID: \"03cc87ed-54ec-46aa-8ec1-457a68eeaf8a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cmm6v" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.415707 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/03cc87ed-54ec-46aa-8ec1-457a68eeaf8a-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-cmm6v\" (UID: \"03cc87ed-54ec-46aa-8ec1-457a68eeaf8a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cmm6v" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.415758 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/03cc87ed-54ec-46aa-8ec1-457a68eeaf8a-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-cmm6v\" (UID: \"03cc87ed-54ec-46aa-8ec1-457a68eeaf8a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cmm6v" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.417511 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/03cc87ed-54ec-46aa-8ec1-457a68eeaf8a-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-cmm6v\" (UID: \"03cc87ed-54ec-46aa-8ec1-457a68eeaf8a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cmm6v" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.417836 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-5dnl7" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.420219 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7d5877b44b-s47xp"] Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.429463 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tb8l6" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.430978 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/03cc87ed-54ec-46aa-8ec1-457a68eeaf8a-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-cmm6v\" (UID: \"03cc87ed-54ec-46aa-8ec1-457a68eeaf8a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cmm6v" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.446591 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh8mh\" (UniqueName: \"kubernetes.io/projected/03cc87ed-54ec-46aa-8ec1-457a68eeaf8a-kube-api-access-jh8mh\") pod \"nmstate-console-plugin-7754f76f8b-cmm6v\" (UID: \"03cc87ed-54ec-46aa-8ec1-457a68eeaf8a\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cmm6v" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.526046 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cmm6v" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.526424 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/09b18fe5-28b6-433d-84c5-ad810857dc36-service-ca\") pod \"console-7d5877b44b-s47xp\" (UID: \"09b18fe5-28b6-433d-84c5-ad810857dc36\") " pod="openshift-console/console-7d5877b44b-s47xp" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.526513 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/09b18fe5-28b6-433d-84c5-ad810857dc36-console-serving-cert\") pod \"console-7d5877b44b-s47xp\" (UID: \"09b18fe5-28b6-433d-84c5-ad810857dc36\") " pod="openshift-console/console-7d5877b44b-s47xp" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.526541 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/09b18fe5-28b6-433d-84c5-ad810857dc36-oauth-serving-cert\") pod \"console-7d5877b44b-s47xp\" (UID: \"09b18fe5-28b6-433d-84c5-ad810857dc36\") " pod="openshift-console/console-7d5877b44b-s47xp" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.526582 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5wg7\" (UniqueName: \"kubernetes.io/projected/09b18fe5-28b6-433d-84c5-ad810857dc36-kube-api-access-f5wg7\") pod \"console-7d5877b44b-s47xp\" (UID: \"09b18fe5-28b6-433d-84c5-ad810857dc36\") " pod="openshift-console/console-7d5877b44b-s47xp" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.526616 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/09b18fe5-28b6-433d-84c5-ad810857dc36-trusted-ca-bundle\") pod \"console-7d5877b44b-s47xp\" (UID: \"09b18fe5-28b6-433d-84c5-ad810857dc36\") " pod="openshift-console/console-7d5877b44b-s47xp" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.526761 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/09b18fe5-28b6-433d-84c5-ad810857dc36-console-oauth-config\") pod \"console-7d5877b44b-s47xp\" (UID: \"09b18fe5-28b6-433d-84c5-ad810857dc36\") " pod="openshift-console/console-7d5877b44b-s47xp" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.526798 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/09b18fe5-28b6-433d-84c5-ad810857dc36-console-config\") pod \"console-7d5877b44b-s47xp\" (UID: \"09b18fe5-28b6-433d-84c5-ad810857dc36\") " pod="openshift-console/console-7d5877b44b-s47xp" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.628169 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/09b18fe5-28b6-433d-84c5-ad810857dc36-oauth-serving-cert\") pod \"console-7d5877b44b-s47xp\" (UID: \"09b18fe5-28b6-433d-84c5-ad810857dc36\") " pod="openshift-console/console-7d5877b44b-s47xp" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.628491 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5wg7\" (UniqueName: \"kubernetes.io/projected/09b18fe5-28b6-433d-84c5-ad810857dc36-kube-api-access-f5wg7\") pod \"console-7d5877b44b-s47xp\" (UID: \"09b18fe5-28b6-433d-84c5-ad810857dc36\") " pod="openshift-console/console-7d5877b44b-s47xp" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.628520 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09b18fe5-28b6-433d-84c5-ad810857dc36-trusted-ca-bundle\") pod \"console-7d5877b44b-s47xp\" (UID: \"09b18fe5-28b6-433d-84c5-ad810857dc36\") " pod="openshift-console/console-7d5877b44b-s47xp" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.628582 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/09b18fe5-28b6-433d-84c5-ad810857dc36-console-oauth-config\") pod \"console-7d5877b44b-s47xp\" (UID: \"09b18fe5-28b6-433d-84c5-ad810857dc36\") " pod="openshift-console/console-7d5877b44b-s47xp" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.628601 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/09b18fe5-28b6-433d-84c5-ad810857dc36-console-config\") pod \"console-7d5877b44b-s47xp\" (UID: \"09b18fe5-28b6-433d-84c5-ad810857dc36\") " pod="openshift-console/console-7d5877b44b-s47xp" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.628640 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/09b18fe5-28b6-433d-84c5-ad810857dc36-service-ca\") pod \"console-7d5877b44b-s47xp\" (UID: \"09b18fe5-28b6-433d-84c5-ad810857dc36\") " pod="openshift-console/console-7d5877b44b-s47xp" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.628676 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/09b18fe5-28b6-433d-84c5-ad810857dc36-console-serving-cert\") pod \"console-7d5877b44b-s47xp\" (UID: \"09b18fe5-28b6-433d-84c5-ad810857dc36\") " pod="openshift-console/console-7d5877b44b-s47xp" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.629366 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/09b18fe5-28b6-433d-84c5-ad810857dc36-oauth-serving-cert\") pod \"console-7d5877b44b-s47xp\" (UID: \"09b18fe5-28b6-433d-84c5-ad810857dc36\") " pod="openshift-console/console-7d5877b44b-s47xp" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.629899 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/09b18fe5-28b6-433d-84c5-ad810857dc36-service-ca\") pod \"console-7d5877b44b-s47xp\" (UID: \"09b18fe5-28b6-433d-84c5-ad810857dc36\") " pod="openshift-console/console-7d5877b44b-s47xp" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.629942 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/09b18fe5-28b6-433d-84c5-ad810857dc36-console-config\") pod \"console-7d5877b44b-s47xp\" (UID: \"09b18fe5-28b6-433d-84c5-ad810857dc36\") " pod="openshift-console/console-7d5877b44b-s47xp" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.630528 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09b18fe5-28b6-433d-84c5-ad810857dc36-trusted-ca-bundle\") pod \"console-7d5877b44b-s47xp\" (UID: \"09b18fe5-28b6-433d-84c5-ad810857dc36\") " pod="openshift-console/console-7d5877b44b-s47xp" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.633871 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/09b18fe5-28b6-433d-84c5-ad810857dc36-console-serving-cert\") pod \"console-7d5877b44b-s47xp\" (UID: \"09b18fe5-28b6-433d-84c5-ad810857dc36\") " pod="openshift-console/console-7d5877b44b-s47xp" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.634372 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/09b18fe5-28b6-433d-84c5-ad810857dc36-console-oauth-config\") pod \"console-7d5877b44b-s47xp\" (UID: \"09b18fe5-28b6-433d-84c5-ad810857dc36\") " pod="openshift-console/console-7d5877b44b-s47xp" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.646649 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5wg7\" (UniqueName: \"kubernetes.io/projected/09b18fe5-28b6-433d-84c5-ad810857dc36-kube-api-access-f5wg7\") pod \"console-7d5877b44b-s47xp\" (UID: \"09b18fe5-28b6-433d-84c5-ad810857dc36\") " pod="openshift-console/console-7d5877b44b-s47xp" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.767173 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7d5877b44b-s47xp" Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.817644 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-5dnl7" event={"ID":"7acf9634-de8f-42e2-b40b-7b7fb4f354c9","Type":"ContainerStarted","Data":"83d62c393e25baa6b1de5092bf9249ca4663ffbcba1382591f3a4acb51b2233e"} Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.889162 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-pbp9h"] Jan 29 06:56:12 crc kubenswrapper[4826]: W0129 06:56:12.894609 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4297d4dd_0999_4ab5_87a0_f0190de20a82.slice/crio-478419432fc8da6050ab1ea55a1665bda243e0917c92e0a111e7d7931aef8185 WatchSource:0}: Error finding container 478419432fc8da6050ab1ea55a1665bda243e0917c92e0a111e7d7931aef8185: Status 404 returned error can't find the container with id 478419432fc8da6050ab1ea55a1665bda243e0917c92e0a111e7d7931aef8185 Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.917930 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-tb8l6"] Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.959216 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-cmm6v"] Jan 29 06:56:12 crc kubenswrapper[4826]: W0129 06:56:12.962713 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03cc87ed_54ec_46aa_8ec1_457a68eeaf8a.slice/crio-ea7e183ecd560f07b87e87340029acfa3511ff062bfb26a21a73699c6039cd5c WatchSource:0}: Error finding container ea7e183ecd560f07b87e87340029acfa3511ff062bfb26a21a73699c6039cd5c: Status 404 returned error can't find the container with id ea7e183ecd560f07b87e87340029acfa3511ff062bfb26a21a73699c6039cd5c Jan 29 06:56:12 crc kubenswrapper[4826]: I0129 06:56:12.964463 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7d5877b44b-s47xp"] Jan 29 06:56:12 crc kubenswrapper[4826]: W0129 06:56:12.969222 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09b18fe5_28b6_433d_84c5_ad810857dc36.slice/crio-46bde4526aef8ed5039a1d821ad07fd30ce56483565b14cd3f157c30dd513fa3 WatchSource:0}: Error finding container 46bde4526aef8ed5039a1d821ad07fd30ce56483565b14cd3f157c30dd513fa3: Status 404 returned error can't find the container with id 46bde4526aef8ed5039a1d821ad07fd30ce56483565b14cd3f157c30dd513fa3 Jan 29 06:56:13 crc kubenswrapper[4826]: I0129 06:56:13.823011 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-pbp9h" event={"ID":"4297d4dd-0999-4ab5-87a0-f0190de20a82","Type":"ContainerStarted","Data":"478419432fc8da6050ab1ea55a1665bda243e0917c92e0a111e7d7931aef8185"} Jan 29 06:56:13 crc kubenswrapper[4826]: I0129 06:56:13.824829 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cmm6v" 
event={"ID":"03cc87ed-54ec-46aa-8ec1-457a68eeaf8a","Type":"ContainerStarted","Data":"ea7e183ecd560f07b87e87340029acfa3511ff062bfb26a21a73699c6039cd5c"} Jan 29 06:56:13 crc kubenswrapper[4826]: I0129 06:56:13.827440 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7d5877b44b-s47xp" event={"ID":"09b18fe5-28b6-433d-84c5-ad810857dc36","Type":"ContainerStarted","Data":"b918506ffbd6476b440c1cafc7ccf96f3c8a322357a95be7194574e0d2f5807e"} Jan 29 06:56:13 crc kubenswrapper[4826]: I0129 06:56:13.827493 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7d5877b44b-s47xp" event={"ID":"09b18fe5-28b6-433d-84c5-ad810857dc36","Type":"ContainerStarted","Data":"46bde4526aef8ed5039a1d821ad07fd30ce56483565b14cd3f157c30dd513fa3"} Jan 29 06:56:13 crc kubenswrapper[4826]: I0129 06:56:13.828586 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tb8l6" event={"ID":"57edd5c1-a09e-4c1d-a6dc-ae07d9e5ea8d","Type":"ContainerStarted","Data":"dc0b882e7df6fb8b8fbdb6bd0c3f7d391cf4641164df24c47b37ffe6b8db3f36"} Jan 29 06:56:13 crc kubenswrapper[4826]: I0129 06:56:13.854504 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7d5877b44b-s47xp" podStartSLOduration=1.854485535 podStartE2EDuration="1.854485535s" podCreationTimestamp="2026-01-29 06:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:56:13.84621742 +0000 UTC m=+757.708010499" watchObservedRunningTime="2026-01-29 06:56:13.854485535 +0000 UTC m=+757.716278614" Jan 29 06:56:14 crc kubenswrapper[4826]: I0129 06:56:14.422658 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x2w8g" Jan 29 06:56:14 crc kubenswrapper[4826]: I0129 06:56:14.470478 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/redhat-operators-x2w8g" Jan 29 06:56:14 crc kubenswrapper[4826]: I0129 06:56:14.650842 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x2w8g"] Jan 29 06:56:15 crc kubenswrapper[4826]: I0129 06:56:15.842093 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cmm6v" event={"ID":"03cc87ed-54ec-46aa-8ec1-457a68eeaf8a","Type":"ContainerStarted","Data":"e633b12da713b616e03637aaec82ea0fc8990da0ddc8a437d4be76ac641482c8"} Jan 29 06:56:15 crc kubenswrapper[4826]: I0129 06:56:15.844334 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-5dnl7" event={"ID":"7acf9634-de8f-42e2-b40b-7b7fb4f354c9","Type":"ContainerStarted","Data":"951f695b16811edc70864bd4874d3bdad5579e0de87b56e36dd97a140748be9f"} Jan 29 06:56:15 crc kubenswrapper[4826]: I0129 06:56:15.844469 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-5dnl7" Jan 29 06:56:15 crc kubenswrapper[4826]: I0129 06:56:15.846009 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tb8l6" event={"ID":"57edd5c1-a09e-4c1d-a6dc-ae07d9e5ea8d","Type":"ContainerStarted","Data":"54673bd0f62e03515c3a58398d23bdc633c060f1fc795bc3fddf96dae9073445"} Jan 29 06:56:15 crc kubenswrapper[4826]: I0129 06:56:15.846601 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tb8l6" Jan 29 06:56:15 crc kubenswrapper[4826]: I0129 06:56:15.852529 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-pbp9h" event={"ID":"4297d4dd-0999-4ab5-87a0-f0190de20a82","Type":"ContainerStarted","Data":"0ff6158fc14cf4031a86347286130560a8c1377286c0fad5e0925d8654adec73"} Jan 29 06:56:15 crc kubenswrapper[4826]: I0129 06:56:15.852632 4826 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x2w8g" podUID="4c71fb9d-1000-4eb9-8418-7959457d051f" containerName="registry-server" containerID="cri-o://02a74db8385a661e7956d24873e805aaa5adbdf4a1674cad403d27b34cb245c6" gracePeriod=2 Jan 29 06:56:15 crc kubenswrapper[4826]: I0129 06:56:15.859939 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cmm6v" podStartSLOduration=1.388216411 podStartE2EDuration="3.859913831s" podCreationTimestamp="2026-01-29 06:56:12 +0000 UTC" firstStartedPulling="2026-01-29 06:56:12.964886323 +0000 UTC m=+756.826679392" lastFinishedPulling="2026-01-29 06:56:15.436583703 +0000 UTC m=+759.298376812" observedRunningTime="2026-01-29 06:56:15.859028708 +0000 UTC m=+759.720821777" watchObservedRunningTime="2026-01-29 06:56:15.859913831 +0000 UTC m=+759.721706930" Jan 29 06:56:15 crc kubenswrapper[4826]: I0129 06:56:15.902817 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tb8l6" podStartSLOduration=1.396125415 podStartE2EDuration="3.902798295s" podCreationTimestamp="2026-01-29 06:56:12 +0000 UTC" firstStartedPulling="2026-01-29 06:56:12.929814721 +0000 UTC m=+756.791607790" lastFinishedPulling="2026-01-29 06:56:15.436487561 +0000 UTC m=+759.298280670" observedRunningTime="2026-01-29 06:56:15.900439604 +0000 UTC m=+759.762232703" watchObservedRunningTime="2026-01-29 06:56:15.902798295 +0000 UTC m=+759.764591364" Jan 29 06:56:15 crc kubenswrapper[4826]: I0129 06:56:15.903751 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-5dnl7" podStartSLOduration=0.968924077 podStartE2EDuration="3.9037459s" podCreationTimestamp="2026-01-29 06:56:12 +0000 UTC" firstStartedPulling="2026-01-29 06:56:12.528615409 +0000 UTC m=+756.390408478" lastFinishedPulling="2026-01-29 
06:56:15.463437202 +0000 UTC m=+759.325230301" observedRunningTime="2026-01-29 06:56:15.882499078 +0000 UTC m=+759.744292147" watchObservedRunningTime="2026-01-29 06:56:15.9037459 +0000 UTC m=+759.765538969" Jan 29 06:56:16 crc kubenswrapper[4826]: I0129 06:56:16.207412 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x2w8g" Jan 29 06:56:16 crc kubenswrapper[4826]: I0129 06:56:16.286340 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c71fb9d-1000-4eb9-8418-7959457d051f-utilities\") pod \"4c71fb9d-1000-4eb9-8418-7959457d051f\" (UID: \"4c71fb9d-1000-4eb9-8418-7959457d051f\") " Jan 29 06:56:16 crc kubenswrapper[4826]: I0129 06:56:16.286497 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c71fb9d-1000-4eb9-8418-7959457d051f-catalog-content\") pod \"4c71fb9d-1000-4eb9-8418-7959457d051f\" (UID: \"4c71fb9d-1000-4eb9-8418-7959457d051f\") " Jan 29 06:56:16 crc kubenswrapper[4826]: I0129 06:56:16.286568 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztkjv\" (UniqueName: \"kubernetes.io/projected/4c71fb9d-1000-4eb9-8418-7959457d051f-kube-api-access-ztkjv\") pod \"4c71fb9d-1000-4eb9-8418-7959457d051f\" (UID: \"4c71fb9d-1000-4eb9-8418-7959457d051f\") " Jan 29 06:56:16 crc kubenswrapper[4826]: I0129 06:56:16.288347 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c71fb9d-1000-4eb9-8418-7959457d051f-utilities" (OuterVolumeSpecName: "utilities") pod "4c71fb9d-1000-4eb9-8418-7959457d051f" (UID: "4c71fb9d-1000-4eb9-8418-7959457d051f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:56:16 crc kubenswrapper[4826]: I0129 06:56:16.292157 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c71fb9d-1000-4eb9-8418-7959457d051f-kube-api-access-ztkjv" (OuterVolumeSpecName: "kube-api-access-ztkjv") pod "4c71fb9d-1000-4eb9-8418-7959457d051f" (UID: "4c71fb9d-1000-4eb9-8418-7959457d051f"). InnerVolumeSpecName "kube-api-access-ztkjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:56:16 crc kubenswrapper[4826]: I0129 06:56:16.388555 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztkjv\" (UniqueName: \"kubernetes.io/projected/4c71fb9d-1000-4eb9-8418-7959457d051f-kube-api-access-ztkjv\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:16 crc kubenswrapper[4826]: I0129 06:56:16.388604 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c71fb9d-1000-4eb9-8418-7959457d051f-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:16 crc kubenswrapper[4826]: I0129 06:56:16.409137 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c71fb9d-1000-4eb9-8418-7959457d051f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4c71fb9d-1000-4eb9-8418-7959457d051f" (UID: "4c71fb9d-1000-4eb9-8418-7959457d051f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:56:16 crc kubenswrapper[4826]: I0129 06:56:16.490201 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c71fb9d-1000-4eb9-8418-7959457d051f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:16 crc kubenswrapper[4826]: I0129 06:56:16.873832 4826 generic.go:334] "Generic (PLEG): container finished" podID="4c71fb9d-1000-4eb9-8418-7959457d051f" containerID="02a74db8385a661e7956d24873e805aaa5adbdf4a1674cad403d27b34cb245c6" exitCode=0 Jan 29 06:56:16 crc kubenswrapper[4826]: I0129 06:56:16.874068 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x2w8g" event={"ID":"4c71fb9d-1000-4eb9-8418-7959457d051f","Type":"ContainerDied","Data":"02a74db8385a661e7956d24873e805aaa5adbdf4a1674cad403d27b34cb245c6"} Jan 29 06:56:16 crc kubenswrapper[4826]: I0129 06:56:16.874308 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x2w8g" event={"ID":"4c71fb9d-1000-4eb9-8418-7959457d051f","Type":"ContainerDied","Data":"19e6561727ac1a16f7264af7e86cd6cf066ca1d7b63a714b8c686906e664424a"} Jan 29 06:56:16 crc kubenswrapper[4826]: I0129 06:56:16.874328 4826 scope.go:117] "RemoveContainer" containerID="02a74db8385a661e7956d24873e805aaa5adbdf4a1674cad403d27b34cb245c6" Jan 29 06:56:16 crc kubenswrapper[4826]: I0129 06:56:16.874156 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x2w8g" Jan 29 06:56:16 crc kubenswrapper[4826]: I0129 06:56:16.892974 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x2w8g"] Jan 29 06:56:16 crc kubenswrapper[4826]: I0129 06:56:16.894894 4826 scope.go:117] "RemoveContainer" containerID="134feb682c8faa0059b9ae6fb1a52181abde925fd97d40e6379b3bf3f4fb3d70" Jan 29 06:56:16 crc kubenswrapper[4826]: I0129 06:56:16.898406 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x2w8g"] Jan 29 06:56:16 crc kubenswrapper[4826]: I0129 06:56:16.921449 4826 scope.go:117] "RemoveContainer" containerID="bfe65fd6638a12202519c6fee2eb639f68d9a5ac4864d36affaa3d793158b03d" Jan 29 06:56:16 crc kubenswrapper[4826]: I0129 06:56:16.939005 4826 scope.go:117] "RemoveContainer" containerID="02a74db8385a661e7956d24873e805aaa5adbdf4a1674cad403d27b34cb245c6" Jan 29 06:56:16 crc kubenswrapper[4826]: E0129 06:56:16.939401 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02a74db8385a661e7956d24873e805aaa5adbdf4a1674cad403d27b34cb245c6\": container with ID starting with 02a74db8385a661e7956d24873e805aaa5adbdf4a1674cad403d27b34cb245c6 not found: ID does not exist" containerID="02a74db8385a661e7956d24873e805aaa5adbdf4a1674cad403d27b34cb245c6" Jan 29 06:56:16 crc kubenswrapper[4826]: I0129 06:56:16.939435 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02a74db8385a661e7956d24873e805aaa5adbdf4a1674cad403d27b34cb245c6"} err="failed to get container status \"02a74db8385a661e7956d24873e805aaa5adbdf4a1674cad403d27b34cb245c6\": rpc error: code = NotFound desc = could not find container \"02a74db8385a661e7956d24873e805aaa5adbdf4a1674cad403d27b34cb245c6\": container with ID starting with 02a74db8385a661e7956d24873e805aaa5adbdf4a1674cad403d27b34cb245c6 not found: ID does 
not exist" Jan 29 06:56:16 crc kubenswrapper[4826]: I0129 06:56:16.939461 4826 scope.go:117] "RemoveContainer" containerID="134feb682c8faa0059b9ae6fb1a52181abde925fd97d40e6379b3bf3f4fb3d70" Jan 29 06:56:16 crc kubenswrapper[4826]: E0129 06:56:16.939687 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"134feb682c8faa0059b9ae6fb1a52181abde925fd97d40e6379b3bf3f4fb3d70\": container with ID starting with 134feb682c8faa0059b9ae6fb1a52181abde925fd97d40e6379b3bf3f4fb3d70 not found: ID does not exist" containerID="134feb682c8faa0059b9ae6fb1a52181abde925fd97d40e6379b3bf3f4fb3d70" Jan 29 06:56:16 crc kubenswrapper[4826]: I0129 06:56:16.939712 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"134feb682c8faa0059b9ae6fb1a52181abde925fd97d40e6379b3bf3f4fb3d70"} err="failed to get container status \"134feb682c8faa0059b9ae6fb1a52181abde925fd97d40e6379b3bf3f4fb3d70\": rpc error: code = NotFound desc = could not find container \"134feb682c8faa0059b9ae6fb1a52181abde925fd97d40e6379b3bf3f4fb3d70\": container with ID starting with 134feb682c8faa0059b9ae6fb1a52181abde925fd97d40e6379b3bf3f4fb3d70 not found: ID does not exist" Jan 29 06:56:16 crc kubenswrapper[4826]: I0129 06:56:16.939724 4826 scope.go:117] "RemoveContainer" containerID="bfe65fd6638a12202519c6fee2eb639f68d9a5ac4864d36affaa3d793158b03d" Jan 29 06:56:16 crc kubenswrapper[4826]: E0129 06:56:16.940038 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfe65fd6638a12202519c6fee2eb639f68d9a5ac4864d36affaa3d793158b03d\": container with ID starting with bfe65fd6638a12202519c6fee2eb639f68d9a5ac4864d36affaa3d793158b03d not found: ID does not exist" containerID="bfe65fd6638a12202519c6fee2eb639f68d9a5ac4864d36affaa3d793158b03d" Jan 29 06:56:16 crc kubenswrapper[4826]: I0129 06:56:16.940069 4826 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfe65fd6638a12202519c6fee2eb639f68d9a5ac4864d36affaa3d793158b03d"} err="failed to get container status \"bfe65fd6638a12202519c6fee2eb639f68d9a5ac4864d36affaa3d793158b03d\": rpc error: code = NotFound desc = could not find container \"bfe65fd6638a12202519c6fee2eb639f68d9a5ac4864d36affaa3d793158b03d\": container with ID starting with bfe65fd6638a12202519c6fee2eb639f68d9a5ac4864d36affaa3d793158b03d not found: ID does not exist" Jan 29 06:56:18 crc kubenswrapper[4826]: I0129 06:56:18.822066 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c71fb9d-1000-4eb9-8418-7959457d051f" path="/var/lib/kubelet/pods/4c71fb9d-1000-4eb9-8418-7959457d051f/volumes" Jan 29 06:56:18 crc kubenswrapper[4826]: I0129 06:56:18.892593 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-pbp9h" event={"ID":"4297d4dd-0999-4ab5-87a0-f0190de20a82","Type":"ContainerStarted","Data":"d7ef73b905cfb826cc4ea416934772df9fbeb182c7a5e97f9ce2cb2f2d67e303"} Jan 29 06:56:18 crc kubenswrapper[4826]: I0129 06:56:18.920857 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-pbp9h" podStartSLOduration=2.114849705 podStartE2EDuration="6.920839933s" podCreationTimestamp="2026-01-29 06:56:12 +0000 UTC" firstStartedPulling="2026-01-29 06:56:12.898455295 +0000 UTC m=+756.760248364" lastFinishedPulling="2026-01-29 06:56:17.704445483 +0000 UTC m=+761.566238592" observedRunningTime="2026-01-29 06:56:18.919910718 +0000 UTC m=+762.781703807" watchObservedRunningTime="2026-01-29 06:56:18.920839933 +0000 UTC m=+762.782633012" Jan 29 06:56:22 crc kubenswrapper[4826]: I0129 06:56:22.458806 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-5dnl7" Jan 29 06:56:22 crc kubenswrapper[4826]: I0129 06:56:22.767824 4826 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-console/console-7d5877b44b-s47xp" Jan 29 06:56:22 crc kubenswrapper[4826]: I0129 06:56:22.768351 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7d5877b44b-s47xp" Jan 29 06:56:22 crc kubenswrapper[4826]: I0129 06:56:22.775706 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7d5877b44b-s47xp" Jan 29 06:56:22 crc kubenswrapper[4826]: I0129 06:56:22.926820 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7d5877b44b-s47xp" Jan 29 06:56:23 crc kubenswrapper[4826]: I0129 06:56:23.010621 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-t4qwq"] Jan 29 06:56:32 crc kubenswrapper[4826]: I0129 06:56:32.441668 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tb8l6" Jan 29 06:56:35 crc kubenswrapper[4826]: I0129 06:56:35.656479 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 06:56:35 crc kubenswrapper[4826]: I0129 06:56:35.657654 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 06:56:48 crc kubenswrapper[4826]: I0129 06:56:48.063804 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-t4qwq" podUID="9bc5b6b0-9626-4ae0-b053-1dae3c13dd47" 
containerName="console" containerID="cri-o://ee0c45e48dea9fff2c053aee583d2a09c66c1df7556eda297189cb018ed5adcb" gracePeriod=15 Jan 29 06:56:48 crc kubenswrapper[4826]: I0129 06:56:48.445621 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-t4qwq_9bc5b6b0-9626-4ae0-b053-1dae3c13dd47/console/0.log" Jan 29 06:56:48 crc kubenswrapper[4826]: I0129 06:56:48.446000 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-t4qwq" Jan 29 06:56:48 crc kubenswrapper[4826]: I0129 06:56:48.517882 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-trusted-ca-bundle\") pod \"9bc5b6b0-9626-4ae0-b053-1dae3c13dd47\" (UID: \"9bc5b6b0-9626-4ae0-b053-1dae3c13dd47\") " Jan 29 06:56:48 crc kubenswrapper[4826]: I0129 06:56:48.517979 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-console-serving-cert\") pod \"9bc5b6b0-9626-4ae0-b053-1dae3c13dd47\" (UID: \"9bc5b6b0-9626-4ae0-b053-1dae3c13dd47\") " Jan 29 06:56:48 crc kubenswrapper[4826]: I0129 06:56:48.518023 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-oauth-serving-cert\") pod \"9bc5b6b0-9626-4ae0-b053-1dae3c13dd47\" (UID: \"9bc5b6b0-9626-4ae0-b053-1dae3c13dd47\") " Jan 29 06:56:48 crc kubenswrapper[4826]: I0129 06:56:48.518099 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-service-ca\") pod \"9bc5b6b0-9626-4ae0-b053-1dae3c13dd47\" (UID: \"9bc5b6b0-9626-4ae0-b053-1dae3c13dd47\") " Jan 29 06:56:48 crc 
kubenswrapper[4826]: I0129 06:56:48.518155 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fht9n\" (UniqueName: \"kubernetes.io/projected/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-kube-api-access-fht9n\") pod \"9bc5b6b0-9626-4ae0-b053-1dae3c13dd47\" (UID: \"9bc5b6b0-9626-4ae0-b053-1dae3c13dd47\") " Jan 29 06:56:48 crc kubenswrapper[4826]: I0129 06:56:48.518173 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-console-config\") pod \"9bc5b6b0-9626-4ae0-b053-1dae3c13dd47\" (UID: \"9bc5b6b0-9626-4ae0-b053-1dae3c13dd47\") " Jan 29 06:56:48 crc kubenswrapper[4826]: I0129 06:56:48.518216 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-console-oauth-config\") pod \"9bc5b6b0-9626-4ae0-b053-1dae3c13dd47\" (UID: \"9bc5b6b0-9626-4ae0-b053-1dae3c13dd47\") " Jan 29 06:56:48 crc kubenswrapper[4826]: I0129 06:56:48.519212 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-service-ca" (OuterVolumeSpecName: "service-ca") pod "9bc5b6b0-9626-4ae0-b053-1dae3c13dd47" (UID: "9bc5b6b0-9626-4ae0-b053-1dae3c13dd47"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:56:48 crc kubenswrapper[4826]: I0129 06:56:48.519204 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "9bc5b6b0-9626-4ae0-b053-1dae3c13dd47" (UID: "9bc5b6b0-9626-4ae0-b053-1dae3c13dd47"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:56:48 crc kubenswrapper[4826]: I0129 06:56:48.519379 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "9bc5b6b0-9626-4ae0-b053-1dae3c13dd47" (UID: "9bc5b6b0-9626-4ae0-b053-1dae3c13dd47"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:56:48 crc kubenswrapper[4826]: I0129 06:56:48.520212 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-console-config" (OuterVolumeSpecName: "console-config") pod "9bc5b6b0-9626-4ae0-b053-1dae3c13dd47" (UID: "9bc5b6b0-9626-4ae0-b053-1dae3c13dd47"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 06:56:48 crc kubenswrapper[4826]: I0129 06:56:48.524883 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "9bc5b6b0-9626-4ae0-b053-1dae3c13dd47" (UID: "9bc5b6b0-9626-4ae0-b053-1dae3c13dd47"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:56:48 crc kubenswrapper[4826]: I0129 06:56:48.524975 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-kube-api-access-fht9n" (OuterVolumeSpecName: "kube-api-access-fht9n") pod "9bc5b6b0-9626-4ae0-b053-1dae3c13dd47" (UID: "9bc5b6b0-9626-4ae0-b053-1dae3c13dd47"). InnerVolumeSpecName "kube-api-access-fht9n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:56:48 crc kubenswrapper[4826]: I0129 06:56:48.530625 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "9bc5b6b0-9626-4ae0-b053-1dae3c13dd47" (UID: "9bc5b6b0-9626-4ae0-b053-1dae3c13dd47"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 06:56:48 crc kubenswrapper[4826]: I0129 06:56:48.619568 4826 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:48 crc kubenswrapper[4826]: I0129 06:56:48.619632 4826 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:48 crc kubenswrapper[4826]: I0129 06:56:48.619654 4826 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:48 crc kubenswrapper[4826]: I0129 06:56:48.619673 4826 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:48 crc kubenswrapper[4826]: I0129 06:56:48.619692 4826 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:48 crc kubenswrapper[4826]: I0129 06:56:48.619711 4826 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-fht9n\" (UniqueName: \"kubernetes.io/projected/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-kube-api-access-fht9n\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:48 crc kubenswrapper[4826]: I0129 06:56:48.619730 4826 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47-console-config\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:49 crc kubenswrapper[4826]: I0129 06:56:49.120439 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-t4qwq_9bc5b6b0-9626-4ae0-b053-1dae3c13dd47/console/0.log" Jan 29 06:56:49 crc kubenswrapper[4826]: I0129 06:56:49.120915 4826 generic.go:334] "Generic (PLEG): container finished" podID="9bc5b6b0-9626-4ae0-b053-1dae3c13dd47" containerID="ee0c45e48dea9fff2c053aee583d2a09c66c1df7556eda297189cb018ed5adcb" exitCode=2 Jan 29 06:56:49 crc kubenswrapper[4826]: I0129 06:56:49.120961 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-t4qwq" event={"ID":"9bc5b6b0-9626-4ae0-b053-1dae3c13dd47","Type":"ContainerDied","Data":"ee0c45e48dea9fff2c053aee583d2a09c66c1df7556eda297189cb018ed5adcb"} Jan 29 06:56:49 crc kubenswrapper[4826]: I0129 06:56:49.121015 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-t4qwq" event={"ID":"9bc5b6b0-9626-4ae0-b053-1dae3c13dd47","Type":"ContainerDied","Data":"ef5c124a1377eccc9cc02cd6a9365e50d5460c00116df17df61fa0d537e11254"} Jan 29 06:56:49 crc kubenswrapper[4826]: I0129 06:56:49.121014 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-t4qwq" Jan 29 06:56:49 crc kubenswrapper[4826]: I0129 06:56:49.121075 4826 scope.go:117] "RemoveContainer" containerID="ee0c45e48dea9fff2c053aee583d2a09c66c1df7556eda297189cb018ed5adcb" Jan 29 06:56:49 crc kubenswrapper[4826]: I0129 06:56:49.153851 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-t4qwq"] Jan 29 06:56:49 crc kubenswrapper[4826]: I0129 06:56:49.155397 4826 scope.go:117] "RemoveContainer" containerID="ee0c45e48dea9fff2c053aee583d2a09c66c1df7556eda297189cb018ed5adcb" Jan 29 06:56:49 crc kubenswrapper[4826]: E0129 06:56:49.156004 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee0c45e48dea9fff2c053aee583d2a09c66c1df7556eda297189cb018ed5adcb\": container with ID starting with ee0c45e48dea9fff2c053aee583d2a09c66c1df7556eda297189cb018ed5adcb not found: ID does not exist" containerID="ee0c45e48dea9fff2c053aee583d2a09c66c1df7556eda297189cb018ed5adcb" Jan 29 06:56:49 crc kubenswrapper[4826]: I0129 06:56:49.156072 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee0c45e48dea9fff2c053aee583d2a09c66c1df7556eda297189cb018ed5adcb"} err="failed to get container status \"ee0c45e48dea9fff2c053aee583d2a09c66c1df7556eda297189cb018ed5adcb\": rpc error: code = NotFound desc = could not find container \"ee0c45e48dea9fff2c053aee583d2a09c66c1df7556eda297189cb018ed5adcb\": container with ID starting with ee0c45e48dea9fff2c053aee583d2a09c66c1df7556eda297189cb018ed5adcb not found: ID does not exist" Jan 29 06:56:49 crc kubenswrapper[4826]: I0129 06:56:49.161775 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-t4qwq"] Jan 29 06:56:50 crc kubenswrapper[4826]: I0129 06:56:50.823132 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bc5b6b0-9626-4ae0-b053-1dae3c13dd47" 
path="/var/lib/kubelet/pods/9bc5b6b0-9626-4ae0-b053-1dae3c13dd47/volumes" Jan 29 06:56:51 crc kubenswrapper[4826]: I0129 06:56:51.353917 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2s2bn"] Jan 29 06:56:51 crc kubenswrapper[4826]: E0129 06:56:51.354642 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c71fb9d-1000-4eb9-8418-7959457d051f" containerName="extract-content" Jan 29 06:56:51 crc kubenswrapper[4826]: I0129 06:56:51.354666 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c71fb9d-1000-4eb9-8418-7959457d051f" containerName="extract-content" Jan 29 06:56:51 crc kubenswrapper[4826]: E0129 06:56:51.354694 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c71fb9d-1000-4eb9-8418-7959457d051f" containerName="registry-server" Jan 29 06:56:51 crc kubenswrapper[4826]: I0129 06:56:51.354707 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c71fb9d-1000-4eb9-8418-7959457d051f" containerName="registry-server" Jan 29 06:56:51 crc kubenswrapper[4826]: E0129 06:56:51.354737 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bc5b6b0-9626-4ae0-b053-1dae3c13dd47" containerName="console" Jan 29 06:56:51 crc kubenswrapper[4826]: I0129 06:56:51.354751 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bc5b6b0-9626-4ae0-b053-1dae3c13dd47" containerName="console" Jan 29 06:56:51 crc kubenswrapper[4826]: E0129 06:56:51.354767 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c71fb9d-1000-4eb9-8418-7959457d051f" containerName="extract-utilities" Jan 29 06:56:51 crc kubenswrapper[4826]: I0129 06:56:51.354780 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c71fb9d-1000-4eb9-8418-7959457d051f" containerName="extract-utilities" Jan 29 06:56:51 crc kubenswrapper[4826]: I0129 06:56:51.354983 4826 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4c71fb9d-1000-4eb9-8418-7959457d051f" containerName="registry-server" Jan 29 06:56:51 crc kubenswrapper[4826]: I0129 06:56:51.355019 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bc5b6b0-9626-4ae0-b053-1dae3c13dd47" containerName="console" Jan 29 06:56:51 crc kubenswrapper[4826]: I0129 06:56:51.356665 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2s2bn" Jan 29 06:56:51 crc kubenswrapper[4826]: I0129 06:56:51.362506 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 06:56:51 crc kubenswrapper[4826]: I0129 06:56:51.364619 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2s2bn"] Jan 29 06:56:51 crc kubenswrapper[4826]: I0129 06:56:51.470760 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bszdt\" (UniqueName: \"kubernetes.io/projected/a1efbdc4-cb00-47f3-a174-62be62cea868-kube-api-access-bszdt\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2s2bn\" (UID: \"a1efbdc4-cb00-47f3-a174-62be62cea868\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2s2bn" Jan 29 06:56:51 crc kubenswrapper[4826]: I0129 06:56:51.470842 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a1efbdc4-cb00-47f3-a174-62be62cea868-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2s2bn\" (UID: \"a1efbdc4-cb00-47f3-a174-62be62cea868\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2s2bn" Jan 29 06:56:51 crc kubenswrapper[4826]: I0129 06:56:51.470906 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a1efbdc4-cb00-47f3-a174-62be62cea868-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2s2bn\" (UID: \"a1efbdc4-cb00-47f3-a174-62be62cea868\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2s2bn" Jan 29 06:56:51 crc kubenswrapper[4826]: I0129 06:56:51.572254 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bszdt\" (UniqueName: \"kubernetes.io/projected/a1efbdc4-cb00-47f3-a174-62be62cea868-kube-api-access-bszdt\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2s2bn\" (UID: \"a1efbdc4-cb00-47f3-a174-62be62cea868\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2s2bn" Jan 29 06:56:51 crc kubenswrapper[4826]: I0129 06:56:51.572371 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a1efbdc4-cb00-47f3-a174-62be62cea868-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2s2bn\" (UID: \"a1efbdc4-cb00-47f3-a174-62be62cea868\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2s2bn" Jan 29 06:56:51 crc kubenswrapper[4826]: I0129 06:56:51.572415 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a1efbdc4-cb00-47f3-a174-62be62cea868-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2s2bn\" (UID: \"a1efbdc4-cb00-47f3-a174-62be62cea868\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2s2bn" Jan 29 06:56:51 crc kubenswrapper[4826]: I0129 06:56:51.572929 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a1efbdc4-cb00-47f3-a174-62be62cea868-util\") pod 
\"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2s2bn\" (UID: \"a1efbdc4-cb00-47f3-a174-62be62cea868\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2s2bn" Jan 29 06:56:51 crc kubenswrapper[4826]: I0129 06:56:51.573140 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a1efbdc4-cb00-47f3-a174-62be62cea868-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2s2bn\" (UID: \"a1efbdc4-cb00-47f3-a174-62be62cea868\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2s2bn" Jan 29 06:56:51 crc kubenswrapper[4826]: I0129 06:56:51.610774 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bszdt\" (UniqueName: \"kubernetes.io/projected/a1efbdc4-cb00-47f3-a174-62be62cea868-kube-api-access-bszdt\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2s2bn\" (UID: \"a1efbdc4-cb00-47f3-a174-62be62cea868\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2s2bn" Jan 29 06:56:51 crc kubenswrapper[4826]: I0129 06:56:51.690293 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2s2bn" Jan 29 06:56:52 crc kubenswrapper[4826]: I0129 06:56:52.067843 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2s2bn"] Jan 29 06:56:52 crc kubenswrapper[4826]: I0129 06:56:52.141852 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2s2bn" event={"ID":"a1efbdc4-cb00-47f3-a174-62be62cea868","Type":"ContainerStarted","Data":"176011c457a1c62ca63f98b7991366f447aedc82b0538dc14442ad1030aab95c"} Jan 29 06:56:53 crc kubenswrapper[4826]: I0129 06:56:53.152447 4826 generic.go:334] "Generic (PLEG): container finished" podID="a1efbdc4-cb00-47f3-a174-62be62cea868" containerID="a70f7de0fea5d577866841b144ebb5c142e2b4e99bf5850b068fc3a47cb04b1f" exitCode=0 Jan 29 06:56:53 crc kubenswrapper[4826]: I0129 06:56:53.152667 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2s2bn" event={"ID":"a1efbdc4-cb00-47f3-a174-62be62cea868","Type":"ContainerDied","Data":"a70f7de0fea5d577866841b144ebb5c142e2b4e99bf5850b068fc3a47cb04b1f"} Jan 29 06:56:55 crc kubenswrapper[4826]: I0129 06:56:55.180175 4826 generic.go:334] "Generic (PLEG): container finished" podID="a1efbdc4-cb00-47f3-a174-62be62cea868" containerID="08d251aee342155ff7eacdb59a5ac08c8554b114bc9479d8bca014bdfdb29ed8" exitCode=0 Jan 29 06:56:55 crc kubenswrapper[4826]: I0129 06:56:55.180231 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2s2bn" event={"ID":"a1efbdc4-cb00-47f3-a174-62be62cea868","Type":"ContainerDied","Data":"08d251aee342155ff7eacdb59a5ac08c8554b114bc9479d8bca014bdfdb29ed8"} Jan 29 06:56:56 crc kubenswrapper[4826]: I0129 06:56:56.191156 4826 
generic.go:334] "Generic (PLEG): container finished" podID="a1efbdc4-cb00-47f3-a174-62be62cea868" containerID="df18a87e7260db780f54dd994f65a6f7b934a78614114fb9c355eec9092836fc" exitCode=0 Jan 29 06:56:56 crc kubenswrapper[4826]: I0129 06:56:56.191220 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2s2bn" event={"ID":"a1efbdc4-cb00-47f3-a174-62be62cea868","Type":"ContainerDied","Data":"df18a87e7260db780f54dd994f65a6f7b934a78614114fb9c355eec9092836fc"} Jan 29 06:56:57 crc kubenswrapper[4826]: I0129 06:56:57.485026 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2s2bn" Jan 29 06:56:57 crc kubenswrapper[4826]: I0129 06:56:57.590495 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a1efbdc4-cb00-47f3-a174-62be62cea868-bundle\") pod \"a1efbdc4-cb00-47f3-a174-62be62cea868\" (UID: \"a1efbdc4-cb00-47f3-a174-62be62cea868\") " Jan 29 06:56:57 crc kubenswrapper[4826]: I0129 06:56:57.590590 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a1efbdc4-cb00-47f3-a174-62be62cea868-util\") pod \"a1efbdc4-cb00-47f3-a174-62be62cea868\" (UID: \"a1efbdc4-cb00-47f3-a174-62be62cea868\") " Jan 29 06:56:57 crc kubenswrapper[4826]: I0129 06:56:57.590649 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bszdt\" (UniqueName: \"kubernetes.io/projected/a1efbdc4-cb00-47f3-a174-62be62cea868-kube-api-access-bszdt\") pod \"a1efbdc4-cb00-47f3-a174-62be62cea868\" (UID: \"a1efbdc4-cb00-47f3-a174-62be62cea868\") " Jan 29 06:56:57 crc kubenswrapper[4826]: I0129 06:56:57.592477 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/a1efbdc4-cb00-47f3-a174-62be62cea868-bundle" (OuterVolumeSpecName: "bundle") pod "a1efbdc4-cb00-47f3-a174-62be62cea868" (UID: "a1efbdc4-cb00-47f3-a174-62be62cea868"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:56:57 crc kubenswrapper[4826]: I0129 06:56:57.601806 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1efbdc4-cb00-47f3-a174-62be62cea868-kube-api-access-bszdt" (OuterVolumeSpecName: "kube-api-access-bszdt") pod "a1efbdc4-cb00-47f3-a174-62be62cea868" (UID: "a1efbdc4-cb00-47f3-a174-62be62cea868"). InnerVolumeSpecName "kube-api-access-bszdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:56:57 crc kubenswrapper[4826]: I0129 06:56:57.693556 4826 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a1efbdc4-cb00-47f3-a174-62be62cea868-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:57 crc kubenswrapper[4826]: I0129 06:56:57.693623 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bszdt\" (UniqueName: \"kubernetes.io/projected/a1efbdc4-cb00-47f3-a174-62be62cea868-kube-api-access-bszdt\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:57 crc kubenswrapper[4826]: I0129 06:56:57.700474 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1efbdc4-cb00-47f3-a174-62be62cea868-util" (OuterVolumeSpecName: "util") pod "a1efbdc4-cb00-47f3-a174-62be62cea868" (UID: "a1efbdc4-cb00-47f3-a174-62be62cea868"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:56:57 crc kubenswrapper[4826]: I0129 06:56:57.795473 4826 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a1efbdc4-cb00-47f3-a174-62be62cea868-util\") on node \"crc\" DevicePath \"\"" Jan 29 06:56:58 crc kubenswrapper[4826]: I0129 06:56:58.207179 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2s2bn" event={"ID":"a1efbdc4-cb00-47f3-a174-62be62cea868","Type":"ContainerDied","Data":"176011c457a1c62ca63f98b7991366f447aedc82b0538dc14442ad1030aab95c"} Jan 29 06:56:58 crc kubenswrapper[4826]: I0129 06:56:58.207252 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="176011c457a1c62ca63f98b7991366f447aedc82b0538dc14442ad1030aab95c" Jan 29 06:56:58 crc kubenswrapper[4826]: I0129 06:56:58.207326 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2s2bn" Jan 29 06:57:05 crc kubenswrapper[4826]: I0129 06:57:05.656479 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 06:57:05 crc kubenswrapper[4826]: I0129 06:57:05.657102 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 06:57:05 crc kubenswrapper[4826]: I0129 06:57:05.657180 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" Jan 29 06:57:05 crc kubenswrapper[4826]: I0129 06:57:05.658147 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a3e78bc337b51dad14a09b0691d31de9770dca1238f79d864ee283914dbce58e"} pod="openshift-machine-config-operator/machine-config-daemon-llzmh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 06:57:05 crc kubenswrapper[4826]: I0129 06:57:05.658333 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" containerID="cri-o://a3e78bc337b51dad14a09b0691d31de9770dca1238f79d864ee283914dbce58e" gracePeriod=600 Jan 29 06:57:06 crc kubenswrapper[4826]: I0129 06:57:06.257713 4826 generic.go:334] "Generic (PLEG): container finished" podID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerID="a3e78bc337b51dad14a09b0691d31de9770dca1238f79d864ee283914dbce58e" exitCode=0 Jan 29 06:57:06 crc kubenswrapper[4826]: I0129 06:57:06.257765 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerDied","Data":"a3e78bc337b51dad14a09b0691d31de9770dca1238f79d864ee283914dbce58e"} Jan 29 06:57:06 crc kubenswrapper[4826]: I0129 06:57:06.258271 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerStarted","Data":"d3303398d9dd82a2bcfef8e8991ab372b491761ad48de0e25a106a7c53d77566"} Jan 29 06:57:06 crc kubenswrapper[4826]: I0129 06:57:06.258313 4826 scope.go:117] "RemoveContainer" 
containerID="fcf1ea2f7e5c5fbf4091e579b4a862769cc5342172862a00f7331d08867ff6a2" Jan 29 06:57:06 crc kubenswrapper[4826]: I0129 06:57:06.674572 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5bdcdf948b-6jpmp"] Jan 29 06:57:06 crc kubenswrapper[4826]: E0129 06:57:06.674814 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1efbdc4-cb00-47f3-a174-62be62cea868" containerName="util" Jan 29 06:57:06 crc kubenswrapper[4826]: I0129 06:57:06.674828 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1efbdc4-cb00-47f3-a174-62be62cea868" containerName="util" Jan 29 06:57:06 crc kubenswrapper[4826]: E0129 06:57:06.674851 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1efbdc4-cb00-47f3-a174-62be62cea868" containerName="pull" Jan 29 06:57:06 crc kubenswrapper[4826]: I0129 06:57:06.674860 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1efbdc4-cb00-47f3-a174-62be62cea868" containerName="pull" Jan 29 06:57:06 crc kubenswrapper[4826]: E0129 06:57:06.674875 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1efbdc4-cb00-47f3-a174-62be62cea868" containerName="extract" Jan 29 06:57:06 crc kubenswrapper[4826]: I0129 06:57:06.674884 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1efbdc4-cb00-47f3-a174-62be62cea868" containerName="extract" Jan 29 06:57:06 crc kubenswrapper[4826]: I0129 06:57:06.675014 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1efbdc4-cb00-47f3-a174-62be62cea868" containerName="extract" Jan 29 06:57:06 crc kubenswrapper[4826]: I0129 06:57:06.675543 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5bdcdf948b-6jpmp" Jan 29 06:57:06 crc kubenswrapper[4826]: I0129 06:57:06.677631 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 29 06:57:06 crc kubenswrapper[4826]: I0129 06:57:06.677631 4826 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 29 06:57:06 crc kubenswrapper[4826]: I0129 06:57:06.677915 4826 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 29 06:57:06 crc kubenswrapper[4826]: I0129 06:57:06.678324 4826 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-4hvf5" Jan 29 06:57:06 crc kubenswrapper[4826]: I0129 06:57:06.679002 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 29 06:57:06 crc kubenswrapper[4826]: I0129 06:57:06.744788 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5bdcdf948b-6jpmp"] Jan 29 06:57:06 crc kubenswrapper[4826]: I0129 06:57:06.827158 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/06706f9d-60f4-4e7b-8038-d7fcfc82999a-webhook-cert\") pod \"metallb-operator-controller-manager-5bdcdf948b-6jpmp\" (UID: \"06706f9d-60f4-4e7b-8038-d7fcfc82999a\") " pod="metallb-system/metallb-operator-controller-manager-5bdcdf948b-6jpmp" Jan 29 06:57:06 crc kubenswrapper[4826]: I0129 06:57:06.827225 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/06706f9d-60f4-4e7b-8038-d7fcfc82999a-apiservice-cert\") pod \"metallb-operator-controller-manager-5bdcdf948b-6jpmp\" (UID: 
\"06706f9d-60f4-4e7b-8038-d7fcfc82999a\") " pod="metallb-system/metallb-operator-controller-manager-5bdcdf948b-6jpmp" Jan 29 06:57:06 crc kubenswrapper[4826]: I0129 06:57:06.827538 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5kns\" (UniqueName: \"kubernetes.io/projected/06706f9d-60f4-4e7b-8038-d7fcfc82999a-kube-api-access-g5kns\") pod \"metallb-operator-controller-manager-5bdcdf948b-6jpmp\" (UID: \"06706f9d-60f4-4e7b-8038-d7fcfc82999a\") " pod="metallb-system/metallb-operator-controller-manager-5bdcdf948b-6jpmp" Jan 29 06:57:06 crc kubenswrapper[4826]: I0129 06:57:06.929343 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/06706f9d-60f4-4e7b-8038-d7fcfc82999a-webhook-cert\") pod \"metallb-operator-controller-manager-5bdcdf948b-6jpmp\" (UID: \"06706f9d-60f4-4e7b-8038-d7fcfc82999a\") " pod="metallb-system/metallb-operator-controller-manager-5bdcdf948b-6jpmp" Jan 29 06:57:06 crc kubenswrapper[4826]: I0129 06:57:06.930228 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/06706f9d-60f4-4e7b-8038-d7fcfc82999a-apiservice-cert\") pod \"metallb-operator-controller-manager-5bdcdf948b-6jpmp\" (UID: \"06706f9d-60f4-4e7b-8038-d7fcfc82999a\") " pod="metallb-system/metallb-operator-controller-manager-5bdcdf948b-6jpmp" Jan 29 06:57:06 crc kubenswrapper[4826]: I0129 06:57:06.930262 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5kns\" (UniqueName: \"kubernetes.io/projected/06706f9d-60f4-4e7b-8038-d7fcfc82999a-kube-api-access-g5kns\") pod \"metallb-operator-controller-manager-5bdcdf948b-6jpmp\" (UID: \"06706f9d-60f4-4e7b-8038-d7fcfc82999a\") " pod="metallb-system/metallb-operator-controller-manager-5bdcdf948b-6jpmp" Jan 29 06:57:06 crc kubenswrapper[4826]: I0129 06:57:06.939262 4826 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/06706f9d-60f4-4e7b-8038-d7fcfc82999a-apiservice-cert\") pod \"metallb-operator-controller-manager-5bdcdf948b-6jpmp\" (UID: \"06706f9d-60f4-4e7b-8038-d7fcfc82999a\") " pod="metallb-system/metallb-operator-controller-manager-5bdcdf948b-6jpmp" Jan 29 06:57:06 crc kubenswrapper[4826]: I0129 06:57:06.951800 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/06706f9d-60f4-4e7b-8038-d7fcfc82999a-webhook-cert\") pod \"metallb-operator-controller-manager-5bdcdf948b-6jpmp\" (UID: \"06706f9d-60f4-4e7b-8038-d7fcfc82999a\") " pod="metallb-system/metallb-operator-controller-manager-5bdcdf948b-6jpmp" Jan 29 06:57:06 crc kubenswrapper[4826]: I0129 06:57:06.952879 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5kns\" (UniqueName: \"kubernetes.io/projected/06706f9d-60f4-4e7b-8038-d7fcfc82999a-kube-api-access-g5kns\") pod \"metallb-operator-controller-manager-5bdcdf948b-6jpmp\" (UID: \"06706f9d-60f4-4e7b-8038-d7fcfc82999a\") " pod="metallb-system/metallb-operator-controller-manager-5bdcdf948b-6jpmp" Jan 29 06:57:06 crc kubenswrapper[4826]: I0129 06:57:06.994651 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5bdcdf948b-6jpmp" Jan 29 06:57:07 crc kubenswrapper[4826]: I0129 06:57:07.239333 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-599f7c78cf-nrhfg"] Jan 29 06:57:07 crc kubenswrapper[4826]: I0129 06:57:07.240418 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-599f7c78cf-nrhfg" Jan 29 06:57:07 crc kubenswrapper[4826]: I0129 06:57:07.257152 4826 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 29 06:57:07 crc kubenswrapper[4826]: I0129 06:57:07.257342 4826 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-tqwqm" Jan 29 06:57:07 crc kubenswrapper[4826]: I0129 06:57:07.257466 4826 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 29 06:57:07 crc kubenswrapper[4826]: I0129 06:57:07.266137 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-599f7c78cf-nrhfg"] Jan 29 06:57:07 crc kubenswrapper[4826]: I0129 06:57:07.345372 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c52567fd-c599-41cf-89f2-9331a467c3bd-webhook-cert\") pod \"metallb-operator-webhook-server-599f7c78cf-nrhfg\" (UID: \"c52567fd-c599-41cf-89f2-9331a467c3bd\") " pod="metallb-system/metallb-operator-webhook-server-599f7c78cf-nrhfg" Jan 29 06:57:07 crc kubenswrapper[4826]: I0129 06:57:07.345418 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c52567fd-c599-41cf-89f2-9331a467c3bd-apiservice-cert\") pod \"metallb-operator-webhook-server-599f7c78cf-nrhfg\" (UID: \"c52567fd-c599-41cf-89f2-9331a467c3bd\") " pod="metallb-system/metallb-operator-webhook-server-599f7c78cf-nrhfg" Jan 29 06:57:07 crc kubenswrapper[4826]: I0129 06:57:07.345581 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5t94\" (UniqueName: 
\"kubernetes.io/projected/c52567fd-c599-41cf-89f2-9331a467c3bd-kube-api-access-h5t94\") pod \"metallb-operator-webhook-server-599f7c78cf-nrhfg\" (UID: \"c52567fd-c599-41cf-89f2-9331a467c3bd\") " pod="metallb-system/metallb-operator-webhook-server-599f7c78cf-nrhfg" Jan 29 06:57:07 crc kubenswrapper[4826]: I0129 06:57:07.446449 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c52567fd-c599-41cf-89f2-9331a467c3bd-webhook-cert\") pod \"metallb-operator-webhook-server-599f7c78cf-nrhfg\" (UID: \"c52567fd-c599-41cf-89f2-9331a467c3bd\") " pod="metallb-system/metallb-operator-webhook-server-599f7c78cf-nrhfg" Jan 29 06:57:07 crc kubenswrapper[4826]: I0129 06:57:07.446827 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c52567fd-c599-41cf-89f2-9331a467c3bd-apiservice-cert\") pod \"metallb-operator-webhook-server-599f7c78cf-nrhfg\" (UID: \"c52567fd-c599-41cf-89f2-9331a467c3bd\") " pod="metallb-system/metallb-operator-webhook-server-599f7c78cf-nrhfg" Jan 29 06:57:07 crc kubenswrapper[4826]: I0129 06:57:07.447004 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5t94\" (UniqueName: \"kubernetes.io/projected/c52567fd-c599-41cf-89f2-9331a467c3bd-kube-api-access-h5t94\") pod \"metallb-operator-webhook-server-599f7c78cf-nrhfg\" (UID: \"c52567fd-c599-41cf-89f2-9331a467c3bd\") " pod="metallb-system/metallb-operator-webhook-server-599f7c78cf-nrhfg" Jan 29 06:57:07 crc kubenswrapper[4826]: I0129 06:57:07.453116 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c52567fd-c599-41cf-89f2-9331a467c3bd-apiservice-cert\") pod \"metallb-operator-webhook-server-599f7c78cf-nrhfg\" (UID: \"c52567fd-c599-41cf-89f2-9331a467c3bd\") " pod="metallb-system/metallb-operator-webhook-server-599f7c78cf-nrhfg" 
Jan 29 06:57:07 crc kubenswrapper[4826]: I0129 06:57:07.453441 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c52567fd-c599-41cf-89f2-9331a467c3bd-webhook-cert\") pod \"metallb-operator-webhook-server-599f7c78cf-nrhfg\" (UID: \"c52567fd-c599-41cf-89f2-9331a467c3bd\") " pod="metallb-system/metallb-operator-webhook-server-599f7c78cf-nrhfg" Jan 29 06:57:07 crc kubenswrapper[4826]: I0129 06:57:07.462816 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5t94\" (UniqueName: \"kubernetes.io/projected/c52567fd-c599-41cf-89f2-9331a467c3bd-kube-api-access-h5t94\") pod \"metallb-operator-webhook-server-599f7c78cf-nrhfg\" (UID: \"c52567fd-c599-41cf-89f2-9331a467c3bd\") " pod="metallb-system/metallb-operator-webhook-server-599f7c78cf-nrhfg" Jan 29 06:57:07 crc kubenswrapper[4826]: I0129 06:57:07.570968 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-599f7c78cf-nrhfg" Jan 29 06:57:07 crc kubenswrapper[4826]: I0129 06:57:07.576183 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5bdcdf948b-6jpmp"] Jan 29 06:57:07 crc kubenswrapper[4826]: I0129 06:57:07.818840 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-599f7c78cf-nrhfg"] Jan 29 06:57:07 crc kubenswrapper[4826]: W0129 06:57:07.828865 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc52567fd_c599_41cf_89f2_9331a467c3bd.slice/crio-96f6e0d29f4eec3dc8e4333a0c821cd878c6a04c36d85aee70c4f0a8d7ad8d28 WatchSource:0}: Error finding container 96f6e0d29f4eec3dc8e4333a0c821cd878c6a04c36d85aee70c4f0a8d7ad8d28: Status 404 returned error can't find the container with id 96f6e0d29f4eec3dc8e4333a0c821cd878c6a04c36d85aee70c4f0a8d7ad8d28 Jan 29 
06:57:08 crc kubenswrapper[4826]: I0129 06:57:08.307928 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-599f7c78cf-nrhfg" event={"ID":"c52567fd-c599-41cf-89f2-9331a467c3bd","Type":"ContainerStarted","Data":"96f6e0d29f4eec3dc8e4333a0c821cd878c6a04c36d85aee70c4f0a8d7ad8d28"} Jan 29 06:57:08 crc kubenswrapper[4826]: I0129 06:57:08.308913 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5bdcdf948b-6jpmp" event={"ID":"06706f9d-60f4-4e7b-8038-d7fcfc82999a","Type":"ContainerStarted","Data":"8f27bd72bd9d1c71b59d1bdf5cee881bac27d9f487577a4128f5b123f8d5a402"} Jan 29 06:57:13 crc kubenswrapper[4826]: I0129 06:57:13.359186 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5bdcdf948b-6jpmp" event={"ID":"06706f9d-60f4-4e7b-8038-d7fcfc82999a","Type":"ContainerStarted","Data":"1c05ac7c969a789b693d22466f88e827f9a79300bb0340ee228cd7c7b2de7094"} Jan 29 06:57:13 crc kubenswrapper[4826]: I0129 06:57:13.360029 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5bdcdf948b-6jpmp" Jan 29 06:57:13 crc kubenswrapper[4826]: I0129 06:57:13.361137 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-599f7c78cf-nrhfg" event={"ID":"c52567fd-c599-41cf-89f2-9331a467c3bd","Type":"ContainerStarted","Data":"71f0ae1dc445a5621f7d7024b3c21d92bf2cf3ddd11fc12797c0d6c4bfd732b8"} Jan 29 06:57:13 crc kubenswrapper[4826]: I0129 06:57:13.361499 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-599f7c78cf-nrhfg" Jan 29 06:57:13 crc kubenswrapper[4826]: I0129 06:57:13.387553 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5bdcdf948b-6jpmp" 
podStartSLOduration=2.506606764 podStartE2EDuration="7.38753397s" podCreationTimestamp="2026-01-29 06:57:06 +0000 UTC" firstStartedPulling="2026-01-29 06:57:07.593585174 +0000 UTC m=+811.455378243" lastFinishedPulling="2026-01-29 06:57:12.47451238 +0000 UTC m=+816.336305449" observedRunningTime="2026-01-29 06:57:13.384661505 +0000 UTC m=+817.246454594" watchObservedRunningTime="2026-01-29 06:57:13.38753397 +0000 UTC m=+817.249327059" Jan 29 06:57:13 crc kubenswrapper[4826]: I0129 06:57:13.408559 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-599f7c78cf-nrhfg" podStartSLOduration=1.751358068 podStartE2EDuration="6.408536496s" podCreationTimestamp="2026-01-29 06:57:07 +0000 UTC" firstStartedPulling="2026-01-29 06:57:07.834669193 +0000 UTC m=+811.696462262" lastFinishedPulling="2026-01-29 06:57:12.491847621 +0000 UTC m=+816.353640690" observedRunningTime="2026-01-29 06:57:13.407879209 +0000 UTC m=+817.269672288" watchObservedRunningTime="2026-01-29 06:57:13.408536496 +0000 UTC m=+817.270329575" Jan 29 06:57:27 crc kubenswrapper[4826]: I0129 06:57:27.575983 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-599f7c78cf-nrhfg" Jan 29 06:57:47 crc kubenswrapper[4826]: I0129 06:57:47.001978 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5bdcdf948b-6jpmp" Jan 29 06:57:47 crc kubenswrapper[4826]: I0129 06:57:47.860516 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-zsbwk"] Jan 29 06:57:47 crc kubenswrapper[4826]: I0129 06:57:47.863272 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-zsbwk" Jan 29 06:57:47 crc kubenswrapper[4826]: I0129 06:57:47.867003 4826 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 29 06:57:47 crc kubenswrapper[4826]: I0129 06:57:47.867538 4826 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-j8m2j" Jan 29 06:57:47 crc kubenswrapper[4826]: I0129 06:57:47.867554 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 29 06:57:47 crc kubenswrapper[4826]: I0129 06:57:47.876924 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-m8t69"] Jan 29 06:57:47 crc kubenswrapper[4826]: I0129 06:57:47.877831 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-m8t69" Jan 29 06:57:47 crc kubenswrapper[4826]: I0129 06:57:47.883709 4826 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 29 06:57:47 crc kubenswrapper[4826]: I0129 06:57:47.904392 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-m8t69"] Jan 29 06:57:47 crc kubenswrapper[4826]: I0129 06:57:47.980574 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-h89x7"] Jan 29 06:57:47 crc kubenswrapper[4826]: I0129 06:57:47.984916 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-h89x7" Jan 29 06:57:47 crc kubenswrapper[4826]: I0129 06:57:47.987025 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 29 06:57:47 crc kubenswrapper[4826]: I0129 06:57:47.987659 4826 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 29 06:57:47 crc kubenswrapper[4826]: I0129 06:57:47.987872 4826 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-sfggs" Jan 29 06:57:47 crc kubenswrapper[4826]: I0129 06:57:47.988348 4826 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.004813 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s6nk\" (UniqueName: \"kubernetes.io/projected/2eda5181-f4b2-4a86-bf3c-8ba839c80d00-kube-api-access-6s6nk\") pod \"frr-k8s-webhook-server-7df86c4f6c-m8t69\" (UID: \"2eda5181-f4b2-4a86-bf3c-8ba839c80d00\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-m8t69" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.005259 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p44t4\" (UniqueName: \"kubernetes.io/projected/80682433-6c24-45a9-a54b-8db2233f2870-kube-api-access-p44t4\") pod \"frr-k8s-zsbwk\" (UID: \"80682433-6c24-45a9-a54b-8db2233f2870\") " pod="metallb-system/frr-k8s-zsbwk" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.005335 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/80682433-6c24-45a9-a54b-8db2233f2870-frr-conf\") pod \"frr-k8s-zsbwk\" (UID: \"80682433-6c24-45a9-a54b-8db2233f2870\") " pod="metallb-system/frr-k8s-zsbwk" Jan 29 06:57:48 crc kubenswrapper[4826]: 
I0129 06:57:48.005410 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2eda5181-f4b2-4a86-bf3c-8ba839c80d00-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-m8t69\" (UID: \"2eda5181-f4b2-4a86-bf3c-8ba839c80d00\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-m8t69" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.005581 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/80682433-6c24-45a9-a54b-8db2233f2870-metrics\") pod \"frr-k8s-zsbwk\" (UID: \"80682433-6c24-45a9-a54b-8db2233f2870\") " pod="metallb-system/frr-k8s-zsbwk" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.005605 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/80682433-6c24-45a9-a54b-8db2233f2870-reloader\") pod \"frr-k8s-zsbwk\" (UID: \"80682433-6c24-45a9-a54b-8db2233f2870\") " pod="metallb-system/frr-k8s-zsbwk" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.005669 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/80682433-6c24-45a9-a54b-8db2233f2870-frr-sockets\") pod \"frr-k8s-zsbwk\" (UID: \"80682433-6c24-45a9-a54b-8db2233f2870\") " pod="metallb-system/frr-k8s-zsbwk" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.005702 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/80682433-6c24-45a9-a54b-8db2233f2870-frr-startup\") pod \"frr-k8s-zsbwk\" (UID: \"80682433-6c24-45a9-a54b-8db2233f2870\") " pod="metallb-system/frr-k8s-zsbwk" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.005757 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80682433-6c24-45a9-a54b-8db2233f2870-metrics-certs\") pod \"frr-k8s-zsbwk\" (UID: \"80682433-6c24-45a9-a54b-8db2233f2870\") " pod="metallb-system/frr-k8s-zsbwk" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.010911 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-s2gzv"] Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.021257 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-s2gzv" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.024513 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-s2gzv"] Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.024582 4826 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.107444 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/80682433-6c24-45a9-a54b-8db2233f2870-metrics\") pod \"frr-k8s-zsbwk\" (UID: \"80682433-6c24-45a9-a54b-8db2233f2870\") " pod="metallb-system/frr-k8s-zsbwk" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.107490 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3405170d-fc3f-46a7-a936-c811885cc266-metrics-certs\") pod \"speaker-h89x7\" (UID: \"3405170d-fc3f-46a7-a936-c811885cc266\") " pod="metallb-system/speaker-h89x7" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.107510 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/80682433-6c24-45a9-a54b-8db2233f2870-reloader\") pod \"frr-k8s-zsbwk\" (UID: 
\"80682433-6c24-45a9-a54b-8db2233f2870\") " pod="metallb-system/frr-k8s-zsbwk" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.107541 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/80682433-6c24-45a9-a54b-8db2233f2870-frr-sockets\") pod \"frr-k8s-zsbwk\" (UID: \"80682433-6c24-45a9-a54b-8db2233f2870\") " pod="metallb-system/frr-k8s-zsbwk" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.107558 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/80682433-6c24-45a9-a54b-8db2233f2870-frr-startup\") pod \"frr-k8s-zsbwk\" (UID: \"80682433-6c24-45a9-a54b-8db2233f2870\") " pod="metallb-system/frr-k8s-zsbwk" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.107581 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80682433-6c24-45a9-a54b-8db2233f2870-metrics-certs\") pod \"frr-k8s-zsbwk\" (UID: \"80682433-6c24-45a9-a54b-8db2233f2870\") " pod="metallb-system/frr-k8s-zsbwk" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.107607 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s6nk\" (UniqueName: \"kubernetes.io/projected/2eda5181-f4b2-4a86-bf3c-8ba839c80d00-kube-api-access-6s6nk\") pod \"frr-k8s-webhook-server-7df86c4f6c-m8t69\" (UID: \"2eda5181-f4b2-4a86-bf3c-8ba839c80d00\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-m8t69" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.107629 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p44t4\" (UniqueName: \"kubernetes.io/projected/80682433-6c24-45a9-a54b-8db2233f2870-kube-api-access-p44t4\") pod \"frr-k8s-zsbwk\" (UID: \"80682433-6c24-45a9-a54b-8db2233f2870\") " pod="metallb-system/frr-k8s-zsbwk" Jan 29 06:57:48 crc 
kubenswrapper[4826]: I0129 06:57:48.107649 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3405170d-fc3f-46a7-a936-c811885cc266-memberlist\") pod \"speaker-h89x7\" (UID: \"3405170d-fc3f-46a7-a936-c811885cc266\") " pod="metallb-system/speaker-h89x7" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.107681 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/80682433-6c24-45a9-a54b-8db2233f2870-frr-conf\") pod \"frr-k8s-zsbwk\" (UID: \"80682433-6c24-45a9-a54b-8db2233f2870\") " pod="metallb-system/frr-k8s-zsbwk" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.107696 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2eda5181-f4b2-4a86-bf3c-8ba839c80d00-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-m8t69\" (UID: \"2eda5181-f4b2-4a86-bf3c-8ba839c80d00\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-m8t69" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.107712 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9b49\" (UniqueName: \"kubernetes.io/projected/3405170d-fc3f-46a7-a936-c811885cc266-kube-api-access-j9b49\") pod \"speaker-h89x7\" (UID: \"3405170d-fc3f-46a7-a936-c811885cc266\") " pod="metallb-system/speaker-h89x7" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.107734 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/3405170d-fc3f-46a7-a936-c811885cc266-metallb-excludel2\") pod \"speaker-h89x7\" (UID: \"3405170d-fc3f-46a7-a936-c811885cc266\") " pod="metallb-system/speaker-h89x7" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.108254 4826 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/80682433-6c24-45a9-a54b-8db2233f2870-reloader\") pod \"frr-k8s-zsbwk\" (UID: \"80682433-6c24-45a9-a54b-8db2233f2870\") " pod="metallb-system/frr-k8s-zsbwk" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.108467 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/80682433-6c24-45a9-a54b-8db2233f2870-frr-sockets\") pod \"frr-k8s-zsbwk\" (UID: \"80682433-6c24-45a9-a54b-8db2233f2870\") " pod="metallb-system/frr-k8s-zsbwk" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.108480 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/80682433-6c24-45a9-a54b-8db2233f2870-metrics\") pod \"frr-k8s-zsbwk\" (UID: \"80682433-6c24-45a9-a54b-8db2233f2870\") " pod="metallb-system/frr-k8s-zsbwk" Jan 29 06:57:48 crc kubenswrapper[4826]: E0129 06:57:48.108676 4826 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 29 06:57:48 crc kubenswrapper[4826]: E0129 06:57:48.108749 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80682433-6c24-45a9-a54b-8db2233f2870-metrics-certs podName:80682433-6c24-45a9-a54b-8db2233f2870 nodeName:}" failed. No retries permitted until 2026-01-29 06:57:48.608720145 +0000 UTC m=+852.470513254 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/80682433-6c24-45a9-a54b-8db2233f2870-metrics-certs") pod "frr-k8s-zsbwk" (UID: "80682433-6c24-45a9-a54b-8db2233f2870") : secret "frr-k8s-certs-secret" not found Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.109142 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/80682433-6c24-45a9-a54b-8db2233f2870-frr-startup\") pod \"frr-k8s-zsbwk\" (UID: \"80682433-6c24-45a9-a54b-8db2233f2870\") " pod="metallb-system/frr-k8s-zsbwk" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.109339 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/80682433-6c24-45a9-a54b-8db2233f2870-frr-conf\") pod \"frr-k8s-zsbwk\" (UID: \"80682433-6c24-45a9-a54b-8db2233f2870\") " pod="metallb-system/frr-k8s-zsbwk" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.121333 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2eda5181-f4b2-4a86-bf3c-8ba839c80d00-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-m8t69\" (UID: \"2eda5181-f4b2-4a86-bf3c-8ba839c80d00\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-m8t69" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.128450 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p44t4\" (UniqueName: \"kubernetes.io/projected/80682433-6c24-45a9-a54b-8db2233f2870-kube-api-access-p44t4\") pod \"frr-k8s-zsbwk\" (UID: \"80682433-6c24-45a9-a54b-8db2233f2870\") " pod="metallb-system/frr-k8s-zsbwk" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.128794 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s6nk\" (UniqueName: \"kubernetes.io/projected/2eda5181-f4b2-4a86-bf3c-8ba839c80d00-kube-api-access-6s6nk\") pod 
\"frr-k8s-webhook-server-7df86c4f6c-m8t69\" (UID: \"2eda5181-f4b2-4a86-bf3c-8ba839c80d00\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-m8t69" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.208972 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t2xh\" (UniqueName: \"kubernetes.io/projected/b4f5f505-0bfd-4f06-95bc-1402bcbfd09a-kube-api-access-8t2xh\") pod \"controller-6968d8fdc4-s2gzv\" (UID: \"b4f5f505-0bfd-4f06-95bc-1402bcbfd09a\") " pod="metallb-system/controller-6968d8fdc4-s2gzv" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.209058 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3405170d-fc3f-46a7-a936-c811885cc266-memberlist\") pod \"speaker-h89x7\" (UID: \"3405170d-fc3f-46a7-a936-c811885cc266\") " pod="metallb-system/speaker-h89x7" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.209082 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9b49\" (UniqueName: \"kubernetes.io/projected/3405170d-fc3f-46a7-a936-c811885cc266-kube-api-access-j9b49\") pod \"speaker-h89x7\" (UID: \"3405170d-fc3f-46a7-a936-c811885cc266\") " pod="metallb-system/speaker-h89x7" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.209105 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b4f5f505-0bfd-4f06-95bc-1402bcbfd09a-metrics-certs\") pod \"controller-6968d8fdc4-s2gzv\" (UID: \"b4f5f505-0bfd-4f06-95bc-1402bcbfd09a\") " pod="metallb-system/controller-6968d8fdc4-s2gzv" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.209127 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/3405170d-fc3f-46a7-a936-c811885cc266-metallb-excludel2\") pod 
\"speaker-h89x7\" (UID: \"3405170d-fc3f-46a7-a936-c811885cc266\") " pod="metallb-system/speaker-h89x7" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.209159 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3405170d-fc3f-46a7-a936-c811885cc266-metrics-certs\") pod \"speaker-h89x7\" (UID: \"3405170d-fc3f-46a7-a936-c811885cc266\") " pod="metallb-system/speaker-h89x7" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.209173 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b4f5f505-0bfd-4f06-95bc-1402bcbfd09a-cert\") pod \"controller-6968d8fdc4-s2gzv\" (UID: \"b4f5f505-0bfd-4f06-95bc-1402bcbfd09a\") " pod="metallb-system/controller-6968d8fdc4-s2gzv" Jan 29 06:57:48 crc kubenswrapper[4826]: E0129 06:57:48.209317 4826 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 29 06:57:48 crc kubenswrapper[4826]: E0129 06:57:48.209356 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3405170d-fc3f-46a7-a936-c811885cc266-memberlist podName:3405170d-fc3f-46a7-a936-c811885cc266 nodeName:}" failed. No retries permitted until 2026-01-29 06:57:48.709343519 +0000 UTC m=+852.571136588 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/3405170d-fc3f-46a7-a936-c811885cc266-memberlist") pod "speaker-h89x7" (UID: "3405170d-fc3f-46a7-a936-c811885cc266") : secret "metallb-memberlist" not found Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.210363 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/3405170d-fc3f-46a7-a936-c811885cc266-metallb-excludel2\") pod \"speaker-h89x7\" (UID: \"3405170d-fc3f-46a7-a936-c811885cc266\") " pod="metallb-system/speaker-h89x7" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.215785 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3405170d-fc3f-46a7-a936-c811885cc266-metrics-certs\") pod \"speaker-h89x7\" (UID: \"3405170d-fc3f-46a7-a936-c811885cc266\") " pod="metallb-system/speaker-h89x7" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.228840 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9b49\" (UniqueName: \"kubernetes.io/projected/3405170d-fc3f-46a7-a936-c811885cc266-kube-api-access-j9b49\") pod \"speaker-h89x7\" (UID: \"3405170d-fc3f-46a7-a936-c811885cc266\") " pod="metallb-system/speaker-h89x7" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.236693 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-m8t69" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.312721 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b4f5f505-0bfd-4f06-95bc-1402bcbfd09a-metrics-certs\") pod \"controller-6968d8fdc4-s2gzv\" (UID: \"b4f5f505-0bfd-4f06-95bc-1402bcbfd09a\") " pod="metallb-system/controller-6968d8fdc4-s2gzv" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.312830 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b4f5f505-0bfd-4f06-95bc-1402bcbfd09a-cert\") pod \"controller-6968d8fdc4-s2gzv\" (UID: \"b4f5f505-0bfd-4f06-95bc-1402bcbfd09a\") " pod="metallb-system/controller-6968d8fdc4-s2gzv" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.312910 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t2xh\" (UniqueName: \"kubernetes.io/projected/b4f5f505-0bfd-4f06-95bc-1402bcbfd09a-kube-api-access-8t2xh\") pod \"controller-6968d8fdc4-s2gzv\" (UID: \"b4f5f505-0bfd-4f06-95bc-1402bcbfd09a\") " pod="metallb-system/controller-6968d8fdc4-s2gzv" Jan 29 06:57:48 crc kubenswrapper[4826]: E0129 06:57:48.313626 4826 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 29 06:57:48 crc kubenswrapper[4826]: E0129 06:57:48.313725 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b4f5f505-0bfd-4f06-95bc-1402bcbfd09a-metrics-certs podName:b4f5f505-0bfd-4f06-95bc-1402bcbfd09a nodeName:}" failed. No retries permitted until 2026-01-29 06:57:48.813696459 +0000 UTC m=+852.675489538 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b4f5f505-0bfd-4f06-95bc-1402bcbfd09a-metrics-certs") pod "controller-6968d8fdc4-s2gzv" (UID: "b4f5f505-0bfd-4f06-95bc-1402bcbfd09a") : secret "controller-certs-secret" not found Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.316492 4826 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.327879 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b4f5f505-0bfd-4f06-95bc-1402bcbfd09a-cert\") pod \"controller-6968d8fdc4-s2gzv\" (UID: \"b4f5f505-0bfd-4f06-95bc-1402bcbfd09a\") " pod="metallb-system/controller-6968d8fdc4-s2gzv" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.336282 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t2xh\" (UniqueName: \"kubernetes.io/projected/b4f5f505-0bfd-4f06-95bc-1402bcbfd09a-kube-api-access-8t2xh\") pod \"controller-6968d8fdc4-s2gzv\" (UID: \"b4f5f505-0bfd-4f06-95bc-1402bcbfd09a\") " pod="metallb-system/controller-6968d8fdc4-s2gzv" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.456064 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-m8t69"] Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.612637 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-m8t69" event={"ID":"2eda5181-f4b2-4a86-bf3c-8ba839c80d00","Type":"ContainerStarted","Data":"a6ced7395c4cc817f91c34279e94b52517c449345e5bebb2d09afb9ae35f8fc7"} Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.616572 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80682433-6c24-45a9-a54b-8db2233f2870-metrics-certs\") pod \"frr-k8s-zsbwk\" (UID: 
\"80682433-6c24-45a9-a54b-8db2233f2870\") " pod="metallb-system/frr-k8s-zsbwk" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.620242 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80682433-6c24-45a9-a54b-8db2233f2870-metrics-certs\") pod \"frr-k8s-zsbwk\" (UID: \"80682433-6c24-45a9-a54b-8db2233f2870\") " pod="metallb-system/frr-k8s-zsbwk" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.717753 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3405170d-fc3f-46a7-a936-c811885cc266-memberlist\") pod \"speaker-h89x7\" (UID: \"3405170d-fc3f-46a7-a936-c811885cc266\") " pod="metallb-system/speaker-h89x7" Jan 29 06:57:48 crc kubenswrapper[4826]: E0129 06:57:48.718055 4826 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 29 06:57:48 crc kubenswrapper[4826]: E0129 06:57:48.718187 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3405170d-fc3f-46a7-a936-c811885cc266-memberlist podName:3405170d-fc3f-46a7-a936-c811885cc266 nodeName:}" failed. No retries permitted until 2026-01-29 06:57:49.718154287 +0000 UTC m=+853.579947396 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/3405170d-fc3f-46a7-a936-c811885cc266-memberlist") pod "speaker-h89x7" (UID: "3405170d-fc3f-46a7-a936-c811885cc266") : secret "metallb-memberlist" not found Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.819278 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b4f5f505-0bfd-4f06-95bc-1402bcbfd09a-metrics-certs\") pod \"controller-6968d8fdc4-s2gzv\" (UID: \"b4f5f505-0bfd-4f06-95bc-1402bcbfd09a\") " pod="metallb-system/controller-6968d8fdc4-s2gzv" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.824411 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b4f5f505-0bfd-4f06-95bc-1402bcbfd09a-metrics-certs\") pod \"controller-6968d8fdc4-s2gzv\" (UID: \"b4f5f505-0bfd-4f06-95bc-1402bcbfd09a\") " pod="metallb-system/controller-6968d8fdc4-s2gzv" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.826008 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-zsbwk" Jan 29 06:57:48 crc kubenswrapper[4826]: I0129 06:57:48.937526 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-s2gzv"
Jan 29 06:57:49 crc kubenswrapper[4826]: I0129 06:57:49.447445    4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-s2gzv"]
Jan 29 06:57:49 crc kubenswrapper[4826]: W0129 06:57:49.451468    4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4f5f505_0bfd_4f06_95bc_1402bcbfd09a.slice/crio-bdae50f47045d8daf1404d29b800279f61ec0d41447ccfd44ddae74431f9101a WatchSource:0}: Error finding container bdae50f47045d8daf1404d29b800279f61ec0d41447ccfd44ddae74431f9101a: Status 404 returned error can't find the container with id bdae50f47045d8daf1404d29b800279f61ec0d41447ccfd44ddae74431f9101a
Jan 29 06:57:49 crc kubenswrapper[4826]: I0129 06:57:49.623160    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zsbwk" event={"ID":"80682433-6c24-45a9-a54b-8db2233f2870","Type":"ContainerStarted","Data":"39c2857117b6e2e3362656d63af4970bc6148ab5793575c358d2dfac390c493d"}
Jan 29 06:57:49 crc kubenswrapper[4826]: I0129 06:57:49.625582    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-s2gzv" event={"ID":"b4f5f505-0bfd-4f06-95bc-1402bcbfd09a","Type":"ContainerStarted","Data":"bdae50f47045d8daf1404d29b800279f61ec0d41447ccfd44ddae74431f9101a"}
Jan 29 06:57:49 crc kubenswrapper[4826]: I0129 06:57:49.735534    4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3405170d-fc3f-46a7-a936-c811885cc266-memberlist\") pod \"speaker-h89x7\" (UID: \"3405170d-fc3f-46a7-a936-c811885cc266\") " pod="metallb-system/speaker-h89x7"
Jan 29 06:57:49 crc kubenswrapper[4826]: I0129 06:57:49.745556    4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3405170d-fc3f-46a7-a936-c811885cc266-memberlist\") pod \"speaker-h89x7\" (UID: \"3405170d-fc3f-46a7-a936-c811885cc266\") " pod="metallb-system/speaker-h89x7"
Jan 29 06:57:49 crc kubenswrapper[4826]: I0129 06:57:49.816780    4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-h89x7"
Jan 29 06:57:49 crc kubenswrapper[4826]: W0129 06:57:49.845680    4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3405170d_fc3f_46a7_a936_c811885cc266.slice/crio-538e0372bd4576c58646e0f78e12e6ece55c326eab55ec8720bce36bff102e05 WatchSource:0}: Error finding container 538e0372bd4576c58646e0f78e12e6ece55c326eab55ec8720bce36bff102e05: Status 404 returned error can't find the container with id 538e0372bd4576c58646e0f78e12e6ece55c326eab55ec8720bce36bff102e05
Jan 29 06:57:50 crc kubenswrapper[4826]: I0129 06:57:50.633113    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-h89x7" event={"ID":"3405170d-fc3f-46a7-a936-c811885cc266","Type":"ContainerStarted","Data":"e770e5371c0f6de7db85bb76932cbbd6e7480a96e6a13b28af9c76ac8a272614"}
Jan 29 06:57:50 crc kubenswrapper[4826]: I0129 06:57:50.633399    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-h89x7" event={"ID":"3405170d-fc3f-46a7-a936-c811885cc266","Type":"ContainerStarted","Data":"bd3f908c86493d030b5f95e9b061e100fa3207becc5a3b372da653ecf5480df4"}
Jan 29 06:57:50 crc kubenswrapper[4826]: I0129 06:57:50.633411    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-h89x7" event={"ID":"3405170d-fc3f-46a7-a936-c811885cc266","Type":"ContainerStarted","Data":"538e0372bd4576c58646e0f78e12e6ece55c326eab55ec8720bce36bff102e05"}
Jan 29 06:57:50 crc kubenswrapper[4826]: I0129 06:57:50.633603    4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-h89x7"
Jan 29 06:57:50 crc kubenswrapper[4826]: I0129 06:57:50.640569    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-s2gzv" event={"ID":"b4f5f505-0bfd-4f06-95bc-1402bcbfd09a","Type":"ContainerStarted","Data":"fa4944ce686b3f7132bcfe458ff372d83750970ffd16a8df5db22dc6c2b1fe2f"}
Jan 29 06:57:50 crc kubenswrapper[4826]: I0129 06:57:50.640769    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-s2gzv" event={"ID":"b4f5f505-0bfd-4f06-95bc-1402bcbfd09a","Type":"ContainerStarted","Data":"9c7cc453078146bc4e8b5793ae0841a2f8a84e6e38e7ac3fe0fe3bc602ffdef0"}
Jan 29 06:57:50 crc kubenswrapper[4826]: I0129 06:57:50.640843    4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-s2gzv"
Jan 29 06:57:50 crc kubenswrapper[4826]: I0129 06:57:50.656038    4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-h89x7" podStartSLOduration=3.656017098 podStartE2EDuration="3.656017098s" podCreationTimestamp="2026-01-29 06:57:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:57:50.651278187 +0000 UTC m=+854.513071256" watchObservedRunningTime="2026-01-29 06:57:50.656017098 +0000 UTC m=+854.517810157"
Jan 29 06:57:50 crc kubenswrapper[4826]: I0129 06:57:50.674249    4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-s2gzv" podStartSLOduration=3.674224556 podStartE2EDuration="3.674224556s" podCreationTimestamp="2026-01-29 06:57:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:57:50.669528635 +0000 UTC m=+854.531321704" watchObservedRunningTime="2026-01-29 06:57:50.674224556 +0000 UTC m=+854.536017625"
Jan 29 06:57:50 crc kubenswrapper[4826]: I0129 06:57:50.947320    4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-t4v7m"]
Jan 29 06:57:50 crc kubenswrapper[4826]: I0129 06:57:50.948307    4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t4v7m"
Jan 29 06:57:50 crc kubenswrapper[4826]: I0129 06:57:50.968098    4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4v7m"]
Jan 29 06:57:51 crc kubenswrapper[4826]: I0129 06:57:51.056531    4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrq7g\" (UniqueName: \"kubernetes.io/projected/6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08-kube-api-access-zrq7g\") pod \"redhat-marketplace-t4v7m\" (UID: \"6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08\") " pod="openshift-marketplace/redhat-marketplace-t4v7m"
Jan 29 06:57:51 crc kubenswrapper[4826]: I0129 06:57:51.056637    4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08-utilities\") pod \"redhat-marketplace-t4v7m\" (UID: \"6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08\") " pod="openshift-marketplace/redhat-marketplace-t4v7m"
Jan 29 06:57:51 crc kubenswrapper[4826]: I0129 06:57:51.056677    4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08-catalog-content\") pod \"redhat-marketplace-t4v7m\" (UID: \"6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08\") " pod="openshift-marketplace/redhat-marketplace-t4v7m"
Jan 29 06:57:51 crc kubenswrapper[4826]: I0129 06:57:51.158014    4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08-utilities\") pod \"redhat-marketplace-t4v7m\" (UID: \"6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08\") " pod="openshift-marketplace/redhat-marketplace-t4v7m"
Jan 29 06:57:51 crc kubenswrapper[4826]: I0129 06:57:51.158090    4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08-catalog-content\") pod \"redhat-marketplace-t4v7m\" (UID: \"6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08\") " pod="openshift-marketplace/redhat-marketplace-t4v7m"
Jan 29 06:57:51 crc kubenswrapper[4826]: I0129 06:57:51.158159    4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrq7g\" (UniqueName: \"kubernetes.io/projected/6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08-kube-api-access-zrq7g\") pod \"redhat-marketplace-t4v7m\" (UID: \"6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08\") " pod="openshift-marketplace/redhat-marketplace-t4v7m"
Jan 29 06:57:51 crc kubenswrapper[4826]: I0129 06:57:51.158817    4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08-utilities\") pod \"redhat-marketplace-t4v7m\" (UID: \"6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08\") " pod="openshift-marketplace/redhat-marketplace-t4v7m"
Jan 29 06:57:51 crc kubenswrapper[4826]: I0129 06:57:51.158892    4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08-catalog-content\") pod \"redhat-marketplace-t4v7m\" (UID: \"6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08\") " pod="openshift-marketplace/redhat-marketplace-t4v7m"
Jan 29 06:57:51 crc kubenswrapper[4826]: I0129 06:57:51.184616    4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrq7g\" (UniqueName: \"kubernetes.io/projected/6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08-kube-api-access-zrq7g\") pod \"redhat-marketplace-t4v7m\" (UID: \"6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08\") " pod="openshift-marketplace/redhat-marketplace-t4v7m"
Jan 29 06:57:51 crc kubenswrapper[4826]: I0129 06:57:51.266525    4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t4v7m"
Jan 29 06:57:51 crc kubenswrapper[4826]: I0129 06:57:51.577824    4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4v7m"]
Jan 29 06:57:51 crc kubenswrapper[4826]: W0129 06:57:51.593664    4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6eb3ba24_b1d0_4f25_b7ab_3aec797d7d08.slice/crio-bb8d8b461f1fe745b38c4714dd4ae6a98dd91ea6b70315ff2314435e60f0d33b WatchSource:0}: Error finding container bb8d8b461f1fe745b38c4714dd4ae6a98dd91ea6b70315ff2314435e60f0d33b: Status 404 returned error can't find the container with id bb8d8b461f1fe745b38c4714dd4ae6a98dd91ea6b70315ff2314435e60f0d33b
Jan 29 06:57:51 crc kubenswrapper[4826]: I0129 06:57:51.681460    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4v7m" event={"ID":"6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08","Type":"ContainerStarted","Data":"bb8d8b461f1fe745b38c4714dd4ae6a98dd91ea6b70315ff2314435e60f0d33b"}
Jan 29 06:57:52 crc kubenswrapper[4826]: I0129 06:57:52.552217    4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-v9hn5"]
Jan 29 06:57:52 crc kubenswrapper[4826]: I0129 06:57:52.553705    4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v9hn5"
Jan 29 06:57:52 crc kubenswrapper[4826]: I0129 06:57:52.578792    4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v9hn5"]
Jan 29 06:57:52 crc kubenswrapper[4826]: I0129 06:57:52.681663    4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h8kh\" (UniqueName: \"kubernetes.io/projected/b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f-kube-api-access-4h8kh\") pod \"certified-operators-v9hn5\" (UID: \"b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f\") " pod="openshift-marketplace/certified-operators-v9hn5"
Jan 29 06:57:52 crc kubenswrapper[4826]: I0129 06:57:52.683244    4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f-catalog-content\") pod \"certified-operators-v9hn5\" (UID: \"b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f\") " pod="openshift-marketplace/certified-operators-v9hn5"
Jan 29 06:57:52 crc kubenswrapper[4826]: I0129 06:57:52.683862    4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f-utilities\") pod \"certified-operators-v9hn5\" (UID: \"b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f\") " pod="openshift-marketplace/certified-operators-v9hn5"
Jan 29 06:57:52 crc kubenswrapper[4826]: I0129 06:57:52.691256    4826 generic.go:334] "Generic (PLEG): container finished" podID="6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08" containerID="dbf29c43147e02c81a13899a23463f46ffb8e4449a22ec8c68bb288f6379e1ab" exitCode=0
Jan 29 06:57:52 crc kubenswrapper[4826]: I0129 06:57:52.691324    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4v7m" event={"ID":"6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08","Type":"ContainerDied","Data":"dbf29c43147e02c81a13899a23463f46ffb8e4449a22ec8c68bb288f6379e1ab"}
Jan 29 06:57:52 crc kubenswrapper[4826]: I0129 06:57:52.785234    4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f-utilities\") pod \"certified-operators-v9hn5\" (UID: \"b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f\") " pod="openshift-marketplace/certified-operators-v9hn5"
Jan 29 06:57:52 crc kubenswrapper[4826]: I0129 06:57:52.785379    4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h8kh\" (UniqueName: \"kubernetes.io/projected/b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f-kube-api-access-4h8kh\") pod \"certified-operators-v9hn5\" (UID: \"b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f\") " pod="openshift-marketplace/certified-operators-v9hn5"
Jan 29 06:57:52 crc kubenswrapper[4826]: I0129 06:57:52.785456    4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f-catalog-content\") pod \"certified-operators-v9hn5\" (UID: \"b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f\") " pod="openshift-marketplace/certified-operators-v9hn5"
Jan 29 06:57:52 crc kubenswrapper[4826]: I0129 06:57:52.785973    4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f-catalog-content\") pod \"certified-operators-v9hn5\" (UID: \"b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f\") " pod="openshift-marketplace/certified-operators-v9hn5"
Jan 29 06:57:52 crc kubenswrapper[4826]: I0129 06:57:52.786014    4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f-utilities\") pod \"certified-operators-v9hn5\" (UID: \"b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f\") " pod="openshift-marketplace/certified-operators-v9hn5"
Jan 29 06:57:52 crc kubenswrapper[4826]: I0129 06:57:52.827751    4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h8kh\" (UniqueName: \"kubernetes.io/projected/b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f-kube-api-access-4h8kh\") pod \"certified-operators-v9hn5\" (UID: \"b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f\") " pod="openshift-marketplace/certified-operators-v9hn5"
Jan 29 06:57:52 crc kubenswrapper[4826]: I0129 06:57:52.868450    4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v9hn5"
Jan 29 06:57:53 crc kubenswrapper[4826]: I0129 06:57:53.345541    4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v9hn5"]
Jan 29 06:57:53 crc kubenswrapper[4826]: I0129 06:57:53.700337    4826 generic.go:334] "Generic (PLEG): container finished" podID="b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f" containerID="0d4140962f2fcd494bd5e88cbd0d60ce1768e778c8cb7f9936c6806b5fedb529" exitCode=0
Jan 29 06:57:53 crc kubenswrapper[4826]: I0129 06:57:53.700456    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9hn5" event={"ID":"b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f","Type":"ContainerDied","Data":"0d4140962f2fcd494bd5e88cbd0d60ce1768e778c8cb7f9936c6806b5fedb529"}
Jan 29 06:57:53 crc kubenswrapper[4826]: I0129 06:57:53.700507    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9hn5" event={"ID":"b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f","Type":"ContainerStarted","Data":"d1cbeada7fa34686f531bd8f7e9cdbfa4a62d926f64c07aa8fbf4be5647256f9"}
Jan 29 06:57:53 crc kubenswrapper[4826]: I0129 06:57:53.702791    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4v7m" event={"ID":"6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08","Type":"ContainerStarted","Data":"3db27b5f0eb58f7fc4e31acc8c0bcdd9b394b5db918e2281dbf79841fba54eae"}
Jan 29 06:57:54 crc kubenswrapper[4826]: I0129 06:57:54.727950    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9hn5" event={"ID":"b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f","Type":"ContainerStarted","Data":"b93c6eac071b76dea6f04cebe836fb5262a3acff8420690756ccfd02d45a7f90"}
Jan 29 06:57:54 crc kubenswrapper[4826]: I0129 06:57:54.733582    4826 generic.go:334] "Generic (PLEG): container finished" podID="6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08" containerID="3db27b5f0eb58f7fc4e31acc8c0bcdd9b394b5db918e2281dbf79841fba54eae" exitCode=0
Jan 29 06:57:54 crc kubenswrapper[4826]: I0129 06:57:54.734567    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4v7m" event={"ID":"6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08","Type":"ContainerDied","Data":"3db27b5f0eb58f7fc4e31acc8c0bcdd9b394b5db918e2281dbf79841fba54eae"}
Jan 29 06:57:55 crc kubenswrapper[4826]: I0129 06:57:55.740526    4826 generic.go:334] "Generic (PLEG): container finished" podID="b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f" containerID="b93c6eac071b76dea6f04cebe836fb5262a3acff8420690756ccfd02d45a7f90" exitCode=0
Jan 29 06:57:55 crc kubenswrapper[4826]: I0129 06:57:55.740579    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9hn5" event={"ID":"b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f","Type":"ContainerDied","Data":"b93c6eac071b76dea6f04cebe836fb5262a3acff8420690756ccfd02d45a7f90"}
Jan 29 06:57:58 crc kubenswrapper[4826]: I0129 06:57:58.766276    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9hn5" event={"ID":"b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f","Type":"ContainerStarted","Data":"647fb5baa57e7e934c9a27d0ce7f580fff1d92251bb4404a821f4ee60bae2183"}
Jan 29 06:57:58 crc kubenswrapper[4826]: I0129 06:57:58.768619    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-m8t69" event={"ID":"2eda5181-f4b2-4a86-bf3c-8ba839c80d00","Type":"ContainerStarted","Data":"aca942e24496f4e1aaaa0b8220e719e12ba4d228c86dd9320214af8e75997961"}
Jan 29 06:57:58 crc kubenswrapper[4826]: I0129 06:57:58.768755    4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-m8t69"
Jan 29 06:57:58 crc kubenswrapper[4826]: I0129 06:57:58.771044    4826 generic.go:334] "Generic (PLEG): container finished" podID="80682433-6c24-45a9-a54b-8db2233f2870" containerID="54df6b7f78eff56805df4d9d2e9a6aaa239f193654e538d0c6ca3f34673bce2f" exitCode=0
Jan 29 06:57:58 crc kubenswrapper[4826]: I0129 06:57:58.771145    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zsbwk" event={"ID":"80682433-6c24-45a9-a54b-8db2233f2870","Type":"ContainerDied","Data":"54df6b7f78eff56805df4d9d2e9a6aaa239f193654e538d0c6ca3f34673bce2f"}
Jan 29 06:57:58 crc kubenswrapper[4826]: I0129 06:57:58.776944    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4v7m" event={"ID":"6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08","Type":"ContainerStarted","Data":"2cd15d1015b36baef4602a822de1d93ba868cd5244b85e9eb5a8949542102719"}
Jan 29 06:57:58 crc kubenswrapper[4826]: I0129 06:57:58.804431    4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-v9hn5" podStartSLOduration=2.463699483 podStartE2EDuration="6.804400498s" podCreationTimestamp="2026-01-29 06:57:52 +0000 UTC" firstStartedPulling="2026-01-29 06:57:53.70357596 +0000 UTC m=+857.565369029" lastFinishedPulling="2026-01-29 06:57:58.044276975 +0000 UTC m=+861.906070044" observedRunningTime="2026-01-29 06:57:58.792331238 +0000 UTC m=+862.654124317" watchObservedRunningTime="2026-01-29 06:57:58.804400498 +0000 UTC m=+862.666193567"
Jan 29 06:57:58 crc kubenswrapper[4826]: I0129 06:57:58.860059    4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-m8t69" podStartSLOduration=2.361060369 podStartE2EDuration="11.860028217s" podCreationTimestamp="2026-01-29 06:57:47 +0000 UTC" firstStartedPulling="2026-01-29 06:57:48.467405227 +0000 UTC m=+852.329198296" lastFinishedPulling="2026-01-29 06:57:57.966373075 +0000 UTC m=+861.828166144" observedRunningTime="2026-01-29 06:57:58.847411133 +0000 UTC m=+862.709204212" watchObservedRunningTime="2026-01-29 06:57:58.860028217 +0000 UTC m=+862.721821296"
Jan 29 06:57:58 crc kubenswrapper[4826]: I0129 06:57:58.881605    4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t4v7m" podStartSLOduration=3.542445461 podStartE2EDuration="8.88157203s" podCreationTimestamp="2026-01-29 06:57:50 +0000 UTC" firstStartedPulling="2026-01-29 06:57:52.69399313 +0000 UTC m=+856.555786199" lastFinishedPulling="2026-01-29 06:57:58.033119699 +0000 UTC m=+861.894912768" observedRunningTime="2026-01-29 06:57:58.871702027 +0000 UTC m=+862.733495106" watchObservedRunningTime="2026-01-29 06:57:58.88157203 +0000 UTC m=+862.743365119"
Jan 29 06:57:59 crc kubenswrapper[4826]: I0129 06:57:59.785193    4826 generic.go:334] "Generic (PLEG): container finished" podID="80682433-6c24-45a9-a54b-8db2233f2870" containerID="85a9bf7af08df0741fa5c396b73f6b3dd3dd3ac1c0c5c58c9d5c204981bdd0fa" exitCode=0
Jan 29 06:57:59 crc kubenswrapper[4826]: I0129 06:57:59.785314    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zsbwk" event={"ID":"80682433-6c24-45a9-a54b-8db2233f2870","Type":"ContainerDied","Data":"85a9bf7af08df0741fa5c396b73f6b3dd3dd3ac1c0c5c58c9d5c204981bdd0fa"}
Jan 29 06:58:00 crc kubenswrapper[4826]: I0129 06:58:00.797910    4826 generic.go:334] "Generic (PLEG): container finished" podID="80682433-6c24-45a9-a54b-8db2233f2870" containerID="ebcf19dc987362c637e0c9ba613379e961575dea9103a4cc9173dd05898b9ab0" exitCode=0
Jan 29 06:58:00 crc kubenswrapper[4826]: I0129 06:58:00.798042    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zsbwk" event={"ID":"80682433-6c24-45a9-a54b-8db2233f2870","Type":"ContainerDied","Data":"ebcf19dc987362c637e0c9ba613379e961575dea9103a4cc9173dd05898b9ab0"}
Jan 29 06:58:01 crc kubenswrapper[4826]: I0129 06:58:01.266707    4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-t4v7m"
Jan 29 06:58:01 crc kubenswrapper[4826]: I0129 06:58:01.267207    4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t4v7m"
Jan 29 06:58:01 crc kubenswrapper[4826]: I0129 06:58:01.318129    4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-t4v7m"
Jan 29 06:58:01 crc kubenswrapper[4826]: I0129 06:58:01.841942    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zsbwk" event={"ID":"80682433-6c24-45a9-a54b-8db2233f2870","Type":"ContainerStarted","Data":"eadabfe55b24bf875a9f605ab2d761dd13810112db0b9fab0fac331351922e98"}
Jan 29 06:58:01 crc kubenswrapper[4826]: I0129 06:58:01.841984    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zsbwk" event={"ID":"80682433-6c24-45a9-a54b-8db2233f2870","Type":"ContainerStarted","Data":"9d9adec94d29223da8c74b71a1ee4861038ec14a3d4ab5053bef6e8faf47a239"}
Jan 29 06:58:01 crc kubenswrapper[4826]: I0129 06:58:01.841995    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zsbwk" event={"ID":"80682433-6c24-45a9-a54b-8db2233f2870","Type":"ContainerStarted","Data":"f036fa9f01739c15d3a9e4711cd162f47fe16fec26d3bb8d535e7dc289fde7ba"}
Jan 29 06:58:01 crc kubenswrapper[4826]: I0129 06:58:01.842007    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zsbwk" event={"ID":"80682433-6c24-45a9-a54b-8db2233f2870","Type":"ContainerStarted","Data":"99b5d4b008b2c5d4b9b093f6018ba4ad4cc4ee2b9080e957bc366024ee4e8eef"}
Jan 29 06:58:01 crc kubenswrapper[4826]: I0129 06:58:01.842017    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zsbwk" event={"ID":"80682433-6c24-45a9-a54b-8db2233f2870","Type":"ContainerStarted","Data":"ea0dda4764bc3f96753b3c5fbd477897a6dc195187764891e782de7be85cebdb"}
Jan 29 06:58:02 crc kubenswrapper[4826]: I0129 06:58:02.857050    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zsbwk" event={"ID":"80682433-6c24-45a9-a54b-8db2233f2870","Type":"ContainerStarted","Data":"9fa1dfebe3085947d1f8f628d14e75172c952c9b6cd0a584c6e6748ba7ba67a4"}
Jan 29 06:58:02 crc kubenswrapper[4826]: I0129 06:58:02.857510    4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-zsbwk"
Jan 29 06:58:02 crc kubenswrapper[4826]: I0129 06:58:02.868633    4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-v9hn5"
Jan 29 06:58:02 crc kubenswrapper[4826]: I0129 06:58:02.868687    4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-v9hn5"
Jan 29 06:58:02 crc kubenswrapper[4826]: I0129 06:58:02.905410    4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-zsbwk" podStartSLOduration=6.9703826620000005 podStartE2EDuration="15.905382926s" podCreationTimestamp="2026-01-29 06:57:47 +0000 UTC" firstStartedPulling="2026-01-29 06:57:49.04885556 +0000 UTC m=+852.910648659" lastFinishedPulling="2026-01-29 06:57:57.983855844 +0000 UTC m=+861.845648923" observedRunningTime="2026-01-29 06:58:02.904049292 +0000 UTC m=+866.765842391" watchObservedRunningTime="2026-01-29 06:58:02.905382926 +0000 UTC m=+866.767176015"
Jan 29 06:58:02 crc kubenswrapper[4826]: I0129 06:58:02.932748    4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-v9hn5"
Jan 29 06:58:03 crc kubenswrapper[4826]: I0129 06:58:03.826628    4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-zsbwk"
Jan 29 06:58:03 crc kubenswrapper[4826]: I0129 06:58:03.890627    4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-zsbwk"
Jan 29 06:58:03 crc kubenswrapper[4826]: I0129 06:58:03.939227    4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v9hn5"
Jan 29 06:58:05 crc kubenswrapper[4826]: I0129 06:58:05.136662    4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v9hn5"]
Jan 29 06:58:05 crc kubenswrapper[4826]: I0129 06:58:05.880937    4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-v9hn5" podUID="b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f" containerName="registry-server" containerID="cri-o://647fb5baa57e7e934c9a27d0ce7f580fff1d92251bb4404a821f4ee60bae2183" gracePeriod=2
Jan 29 06:58:06 crc kubenswrapper[4826]: I0129 06:58:06.320765    4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v9hn5"
Jan 29 06:58:06 crc kubenswrapper[4826]: I0129 06:58:06.517082    4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f-catalog-content\") pod \"b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f\" (UID: \"b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f\") "
Jan 29 06:58:06 crc kubenswrapper[4826]: I0129 06:58:06.517159    4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4h8kh\" (UniqueName: \"kubernetes.io/projected/b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f-kube-api-access-4h8kh\") pod \"b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f\" (UID: \"b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f\") "
Jan 29 06:58:06 crc kubenswrapper[4826]: I0129 06:58:06.517254    4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f-utilities\") pod \"b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f\" (UID: \"b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f\") "
Jan 29 06:58:06 crc kubenswrapper[4826]: I0129 06:58:06.518636    4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f-utilities" (OuterVolumeSpecName: "utilities") pod "b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f" (UID: "b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 06:58:06 crc kubenswrapper[4826]: I0129 06:58:06.526249    4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f-kube-api-access-4h8kh" (OuterVolumeSpecName: "kube-api-access-4h8kh") pod "b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f" (UID: "b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f"). InnerVolumeSpecName "kube-api-access-4h8kh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 06:58:06 crc kubenswrapper[4826]: I0129 06:58:06.593518    4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f" (UID: "b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 06:58:06 crc kubenswrapper[4826]: I0129 06:58:06.619446    4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 06:58:06 crc kubenswrapper[4826]: I0129 06:58:06.619531    4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4h8kh\" (UniqueName: \"kubernetes.io/projected/b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f-kube-api-access-4h8kh\") on node \"crc\" DevicePath \"\""
Jan 29 06:58:06 crc kubenswrapper[4826]: I0129 06:58:06.619553    4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 06:58:06 crc kubenswrapper[4826]: I0129 06:58:06.888919    4826 generic.go:334] "Generic (PLEG): container finished" podID="b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f" containerID="647fb5baa57e7e934c9a27d0ce7f580fff1d92251bb4404a821f4ee60bae2183" exitCode=0
Jan 29 06:58:06 crc kubenswrapper[4826]: I0129 06:58:06.888973    4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v9hn5"
Jan 29 06:58:06 crc kubenswrapper[4826]: I0129 06:58:06.888978    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9hn5" event={"ID":"b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f","Type":"ContainerDied","Data":"647fb5baa57e7e934c9a27d0ce7f580fff1d92251bb4404a821f4ee60bae2183"}
Jan 29 06:58:06 crc kubenswrapper[4826]: I0129 06:58:06.889326    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9hn5" event={"ID":"b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f","Type":"ContainerDied","Data":"d1cbeada7fa34686f531bd8f7e9cdbfa4a62d926f64c07aa8fbf4be5647256f9"}
Jan 29 06:58:06 crc kubenswrapper[4826]: I0129 06:58:06.889354    4826 scope.go:117] "RemoveContainer" containerID="647fb5baa57e7e934c9a27d0ce7f580fff1d92251bb4404a821f4ee60bae2183"
Jan 29 06:58:06 crc kubenswrapper[4826]: I0129 06:58:06.908756    4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v9hn5"]
Jan 29 06:58:06 crc kubenswrapper[4826]: I0129 06:58:06.914691    4826 scope.go:117] "RemoveContainer" containerID="b93c6eac071b76dea6f04cebe836fb5262a3acff8420690756ccfd02d45a7f90"
Jan 29 06:58:06 crc kubenswrapper[4826]: I0129 06:58:06.917947    4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-v9hn5"]
Jan 29 06:58:06 crc kubenswrapper[4826]: I0129 06:58:06.936147    4826 scope.go:117] "RemoveContainer" containerID="0d4140962f2fcd494bd5e88cbd0d60ce1768e778c8cb7f9936c6806b5fedb529"
Jan 29 06:58:06 crc kubenswrapper[4826]: I0129 06:58:06.955861    4826 scope.go:117] "RemoveContainer" containerID="647fb5baa57e7e934c9a27d0ce7f580fff1d92251bb4404a821f4ee60bae2183"
Jan 29 06:58:06 crc kubenswrapper[4826]: E0129 06:58:06.956426    4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"647fb5baa57e7e934c9a27d0ce7f580fff1d92251bb4404a821f4ee60bae2183\": container with ID starting with 647fb5baa57e7e934c9a27d0ce7f580fff1d92251bb4404a821f4ee60bae2183 not found: ID does not exist" containerID="647fb5baa57e7e934c9a27d0ce7f580fff1d92251bb4404a821f4ee60bae2183"
Jan 29 06:58:06 crc kubenswrapper[4826]: I0129 06:58:06.956518    4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"647fb5baa57e7e934c9a27d0ce7f580fff1d92251bb4404a821f4ee60bae2183"} err="failed to get container status \"647fb5baa57e7e934c9a27d0ce7f580fff1d92251bb4404a821f4ee60bae2183\": rpc error: code = NotFound desc = could not find container \"647fb5baa57e7e934c9a27d0ce7f580fff1d92251bb4404a821f4ee60bae2183\": container with ID starting with 647fb5baa57e7e934c9a27d0ce7f580fff1d92251bb4404a821f4ee60bae2183 not found: ID does not exist"
Jan 29 06:58:06 crc kubenswrapper[4826]: I0129 06:58:06.956555    4826 scope.go:117] "RemoveContainer" containerID="b93c6eac071b76dea6f04cebe836fb5262a3acff8420690756ccfd02d45a7f90"
Jan 29 06:58:06 crc kubenswrapper[4826]: E0129 06:58:06.956957    4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b93c6eac071b76dea6f04cebe836fb5262a3acff8420690756ccfd02d45a7f90\": container with ID starting with b93c6eac071b76dea6f04cebe836fb5262a3acff8420690756ccfd02d45a7f90 not found: ID does not exist" containerID="b93c6eac071b76dea6f04cebe836fb5262a3acff8420690756ccfd02d45a7f90"
Jan 29 06:58:06 crc kubenswrapper[4826]: I0129 06:58:06.957014    4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b93c6eac071b76dea6f04cebe836fb5262a3acff8420690756ccfd02d45a7f90"} err="failed to get container status \"b93c6eac071b76dea6f04cebe836fb5262a3acff8420690756ccfd02d45a7f90\": rpc error: code = NotFound desc = could not find container \"b93c6eac071b76dea6f04cebe836fb5262a3acff8420690756ccfd02d45a7f90\": container with ID starting with b93c6eac071b76dea6f04cebe836fb5262a3acff8420690756ccfd02d45a7f90 not found: ID does not exist"
Jan 29 06:58:06 crc kubenswrapper[4826]: I0129 06:58:06.957055    4826 scope.go:117] "RemoveContainer" containerID="0d4140962f2fcd494bd5e88cbd0d60ce1768e778c8cb7f9936c6806b5fedb529"
Jan 29 06:58:06 crc kubenswrapper[4826]: E0129 06:58:06.957429    4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d4140962f2fcd494bd5e88cbd0d60ce1768e778c8cb7f9936c6806b5fedb529\": container with ID starting with 0d4140962f2fcd494bd5e88cbd0d60ce1768e778c8cb7f9936c6806b5fedb529 not found: ID does not exist" containerID="0d4140962f2fcd494bd5e88cbd0d60ce1768e778c8cb7f9936c6806b5fedb529"
Jan 29 06:58:06 crc kubenswrapper[4826]: I0129 06:58:06.957482    4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d4140962f2fcd494bd5e88cbd0d60ce1768e778c8cb7f9936c6806b5fedb529"} err="failed to get container status \"0d4140962f2fcd494bd5e88cbd0d60ce1768e778c8cb7f9936c6806b5fedb529\": rpc error: code = NotFound desc = could not find container \"0d4140962f2fcd494bd5e88cbd0d60ce1768e778c8cb7f9936c6806b5fedb529\": container with ID starting with 0d4140962f2fcd494bd5e88cbd0d60ce1768e778c8cb7f9936c6806b5fedb529 not found: ID does not exist"
Jan 29 06:58:08 crc kubenswrapper[4826]: I0129 06:58:08.244109    4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-m8t69"
Jan 29 06:58:08 crc kubenswrapper[4826]: I0129 06:58:08.823052    4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f" path="/var/lib/kubelet/pods/b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f/volumes"
Jan 29 06:58:08 crc kubenswrapper[4826]: I0129 06:58:08.949356    4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-s2gzv"
Jan 29 06:58:09 crc
kubenswrapper[4826]: I0129 06:58:09.824581 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-h89x7" Jan 29 06:58:11 crc kubenswrapper[4826]: I0129 06:58:11.341129 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-t4v7m" Jan 29 06:58:11 crc kubenswrapper[4826]: I0129 06:58:11.636881 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5d7mrn"] Jan 29 06:58:11 crc kubenswrapper[4826]: E0129 06:58:11.637192 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f" containerName="registry-server" Jan 29 06:58:11 crc kubenswrapper[4826]: I0129 06:58:11.637213 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f" containerName="registry-server" Jan 29 06:58:11 crc kubenswrapper[4826]: E0129 06:58:11.637229 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f" containerName="extract-utilities" Jan 29 06:58:11 crc kubenswrapper[4826]: I0129 06:58:11.637237 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f" containerName="extract-utilities" Jan 29 06:58:11 crc kubenswrapper[4826]: E0129 06:58:11.637255 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f" containerName="extract-content" Jan 29 06:58:11 crc kubenswrapper[4826]: I0129 06:58:11.637262 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f" containerName="extract-content" Jan 29 06:58:11 crc kubenswrapper[4826]: I0129 06:58:11.637422 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1ab7c8c-01c3-4d7f-abf8-2e5857a5762f" containerName="registry-server" Jan 29 06:58:11 crc kubenswrapper[4826]: I0129 
06:58:11.638440 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5d7mrn" Jan 29 06:58:11 crc kubenswrapper[4826]: I0129 06:58:11.651756 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5d7mrn"] Jan 29 06:58:11 crc kubenswrapper[4826]: I0129 06:58:11.654822 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 06:58:11 crc kubenswrapper[4826]: I0129 06:58:11.691007 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmx96\" (UniqueName: \"kubernetes.io/projected/75ef4038-043e-4847-b71b-818782b647ab-kube-api-access-mmx96\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5d7mrn\" (UID: \"75ef4038-043e-4847-b71b-818782b647ab\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5d7mrn" Jan 29 06:58:11 crc kubenswrapper[4826]: I0129 06:58:11.691052 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/75ef4038-043e-4847-b71b-818782b647ab-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5d7mrn\" (UID: \"75ef4038-043e-4847-b71b-818782b647ab\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5d7mrn" Jan 29 06:58:11 crc kubenswrapper[4826]: I0129 06:58:11.691178 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/75ef4038-043e-4847-b71b-818782b647ab-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5d7mrn\" (UID: \"75ef4038-043e-4847-b71b-818782b647ab\") " 
pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5d7mrn" Jan 29 06:58:11 crc kubenswrapper[4826]: I0129 06:58:11.792540 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmx96\" (UniqueName: \"kubernetes.io/projected/75ef4038-043e-4847-b71b-818782b647ab-kube-api-access-mmx96\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5d7mrn\" (UID: \"75ef4038-043e-4847-b71b-818782b647ab\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5d7mrn" Jan 29 06:58:11 crc kubenswrapper[4826]: I0129 06:58:11.792591 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/75ef4038-043e-4847-b71b-818782b647ab-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5d7mrn\" (UID: \"75ef4038-043e-4847-b71b-818782b647ab\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5d7mrn" Jan 29 06:58:11 crc kubenswrapper[4826]: I0129 06:58:11.792632 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/75ef4038-043e-4847-b71b-818782b647ab-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5d7mrn\" (UID: \"75ef4038-043e-4847-b71b-818782b647ab\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5d7mrn" Jan 29 06:58:11 crc kubenswrapper[4826]: I0129 06:58:11.793095 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/75ef4038-043e-4847-b71b-818782b647ab-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5d7mrn\" (UID: \"75ef4038-043e-4847-b71b-818782b647ab\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5d7mrn" Jan 29 06:58:11 crc kubenswrapper[4826]: I0129 06:58:11.793186 4826 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/75ef4038-043e-4847-b71b-818782b647ab-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5d7mrn\" (UID: \"75ef4038-043e-4847-b71b-818782b647ab\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5d7mrn" Jan 29 06:58:11 crc kubenswrapper[4826]: I0129 06:58:11.818116 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmx96\" (UniqueName: \"kubernetes.io/projected/75ef4038-043e-4847-b71b-818782b647ab-kube-api-access-mmx96\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5d7mrn\" (UID: \"75ef4038-043e-4847-b71b-818782b647ab\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5d7mrn" Jan 29 06:58:11 crc kubenswrapper[4826]: I0129 06:58:11.954863 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5d7mrn" Jan 29 06:58:12 crc kubenswrapper[4826]: I0129 06:58:12.375339 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5d7mrn"] Jan 29 06:58:12 crc kubenswrapper[4826]: W0129 06:58:12.379739 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75ef4038_043e_4847_b71b_818782b647ab.slice/crio-ed2f004ba871fa401de3dde6c01d33981372ccde34ec2365c6ab93f63bfd95ed WatchSource:0}: Error finding container ed2f004ba871fa401de3dde6c01d33981372ccde34ec2365c6ab93f63bfd95ed: Status 404 returned error can't find the container with id ed2f004ba871fa401de3dde6c01d33981372ccde34ec2365c6ab93f63bfd95ed Jan 29 06:58:12 crc kubenswrapper[4826]: I0129 06:58:12.932101 4826 generic.go:334] "Generic (PLEG): container finished" 
podID="75ef4038-043e-4847-b71b-818782b647ab" containerID="b40c3b7da2d859dfa41972805c4d52c6550cf751da0c83f0fd22adb944db83df" exitCode=0 Jan 29 06:58:12 crc kubenswrapper[4826]: I0129 06:58:12.932149 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5d7mrn" event={"ID":"75ef4038-043e-4847-b71b-818782b647ab","Type":"ContainerDied","Data":"b40c3b7da2d859dfa41972805c4d52c6550cf751da0c83f0fd22adb944db83df"} Jan 29 06:58:12 crc kubenswrapper[4826]: I0129 06:58:12.932390 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5d7mrn" event={"ID":"75ef4038-043e-4847-b71b-818782b647ab","Type":"ContainerStarted","Data":"ed2f004ba871fa401de3dde6c01d33981372ccde34ec2365c6ab93f63bfd95ed"} Jan 29 06:58:15 crc kubenswrapper[4826]: I0129 06:58:15.937952 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4v7m"] Jan 29 06:58:15 crc kubenswrapper[4826]: I0129 06:58:15.938438 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-t4v7m" podUID="6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08" containerName="registry-server" containerID="cri-o://2cd15d1015b36baef4602a822de1d93ba868cd5244b85e9eb5a8949542102719" gracePeriod=2 Jan 29 06:58:16 crc kubenswrapper[4826]: I0129 06:58:16.448873 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t4v7m" Jan 29 06:58:16 crc kubenswrapper[4826]: I0129 06:58:16.590342 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrq7g\" (UniqueName: \"kubernetes.io/projected/6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08-kube-api-access-zrq7g\") pod \"6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08\" (UID: \"6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08\") " Jan 29 06:58:16 crc kubenswrapper[4826]: I0129 06:58:16.590557 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08-catalog-content\") pod \"6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08\" (UID: \"6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08\") " Jan 29 06:58:16 crc kubenswrapper[4826]: I0129 06:58:16.590625 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08-utilities\") pod \"6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08\" (UID: \"6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08\") " Jan 29 06:58:16 crc kubenswrapper[4826]: I0129 06:58:16.592079 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08-utilities" (OuterVolumeSpecName: "utilities") pod "6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08" (UID: "6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:58:16 crc kubenswrapper[4826]: I0129 06:58:16.597811 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08-kube-api-access-zrq7g" (OuterVolumeSpecName: "kube-api-access-zrq7g") pod "6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08" (UID: "6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08"). InnerVolumeSpecName "kube-api-access-zrq7g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:58:16 crc kubenswrapper[4826]: I0129 06:58:16.620411 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08" (UID: "6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:58:16 crc kubenswrapper[4826]: I0129 06:58:16.692711 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrq7g\" (UniqueName: \"kubernetes.io/projected/6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08-kube-api-access-zrq7g\") on node \"crc\" DevicePath \"\"" Jan 29 06:58:16 crc kubenswrapper[4826]: I0129 06:58:16.692747 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 06:58:16 crc kubenswrapper[4826]: I0129 06:58:16.692766 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 06:58:16 crc kubenswrapper[4826]: I0129 06:58:16.967107 4826 generic.go:334] "Generic (PLEG): container finished" podID="6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08" containerID="2cd15d1015b36baef4602a822de1d93ba868cd5244b85e9eb5a8949542102719" exitCode=0 Jan 29 06:58:16 crc kubenswrapper[4826]: I0129 06:58:16.967175 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4v7m" event={"ID":"6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08","Type":"ContainerDied","Data":"2cd15d1015b36baef4602a822de1d93ba868cd5244b85e9eb5a8949542102719"} Jan 29 06:58:16 crc kubenswrapper[4826]: I0129 06:58:16.967273 4826 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-t4v7m" event={"ID":"6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08","Type":"ContainerDied","Data":"bb8d8b461f1fe745b38c4714dd4ae6a98dd91ea6b70315ff2314435e60f0d33b"} Jan 29 06:58:16 crc kubenswrapper[4826]: I0129 06:58:16.967347 4826 scope.go:117] "RemoveContainer" containerID="2cd15d1015b36baef4602a822de1d93ba868cd5244b85e9eb5a8949542102719" Jan 29 06:58:16 crc kubenswrapper[4826]: I0129 06:58:16.968847 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t4v7m" Jan 29 06:58:16 crc kubenswrapper[4826]: I0129 06:58:16.969894 4826 generic.go:334] "Generic (PLEG): container finished" podID="75ef4038-043e-4847-b71b-818782b647ab" containerID="8c8168c25663acb94103cd6ccafeb44cb7d3c738315265fa1ab8b5595f452e82" exitCode=0 Jan 29 06:58:16 crc kubenswrapper[4826]: I0129 06:58:16.969921 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5d7mrn" event={"ID":"75ef4038-043e-4847-b71b-818782b647ab","Type":"ContainerDied","Data":"8c8168c25663acb94103cd6ccafeb44cb7d3c738315265fa1ab8b5595f452e82"} Jan 29 06:58:16 crc kubenswrapper[4826]: I0129 06:58:16.996968 4826 scope.go:117] "RemoveContainer" containerID="3db27b5f0eb58f7fc4e31acc8c0bcdd9b394b5db918e2281dbf79841fba54eae" Jan 29 06:58:17 crc kubenswrapper[4826]: I0129 06:58:17.022506 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4v7m"] Jan 29 06:58:17 crc kubenswrapper[4826]: I0129 06:58:17.031580 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4v7m"] Jan 29 06:58:17 crc kubenswrapper[4826]: I0129 06:58:17.037939 4826 scope.go:117] "RemoveContainer" containerID="dbf29c43147e02c81a13899a23463f46ffb8e4449a22ec8c68bb288f6379e1ab" Jan 29 06:58:17 crc kubenswrapper[4826]: I0129 06:58:17.076887 4826 scope.go:117] 
"RemoveContainer" containerID="2cd15d1015b36baef4602a822de1d93ba868cd5244b85e9eb5a8949542102719" Jan 29 06:58:17 crc kubenswrapper[4826]: E0129 06:58:17.077672 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cd15d1015b36baef4602a822de1d93ba868cd5244b85e9eb5a8949542102719\": container with ID starting with 2cd15d1015b36baef4602a822de1d93ba868cd5244b85e9eb5a8949542102719 not found: ID does not exist" containerID="2cd15d1015b36baef4602a822de1d93ba868cd5244b85e9eb5a8949542102719" Jan 29 06:58:17 crc kubenswrapper[4826]: I0129 06:58:17.077756 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cd15d1015b36baef4602a822de1d93ba868cd5244b85e9eb5a8949542102719"} err="failed to get container status \"2cd15d1015b36baef4602a822de1d93ba868cd5244b85e9eb5a8949542102719\": rpc error: code = NotFound desc = could not find container \"2cd15d1015b36baef4602a822de1d93ba868cd5244b85e9eb5a8949542102719\": container with ID starting with 2cd15d1015b36baef4602a822de1d93ba868cd5244b85e9eb5a8949542102719 not found: ID does not exist" Jan 29 06:58:17 crc kubenswrapper[4826]: I0129 06:58:17.077818 4826 scope.go:117] "RemoveContainer" containerID="3db27b5f0eb58f7fc4e31acc8c0bcdd9b394b5db918e2281dbf79841fba54eae" Jan 29 06:58:17 crc kubenswrapper[4826]: E0129 06:58:17.078739 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3db27b5f0eb58f7fc4e31acc8c0bcdd9b394b5db918e2281dbf79841fba54eae\": container with ID starting with 3db27b5f0eb58f7fc4e31acc8c0bcdd9b394b5db918e2281dbf79841fba54eae not found: ID does not exist" containerID="3db27b5f0eb58f7fc4e31acc8c0bcdd9b394b5db918e2281dbf79841fba54eae" Jan 29 06:58:17 crc kubenswrapper[4826]: I0129 06:58:17.078873 4826 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3db27b5f0eb58f7fc4e31acc8c0bcdd9b394b5db918e2281dbf79841fba54eae"} err="failed to get container status \"3db27b5f0eb58f7fc4e31acc8c0bcdd9b394b5db918e2281dbf79841fba54eae\": rpc error: code = NotFound desc = could not find container \"3db27b5f0eb58f7fc4e31acc8c0bcdd9b394b5db918e2281dbf79841fba54eae\": container with ID starting with 3db27b5f0eb58f7fc4e31acc8c0bcdd9b394b5db918e2281dbf79841fba54eae not found: ID does not exist" Jan 29 06:58:17 crc kubenswrapper[4826]: I0129 06:58:17.078917 4826 scope.go:117] "RemoveContainer" containerID="dbf29c43147e02c81a13899a23463f46ffb8e4449a22ec8c68bb288f6379e1ab" Jan 29 06:58:17 crc kubenswrapper[4826]: E0129 06:58:17.079401 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbf29c43147e02c81a13899a23463f46ffb8e4449a22ec8c68bb288f6379e1ab\": container with ID starting with dbf29c43147e02c81a13899a23463f46ffb8e4449a22ec8c68bb288f6379e1ab not found: ID does not exist" containerID="dbf29c43147e02c81a13899a23463f46ffb8e4449a22ec8c68bb288f6379e1ab" Jan 29 06:58:17 crc kubenswrapper[4826]: I0129 06:58:17.079597 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbf29c43147e02c81a13899a23463f46ffb8e4449a22ec8c68bb288f6379e1ab"} err="failed to get container status \"dbf29c43147e02c81a13899a23463f46ffb8e4449a22ec8c68bb288f6379e1ab\": rpc error: code = NotFound desc = could not find container \"dbf29c43147e02c81a13899a23463f46ffb8e4449a22ec8c68bb288f6379e1ab\": container with ID starting with dbf29c43147e02c81a13899a23463f46ffb8e4449a22ec8c68bb288f6379e1ab not found: ID does not exist" Jan 29 06:58:17 crc kubenswrapper[4826]: I0129 06:58:17.985707 4826 generic.go:334] "Generic (PLEG): container finished" podID="75ef4038-043e-4847-b71b-818782b647ab" containerID="983d4e3d1845709b5cf848f42e98f2d3f5f3721fbdbafef8e8fe442257132cf1" exitCode=0 Jan 29 06:58:17 crc kubenswrapper[4826]: 
I0129 06:58:17.985791 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5d7mrn" event={"ID":"75ef4038-043e-4847-b71b-818782b647ab","Type":"ContainerDied","Data":"983d4e3d1845709b5cf848f42e98f2d3f5f3721fbdbafef8e8fe442257132cf1"} Jan 29 06:58:18 crc kubenswrapper[4826]: I0129 06:58:18.821165 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08" path="/var/lib/kubelet/pods/6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08/volumes" Jan 29 06:58:18 crc kubenswrapper[4826]: I0129 06:58:18.833187 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-zsbwk" Jan 29 06:58:19 crc kubenswrapper[4826]: I0129 06:58:19.296183 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5d7mrn" Jan 29 06:58:19 crc kubenswrapper[4826]: I0129 06:58:19.436551 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmx96\" (UniqueName: \"kubernetes.io/projected/75ef4038-043e-4847-b71b-818782b647ab-kube-api-access-mmx96\") pod \"75ef4038-043e-4847-b71b-818782b647ab\" (UID: \"75ef4038-043e-4847-b71b-818782b647ab\") " Jan 29 06:58:19 crc kubenswrapper[4826]: I0129 06:58:19.436645 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/75ef4038-043e-4847-b71b-818782b647ab-util\") pod \"75ef4038-043e-4847-b71b-818782b647ab\" (UID: \"75ef4038-043e-4847-b71b-818782b647ab\") " Jan 29 06:58:19 crc kubenswrapper[4826]: I0129 06:58:19.436804 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/75ef4038-043e-4847-b71b-818782b647ab-bundle\") pod \"75ef4038-043e-4847-b71b-818782b647ab\" (UID: 
\"75ef4038-043e-4847-b71b-818782b647ab\") " Jan 29 06:58:19 crc kubenswrapper[4826]: I0129 06:58:19.439160 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75ef4038-043e-4847-b71b-818782b647ab-bundle" (OuterVolumeSpecName: "bundle") pod "75ef4038-043e-4847-b71b-818782b647ab" (UID: "75ef4038-043e-4847-b71b-818782b647ab"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:58:19 crc kubenswrapper[4826]: I0129 06:58:19.449829 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75ef4038-043e-4847-b71b-818782b647ab-kube-api-access-mmx96" (OuterVolumeSpecName: "kube-api-access-mmx96") pod "75ef4038-043e-4847-b71b-818782b647ab" (UID: "75ef4038-043e-4847-b71b-818782b647ab"). InnerVolumeSpecName "kube-api-access-mmx96". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:58:19 crc kubenswrapper[4826]: I0129 06:58:19.456505 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75ef4038-043e-4847-b71b-818782b647ab-util" (OuterVolumeSpecName: "util") pod "75ef4038-043e-4847-b71b-818782b647ab" (UID: "75ef4038-043e-4847-b71b-818782b647ab"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:58:19 crc kubenswrapper[4826]: I0129 06:58:19.538373 4826 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/75ef4038-043e-4847-b71b-818782b647ab-util\") on node \"crc\" DevicePath \"\"" Jan 29 06:58:19 crc kubenswrapper[4826]: I0129 06:58:19.538703 4826 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/75ef4038-043e-4847-b71b-818782b647ab-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 06:58:19 crc kubenswrapper[4826]: I0129 06:58:19.538731 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmx96\" (UniqueName: \"kubernetes.io/projected/75ef4038-043e-4847-b71b-818782b647ab-kube-api-access-mmx96\") on node \"crc\" DevicePath \"\"" Jan 29 06:58:20 crc kubenswrapper[4826]: I0129 06:58:20.009638 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5d7mrn" event={"ID":"75ef4038-043e-4847-b71b-818782b647ab","Type":"ContainerDied","Data":"ed2f004ba871fa401de3dde6c01d33981372ccde34ec2365c6ab93f63bfd95ed"} Jan 29 06:58:20 crc kubenswrapper[4826]: I0129 06:58:20.010262 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed2f004ba871fa401de3dde6c01d33981372ccde34ec2365c6ab93f63bfd95ed" Jan 29 06:58:20 crc kubenswrapper[4826]: I0129 06:58:20.009770 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5d7mrn" Jan 29 06:58:22 crc kubenswrapper[4826]: I0129 06:58:22.322197 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-8bv9d"] Jan 29 06:58:22 crc kubenswrapper[4826]: E0129 06:58:22.322847 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08" containerName="extract-utilities" Jan 29 06:58:22 crc kubenswrapper[4826]: I0129 06:58:22.322864 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08" containerName="extract-utilities" Jan 29 06:58:22 crc kubenswrapper[4826]: E0129 06:58:22.322877 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08" containerName="registry-server" Jan 29 06:58:22 crc kubenswrapper[4826]: I0129 06:58:22.322886 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08" containerName="registry-server" Jan 29 06:58:22 crc kubenswrapper[4826]: E0129 06:58:22.322901 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75ef4038-043e-4847-b71b-818782b647ab" containerName="util" Jan 29 06:58:22 crc kubenswrapper[4826]: I0129 06:58:22.322914 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="75ef4038-043e-4847-b71b-818782b647ab" containerName="util" Jan 29 06:58:22 crc kubenswrapper[4826]: E0129 06:58:22.322945 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75ef4038-043e-4847-b71b-818782b647ab" containerName="pull" Jan 29 06:58:22 crc kubenswrapper[4826]: I0129 06:58:22.322956 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="75ef4038-043e-4847-b71b-818782b647ab" containerName="pull" Jan 29 06:58:22 crc kubenswrapper[4826]: E0129 06:58:22.322975 4826 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="75ef4038-043e-4847-b71b-818782b647ab" containerName="extract" Jan 29 06:58:22 crc kubenswrapper[4826]: I0129 06:58:22.322986 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="75ef4038-043e-4847-b71b-818782b647ab" containerName="extract" Jan 29 06:58:22 crc kubenswrapper[4826]: E0129 06:58:22.323009 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08" containerName="extract-content" Jan 29 06:58:22 crc kubenswrapper[4826]: I0129 06:58:22.323019 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08" containerName="extract-content" Jan 29 06:58:22 crc kubenswrapper[4826]: I0129 06:58:22.323159 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="75ef4038-043e-4847-b71b-818782b647ab" containerName="extract" Jan 29 06:58:22 crc kubenswrapper[4826]: I0129 06:58:22.323179 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eb3ba24-b1d0-4f25-b7ab-3aec797d7d08" containerName="registry-server" Jan 29 06:58:22 crc kubenswrapper[4826]: I0129 06:58:22.323762 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-8bv9d" Jan 29 06:58:22 crc kubenswrapper[4826]: I0129 06:58:22.332209 4826 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-5qgw6" Jan 29 06:58:22 crc kubenswrapper[4826]: I0129 06:58:22.332980 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 29 06:58:22 crc kubenswrapper[4826]: I0129 06:58:22.333475 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 29 06:58:22 crc kubenswrapper[4826]: I0129 06:58:22.354994 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-8bv9d"] Jan 29 06:58:22 crc kubenswrapper[4826]: I0129 06:58:22.478935 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/66c895a3-c95f-4ed4-96eb-0a7e1200bd03-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-8bv9d\" (UID: \"66c895a3-c95f-4ed4-96eb-0a7e1200bd03\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-8bv9d" Jan 29 06:58:22 crc kubenswrapper[4826]: I0129 06:58:22.479622 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8m8c\" (UniqueName: \"kubernetes.io/projected/66c895a3-c95f-4ed4-96eb-0a7e1200bd03-kube-api-access-g8m8c\") pod \"cert-manager-operator-controller-manager-66c8bdd694-8bv9d\" (UID: \"66c895a3-c95f-4ed4-96eb-0a7e1200bd03\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-8bv9d" Jan 29 06:58:22 crc kubenswrapper[4826]: I0129 06:58:22.580412 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-g8m8c\" (UniqueName: \"kubernetes.io/projected/66c895a3-c95f-4ed4-96eb-0a7e1200bd03-kube-api-access-g8m8c\") pod \"cert-manager-operator-controller-manager-66c8bdd694-8bv9d\" (UID: \"66c895a3-c95f-4ed4-96eb-0a7e1200bd03\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-8bv9d" Jan 29 06:58:22 crc kubenswrapper[4826]: I0129 06:58:22.580507 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/66c895a3-c95f-4ed4-96eb-0a7e1200bd03-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-8bv9d\" (UID: \"66c895a3-c95f-4ed4-96eb-0a7e1200bd03\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-8bv9d" Jan 29 06:58:22 crc kubenswrapper[4826]: I0129 06:58:22.580945 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/66c895a3-c95f-4ed4-96eb-0a7e1200bd03-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-8bv9d\" (UID: \"66c895a3-c95f-4ed4-96eb-0a7e1200bd03\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-8bv9d" Jan 29 06:58:22 crc kubenswrapper[4826]: I0129 06:58:22.605688 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8m8c\" (UniqueName: \"kubernetes.io/projected/66c895a3-c95f-4ed4-96eb-0a7e1200bd03-kube-api-access-g8m8c\") pod \"cert-manager-operator-controller-manager-66c8bdd694-8bv9d\" (UID: \"66c895a3-c95f-4ed4-96eb-0a7e1200bd03\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-8bv9d" Jan 29 06:58:22 crc kubenswrapper[4826]: I0129 06:58:22.641775 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-8bv9d" Jan 29 06:58:23 crc kubenswrapper[4826]: I0129 06:58:23.110455 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-8bv9d"] Jan 29 06:58:23 crc kubenswrapper[4826]: W0129 06:58:23.122729 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66c895a3_c95f_4ed4_96eb_0a7e1200bd03.slice/crio-c1e385e426e6366ae46840cfdaeec34d51be92128f8eac43ea482659149b8a22 WatchSource:0}: Error finding container c1e385e426e6366ae46840cfdaeec34d51be92128f8eac43ea482659149b8a22: Status 404 returned error can't find the container with id c1e385e426e6366ae46840cfdaeec34d51be92128f8eac43ea482659149b8a22 Jan 29 06:58:24 crc kubenswrapper[4826]: I0129 06:58:24.046637 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-8bv9d" event={"ID":"66c895a3-c95f-4ed4-96eb-0a7e1200bd03","Type":"ContainerStarted","Data":"c1e385e426e6366ae46840cfdaeec34d51be92128f8eac43ea482659149b8a22"} Jan 29 06:58:27 crc kubenswrapper[4826]: I0129 06:58:27.080042 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-8bv9d" event={"ID":"66c895a3-c95f-4ed4-96eb-0a7e1200bd03","Type":"ContainerStarted","Data":"53b96a99b2bc776d8ed571d7302d1a7f3c16cbfed0df17d7cdb6d9734ee9d9cb"} Jan 29 06:58:27 crc kubenswrapper[4826]: I0129 06:58:27.106596 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-8bv9d" podStartSLOduration=2.285971816 podStartE2EDuration="5.106577649s" podCreationTimestamp="2026-01-29 06:58:22 +0000 UTC" firstStartedPulling="2026-01-29 06:58:23.129525904 +0000 UTC m=+886.991318973" 
lastFinishedPulling="2026-01-29 06:58:25.950131697 +0000 UTC m=+889.811924806" observedRunningTime="2026-01-29 06:58:27.10390108 +0000 UTC m=+890.965694139" watchObservedRunningTime="2026-01-29 06:58:27.106577649 +0000 UTC m=+890.968370718" Jan 29 06:58:30 crc kubenswrapper[4826]: I0129 06:58:30.463149 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-tnjwc"] Jan 29 06:58:30 crc kubenswrapper[4826]: I0129 06:58:30.464344 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-tnjwc" Jan 29 06:58:30 crc kubenswrapper[4826]: I0129 06:58:30.466168 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 29 06:58:30 crc kubenswrapper[4826]: I0129 06:58:30.466246 4826 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-rwvmn" Jan 29 06:58:30 crc kubenswrapper[4826]: I0129 06:58:30.466884 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 29 06:58:30 crc kubenswrapper[4826]: I0129 06:58:30.477776 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-tnjwc"] Jan 29 06:58:30 crc kubenswrapper[4826]: I0129 06:58:30.503511 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/38c02f33-7056-4cf2-834d-765936d1be36-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-tnjwc\" (UID: \"38c02f33-7056-4cf2-834d-765936d1be36\") " pod="cert-manager/cert-manager-webhook-6888856db4-tnjwc" Jan 29 06:58:30 crc kubenswrapper[4826]: I0129 06:58:30.503598 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xsv2\" (UniqueName: 
\"kubernetes.io/projected/38c02f33-7056-4cf2-834d-765936d1be36-kube-api-access-8xsv2\") pod \"cert-manager-webhook-6888856db4-tnjwc\" (UID: \"38c02f33-7056-4cf2-834d-765936d1be36\") " pod="cert-manager/cert-manager-webhook-6888856db4-tnjwc" Jan 29 06:58:30 crc kubenswrapper[4826]: I0129 06:58:30.604645 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xsv2\" (UniqueName: \"kubernetes.io/projected/38c02f33-7056-4cf2-834d-765936d1be36-kube-api-access-8xsv2\") pod \"cert-manager-webhook-6888856db4-tnjwc\" (UID: \"38c02f33-7056-4cf2-834d-765936d1be36\") " pod="cert-manager/cert-manager-webhook-6888856db4-tnjwc" Jan 29 06:58:30 crc kubenswrapper[4826]: I0129 06:58:30.604945 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/38c02f33-7056-4cf2-834d-765936d1be36-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-tnjwc\" (UID: \"38c02f33-7056-4cf2-834d-765936d1be36\") " pod="cert-manager/cert-manager-webhook-6888856db4-tnjwc" Jan 29 06:58:30 crc kubenswrapper[4826]: I0129 06:58:30.628697 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/38c02f33-7056-4cf2-834d-765936d1be36-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-tnjwc\" (UID: \"38c02f33-7056-4cf2-834d-765936d1be36\") " pod="cert-manager/cert-manager-webhook-6888856db4-tnjwc" Jan 29 06:58:30 crc kubenswrapper[4826]: I0129 06:58:30.629567 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xsv2\" (UniqueName: \"kubernetes.io/projected/38c02f33-7056-4cf2-834d-765936d1be36-kube-api-access-8xsv2\") pod \"cert-manager-webhook-6888856db4-tnjwc\" (UID: \"38c02f33-7056-4cf2-834d-765936d1be36\") " pod="cert-manager/cert-manager-webhook-6888856db4-tnjwc" Jan 29 06:58:30 crc kubenswrapper[4826]: I0129 06:58:30.815599 4826 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-tnjwc" Jan 29 06:58:31 crc kubenswrapper[4826]: I0129 06:58:31.025560 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-tnjwc"] Jan 29 06:58:31 crc kubenswrapper[4826]: I0129 06:58:31.109423 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-tnjwc" event={"ID":"38c02f33-7056-4cf2-834d-765936d1be36","Type":"ContainerStarted","Data":"1757ec888afe47dac3644280adac458039a355ae29af52afd6a72b6bc07ba19c"} Jan 29 06:58:32 crc kubenswrapper[4826]: I0129 06:58:32.144211 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m9fxh"] Jan 29 06:58:32 crc kubenswrapper[4826]: I0129 06:58:32.145505 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m9fxh" Jan 29 06:58:32 crc kubenswrapper[4826]: I0129 06:58:32.161421 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m9fxh"] Jan 29 06:58:32 crc kubenswrapper[4826]: I0129 06:58:32.328226 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac5b1d44-da2f-442e-b353-06d2cc2d60d2-utilities\") pod \"community-operators-m9fxh\" (UID: \"ac5b1d44-da2f-442e-b353-06d2cc2d60d2\") " pod="openshift-marketplace/community-operators-m9fxh" Jan 29 06:58:32 crc kubenswrapper[4826]: I0129 06:58:32.328332 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgvdg\" (UniqueName: \"kubernetes.io/projected/ac5b1d44-da2f-442e-b353-06d2cc2d60d2-kube-api-access-qgvdg\") pod \"community-operators-m9fxh\" (UID: \"ac5b1d44-da2f-442e-b353-06d2cc2d60d2\") " pod="openshift-marketplace/community-operators-m9fxh" Jan 29 06:58:32 crc 
kubenswrapper[4826]: I0129 06:58:32.328363 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac5b1d44-da2f-442e-b353-06d2cc2d60d2-catalog-content\") pod \"community-operators-m9fxh\" (UID: \"ac5b1d44-da2f-442e-b353-06d2cc2d60d2\") " pod="openshift-marketplace/community-operators-m9fxh" Jan 29 06:58:32 crc kubenswrapper[4826]: I0129 06:58:32.430054 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac5b1d44-da2f-442e-b353-06d2cc2d60d2-utilities\") pod \"community-operators-m9fxh\" (UID: \"ac5b1d44-da2f-442e-b353-06d2cc2d60d2\") " pod="openshift-marketplace/community-operators-m9fxh" Jan 29 06:58:32 crc kubenswrapper[4826]: I0129 06:58:32.430366 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgvdg\" (UniqueName: \"kubernetes.io/projected/ac5b1d44-da2f-442e-b353-06d2cc2d60d2-kube-api-access-qgvdg\") pod \"community-operators-m9fxh\" (UID: \"ac5b1d44-da2f-442e-b353-06d2cc2d60d2\") " pod="openshift-marketplace/community-operators-m9fxh" Jan 29 06:58:32 crc kubenswrapper[4826]: I0129 06:58:32.430397 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac5b1d44-da2f-442e-b353-06d2cc2d60d2-catalog-content\") pod \"community-operators-m9fxh\" (UID: \"ac5b1d44-da2f-442e-b353-06d2cc2d60d2\") " pod="openshift-marketplace/community-operators-m9fxh" Jan 29 06:58:32 crc kubenswrapper[4826]: I0129 06:58:32.430881 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac5b1d44-da2f-442e-b353-06d2cc2d60d2-utilities\") pod \"community-operators-m9fxh\" (UID: \"ac5b1d44-da2f-442e-b353-06d2cc2d60d2\") " pod="openshift-marketplace/community-operators-m9fxh" Jan 29 06:58:32 crc 
kubenswrapper[4826]: I0129 06:58:32.430891 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac5b1d44-da2f-442e-b353-06d2cc2d60d2-catalog-content\") pod \"community-operators-m9fxh\" (UID: \"ac5b1d44-da2f-442e-b353-06d2cc2d60d2\") " pod="openshift-marketplace/community-operators-m9fxh" Jan 29 06:58:32 crc kubenswrapper[4826]: I0129 06:58:32.453105 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgvdg\" (UniqueName: \"kubernetes.io/projected/ac5b1d44-da2f-442e-b353-06d2cc2d60d2-kube-api-access-qgvdg\") pod \"community-operators-m9fxh\" (UID: \"ac5b1d44-da2f-442e-b353-06d2cc2d60d2\") " pod="openshift-marketplace/community-operators-m9fxh" Jan 29 06:58:32 crc kubenswrapper[4826]: I0129 06:58:32.463603 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m9fxh" Jan 29 06:58:32 crc kubenswrapper[4826]: I0129 06:58:32.939881 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m9fxh"] Jan 29 06:58:32 crc kubenswrapper[4826]: W0129 06:58:32.948960 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac5b1d44_da2f_442e_b353_06d2cc2d60d2.slice/crio-0d4331db6e83bcdfafe5a45af1e3e88c3a4fafa310453a11d6844121a3d57098 WatchSource:0}: Error finding container 0d4331db6e83bcdfafe5a45af1e3e88c3a4fafa310453a11d6844121a3d57098: Status 404 returned error can't find the container with id 0d4331db6e83bcdfafe5a45af1e3e88c3a4fafa310453a11d6844121a3d57098 Jan 29 06:58:33 crc kubenswrapper[4826]: I0129 06:58:33.121903 4826 generic.go:334] "Generic (PLEG): container finished" podID="ac5b1d44-da2f-442e-b353-06d2cc2d60d2" containerID="bfee3a8bfecde2141db2f9c8d9d7ecaa0cbb3c08bacb4820db726b8a7c6e8b0a" exitCode=0 Jan 29 06:58:33 crc kubenswrapper[4826]: I0129 06:58:33.121949 4826 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m9fxh" event={"ID":"ac5b1d44-da2f-442e-b353-06d2cc2d60d2","Type":"ContainerDied","Data":"bfee3a8bfecde2141db2f9c8d9d7ecaa0cbb3c08bacb4820db726b8a7c6e8b0a"} Jan 29 06:58:33 crc kubenswrapper[4826]: I0129 06:58:33.121980 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m9fxh" event={"ID":"ac5b1d44-da2f-442e-b353-06d2cc2d60d2","Type":"ContainerStarted","Data":"0d4331db6e83bcdfafe5a45af1e3e88c3a4fafa310453a11d6844121a3d57098"} Jan 29 06:58:34 crc kubenswrapper[4826]: I0129 06:58:34.148477 4826 generic.go:334] "Generic (PLEG): container finished" podID="ac5b1d44-da2f-442e-b353-06d2cc2d60d2" containerID="8eaa05fa869cce6ec383215cefb877a6857a9f3ce437a33102847442058d4611" exitCode=0 Jan 29 06:58:34 crc kubenswrapper[4826]: I0129 06:58:34.149825 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m9fxh" event={"ID":"ac5b1d44-da2f-442e-b353-06d2cc2d60d2","Type":"ContainerDied","Data":"8eaa05fa869cce6ec383215cefb877a6857a9f3ce437a33102847442058d4611"} Jan 29 06:58:35 crc kubenswrapper[4826]: I0129 06:58:35.113060 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-nqvpr"] Jan 29 06:58:35 crc kubenswrapper[4826]: I0129 06:58:35.113819 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-nqvpr" Jan 29 06:58:35 crc kubenswrapper[4826]: I0129 06:58:35.116173 4826 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-jlw7w" Jan 29 06:58:35 crc kubenswrapper[4826]: I0129 06:58:35.127019 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-nqvpr"] Jan 29 06:58:35 crc kubenswrapper[4826]: I0129 06:58:35.174085 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d57ce65b-8ed9-4f74-b907-978d1ef8911c-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-nqvpr\" (UID: \"d57ce65b-8ed9-4f74-b907-978d1ef8911c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-nqvpr" Jan 29 06:58:35 crc kubenswrapper[4826]: I0129 06:58:35.174216 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc2jf\" (UniqueName: \"kubernetes.io/projected/d57ce65b-8ed9-4f74-b907-978d1ef8911c-kube-api-access-fc2jf\") pod \"cert-manager-cainjector-5545bd876-nqvpr\" (UID: \"d57ce65b-8ed9-4f74-b907-978d1ef8911c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-nqvpr" Jan 29 06:58:35 crc kubenswrapper[4826]: I0129 06:58:35.278114 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc2jf\" (UniqueName: \"kubernetes.io/projected/d57ce65b-8ed9-4f74-b907-978d1ef8911c-kube-api-access-fc2jf\") pod \"cert-manager-cainjector-5545bd876-nqvpr\" (UID: \"d57ce65b-8ed9-4f74-b907-978d1ef8911c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-nqvpr" Jan 29 06:58:35 crc kubenswrapper[4826]: I0129 06:58:35.278229 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/d57ce65b-8ed9-4f74-b907-978d1ef8911c-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-nqvpr\" (UID: \"d57ce65b-8ed9-4f74-b907-978d1ef8911c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-nqvpr" Jan 29 06:58:35 crc kubenswrapper[4826]: I0129 06:58:35.305912 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d57ce65b-8ed9-4f74-b907-978d1ef8911c-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-nqvpr\" (UID: \"d57ce65b-8ed9-4f74-b907-978d1ef8911c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-nqvpr" Jan 29 06:58:35 crc kubenswrapper[4826]: I0129 06:58:35.315133 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc2jf\" (UniqueName: \"kubernetes.io/projected/d57ce65b-8ed9-4f74-b907-978d1ef8911c-kube-api-access-fc2jf\") pod \"cert-manager-cainjector-5545bd876-nqvpr\" (UID: \"d57ce65b-8ed9-4f74-b907-978d1ef8911c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-nqvpr" Jan 29 06:58:35 crc kubenswrapper[4826]: I0129 06:58:35.428596 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-nqvpr" Jan 29 06:58:36 crc kubenswrapper[4826]: I0129 06:58:36.187922 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m9fxh" event={"ID":"ac5b1d44-da2f-442e-b353-06d2cc2d60d2","Type":"ContainerStarted","Data":"875e4c6b8a80512d237c84fb4cfa0f62289d21e44f54a8551c34d39c1fe1be4f"} Jan 29 06:58:36 crc kubenswrapper[4826]: I0129 06:58:36.189463 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-tnjwc" event={"ID":"38c02f33-7056-4cf2-834d-765936d1be36","Type":"ContainerStarted","Data":"56409450f9ffa95f739a53deca0dff9dd39ada061f7377168d276fb288cedd22"} Jan 29 06:58:36 crc kubenswrapper[4826]: I0129 06:58:36.190184 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-tnjwc" Jan 29 06:58:36 crc kubenswrapper[4826]: I0129 06:58:36.209536 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m9fxh" podStartSLOduration=1.38574223 podStartE2EDuration="4.209514804s" podCreationTimestamp="2026-01-29 06:58:32 +0000 UTC" firstStartedPulling="2026-01-29 06:58:33.124757767 +0000 UTC m=+896.986550856" lastFinishedPulling="2026-01-29 06:58:35.948530321 +0000 UTC m=+899.810323430" observedRunningTime="2026-01-29 06:58:36.207427291 +0000 UTC m=+900.069220360" watchObservedRunningTime="2026-01-29 06:58:36.209514804 +0000 UTC m=+900.071307873" Jan 29 06:58:36 crc kubenswrapper[4826]: I0129 06:58:36.227215 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-tnjwc" podStartSLOduration=1.279527375 podStartE2EDuration="6.227197289s" podCreationTimestamp="2026-01-29 06:58:30 +0000 UTC" firstStartedPulling="2026-01-29 06:58:31.036957305 +0000 UTC m=+894.898750374" lastFinishedPulling="2026-01-29 06:58:35.984627219 
+0000 UTC m=+899.846420288" observedRunningTime="2026-01-29 06:58:36.224760866 +0000 UTC m=+900.086553925" watchObservedRunningTime="2026-01-29 06:58:36.227197289 +0000 UTC m=+900.088990358" Jan 29 06:58:36 crc kubenswrapper[4826]: W0129 06:58:36.299571 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd57ce65b_8ed9_4f74_b907_978d1ef8911c.slice/crio-5e3fe8e6f4de528ce1eaad2b216e54ab3fb79e55afaeea1391a2a30bcba92462 WatchSource:0}: Error finding container 5e3fe8e6f4de528ce1eaad2b216e54ab3fb79e55afaeea1391a2a30bcba92462: Status 404 returned error can't find the container with id 5e3fe8e6f4de528ce1eaad2b216e54ab3fb79e55afaeea1391a2a30bcba92462 Jan 29 06:58:36 crc kubenswrapper[4826]: I0129 06:58:36.307418 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-nqvpr"] Jan 29 06:58:37 crc kubenswrapper[4826]: I0129 06:58:37.198648 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-nqvpr" event={"ID":"d57ce65b-8ed9-4f74-b907-978d1ef8911c","Type":"ContainerStarted","Data":"9505eec60127eaf6bb8311dc6fd4535d1624f3d02ea4ce9b3219704a55d3348a"} Jan 29 06:58:37 crc kubenswrapper[4826]: I0129 06:58:37.198923 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-nqvpr" event={"ID":"d57ce65b-8ed9-4f74-b907-978d1ef8911c","Type":"ContainerStarted","Data":"5e3fe8e6f4de528ce1eaad2b216e54ab3fb79e55afaeea1391a2a30bcba92462"} Jan 29 06:58:37 crc kubenswrapper[4826]: I0129 06:58:37.222936 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-nqvpr" podStartSLOduration=2.222897412 podStartE2EDuration="2.222897412s" podCreationTimestamp="2026-01-29 06:58:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-29 06:58:37.222459121 +0000 UTC m=+901.084252210" watchObservedRunningTime="2026-01-29 06:58:37.222897412 +0000 UTC m=+901.084690481" Jan 29 06:58:42 crc kubenswrapper[4826]: I0129 06:58:42.464548 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m9fxh" Jan 29 06:58:42 crc kubenswrapper[4826]: I0129 06:58:42.465231 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m9fxh" Jan 29 06:58:42 crc kubenswrapper[4826]: I0129 06:58:42.521925 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m9fxh" Jan 29 06:58:43 crc kubenswrapper[4826]: I0129 06:58:43.297700 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m9fxh" Jan 29 06:58:44 crc kubenswrapper[4826]: I0129 06:58:44.933139 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m9fxh"] Jan 29 06:58:45 crc kubenswrapper[4826]: I0129 06:58:45.252195 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m9fxh" podUID="ac5b1d44-da2f-442e-b353-06d2cc2d60d2" containerName="registry-server" containerID="cri-o://875e4c6b8a80512d237c84fb4cfa0f62289d21e44f54a8551c34d39c1fe1be4f" gracePeriod=2 Jan 29 06:58:45 crc kubenswrapper[4826]: I0129 06:58:45.820365 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-tnjwc" Jan 29 06:58:46 crc kubenswrapper[4826]: I0129 06:58:46.262990 4826 generic.go:334] "Generic (PLEG): container finished" podID="ac5b1d44-da2f-442e-b353-06d2cc2d60d2" containerID="875e4c6b8a80512d237c84fb4cfa0f62289d21e44f54a8551c34d39c1fe1be4f" exitCode=0 Jan 29 06:58:46 crc kubenswrapper[4826]: I0129 06:58:46.263067 4826 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m9fxh" event={"ID":"ac5b1d44-da2f-442e-b353-06d2cc2d60d2","Type":"ContainerDied","Data":"875e4c6b8a80512d237c84fb4cfa0f62289d21e44f54a8551c34d39c1fe1be4f"} Jan 29 06:58:47 crc kubenswrapper[4826]: I0129 06:58:47.455736 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m9fxh" Jan 29 06:58:47 crc kubenswrapper[4826]: I0129 06:58:47.561049 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac5b1d44-da2f-442e-b353-06d2cc2d60d2-catalog-content\") pod \"ac5b1d44-da2f-442e-b353-06d2cc2d60d2\" (UID: \"ac5b1d44-da2f-442e-b353-06d2cc2d60d2\") " Jan 29 06:58:47 crc kubenswrapper[4826]: I0129 06:58:47.561137 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgvdg\" (UniqueName: \"kubernetes.io/projected/ac5b1d44-da2f-442e-b353-06d2cc2d60d2-kube-api-access-qgvdg\") pod \"ac5b1d44-da2f-442e-b353-06d2cc2d60d2\" (UID: \"ac5b1d44-da2f-442e-b353-06d2cc2d60d2\") " Jan 29 06:58:47 crc kubenswrapper[4826]: I0129 06:58:47.561166 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac5b1d44-da2f-442e-b353-06d2cc2d60d2-utilities\") pod \"ac5b1d44-da2f-442e-b353-06d2cc2d60d2\" (UID: \"ac5b1d44-da2f-442e-b353-06d2cc2d60d2\") " Jan 29 06:58:47 crc kubenswrapper[4826]: I0129 06:58:47.562799 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac5b1d44-da2f-442e-b353-06d2cc2d60d2-utilities" (OuterVolumeSpecName: "utilities") pod "ac5b1d44-da2f-442e-b353-06d2cc2d60d2" (UID: "ac5b1d44-da2f-442e-b353-06d2cc2d60d2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:58:47 crc kubenswrapper[4826]: I0129 06:58:47.569939 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac5b1d44-da2f-442e-b353-06d2cc2d60d2-kube-api-access-qgvdg" (OuterVolumeSpecName: "kube-api-access-qgvdg") pod "ac5b1d44-da2f-442e-b353-06d2cc2d60d2" (UID: "ac5b1d44-da2f-442e-b353-06d2cc2d60d2"). InnerVolumeSpecName "kube-api-access-qgvdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 06:58:47 crc kubenswrapper[4826]: I0129 06:58:47.638385 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac5b1d44-da2f-442e-b353-06d2cc2d60d2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac5b1d44-da2f-442e-b353-06d2cc2d60d2" (UID: "ac5b1d44-da2f-442e-b353-06d2cc2d60d2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 06:58:47 crc kubenswrapper[4826]: I0129 06:58:47.663217 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgvdg\" (UniqueName: \"kubernetes.io/projected/ac5b1d44-da2f-442e-b353-06d2cc2d60d2-kube-api-access-qgvdg\") on node \"crc\" DevicePath \"\"" Jan 29 06:58:47 crc kubenswrapper[4826]: I0129 06:58:47.663260 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac5b1d44-da2f-442e-b353-06d2cc2d60d2-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 06:58:47 crc kubenswrapper[4826]: I0129 06:58:47.663280 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac5b1d44-da2f-442e-b353-06d2cc2d60d2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 06:58:48 crc kubenswrapper[4826]: I0129 06:58:48.280540 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m9fxh" 
event={"ID":"ac5b1d44-da2f-442e-b353-06d2cc2d60d2","Type":"ContainerDied","Data":"0d4331db6e83bcdfafe5a45af1e3e88c3a4fafa310453a11d6844121a3d57098"} Jan 29 06:58:48 crc kubenswrapper[4826]: I0129 06:58:48.280627 4826 scope.go:117] "RemoveContainer" containerID="875e4c6b8a80512d237c84fb4cfa0f62289d21e44f54a8551c34d39c1fe1be4f" Jan 29 06:58:48 crc kubenswrapper[4826]: I0129 06:58:48.280641 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m9fxh" Jan 29 06:58:48 crc kubenswrapper[4826]: I0129 06:58:48.301356 4826 scope.go:117] "RemoveContainer" containerID="8eaa05fa869cce6ec383215cefb877a6857a9f3ce437a33102847442058d4611" Jan 29 06:58:48 crc kubenswrapper[4826]: I0129 06:58:48.380328 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m9fxh"] Jan 29 06:58:48 crc kubenswrapper[4826]: I0129 06:58:48.382952 4826 scope.go:117] "RemoveContainer" containerID="bfee3a8bfecde2141db2f9c8d9d7ecaa0cbb3c08bacb4820db726b8a7c6e8b0a" Jan 29 06:58:48 crc kubenswrapper[4826]: I0129 06:58:48.391929 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m9fxh"] Jan 29 06:58:48 crc kubenswrapper[4826]: I0129 06:58:48.818387 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac5b1d44-da2f-442e-b353-06d2cc2d60d2" path="/var/lib/kubelet/pods/ac5b1d44-da2f-442e-b353-06d2cc2d60d2/volumes" Jan 29 06:58:48 crc kubenswrapper[4826]: I0129 06:58:48.870686 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-tk5nw"] Jan 29 06:58:48 crc kubenswrapper[4826]: E0129 06:58:48.870951 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac5b1d44-da2f-442e-b353-06d2cc2d60d2" containerName="registry-server" Jan 29 06:58:48 crc kubenswrapper[4826]: I0129 06:58:48.870965 4826 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ac5b1d44-da2f-442e-b353-06d2cc2d60d2" containerName="registry-server" Jan 29 06:58:48 crc kubenswrapper[4826]: E0129 06:58:48.870976 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac5b1d44-da2f-442e-b353-06d2cc2d60d2" containerName="extract-content" Jan 29 06:58:48 crc kubenswrapper[4826]: I0129 06:58:48.870982 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac5b1d44-da2f-442e-b353-06d2cc2d60d2" containerName="extract-content" Jan 29 06:58:48 crc kubenswrapper[4826]: E0129 06:58:48.870990 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac5b1d44-da2f-442e-b353-06d2cc2d60d2" containerName="extract-utilities" Jan 29 06:58:48 crc kubenswrapper[4826]: I0129 06:58:48.870996 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac5b1d44-da2f-442e-b353-06d2cc2d60d2" containerName="extract-utilities" Jan 29 06:58:48 crc kubenswrapper[4826]: I0129 06:58:48.871104 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac5b1d44-da2f-442e-b353-06d2cc2d60d2" containerName="registry-server" Jan 29 06:58:48 crc kubenswrapper[4826]: I0129 06:58:48.871538 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-tk5nw"
Jan 29 06:58:48 crc kubenswrapper[4826]: I0129 06:58:48.873059 4826 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-kml8b"
Jan 29 06:58:48 crc kubenswrapper[4826]: I0129 06:58:48.889520 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-tk5nw"]
Jan 29 06:58:48 crc kubenswrapper[4826]: I0129 06:58:48.979681 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mbbv\" (UniqueName: \"kubernetes.io/projected/f1a39541-4e51-4239-ae9e-e58ef3ff63b9-kube-api-access-9mbbv\") pod \"cert-manager-545d4d4674-tk5nw\" (UID: \"f1a39541-4e51-4239-ae9e-e58ef3ff63b9\") " pod="cert-manager/cert-manager-545d4d4674-tk5nw"
Jan 29 06:58:48 crc kubenswrapper[4826]: I0129 06:58:48.979801 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f1a39541-4e51-4239-ae9e-e58ef3ff63b9-bound-sa-token\") pod \"cert-manager-545d4d4674-tk5nw\" (UID: \"f1a39541-4e51-4239-ae9e-e58ef3ff63b9\") " pod="cert-manager/cert-manager-545d4d4674-tk5nw"
Jan 29 06:58:49 crc kubenswrapper[4826]: I0129 06:58:49.081672 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f1a39541-4e51-4239-ae9e-e58ef3ff63b9-bound-sa-token\") pod \"cert-manager-545d4d4674-tk5nw\" (UID: \"f1a39541-4e51-4239-ae9e-e58ef3ff63b9\") " pod="cert-manager/cert-manager-545d4d4674-tk5nw"
Jan 29 06:58:49 crc kubenswrapper[4826]: I0129 06:58:49.081840 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mbbv\" (UniqueName: \"kubernetes.io/projected/f1a39541-4e51-4239-ae9e-e58ef3ff63b9-kube-api-access-9mbbv\") pod \"cert-manager-545d4d4674-tk5nw\" (UID: \"f1a39541-4e51-4239-ae9e-e58ef3ff63b9\") " pod="cert-manager/cert-manager-545d4d4674-tk5nw"
Jan 29 06:58:49 crc kubenswrapper[4826]: I0129 06:58:49.103060 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mbbv\" (UniqueName: \"kubernetes.io/projected/f1a39541-4e51-4239-ae9e-e58ef3ff63b9-kube-api-access-9mbbv\") pod \"cert-manager-545d4d4674-tk5nw\" (UID: \"f1a39541-4e51-4239-ae9e-e58ef3ff63b9\") " pod="cert-manager/cert-manager-545d4d4674-tk5nw"
Jan 29 06:58:49 crc kubenswrapper[4826]: I0129 06:58:49.106011 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f1a39541-4e51-4239-ae9e-e58ef3ff63b9-bound-sa-token\") pod \"cert-manager-545d4d4674-tk5nw\" (UID: \"f1a39541-4e51-4239-ae9e-e58ef3ff63b9\") " pod="cert-manager/cert-manager-545d4d4674-tk5nw"
Jan 29 06:58:49 crc kubenswrapper[4826]: I0129 06:58:49.198698 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-tk5nw"
Jan 29 06:58:49 crc kubenswrapper[4826]: I0129 06:58:49.456319 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-tk5nw"]
Jan 29 06:58:49 crc kubenswrapper[4826]: W0129 06:58:49.466929 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1a39541_4e51_4239_ae9e_e58ef3ff63b9.slice/crio-3da0afc17b5ddee7a0c36ff9b408ca2408769da3963dec9f66c916e95e88cadc WatchSource:0}: Error finding container 3da0afc17b5ddee7a0c36ff9b408ca2408769da3963dec9f66c916e95e88cadc: Status 404 returned error can't find the container with id 3da0afc17b5ddee7a0c36ff9b408ca2408769da3963dec9f66c916e95e88cadc
Jan 29 06:58:50 crc kubenswrapper[4826]: I0129 06:58:50.297986 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-tk5nw" event={"ID":"f1a39541-4e51-4239-ae9e-e58ef3ff63b9","Type":"ContainerStarted","Data":"ed214021dcacd3c349a615a96648b0b8db20aef488e2d00028f69f6a91698915"}
Jan 29 06:58:50 crc kubenswrapper[4826]: I0129 06:58:50.298377 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-tk5nw" event={"ID":"f1a39541-4e51-4239-ae9e-e58ef3ff63b9","Type":"ContainerStarted","Data":"3da0afc17b5ddee7a0c36ff9b408ca2408769da3963dec9f66c916e95e88cadc"}
Jan 29 06:58:50 crc kubenswrapper[4826]: I0129 06:58:50.319624 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-tk5nw" podStartSLOduration=2.319607332 podStartE2EDuration="2.319607332s" podCreationTimestamp="2026-01-29 06:58:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 06:58:50.318498104 +0000 UTC m=+914.180291173" watchObservedRunningTime="2026-01-29 06:58:50.319607332 +0000 UTC m=+914.181400401"
Jan 29 06:59:03 crc kubenswrapper[4826]: I0129 06:59:03.352003 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-6dzlk"]
Jan 29 06:59:03 crc kubenswrapper[4826]: I0129 06:59:03.353773 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6dzlk"
Jan 29 06:59:03 crc kubenswrapper[4826]: I0129 06:59:03.365443 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Jan 29 06:59:03 crc kubenswrapper[4826]: I0129 06:59:03.366671 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Jan 29 06:59:03 crc kubenswrapper[4826]: I0129 06:59:03.367355 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-hffnv"
Jan 29 06:59:03 crc kubenswrapper[4826]: I0129 06:59:03.379249 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6dzlk"]
Jan 29 06:59:03 crc kubenswrapper[4826]: I0129 06:59:03.412824 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmwfq\" (UniqueName: \"kubernetes.io/projected/438c09e3-7fe5-47a9-b339-622b798a6cee-kube-api-access-rmwfq\") pod \"openstack-operator-index-6dzlk\" (UID: \"438c09e3-7fe5-47a9-b339-622b798a6cee\") " pod="openstack-operators/openstack-operator-index-6dzlk"
Jan 29 06:59:03 crc kubenswrapper[4826]: I0129 06:59:03.513580 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmwfq\" (UniqueName: \"kubernetes.io/projected/438c09e3-7fe5-47a9-b339-622b798a6cee-kube-api-access-rmwfq\") pod \"openstack-operator-index-6dzlk\" (UID: \"438c09e3-7fe5-47a9-b339-622b798a6cee\") " pod="openstack-operators/openstack-operator-index-6dzlk"
Jan 29 06:59:03 crc kubenswrapper[4826]: I0129 06:59:03.535406 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmwfq\" (UniqueName: \"kubernetes.io/projected/438c09e3-7fe5-47a9-b339-622b798a6cee-kube-api-access-rmwfq\") pod \"openstack-operator-index-6dzlk\" (UID: \"438c09e3-7fe5-47a9-b339-622b798a6cee\") " pod="openstack-operators/openstack-operator-index-6dzlk"
Jan 29 06:59:03 crc kubenswrapper[4826]: I0129 06:59:03.722406 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6dzlk"
Jan 29 06:59:04 crc kubenswrapper[4826]: I0129 06:59:04.160978 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6dzlk"]
Jan 29 06:59:04 crc kubenswrapper[4826]: I0129 06:59:04.420870 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6dzlk" event={"ID":"438c09e3-7fe5-47a9-b339-622b798a6cee","Type":"ContainerStarted","Data":"0ab44dc08b3b6a253fa08c3f83c0f035b27a8a6a6eef15216dd34c41f2741783"}
Jan 29 06:59:05 crc kubenswrapper[4826]: I0129 06:59:05.432746 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6dzlk" event={"ID":"438c09e3-7fe5-47a9-b339-622b798a6cee","Type":"ContainerStarted","Data":"be7423b671d95d03f2fe071a624bdc47a7b69576c378b2f92919ce58577950ce"}
Jan 29 06:59:05 crc kubenswrapper[4826]: I0129 06:59:05.460352 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-6dzlk" podStartSLOduration=1.5849144050000001 podStartE2EDuration="2.460324609s" podCreationTimestamp="2026-01-29 06:59:03 +0000 UTC" firstStartedPulling="2026-01-29 06:59:04.172959435 +0000 UTC m=+928.034752534" lastFinishedPulling="2026-01-29 06:59:05.048369629 +0000 UTC m=+928.910162738" observedRunningTime="2026-01-29 06:59:05.455051174 +0000 UTC m=+929.316844283" watchObservedRunningTime="2026-01-29 06:59:05.460324609 +0000 UTC m=+929.322117708"
Jan 29 06:59:05 crc kubenswrapper[4826]: I0129 06:59:05.656563 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 06:59:05 crc kubenswrapper[4826]: I0129 06:59:05.656639 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 06:59:13 crc kubenswrapper[4826]: I0129 06:59:13.723256 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-6dzlk"
Jan 29 06:59:13 crc kubenswrapper[4826]: I0129 06:59:13.727332 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-6dzlk"
Jan 29 06:59:13 crc kubenswrapper[4826]: I0129 06:59:13.768294 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-6dzlk"
Jan 29 06:59:14 crc kubenswrapper[4826]: I0129 06:59:14.547557 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-6dzlk"
Jan 29 06:59:17 crc kubenswrapper[4826]: I0129 06:59:17.004741 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924d5pkp6"]
Jan 29 06:59:17 crc kubenswrapper[4826]: I0129 06:59:17.007195 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924d5pkp6"
Jan 29 06:59:17 crc kubenswrapper[4826]: I0129 06:59:17.010167 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-gwd8m"
Jan 29 06:59:17 crc kubenswrapper[4826]: I0129 06:59:17.028817 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924d5pkp6"]
Jan 29 06:59:17 crc kubenswrapper[4826]: I0129 06:59:17.060654 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41595216-47a0-4aa0-8485-894c44ca5b07-util\") pod \"da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924d5pkp6\" (UID: \"41595216-47a0-4aa0-8485-894c44ca5b07\") " pod="openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924d5pkp6"
Jan 29 06:59:17 crc kubenswrapper[4826]: I0129 06:59:17.060705 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lswpw\" (UniqueName: \"kubernetes.io/projected/41595216-47a0-4aa0-8485-894c44ca5b07-kube-api-access-lswpw\") pod \"da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924d5pkp6\" (UID: \"41595216-47a0-4aa0-8485-894c44ca5b07\") " pod="openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924d5pkp6"
Jan 29 06:59:17 crc kubenswrapper[4826]: I0129 06:59:17.060754 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41595216-47a0-4aa0-8485-894c44ca5b07-bundle\") pod \"da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924d5pkp6\" (UID: \"41595216-47a0-4aa0-8485-894c44ca5b07\") " pod="openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924d5pkp6"
Jan 29 06:59:17 crc kubenswrapper[4826]: I0129 06:59:17.162828 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41595216-47a0-4aa0-8485-894c44ca5b07-util\") pod \"da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924d5pkp6\" (UID: \"41595216-47a0-4aa0-8485-894c44ca5b07\") " pod="openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924d5pkp6"
Jan 29 06:59:17 crc kubenswrapper[4826]: I0129 06:59:17.162887 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lswpw\" (UniqueName: \"kubernetes.io/projected/41595216-47a0-4aa0-8485-894c44ca5b07-kube-api-access-lswpw\") pod \"da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924d5pkp6\" (UID: \"41595216-47a0-4aa0-8485-894c44ca5b07\") " pod="openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924d5pkp6"
Jan 29 06:59:17 crc kubenswrapper[4826]: I0129 06:59:17.162938 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41595216-47a0-4aa0-8485-894c44ca5b07-bundle\") pod \"da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924d5pkp6\" (UID: \"41595216-47a0-4aa0-8485-894c44ca5b07\") " pod="openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924d5pkp6"
Jan 29 06:59:17 crc kubenswrapper[4826]: I0129 06:59:17.163517 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41595216-47a0-4aa0-8485-894c44ca5b07-bundle\") pod \"da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924d5pkp6\" (UID: \"41595216-47a0-4aa0-8485-894c44ca5b07\") " pod="openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924d5pkp6"
Jan 29 06:59:17 crc kubenswrapper[4826]: I0129 06:59:17.163860 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41595216-47a0-4aa0-8485-894c44ca5b07-util\") pod \"da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924d5pkp6\" (UID: \"41595216-47a0-4aa0-8485-894c44ca5b07\") " pod="openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924d5pkp6"
Jan 29 06:59:17 crc kubenswrapper[4826]: I0129 06:59:17.197370 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lswpw\" (UniqueName: \"kubernetes.io/projected/41595216-47a0-4aa0-8485-894c44ca5b07-kube-api-access-lswpw\") pod \"da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924d5pkp6\" (UID: \"41595216-47a0-4aa0-8485-894c44ca5b07\") " pod="openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924d5pkp6"
Jan 29 06:59:17 crc kubenswrapper[4826]: I0129 06:59:17.327605 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924d5pkp6"
Jan 29 06:59:17 crc kubenswrapper[4826]: I0129 06:59:17.841962 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924d5pkp6"]
Jan 29 06:59:17 crc kubenswrapper[4826]: W0129 06:59:17.849401 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41595216_47a0_4aa0_8485_894c44ca5b07.slice/crio-1ba43bdb71d5bd7fb9cc0a6d52b81d96585cee4a5faf7f7e9068531110608ec8 WatchSource:0}: Error finding container 1ba43bdb71d5bd7fb9cc0a6d52b81d96585cee4a5faf7f7e9068531110608ec8: Status 404 returned error can't find the container with id 1ba43bdb71d5bd7fb9cc0a6d52b81d96585cee4a5faf7f7e9068531110608ec8
Jan 29 06:59:18 crc kubenswrapper[4826]: I0129 06:59:18.538593 4826 generic.go:334] "Generic (PLEG): container finished" podID="41595216-47a0-4aa0-8485-894c44ca5b07" containerID="808a03a0d5c8cbf250af78409336fdd642040454f8ce174c8fd42bd47d9a20e9" exitCode=0
Jan 29 06:59:18 crc kubenswrapper[4826]: I0129 06:59:18.538683 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924d5pkp6" event={"ID":"41595216-47a0-4aa0-8485-894c44ca5b07","Type":"ContainerDied","Data":"808a03a0d5c8cbf250af78409336fdd642040454f8ce174c8fd42bd47d9a20e9"}
Jan 29 06:59:18 crc kubenswrapper[4826]: I0129 06:59:18.539259 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924d5pkp6" event={"ID":"41595216-47a0-4aa0-8485-894c44ca5b07","Type":"ContainerStarted","Data":"1ba43bdb71d5bd7fb9cc0a6d52b81d96585cee4a5faf7f7e9068531110608ec8"}
Jan 29 06:59:19 crc kubenswrapper[4826]: I0129 06:59:19.549258 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924d5pkp6" event={"ID":"41595216-47a0-4aa0-8485-894c44ca5b07","Type":"ContainerStarted","Data":"ce5a15d5063c97cc6a0912a50d29880a709d85a9f41ff0ced6cb4230aee5dd8f"}
Jan 29 06:59:20 crc kubenswrapper[4826]: I0129 06:59:20.575234 4826 generic.go:334] "Generic (PLEG): container finished" podID="41595216-47a0-4aa0-8485-894c44ca5b07" containerID="ce5a15d5063c97cc6a0912a50d29880a709d85a9f41ff0ced6cb4230aee5dd8f" exitCode=0
Jan 29 06:59:20 crc kubenswrapper[4826]: I0129 06:59:20.575331 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924d5pkp6" event={"ID":"41595216-47a0-4aa0-8485-894c44ca5b07","Type":"ContainerDied","Data":"ce5a15d5063c97cc6a0912a50d29880a709d85a9f41ff0ced6cb4230aee5dd8f"}
Jan 29 06:59:20 crc kubenswrapper[4826]: E0129 06:59:20.886777 4826 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41595216_47a0_4aa0_8485_894c44ca5b07.slice/crio-conmon-2650817cbf6ae6acce2737b67e05ac0cdf80fda775298b6e4061753307a4a3fc.scope\": RecentStats: unable to find data in memory cache]"
Jan 29 06:59:21 crc kubenswrapper[4826]: I0129 06:59:21.588366 4826 generic.go:334] "Generic (PLEG): container finished" podID="41595216-47a0-4aa0-8485-894c44ca5b07" containerID="2650817cbf6ae6acce2737b67e05ac0cdf80fda775298b6e4061753307a4a3fc" exitCode=0
Jan 29 06:59:21 crc kubenswrapper[4826]: I0129 06:59:21.588615 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924d5pkp6" event={"ID":"41595216-47a0-4aa0-8485-894c44ca5b07","Type":"ContainerDied","Data":"2650817cbf6ae6acce2737b67e05ac0cdf80fda775298b6e4061753307a4a3fc"}
Jan 29 06:59:22 crc kubenswrapper[4826]: I0129 06:59:22.979842 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924d5pkp6"
Jan 29 06:59:23 crc kubenswrapper[4826]: I0129 06:59:23.068873 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41595216-47a0-4aa0-8485-894c44ca5b07-bundle\") pod \"41595216-47a0-4aa0-8485-894c44ca5b07\" (UID: \"41595216-47a0-4aa0-8485-894c44ca5b07\") "
Jan 29 06:59:23 crc kubenswrapper[4826]: I0129 06:59:23.069049 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lswpw\" (UniqueName: \"kubernetes.io/projected/41595216-47a0-4aa0-8485-894c44ca5b07-kube-api-access-lswpw\") pod \"41595216-47a0-4aa0-8485-894c44ca5b07\" (UID: \"41595216-47a0-4aa0-8485-894c44ca5b07\") "
Jan 29 06:59:23 crc kubenswrapper[4826]: I0129 06:59:23.069189 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41595216-47a0-4aa0-8485-894c44ca5b07-util\") pod \"41595216-47a0-4aa0-8485-894c44ca5b07\" (UID: \"41595216-47a0-4aa0-8485-894c44ca5b07\") "
Jan 29 06:59:23 crc kubenswrapper[4826]: I0129 06:59:23.070551 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41595216-47a0-4aa0-8485-894c44ca5b07-bundle" (OuterVolumeSpecName: "bundle") pod "41595216-47a0-4aa0-8485-894c44ca5b07" (UID: "41595216-47a0-4aa0-8485-894c44ca5b07"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 06:59:23 crc kubenswrapper[4826]: I0129 06:59:23.078329 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41595216-47a0-4aa0-8485-894c44ca5b07-kube-api-access-lswpw" (OuterVolumeSpecName: "kube-api-access-lswpw") pod "41595216-47a0-4aa0-8485-894c44ca5b07" (UID: "41595216-47a0-4aa0-8485-894c44ca5b07"). InnerVolumeSpecName "kube-api-access-lswpw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 06:59:23 crc kubenswrapper[4826]: I0129 06:59:23.097749 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41595216-47a0-4aa0-8485-894c44ca5b07-util" (OuterVolumeSpecName: "util") pod "41595216-47a0-4aa0-8485-894c44ca5b07" (UID: "41595216-47a0-4aa0-8485-894c44ca5b07"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 06:59:23 crc kubenswrapper[4826]: I0129 06:59:23.171013 4826 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41595216-47a0-4aa0-8485-894c44ca5b07-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 06:59:23 crc kubenswrapper[4826]: I0129 06:59:23.171071 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lswpw\" (UniqueName: \"kubernetes.io/projected/41595216-47a0-4aa0-8485-894c44ca5b07-kube-api-access-lswpw\") on node \"crc\" DevicePath \"\""
Jan 29 06:59:23 crc kubenswrapper[4826]: I0129 06:59:23.171095 4826 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41595216-47a0-4aa0-8485-894c44ca5b07-util\") on node \"crc\" DevicePath \"\""
Jan 29 06:59:23 crc kubenswrapper[4826]: I0129 06:59:23.611230 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924d5pkp6" event={"ID":"41595216-47a0-4aa0-8485-894c44ca5b07","Type":"ContainerDied","Data":"1ba43bdb71d5bd7fb9cc0a6d52b81d96585cee4a5faf7f7e9068531110608ec8"}
Jan 29 06:59:23 crc kubenswrapper[4826]: I0129 06:59:23.611895 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ba43bdb71d5bd7fb9cc0a6d52b81d96585cee4a5faf7f7e9068531110608ec8"
Jan 29 06:59:23 crc kubenswrapper[4826]: I0129 06:59:23.611294 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924d5pkp6"
Jan 29 06:59:27 crc kubenswrapper[4826]: I0129 06:59:27.093104 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5c4cd4c8c8-zh2gm"]
Jan 29 06:59:27 crc kubenswrapper[4826]: E0129 06:59:27.093648 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41595216-47a0-4aa0-8485-894c44ca5b07" containerName="extract"
Jan 29 06:59:27 crc kubenswrapper[4826]: I0129 06:59:27.093660 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="41595216-47a0-4aa0-8485-894c44ca5b07" containerName="extract"
Jan 29 06:59:27 crc kubenswrapper[4826]: E0129 06:59:27.093673 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41595216-47a0-4aa0-8485-894c44ca5b07" containerName="pull"
Jan 29 06:59:27 crc kubenswrapper[4826]: I0129 06:59:27.093678 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="41595216-47a0-4aa0-8485-894c44ca5b07" containerName="pull"
Jan 29 06:59:27 crc kubenswrapper[4826]: E0129 06:59:27.093687 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41595216-47a0-4aa0-8485-894c44ca5b07" containerName="util"
Jan 29 06:59:27 crc kubenswrapper[4826]: I0129 06:59:27.093693 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="41595216-47a0-4aa0-8485-894c44ca5b07" containerName="util"
Jan 29 06:59:27 crc kubenswrapper[4826]: I0129 06:59:27.093804 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="41595216-47a0-4aa0-8485-894c44ca5b07" containerName="extract"
Jan 29 06:59:27 crc kubenswrapper[4826]: I0129 06:59:27.094223 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5c4cd4c8c8-zh2gm"
Jan 29 06:59:27 crc kubenswrapper[4826]: I0129 06:59:27.096163 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-cvdsr"
Jan 29 06:59:27 crc kubenswrapper[4826]: I0129 06:59:27.134537 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6hvm\" (UniqueName: \"kubernetes.io/projected/4248339a-3124-4593-8364-7e09ae20bd06-kube-api-access-g6hvm\") pod \"openstack-operator-controller-init-5c4cd4c8c8-zh2gm\" (UID: \"4248339a-3124-4593-8364-7e09ae20bd06\") " pod="openstack-operators/openstack-operator-controller-init-5c4cd4c8c8-zh2gm"
Jan 29 06:59:27 crc kubenswrapper[4826]: I0129 06:59:27.142438 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5c4cd4c8c8-zh2gm"]
Jan 29 06:59:27 crc kubenswrapper[4826]: I0129 06:59:27.235550 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6hvm\" (UniqueName: \"kubernetes.io/projected/4248339a-3124-4593-8364-7e09ae20bd06-kube-api-access-g6hvm\") pod \"openstack-operator-controller-init-5c4cd4c8c8-zh2gm\" (UID: \"4248339a-3124-4593-8364-7e09ae20bd06\") " pod="openstack-operators/openstack-operator-controller-init-5c4cd4c8c8-zh2gm"
Jan 29 06:59:27 crc kubenswrapper[4826]: I0129 06:59:27.254018 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6hvm\" (UniqueName: \"kubernetes.io/projected/4248339a-3124-4593-8364-7e09ae20bd06-kube-api-access-g6hvm\") pod \"openstack-operator-controller-init-5c4cd4c8c8-zh2gm\" (UID: \"4248339a-3124-4593-8364-7e09ae20bd06\") " pod="openstack-operators/openstack-operator-controller-init-5c4cd4c8c8-zh2gm"
Jan 29 06:59:27 crc kubenswrapper[4826]: I0129 06:59:27.409974 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5c4cd4c8c8-zh2gm"
Jan 29 06:59:27 crc kubenswrapper[4826]: I0129 06:59:27.886283 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5c4cd4c8c8-zh2gm"]
Jan 29 06:59:28 crc kubenswrapper[4826]: I0129 06:59:28.643764 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5c4cd4c8c8-zh2gm" event={"ID":"4248339a-3124-4593-8364-7e09ae20bd06","Type":"ContainerStarted","Data":"75e66dc68d8c97d5f668692dd3610e13b51273d05e33c2e9d694deed8ad61f56"}
Jan 29 06:59:33 crc kubenswrapper[4826]: I0129 06:59:33.702920 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5c4cd4c8c8-zh2gm" event={"ID":"4248339a-3124-4593-8364-7e09ae20bd06","Type":"ContainerStarted","Data":"47eb607ded984fb2c147a6460fc021f2858fefc7744ce172b6b80afab7f0d9f7"}
Jan 29 06:59:33 crc kubenswrapper[4826]: I0129 06:59:33.704236 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5c4cd4c8c8-zh2gm"
Jan 29 06:59:33 crc kubenswrapper[4826]: I0129 06:59:33.744644 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5c4cd4c8c8-zh2gm" podStartSLOduration=2.033733428 podStartE2EDuration="6.74461333s" podCreationTimestamp="2026-01-29 06:59:27 +0000 UTC" firstStartedPulling="2026-01-29 06:59:27.901529939 +0000 UTC m=+951.763323018" lastFinishedPulling="2026-01-29 06:59:32.612409851 +0000 UTC m=+956.474202920" observedRunningTime="2026-01-29 06:59:33.733528025 +0000 UTC m=+957.595321134" watchObservedRunningTime="2026-01-29 06:59:33.74461333 +0000 UTC m=+957.606406439"
Jan 29 06:59:35 crc kubenswrapper[4826]: I0129 06:59:35.656363 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 06:59:35 crc kubenswrapper[4826]: I0129 06:59:35.656459 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 06:59:37 crc kubenswrapper[4826]: I0129 06:59:37.412604 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5c4cd4c8c8-zh2gm"
Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.665009 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-wctjq"]
Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.666125 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-wctjq"
Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.671541 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-wpzpl"
Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.678916 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-q7fkx"]
Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.680115 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-q7fkx"
Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.682746 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-78sr7"
Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.695613 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-wctjq"]
Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.700102 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-q7fkx"]
Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.705256 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-rq6tp"]
Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.706232 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rq6tp"
Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.714825 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-tvwxc"
Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.729440 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-r5zgz"]
Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.730325 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-r5zgz"
Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.736946 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-dqb4n"]
Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.737956 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-dqb4n"
Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.738566 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-cjvtj"
Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.740602 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-pznkv"
Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.754104 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-rq6tp"]
Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.755865 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-dqb4n"]
Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.763656 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-r5zgz"]
Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.774254 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-x7x52"]
Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.776832 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgl8f\" (UniqueName: \"kubernetes.io/projected/5a0e4f93-9116-4e18-b4ad-5e6b15571199-kube-api-access-bgl8f\") pod \"glance-operator-controller-manager-8886f4c47-r5zgz\" (UID: \"5a0e4f93-9116-4e18-b4ad-5e6b15571199\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-r5zgz"
Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.776904 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5khr\" (UniqueName: \"kubernetes.io/projected/968a346c-43bd-4d96-b609-71b838d9d5b8-kube-api-access-z5khr\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-wctjq\" (UID: \"968a346c-43bd-4d96-b609-71b838d9d5b8\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-wctjq"
Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.776924 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qf7n\" (UniqueName: \"kubernetes.io/projected/c34d58a7-7472-43b5-a067-f8d98a83714c-kube-api-access-7qf7n\") pod \"cinder-operator-controller-manager-8d874c8fc-q7fkx\" (UID: \"c34d58a7-7472-43b5-a067-f8d98a83714c\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-q7fkx"
Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.776945 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s7wk\" (UniqueName: \"kubernetes.io/projected/b9652583-64a6-4726-986b-3d61a52ff7a9-kube-api-access-8s7wk\") pod \"designate-operator-controller-manager-6d9697b7f4-rq6tp\" (UID: \"b9652583-64a6-4726-986b-3d61a52ff7a9\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rq6tp"
Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.776979 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwrjh\" (UniqueName: \"kubernetes.io/projected/2d8d4014-bbb6-40e0-b2bb-235adaae50a3-kube-api-access-kwrjh\") pod \"heat-operator-controller-manager-69d6db494d-dqb4n\" (UID: \"2d8d4014-bbb6-40e0-b2bb-235adaae50a3\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-dqb4n"
Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.785860 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-x7x52"
Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.801746 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-xllr5"
Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.824508 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-x7x52"]
Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.831794 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-8t8z8"]
Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.832719 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-8t8z8"
Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.837220 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.837460 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-9j5d5"]
Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.838616 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-9j5d5"
Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.839667 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-7ql6h"
Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.849646 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-86n5m"
Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.859382 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-8t8z8"]
Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.877741 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgl8f\" (UniqueName: \"kubernetes.io/projected/5a0e4f93-9116-4e18-b4ad-5e6b15571199-kube-api-access-bgl8f\") pod \"glance-operator-controller-manager-8886f4c47-r5zgz\" (UID: \"5a0e4f93-9116-4e18-b4ad-5e6b15571199\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-r5zgz"
Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.878472 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5khr\" (UniqueName: \"kubernetes.io/projected/968a346c-43bd-4d96-b609-71b838d9d5b8-kube-api-access-z5khr\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-wctjq\" (UID: \"968a346c-43bd-4d96-b609-71b838d9d5b8\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-wctjq"
Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.878499 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qf7n\" (UniqueName: \"kubernetes.io/projected/c34d58a7-7472-43b5-a067-f8d98a83714c-kube-api-access-7qf7n\") pod \"cinder-operator-controller-manager-8d874c8fc-q7fkx\" (UID: 
\"c34d58a7-7472-43b5-a067-f8d98a83714c\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-q7fkx" Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.878522 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s7wk\" (UniqueName: \"kubernetes.io/projected/b9652583-64a6-4726-986b-3d61a52ff7a9-kube-api-access-8s7wk\") pod \"designate-operator-controller-manager-6d9697b7f4-rq6tp\" (UID: \"b9652583-64a6-4726-986b-3d61a52ff7a9\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rq6tp" Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.878562 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwrjh\" (UniqueName: \"kubernetes.io/projected/2d8d4014-bbb6-40e0-b2bb-235adaae50a3-kube-api-access-kwrjh\") pod \"heat-operator-controller-manager-69d6db494d-dqb4n\" (UID: \"2d8d4014-bbb6-40e0-b2bb-235adaae50a3\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-dqb4n" Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.882419 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-9j5d5"] Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.884245 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-j4bnt"] Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.884984 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-j4bnt" Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.891758 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-cgqc8" Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.897163 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-j4bnt"] Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.913154 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5khr\" (UniqueName: \"kubernetes.io/projected/968a346c-43bd-4d96-b609-71b838d9d5b8-kube-api-access-z5khr\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-wctjq\" (UID: \"968a346c-43bd-4d96-b609-71b838d9d5b8\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-wctjq" Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.918964 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s7wk\" (UniqueName: \"kubernetes.io/projected/b9652583-64a6-4726-986b-3d61a52ff7a9-kube-api-access-8s7wk\") pod \"designate-operator-controller-manager-6d9697b7f4-rq6tp\" (UID: \"b9652583-64a6-4726-986b-3d61a52ff7a9\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rq6tp" Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.919610 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-dbcc7"] Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.921815 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-dbcc7" Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.920575 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgl8f\" (UniqueName: \"kubernetes.io/projected/5a0e4f93-9116-4e18-b4ad-5e6b15571199-kube-api-access-bgl8f\") pod \"glance-operator-controller-manager-8886f4c47-r5zgz\" (UID: \"5a0e4f93-9116-4e18-b4ad-5e6b15571199\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-r5zgz" Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.926058 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-772r9" Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.930733 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-qxvv5"] Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.935703 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwrjh\" (UniqueName: \"kubernetes.io/projected/2d8d4014-bbb6-40e0-b2bb-235adaae50a3-kube-api-access-kwrjh\") pod \"heat-operator-controller-manager-69d6db494d-dqb4n\" (UID: \"2d8d4014-bbb6-40e0-b2bb-235adaae50a3\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-dqb4n" Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.943845 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-qxvv5" Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.949498 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-qxvv5"] Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.950282 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-xk6qc" Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.956458 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qf7n\" (UniqueName: \"kubernetes.io/projected/c34d58a7-7472-43b5-a067-f8d98a83714c-kube-api-access-7qf7n\") pod \"cinder-operator-controller-manager-8d874c8fc-q7fkx\" (UID: \"c34d58a7-7472-43b5-a067-f8d98a83714c\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-q7fkx" Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.959367 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-dbcc7"] Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.969265 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-r8fr7"] Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.970251 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-r8fr7" Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.978253 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-5djtx" Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.979370 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44rrt\" (UniqueName: \"kubernetes.io/projected/6a1bcc88-899b-4ada-9560-3e388b5d1a82-kube-api-access-44rrt\") pod \"infra-operator-controller-manager-79955696d6-8t8z8\" (UID: \"6a1bcc88-899b-4ada-9560-3e388b5d1a82\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-8t8z8" Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.979402 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6a1bcc88-899b-4ada-9560-3e388b5d1a82-cert\") pod \"infra-operator-controller-manager-79955696d6-8t8z8\" (UID: \"6a1bcc88-899b-4ada-9560-3e388b5d1a82\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-8t8z8" Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.979443 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27wrm\" (UniqueName: \"kubernetes.io/projected/784b3ddb-6085-405e-a492-78fa2d30f902-kube-api-access-27wrm\") pod \"ironic-operator-controller-manager-5f4b8bd54d-9j5d5\" (UID: \"784b3ddb-6085-405e-a492-78fa2d30f902\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-9j5d5" Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.979464 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwrzw\" (UniqueName: \"kubernetes.io/projected/0943ef83-b318-4a15-baf0-858ffb1eec9f-kube-api-access-nwrzw\") pod 
\"horizon-operator-controller-manager-5fb775575f-x7x52\" (UID: \"0943ef83-b318-4a15-baf0-858ffb1eec9f\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-x7x52" Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.982763 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-wctjq" Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.992178 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-wdlx2"] Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.993108 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-wdlx2" Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.997056 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-6w27d" Jan 29 06:59:57 crc kubenswrapper[4826]: I0129 06:59:57.997554 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-q7fkx" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.032621 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rq6tp" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.033152 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-wdlx2"] Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.051432 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-r8fr7"] Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.072312 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-dqb4n" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.089609 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd57l\" (UniqueName: \"kubernetes.io/projected/a90b0148-229f-46b4-ac43-2f2d8a89c167-kube-api-access-pd57l\") pod \"manila-operator-controller-manager-7dd968899f-dbcc7\" (UID: \"a90b0148-229f-46b4-ac43-2f2d8a89c167\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-dbcc7" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.089674 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm7lg\" (UniqueName: \"kubernetes.io/projected/790d8543-7995-41f5-a1d9-86424c85102b-kube-api-access-cm7lg\") pod \"keystone-operator-controller-manager-84f48565d4-j4bnt\" (UID: \"790d8543-7995-41f5-a1d9-86424c85102b\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-j4bnt" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.089760 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44rrt\" (UniqueName: \"kubernetes.io/projected/6a1bcc88-899b-4ada-9560-3e388b5d1a82-kube-api-access-44rrt\") pod \"infra-operator-controller-manager-79955696d6-8t8z8\" (UID: \"6a1bcc88-899b-4ada-9560-3e388b5d1a82\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-8t8z8" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.089782 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mkz6\" (UniqueName: \"kubernetes.io/projected/f5d88392-3443-4b78-a27c-7594b71de46d-kube-api-access-6mkz6\") pod \"nova-operator-controller-manager-55bff696bd-r8fr7\" (UID: \"f5d88392-3443-4b78-a27c-7594b71de46d\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-r8fr7" Jan 29 
06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.089820 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6a1bcc88-899b-4ada-9560-3e388b5d1a82-cert\") pod \"infra-operator-controller-manager-79955696d6-8t8z8\" (UID: \"6a1bcc88-899b-4ada-9560-3e388b5d1a82\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-8t8z8" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.089906 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqtf6\" (UniqueName: \"kubernetes.io/projected/e6027062-6010-4269-9798-6d31d88831ca-kube-api-access-zqtf6\") pod \"mariadb-operator-controller-manager-67bf948998-qxvv5\" (UID: \"e6027062-6010-4269-9798-6d31d88831ca\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-qxvv5" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.089927 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27wrm\" (UniqueName: \"kubernetes.io/projected/784b3ddb-6085-405e-a492-78fa2d30f902-kube-api-access-27wrm\") pod \"ironic-operator-controller-manager-5f4b8bd54d-9j5d5\" (UID: \"784b3ddb-6085-405e-a492-78fa2d30f902\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-9j5d5" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.089951 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwrzw\" (UniqueName: \"kubernetes.io/projected/0943ef83-b318-4a15-baf0-858ffb1eec9f-kube-api-access-nwrzw\") pod \"horizon-operator-controller-manager-5fb775575f-x7x52\" (UID: \"0943ef83-b318-4a15-baf0-858ffb1eec9f\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-x7x52" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.093682 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-r5zgz" Jan 29 06:59:58 crc kubenswrapper[4826]: E0129 06:59:58.113542 4826 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 06:59:58 crc kubenswrapper[4826]: E0129 06:59:58.113798 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a1bcc88-899b-4ada-9560-3e388b5d1a82-cert podName:6a1bcc88-899b-4ada-9560-3e388b5d1a82 nodeName:}" failed. No retries permitted until 2026-01-29 06:59:58.613745757 +0000 UTC m=+982.475538826 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6a1bcc88-899b-4ada-9560-3e388b5d1a82-cert") pod "infra-operator-controller-manager-79955696d6-8t8z8" (UID: "6a1bcc88-899b-4ada-9560-3e388b5d1a82") : secret "infra-operator-webhook-server-cert" not found Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.126802 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwrzw\" (UniqueName: \"kubernetes.io/projected/0943ef83-b318-4a15-baf0-858ffb1eec9f-kube-api-access-nwrzw\") pod \"horizon-operator-controller-manager-5fb775575f-x7x52\" (UID: \"0943ef83-b318-4a15-baf0-858ffb1eec9f\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-x7x52" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.133713 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-jrkdp"] Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.136881 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-jrkdp" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.157615 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44rrt\" (UniqueName: \"kubernetes.io/projected/6a1bcc88-899b-4ada-9560-3e388b5d1a82-kube-api-access-44rrt\") pod \"infra-operator-controller-manager-79955696d6-8t8z8\" (UID: \"6a1bcc88-899b-4ada-9560-3e388b5d1a82\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-8t8z8" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.166041 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27wrm\" (UniqueName: \"kubernetes.io/projected/784b3ddb-6085-405e-a492-78fa2d30f902-kube-api-access-27wrm\") pod \"ironic-operator-controller-manager-5f4b8bd54d-9j5d5\" (UID: \"784b3ddb-6085-405e-a492-78fa2d30f902\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-9j5d5" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.184256 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-q725k" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.189059 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-9j5d5" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.195984 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-jrkdp"] Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.196721 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khklm\" (UniqueName: \"kubernetes.io/projected/f1f53c69-e17d-44df-839f-9f354d1b5b24-kube-api-access-khklm\") pod \"neutron-operator-controller-manager-585dbc889-wdlx2\" (UID: \"f1f53c69-e17d-44df-839f-9f354d1b5b24\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-wdlx2" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.201000 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqtf6\" (UniqueName: \"kubernetes.io/projected/e6027062-6010-4269-9798-6d31d88831ca-kube-api-access-zqtf6\") pod \"mariadb-operator-controller-manager-67bf948998-qxvv5\" (UID: \"e6027062-6010-4269-9798-6d31d88831ca\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-qxvv5" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.201199 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd57l\" (UniqueName: \"kubernetes.io/projected/a90b0148-229f-46b4-ac43-2f2d8a89c167-kube-api-access-pd57l\") pod \"manila-operator-controller-manager-7dd968899f-dbcc7\" (UID: \"a90b0148-229f-46b4-ac43-2f2d8a89c167\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-dbcc7" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.201390 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm7lg\" (UniqueName: \"kubernetes.io/projected/790d8543-7995-41f5-a1d9-86424c85102b-kube-api-access-cm7lg\") pod 
\"keystone-operator-controller-manager-84f48565d4-j4bnt\" (UID: \"790d8543-7995-41f5-a1d9-86424c85102b\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-j4bnt" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.201555 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mkz6\" (UniqueName: \"kubernetes.io/projected/f5d88392-3443-4b78-a27c-7594b71de46d-kube-api-access-6mkz6\") pod \"nova-operator-controller-manager-55bff696bd-r8fr7\" (UID: \"f5d88392-3443-4b78-a27c-7594b71de46d\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-r8fr7" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.218726 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-p9gf8"] Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.221190 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqtf6\" (UniqueName: \"kubernetes.io/projected/e6027062-6010-4269-9798-6d31d88831ca-kube-api-access-zqtf6\") pod \"mariadb-operator-controller-manager-67bf948998-qxvv5\" (UID: \"e6027062-6010-4269-9798-6d31d88831ca\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-qxvv5" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.222972 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-p9gf8" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.225726 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-hsjcj"] Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.231117 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-vq42c" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.231987 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm7lg\" (UniqueName: \"kubernetes.io/projected/790d8543-7995-41f5-a1d9-86424c85102b-kube-api-access-cm7lg\") pod \"keystone-operator-controller-manager-84f48565d4-j4bnt\" (UID: \"790d8543-7995-41f5-a1d9-86424c85102b\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-j4bnt" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.231345 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd57l\" (UniqueName: \"kubernetes.io/projected/a90b0148-229f-46b4-ac43-2f2d8a89c167-kube-api-access-pd57l\") pod \"manila-operator-controller-manager-7dd968899f-dbcc7\" (UID: \"a90b0148-229f-46b4-ac43-2f2d8a89c167\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-dbcc7" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.235254 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-p9gf8"] Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.235377 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-hsjcj" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.237157 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-9vkmm"] Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.246272 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9vkmm" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.249901 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-hkgbs"] Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.250719 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-hkgbs" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.254213 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-rphtx" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.254390 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.254549 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mkz6\" (UniqueName: \"kubernetes.io/projected/f5d88392-3443-4b78-a27c-7594b71de46d-kube-api-access-6mkz6\") pod \"nova-operator-controller-manager-55bff696bd-r8fr7\" (UID: \"f5d88392-3443-4b78-a27c-7594b71de46d\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-r8fr7" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.254839 4826 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-t9vf6" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.255078 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-lc9xb" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.266289 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-hkgbs"] Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.285952 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-9vkmm"] Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.286031 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-hsjcj"] Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.292123 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-dbcc7" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.292461 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xglxb"] Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.297781 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xglxb" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.302361 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-g5v64"] Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.303552 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-g5v64" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.303797 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-jg46q" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.308603 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khklm\" (UniqueName: \"kubernetes.io/projected/f1f53c69-e17d-44df-839f-9f354d1b5b24-kube-api-access-khklm\") pod \"neutron-operator-controller-manager-585dbc889-wdlx2\" (UID: \"f1f53c69-e17d-44df-839f-9f354d1b5b24\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-wdlx2" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.308761 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkfzt\" (UniqueName: \"kubernetes.io/projected/fa039621-ed69-4bc7-8d1c-233b03283aa0-kube-api-access-gkfzt\") pod \"octavia-operator-controller-manager-6687f8d877-jrkdp\" (UID: \"fa039621-ed69-4bc7-8d1c-233b03283aa0\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-jrkdp" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.313170 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xglxb"] Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.313514 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-qxvv5" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.314657 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-g5v64"] Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.314940 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-rcfjh" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.338505 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-kk4wp"] Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.339426 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-kk4wp" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.341188 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-t67qh" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.345584 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-kk4wp"] Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.345778 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khklm\" (UniqueName: \"kubernetes.io/projected/f1f53c69-e17d-44df-839f-9f354d1b5b24-kube-api-access-khklm\") pod \"neutron-operator-controller-manager-585dbc889-wdlx2\" (UID: \"f1f53c69-e17d-44df-839f-9f354d1b5b24\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-wdlx2" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.388076 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-r8fr7" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.401342 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7b54f464f6-dfl95"] Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.402388 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-dfl95" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.405225 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.410357 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-p6255" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.410552 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.411374 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bbwx\" (UniqueName: \"kubernetes.io/projected/2136722e-10e4-4fdd-8c1c-513be2cda722-kube-api-access-8bbwx\") pod \"placement-operator-controller-manager-5b964cf4cd-9vkmm\" (UID: \"2136722e-10e4-4fdd-8c1c-513be2cda722\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9vkmm" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.411431 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntncb\" (UniqueName: \"kubernetes.io/projected/c81e6ebc-17e4-4b52-8e78-d7fc20195c4b-kube-api-access-ntncb\") pod \"telemetry-operator-controller-manager-64b5b76f97-xglxb\" (UID: \"c81e6ebc-17e4-4b52-8e78-d7fc20195c4b\") " 
pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xglxb" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.411477 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d6mn\" (UniqueName: \"kubernetes.io/projected/34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6-kube-api-access-9d6mn\") pod \"openstack-baremetal-operator-controller-manager-d5d667db8-hsjcj\" (UID: \"34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-hsjcj" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.411496 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bft6g\" (UniqueName: \"kubernetes.io/projected/5723d99a-cfa2-4f46-9e29-a7f075e7d5fa-kube-api-access-bft6g\") pod \"swift-operator-controller-manager-68fc8c869-hkgbs\" (UID: \"5723d99a-cfa2-4f46-9e29-a7f075e7d5fa\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-hkgbs" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.411520 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj5lb\" (UniqueName: \"kubernetes.io/projected/3a41b9d7-4fd6-4f71-af8f-06751f6cb0dd-kube-api-access-nj5lb\") pod \"ovn-operator-controller-manager-788c46999f-p9gf8\" (UID: \"3a41b9d7-4fd6-4f71-af8f-06751f6cb0dd\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-p9gf8" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.411544 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6-cert\") pod \"openstack-baremetal-operator-controller-manager-d5d667db8-hsjcj\" (UID: \"34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-hsjcj" Jan 
29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.411566 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkfzt\" (UniqueName: \"kubernetes.io/projected/fa039621-ed69-4bc7-8d1c-233b03283aa0-kube-api-access-gkfzt\") pod \"octavia-operator-controller-manager-6687f8d877-jrkdp\" (UID: \"fa039621-ed69-4bc7-8d1c-233b03283aa0\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-jrkdp" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.411592 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqrgj\" (UniqueName: \"kubernetes.io/projected/3a490a0b-5880-4b3a-847b-a5ffbdd2329b-kube-api-access-mqrgj\") pod \"test-operator-controller-manager-56f8bfcd9f-g5v64\" (UID: \"3a490a0b-5880-4b3a-847b-a5ffbdd2329b\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-g5v64" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.424676 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-x7x52" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.438151 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-wdlx2" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.445000 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkfzt\" (UniqueName: \"kubernetes.io/projected/fa039621-ed69-4bc7-8d1c-233b03283aa0-kube-api-access-gkfzt\") pod \"octavia-operator-controller-manager-6687f8d877-jrkdp\" (UID: \"fa039621-ed69-4bc7-8d1c-233b03283aa0\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-jrkdp" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.447153 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7b54f464f6-dfl95"] Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.490387 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-q6pbm"] Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.491506 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-q6pbm" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.496591 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-jnpxk" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.509036 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-j4bnt" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.511279 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-jrkdp" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.512143 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvzb4\" (UniqueName: \"kubernetes.io/projected/ca001b72-69f7-488e-8410-0f046fb810bb-kube-api-access-gvzb4\") pod \"watcher-operator-controller-manager-564965969-kk4wp\" (UID: \"ca001b72-69f7-488e-8410-0f046fb810bb\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-kk4wp" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.512180 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-metrics-certs\") pod \"openstack-operator-controller-manager-7b54f464f6-dfl95\" (UID: \"e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5\") " pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-dfl95" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.512215 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d6mn\" (UniqueName: \"kubernetes.io/projected/34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6-kube-api-access-9d6mn\") pod \"openstack-baremetal-operator-controller-manager-d5d667db8-hsjcj\" (UID: \"34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-hsjcj" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.512234 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bft6g\" (UniqueName: \"kubernetes.io/projected/5723d99a-cfa2-4f46-9e29-a7f075e7d5fa-kube-api-access-bft6g\") pod \"swift-operator-controller-manager-68fc8c869-hkgbs\" (UID: \"5723d99a-cfa2-4f46-9e29-a7f075e7d5fa\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-hkgbs" Jan 29 
06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.512256 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj5lb\" (UniqueName: \"kubernetes.io/projected/3a41b9d7-4fd6-4f71-af8f-06751f6cb0dd-kube-api-access-nj5lb\") pod \"ovn-operator-controller-manager-788c46999f-p9gf8\" (UID: \"3a41b9d7-4fd6-4f71-af8f-06751f6cb0dd\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-p9gf8" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.512278 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6-cert\") pod \"openstack-baremetal-operator-controller-manager-d5d667db8-hsjcj\" (UID: \"34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-hsjcj" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.512313 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75t5z\" (UniqueName: \"kubernetes.io/projected/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-kube-api-access-75t5z\") pod \"openstack-operator-controller-manager-7b54f464f6-dfl95\" (UID: \"e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5\") " pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-dfl95" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.512337 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqrgj\" (UniqueName: \"kubernetes.io/projected/3a490a0b-5880-4b3a-847b-a5ffbdd2329b-kube-api-access-mqrgj\") pod \"test-operator-controller-manager-56f8bfcd9f-g5v64\" (UID: \"3a490a0b-5880-4b3a-847b-a5ffbdd2329b\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-g5v64" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.512422 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-webhook-certs\") pod \"openstack-operator-controller-manager-7b54f464f6-dfl95\" (UID: \"e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5\") " pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-dfl95" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.512704 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bbwx\" (UniqueName: \"kubernetes.io/projected/2136722e-10e4-4fdd-8c1c-513be2cda722-kube-api-access-8bbwx\") pod \"placement-operator-controller-manager-5b964cf4cd-9vkmm\" (UID: \"2136722e-10e4-4fdd-8c1c-513be2cda722\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9vkmm" Jan 29 06:59:58 crc kubenswrapper[4826]: E0129 06:59:58.512716 4826 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.512737 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntncb\" (UniqueName: \"kubernetes.io/projected/c81e6ebc-17e4-4b52-8e78-d7fc20195c4b-kube-api-access-ntncb\") pod \"telemetry-operator-controller-manager-64b5b76f97-xglxb\" (UID: \"c81e6ebc-17e4-4b52-8e78-d7fc20195c4b\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xglxb" Jan 29 06:59:58 crc kubenswrapper[4826]: E0129 06:59:58.512758 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6-cert podName:34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6 nodeName:}" failed. No retries permitted until 2026-01-29 06:59:59.012744608 +0000 UTC m=+982.874537667 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6-cert") pod "openstack-baremetal-operator-controller-manager-d5d667db8-hsjcj" (UID: "34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.550886 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-q6pbm"] Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.560853 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bbwx\" (UniqueName: \"kubernetes.io/projected/2136722e-10e4-4fdd-8c1c-513be2cda722-kube-api-access-8bbwx\") pod \"placement-operator-controller-manager-5b964cf4cd-9vkmm\" (UID: \"2136722e-10e4-4fdd-8c1c-513be2cda722\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9vkmm" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.562197 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d6mn\" (UniqueName: \"kubernetes.io/projected/34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6-kube-api-access-9d6mn\") pod \"openstack-baremetal-operator-controller-manager-d5d667db8-hsjcj\" (UID: \"34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-hsjcj" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.581405 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj5lb\" (UniqueName: \"kubernetes.io/projected/3a41b9d7-4fd6-4f71-af8f-06751f6cb0dd-kube-api-access-nj5lb\") pod \"ovn-operator-controller-manager-788c46999f-p9gf8\" (UID: \"3a41b9d7-4fd6-4f71-af8f-06751f6cb0dd\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-p9gf8" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.581440 4826 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mqrgj\" (UniqueName: \"kubernetes.io/projected/3a490a0b-5880-4b3a-847b-a5ffbdd2329b-kube-api-access-mqrgj\") pod \"test-operator-controller-manager-56f8bfcd9f-g5v64\" (UID: \"3a490a0b-5880-4b3a-847b-a5ffbdd2329b\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-g5v64" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.581961 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntncb\" (UniqueName: \"kubernetes.io/projected/c81e6ebc-17e4-4b52-8e78-d7fc20195c4b-kube-api-access-ntncb\") pod \"telemetry-operator-controller-manager-64b5b76f97-xglxb\" (UID: \"c81e6ebc-17e4-4b52-8e78-d7fc20195c4b\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xglxb" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.594727 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bft6g\" (UniqueName: \"kubernetes.io/projected/5723d99a-cfa2-4f46-9e29-a7f075e7d5fa-kube-api-access-bft6g\") pod \"swift-operator-controller-manager-68fc8c869-hkgbs\" (UID: \"5723d99a-cfa2-4f46-9e29-a7f075e7d5fa\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-hkgbs" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.597796 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-g5v64" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.609625 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xglxb" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.614054 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-metrics-certs\") pod \"openstack-operator-controller-manager-7b54f464f6-dfl95\" (UID: \"e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5\") " pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-dfl95" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.614131 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmwtb\" (UniqueName: \"kubernetes.io/projected/6f966d5b-f4c8-4921-9161-096ca391b81f-kube-api-access-fmwtb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-q6pbm\" (UID: \"6f966d5b-f4c8-4921-9161-096ca391b81f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-q6pbm" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.614182 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75t5z\" (UniqueName: \"kubernetes.io/projected/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-kube-api-access-75t5z\") pod \"openstack-operator-controller-manager-7b54f464f6-dfl95\" (UID: \"e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5\") " pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-dfl95" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.614224 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-webhook-certs\") pod \"openstack-operator-controller-manager-7b54f464f6-dfl95\" (UID: \"e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5\") " pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-dfl95" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 
06:59:58.614261 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6a1bcc88-899b-4ada-9560-3e388b5d1a82-cert\") pod \"infra-operator-controller-manager-79955696d6-8t8z8\" (UID: \"6a1bcc88-899b-4ada-9560-3e388b5d1a82\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-8t8z8" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.614312 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvzb4\" (UniqueName: \"kubernetes.io/projected/ca001b72-69f7-488e-8410-0f046fb810bb-kube-api-access-gvzb4\") pod \"watcher-operator-controller-manager-564965969-kk4wp\" (UID: \"ca001b72-69f7-488e-8410-0f046fb810bb\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-kk4wp" Jan 29 06:59:58 crc kubenswrapper[4826]: E0129 06:59:58.614632 4826 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 06:59:58 crc kubenswrapper[4826]: E0129 06:59:58.614688 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-metrics-certs podName:e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5 nodeName:}" failed. No retries permitted until 2026-01-29 06:59:59.114669764 +0000 UTC m=+982.976462833 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-metrics-certs") pod "openstack-operator-controller-manager-7b54f464f6-dfl95" (UID: "e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5") : secret "metrics-server-cert" not found Jan 29 06:59:58 crc kubenswrapper[4826]: E0129 06:59:58.614963 4826 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 06:59:58 crc kubenswrapper[4826]: E0129 06:59:58.615038 4826 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 06:59:58 crc kubenswrapper[4826]: E0129 06:59:58.615121 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a1bcc88-899b-4ada-9560-3e388b5d1a82-cert podName:6a1bcc88-899b-4ada-9560-3e388b5d1a82 nodeName:}" failed. No retries permitted until 2026-01-29 06:59:59.615088194 +0000 UTC m=+983.476881263 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6a1bcc88-899b-4ada-9560-3e388b5d1a82-cert") pod "infra-operator-controller-manager-79955696d6-8t8z8" (UID: "6a1bcc88-899b-4ada-9560-3e388b5d1a82") : secret "infra-operator-webhook-server-cert" not found Jan 29 06:59:58 crc kubenswrapper[4826]: E0129 06:59:58.615164 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-webhook-certs podName:e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5 nodeName:}" failed. No retries permitted until 2026-01-29 06:59:59.115141135 +0000 UTC m=+982.976934204 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-webhook-certs") pod "openstack-operator-controller-manager-7b54f464f6-dfl95" (UID: "e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5") : secret "webhook-server-cert" not found Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.641348 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvzb4\" (UniqueName: \"kubernetes.io/projected/ca001b72-69f7-488e-8410-0f046fb810bb-kube-api-access-gvzb4\") pod \"watcher-operator-controller-manager-564965969-kk4wp\" (UID: \"ca001b72-69f7-488e-8410-0f046fb810bb\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-kk4wp" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.645358 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75t5z\" (UniqueName: \"kubernetes.io/projected/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-kube-api-access-75t5z\") pod \"openstack-operator-controller-manager-7b54f464f6-dfl95\" (UID: \"e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5\") " pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-dfl95" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.663953 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-kk4wp" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.691265 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-p9gf8" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.715867 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmwtb\" (UniqueName: \"kubernetes.io/projected/6f966d5b-f4c8-4921-9161-096ca391b81f-kube-api-access-fmwtb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-q6pbm\" (UID: \"6f966d5b-f4c8-4921-9161-096ca391b81f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-q6pbm" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.746542 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmwtb\" (UniqueName: \"kubernetes.io/projected/6f966d5b-f4c8-4921-9161-096ca391b81f-kube-api-access-fmwtb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-q6pbm\" (UID: \"6f966d5b-f4c8-4921-9161-096ca391b81f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-q6pbm" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.780903 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9vkmm" Jan 29 06:59:58 crc kubenswrapper[4826]: I0129 06:59:58.798114 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-hkgbs" Jan 29 06:59:59 crc kubenswrapper[4826]: I0129 06:59:59.018349 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-q6pbm" Jan 29 06:59:59 crc kubenswrapper[4826]: I0129 06:59:59.022382 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6-cert\") pod \"openstack-baremetal-operator-controller-manager-d5d667db8-hsjcj\" (UID: \"34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-hsjcj" Jan 29 06:59:59 crc kubenswrapper[4826]: E0129 06:59:59.022549 4826 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 06:59:59 crc kubenswrapper[4826]: E0129 06:59:59.022602 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6-cert podName:34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6 nodeName:}" failed. No retries permitted until 2026-01-29 07:00:00.02258786 +0000 UTC m=+983.884380929 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6-cert") pod "openstack-baremetal-operator-controller-manager-d5d667db8-hsjcj" (UID: "34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 06:59:59 crc kubenswrapper[4826]: I0129 06:59:59.123326 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-webhook-certs\") pod \"openstack-operator-controller-manager-7b54f464f6-dfl95\" (UID: \"e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5\") " pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-dfl95" Jan 29 06:59:59 crc kubenswrapper[4826]: I0129 06:59:59.123478 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-metrics-certs\") pod \"openstack-operator-controller-manager-7b54f464f6-dfl95\" (UID: \"e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5\") " pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-dfl95" Jan 29 06:59:59 crc kubenswrapper[4826]: E0129 06:59:59.123509 4826 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 06:59:59 crc kubenswrapper[4826]: E0129 06:59:59.123579 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-webhook-certs podName:e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5 nodeName:}" failed. No retries permitted until 2026-01-29 07:00:00.123565201 +0000 UTC m=+983.985358270 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-webhook-certs") pod "openstack-operator-controller-manager-7b54f464f6-dfl95" (UID: "e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5") : secret "webhook-server-cert" not found Jan 29 06:59:59 crc kubenswrapper[4826]: E0129 06:59:59.123700 4826 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 06:59:59 crc kubenswrapper[4826]: E0129 06:59:59.123799 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-metrics-certs podName:e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5 nodeName:}" failed. No retries permitted until 2026-01-29 07:00:00.123773576 +0000 UTC m=+983.985566685 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-metrics-certs") pod "openstack-operator-controller-manager-7b54f464f6-dfl95" (UID: "e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5") : secret "metrics-server-cert" not found Jan 29 06:59:59 crc kubenswrapper[4826]: I0129 06:59:59.504218 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-q7fkx"] Jan 29 06:59:59 crc kubenswrapper[4826]: I0129 06:59:59.549700 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-rq6tp"] Jan 29 06:59:59 crc kubenswrapper[4826]: I0129 06:59:59.574135 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-qxvv5"] Jan 29 06:59:59 crc kubenswrapper[4826]: I0129 06:59:59.590356 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-dbcc7"] Jan 29 06:59:59 crc kubenswrapper[4826]: I0129 
06:59:59.640531 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6a1bcc88-899b-4ada-9560-3e388b5d1a82-cert\") pod \"infra-operator-controller-manager-79955696d6-8t8z8\" (UID: \"6a1bcc88-899b-4ada-9560-3e388b5d1a82\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-8t8z8" Jan 29 06:59:59 crc kubenswrapper[4826]: E0129 06:59:59.640755 4826 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 06:59:59 crc kubenswrapper[4826]: E0129 06:59:59.640828 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a1bcc88-899b-4ada-9560-3e388b5d1a82-cert podName:6a1bcc88-899b-4ada-9560-3e388b5d1a82 nodeName:}" failed. No retries permitted until 2026-01-29 07:00:01.64080761 +0000 UTC m=+985.502600679 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6a1bcc88-899b-4ada-9560-3e388b5d1a82-cert") pod "infra-operator-controller-manager-79955696d6-8t8z8" (UID: "6a1bcc88-899b-4ada-9560-3e388b5d1a82") : secret "infra-operator-webhook-server-cert" not found Jan 29 06:59:59 crc kubenswrapper[4826]: I0129 06:59:59.661769 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-wctjq"] Jan 29 06:59:59 crc kubenswrapper[4826]: I0129 06:59:59.686025 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xglxb"] Jan 29 06:59:59 crc kubenswrapper[4826]: W0129 06:59:59.692650 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5d88392_3443_4b78_a27c_7594b71de46d.slice/crio-c5c8894cbdaf267d320c28cee77d6fd5dac5e5bd58894c85f9b86b7e31b47060 WatchSource:0}: Error finding container 
c5c8894cbdaf267d320c28cee77d6fd5dac5e5bd58894c85f9b86b7e31b47060: Status 404 returned error can't find the container with id c5c8894cbdaf267d320c28cee77d6fd5dac5e5bd58894c85f9b86b7e31b47060 Jan 29 06:59:59 crc kubenswrapper[4826]: I0129 06:59:59.695388 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-dqb4n"] Jan 29 06:59:59 crc kubenswrapper[4826]: I0129 06:59:59.719083 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-r8fr7"] Jan 29 06:59:59 crc kubenswrapper[4826]: I0129 06:59:59.733140 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-r5zgz"] Jan 29 06:59:59 crc kubenswrapper[4826]: I0129 06:59:59.747154 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-g5v64"] Jan 29 06:59:59 crc kubenswrapper[4826]: E0129 06:59:59.755384 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nwrzw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5fb775575f-x7x52_openstack-operators(0943ef83-b318-4a15-baf0-858ffb1eec9f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 06:59:59 crc kubenswrapper[4826]: I0129 06:59:59.756728 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-9j5d5"] Jan 29 06:59:59 crc kubenswrapper[4826]: E0129 06:59:59.757021 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-x7x52" podUID="0943ef83-b318-4a15-baf0-858ffb1eec9f" Jan 29 06:59:59 crc kubenswrapper[4826]: W0129 06:59:59.764024 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5723d99a_cfa2_4f46_9e29_a7f075e7d5fa.slice/crio-1f0984a216196db1384cca27b03f8444b82c2d5c251cb9520912be0f84ca7640 WatchSource:0}: Error finding container 1f0984a216196db1384cca27b03f8444b82c2d5c251cb9520912be0f84ca7640: Status 404 returned error can't find the container with id 1f0984a216196db1384cca27b03f8444b82c2d5c251cb9520912be0f84ca7640 Jan 29 06:59:59 crc kubenswrapper[4826]: I0129 06:59:59.764522 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-j4bnt"] Jan 29 06:59:59 crc kubenswrapper[4826]: E0129 06:59:59.765404 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cm7lg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-84f48565d4-j4bnt_openstack-operators(790d8543-7995-41f5-a1d9-86424c85102b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 06:59:59 crc kubenswrapper[4826]: W0129 06:59:59.765934 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca001b72_69f7_488e_8410_0f046fb810bb.slice/crio-19c49314f59db98da3cea4d2291c358c0342c1c84b14e7da87341e54f28d927f WatchSource:0}: Error finding container 
19c49314f59db98da3cea4d2291c358c0342c1c84b14e7da87341e54f28d927f: Status 404 returned error can't find the container with id 19c49314f59db98da3cea4d2291c358c0342c1c84b14e7da87341e54f28d927f Jan 29 06:59:59 crc kubenswrapper[4826]: E0129 06:59:59.766494 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-j4bnt" podUID="790d8543-7995-41f5-a1d9-86424c85102b" Jan 29 06:59:59 crc kubenswrapper[4826]: E0129 06:59:59.767765 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bft6g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-hkgbs_openstack-operators(5723d99a-cfa2-4f46-9e29-a7f075e7d5fa): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 06:59:59 crc kubenswrapper[4826]: E0129 06:59:59.769032 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-hkgbs" podUID="5723d99a-cfa2-4f46-9e29-a7f075e7d5fa" Jan 29 06:59:59 crc kubenswrapper[4826]: E0129 06:59:59.769624 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nj5lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-p9gf8_openstack-operators(3a41b9d7-4fd6-4f71-af8f-06751f6cb0dd): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 06:59:59 crc kubenswrapper[4826]: E0129 06:59:59.770701 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-p9gf8" podUID="3a41b9d7-4fd6-4f71-af8f-06751f6cb0dd" Jan 29 06:59:59 crc kubenswrapper[4826]: W0129 06:59:59.770787 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1f53c69_e17d_44df_839f_9f354d1b5b24.slice/crio-f0c9db6155da4201e2aee3b503cf3768d5739275487b7036fc942f3b763e66b3 WatchSource:0}: Error finding container f0c9db6155da4201e2aee3b503cf3768d5739275487b7036fc942f3b763e66b3: Status 404 returned error can't find the container with id f0c9db6155da4201e2aee3b503cf3768d5739275487b7036fc942f3b763e66b3 Jan 29 06:59:59 crc kubenswrapper[4826]: I0129 06:59:59.771738 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-x7x52"] Jan 29 06:59:59 crc kubenswrapper[4826]: E0129 06:59:59.772474 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gvzb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-kk4wp_openstack-operators(ca001b72-69f7-488e-8410-0f046fb810bb): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 06:59:59 crc kubenswrapper[4826]: E0129 06:59:59.773812 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-kk4wp" podUID="ca001b72-69f7-488e-8410-0f046fb810bb" Jan 29 06:59:59 crc kubenswrapper[4826]: E0129 06:59:59.775938 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-khklm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-585dbc889-wdlx2_openstack-operators(f1f53c69-e17d-44df-839f-9f354d1b5b24): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 06:59:59 crc kubenswrapper[4826]: I0129 06:59:59.776120 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-p9gf8"] Jan 29 06:59:59 crc kubenswrapper[4826]: E0129 06:59:59.778206 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-wdlx2" podUID="f1f53c69-e17d-44df-839f-9f354d1b5b24" Jan 29 06:59:59 crc kubenswrapper[4826]: I0129 06:59:59.780655 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-hkgbs"] Jan 29 06:59:59 crc kubenswrapper[4826]: I0129 06:59:59.784586 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-kk4wp"] Jan 29 06:59:59 crc kubenswrapper[4826]: I0129 06:59:59.791488 4826 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-wdlx2"] Jan 29 06:59:59 crc kubenswrapper[4826]: I0129 06:59:59.822159 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-jrkdp"] Jan 29 06:59:59 crc kubenswrapper[4826]: W0129 06:59:59.824402 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa039621_ed69_4bc7_8d1c_233b03283aa0.slice/crio-88271aec3ef9f977e7b825729f68d149783e6edb9d66e9459b2541c395ae3027 WatchSource:0}: Error finding container 88271aec3ef9f977e7b825729f68d149783e6edb9d66e9459b2541c395ae3027: Status 404 returned error can't find the container with id 88271aec3ef9f977e7b825729f68d149783e6edb9d66e9459b2541c395ae3027 Jan 29 06:59:59 crc kubenswrapper[4826]: E0129 06:59:59.829711 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gkfzt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-6687f8d877-jrkdp_openstack-operators(fa039621-ed69-4bc7-8d1c-233b03283aa0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 06:59:59 crc kubenswrapper[4826]: E0129 06:59:59.831597 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-jrkdp" podUID="fa039621-ed69-4bc7-8d1c-233b03283aa0" Jan 29 06:59:59 crc 
kubenswrapper[4826]: I0129 06:59:59.839131 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-q6pbm"] Jan 29 06:59:59 crc kubenswrapper[4826]: I0129 06:59:59.843576 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-9vkmm"] Jan 29 06:59:59 crc kubenswrapper[4826]: E0129 06:59:59.854150 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8bbwx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-9vkmm_openstack-operators(2136722e-10e4-4fdd-8c1c-513be2cda722): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 06:59:59 crc kubenswrapper[4826]: E0129 06:59:59.855353 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9vkmm" podUID="2136722e-10e4-4fdd-8c1c-513be2cda722" Jan 29 06:59:59 crc kubenswrapper[4826]: W0129 06:59:59.858449 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f966d5b_f4c8_4921_9161_096ca391b81f.slice/crio-bb119340348fdfba5b87fa9239bb5c8ea7db5213ef535eaf4eb36448746ee7d7 WatchSource:0}: Error finding container bb119340348fdfba5b87fa9239bb5c8ea7db5213ef535eaf4eb36448746ee7d7: Status 404 returned error can't find the container with id bb119340348fdfba5b87fa9239bb5c8ea7db5213ef535eaf4eb36448746ee7d7 Jan 29 06:59:59 crc kubenswrapper[4826]: E0129 06:59:59.861713 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fmwtb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-q6pbm_openstack-operators(6f966d5b-f4c8-4921-9161-096ca391b81f): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 29 06:59:59 crc kubenswrapper[4826]: E0129 06:59:59.863041    4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-q6pbm" podUID="6f966d5b-f4c8-4921-9161-096ca391b81f"
Jan 29 06:59:59 crc kubenswrapper[4826]: I0129 06:59:59.939993    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-hkgbs" event={"ID":"5723d99a-cfa2-4f46-9e29-a7f075e7d5fa","Type":"ContainerStarted","Data":"1f0984a216196db1384cca27b03f8444b82c2d5c251cb9520912be0f84ca7640"}
Jan 29 06:59:59 crc kubenswrapper[4826]: E0129 06:59:59.943207    4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-hkgbs" podUID="5723d99a-cfa2-4f46-9e29-a7f075e7d5fa"
Jan 29 06:59:59 crc kubenswrapper[4826]: I0129 06:59:59.943697    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-p9gf8" event={"ID":"3a41b9d7-4fd6-4f71-af8f-06751f6cb0dd","Type":"ContainerStarted","Data":"8865e8a0eda639b55dfbeb343bf5a7bdb325045214bc7a23cfcab09537b38ae4"}
Jan 29 06:59:59 crc kubenswrapper[4826]: E0129 06:59:59.946030    4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-p9gf8" podUID="3a41b9d7-4fd6-4f71-af8f-06751f6cb0dd"
Jan 29 06:59:59 crc kubenswrapper[4826]: I0129 06:59:59.951596    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xglxb" event={"ID":"c81e6ebc-17e4-4b52-8e78-d7fc20195c4b","Type":"ContainerStarted","Data":"8c3ac0f122ae3bfea712a1888d1db928a30e3da5ed86901e8d6867cbb4adfc51"}
Jan 29 06:59:59 crc kubenswrapper[4826]: I0129 06:59:59.953329    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-r8fr7" event={"ID":"f5d88392-3443-4b78-a27c-7594b71de46d","Type":"ContainerStarted","Data":"c5c8894cbdaf267d320c28cee77d6fd5dac5e5bd58894c85f9b86b7e31b47060"}
Jan 29 06:59:59 crc kubenswrapper[4826]: I0129 06:59:59.956035    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-dbcc7" event={"ID":"a90b0148-229f-46b4-ac43-2f2d8a89c167","Type":"ContainerStarted","Data":"66e9eb562a325dd27cc35360f4f22cf1f7fa4271f01c1364fcd857e4328b666c"}
Jan 29 06:59:59 crc kubenswrapper[4826]: I0129 06:59:59.960882    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-qxvv5" event={"ID":"e6027062-6010-4269-9798-6d31d88831ca","Type":"ContainerStarted","Data":"a4c896ea90f2ef47f28f65306758faff34169818a94f394aad6160393f2cac1d"}
Jan 29 06:59:59 crc kubenswrapper[4826]: I0129 06:59:59.964958    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-9j5d5" event={"ID":"784b3ddb-6085-405e-a492-78fa2d30f902","Type":"ContainerStarted","Data":"462dc30f3e7d17c8e7e80723572727b32a414ed631d6ec194d087a2205ffc2fc"}
Jan 29 06:59:59 crc kubenswrapper[4826]: I0129 06:59:59.968288    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-j4bnt" event={"ID":"790d8543-7995-41f5-a1d9-86424c85102b","Type":"ContainerStarted","Data":"118eb205b885ca538c719e5aad501d10364e20ab1de9f5f4029bb696fe389d98"}
Jan 29 06:59:59 crc kubenswrapper[4826]: E0129 06:59:59.973460    4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-j4bnt" podUID="790d8543-7995-41f5-a1d9-86424c85102b"
Jan 29 06:59:59 crc kubenswrapper[4826]: I0129 06:59:59.974785    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9vkmm" event={"ID":"2136722e-10e4-4fdd-8c1c-513be2cda722","Type":"ContainerStarted","Data":"d6ca90df9ccb245e9776dd633ff1540e9c110f8adccec754c7562df547d452d7"}
Jan 29 06:59:59 crc kubenswrapper[4826]: E0129 06:59:59.975883    4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9vkmm" podUID="2136722e-10e4-4fdd-8c1c-513be2cda722"
Jan 29 06:59:59 crc kubenswrapper[4826]: I0129 06:59:59.978122    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-wctjq" event={"ID":"968a346c-43bd-4d96-b609-71b838d9d5b8","Type":"ContainerStarted","Data":"2ec2bf25c04c79d599aea5bc8eb967a85ac6055bdc1f818939ce0077891dd668"}
Jan 29 06:59:59 crc kubenswrapper[4826]: I0129 06:59:59.987246    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-g5v64" event={"ID":"3a490a0b-5880-4b3a-847b-a5ffbdd2329b","Type":"ContainerStarted","Data":"f297d943b8717e36a61c1c393e9a14372829ac737df66fc708fed7f24c726c9e"}
Jan 29 06:59:59 crc kubenswrapper[4826]: I0129 06:59:59.994917    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-dqb4n" event={"ID":"2d8d4014-bbb6-40e0-b2bb-235adaae50a3","Type":"ContainerStarted","Data":"45fc0779a66f7f4abfa65b1bc3adaed99c6de0a72fac93b1302f2eabe21efad0"}
Jan 29 06:59:59 crc kubenswrapper[4826]: I0129 06:59:59.998545    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-q7fkx" event={"ID":"c34d58a7-7472-43b5-a067-f8d98a83714c","Type":"ContainerStarted","Data":"2c559090a1faf41dee691f0d2174082121d11e3bb470b7b4e3117c28b608a093"}
Jan 29 07:00:00 crc kubenswrapper[4826]: I0129 07:00:00.001221    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-x7x52" event={"ID":"0943ef83-b318-4a15-baf0-858ffb1eec9f","Type":"ContainerStarted","Data":"476cbc9b666857d8206de74ed785e52d71a80cd3ad9aabc9a12c797a8f49fe0e"}
Jan 29 07:00:00 crc kubenswrapper[4826]: I0129 07:00:00.002805    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rq6tp" event={"ID":"b9652583-64a6-4726-986b-3d61a52ff7a9","Type":"ContainerStarted","Data":"87b5aeb44b352ea7c883e8ae604d47ab89d47b76d0940d3ec0bf88fa1daca5c2"}
Jan 29 07:00:00 crc kubenswrapper[4826]: I0129 07:00:00.005090    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-q6pbm" event={"ID":"6f966d5b-f4c8-4921-9161-096ca391b81f","Type":"ContainerStarted","Data":"bb119340348fdfba5b87fa9239bb5c8ea7db5213ef535eaf4eb36448746ee7d7"}
Jan 29 07:00:00 crc kubenswrapper[4826]: E0129 07:00:00.006275    4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-x7x52" podUID="0943ef83-b318-4a15-baf0-858ffb1eec9f"
Jan 29 07:00:00 crc kubenswrapper[4826]: E0129 07:00:00.006965    4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-q6pbm" podUID="6f966d5b-f4c8-4921-9161-096ca391b81f"
Jan 29 07:00:00 crc kubenswrapper[4826]: I0129 07:00:00.008471    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-wdlx2" event={"ID":"f1f53c69-e17d-44df-839f-9f354d1b5b24","Type":"ContainerStarted","Data":"f0c9db6155da4201e2aee3b503cf3768d5739275487b7036fc942f3b763e66b3"}
Jan 29 07:00:00 crc kubenswrapper[4826]: I0129 07:00:00.010162    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-r5zgz" event={"ID":"5a0e4f93-9116-4e18-b4ad-5e6b15571199","Type":"ContainerStarted","Data":"913d671465470386900f42dfa399c5bf7db67b747ffa91a46afc61e833b1c09b"}
Jan 29 07:00:00 crc kubenswrapper[4826]: E0129 07:00:00.020133    4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-wdlx2" podUID="f1f53c69-e17d-44df-839f-9f354d1b5b24"
Jan 29 07:00:00 crc kubenswrapper[4826]: I0129 07:00:00.020547    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-kk4wp" event={"ID":"ca001b72-69f7-488e-8410-0f046fb810bb","Type":"ContainerStarted","Data":"19c49314f59db98da3cea4d2291c358c0342c1c84b14e7da87341e54f28d927f"}
Jan 29 07:00:00 crc kubenswrapper[4826]: E0129 07:00:00.023696    4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-kk4wp" podUID="ca001b72-69f7-488e-8410-0f046fb810bb"
Jan 29 07:00:00 crc kubenswrapper[4826]: I0129 07:00:00.023905    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-jrkdp" event={"ID":"fa039621-ed69-4bc7-8d1c-233b03283aa0","Type":"ContainerStarted","Data":"88271aec3ef9f977e7b825729f68d149783e6edb9d66e9459b2541c395ae3027"}
Jan 29 07:00:00 crc kubenswrapper[4826]: E0129 07:00:00.025430    4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-jrkdp" podUID="fa039621-ed69-4bc7-8d1c-233b03283aa0"
Jan 29 07:00:00 crc kubenswrapper[4826]: I0129 07:00:00.049580    4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6-cert\") pod \"openstack-baremetal-operator-controller-manager-d5d667db8-hsjcj\" (UID: \"34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-hsjcj"
Jan 29 07:00:00 crc kubenswrapper[4826]: E0129 07:00:00.050148    4826 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 29 07:00:00 crc kubenswrapper[4826]: E0129 07:00:00.050237    4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6-cert podName:34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6 nodeName:}" failed. No retries permitted until 2026-01-29 07:00:02.050212794 +0000 UTC m=+985.912005863 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6-cert") pod "openstack-baremetal-operator-controller-manager-d5d667db8-hsjcj" (UID: "34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 29 07:00:00 crc kubenswrapper[4826]: I0129 07:00:00.151463    4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-metrics-certs\") pod \"openstack-operator-controller-manager-7b54f464f6-dfl95\" (UID: \"e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5\") " pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-dfl95"
Jan 29 07:00:00 crc kubenswrapper[4826]: I0129 07:00:00.151616    4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-webhook-certs\") pod \"openstack-operator-controller-manager-7b54f464f6-dfl95\" (UID: \"e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5\") " pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-dfl95"
Jan 29 07:00:00 crc kubenswrapper[4826]: E0129 07:00:00.151836    4826 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 29 07:00:00 crc kubenswrapper[4826]: E0129 07:00:00.151909    4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-webhook-certs podName:e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5 nodeName:}" failed. No retries permitted until 2026-01-29 07:00:02.151892893 +0000 UTC m=+986.013685962 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-webhook-certs") pod "openstack-operator-controller-manager-7b54f464f6-dfl95" (UID: "e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5") : secret "webhook-server-cert" not found
Jan 29 07:00:00 crc kubenswrapper[4826]: E0129 07:00:00.153055    4826 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 29 07:00:00 crc kubenswrapper[4826]: E0129 07:00:00.153088    4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-metrics-certs podName:e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5 nodeName:}" failed. No retries permitted until 2026-01-29 07:00:02.153080593 +0000 UTC m=+986.014873652 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-metrics-certs") pod "openstack-operator-controller-manager-7b54f464f6-dfl95" (UID: "e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5") : secret "metrics-server-cert" not found
Jan 29 07:00:00 crc kubenswrapper[4826]: I0129 07:00:00.161470    4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494500-k6wff"]
Jan 29 07:00:00 crc kubenswrapper[4826]: I0129 07:00:00.162817    4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494500-k6wff"
Jan 29 07:00:00 crc kubenswrapper[4826]: I0129 07:00:00.166068    4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 29 07:00:00 crc kubenswrapper[4826]: I0129 07:00:00.166374    4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 29 07:00:00 crc kubenswrapper[4826]: I0129 07:00:00.168770    4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494500-k6wff"]
Jan 29 07:00:00 crc kubenswrapper[4826]: I0129 07:00:00.253612    4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2kxl\" (UniqueName: \"kubernetes.io/projected/5cb515ad-6a84-4938-908a-6dc478741980-kube-api-access-f2kxl\") pod \"collect-profiles-29494500-k6wff\" (UID: \"5cb515ad-6a84-4938-908a-6dc478741980\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494500-k6wff"
Jan 29 07:00:00 crc kubenswrapper[4826]: I0129 07:00:00.253672    4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5cb515ad-6a84-4938-908a-6dc478741980-config-volume\") pod \"collect-profiles-29494500-k6wff\" (UID: \"5cb515ad-6a84-4938-908a-6dc478741980\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494500-k6wff"
Jan 29 07:00:00 crc kubenswrapper[4826]: I0129 07:00:00.253797    4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5cb515ad-6a84-4938-908a-6dc478741980-secret-volume\") pod \"collect-profiles-29494500-k6wff\" (UID: \"5cb515ad-6a84-4938-908a-6dc478741980\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494500-k6wff"
Jan 29 07:00:00 crc kubenswrapper[4826]: I0129 07:00:00.355671    4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5cb515ad-6a84-4938-908a-6dc478741980-secret-volume\") pod \"collect-profiles-29494500-k6wff\" (UID: \"5cb515ad-6a84-4938-908a-6dc478741980\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494500-k6wff"
Jan 29 07:00:00 crc kubenswrapper[4826]: I0129 07:00:00.355807    4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2kxl\" (UniqueName: \"kubernetes.io/projected/5cb515ad-6a84-4938-908a-6dc478741980-kube-api-access-f2kxl\") pod \"collect-profiles-29494500-k6wff\" (UID: \"5cb515ad-6a84-4938-908a-6dc478741980\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494500-k6wff"
Jan 29 07:00:00 crc kubenswrapper[4826]: I0129 07:00:00.355842    4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5cb515ad-6a84-4938-908a-6dc478741980-config-volume\") pod \"collect-profiles-29494500-k6wff\" (UID: \"5cb515ad-6a84-4938-908a-6dc478741980\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494500-k6wff"
Jan 29 07:00:00 crc kubenswrapper[4826]: I0129 07:00:00.357091    4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5cb515ad-6a84-4938-908a-6dc478741980-config-volume\") pod \"collect-profiles-29494500-k6wff\" (UID: \"5cb515ad-6a84-4938-908a-6dc478741980\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494500-k6wff"
Jan 29 07:00:00 crc kubenswrapper[4826]: I0129 07:00:00.368205    4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5cb515ad-6a84-4938-908a-6dc478741980-secret-volume\") pod \"collect-profiles-29494500-k6wff\" (UID: \"5cb515ad-6a84-4938-908a-6dc478741980\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494500-k6wff"
Jan 29 07:00:00 crc kubenswrapper[4826]: I0129 07:00:00.377465    4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2kxl\" (UniqueName: \"kubernetes.io/projected/5cb515ad-6a84-4938-908a-6dc478741980-kube-api-access-f2kxl\") pod \"collect-profiles-29494500-k6wff\" (UID: \"5cb515ad-6a84-4938-908a-6dc478741980\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494500-k6wff"
Jan 29 07:00:00 crc kubenswrapper[4826]: I0129 07:00:00.500007    4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494500-k6wff"
Jan 29 07:00:01 crc kubenswrapper[4826]: E0129 07:00:01.037831    4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-kk4wp" podUID="ca001b72-69f7-488e-8410-0f046fb810bb"
Jan 29 07:00:01 crc kubenswrapper[4826]: E0129 07:00:01.039436    4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-p9gf8" podUID="3a41b9d7-4fd6-4f71-af8f-06751f6cb0dd"
Jan 29 07:00:01 crc kubenswrapper[4826]: E0129 07:00:01.039482    4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-jrkdp" podUID="fa039621-ed69-4bc7-8d1c-233b03283aa0"
Jan 29 07:00:01 crc kubenswrapper[4826]: E0129 07:00:01.039521    4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-wdlx2" podUID="f1f53c69-e17d-44df-839f-9f354d1b5b24"
Jan 29 07:00:01 crc kubenswrapper[4826]: E0129 07:00:01.039596    4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-q6pbm" podUID="6f966d5b-f4c8-4921-9161-096ca391b81f"
Jan 29 07:00:01 crc kubenswrapper[4826]: E0129 07:00:01.039642    4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9vkmm" podUID="2136722e-10e4-4fdd-8c1c-513be2cda722"
Jan 29 07:00:01 crc kubenswrapper[4826]: E0129 07:00:01.039742    4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-x7x52" podUID="0943ef83-b318-4a15-baf0-858ffb1eec9f"
Jan 29 07:00:01 crc kubenswrapper[4826]: E0129 07:00:01.039795    4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-j4bnt" podUID="790d8543-7995-41f5-a1d9-86424c85102b"
Jan 29 07:00:01 crc kubenswrapper[4826]: E0129 07:00:01.039923    4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-hkgbs" podUID="5723d99a-cfa2-4f46-9e29-a7f075e7d5fa"
Jan 29 07:00:01 crc kubenswrapper[4826]: I0129 07:00:01.144648    4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494500-k6wff"]
Jan 29 07:00:01 crc kubenswrapper[4826]: E0129 07:00:01.607089    4826 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5cb515ad_6a84_4938_908a_6dc478741980.slice/crio-bace1320263e2e5532c3e0a676af294fb7a50246d49111ab9c57c2bae4e29017.scope\": RecentStats: unable to find data in memory cache]"
Jan 29 07:00:01 crc kubenswrapper[4826]: I0129 07:00:01.687982    4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6a1bcc88-899b-4ada-9560-3e388b5d1a82-cert\") pod \"infra-operator-controller-manager-79955696d6-8t8z8\" (UID: \"6a1bcc88-899b-4ada-9560-3e388b5d1a82\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-8t8z8"
Jan 29 07:00:01 crc kubenswrapper[4826]: E0129 07:00:01.688248    4826 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 29 07:00:01 crc kubenswrapper[4826]: E0129 07:00:01.688366    4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a1bcc88-899b-4ada-9560-3e388b5d1a82-cert podName:6a1bcc88-899b-4ada-9560-3e388b5d1a82 nodeName:}" failed. No retries permitted until 2026-01-29 07:00:05.688345251 +0000 UTC m=+989.550138320 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6a1bcc88-899b-4ada-9560-3e388b5d1a82-cert") pod "infra-operator-controller-manager-79955696d6-8t8z8" (UID: "6a1bcc88-899b-4ada-9560-3e388b5d1a82") : secret "infra-operator-webhook-server-cert" not found
Jan 29 07:00:02 crc kubenswrapper[4826]: I0129 07:00:02.044637    4826 generic.go:334] "Generic (PLEG): container finished" podID="5cb515ad-6a84-4938-908a-6dc478741980" containerID="bace1320263e2e5532c3e0a676af294fb7a50246d49111ab9c57c2bae4e29017" exitCode=0
Jan 29 07:00:02 crc kubenswrapper[4826]: I0129 07:00:02.044726    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494500-k6wff" event={"ID":"5cb515ad-6a84-4938-908a-6dc478741980","Type":"ContainerDied","Data":"bace1320263e2e5532c3e0a676af294fb7a50246d49111ab9c57c2bae4e29017"}
Jan 29 07:00:02 crc kubenswrapper[4826]: I0129 07:00:02.044777    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494500-k6wff" event={"ID":"5cb515ad-6a84-4938-908a-6dc478741980","Type":"ContainerStarted","Data":"e471070e5ed2bcea374280c4c91fc4e2258629dd206100e0c49809a0ee9145ab"}
Jan 29 07:00:02 crc kubenswrapper[4826]: I0129 07:00:02.092969    4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6-cert\") pod \"openstack-baremetal-operator-controller-manager-d5d667db8-hsjcj\" (UID: \"34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-hsjcj"
Jan 29 07:00:02 crc kubenswrapper[4826]: E0129 07:00:02.093350    4826 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 29 07:00:02 crc kubenswrapper[4826]: E0129 07:00:02.093484    4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6-cert podName:34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6 nodeName:}" failed. No retries permitted until 2026-01-29 07:00:06.093452456 +0000 UTC m=+989.955245555 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6-cert") pod "openstack-baremetal-operator-controller-manager-d5d667db8-hsjcj" (UID: "34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 29 07:00:02 crc kubenswrapper[4826]: I0129 07:00:02.194011    4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-metrics-certs\") pod \"openstack-operator-controller-manager-7b54f464f6-dfl95\" (UID: \"e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5\") " pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-dfl95"
Jan 29 07:00:02 crc kubenswrapper[4826]: I0129 07:00:02.194205    4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-webhook-certs\") pod \"openstack-operator-controller-manager-7b54f464f6-dfl95\" (UID: \"e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5\") " pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-dfl95"
Jan 29 07:00:02 crc kubenswrapper[4826]: E0129 07:00:02.194330    4826 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 29 07:00:02 crc kubenswrapper[4826]: E0129 07:00:02.194475    4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-metrics-certs podName:e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5 nodeName:}" failed. No retries permitted until 2026-01-29 07:00:06.194444428 +0000 UTC m=+990.056237497 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-metrics-certs") pod "openstack-operator-controller-manager-7b54f464f6-dfl95" (UID: "e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5") : secret "metrics-server-cert" not found
Jan 29 07:00:02 crc kubenswrapper[4826]: E0129 07:00:02.194547    4826 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 29 07:00:02 crc kubenswrapper[4826]: E0129 07:00:02.194701    4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-webhook-certs podName:e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5 nodeName:}" failed. No retries permitted until 2026-01-29 07:00:06.194670424 +0000 UTC m=+990.056463503 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-webhook-certs") pod "openstack-operator-controller-manager-7b54f464f6-dfl95" (UID: "e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5") : secret "webhook-server-cert" not found
Jan 29 07:00:04 crc kubenswrapper[4826]: I0129 07:00:04.525588    4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494500-k6wff"
Jan 29 07:00:04 crc kubenswrapper[4826]: I0129 07:00:04.642415    4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2kxl\" (UniqueName: \"kubernetes.io/projected/5cb515ad-6a84-4938-908a-6dc478741980-kube-api-access-f2kxl\") pod \"5cb515ad-6a84-4938-908a-6dc478741980\" (UID: \"5cb515ad-6a84-4938-908a-6dc478741980\") "
Jan 29 07:00:04 crc kubenswrapper[4826]: I0129 07:00:04.642518    4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5cb515ad-6a84-4938-908a-6dc478741980-config-volume\") pod \"5cb515ad-6a84-4938-908a-6dc478741980\" (UID: \"5cb515ad-6a84-4938-908a-6dc478741980\") "
Jan 29 07:00:04 crc kubenswrapper[4826]: I0129 07:00:04.642552    4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5cb515ad-6a84-4938-908a-6dc478741980-secret-volume\") pod \"5cb515ad-6a84-4938-908a-6dc478741980\" (UID: \"5cb515ad-6a84-4938-908a-6dc478741980\") "
Jan 29 07:00:04 crc kubenswrapper[4826]: I0129 07:00:04.647789    4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cb515ad-6a84-4938-908a-6dc478741980-config-volume" (OuterVolumeSpecName: "config-volume") pod "5cb515ad-6a84-4938-908a-6dc478741980" (UID: "5cb515ad-6a84-4938-908a-6dc478741980"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 07:00:04 crc kubenswrapper[4826]: I0129 07:00:04.648522    4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cb515ad-6a84-4938-908a-6dc478741980-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5cb515ad-6a84-4938-908a-6dc478741980" (UID: "5cb515ad-6a84-4938-908a-6dc478741980"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:00:04 crc kubenswrapper[4826]: I0129 07:00:04.648903    4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cb515ad-6a84-4938-908a-6dc478741980-kube-api-access-f2kxl" (OuterVolumeSpecName: "kube-api-access-f2kxl") pod "5cb515ad-6a84-4938-908a-6dc478741980" (UID: "5cb515ad-6a84-4938-908a-6dc478741980"). InnerVolumeSpecName "kube-api-access-f2kxl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 07:00:04 crc kubenswrapper[4826]: I0129 07:00:04.744385    4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2kxl\" (UniqueName: \"kubernetes.io/projected/5cb515ad-6a84-4938-908a-6dc478741980-kube-api-access-f2kxl\") on node \"crc\" DevicePath \"\""
Jan 29 07:00:04 crc kubenswrapper[4826]: I0129 07:00:04.744413    4826 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5cb515ad-6a84-4938-908a-6dc478741980-config-volume\") on node \"crc\" DevicePath \"\""
Jan 29 07:00:04 crc kubenswrapper[4826]: I0129 07:00:04.744422    4826 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5cb515ad-6a84-4938-908a-6dc478741980-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 29 07:00:05 crc kubenswrapper[4826]: I0129 07:00:05.067642    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494500-k6wff" event={"ID":"5cb515ad-6a84-4938-908a-6dc478741980","Type":"ContainerDied","Data":"e471070e5ed2bcea374280c4c91fc4e2258629dd206100e0c49809a0ee9145ab"}
Jan 29 07:00:05 crc kubenswrapper[4826]: I0129 07:00:05.067690    4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e471070e5ed2bcea374280c4c91fc4e2258629dd206100e0c49809a0ee9145ab"
Jan 29 07:00:05 crc kubenswrapper[4826]: I0129 07:00:05.067705    4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494500-k6wff"
Jan 29 07:00:05 crc kubenswrapper[4826]: I0129 07:00:05.656866    4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 07:00:05 crc kubenswrapper[4826]: I0129 07:00:05.656946    4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 07:00:05 crc kubenswrapper[4826]: I0129 07:00:05.656991    4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-llzmh"
Jan 29 07:00:05 crc kubenswrapper[4826]: I0129 07:00:05.658371    4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d3303398d9dd82a2bcfef8e8991ab372b491761ad48de0e25a106a7c53d77566"} pod="openshift-machine-config-operator/machine-config-daemon-llzmh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 29 07:00:05 crc kubenswrapper[4826]: I0129 07:00:05.658438    4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" containerID="cri-o://d3303398d9dd82a2bcfef8e8991ab372b491761ad48de0e25a106a7c53d77566" gracePeriod=600
Jan 29 07:00:05 crc kubenswrapper[4826]: I0129 07:00:05.760991    4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6a1bcc88-899b-4ada-9560-3e388b5d1a82-cert\") pod \"infra-operator-controller-manager-79955696d6-8t8z8\" (UID: \"6a1bcc88-899b-4ada-9560-3e388b5d1a82\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-8t8z8"
Jan 29 07:00:05 crc kubenswrapper[4826]: E0129 07:00:05.761223    4826 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 29 07:00:05 crc kubenswrapper[4826]: E0129 07:00:05.761631    4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a1bcc88-899b-4ada-9560-3e388b5d1a82-cert podName:6a1bcc88-899b-4ada-9560-3e388b5d1a82 nodeName:}" failed. No retries permitted until 2026-01-29 07:00:13.761603773 +0000 UTC m=+997.623396852 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6a1bcc88-899b-4ada-9560-3e388b5d1a82-cert") pod "infra-operator-controller-manager-79955696d6-8t8z8" (UID: "6a1bcc88-899b-4ada-9560-3e388b5d1a82") : secret "infra-operator-webhook-server-cert" not found
Jan 29 07:00:06 crc kubenswrapper[4826]: I0129 07:00:06.076070    4826 generic.go:334] "Generic (PLEG): container finished" podID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerID="d3303398d9dd82a2bcfef8e8991ab372b491761ad48de0e25a106a7c53d77566" exitCode=0
Jan 29 07:00:06 crc kubenswrapper[4826]: I0129 07:00:06.076121    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerDied","Data":"d3303398d9dd82a2bcfef8e8991ab372b491761ad48de0e25a106a7c53d77566"}
Jan 29 07:00:06 crc kubenswrapper[4826]: I0129 07:00:06.076162    4826 scope.go:117] "RemoveContainer" containerID="a3e78bc337b51dad14a09b0691d31de9770dca1238f79d864ee283914dbce58e"
Jan 29 07:00:06 crc kubenswrapper[4826]: I0129 07:00:06.166110    4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6-cert\") pod \"openstack-baremetal-operator-controller-manager-d5d667db8-hsjcj\" (UID: \"34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-hsjcj"
Jan 29 07:00:06 crc kubenswrapper[4826]: E0129 07:00:06.166335    4826 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 29 07:00:06 crc kubenswrapper[4826]: E0129 07:00:06.166378    4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6-cert podName:34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6 nodeName:}" failed. No retries permitted until 2026-01-29 07:00:14.16636497 +0000 UTC m=+998.028158039 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6-cert") pod "openstack-baremetal-operator-controller-manager-d5d667db8-hsjcj" (UID: "34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 07:00:06 crc kubenswrapper[4826]: I0129 07:00:06.267330 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-metrics-certs\") pod \"openstack-operator-controller-manager-7b54f464f6-dfl95\" (UID: \"e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5\") " pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-dfl95" Jan 29 07:00:06 crc kubenswrapper[4826]: I0129 07:00:06.267439 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-webhook-certs\") pod \"openstack-operator-controller-manager-7b54f464f6-dfl95\" (UID: \"e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5\") " pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-dfl95" Jan 29 07:00:06 crc kubenswrapper[4826]: E0129 07:00:06.267553 4826 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 07:00:06 crc kubenswrapper[4826]: E0129 07:00:06.267599 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-webhook-certs podName:e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5 nodeName:}" failed. No retries permitted until 2026-01-29 07:00:14.267585067 +0000 UTC m=+998.129378136 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-webhook-certs") pod "openstack-operator-controller-manager-7b54f464f6-dfl95" (UID: "e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5") : secret "webhook-server-cert" not found Jan 29 07:00:06 crc kubenswrapper[4826]: E0129 07:00:06.267740 4826 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 07:00:06 crc kubenswrapper[4826]: E0129 07:00:06.267811 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-metrics-certs podName:e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5 nodeName:}" failed. No retries permitted until 2026-01-29 07:00:14.267791583 +0000 UTC m=+998.129584722 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-metrics-certs") pod "openstack-operator-controller-manager-7b54f464f6-dfl95" (UID: "e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5") : secret "metrics-server-cert" not found Jan 29 07:00:11 crc kubenswrapper[4826]: I0129 07:00:11.132702 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerStarted","Data":"2746c36a8cbae641f39bc5b503c4b8bd16a73e3034bddd5ca4705c812e26566f"} Jan 29 07:00:12 crc kubenswrapper[4826]: I0129 07:00:12.142776 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-r8fr7" event={"ID":"f5d88392-3443-4b78-a27c-7594b71de46d","Type":"ContainerStarted","Data":"4f6454ba84696b647dc936236aaf7385dccb37ee533ad64fe9210f230365936f"} Jan 29 07:00:12 crc kubenswrapper[4826]: I0129 07:00:12.143444 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/nova-operator-controller-manager-55bff696bd-r8fr7" Jan 29 07:00:12 crc kubenswrapper[4826]: I0129 07:00:12.144118 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rq6tp" event={"ID":"b9652583-64a6-4726-986b-3d61a52ff7a9","Type":"ContainerStarted","Data":"26e39b81ca0bdd50d4f6fa8384dbf5b1ebeae987b596fed43b07e33b99cb177d"} Jan 29 07:00:12 crc kubenswrapper[4826]: I0129 07:00:12.144224 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rq6tp" Jan 29 07:00:12 crc kubenswrapper[4826]: I0129 07:00:12.146195 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-qxvv5" event={"ID":"e6027062-6010-4269-9798-6d31d88831ca","Type":"ContainerStarted","Data":"c8e613b554c9ad5898f407029f960084d379b17ee107543ff3771dada95332f5"} Jan 29 07:00:12 crc kubenswrapper[4826]: I0129 07:00:12.146313 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-qxvv5" Jan 29 07:00:12 crc kubenswrapper[4826]: I0129 07:00:12.147285 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-g5v64" event={"ID":"3a490a0b-5880-4b3a-847b-a5ffbdd2329b","Type":"ContainerStarted","Data":"ca27783b3dca4db470d3b49544179b8d6b305cc8242b1616c660afb425cf4fc4"} Jan 29 07:00:12 crc kubenswrapper[4826]: I0129 07:00:12.147349 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-g5v64" Jan 29 07:00:12 crc kubenswrapper[4826]: I0129 07:00:12.148440 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-dqb4n" 
event={"ID":"2d8d4014-bbb6-40e0-b2bb-235adaae50a3","Type":"ContainerStarted","Data":"2b2f9e05bc2c3335ae64fba7483167d4bb1c758d1e071cbc669a40949bab3eb0"} Jan 29 07:00:12 crc kubenswrapper[4826]: I0129 07:00:12.148489 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-dqb4n" Jan 29 07:00:12 crc kubenswrapper[4826]: I0129 07:00:12.149384 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-q7fkx" event={"ID":"c34d58a7-7472-43b5-a067-f8d98a83714c","Type":"ContainerStarted","Data":"861dbe7aa33ce2ddb522871359ddfccf2a7f347c03b81a801887dec0ff3db3d8"} Jan 29 07:00:12 crc kubenswrapper[4826]: I0129 07:00:12.149560 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-q7fkx" Jan 29 07:00:12 crc kubenswrapper[4826]: I0129 07:00:12.150567 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-dbcc7" event={"ID":"a90b0148-229f-46b4-ac43-2f2d8a89c167","Type":"ContainerStarted","Data":"e2eaa389620a19426cb6e46c4f5d4635a027d0860cf2dc8b1811657178befa6f"} Jan 29 07:00:12 crc kubenswrapper[4826]: I0129 07:00:12.150684 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-dbcc7" Jan 29 07:00:12 crc kubenswrapper[4826]: I0129 07:00:12.151725 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-r5zgz" event={"ID":"5a0e4f93-9116-4e18-b4ad-5e6b15571199","Type":"ContainerStarted","Data":"8a303ea5c4e5dd04dece53a64a09bf5612a18c0851bd656dfd98bf3e24e2736e"} Jan 29 07:00:12 crc kubenswrapper[4826]: I0129 07:00:12.152048 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/glance-operator-controller-manager-8886f4c47-r5zgz" Jan 29 07:00:12 crc kubenswrapper[4826]: I0129 07:00:12.153521 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-wctjq" event={"ID":"968a346c-43bd-4d96-b609-71b838d9d5b8","Type":"ContainerStarted","Data":"7b194384caca885bf4bad9ed65275c3cc3df7f0d9a666749573391c298ed8ba5"} Jan 29 07:00:12 crc kubenswrapper[4826]: I0129 07:00:12.153596 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-wctjq" Jan 29 07:00:12 crc kubenswrapper[4826]: I0129 07:00:12.154713 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-9j5d5" event={"ID":"784b3ddb-6085-405e-a492-78fa2d30f902","Type":"ContainerStarted","Data":"73b3e9b04e63e8371b6e62ead79932c5d180623c8d1f937196d43603ab610f49"} Jan 29 07:00:12 crc kubenswrapper[4826]: I0129 07:00:12.155037 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-9j5d5" Jan 29 07:00:12 crc kubenswrapper[4826]: I0129 07:00:12.156935 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xglxb" event={"ID":"c81e6ebc-17e4-4b52-8e78-d7fc20195c4b","Type":"ContainerStarted","Data":"8be9c6de0a85237eded9ab96dcffc7e5b799adf8ae27318dfaecc479f59c9b23"} Jan 29 07:00:12 crc kubenswrapper[4826]: I0129 07:00:12.156960 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xglxb" Jan 29 07:00:12 crc kubenswrapper[4826]: I0129 07:00:12.344277 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-r8fr7" 
podStartSLOduration=4.345893948 podStartE2EDuration="15.344251586s" podCreationTimestamp="2026-01-29 06:59:57 +0000 UTC" firstStartedPulling="2026-01-29 06:59:59.71246705 +0000 UTC m=+983.574260119" lastFinishedPulling="2026-01-29 07:00:10.710824688 +0000 UTC m=+994.572617757" observedRunningTime="2026-01-29 07:00:12.288852787 +0000 UTC m=+996.150645856" watchObservedRunningTime="2026-01-29 07:00:12.344251586 +0000 UTC m=+996.206044655" Jan 29 07:00:12 crc kubenswrapper[4826]: I0129 07:00:12.345972 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xglxb" podStartSLOduration=3.281966364 podStartE2EDuration="14.34596609s" podCreationTimestamp="2026-01-29 06:59:58 +0000 UTC" firstStartedPulling="2026-01-29 06:59:59.670542341 +0000 UTC m=+983.532335410" lastFinishedPulling="2026-01-29 07:00:10.734542067 +0000 UTC m=+994.596335136" observedRunningTime="2026-01-29 07:00:12.341335613 +0000 UTC m=+996.203128682" watchObservedRunningTime="2026-01-29 07:00:12.34596609 +0000 UTC m=+996.207759159" Jan 29 07:00:12 crc kubenswrapper[4826]: I0129 07:00:12.451193 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-g5v64" podStartSLOduration=3.4761126989999998 podStartE2EDuration="14.451164007s" podCreationTimestamp="2026-01-29 06:59:58 +0000 UTC" firstStartedPulling="2026-01-29 06:59:59.735843941 +0000 UTC m=+983.597637010" lastFinishedPulling="2026-01-29 07:00:10.710895219 +0000 UTC m=+994.572688318" observedRunningTime="2026-01-29 07:00:12.390445193 +0000 UTC m=+996.252238262" watchObservedRunningTime="2026-01-29 07:00:12.451164007 +0000 UTC m=+996.312957076" Jan 29 07:00:12 crc kubenswrapper[4826]: I0129 07:00:12.454769 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-q7fkx" 
podStartSLOduration=4.270069313 podStartE2EDuration="15.454763118s" podCreationTimestamp="2026-01-29 06:59:57 +0000 UTC" firstStartedPulling="2026-01-29 06:59:59.526236975 +0000 UTC m=+983.388030044" lastFinishedPulling="2026-01-29 07:00:10.71093077 +0000 UTC m=+994.572723849" observedRunningTime="2026-01-29 07:00:12.446642113 +0000 UTC m=+996.308435182" watchObservedRunningTime="2026-01-29 07:00:12.454763118 +0000 UTC m=+996.316556177" Jan 29 07:00:12 crc kubenswrapper[4826]: I0129 07:00:12.478246 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-dbcc7" podStartSLOduration=4.377413605 podStartE2EDuration="15.478225571s" podCreationTimestamp="2026-01-29 06:59:57 +0000 UTC" firstStartedPulling="2026-01-29 06:59:59.612595827 +0000 UTC m=+983.474388886" lastFinishedPulling="2026-01-29 07:00:10.713407783 +0000 UTC m=+994.575200852" observedRunningTime="2026-01-29 07:00:12.477321298 +0000 UTC m=+996.339114367" watchObservedRunningTime="2026-01-29 07:00:12.478225571 +0000 UTC m=+996.340018650" Jan 29 07:00:12 crc kubenswrapper[4826]: I0129 07:00:12.566911 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-wctjq" podStartSLOduration=4.48924484 podStartE2EDuration="15.566890301s" podCreationTimestamp="2026-01-29 06:59:57 +0000 UTC" firstStartedPulling="2026-01-29 06:59:59.650527055 +0000 UTC m=+983.512320124" lastFinishedPulling="2026-01-29 07:00:10.728172506 +0000 UTC m=+994.589965585" observedRunningTime="2026-01-29 07:00:12.563615779 +0000 UTC m=+996.425408848" watchObservedRunningTime="2026-01-29 07:00:12.566890301 +0000 UTC m=+996.428683360" Jan 29 07:00:12 crc kubenswrapper[4826]: I0129 07:00:12.567672 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-r5zgz" podStartSLOduration=4.598749077 
podStartE2EDuration="15.567666451s" podCreationTimestamp="2026-01-29 06:59:57 +0000 UTC" firstStartedPulling="2026-01-29 06:59:59.744891599 +0000 UTC m=+983.606684658" lastFinishedPulling="2026-01-29 07:00:10.713808963 +0000 UTC m=+994.575602032" observedRunningTime="2026-01-29 07:00:12.532656196 +0000 UTC m=+996.394449265" watchObservedRunningTime="2026-01-29 07:00:12.567666451 +0000 UTC m=+996.429459520" Jan 29 07:00:12 crc kubenswrapper[4826]: I0129 07:00:12.641029 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rq6tp" podStartSLOduration=4.543793959 podStartE2EDuration="15.641013054s" podCreationTimestamp="2026-01-29 06:59:57 +0000 UTC" firstStartedPulling="2026-01-29 06:59:59.525682001 +0000 UTC m=+983.387475070" lastFinishedPulling="2026-01-29 07:00:10.622901096 +0000 UTC m=+994.484694165" observedRunningTime="2026-01-29 07:00:12.633441033 +0000 UTC m=+996.495234102" watchObservedRunningTime="2026-01-29 07:00:12.641013054 +0000 UTC m=+996.502806123" Jan 29 07:00:12 crc kubenswrapper[4826]: I0129 07:00:12.672460 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-dqb4n" podStartSLOduration=4.631957356 podStartE2EDuration="15.672443538s" podCreationTimestamp="2026-01-29 06:59:57 +0000 UTC" firstStartedPulling="2026-01-29 06:59:59.675966428 +0000 UTC m=+983.537759497" lastFinishedPulling="2026-01-29 07:00:10.7164526 +0000 UTC m=+994.578245679" observedRunningTime="2026-01-29 07:00:12.669657828 +0000 UTC m=+996.531450897" watchObservedRunningTime="2026-01-29 07:00:12.672443538 +0000 UTC m=+996.534236607" Jan 29 07:00:12 crc kubenswrapper[4826]: I0129 07:00:12.734872 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-9j5d5" podStartSLOduration=4.748391048 
podStartE2EDuration="15.734857385s" podCreationTimestamp="2026-01-29 06:59:57 +0000 UTC" firstStartedPulling="2026-01-29 06:59:59.74451354 +0000 UTC m=+983.606306599" lastFinishedPulling="2026-01-29 07:00:10.730979857 +0000 UTC m=+994.592772936" observedRunningTime="2026-01-29 07:00:12.703492173 +0000 UTC m=+996.565285242" watchObservedRunningTime="2026-01-29 07:00:12.734857385 +0000 UTC m=+996.596650454" Jan 29 07:00:12 crc kubenswrapper[4826]: I0129 07:00:12.736599 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-qxvv5" podStartSLOduration=4.653393698 podStartE2EDuration="15.736590589s" podCreationTimestamp="2026-01-29 06:59:57 +0000 UTC" firstStartedPulling="2026-01-29 06:59:59.633156976 +0000 UTC m=+983.494950045" lastFinishedPulling="2026-01-29 07:00:10.716353867 +0000 UTC m=+994.578146936" observedRunningTime="2026-01-29 07:00:12.732403993 +0000 UTC m=+996.594197062" watchObservedRunningTime="2026-01-29 07:00:12.736590589 +0000 UTC m=+996.598383658" Jan 29 07:00:13 crc kubenswrapper[4826]: I0129 07:00:13.838488 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6a1bcc88-899b-4ada-9560-3e388b5d1a82-cert\") pod \"infra-operator-controller-manager-79955696d6-8t8z8\" (UID: \"6a1bcc88-899b-4ada-9560-3e388b5d1a82\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-8t8z8" Jan 29 07:00:13 crc kubenswrapper[4826]: I0129 07:00:13.857665 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6a1bcc88-899b-4ada-9560-3e388b5d1a82-cert\") pod \"infra-operator-controller-manager-79955696d6-8t8z8\" (UID: \"6a1bcc88-899b-4ada-9560-3e388b5d1a82\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-8t8z8" Jan 29 07:00:14 crc kubenswrapper[4826]: I0129 07:00:14.058321 4826 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-8t8z8" Jan 29 07:00:14 crc kubenswrapper[4826]: I0129 07:00:14.244194 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6-cert\") pod \"openstack-baremetal-operator-controller-manager-d5d667db8-hsjcj\" (UID: \"34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-hsjcj" Jan 29 07:00:14 crc kubenswrapper[4826]: I0129 07:00:14.247934 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6-cert\") pod \"openstack-baremetal-operator-controller-manager-d5d667db8-hsjcj\" (UID: \"34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-hsjcj" Jan 29 07:00:14 crc kubenswrapper[4826]: I0129 07:00:14.346002 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-webhook-certs\") pod \"openstack-operator-controller-manager-7b54f464f6-dfl95\" (UID: \"e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5\") " pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-dfl95" Jan 29 07:00:14 crc kubenswrapper[4826]: I0129 07:00:14.346101 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-metrics-certs\") pod \"openstack-operator-controller-manager-7b54f464f6-dfl95\" (UID: \"e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5\") " pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-dfl95" Jan 29 07:00:14 crc kubenswrapper[4826]: E0129 07:00:14.346191 4826 secret.go:188] Couldn't get secret 
openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 07:00:14 crc kubenswrapper[4826]: E0129 07:00:14.346283 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-webhook-certs podName:e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5 nodeName:}" failed. No retries permitted until 2026-01-29 07:00:30.346265788 +0000 UTC m=+1014.208058857 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-webhook-certs") pod "openstack-operator-controller-manager-7b54f464f6-dfl95" (UID: "e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5") : secret "webhook-server-cert" not found Jan 29 07:00:14 crc kubenswrapper[4826]: I0129 07:00:14.346850 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-hsjcj" Jan 29 07:00:14 crc kubenswrapper[4826]: I0129 07:00:14.350174 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-metrics-certs\") pod \"openstack-operator-controller-manager-7b54f464f6-dfl95\" (UID: \"e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5\") " pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-dfl95" Jan 29 07:00:17 crc kubenswrapper[4826]: I0129 07:00:17.985734 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-wctjq" Jan 29 07:00:18 crc kubenswrapper[4826]: I0129 07:00:18.008622 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-q7fkx" Jan 29 07:00:18 crc kubenswrapper[4826]: I0129 07:00:18.049992 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-rq6tp" Jan 29 07:00:18 crc kubenswrapper[4826]: I0129 07:00:18.078851 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-dqb4n" Jan 29 07:00:18 crc kubenswrapper[4826]: I0129 07:00:18.110941 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-r5zgz" Jan 29 07:00:18 crc kubenswrapper[4826]: I0129 07:00:18.193607 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-9j5d5" Jan 29 07:00:18 crc kubenswrapper[4826]: I0129 07:00:18.299623 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-dbcc7" Jan 29 07:00:18 crc kubenswrapper[4826]: I0129 07:00:18.318958 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-qxvv5" Jan 29 07:00:18 crc kubenswrapper[4826]: I0129 07:00:18.399246 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-r8fr7" Jan 29 07:00:18 crc kubenswrapper[4826]: I0129 07:00:18.570451 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-hsjcj"] Jan 29 07:00:18 crc kubenswrapper[4826]: I0129 07:00:18.601079 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-g5v64" Jan 29 07:00:18 crc kubenswrapper[4826]: I0129 07:00:18.612385 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xglxb" Jan 29 07:00:24 crc kubenswrapper[4826]: I0129 07:00:24.831865 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-8t8z8"] Jan 29 07:00:25 crc kubenswrapper[4826]: I0129 07:00:25.304334 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-hsjcj" event={"ID":"34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6","Type":"ContainerStarted","Data":"2dc05110a3c06bae5f6585a86fb9d7f0cd8ee1f2c5fe7bca81dbe076a994ffd2"} Jan 29 07:00:25 crc kubenswrapper[4826]: W0129 07:00:25.780528 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a1bcc88_899b_4ada_9560_3e388b5d1a82.slice/crio-30def3b13f1b52e27c2468a6a2b774c04adf127a150bf7095498dcd02e3e4d95 WatchSource:0}: Error finding container 30def3b13f1b52e27c2468a6a2b774c04adf127a150bf7095498dcd02e3e4d95: Status 404 returned error can't find the container with id 30def3b13f1b52e27c2468a6a2b774c04adf127a150bf7095498dcd02e3e4d95 Jan 29 07:00:26 crc kubenswrapper[4826]: I0129 07:00:26.327673 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-8t8z8" event={"ID":"6a1bcc88-899b-4ada-9560-3e388b5d1a82","Type":"ContainerStarted","Data":"30def3b13f1b52e27c2468a6a2b774c04adf127a150bf7095498dcd02e3e4d95"} Jan 29 07:00:27 crc kubenswrapper[4826]: I0129 07:00:27.336155 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-j4bnt" event={"ID":"790d8543-7995-41f5-a1d9-86424c85102b","Type":"ContainerStarted","Data":"41c558e5f31d628495d94fc3ac4eb4701d5137bae6acafaaf02569009441f167"} Jan 29 07:00:27 crc kubenswrapper[4826]: I0129 07:00:27.336651 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-j4bnt" Jan 29 07:00:27 crc kubenswrapper[4826]: I0129 07:00:27.337476 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9vkmm" event={"ID":"2136722e-10e4-4fdd-8c1c-513be2cda722","Type":"ContainerStarted","Data":"a69d31dd75bc4c451c751c26619d063091c79accadd6aacd9a9f3e68caeb3575"} Jan 29 07:00:27 crc kubenswrapper[4826]: I0129 07:00:27.337668 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9vkmm" Jan 29 07:00:27 crc kubenswrapper[4826]: I0129 07:00:27.339213 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-jrkdp" event={"ID":"fa039621-ed69-4bc7-8d1c-233b03283aa0","Type":"ContainerStarted","Data":"c1b0939487532da6acf43c71e143161aaef9be0e557d6f817497e702d5a04910"} Jan 29 07:00:27 crc kubenswrapper[4826]: I0129 07:00:27.339533 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-jrkdp" Jan 29 07:00:27 crc kubenswrapper[4826]: I0129 07:00:27.344875 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-hkgbs" event={"ID":"5723d99a-cfa2-4f46-9e29-a7f075e7d5fa","Type":"ContainerStarted","Data":"27b5106b53b70d01bebf8f83a6959a8887c3cc77e7df9071c95e1bdfebe08865"} Jan 29 07:00:27 crc kubenswrapper[4826]: I0129 07:00:27.345116 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-hkgbs" Jan 29 07:00:27 crc kubenswrapper[4826]: I0129 07:00:27.350807 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-wdlx2" 
event={"ID":"f1f53c69-e17d-44df-839f-9f354d1b5b24","Type":"ContainerStarted","Data":"d12853580dab365c499e6d4113aa8f93b4a54ec708cd01a5e844aa2c08cef5b4"} Jan 29 07:00:27 crc kubenswrapper[4826]: I0129 07:00:27.351397 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-wdlx2" Jan 29 07:00:27 crc kubenswrapper[4826]: I0129 07:00:27.358514 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-x7x52" event={"ID":"0943ef83-b318-4a15-baf0-858ffb1eec9f","Type":"ContainerStarted","Data":"dbd4aa3c53650faaecca628d653f0c5e52a41d682c98694e54c1ca3a3e73d917"} Jan 29 07:00:27 crc kubenswrapper[4826]: I0129 07:00:27.358983 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-x7x52" Jan 29 07:00:27 crc kubenswrapper[4826]: I0129 07:00:27.361700 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-j4bnt" podStartSLOduration=4.215356041 podStartE2EDuration="30.361691778s" podCreationTimestamp="2026-01-29 06:59:57 +0000 UTC" firstStartedPulling="2026-01-29 06:59:59.764144256 +0000 UTC m=+983.625937325" lastFinishedPulling="2026-01-29 07:00:25.910479993 +0000 UTC m=+1009.772273062" observedRunningTime="2026-01-29 07:00:27.35623135 +0000 UTC m=+1011.218024419" watchObservedRunningTime="2026-01-29 07:00:27.361691778 +0000 UTC m=+1011.223484847" Jan 29 07:00:27 crc kubenswrapper[4826]: I0129 07:00:27.363642 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-kk4wp" event={"ID":"ca001b72-69f7-488e-8410-0f046fb810bb","Type":"ContainerStarted","Data":"f9d2fe8571a984b823000f75ffc91d55403d3e4cbf1b55c4f50de33c815fd258"} Jan 29 07:00:27 crc kubenswrapper[4826]: I0129 07:00:27.363889 4826 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-kk4wp"
Jan 29 07:00:27 crc kubenswrapper[4826]: I0129 07:00:27.365874 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-q6pbm" event={"ID":"6f966d5b-f4c8-4921-9161-096ca391b81f","Type":"ContainerStarted","Data":"5cd5f77511b8ef9ffb55edea89e46f03c573abef89f77ea2bb1fb85e6db2bb0a"}
Jan 29 07:00:27 crc kubenswrapper[4826]: I0129 07:00:27.367544 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-p9gf8" event={"ID":"3a41b9d7-4fd6-4f71-af8f-06751f6cb0dd","Type":"ContainerStarted","Data":"b4a0e3a150e25a924918cb81288ab6631f0b54c0c3d843e14c237e52ec15e537"}
Jan 29 07:00:27 crc kubenswrapper[4826]: I0129 07:00:27.367742 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-p9gf8"
Jan 29 07:00:27 crc kubenswrapper[4826]: I0129 07:00:27.376912 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9vkmm" podStartSLOduration=11.203907775 podStartE2EDuration="29.376902472s" podCreationTimestamp="2026-01-29 06:59:58 +0000 UTC" firstStartedPulling="2026-01-29 06:59:59.853988156 +0000 UTC m=+983.715781225" lastFinishedPulling="2026-01-29 07:00:18.026982853 +0000 UTC m=+1001.888775922" observedRunningTime="2026-01-29 07:00:27.37523378 +0000 UTC m=+1011.237026849" watchObservedRunningTime="2026-01-29 07:00:27.376902472 +0000 UTC m=+1011.238695541"
Jan 29 07:00:27 crc kubenswrapper[4826]: I0129 07:00:27.402249 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-jrkdp" podStartSLOduration=4.35028957 podStartE2EDuration="30.402213022s"
podCreationTimestamp="2026-01-29 06:59:57 +0000 UTC" firstStartedPulling="2026-01-29 06:59:59.829538988 +0000 UTC m=+983.691332057" lastFinishedPulling="2026-01-29 07:00:25.88146244 +0000 UTC m=+1009.743255509" observedRunningTime="2026-01-29 07:00:27.401930905 +0000 UTC m=+1011.263723984" watchObservedRunningTime="2026-01-29 07:00:27.402213022 +0000 UTC m=+1011.264006091"
Jan 29 07:00:27 crc kubenswrapper[4826]: I0129 07:00:27.432735 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-wdlx2" podStartSLOduration=4.328771136 podStartE2EDuration="30.432715842s" podCreationTimestamp="2026-01-29 06:59:57 +0000 UTC" firstStartedPulling="2026-01-29 06:59:59.77578608 +0000 UTC m=+983.637579149" lastFinishedPulling="2026-01-29 07:00:25.879730786 +0000 UTC m=+1009.741523855" observedRunningTime="2026-01-29 07:00:27.426645679 +0000 UTC m=+1011.288438748" watchObservedRunningTime="2026-01-29 07:00:27.432715842 +0000 UTC m=+1011.294508911"
Jan 29 07:00:27 crc kubenswrapper[4826]: I0129 07:00:27.446148 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-hkgbs" podStartSLOduration=6.764188043 podStartE2EDuration="29.446133871s" podCreationTimestamp="2026-01-29 06:59:58 +0000 UTC" firstStartedPulling="2026-01-29 06:59:59.767637114 +0000 UTC m=+983.629430183" lastFinishedPulling="2026-01-29 07:00:22.449582932 +0000 UTC m=+1006.311376011" observedRunningTime="2026-01-29 07:00:27.444522341 +0000 UTC m=+1011.306315410" watchObservedRunningTime="2026-01-29 07:00:27.446133871 +0000 UTC m=+1011.307926940"
Jan 29 07:00:27 crc kubenswrapper[4826]: I0129 07:00:27.472013 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-p9gf8" podStartSLOduration=3.331562756 podStartE2EDuration="29.471993555s"
podCreationTimestamp="2026-01-29 06:59:58 +0000 UTC" firstStartedPulling="2026-01-29 06:59:59.76947389 +0000 UTC m=+983.631266959" lastFinishedPulling="2026-01-29 07:00:25.909904689 +0000 UTC m=+1009.771697758" observedRunningTime="2026-01-29 07:00:27.459783106 +0000 UTC m=+1011.321576175" watchObservedRunningTime="2026-01-29 07:00:27.471993555 +0000 UTC m=+1011.333786624"
Jan 29 07:00:27 crc kubenswrapper[4826]: I0129 07:00:27.485179 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-q6pbm" podStartSLOduration=3.548056806 podStartE2EDuration="29.485160627s" podCreationTimestamp="2026-01-29 06:59:58 +0000 UTC" firstStartedPulling="2026-01-29 06:59:59.861434884 +0000 UTC m=+983.723227953" lastFinishedPulling="2026-01-29 07:00:25.798538705 +0000 UTC m=+1009.660331774" observedRunningTime="2026-01-29 07:00:27.475345029 +0000 UTC m=+1011.337138118" watchObservedRunningTime="2026-01-29 07:00:27.485160627 +0000 UTC m=+1011.346953696"
Jan 29 07:00:27 crc kubenswrapper[4826]: I0129 07:00:27.497826 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-kk4wp" podStartSLOduration=3.3596411760000002 podStartE2EDuration="29.497788296s" podCreationTimestamp="2026-01-29 06:59:58 +0000 UTC" firstStartedPulling="2026-01-29 06:59:59.772376854 +0000 UTC m=+983.634169923" lastFinishedPulling="2026-01-29 07:00:25.910523974 +0000 UTC m=+1009.772317043" observedRunningTime="2026-01-29 07:00:27.492806251 +0000 UTC m=+1011.354599320" watchObservedRunningTime="2026-01-29 07:00:27.497788296 +0000 UTC m=+1011.359581365"
Jan 29 07:00:27 crc kubenswrapper[4826]: I0129 07:00:27.528059 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-x7x52" podStartSLOduration=4.419821656 podStartE2EDuration="30.52802736s"
podCreationTimestamp="2026-01-29 06:59:57 +0000 UTC" firstStartedPulling="2026-01-29 06:59:59.755162219 +0000 UTC m=+983.616955288" lastFinishedPulling="2026-01-29 07:00:25.863367883 +0000 UTC m=+1009.725160992" observedRunningTime="2026-01-29 07:00:27.518563191 +0000 UTC m=+1011.380356260" watchObservedRunningTime="2026-01-29 07:00:27.52802736 +0000 UTC m=+1011.389820429"
Jan 29 07:00:30 crc kubenswrapper[4826]: I0129 07:00:30.413083 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-8t8z8" event={"ID":"6a1bcc88-899b-4ada-9560-3e388b5d1a82","Type":"ContainerStarted","Data":"1dde2b4908c58f38ee86173378c45b4d6844aebc96ca7ef52ffcd1c3ee2e7eca"}
Jan 29 07:00:30 crc kubenswrapper[4826]: I0129 07:00:30.415190 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-8t8z8"
Jan 29 07:00:30 crc kubenswrapper[4826]: I0129 07:00:30.416067 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-hsjcj" event={"ID":"34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6","Type":"ContainerStarted","Data":"273bfcd7c0635333dc9238c92eefff55af32832f4ba4fbd5c63ea632b52d9064"}
Jan 29 07:00:30 crc kubenswrapper[4826]: I0129 07:00:30.416285 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-hsjcj"
Jan 29 07:00:30 crc kubenswrapper[4826]: I0129 07:00:30.417712 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-webhook-certs\") pod \"openstack-operator-controller-manager-7b54f464f6-dfl95\" (UID: \"e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5\") " pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-dfl95"
Jan 29 07:00:30 crc
kubenswrapper[4826]: I0129 07:00:30.430597 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5-webhook-certs\") pod \"openstack-operator-controller-manager-7b54f464f6-dfl95\" (UID: \"e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5\") " pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-dfl95"
Jan 29 07:00:30 crc kubenswrapper[4826]: I0129 07:00:30.477158 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-8t8z8" podStartSLOduration=29.934191157 podStartE2EDuration="33.476785712s" podCreationTimestamp="2026-01-29 06:59:57 +0000 UTC" firstStartedPulling="2026-01-29 07:00:25.798417912 +0000 UTC m=+1009.660210981" lastFinishedPulling="2026-01-29 07:00:29.341012427 +0000 UTC m=+1013.202805536" observedRunningTime="2026-01-29 07:00:30.438426013 +0000 UTC m=+1014.300219092" watchObservedRunningTime="2026-01-29 07:00:30.476785712 +0000 UTC m=+1014.338578791"
Jan 29 07:00:30 crc kubenswrapper[4826]: I0129 07:00:30.486656 4826 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-dfl95"
Jan 29 07:00:30 crc kubenswrapper[4826]: I0129 07:00:30.496937 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-hsjcj" podStartSLOduration=27.552692112 podStartE2EDuration="32.49692077s" podCreationTimestamp="2026-01-29 06:59:58 +0000 UTC" firstStartedPulling="2026-01-29 07:00:24.393142517 +0000 UTC m=+1008.254935626" lastFinishedPulling="2026-01-29 07:00:29.337371185 +0000 UTC m=+1013.199164284" observedRunningTime="2026-01-29 07:00:30.493529825 +0000 UTC m=+1014.355322904" watchObservedRunningTime="2026-01-29 07:00:30.49692077 +0000 UTC m=+1014.358713839"
Jan 29 07:00:31 crc kubenswrapper[4826]: I0129 07:00:31.036240 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7b54f464f6-dfl95"]
Jan 29 07:00:31 crc kubenswrapper[4826]: W0129 07:00:31.039514 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9a11c2b_0242_4fe7_8bbf_cf536de5f0b5.slice/crio-cc3e49adfd61a7c24c14ebbb8b664fc2cbd2f4e4abfc083ab28346804588e538 WatchSource:0}: Error finding container cc3e49adfd61a7c24c14ebbb8b664fc2cbd2f4e4abfc083ab28346804588e538: Status 404 returned error can't find the container with id cc3e49adfd61a7c24c14ebbb8b664fc2cbd2f4e4abfc083ab28346804588e538
Jan 29 07:00:31 crc kubenswrapper[4826]: I0129 07:00:31.428908 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-dfl95" event={"ID":"e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5","Type":"ContainerStarted","Data":"9e45a1411be5f11e9aa45dfd5d57e0cbada5be1ad09765fe31405fb5eb0ac756"}
Jan 29 07:00:31 crc kubenswrapper[4826]: I0129 07:00:31.429453 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-dfl95" event={"ID":"e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5","Type":"ContainerStarted","Data":"cc3e49adfd61a7c24c14ebbb8b664fc2cbd2f4e4abfc083ab28346804588e538"}
Jan 29 07:00:31 crc kubenswrapper[4826]: I0129 07:00:31.429515 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-dfl95"
Jan 29 07:00:31 crc kubenswrapper[4826]: I0129 07:00:31.476701 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-dfl95" podStartSLOduration=33.476669204 podStartE2EDuration="33.476669204s" podCreationTimestamp="2026-01-29 06:59:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:00:31.465192025 +0000 UTC m=+1015.326985124" watchObservedRunningTime="2026-01-29 07:00:31.476669204 +0000 UTC m=+1015.338462303"
Jan 29 07:00:34 crc kubenswrapper[4826]: I0129 07:00:34.071204 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-8t8z8"
Jan 29 07:00:34 crc kubenswrapper[4826]: I0129 07:00:34.356616 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-d5d667db8-hsjcj"
Jan 29 07:00:38 crc kubenswrapper[4826]: I0129 07:00:38.429118 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-x7x52"
Jan 29 07:00:38 crc kubenswrapper[4826]: I0129 07:00:38.443280 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-wdlx2"
Jan 29 07:00:38 crc kubenswrapper[4826]: I0129 07:00:38.513095 4826
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-j4bnt"
Jan 29 07:00:38 crc kubenswrapper[4826]: I0129 07:00:38.513949 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-jrkdp"
Jan 29 07:00:38 crc kubenswrapper[4826]: I0129 07:00:38.666937 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-kk4wp"
Jan 29 07:00:38 crc kubenswrapper[4826]: I0129 07:00:38.695693 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-p9gf8"
Jan 29 07:00:38 crc kubenswrapper[4826]: I0129 07:00:38.791248 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-9vkmm"
Jan 29 07:00:38 crc kubenswrapper[4826]: I0129 07:00:38.804984 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-hkgbs"
Jan 29 07:00:40 crc kubenswrapper[4826]: I0129 07:00:40.498772 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7b54f464f6-dfl95"
Jan 29 07:00:56 crc kubenswrapper[4826]: I0129 07:00:56.364401 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-mz9vk"]
Jan 29 07:00:56 crc kubenswrapper[4826]: E0129 07:00:56.365985 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cb515ad-6a84-4938-908a-6dc478741980" containerName="collect-profiles"
Jan 29 07:00:56 crc kubenswrapper[4826]: I0129 07:00:56.366005 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cb515ad-6a84-4938-908a-6dc478741980"
containerName="collect-profiles"
Jan 29 07:00:56 crc kubenswrapper[4826]: I0129 07:00:56.368470 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cb515ad-6a84-4938-908a-6dc478741980" containerName="collect-profiles"
Jan 29 07:00:56 crc kubenswrapper[4826]: I0129 07:00:56.370912 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-mz9vk"
Jan 29 07:00:56 crc kubenswrapper[4826]: I0129 07:00:56.383968 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Jan 29 07:00:56 crc kubenswrapper[4826]: I0129 07:00:56.384466 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Jan 29 07:00:56 crc kubenswrapper[4826]: I0129 07:00:56.384933 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Jan 29 07:00:56 crc kubenswrapper[4826]: I0129 07:00:56.385659 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-rvlkg"
Jan 29 07:00:56 crc kubenswrapper[4826]: I0129 07:00:56.402754 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-mz9vk"]
Jan 29 07:00:56 crc kubenswrapper[4826]: I0129 07:00:56.431010 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-7k46w"]
Jan 29 07:00:56 crc kubenswrapper[4826]: I0129 07:00:56.432263 4826 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-7k46w"
Jan 29 07:00:56 crc kubenswrapper[4826]: I0129 07:00:56.435627 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Jan 29 07:00:56 crc kubenswrapper[4826]: I0129 07:00:56.438634 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-7k46w"]
Jan 29 07:00:56 crc kubenswrapper[4826]: I0129 07:00:56.483099 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxc92\" (UniqueName: \"kubernetes.io/projected/0e33aab6-677d-4c6f-9f8b-71e04d65b8e8-kube-api-access-nxc92\") pod \"dnsmasq-dns-84bb9d8bd9-mz9vk\" (UID: \"0e33aab6-677d-4c6f-9f8b-71e04d65b8e8\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-mz9vk"
Jan 29 07:00:56 crc kubenswrapper[4826]: I0129 07:00:56.483168 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e33aab6-677d-4c6f-9f8b-71e04d65b8e8-config\") pod \"dnsmasq-dns-84bb9d8bd9-mz9vk\" (UID: \"0e33aab6-677d-4c6f-9f8b-71e04d65b8e8\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-mz9vk"
Jan 29 07:00:56 crc kubenswrapper[4826]: I0129 07:00:56.585264 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27bfj\" (UniqueName: \"kubernetes.io/projected/eef9f936-9b6b-4990-a606-1e8e3d910bfc-kube-api-access-27bfj\") pod \"dnsmasq-dns-5f854695bc-7k46w\" (UID: \"eef9f936-9b6b-4990-a606-1e8e3d910bfc\") " pod="openstack/dnsmasq-dns-5f854695bc-7k46w"
Jan 29 07:00:56 crc kubenswrapper[4826]: I0129 07:00:56.585431 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eef9f936-9b6b-4990-a606-1e8e3d910bfc-dns-svc\") pod \"dnsmasq-dns-5f854695bc-7k46w\" (UID: \"eef9f936-9b6b-4990-a606-1e8e3d910bfc\") "
pod="openstack/dnsmasq-dns-5f854695bc-7k46w"
Jan 29 07:00:56 crc kubenswrapper[4826]: I0129 07:00:56.585515 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxc92\" (UniqueName: \"kubernetes.io/projected/0e33aab6-677d-4c6f-9f8b-71e04d65b8e8-kube-api-access-nxc92\") pod \"dnsmasq-dns-84bb9d8bd9-mz9vk\" (UID: \"0e33aab6-677d-4c6f-9f8b-71e04d65b8e8\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-mz9vk"
Jan 29 07:00:56 crc kubenswrapper[4826]: I0129 07:00:56.585618 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eef9f936-9b6b-4990-a606-1e8e3d910bfc-config\") pod \"dnsmasq-dns-5f854695bc-7k46w\" (UID: \"eef9f936-9b6b-4990-a606-1e8e3d910bfc\") " pod="openstack/dnsmasq-dns-5f854695bc-7k46w"
Jan 29 07:00:56 crc kubenswrapper[4826]: I0129 07:00:56.585685 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e33aab6-677d-4c6f-9f8b-71e04d65b8e8-config\") pod \"dnsmasq-dns-84bb9d8bd9-mz9vk\" (UID: \"0e33aab6-677d-4c6f-9f8b-71e04d65b8e8\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-mz9vk"
Jan 29 07:00:56 crc kubenswrapper[4826]: I0129 07:00:56.586890 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e33aab6-677d-4c6f-9f8b-71e04d65b8e8-config\") pod \"dnsmasq-dns-84bb9d8bd9-mz9vk\" (UID: \"0e33aab6-677d-4c6f-9f8b-71e04d65b8e8\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-mz9vk"
Jan 29 07:00:56 crc kubenswrapper[4826]: I0129 07:00:56.618522 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxc92\" (UniqueName: \"kubernetes.io/projected/0e33aab6-677d-4c6f-9f8b-71e04d65b8e8-kube-api-access-nxc92\") pod \"dnsmasq-dns-84bb9d8bd9-mz9vk\" (UID: \"0e33aab6-677d-4c6f-9f8b-71e04d65b8e8\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-mz9vk"
Jan 29 07:00:56 crc
kubenswrapper[4826]: I0129 07:00:56.687580 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eef9f936-9b6b-4990-a606-1e8e3d910bfc-config\") pod \"dnsmasq-dns-5f854695bc-7k46w\" (UID: \"eef9f936-9b6b-4990-a606-1e8e3d910bfc\") " pod="openstack/dnsmasq-dns-5f854695bc-7k46w"
Jan 29 07:00:56 crc kubenswrapper[4826]: I0129 07:00:56.687673 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27bfj\" (UniqueName: \"kubernetes.io/projected/eef9f936-9b6b-4990-a606-1e8e3d910bfc-kube-api-access-27bfj\") pod \"dnsmasq-dns-5f854695bc-7k46w\" (UID: \"eef9f936-9b6b-4990-a606-1e8e3d910bfc\") " pod="openstack/dnsmasq-dns-5f854695bc-7k46w"
Jan 29 07:00:56 crc kubenswrapper[4826]: I0129 07:00:56.687710 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eef9f936-9b6b-4990-a606-1e8e3d910bfc-dns-svc\") pod \"dnsmasq-dns-5f854695bc-7k46w\" (UID: \"eef9f936-9b6b-4990-a606-1e8e3d910bfc\") " pod="openstack/dnsmasq-dns-5f854695bc-7k46w"
Jan 29 07:00:56 crc kubenswrapper[4826]: I0129 07:00:56.703586 4826 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-mz9vk"
Jan 29 07:00:56 crc kubenswrapper[4826]: I0129 07:00:56.794840 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eef9f936-9b6b-4990-a606-1e8e3d910bfc-config\") pod \"dnsmasq-dns-5f854695bc-7k46w\" (UID: \"eef9f936-9b6b-4990-a606-1e8e3d910bfc\") " pod="openstack/dnsmasq-dns-5f854695bc-7k46w"
Jan 29 07:00:56 crc kubenswrapper[4826]: I0129 07:00:56.799271 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eef9f936-9b6b-4990-a606-1e8e3d910bfc-dns-svc\") pod \"dnsmasq-dns-5f854695bc-7k46w\" (UID: \"eef9f936-9b6b-4990-a606-1e8e3d910bfc\") " pod="openstack/dnsmasq-dns-5f854695bc-7k46w"
Jan 29 07:00:56 crc kubenswrapper[4826]: I0129 07:00:56.801589 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27bfj\" (UniqueName: \"kubernetes.io/projected/eef9f936-9b6b-4990-a606-1e8e3d910bfc-kube-api-access-27bfj\") pod \"dnsmasq-dns-5f854695bc-7k46w\" (UID: \"eef9f936-9b6b-4990-a606-1e8e3d910bfc\") " pod="openstack/dnsmasq-dns-5f854695bc-7k46w"
Jan 29 07:00:57 crc kubenswrapper[4826]: I0129 07:00:57.051989 4826 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-7k46w"
Jan 29 07:00:57 crc kubenswrapper[4826]: I0129 07:00:57.157146 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-mz9vk"]
Jan 29 07:00:57 crc kubenswrapper[4826]: I0129 07:00:57.166045 4826 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 29 07:00:57 crc kubenswrapper[4826]: I0129 07:00:57.299087 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-7k46w"]
Jan 29 07:00:57 crc kubenswrapper[4826]: W0129 07:00:57.300254 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeef9f936_9b6b_4990_a606_1e8e3d910bfc.slice/crio-513ba432b58e3cfac0c67a1c6e067d33cf471c8e7f5b22c4f8dcdac021e4c2a8 WatchSource:0}: Error finding container 513ba432b58e3cfac0c67a1c6e067d33cf471c8e7f5b22c4f8dcdac021e4c2a8: Status 404 returned error can't find the container with id 513ba432b58e3cfac0c67a1c6e067d33cf471c8e7f5b22c4f8dcdac021e4c2a8
Jan 29 07:00:57 crc kubenswrapper[4826]: I0129 07:00:57.688499 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-7k46w" event={"ID":"eef9f936-9b6b-4990-a606-1e8e3d910bfc","Type":"ContainerStarted","Data":"513ba432b58e3cfac0c67a1c6e067d33cf471c8e7f5b22c4f8dcdac021e4c2a8"}
Jan 29 07:00:57 crc kubenswrapper[4826]: I0129 07:00:57.690470 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-mz9vk" event={"ID":"0e33aab6-677d-4c6f-9f8b-71e04d65b8e8","Type":"ContainerStarted","Data":"36ccc107e64a98f0371b13bf5750ae9989188cf3ed15bfe01e33675545f7c838"}
Jan 29 07:00:58 crc kubenswrapper[4826]: I0129 07:00:58.416305 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-7k46w"]
Jan 29 07:00:58 crc kubenswrapper[4826]: I0129 07:00:58.457423 4826 kubelet.go:2421] "SyncLoop ADD"
source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-dxdmv"]
Jan 29 07:00:58 crc kubenswrapper[4826]: I0129 07:00:58.458888 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c7cbb8f79-dxdmv"
Jan 29 07:00:58 crc kubenswrapper[4826]: I0129 07:00:58.467641 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-dxdmv"]
Jan 29 07:00:58 crc kubenswrapper[4826]: I0129 07:00:58.620791 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaa37e86-663c-41ba-a3cd-b144bf92805e-config\") pod \"dnsmasq-dns-c7cbb8f79-dxdmv\" (UID: \"aaa37e86-663c-41ba-a3cd-b144bf92805e\") " pod="openstack/dnsmasq-dns-c7cbb8f79-dxdmv"
Jan 29 07:00:58 crc kubenswrapper[4826]: I0129 07:00:58.620878 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aaa37e86-663c-41ba-a3cd-b144bf92805e-dns-svc\") pod \"dnsmasq-dns-c7cbb8f79-dxdmv\" (UID: \"aaa37e86-663c-41ba-a3cd-b144bf92805e\") " pod="openstack/dnsmasq-dns-c7cbb8f79-dxdmv"
Jan 29 07:00:58 crc kubenswrapper[4826]: I0129 07:00:58.620911 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f26n5\" (UniqueName: \"kubernetes.io/projected/aaa37e86-663c-41ba-a3cd-b144bf92805e-kube-api-access-f26n5\") pod \"dnsmasq-dns-c7cbb8f79-dxdmv\" (UID: \"aaa37e86-663c-41ba-a3cd-b144bf92805e\") " pod="openstack/dnsmasq-dns-c7cbb8f79-dxdmv"
Jan 29 07:00:58 crc kubenswrapper[4826]: I0129 07:00:58.721860 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f26n5\" (UniqueName: \"kubernetes.io/projected/aaa37e86-663c-41ba-a3cd-b144bf92805e-kube-api-access-f26n5\") pod \"dnsmasq-dns-c7cbb8f79-dxdmv\" (UID: \"aaa37e86-663c-41ba-a3cd-b144bf92805e\") "
pod="openstack/dnsmasq-dns-c7cbb8f79-dxdmv"
Jan 29 07:00:58 crc kubenswrapper[4826]: I0129 07:00:58.721956 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaa37e86-663c-41ba-a3cd-b144bf92805e-config\") pod \"dnsmasq-dns-c7cbb8f79-dxdmv\" (UID: \"aaa37e86-663c-41ba-a3cd-b144bf92805e\") " pod="openstack/dnsmasq-dns-c7cbb8f79-dxdmv"
Jan 29 07:00:58 crc kubenswrapper[4826]: I0129 07:00:58.722035 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aaa37e86-663c-41ba-a3cd-b144bf92805e-dns-svc\") pod \"dnsmasq-dns-c7cbb8f79-dxdmv\" (UID: \"aaa37e86-663c-41ba-a3cd-b144bf92805e\") " pod="openstack/dnsmasq-dns-c7cbb8f79-dxdmv"
Jan 29 07:00:58 crc kubenswrapper[4826]: I0129 07:00:58.722933 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaa37e86-663c-41ba-a3cd-b144bf92805e-config\") pod \"dnsmasq-dns-c7cbb8f79-dxdmv\" (UID: \"aaa37e86-663c-41ba-a3cd-b144bf92805e\") " pod="openstack/dnsmasq-dns-c7cbb8f79-dxdmv"
Jan 29 07:00:58 crc kubenswrapper[4826]: I0129 07:00:58.722979 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aaa37e86-663c-41ba-a3cd-b144bf92805e-dns-svc\") pod \"dnsmasq-dns-c7cbb8f79-dxdmv\" (UID: \"aaa37e86-663c-41ba-a3cd-b144bf92805e\") " pod="openstack/dnsmasq-dns-c7cbb8f79-dxdmv"
Jan 29 07:00:58 crc kubenswrapper[4826]: I0129 07:00:58.745875 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f26n5\" (UniqueName: \"kubernetes.io/projected/aaa37e86-663c-41ba-a3cd-b144bf92805e-kube-api-access-f26n5\") pod \"dnsmasq-dns-c7cbb8f79-dxdmv\" (UID: \"aaa37e86-663c-41ba-a3cd-b144bf92805e\") " pod="openstack/dnsmasq-dns-c7cbb8f79-dxdmv"
Jan 29 07:00:58 crc kubenswrapper[4826]: I0129 07:00:58.799781 4826 util.go:30] "No
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c7cbb8f79-dxdmv"
Jan 29 07:00:58 crc kubenswrapper[4826]: I0129 07:00:58.897113 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-mz9vk"]
Jan 29 07:00:58 crc kubenswrapper[4826]: I0129 07:00:58.935011 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-vhzmk"]
Jan 29 07:00:58 crc kubenswrapper[4826]: I0129 07:00:58.937120 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-vhzmk"
Jan 29 07:00:58 crc kubenswrapper[4826]: I0129 07:00:58.958797 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-vhzmk"]
Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.037453 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4mwh\" (UniqueName: \"kubernetes.io/projected/68f776f5-5464-4259-9466-2ebe6093e673-kube-api-access-s4mwh\") pod \"dnsmasq-dns-95f5f6995-vhzmk\" (UID: \"68f776f5-5464-4259-9466-2ebe6093e673\") " pod="openstack/dnsmasq-dns-95f5f6995-vhzmk"
Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.037535 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68f776f5-5464-4259-9466-2ebe6093e673-dns-svc\") pod \"dnsmasq-dns-95f5f6995-vhzmk\" (UID: \"68f776f5-5464-4259-9466-2ebe6093e673\") " pod="openstack/dnsmasq-dns-95f5f6995-vhzmk"
Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.037603 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68f776f5-5464-4259-9466-2ebe6093e673-config\") pod \"dnsmasq-dns-95f5f6995-vhzmk\" (UID: \"68f776f5-5464-4259-9466-2ebe6093e673\") " pod="openstack/dnsmasq-dns-95f5f6995-vhzmk"
Jan 29 07:00:59 crc
kubenswrapper[4826]: I0129 07:00:59.138552 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68f776f5-5464-4259-9466-2ebe6093e673-dns-svc\") pod \"dnsmasq-dns-95f5f6995-vhzmk\" (UID: \"68f776f5-5464-4259-9466-2ebe6093e673\") " pod="openstack/dnsmasq-dns-95f5f6995-vhzmk"
Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.138632 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68f776f5-5464-4259-9466-2ebe6093e673-config\") pod \"dnsmasq-dns-95f5f6995-vhzmk\" (UID: \"68f776f5-5464-4259-9466-2ebe6093e673\") " pod="openstack/dnsmasq-dns-95f5f6995-vhzmk"
Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.138693 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4mwh\" (UniqueName: \"kubernetes.io/projected/68f776f5-5464-4259-9466-2ebe6093e673-kube-api-access-s4mwh\") pod \"dnsmasq-dns-95f5f6995-vhzmk\" (UID: \"68f776f5-5464-4259-9466-2ebe6093e673\") " pod="openstack/dnsmasq-dns-95f5f6995-vhzmk"
Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.140127 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68f776f5-5464-4259-9466-2ebe6093e673-dns-svc\") pod \"dnsmasq-dns-95f5f6995-vhzmk\" (UID: \"68f776f5-5464-4259-9466-2ebe6093e673\") " pod="openstack/dnsmasq-dns-95f5f6995-vhzmk"
Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.140708 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68f776f5-5464-4259-9466-2ebe6093e673-config\") pod \"dnsmasq-dns-95f5f6995-vhzmk\" (UID: \"68f776f5-5464-4259-9466-2ebe6093e673\") " pod="openstack/dnsmasq-dns-95f5f6995-vhzmk"
Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.164547 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume
\"kube-api-access-s4mwh\" (UniqueName: \"kubernetes.io/projected/68f776f5-5464-4259-9466-2ebe6093e673-kube-api-access-s4mwh\") pod \"dnsmasq-dns-95f5f6995-vhzmk\" (UID: \"68f776f5-5464-4259-9466-2ebe6093e673\") " pod="openstack/dnsmasq-dns-95f5f6995-vhzmk" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.256747 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-vhzmk" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.365101 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-dxdmv"] Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.532912 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-vhzmk"] Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.555520 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.557325 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.561897 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.563741 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.563891 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.564116 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.564231 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.564497 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.564884 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-7gb68" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.574157 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.647713 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.647757 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-ww4xx\" (UniqueName: \"kubernetes.io/projected/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-kube-api-access-ww4xx\") pod \"rabbitmq-cell1-server-0\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.647790 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.647812 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.647857 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.647875 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.647903 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.647919 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.647937 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.647960 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.647979 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.713106 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c7cbb8f79-dxdmv" 
event={"ID":"aaa37e86-663c-41ba-a3cd-b144bf92805e","Type":"ContainerStarted","Data":"71b44553a62426e2103caacf5e61f03cc6dd3f0b79346df27779daf6f4dd215e"} Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.715086 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-vhzmk" event={"ID":"68f776f5-5464-4259-9466-2ebe6093e673","Type":"ContainerStarted","Data":"aa69480dcef889206f4c72fd660e436b6a371db29ad9a516de96efe0206220c9"} Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.752199 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.752248 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.752292 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.752325 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 
07:00:59.752351 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.752368 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.752387 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.752444 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.752466 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww4xx\" (UniqueName: \"kubernetes.io/projected/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-kube-api-access-ww4xx\") pod \"rabbitmq-cell1-server-0\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.752489 4826 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.752510 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.752790 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.753491 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.753517 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.753727 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.754116 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.754169 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.758610 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.759588 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.767940 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.770899 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww4xx\" (UniqueName: \"kubernetes.io/projected/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-kube-api-access-ww4xx\") pod \"rabbitmq-cell1-server-0\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.771477 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.775527 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:00:59 crc kubenswrapper[4826]: I0129 07:00:59.890593 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.066230 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.067382 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.070242 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.071697 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.074412 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-vc42p" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.074595 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.075166 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.077936 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.078370 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.094769 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.159086 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1794f620-102a-4b9c-9097-713579ec55ad-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " pod="openstack/rabbitmq-server-0" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.159133 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/1794f620-102a-4b9c-9097-713579ec55ad-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " pod="openstack/rabbitmq-server-0" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.159163 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1794f620-102a-4b9c-9097-713579ec55ad-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " pod="openstack/rabbitmq-server-0" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.159288 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " pod="openstack/rabbitmq-server-0" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.159380 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1794f620-102a-4b9c-9097-713579ec55ad-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " pod="openstack/rabbitmq-server-0" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.159430 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1794f620-102a-4b9c-9097-713579ec55ad-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " pod="openstack/rabbitmq-server-0" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.159524 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1794f620-102a-4b9c-9097-713579ec55ad-rabbitmq-tls\") pod 
\"rabbitmq-server-0\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " pod="openstack/rabbitmq-server-0" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.159559 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1794f620-102a-4b9c-9097-713579ec55ad-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " pod="openstack/rabbitmq-server-0" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.159577 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1794f620-102a-4b9c-9097-713579ec55ad-config-data\") pod \"rabbitmq-server-0\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " pod="openstack/rabbitmq-server-0" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.159666 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1794f620-102a-4b9c-9097-713579ec55ad-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " pod="openstack/rabbitmq-server-0" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.159692 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhqxf\" (UniqueName: \"kubernetes.io/projected/1794f620-102a-4b9c-9097-713579ec55ad-kube-api-access-rhqxf\") pod \"rabbitmq-server-0\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " pod="openstack/rabbitmq-server-0" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.260873 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1794f620-102a-4b9c-9097-713579ec55ad-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " 
pod="openstack/rabbitmq-server-0" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.260923 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1794f620-102a-4b9c-9097-713579ec55ad-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " pod="openstack/rabbitmq-server-0" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.260952 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1794f620-102a-4b9c-9097-713579ec55ad-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " pod="openstack/rabbitmq-server-0" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.260979 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " pod="openstack/rabbitmq-server-0" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.261025 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1794f620-102a-4b9c-9097-713579ec55ad-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " pod="openstack/rabbitmq-server-0" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.261058 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1794f620-102a-4b9c-9097-713579ec55ad-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " pod="openstack/rabbitmq-server-0" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.261097 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1794f620-102a-4b9c-9097-713579ec55ad-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " pod="openstack/rabbitmq-server-0" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.261119 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1794f620-102a-4b9c-9097-713579ec55ad-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " pod="openstack/rabbitmq-server-0" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.261134 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1794f620-102a-4b9c-9097-713579ec55ad-config-data\") pod \"rabbitmq-server-0\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " pod="openstack/rabbitmq-server-0" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.261156 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1794f620-102a-4b9c-9097-713579ec55ad-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " pod="openstack/rabbitmq-server-0" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.261174 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhqxf\" (UniqueName: \"kubernetes.io/projected/1794f620-102a-4b9c-9097-713579ec55ad-kube-api-access-rhqxf\") pod \"rabbitmq-server-0\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " pod="openstack/rabbitmq-server-0" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.262225 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1794f620-102a-4b9c-9097-713579ec55ad-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: 
\"1794f620-102a-4b9c-9097-713579ec55ad\") " pod="openstack/rabbitmq-server-0" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.262669 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1794f620-102a-4b9c-9097-713579ec55ad-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " pod="openstack/rabbitmq-server-0" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.262831 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1794f620-102a-4b9c-9097-713579ec55ad-config-data\") pod \"rabbitmq-server-0\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " pod="openstack/rabbitmq-server-0" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.262953 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-server-0" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.263476 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1794f620-102a-4b9c-9097-713579ec55ad-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " pod="openstack/rabbitmq-server-0" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.263441 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1794f620-102a-4b9c-9097-713579ec55ad-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " pod="openstack/rabbitmq-server-0" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.267103 4826 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1794f620-102a-4b9c-9097-713579ec55ad-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " pod="openstack/rabbitmq-server-0" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.271505 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1794f620-102a-4b9c-9097-713579ec55ad-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " pod="openstack/rabbitmq-server-0" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.271721 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1794f620-102a-4b9c-9097-713579ec55ad-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " pod="openstack/rabbitmq-server-0" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.279824 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhqxf\" (UniqueName: \"kubernetes.io/projected/1794f620-102a-4b9c-9097-713579ec55ad-kube-api-access-rhqxf\") pod \"rabbitmq-server-0\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " pod="openstack/rabbitmq-server-0" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.288502 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1794f620-102a-4b9c-9097-713579ec55ad-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " pod="openstack/rabbitmq-server-0" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.300510 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " 
pod="openstack/rabbitmq-server-0" Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.361471 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.399948 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 07:01:00 crc kubenswrapper[4826]: W0129 07:01:00.407020 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0da3bc6b_99a0_4de9_9479_5aaef8bfd81c.slice/crio-d7b05a926283f388e3d5789ea2a0b8a4cd6c939d6df91d3d288ef854c9ded18e WatchSource:0}: Error finding container d7b05a926283f388e3d5789ea2a0b8a4cd6c939d6df91d3d288ef854c9ded18e: Status 404 returned error can't find the container with id d7b05a926283f388e3d5789ea2a0b8a4cd6c939d6df91d3d288ef854c9ded18e Jan 29 07:01:00 crc kubenswrapper[4826]: I0129 07:01:00.735730 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c","Type":"ContainerStarted","Data":"d7b05a926283f388e3d5789ea2a0b8a4cd6c939d6df91d3d288ef854c9ded18e"} Jan 29 07:01:01 crc kubenswrapper[4826]: I0129 07:01:01.095731 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 07:01:01 crc kubenswrapper[4826]: I0129 07:01:01.686024 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 29 07:01:01 crc kubenswrapper[4826]: I0129 07:01:01.688198 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 29 07:01:01 crc kubenswrapper[4826]: I0129 07:01:01.695371 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 29 07:01:01 crc kubenswrapper[4826]: I0129 07:01:01.699015 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-7nkq5" Jan 29 07:01:01 crc kubenswrapper[4826]: I0129 07:01:01.703186 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 29 07:01:01 crc kubenswrapper[4826]: I0129 07:01:01.703471 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 29 07:01:01 crc kubenswrapper[4826]: I0129 07:01:01.703605 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 29 07:01:01 crc kubenswrapper[4826]: I0129 07:01:01.754547 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 29 07:01:01 crc kubenswrapper[4826]: I0129 07:01:01.771563 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1794f620-102a-4b9c-9097-713579ec55ad","Type":"ContainerStarted","Data":"ec3729f30c53e6fd1d8004de6ecf626045de1ac8f9fb0ede2094e6e4cd1dba8c"} Jan 29 07:01:01 crc kubenswrapper[4826]: I0129 07:01:01.797335 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/426997bd-6ba1-4ebb-b8d3-08be081add91-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"426997bd-6ba1-4ebb-b8d3-08be081add91\") " pod="openstack/openstack-galera-0" Jan 29 07:01:01 crc kubenswrapper[4826]: I0129 07:01:01.797393 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"426997bd-6ba1-4ebb-b8d3-08be081add91\") " pod="openstack/openstack-galera-0" Jan 29 07:01:01 crc kubenswrapper[4826]: I0129 07:01:01.797418 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/426997bd-6ba1-4ebb-b8d3-08be081add91-operator-scripts\") pod \"openstack-galera-0\" (UID: \"426997bd-6ba1-4ebb-b8d3-08be081add91\") " pod="openstack/openstack-galera-0" Jan 29 07:01:01 crc kubenswrapper[4826]: I0129 07:01:01.797467 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/426997bd-6ba1-4ebb-b8d3-08be081add91-config-data-generated\") pod \"openstack-galera-0\" (UID: \"426997bd-6ba1-4ebb-b8d3-08be081add91\") " pod="openstack/openstack-galera-0" Jan 29 07:01:01 crc kubenswrapper[4826]: I0129 07:01:01.797497 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/426997bd-6ba1-4ebb-b8d3-08be081add91-kolla-config\") pod \"openstack-galera-0\" (UID: \"426997bd-6ba1-4ebb-b8d3-08be081add91\") " pod="openstack/openstack-galera-0" Jan 29 07:01:01 crc kubenswrapper[4826]: I0129 07:01:01.797521 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/426997bd-6ba1-4ebb-b8d3-08be081add91-config-data-default\") pod \"openstack-galera-0\" (UID: \"426997bd-6ba1-4ebb-b8d3-08be081add91\") " pod="openstack/openstack-galera-0" Jan 29 07:01:01 crc kubenswrapper[4826]: I0129 07:01:01.797553 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/426997bd-6ba1-4ebb-b8d3-08be081add91-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"426997bd-6ba1-4ebb-b8d3-08be081add91\") " pod="openstack/openstack-galera-0" Jan 29 07:01:01 crc kubenswrapper[4826]: I0129 07:01:01.797571 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sqd5\" (UniqueName: \"kubernetes.io/projected/426997bd-6ba1-4ebb-b8d3-08be081add91-kube-api-access-9sqd5\") pod \"openstack-galera-0\" (UID: \"426997bd-6ba1-4ebb-b8d3-08be081add91\") " pod="openstack/openstack-galera-0" Jan 29 07:01:01 crc kubenswrapper[4826]: I0129 07:01:01.899318 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/426997bd-6ba1-4ebb-b8d3-08be081add91-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"426997bd-6ba1-4ebb-b8d3-08be081add91\") " pod="openstack/openstack-galera-0" Jan 29 07:01:01 crc kubenswrapper[4826]: I0129 07:01:01.899373 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"426997bd-6ba1-4ebb-b8d3-08be081add91\") " pod="openstack/openstack-galera-0" Jan 29 07:01:01 crc kubenswrapper[4826]: I0129 07:01:01.899394 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/426997bd-6ba1-4ebb-b8d3-08be081add91-operator-scripts\") pod \"openstack-galera-0\" (UID: \"426997bd-6ba1-4ebb-b8d3-08be081add91\") " pod="openstack/openstack-galera-0" Jan 29 07:01:01 crc kubenswrapper[4826]: I0129 07:01:01.899456 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/426997bd-6ba1-4ebb-b8d3-08be081add91-config-data-generated\") pod \"openstack-galera-0\" (UID: 
\"426997bd-6ba1-4ebb-b8d3-08be081add91\") " pod="openstack/openstack-galera-0" Jan 29 07:01:01 crc kubenswrapper[4826]: I0129 07:01:01.899486 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/426997bd-6ba1-4ebb-b8d3-08be081add91-kolla-config\") pod \"openstack-galera-0\" (UID: \"426997bd-6ba1-4ebb-b8d3-08be081add91\") " pod="openstack/openstack-galera-0" Jan 29 07:01:01 crc kubenswrapper[4826]: I0129 07:01:01.899516 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/426997bd-6ba1-4ebb-b8d3-08be081add91-config-data-default\") pod \"openstack-galera-0\" (UID: \"426997bd-6ba1-4ebb-b8d3-08be081add91\") " pod="openstack/openstack-galera-0" Jan 29 07:01:01 crc kubenswrapper[4826]: I0129 07:01:01.899545 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/426997bd-6ba1-4ebb-b8d3-08be081add91-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"426997bd-6ba1-4ebb-b8d3-08be081add91\") " pod="openstack/openstack-galera-0" Jan 29 07:01:01 crc kubenswrapper[4826]: I0129 07:01:01.899561 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sqd5\" (UniqueName: \"kubernetes.io/projected/426997bd-6ba1-4ebb-b8d3-08be081add91-kube-api-access-9sqd5\") pod \"openstack-galera-0\" (UID: \"426997bd-6ba1-4ebb-b8d3-08be081add91\") " pod="openstack/openstack-galera-0" Jan 29 07:01:01 crc kubenswrapper[4826]: I0129 07:01:01.899855 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"426997bd-6ba1-4ebb-b8d3-08be081add91\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-galera-0" Jan 29 07:01:01 crc 
kubenswrapper[4826]: I0129 07:01:01.900032 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/426997bd-6ba1-4ebb-b8d3-08be081add91-config-data-generated\") pod \"openstack-galera-0\" (UID: \"426997bd-6ba1-4ebb-b8d3-08be081add91\") " pod="openstack/openstack-galera-0" Jan 29 07:01:01 crc kubenswrapper[4826]: I0129 07:01:01.900808 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/426997bd-6ba1-4ebb-b8d3-08be081add91-kolla-config\") pod \"openstack-galera-0\" (UID: \"426997bd-6ba1-4ebb-b8d3-08be081add91\") " pod="openstack/openstack-galera-0" Jan 29 07:01:01 crc kubenswrapper[4826]: I0129 07:01:01.902105 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/426997bd-6ba1-4ebb-b8d3-08be081add91-config-data-default\") pod \"openstack-galera-0\" (UID: \"426997bd-6ba1-4ebb-b8d3-08be081add91\") " pod="openstack/openstack-galera-0" Jan 29 07:01:01 crc kubenswrapper[4826]: I0129 07:01:01.904133 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/426997bd-6ba1-4ebb-b8d3-08be081add91-operator-scripts\") pod \"openstack-galera-0\" (UID: \"426997bd-6ba1-4ebb-b8d3-08be081add91\") " pod="openstack/openstack-galera-0" Jan 29 07:01:01 crc kubenswrapper[4826]: I0129 07:01:01.923223 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/426997bd-6ba1-4ebb-b8d3-08be081add91-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"426997bd-6ba1-4ebb-b8d3-08be081add91\") " pod="openstack/openstack-galera-0" Jan 29 07:01:01 crc kubenswrapper[4826]: I0129 07:01:01.923326 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/426997bd-6ba1-4ebb-b8d3-08be081add91-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"426997bd-6ba1-4ebb-b8d3-08be081add91\") " pod="openstack/openstack-galera-0" Jan 29 07:01:01 crc kubenswrapper[4826]: I0129 07:01:01.932035 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sqd5\" (UniqueName: \"kubernetes.io/projected/426997bd-6ba1-4ebb-b8d3-08be081add91-kube-api-access-9sqd5\") pod \"openstack-galera-0\" (UID: \"426997bd-6ba1-4ebb-b8d3-08be081add91\") " pod="openstack/openstack-galera-0" Jan 29 07:01:01 crc kubenswrapper[4826]: I0129 07:01:01.939764 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"426997bd-6ba1-4ebb-b8d3-08be081add91\") " pod="openstack/openstack-galera-0" Jan 29 07:01:02 crc kubenswrapper[4826]: I0129 07:01:02.048267 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.033085 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.034281 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.041208 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.041416 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.041581 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.041735 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-h84xq" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.049290 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.121067 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/454b9218-d564-4664-b1dd-4435fa9c60b7-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"454b9218-d564-4664-b1dd-4435fa9c60b7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.121124 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/454b9218-d564-4664-b1dd-4435fa9c60b7-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"454b9218-d564-4664-b1dd-4435fa9c60b7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.121249 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/454b9218-d564-4664-b1dd-4435fa9c60b7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"454b9218-d564-4664-b1dd-4435fa9c60b7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.121287 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"454b9218-d564-4664-b1dd-4435fa9c60b7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.121322 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/454b9218-d564-4664-b1dd-4435fa9c60b7-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"454b9218-d564-4664-b1dd-4435fa9c60b7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.121357 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25s4r\" (UniqueName: \"kubernetes.io/projected/454b9218-d564-4664-b1dd-4435fa9c60b7-kube-api-access-25s4r\") pod \"openstack-cell1-galera-0\" (UID: \"454b9218-d564-4664-b1dd-4435fa9c60b7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.121381 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/454b9218-d564-4664-b1dd-4435fa9c60b7-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"454b9218-d564-4664-b1dd-4435fa9c60b7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.121447 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/454b9218-d564-4664-b1dd-4435fa9c60b7-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"454b9218-d564-4664-b1dd-4435fa9c60b7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.223221 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25s4r\" (UniqueName: \"kubernetes.io/projected/454b9218-d564-4664-b1dd-4435fa9c60b7-kube-api-access-25s4r\") pod \"openstack-cell1-galera-0\" (UID: \"454b9218-d564-4664-b1dd-4435fa9c60b7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.223346 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/454b9218-d564-4664-b1dd-4435fa9c60b7-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"454b9218-d564-4664-b1dd-4435fa9c60b7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.223391 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/454b9218-d564-4664-b1dd-4435fa9c60b7-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"454b9218-d564-4664-b1dd-4435fa9c60b7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.223418 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/454b9218-d564-4664-b1dd-4435fa9c60b7-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"454b9218-d564-4664-b1dd-4435fa9c60b7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.223436 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/454b9218-d564-4664-b1dd-4435fa9c60b7-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"454b9218-d564-4664-b1dd-4435fa9c60b7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.223523 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/454b9218-d564-4664-b1dd-4435fa9c60b7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"454b9218-d564-4664-b1dd-4435fa9c60b7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.223553 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"454b9218-d564-4664-b1dd-4435fa9c60b7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.223613 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/454b9218-d564-4664-b1dd-4435fa9c60b7-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"454b9218-d564-4664-b1dd-4435fa9c60b7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.224593 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/454b9218-d564-4664-b1dd-4435fa9c60b7-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"454b9218-d564-4664-b1dd-4435fa9c60b7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.225513 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/454b9218-d564-4664-b1dd-4435fa9c60b7-kolla-config\") pod \"openstack-cell1-galera-0\" 
(UID: \"454b9218-d564-4664-b1dd-4435fa9c60b7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.225760 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/454b9218-d564-4664-b1dd-4435fa9c60b7-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"454b9218-d564-4664-b1dd-4435fa9c60b7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.226840 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/454b9218-d564-4664-b1dd-4435fa9c60b7-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"454b9218-d564-4664-b1dd-4435fa9c60b7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.227015 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"454b9218-d564-4664-b1dd-4435fa9c60b7\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-cell1-galera-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.239092 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/454b9218-d564-4664-b1dd-4435fa9c60b7-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"454b9218-d564-4664-b1dd-4435fa9c60b7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.244425 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/454b9218-d564-4664-b1dd-4435fa9c60b7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"454b9218-d564-4664-b1dd-4435fa9c60b7\") " 
pod="openstack/openstack-cell1-galera-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.258584 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25s4r\" (UniqueName: \"kubernetes.io/projected/454b9218-d564-4664-b1dd-4435fa9c60b7-kube-api-access-25s4r\") pod \"openstack-cell1-galera-0\" (UID: \"454b9218-d564-4664-b1dd-4435fa9c60b7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.271993 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"454b9218-d564-4664-b1dd-4435fa9c60b7\") " pod="openstack/openstack-cell1-galera-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.301850 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.302766 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.304519 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.304587 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-sls2c" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.307485 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.316384 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.366931 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.428871 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af36d2e1-464b-4ada-9b91-2c18c52502d1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"af36d2e1-464b-4ada-9b91-2c18c52502d1\") " pod="openstack/memcached-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.428930 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/af36d2e1-464b-4ada-9b91-2c18c52502d1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"af36d2e1-464b-4ada-9b91-2c18c52502d1\") " pod="openstack/memcached-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.428970 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/af36d2e1-464b-4ada-9b91-2c18c52502d1-kolla-config\") pod \"memcached-0\" (UID: \"af36d2e1-464b-4ada-9b91-2c18c52502d1\") " pod="openstack/memcached-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.429060 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/af36d2e1-464b-4ada-9b91-2c18c52502d1-config-data\") pod \"memcached-0\" (UID: \"af36d2e1-464b-4ada-9b91-2c18c52502d1\") " pod="openstack/memcached-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.429124 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxb7l\" (UniqueName: \"kubernetes.io/projected/af36d2e1-464b-4ada-9b91-2c18c52502d1-kube-api-access-pxb7l\") pod \"memcached-0\" (UID: \"af36d2e1-464b-4ada-9b91-2c18c52502d1\") " pod="openstack/memcached-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 
07:01:03.530595 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af36d2e1-464b-4ada-9b91-2c18c52502d1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"af36d2e1-464b-4ada-9b91-2c18c52502d1\") " pod="openstack/memcached-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.530661 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/af36d2e1-464b-4ada-9b91-2c18c52502d1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"af36d2e1-464b-4ada-9b91-2c18c52502d1\") " pod="openstack/memcached-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.530699 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/af36d2e1-464b-4ada-9b91-2c18c52502d1-kolla-config\") pod \"memcached-0\" (UID: \"af36d2e1-464b-4ada-9b91-2c18c52502d1\") " pod="openstack/memcached-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.530714 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/af36d2e1-464b-4ada-9b91-2c18c52502d1-config-data\") pod \"memcached-0\" (UID: \"af36d2e1-464b-4ada-9b91-2c18c52502d1\") " pod="openstack/memcached-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.530733 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxb7l\" (UniqueName: \"kubernetes.io/projected/af36d2e1-464b-4ada-9b91-2c18c52502d1-kube-api-access-pxb7l\") pod \"memcached-0\" (UID: \"af36d2e1-464b-4ada-9b91-2c18c52502d1\") " pod="openstack/memcached-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.531670 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/af36d2e1-464b-4ada-9b91-2c18c52502d1-kolla-config\") pod 
\"memcached-0\" (UID: \"af36d2e1-464b-4ada-9b91-2c18c52502d1\") " pod="openstack/memcached-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.532118 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/af36d2e1-464b-4ada-9b91-2c18c52502d1-config-data\") pod \"memcached-0\" (UID: \"af36d2e1-464b-4ada-9b91-2c18c52502d1\") " pod="openstack/memcached-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.533775 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/af36d2e1-464b-4ada-9b91-2c18c52502d1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"af36d2e1-464b-4ada-9b91-2c18c52502d1\") " pod="openstack/memcached-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.539577 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af36d2e1-464b-4ada-9b91-2c18c52502d1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"af36d2e1-464b-4ada-9b91-2c18c52502d1\") " pod="openstack/memcached-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.551722 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxb7l\" (UniqueName: \"kubernetes.io/projected/af36d2e1-464b-4ada-9b91-2c18c52502d1-kube-api-access-pxb7l\") pod \"memcached-0\" (UID: \"af36d2e1-464b-4ada-9b91-2c18c52502d1\") " pod="openstack/memcached-0" Jan 29 07:01:03 crc kubenswrapper[4826]: I0129 07:01:03.623646 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 29 07:01:05 crc kubenswrapper[4826]: I0129 07:01:05.232420 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 07:01:05 crc kubenswrapper[4826]: I0129 07:01:05.233781 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 07:01:05 crc kubenswrapper[4826]: I0129 07:01:05.237143 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-mt2vm" Jan 29 07:01:05 crc kubenswrapper[4826]: I0129 07:01:05.247997 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 07:01:05 crc kubenswrapper[4826]: I0129 07:01:05.264935 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxprp\" (UniqueName: \"kubernetes.io/projected/c2c7d16c-55e0-4a1e-81f4-a751dcef2cf1-kube-api-access-nxprp\") pod \"kube-state-metrics-0\" (UID: \"c2c7d16c-55e0-4a1e-81f4-a751dcef2cf1\") " pod="openstack/kube-state-metrics-0" Jan 29 07:01:05 crc kubenswrapper[4826]: I0129 07:01:05.366421 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxprp\" (UniqueName: \"kubernetes.io/projected/c2c7d16c-55e0-4a1e-81f4-a751dcef2cf1-kube-api-access-nxprp\") pod \"kube-state-metrics-0\" (UID: \"c2c7d16c-55e0-4a1e-81f4-a751dcef2cf1\") " pod="openstack/kube-state-metrics-0" Jan 29 07:01:05 crc kubenswrapper[4826]: I0129 07:01:05.385737 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxprp\" (UniqueName: \"kubernetes.io/projected/c2c7d16c-55e0-4a1e-81f4-a751dcef2cf1-kube-api-access-nxprp\") pod \"kube-state-metrics-0\" (UID: \"c2c7d16c-55e0-4a1e-81f4-a751dcef2cf1\") " pod="openstack/kube-state-metrics-0" Jan 29 07:01:05 crc kubenswrapper[4826]: I0129 07:01:05.550797 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.393494 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-4zwtm"] Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.394846 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4zwtm" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.397752 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.398243 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.398442 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-5sh6f" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.401925 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-m2d2v"] Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.403883 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-m2d2v" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.414656 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4zwtm"] Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.415251 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-m2d2v"] Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.560822 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-var-run-ovn\") pod \"ovn-controller-4zwtm\" (UID: \"bbbbec70-be7f-4a31-9f97-76d5c78b1cd0\") " pod="openstack/ovn-controller-4zwtm" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.560886 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2d13cc8c-363d-4dcb-af5f-92318cf72a81-var-lib\") pod \"ovn-controller-ovs-m2d2v\" (UID: \"2d13cc8c-363d-4dcb-af5f-92318cf72a81\") " pod="openstack/ovn-controller-ovs-m2d2v" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.560912 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2d13cc8c-363d-4dcb-af5f-92318cf72a81-var-log\") pod \"ovn-controller-ovs-m2d2v\" (UID: \"2d13cc8c-363d-4dcb-af5f-92318cf72a81\") " pod="openstack/ovn-controller-ovs-m2d2v" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.560944 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-var-log-ovn\") pod \"ovn-controller-4zwtm\" (UID: \"bbbbec70-be7f-4a31-9f97-76d5c78b1cd0\") " pod="openstack/ovn-controller-4zwtm" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.561003 4826 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d13cc8c-363d-4dcb-af5f-92318cf72a81-scripts\") pod \"ovn-controller-ovs-m2d2v\" (UID: \"2d13cc8c-363d-4dcb-af5f-92318cf72a81\") " pod="openstack/ovn-controller-ovs-m2d2v" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.561025 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-var-run\") pod \"ovn-controller-4zwtm\" (UID: \"bbbbec70-be7f-4a31-9f97-76d5c78b1cd0\") " pod="openstack/ovn-controller-4zwtm" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.561106 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-scripts\") pod \"ovn-controller-4zwtm\" (UID: \"bbbbec70-be7f-4a31-9f97-76d5c78b1cd0\") " pod="openstack/ovn-controller-4zwtm" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.561136 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2d13cc8c-363d-4dcb-af5f-92318cf72a81-var-run\") pod \"ovn-controller-ovs-m2d2v\" (UID: \"2d13cc8c-363d-4dcb-af5f-92318cf72a81\") " pod="openstack/ovn-controller-ovs-m2d2v" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.561179 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-ovn-controller-tls-certs\") pod \"ovn-controller-4zwtm\" (UID: \"bbbbec70-be7f-4a31-9f97-76d5c78b1cd0\") " pod="openstack/ovn-controller-4zwtm" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.561213 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4h6z\" (UniqueName: \"kubernetes.io/projected/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-kube-api-access-n4h6z\") pod \"ovn-controller-4zwtm\" (UID: \"bbbbec70-be7f-4a31-9f97-76d5c78b1cd0\") " pod="openstack/ovn-controller-4zwtm" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.561259 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-combined-ca-bundle\") pod \"ovn-controller-4zwtm\" (UID: \"bbbbec70-be7f-4a31-9f97-76d5c78b1cd0\") " pod="openstack/ovn-controller-4zwtm" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.561276 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2d13cc8c-363d-4dcb-af5f-92318cf72a81-etc-ovs\") pod \"ovn-controller-ovs-m2d2v\" (UID: \"2d13cc8c-363d-4dcb-af5f-92318cf72a81\") " pod="openstack/ovn-controller-ovs-m2d2v" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.561322 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvv57\" (UniqueName: \"kubernetes.io/projected/2d13cc8c-363d-4dcb-af5f-92318cf72a81-kube-api-access-qvv57\") pod \"ovn-controller-ovs-m2d2v\" (UID: \"2d13cc8c-363d-4dcb-af5f-92318cf72a81\") " pod="openstack/ovn-controller-ovs-m2d2v" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.664656 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-var-run\") pod \"ovn-controller-4zwtm\" (UID: \"bbbbec70-be7f-4a31-9f97-76d5c78b1cd0\") " pod="openstack/ovn-controller-4zwtm" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.664713 4826 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-scripts\") pod \"ovn-controller-4zwtm\" (UID: \"bbbbec70-be7f-4a31-9f97-76d5c78b1cd0\") " pod="openstack/ovn-controller-4zwtm" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.664735 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d13cc8c-363d-4dcb-af5f-92318cf72a81-scripts\") pod \"ovn-controller-ovs-m2d2v\" (UID: \"2d13cc8c-363d-4dcb-af5f-92318cf72a81\") " pod="openstack/ovn-controller-ovs-m2d2v" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.664762 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2d13cc8c-363d-4dcb-af5f-92318cf72a81-var-run\") pod \"ovn-controller-ovs-m2d2v\" (UID: \"2d13cc8c-363d-4dcb-af5f-92318cf72a81\") " pod="openstack/ovn-controller-ovs-m2d2v" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.664801 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-ovn-controller-tls-certs\") pod \"ovn-controller-4zwtm\" (UID: \"bbbbec70-be7f-4a31-9f97-76d5c78b1cd0\") " pod="openstack/ovn-controller-4zwtm" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.664828 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4h6z\" (UniqueName: \"kubernetes.io/projected/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-kube-api-access-n4h6z\") pod \"ovn-controller-4zwtm\" (UID: \"bbbbec70-be7f-4a31-9f97-76d5c78b1cd0\") " pod="openstack/ovn-controller-4zwtm" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.664882 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-combined-ca-bundle\") pod \"ovn-controller-4zwtm\" (UID: \"bbbbec70-be7f-4a31-9f97-76d5c78b1cd0\") " pod="openstack/ovn-controller-4zwtm" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.664907 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2d13cc8c-363d-4dcb-af5f-92318cf72a81-etc-ovs\") pod \"ovn-controller-ovs-m2d2v\" (UID: \"2d13cc8c-363d-4dcb-af5f-92318cf72a81\") " pod="openstack/ovn-controller-ovs-m2d2v" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.664947 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvv57\" (UniqueName: \"kubernetes.io/projected/2d13cc8c-363d-4dcb-af5f-92318cf72a81-kube-api-access-qvv57\") pod \"ovn-controller-ovs-m2d2v\" (UID: \"2d13cc8c-363d-4dcb-af5f-92318cf72a81\") " pod="openstack/ovn-controller-ovs-m2d2v" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.664971 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-var-run-ovn\") pod \"ovn-controller-4zwtm\" (UID: \"bbbbec70-be7f-4a31-9f97-76d5c78b1cd0\") " pod="openstack/ovn-controller-4zwtm" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.664999 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2d13cc8c-363d-4dcb-af5f-92318cf72a81-var-lib\") pod \"ovn-controller-ovs-m2d2v\" (UID: \"2d13cc8c-363d-4dcb-af5f-92318cf72a81\") " pod="openstack/ovn-controller-ovs-m2d2v" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.665015 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2d13cc8c-363d-4dcb-af5f-92318cf72a81-var-log\") pod \"ovn-controller-ovs-m2d2v\" (UID: 
\"2d13cc8c-363d-4dcb-af5f-92318cf72a81\") " pod="openstack/ovn-controller-ovs-m2d2v" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.665043 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-var-log-ovn\") pod \"ovn-controller-4zwtm\" (UID: \"bbbbec70-be7f-4a31-9f97-76d5c78b1cd0\") " pod="openstack/ovn-controller-4zwtm" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.666628 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-var-log-ovn\") pod \"ovn-controller-4zwtm\" (UID: \"bbbbec70-be7f-4a31-9f97-76d5c78b1cd0\") " pod="openstack/ovn-controller-4zwtm" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.667775 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-var-run\") pod \"ovn-controller-4zwtm\" (UID: \"bbbbec70-be7f-4a31-9f97-76d5c78b1cd0\") " pod="openstack/ovn-controller-4zwtm" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.667992 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2d13cc8c-363d-4dcb-af5f-92318cf72a81-var-run\") pod \"ovn-controller-ovs-m2d2v\" (UID: \"2d13cc8c-363d-4dcb-af5f-92318cf72a81\") " pod="openstack/ovn-controller-ovs-m2d2v" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.670211 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d13cc8c-363d-4dcb-af5f-92318cf72a81-scripts\") pod \"ovn-controller-ovs-m2d2v\" (UID: \"2d13cc8c-363d-4dcb-af5f-92318cf72a81\") " pod="openstack/ovn-controller-ovs-m2d2v" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.671403 4826 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-var-run-ovn\") pod \"ovn-controller-4zwtm\" (UID: \"bbbbec70-be7f-4a31-9f97-76d5c78b1cd0\") " pod="openstack/ovn-controller-4zwtm" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.671518 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2d13cc8c-363d-4dcb-af5f-92318cf72a81-var-log\") pod \"ovn-controller-ovs-m2d2v\" (UID: \"2d13cc8c-363d-4dcb-af5f-92318cf72a81\") " pod="openstack/ovn-controller-ovs-m2d2v" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.671696 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2d13cc8c-363d-4dcb-af5f-92318cf72a81-etc-ovs\") pod \"ovn-controller-ovs-m2d2v\" (UID: \"2d13cc8c-363d-4dcb-af5f-92318cf72a81\") " pod="openstack/ovn-controller-ovs-m2d2v" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.671736 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2d13cc8c-363d-4dcb-af5f-92318cf72a81-var-lib\") pod \"ovn-controller-ovs-m2d2v\" (UID: \"2d13cc8c-363d-4dcb-af5f-92318cf72a81\") " pod="openstack/ovn-controller-ovs-m2d2v" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.672095 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-scripts\") pod \"ovn-controller-4zwtm\" (UID: \"bbbbec70-be7f-4a31-9f97-76d5c78b1cd0\") " pod="openstack/ovn-controller-4zwtm" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.679460 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-combined-ca-bundle\") pod \"ovn-controller-4zwtm\" (UID: 
\"bbbbec70-be7f-4a31-9f97-76d5c78b1cd0\") " pod="openstack/ovn-controller-4zwtm" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.680383 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-ovn-controller-tls-certs\") pod \"ovn-controller-4zwtm\" (UID: \"bbbbec70-be7f-4a31-9f97-76d5c78b1cd0\") " pod="openstack/ovn-controller-4zwtm" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.683238 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4h6z\" (UniqueName: \"kubernetes.io/projected/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-kube-api-access-n4h6z\") pod \"ovn-controller-4zwtm\" (UID: \"bbbbec70-be7f-4a31-9f97-76d5c78b1cd0\") " pod="openstack/ovn-controller-4zwtm" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.690188 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvv57\" (UniqueName: \"kubernetes.io/projected/2d13cc8c-363d-4dcb-af5f-92318cf72a81-kube-api-access-qvv57\") pod \"ovn-controller-ovs-m2d2v\" (UID: \"2d13cc8c-363d-4dcb-af5f-92318cf72a81\") " pod="openstack/ovn-controller-ovs-m2d2v" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.731338 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4zwtm" Jan 29 07:01:09 crc kubenswrapper[4826]: I0129 07:01:09.751722 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-m2d2v" Jan 29 07:01:11 crc kubenswrapper[4826]: I0129 07:01:11.045227 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 07:01:11 crc kubenswrapper[4826]: I0129 07:01:11.053083 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 29 07:01:11 crc kubenswrapper[4826]: I0129 07:01:11.058689 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 29 07:01:11 crc kubenswrapper[4826]: I0129 07:01:11.060199 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 29 07:01:11 crc kubenswrapper[4826]: I0129 07:01:11.061426 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 29 07:01:11 crc kubenswrapper[4826]: I0129 07:01:11.062760 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-mh42v" Jan 29 07:01:11 crc kubenswrapper[4826]: I0129 07:01:11.063103 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 29 07:01:11 crc kubenswrapper[4826]: I0129 07:01:11.068148 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 07:01:11 crc kubenswrapper[4826]: I0129 07:01:11.202207 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 07:01:11 crc kubenswrapper[4826]: I0129 07:01:11.202268 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 07:01:11 crc kubenswrapper[4826]: I0129 07:01:11.202356 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 07:01:11 crc kubenswrapper[4826]: I0129 07:01:11.202387 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 07:01:11 crc kubenswrapper[4826]: I0129 07:01:11.202430 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 07:01:11 crc kubenswrapper[4826]: I0129 07:01:11.202458 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 07:01:11 crc kubenswrapper[4826]: I0129 07:01:11.202479 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-config\") pod \"ovsdbserver-sb-0\" (UID: \"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 07:01:11 crc kubenswrapper[4826]: I0129 07:01:11.202676 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f25lp\" (UniqueName: 
\"kubernetes.io/projected/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-kube-api-access-f25lp\") pod \"ovsdbserver-sb-0\" (UID: \"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 07:01:11 crc kubenswrapper[4826]: I0129 07:01:11.304799 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 07:01:11 crc kubenswrapper[4826]: I0129 07:01:11.304896 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 07:01:11 crc kubenswrapper[4826]: I0129 07:01:11.304916 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 07:01:11 crc kubenswrapper[4826]: I0129 07:01:11.304938 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 07:01:11 crc kubenswrapper[4826]: I0129 07:01:11.304977 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9\") " 
pod="openstack/ovsdbserver-sb-0" Jan 29 07:01:11 crc kubenswrapper[4826]: I0129 07:01:11.305003 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 07:01:11 crc kubenswrapper[4826]: I0129 07:01:11.305021 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-config\") pod \"ovsdbserver-sb-0\" (UID: \"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 07:01:11 crc kubenswrapper[4826]: I0129 07:01:11.305064 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f25lp\" (UniqueName: \"kubernetes.io/projected/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-kube-api-access-f25lp\") pod \"ovsdbserver-sb-0\" (UID: \"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 07:01:11 crc kubenswrapper[4826]: I0129 07:01:11.307471 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 07:01:11 crc kubenswrapper[4826]: I0129 07:01:11.308947 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 07:01:11 crc kubenswrapper[4826]: I0129 07:01:11.309287 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume 
\"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-sb-0" Jan 29 07:01:11 crc kubenswrapper[4826]: I0129 07:01:11.310056 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-config\") pod \"ovsdbserver-sb-0\" (UID: \"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 07:01:11 crc kubenswrapper[4826]: I0129 07:01:11.311418 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 07:01:11 crc kubenswrapper[4826]: I0129 07:01:11.329325 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 07:01:11 crc kubenswrapper[4826]: I0129 07:01:11.330825 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 07:01:11 crc kubenswrapper[4826]: I0129 07:01:11.337025 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9\") " 
pod="openstack/ovsdbserver-sb-0" Jan 29 07:01:11 crc kubenswrapper[4826]: I0129 07:01:11.340064 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f25lp\" (UniqueName: \"kubernetes.io/projected/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-kube-api-access-f25lp\") pod \"ovsdbserver-sb-0\" (UID: \"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 07:01:11 crc kubenswrapper[4826]: I0129 07:01:11.400064 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 29 07:01:12 crc kubenswrapper[4826]: I0129 07:01:12.537879 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 07:01:12 crc kubenswrapper[4826]: I0129 07:01:12.543256 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 29 07:01:12 crc kubenswrapper[4826]: I0129 07:01:12.547675 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 29 07:01:12 crc kubenswrapper[4826]: I0129 07:01:12.547800 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 29 07:01:12 crc kubenswrapper[4826]: I0129 07:01:12.547921 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-mvng2" Jan 29 07:01:12 crc kubenswrapper[4826]: I0129 07:01:12.548024 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 29 07:01:12 crc kubenswrapper[4826]: I0129 07:01:12.563067 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 07:01:12 crc kubenswrapper[4826]: I0129 07:01:12.633509 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/af857248-0a50-4850-93dd-c8c4e5d8e5ea-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"af857248-0a50-4850-93dd-c8c4e5d8e5ea\") " pod="openstack/ovsdbserver-nb-0" Jan 29 07:01:12 crc kubenswrapper[4826]: I0129 07:01:12.633576 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af857248-0a50-4850-93dd-c8c4e5d8e5ea-config\") pod \"ovsdbserver-nb-0\" (UID: \"af857248-0a50-4850-93dd-c8c4e5d8e5ea\") " pod="openstack/ovsdbserver-nb-0" Jan 29 07:01:12 crc kubenswrapper[4826]: I0129 07:01:12.633602 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/af857248-0a50-4850-93dd-c8c4e5d8e5ea-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"af857248-0a50-4850-93dd-c8c4e5d8e5ea\") " pod="openstack/ovsdbserver-nb-0" Jan 29 07:01:12 crc kubenswrapper[4826]: I0129 07:01:12.633626 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdvdl\" (UniqueName: \"kubernetes.io/projected/af857248-0a50-4850-93dd-c8c4e5d8e5ea-kube-api-access-tdvdl\") pod \"ovsdbserver-nb-0\" (UID: \"af857248-0a50-4850-93dd-c8c4e5d8e5ea\") " pod="openstack/ovsdbserver-nb-0" Jan 29 07:01:12 crc kubenswrapper[4826]: I0129 07:01:12.633653 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/af857248-0a50-4850-93dd-c8c4e5d8e5ea-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"af857248-0a50-4850-93dd-c8c4e5d8e5ea\") " pod="openstack/ovsdbserver-nb-0" Jan 29 07:01:12 crc kubenswrapper[4826]: I0129 07:01:12.633691 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/af857248-0a50-4850-93dd-c8c4e5d8e5ea-ovsdb-rundir\") 
pod \"ovsdbserver-nb-0\" (UID: \"af857248-0a50-4850-93dd-c8c4e5d8e5ea\") " pod="openstack/ovsdbserver-nb-0" Jan 29 07:01:12 crc kubenswrapper[4826]: I0129 07:01:12.634080 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/af857248-0a50-4850-93dd-c8c4e5d8e5ea-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"af857248-0a50-4850-93dd-c8c4e5d8e5ea\") " pod="openstack/ovsdbserver-nb-0" Jan 29 07:01:12 crc kubenswrapper[4826]: I0129 07:01:12.634175 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"af857248-0a50-4850-93dd-c8c4e5d8e5ea\") " pod="openstack/ovsdbserver-nb-0" Jan 29 07:01:12 crc kubenswrapper[4826]: I0129 07:01:12.735908 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af857248-0a50-4850-93dd-c8c4e5d8e5ea-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"af857248-0a50-4850-93dd-c8c4e5d8e5ea\") " pod="openstack/ovsdbserver-nb-0" Jan 29 07:01:12 crc kubenswrapper[4826]: I0129 07:01:12.736010 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af857248-0a50-4850-93dd-c8c4e5d8e5ea-config\") pod \"ovsdbserver-nb-0\" (UID: \"af857248-0a50-4850-93dd-c8c4e5d8e5ea\") " pod="openstack/ovsdbserver-nb-0" Jan 29 07:01:12 crc kubenswrapper[4826]: I0129 07:01:12.736046 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdvdl\" (UniqueName: \"kubernetes.io/projected/af857248-0a50-4850-93dd-c8c4e5d8e5ea-kube-api-access-tdvdl\") pod \"ovsdbserver-nb-0\" (UID: \"af857248-0a50-4850-93dd-c8c4e5d8e5ea\") " pod="openstack/ovsdbserver-nb-0" Jan 29 07:01:12 crc 
kubenswrapper[4826]: I0129 07:01:12.736079 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/af857248-0a50-4850-93dd-c8c4e5d8e5ea-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"af857248-0a50-4850-93dd-c8c4e5d8e5ea\") " pod="openstack/ovsdbserver-nb-0" Jan 29 07:01:12 crc kubenswrapper[4826]: I0129 07:01:12.736111 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/af857248-0a50-4850-93dd-c8c4e5d8e5ea-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"af857248-0a50-4850-93dd-c8c4e5d8e5ea\") " pod="openstack/ovsdbserver-nb-0" Jan 29 07:01:12 crc kubenswrapper[4826]: I0129 07:01:12.736163 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/af857248-0a50-4850-93dd-c8c4e5d8e5ea-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"af857248-0a50-4850-93dd-c8c4e5d8e5ea\") " pod="openstack/ovsdbserver-nb-0" Jan 29 07:01:12 crc kubenswrapper[4826]: I0129 07:01:12.736992 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/af857248-0a50-4850-93dd-c8c4e5d8e5ea-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"af857248-0a50-4850-93dd-c8c4e5d8e5ea\") " pod="openstack/ovsdbserver-nb-0" Jan 29 07:01:12 crc kubenswrapper[4826]: I0129 07:01:12.737279 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/af857248-0a50-4850-93dd-c8c4e5d8e5ea-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"af857248-0a50-4850-93dd-c8c4e5d8e5ea\") " pod="openstack/ovsdbserver-nb-0" Jan 29 07:01:12 crc kubenswrapper[4826]: I0129 07:01:12.737372 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/af857248-0a50-4850-93dd-c8c4e5d8e5ea-config\") pod \"ovsdbserver-nb-0\" (UID: \"af857248-0a50-4850-93dd-c8c4e5d8e5ea\") " pod="openstack/ovsdbserver-nb-0" Jan 29 07:01:12 crc kubenswrapper[4826]: I0129 07:01:12.737385 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"af857248-0a50-4850-93dd-c8c4e5d8e5ea\") " pod="openstack/ovsdbserver-nb-0" Jan 29 07:01:12 crc kubenswrapper[4826]: I0129 07:01:12.737288 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/af857248-0a50-4850-93dd-c8c4e5d8e5ea-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"af857248-0a50-4850-93dd-c8c4e5d8e5ea\") " pod="openstack/ovsdbserver-nb-0" Jan 29 07:01:12 crc kubenswrapper[4826]: I0129 07:01:12.737639 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"af857248-0a50-4850-93dd-c8c4e5d8e5ea\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/ovsdbserver-nb-0" Jan 29 07:01:12 crc kubenswrapper[4826]: I0129 07:01:12.742679 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af857248-0a50-4850-93dd-c8c4e5d8e5ea-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"af857248-0a50-4850-93dd-c8c4e5d8e5ea\") " pod="openstack/ovsdbserver-nb-0" Jan 29 07:01:12 crc kubenswrapper[4826]: I0129 07:01:12.743669 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/af857248-0a50-4850-93dd-c8c4e5d8e5ea-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"af857248-0a50-4850-93dd-c8c4e5d8e5ea\") " pod="openstack/ovsdbserver-nb-0" Jan 29 07:01:12 crc 
kubenswrapper[4826]: I0129 07:01:12.754039 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/af857248-0a50-4850-93dd-c8c4e5d8e5ea-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"af857248-0a50-4850-93dd-c8c4e5d8e5ea\") " pod="openstack/ovsdbserver-nb-0" Jan 29 07:01:12 crc kubenswrapper[4826]: I0129 07:01:12.755310 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"af857248-0a50-4850-93dd-c8c4e5d8e5ea\") " pod="openstack/ovsdbserver-nb-0" Jan 29 07:01:12 crc kubenswrapper[4826]: I0129 07:01:12.759334 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdvdl\" (UniqueName: \"kubernetes.io/projected/af857248-0a50-4850-93dd-c8c4e5d8e5ea-kube-api-access-tdvdl\") pod \"ovsdbserver-nb-0\" (UID: \"af857248-0a50-4850-93dd-c8c4e5d8e5ea\") " pod="openstack/ovsdbserver-nb-0" Jan 29 07:01:12 crc kubenswrapper[4826]: I0129 07:01:12.910795 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 29 07:01:19 crc kubenswrapper[4826]: E0129 07:01:19.585710 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 29 07:01:19 crc kubenswrapper[4826]: E0129 07:01:19.586416 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-27bfj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,Readines
sProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5f854695bc-7k46w_openstack(eef9f936-9b6b-4990-a606-1e8e3d910bfc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 07:01:19 crc kubenswrapper[4826]: E0129 07:01:19.587523 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5f854695bc-7k46w" podUID="eef9f936-9b6b-4990-a606-1e8e3d910bfc" Jan 29 07:01:19 crc kubenswrapper[4826]: E0129 07:01:19.596688 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 29 07:01:19 crc kubenswrapper[4826]: E0129 07:01:19.596958 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces 
--listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nxc92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-84bb9d8bd9-mz9vk_openstack(0e33aab6-677d-4c6f-9f8b-71e04d65b8e8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 07:01:19 crc kubenswrapper[4826]: E0129 07:01:19.598917 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-84bb9d8bd9-mz9vk" podUID="0e33aab6-677d-4c6f-9f8b-71e04d65b8e8" Jan 29 07:01:19 crc kubenswrapper[4826]: E0129 07:01:19.612870 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 29 07:01:19 crc kubenswrapper[4826]: E0129 07:01:19.613029 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s4mwh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-95f5f6995-vhzmk_openstack(68f776f5-5464-4259-9466-2ebe6093e673): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 07:01:19 crc kubenswrapper[4826]: E0129 07:01:19.614368 4826 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-95f5f6995-vhzmk" podUID="68f776f5-5464-4259-9466-2ebe6093e673" Jan 29 07:01:19 crc kubenswrapper[4826]: E0129 07:01:19.634217 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 29 07:01:19 crc kubenswrapper[4826]: E0129 07:01:19.634384 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f26n5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-c7cbb8f79-dxdmv_openstack(aaa37e86-663c-41ba-a3cd-b144bf92805e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 07:01:19 crc kubenswrapper[4826]: E0129 07:01:19.635905 4826 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-c7cbb8f79-dxdmv" podUID="aaa37e86-663c-41ba-a3cd-b144bf92805e" Jan 29 07:01:19 crc kubenswrapper[4826]: I0129 07:01:19.887520 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 29 07:01:19 crc kubenswrapper[4826]: W0129 07:01:19.891835 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod426997bd_6ba1_4ebb_b8d3_08be081add91.slice/crio-ef305694fe6ce4067b3cc218917f89bee5278082ec7690a8c116f633d18ceb35 WatchSource:0}: Error finding container ef305694fe6ce4067b3cc218917f89bee5278082ec7690a8c116f633d18ceb35: Status 404 returned error can't find the container with id ef305694fe6ce4067b3cc218917f89bee5278082ec7690a8c116f633d18ceb35 Jan 29 07:01:19 crc kubenswrapper[4826]: I0129 07:01:19.944630 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"426997bd-6ba1-4ebb-b8d3-08be081add91","Type":"ContainerStarted","Data":"ef305694fe6ce4067b3cc218917f89bee5278082ec7690a8c116f633d18ceb35"} Jan 29 07:01:19 crc kubenswrapper[4826]: E0129 07:01:19.946243 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-95f5f6995-vhzmk" podUID="68f776f5-5464-4259-9466-2ebe6093e673" Jan 29 07:01:19 crc kubenswrapper[4826]: E0129 07:01:19.953283 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-c7cbb8f79-dxdmv" podUID="aaa37e86-663c-41ba-a3cd-b144bf92805e" Jan 29 07:01:20 crc kubenswrapper[4826]: I0129 07:01:20.205739 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 07:01:20 crc kubenswrapper[4826]: W0129 07:01:20.235801 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2c7d16c_55e0_4a1e_81f4_a751dcef2cf1.slice/crio-b311d228575a7bff201b4df9489cc4ea8f4da60a8100a5aae86d49ad212a434e WatchSource:0}: Error finding container b311d228575a7bff201b4df9489cc4ea8f4da60a8100a5aae86d49ad212a434e: Status 404 returned error can't find the container with id b311d228575a7bff201b4df9489cc4ea8f4da60a8100a5aae86d49ad212a434e Jan 29 07:01:20 crc kubenswrapper[4826]: I0129 07:01:20.328773 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-7k46w" Jan 29 07:01:20 crc kubenswrapper[4826]: I0129 07:01:20.330225 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-mz9vk" Jan 29 07:01:20 crc kubenswrapper[4826]: I0129 07:01:20.390191 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-m2d2v"] Jan 29 07:01:20 crc kubenswrapper[4826]: I0129 07:01:20.393776 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eef9f936-9b6b-4990-a606-1e8e3d910bfc-dns-svc\") pod \"eef9f936-9b6b-4990-a606-1e8e3d910bfc\" (UID: \"eef9f936-9b6b-4990-a606-1e8e3d910bfc\") " Jan 29 07:01:20 crc kubenswrapper[4826]: I0129 07:01:20.393842 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eef9f936-9b6b-4990-a606-1e8e3d910bfc-config\") pod \"eef9f936-9b6b-4990-a606-1e8e3d910bfc\" (UID: \"eef9f936-9b6b-4990-a606-1e8e3d910bfc\") " Jan 29 07:01:20 crc kubenswrapper[4826]: I0129 07:01:20.393933 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxc92\" (UniqueName: \"kubernetes.io/projected/0e33aab6-677d-4c6f-9f8b-71e04d65b8e8-kube-api-access-nxc92\") pod \"0e33aab6-677d-4c6f-9f8b-71e04d65b8e8\" (UID: \"0e33aab6-677d-4c6f-9f8b-71e04d65b8e8\") " Jan 29 07:01:20 crc kubenswrapper[4826]: I0129 07:01:20.393955 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27bfj\" (UniqueName: \"kubernetes.io/projected/eef9f936-9b6b-4990-a606-1e8e3d910bfc-kube-api-access-27bfj\") pod \"eef9f936-9b6b-4990-a606-1e8e3d910bfc\" (UID: \"eef9f936-9b6b-4990-a606-1e8e3d910bfc\") " Jan 29 07:01:20 crc kubenswrapper[4826]: I0129 07:01:20.394019 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e33aab6-677d-4c6f-9f8b-71e04d65b8e8-config\") pod \"0e33aab6-677d-4c6f-9f8b-71e04d65b8e8\" (UID: \"0e33aab6-677d-4c6f-9f8b-71e04d65b8e8\") " Jan 29 07:01:20 
crc kubenswrapper[4826]: I0129 07:01:20.394785 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eef9f936-9b6b-4990-a606-1e8e3d910bfc-config" (OuterVolumeSpecName: "config") pod "eef9f936-9b6b-4990-a606-1e8e3d910bfc" (UID: "eef9f936-9b6b-4990-a606-1e8e3d910bfc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:01:20 crc kubenswrapper[4826]: I0129 07:01:20.394836 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e33aab6-677d-4c6f-9f8b-71e04d65b8e8-config" (OuterVolumeSpecName: "config") pod "0e33aab6-677d-4c6f-9f8b-71e04d65b8e8" (UID: "0e33aab6-677d-4c6f-9f8b-71e04d65b8e8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:01:20 crc kubenswrapper[4826]: I0129 07:01:20.395212 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eef9f936-9b6b-4990-a606-1e8e3d910bfc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eef9f936-9b6b-4990-a606-1e8e3d910bfc" (UID: "eef9f936-9b6b-4990-a606-1e8e3d910bfc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:01:20 crc kubenswrapper[4826]: I0129 07:01:20.400769 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eef9f936-9b6b-4990-a606-1e8e3d910bfc-kube-api-access-27bfj" (OuterVolumeSpecName: "kube-api-access-27bfj") pod "eef9f936-9b6b-4990-a606-1e8e3d910bfc" (UID: "eef9f936-9b6b-4990-a606-1e8e3d910bfc"). InnerVolumeSpecName "kube-api-access-27bfj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:01:20 crc kubenswrapper[4826]: I0129 07:01:20.402016 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e33aab6-677d-4c6f-9f8b-71e04d65b8e8-kube-api-access-nxc92" (OuterVolumeSpecName: "kube-api-access-nxc92") pod "0e33aab6-677d-4c6f-9f8b-71e04d65b8e8" (UID: "0e33aab6-677d-4c6f-9f8b-71e04d65b8e8"). InnerVolumeSpecName "kube-api-access-nxc92". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:01:20 crc kubenswrapper[4826]: I0129 07:01:20.460054 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4zwtm"] Jan 29 07:01:20 crc kubenswrapper[4826]: I0129 07:01:20.476083 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 07:01:20 crc kubenswrapper[4826]: I0129 07:01:20.489071 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 29 07:01:20 crc kubenswrapper[4826]: I0129 07:01:20.497629 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxc92\" (UniqueName: \"kubernetes.io/projected/0e33aab6-677d-4c6f-9f8b-71e04d65b8e8-kube-api-access-nxc92\") on node \"crc\" DevicePath \"\"" Jan 29 07:01:20 crc kubenswrapper[4826]: I0129 07:01:20.497661 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27bfj\" (UniqueName: \"kubernetes.io/projected/eef9f936-9b6b-4990-a606-1e8e3d910bfc-kube-api-access-27bfj\") on node \"crc\" DevicePath \"\"" Jan 29 07:01:20 crc kubenswrapper[4826]: I0129 07:01:20.497676 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e33aab6-677d-4c6f-9f8b-71e04d65b8e8-config\") on node \"crc\" DevicePath \"\"" Jan 29 07:01:20 crc kubenswrapper[4826]: I0129 07:01:20.497693 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/eef9f936-9b6b-4990-a606-1e8e3d910bfc-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 07:01:20 crc kubenswrapper[4826]: I0129 07:01:20.497706 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eef9f936-9b6b-4990-a606-1e8e3d910bfc-config\") on node \"crc\" DevicePath \"\"" Jan 29 07:01:20 crc kubenswrapper[4826]: I0129 07:01:20.544600 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 07:01:20 crc kubenswrapper[4826]: W0129 07:01:20.599374 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a20458e_fa0a_4aa2_a59a_70ebb523a3d9.slice/crio-a4ccaf8801cbd74ccfc269f74f1a84318ac464689082d95d4b7d1bf4743bde1a WatchSource:0}: Error finding container a4ccaf8801cbd74ccfc269f74f1a84318ac464689082d95d4b7d1bf4743bde1a: Status 404 returned error can't find the container with id a4ccaf8801cbd74ccfc269f74f1a84318ac464689082d95d4b7d1bf4743bde1a Jan 29 07:01:20 crc kubenswrapper[4826]: I0129 07:01:20.953593 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9","Type":"ContainerStarted","Data":"a4ccaf8801cbd74ccfc269f74f1a84318ac464689082d95d4b7d1bf4743bde1a"} Jan 29 07:01:20 crc kubenswrapper[4826]: I0129 07:01:20.954811 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4zwtm" event={"ID":"bbbbec70-be7f-4a31-9f97-76d5c78b1cd0","Type":"ContainerStarted","Data":"0e2b877e63b4206a7b3f9783b41ecf6a4a7a1003aa85c4707b7aed7613891ad7"} Jan 29 07:01:20 crc kubenswrapper[4826]: I0129 07:01:20.955838 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-7k46w" event={"ID":"eef9f936-9b6b-4990-a606-1e8e3d910bfc","Type":"ContainerDied","Data":"513ba432b58e3cfac0c67a1c6e067d33cf471c8e7f5b22c4f8dcdac021e4c2a8"} Jan 29 07:01:20 crc 
kubenswrapper[4826]: I0129 07:01:20.955877 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-7k46w" Jan 29 07:01:20 crc kubenswrapper[4826]: I0129 07:01:20.959216 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1794f620-102a-4b9c-9097-713579ec55ad","Type":"ContainerStarted","Data":"a285c0e82f869c096c5852cbe3ebb71f48bfdd919cd5f2aa2550ecf47c3da59f"} Jan 29 07:01:20 crc kubenswrapper[4826]: I0129 07:01:20.962368 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"454b9218-d564-4664-b1dd-4435fa9c60b7","Type":"ContainerStarted","Data":"b9150e6fb524e0cfb0c782223a739c11df1184d94285f30bcb6c9b659080949c"} Jan 29 07:01:20 crc kubenswrapper[4826]: I0129 07:01:20.963574 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c2c7d16c-55e0-4a1e-81f4-a751dcef2cf1","Type":"ContainerStarted","Data":"b311d228575a7bff201b4df9489cc4ea8f4da60a8100a5aae86d49ad212a434e"} Jan 29 07:01:20 crc kubenswrapper[4826]: I0129 07:01:20.964476 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-mz9vk" event={"ID":"0e33aab6-677d-4c6f-9f8b-71e04d65b8e8","Type":"ContainerDied","Data":"36ccc107e64a98f0371b13bf5750ae9989188cf3ed15bfe01e33675545f7c838"} Jan 29 07:01:20 crc kubenswrapper[4826]: I0129 07:01:20.964535 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-mz9vk" Jan 29 07:01:20 crc kubenswrapper[4826]: I0129 07:01:20.966113 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c","Type":"ContainerStarted","Data":"a618aff47b6a4f080c12ed436cdd2e152b8a7acc4f88c6d531c82c61bbd02d8c"} Jan 29 07:01:20 crc kubenswrapper[4826]: I0129 07:01:20.966993 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-m2d2v" event={"ID":"2d13cc8c-363d-4dcb-af5f-92318cf72a81","Type":"ContainerStarted","Data":"0d27e5fdbb1ac758e1918ca2744d38aad57098af1310e0adf874d910b96e24d4"} Jan 29 07:01:20 crc kubenswrapper[4826]: I0129 07:01:20.967964 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"af36d2e1-464b-4ada-9b91-2c18c52502d1","Type":"ContainerStarted","Data":"2e88ac09c178a6b2512ea0fdc58d5fe8516f78ba07cd97e7f3bfe552aece400f"} Jan 29 07:01:21 crc kubenswrapper[4826]: I0129 07:01:21.054074 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-7k46w"] Jan 29 07:01:21 crc kubenswrapper[4826]: I0129 07:01:21.070840 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-7k46w"] Jan 29 07:01:21 crc kubenswrapper[4826]: I0129 07:01:21.088843 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-mz9vk"] Jan 29 07:01:21 crc kubenswrapper[4826]: I0129 07:01:21.097799 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-mz9vk"] Jan 29 07:01:21 crc kubenswrapper[4826]: I0129 07:01:21.366571 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 07:01:21 crc kubenswrapper[4826]: I0129 07:01:21.975585 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"af857248-0a50-4850-93dd-c8c4e5d8e5ea","Type":"ContainerStarted","Data":"1c15ca98f505d964a31fc8ea42371e21d244875489de574c16f53d5e1d645c32"} Jan 29 07:01:22 crc kubenswrapper[4826]: I0129 07:01:22.834753 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e33aab6-677d-4c6f-9f8b-71e04d65b8e8" path="/var/lib/kubelet/pods/0e33aab6-677d-4c6f-9f8b-71e04d65b8e8/volumes" Jan 29 07:01:22 crc kubenswrapper[4826]: I0129 07:01:22.835927 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eef9f936-9b6b-4990-a606-1e8e3d910bfc" path="/var/lib/kubelet/pods/eef9f936-9b6b-4990-a606-1e8e3d910bfc/volumes" Jan 29 07:01:35 crc kubenswrapper[4826]: I0129 07:01:35.147210 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c2c7d16c-55e0-4a1e-81f4-a751dcef2cf1","Type":"ContainerStarted","Data":"3be1f88eab62bf3521f70d5f30c9c6d8f758049c6f738bf9d863789d95170f4f"} Jan 29 07:01:35 crc kubenswrapper[4826]: I0129 07:01:35.147748 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 29 07:01:35 crc kubenswrapper[4826]: I0129 07:01:35.149009 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9","Type":"ContainerStarted","Data":"949d52a8a709e90860b9b0ef1491f8a9e2676ac4584b2115505be7c970fcced1"} Jan 29 07:01:35 crc kubenswrapper[4826]: I0129 07:01:35.151219 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"426997bd-6ba1-4ebb-b8d3-08be081add91","Type":"ContainerStarted","Data":"5507b9d03083bbcbf3f503fcf50eaf096b7739e47fb5b4546534bf1849f59544"} Jan 29 07:01:35 crc kubenswrapper[4826]: I0129 07:01:35.174996 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4zwtm" 
event={"ID":"bbbbec70-be7f-4a31-9f97-76d5c78b1cd0","Type":"ContainerStarted","Data":"9015e9cab42dfd44fe7092ceb6eb5f305eac76d9b101c255e55adbb653135a3c"} Jan 29 07:01:35 crc kubenswrapper[4826]: I0129 07:01:35.175887 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-4zwtm" Jan 29 07:01:35 crc kubenswrapper[4826]: I0129 07:01:35.178028 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=15.76135016 podStartE2EDuration="30.178012701s" podCreationTimestamp="2026-01-29 07:01:05 +0000 UTC" firstStartedPulling="2026-01-29 07:01:20.238719335 +0000 UTC m=+1064.100512404" lastFinishedPulling="2026-01-29 07:01:34.655381846 +0000 UTC m=+1078.517174945" observedRunningTime="2026-01-29 07:01:35.175184979 +0000 UTC m=+1079.036978048" watchObservedRunningTime="2026-01-29 07:01:35.178012701 +0000 UTC m=+1079.039805770" Jan 29 07:01:35 crc kubenswrapper[4826]: I0129 07:01:35.178115 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"af36d2e1-464b-4ada-9b91-2c18c52502d1","Type":"ContainerStarted","Data":"92577bcdcf244adebede7b28b6bb8f3affcb3adeed6e90572060e941115a1be5"} Jan 29 07:01:35 crc kubenswrapper[4826]: I0129 07:01:35.178211 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 29 07:01:35 crc kubenswrapper[4826]: I0129 07:01:35.179979 4826 generic.go:334] "Generic (PLEG): container finished" podID="aaa37e86-663c-41ba-a3cd-b144bf92805e" containerID="d4b7e743f1f167bc06eccd90ab64842d89f331eb74d5cac0fcc189604ed95c17" exitCode=0 Jan 29 07:01:35 crc kubenswrapper[4826]: I0129 07:01:35.180072 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c7cbb8f79-dxdmv" event={"ID":"aaa37e86-663c-41ba-a3cd-b144bf92805e","Type":"ContainerDied","Data":"d4b7e743f1f167bc06eccd90ab64842d89f331eb74d5cac0fcc189604ed95c17"} Jan 29 07:01:35 crc 
kubenswrapper[4826]: I0129 07:01:35.186665 4826 generic.go:334] "Generic (PLEG): container finished" podID="68f776f5-5464-4259-9466-2ebe6093e673" containerID="cb011f49261f0d8dd2dc4149fa9657613140e4e34877de153bcc4fcadf64a1ac" exitCode=0 Jan 29 07:01:35 crc kubenswrapper[4826]: I0129 07:01:35.186699 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-vhzmk" event={"ID":"68f776f5-5464-4259-9466-2ebe6093e673","Type":"ContainerDied","Data":"cb011f49261f0d8dd2dc4149fa9657613140e4e34877de153bcc4fcadf64a1ac"} Jan 29 07:01:35 crc kubenswrapper[4826]: I0129 07:01:35.189276 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"454b9218-d564-4664-b1dd-4435fa9c60b7","Type":"ContainerStarted","Data":"9cb768d646eadba2cb5d45a808c8c8e8c1b8f03c854c19add211340ed6949a2d"} Jan 29 07:01:35 crc kubenswrapper[4826]: I0129 07:01:35.198933 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"af857248-0a50-4850-93dd-c8c4e5d8e5ea","Type":"ContainerStarted","Data":"358deb67c7029377050eca333cfd2c34f12e0e975144bb97ed20f885a41cdc54"} Jan 29 07:01:35 crc kubenswrapper[4826]: I0129 07:01:35.204271 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-m2d2v" event={"ID":"2d13cc8c-363d-4dcb-af5f-92318cf72a81","Type":"ContainerStarted","Data":"cedf10890041c37f6ac64bb119ab4a13de2addc104534b554aebcdd824040d66"} Jan 29 07:01:35 crc kubenswrapper[4826]: I0129 07:01:35.229594 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-4zwtm" podStartSLOduration=12.929744319 podStartE2EDuration="26.229579264s" podCreationTimestamp="2026-01-29 07:01:09 +0000 UTC" firstStartedPulling="2026-01-29 07:01:20.479527209 +0000 UTC m=+1064.341320278" lastFinishedPulling="2026-01-29 07:01:33.779362104 +0000 UTC m=+1077.641155223" observedRunningTime="2026-01-29 07:01:35.229517542 +0000 UTC 
m=+1079.091310631" watchObservedRunningTime="2026-01-29 07:01:35.229579264 +0000 UTC m=+1079.091372333" Jan 29 07:01:35 crc kubenswrapper[4826]: I0129 07:01:35.286754 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=19.31170593 podStartE2EDuration="32.286729198s" podCreationTimestamp="2026-01-29 07:01:03 +0000 UTC" firstStartedPulling="2026-01-29 07:01:20.481382466 +0000 UTC m=+1064.343175535" lastFinishedPulling="2026-01-29 07:01:33.456405734 +0000 UTC m=+1077.318198803" observedRunningTime="2026-01-29 07:01:35.276601772 +0000 UTC m=+1079.138394851" watchObservedRunningTime="2026-01-29 07:01:35.286729198 +0000 UTC m=+1079.148522287" Jan 29 07:01:36 crc kubenswrapper[4826]: I0129 07:01:36.214081 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c7cbb8f79-dxdmv" event={"ID":"aaa37e86-663c-41ba-a3cd-b144bf92805e","Type":"ContainerStarted","Data":"3d162a6c65ae0f8a471283f0f949a5a0cac1c2982479fbd1fa241ef36b175d31"} Jan 29 07:01:36 crc kubenswrapper[4826]: I0129 07:01:36.214395 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-c7cbb8f79-dxdmv" Jan 29 07:01:36 crc kubenswrapper[4826]: I0129 07:01:36.216120 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-vhzmk" event={"ID":"68f776f5-5464-4259-9466-2ebe6093e673","Type":"ContainerStarted","Data":"668d818a2ae9530c45edaf4c7de03be500561b99a866cd544011235c63fdb125"} Jan 29 07:01:36 crc kubenswrapper[4826]: I0129 07:01:36.217058 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-95f5f6995-vhzmk" Jan 29 07:01:36 crc kubenswrapper[4826]: I0129 07:01:36.217604 4826 generic.go:334] "Generic (PLEG): container finished" podID="2d13cc8c-363d-4dcb-af5f-92318cf72a81" containerID="cedf10890041c37f6ac64bb119ab4a13de2addc104534b554aebcdd824040d66" exitCode=0 Jan 29 07:01:36 crc kubenswrapper[4826]: I0129 
07:01:36.217659 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-m2d2v" event={"ID":"2d13cc8c-363d-4dcb-af5f-92318cf72a81","Type":"ContainerDied","Data":"cedf10890041c37f6ac64bb119ab4a13de2addc104534b554aebcdd824040d66"} Jan 29 07:01:36 crc kubenswrapper[4826]: I0129 07:01:36.239215 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-c7cbb8f79-dxdmv" podStartSLOduration=2.963490509 podStartE2EDuration="38.239186992s" podCreationTimestamp="2026-01-29 07:00:58 +0000 UTC" firstStartedPulling="2026-01-29 07:00:59.379679123 +0000 UTC m=+1043.241472192" lastFinishedPulling="2026-01-29 07:01:34.655375596 +0000 UTC m=+1078.517168675" observedRunningTime="2026-01-29 07:01:36.228253516 +0000 UTC m=+1080.090046585" watchObservedRunningTime="2026-01-29 07:01:36.239186992 +0000 UTC m=+1080.100980061" Jan 29 07:01:36 crc kubenswrapper[4826]: I0129 07:01:36.250373 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-95f5f6995-vhzmk" podStartSLOduration=-9223371998.605268 podStartE2EDuration="38.249507343s" podCreationTimestamp="2026-01-29 07:00:58 +0000 UTC" firstStartedPulling="2026-01-29 07:00:59.55368253 +0000 UTC m=+1043.415475599" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:01:36.24622334 +0000 UTC m=+1080.108016409" watchObservedRunningTime="2026-01-29 07:01:36.249507343 +0000 UTC m=+1080.111300422" Jan 29 07:01:37 crc kubenswrapper[4826]: I0129 07:01:37.230218 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-m2d2v" event={"ID":"2d13cc8c-363d-4dcb-af5f-92318cf72a81","Type":"ContainerStarted","Data":"7279ce2f370fc8321c9e51d6b7f683eb1d9bd5181f88ceec0fa850371165ae08"} Jan 29 07:01:37 crc kubenswrapper[4826]: I0129 07:01:37.232073 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-m2d2v" Jan 29 07:01:37 crc 
kubenswrapper[4826]: I0129 07:01:37.232115 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-m2d2v" event={"ID":"2d13cc8c-363d-4dcb-af5f-92318cf72a81","Type":"ContainerStarted","Data":"55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6"} Jan 29 07:01:37 crc kubenswrapper[4826]: I0129 07:01:37.292543 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-m2d2v" podStartSLOduration=14.779532084 podStartE2EDuration="28.292526125s" podCreationTimestamp="2026-01-29 07:01:09 +0000 UTC" firstStartedPulling="2026-01-29 07:01:20.396901521 +0000 UTC m=+1064.258694590" lastFinishedPulling="2026-01-29 07:01:33.909895522 +0000 UTC m=+1077.771688631" observedRunningTime="2026-01-29 07:01:37.273088574 +0000 UTC m=+1081.134881703" watchObservedRunningTime="2026-01-29 07:01:37.292526125 +0000 UTC m=+1081.154319194" Jan 29 07:01:38 crc kubenswrapper[4826]: I0129 07:01:38.241406 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-m2d2v" Jan 29 07:01:43 crc kubenswrapper[4826]: I0129 07:01:43.626656 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 29 07:01:43 crc kubenswrapper[4826]: I0129 07:01:43.802025 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-c7cbb8f79-dxdmv" Jan 29 07:01:44 crc kubenswrapper[4826]: I0129 07:01:44.258528 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-95f5f6995-vhzmk" Jan 29 07:01:44 crc kubenswrapper[4826]: I0129 07:01:44.340568 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-dxdmv"] Jan 29 07:01:44 crc kubenswrapper[4826]: I0129 07:01:44.340909 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-c7cbb8f79-dxdmv" 
podUID="aaa37e86-663c-41ba-a3cd-b144bf92805e" containerName="dnsmasq-dns" containerID="cri-o://3d162a6c65ae0f8a471283f0f949a5a0cac1c2982479fbd1fa241ef36b175d31" gracePeriod=10 Jan 29 07:01:45 crc kubenswrapper[4826]: I0129 07:01:45.555111 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 29 07:01:45 crc kubenswrapper[4826]: I0129 07:01:45.716065 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f9f9f545f-j6bkp"] Jan 29 07:01:45 crc kubenswrapper[4826]: I0129 07:01:45.717661 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f9f9f545f-j6bkp" Jan 29 07:01:45 crc kubenswrapper[4826]: I0129 07:01:45.728070 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f9f9f545f-j6bkp"] Jan 29 07:01:45 crc kubenswrapper[4826]: I0129 07:01:45.905833 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tflqn\" (UniqueName: \"kubernetes.io/projected/8979d5de-6236-4b9e-a386-8befb05962dc-kube-api-access-tflqn\") pod \"dnsmasq-dns-7f9f9f545f-j6bkp\" (UID: \"8979d5de-6236-4b9e-a386-8befb05962dc\") " pod="openstack/dnsmasq-dns-7f9f9f545f-j6bkp" Jan 29 07:01:45 crc kubenswrapper[4826]: I0129 07:01:45.905896 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8979d5de-6236-4b9e-a386-8befb05962dc-dns-svc\") pod \"dnsmasq-dns-7f9f9f545f-j6bkp\" (UID: \"8979d5de-6236-4b9e-a386-8befb05962dc\") " pod="openstack/dnsmasq-dns-7f9f9f545f-j6bkp" Jan 29 07:01:45 crc kubenswrapper[4826]: I0129 07:01:45.905938 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8979d5de-6236-4b9e-a386-8befb05962dc-config\") pod \"dnsmasq-dns-7f9f9f545f-j6bkp\" (UID: 
\"8979d5de-6236-4b9e-a386-8befb05962dc\") " pod="openstack/dnsmasq-dns-7f9f9f545f-j6bkp" Jan 29 07:01:46 crc kubenswrapper[4826]: I0129 07:01:46.006962 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tflqn\" (UniqueName: \"kubernetes.io/projected/8979d5de-6236-4b9e-a386-8befb05962dc-kube-api-access-tflqn\") pod \"dnsmasq-dns-7f9f9f545f-j6bkp\" (UID: \"8979d5de-6236-4b9e-a386-8befb05962dc\") " pod="openstack/dnsmasq-dns-7f9f9f545f-j6bkp" Jan 29 07:01:46 crc kubenswrapper[4826]: I0129 07:01:46.007847 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8979d5de-6236-4b9e-a386-8befb05962dc-dns-svc\") pod \"dnsmasq-dns-7f9f9f545f-j6bkp\" (UID: \"8979d5de-6236-4b9e-a386-8befb05962dc\") " pod="openstack/dnsmasq-dns-7f9f9f545f-j6bkp" Jan 29 07:01:46 crc kubenswrapper[4826]: I0129 07:01:46.007905 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8979d5de-6236-4b9e-a386-8befb05962dc-config\") pod \"dnsmasq-dns-7f9f9f545f-j6bkp\" (UID: \"8979d5de-6236-4b9e-a386-8befb05962dc\") " pod="openstack/dnsmasq-dns-7f9f9f545f-j6bkp" Jan 29 07:01:46 crc kubenswrapper[4826]: I0129 07:01:46.008742 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8979d5de-6236-4b9e-a386-8befb05962dc-config\") pod \"dnsmasq-dns-7f9f9f545f-j6bkp\" (UID: \"8979d5de-6236-4b9e-a386-8befb05962dc\") " pod="openstack/dnsmasq-dns-7f9f9f545f-j6bkp" Jan 29 07:01:46 crc kubenswrapper[4826]: I0129 07:01:46.009237 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8979d5de-6236-4b9e-a386-8befb05962dc-dns-svc\") pod \"dnsmasq-dns-7f9f9f545f-j6bkp\" (UID: \"8979d5de-6236-4b9e-a386-8befb05962dc\") " pod="openstack/dnsmasq-dns-7f9f9f545f-j6bkp" Jan 29 07:01:46 crc 
kubenswrapper[4826]: I0129 07:01:46.038275 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tflqn\" (UniqueName: \"kubernetes.io/projected/8979d5de-6236-4b9e-a386-8befb05962dc-kube-api-access-tflqn\") pod \"dnsmasq-dns-7f9f9f545f-j6bkp\" (UID: \"8979d5de-6236-4b9e-a386-8befb05962dc\") " pod="openstack/dnsmasq-dns-7f9f9f545f-j6bkp" Jan 29 07:01:46 crc kubenswrapper[4826]: I0129 07:01:46.046173 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f9f9f545f-j6bkp" Jan 29 07:01:46 crc kubenswrapper[4826]: I0129 07:01:46.873628 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 29 07:01:46 crc kubenswrapper[4826]: I0129 07:01:46.882869 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 29 07:01:46 crc kubenswrapper[4826]: I0129 07:01:46.885383 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 29 07:01:46 crc kubenswrapper[4826]: I0129 07:01:46.885441 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 29 07:01:46 crc kubenswrapper[4826]: I0129 07:01:46.885618 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 29 07:01:46 crc kubenswrapper[4826]: I0129 07:01:46.887869 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-ktjxh" Jan 29 07:01:46 crc kubenswrapper[4826]: I0129 07:01:46.900412 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 29 07:01:47 crc kubenswrapper[4826]: I0129 07:01:47.020954 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-etc-swift\") pod \"swift-storage-0\" (UID: 
\"85b51a36-8aa5-46e7-b8ab-a7e672c491d7\") " pod="openstack/swift-storage-0" Jan 29 07:01:47 crc kubenswrapper[4826]: I0129 07:01:47.021056 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-cache\") pod \"swift-storage-0\" (UID: \"85b51a36-8aa5-46e7-b8ab-a7e672c491d7\") " pod="openstack/swift-storage-0" Jan 29 07:01:47 crc kubenswrapper[4826]: I0129 07:01:47.021208 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-lock\") pod \"swift-storage-0\" (UID: \"85b51a36-8aa5-46e7-b8ab-a7e672c491d7\") " pod="openstack/swift-storage-0" Jan 29 07:01:47 crc kubenswrapper[4826]: I0129 07:01:47.021370 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"85b51a36-8aa5-46e7-b8ab-a7e672c491d7\") " pod="openstack/swift-storage-0" Jan 29 07:01:47 crc kubenswrapper[4826]: I0129 07:01:47.021806 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"85b51a36-8aa5-46e7-b8ab-a7e672c491d7\") " pod="openstack/swift-storage-0" Jan 29 07:01:47 crc kubenswrapper[4826]: I0129 07:01:47.021839 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s588z\" (UniqueName: \"kubernetes.io/projected/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-kube-api-access-s588z\") pod \"swift-storage-0\" (UID: \"85b51a36-8aa5-46e7-b8ab-a7e672c491d7\") " pod="openstack/swift-storage-0" Jan 29 07:01:47 crc kubenswrapper[4826]: I0129 
07:01:47.123327 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c7cbb8f79-dxdmv" Jan 29 07:01:47 crc kubenswrapper[4826]: I0129 07:01:47.123688 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-lock\") pod \"swift-storage-0\" (UID: \"85b51a36-8aa5-46e7-b8ab-a7e672c491d7\") " pod="openstack/swift-storage-0" Jan 29 07:01:47 crc kubenswrapper[4826]: I0129 07:01:47.123784 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"85b51a36-8aa5-46e7-b8ab-a7e672c491d7\") " pod="openstack/swift-storage-0" Jan 29 07:01:47 crc kubenswrapper[4826]: I0129 07:01:47.123915 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"85b51a36-8aa5-46e7-b8ab-a7e672c491d7\") " pod="openstack/swift-storage-0" Jan 29 07:01:47 crc kubenswrapper[4826]: I0129 07:01:47.124007 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s588z\" (UniqueName: \"kubernetes.io/projected/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-kube-api-access-s588z\") pod \"swift-storage-0\" (UID: \"85b51a36-8aa5-46e7-b8ab-a7e672c491d7\") " pod="openstack/swift-storage-0" Jan 29 07:01:47 crc kubenswrapper[4826]: I0129 07:01:47.124034 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-etc-swift\") pod \"swift-storage-0\" (UID: \"85b51a36-8aa5-46e7-b8ab-a7e672c491d7\") " pod="openstack/swift-storage-0" Jan 29 07:01:47 crc kubenswrapper[4826]: I0129 07:01:47.124160 4826 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-cache\") pod \"swift-storage-0\" (UID: \"85b51a36-8aa5-46e7-b8ab-a7e672c491d7\") " pod="openstack/swift-storage-0" Jan 29 07:01:47 crc kubenswrapper[4826]: I0129 07:01:47.124215 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"85b51a36-8aa5-46e7-b8ab-a7e672c491d7\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/swift-storage-0" Jan 29 07:01:47 crc kubenswrapper[4826]: E0129 07:01:47.124278 4826 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 07:01:47 crc kubenswrapper[4826]: I0129 07:01:47.124320 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-lock\") pod \"swift-storage-0\" (UID: \"85b51a36-8aa5-46e7-b8ab-a7e672c491d7\") " pod="openstack/swift-storage-0" Jan 29 07:01:47 crc kubenswrapper[4826]: E0129 07:01:47.124324 4826 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 07:01:47 crc kubenswrapper[4826]: E0129 07:01:47.124400 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-etc-swift podName:85b51a36-8aa5-46e7-b8ab-a7e672c491d7 nodeName:}" failed. No retries permitted until 2026-01-29 07:01:47.624384251 +0000 UTC m=+1091.486177320 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-etc-swift") pod "swift-storage-0" (UID: "85b51a36-8aa5-46e7-b8ab-a7e672c491d7") : configmap "swift-ring-files" not found Jan 29 07:01:47 crc kubenswrapper[4826]: I0129 07:01:47.124571 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-cache\") pod \"swift-storage-0\" (UID: \"85b51a36-8aa5-46e7-b8ab-a7e672c491d7\") " pod="openstack/swift-storage-0" Jan 29 07:01:47 crc kubenswrapper[4826]: I0129 07:01:47.130228 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"85b51a36-8aa5-46e7-b8ab-a7e672c491d7\") " pod="openstack/swift-storage-0" Jan 29 07:01:47 crc kubenswrapper[4826]: I0129 07:01:47.147770 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s588z\" (UniqueName: \"kubernetes.io/projected/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-kube-api-access-s588z\") pod \"swift-storage-0\" (UID: \"85b51a36-8aa5-46e7-b8ab-a7e672c491d7\") " pod="openstack/swift-storage-0" Jan 29 07:01:47 crc kubenswrapper[4826]: I0129 07:01:47.159783 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"85b51a36-8aa5-46e7-b8ab-a7e672c491d7\") " pod="openstack/swift-storage-0" Jan 29 07:01:47 crc kubenswrapper[4826]: I0129 07:01:47.225365 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aaa37e86-663c-41ba-a3cd-b144bf92805e-dns-svc\") pod \"aaa37e86-663c-41ba-a3cd-b144bf92805e\" (UID: \"aaa37e86-663c-41ba-a3cd-b144bf92805e\") " Jan 29 
07:01:47 crc kubenswrapper[4826]: I0129 07:01:47.225552 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f26n5\" (UniqueName: \"kubernetes.io/projected/aaa37e86-663c-41ba-a3cd-b144bf92805e-kube-api-access-f26n5\") pod \"aaa37e86-663c-41ba-a3cd-b144bf92805e\" (UID: \"aaa37e86-663c-41ba-a3cd-b144bf92805e\") " Jan 29 07:01:47 crc kubenswrapper[4826]: I0129 07:01:47.225581 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaa37e86-663c-41ba-a3cd-b144bf92805e-config\") pod \"aaa37e86-663c-41ba-a3cd-b144bf92805e\" (UID: \"aaa37e86-663c-41ba-a3cd-b144bf92805e\") " Jan 29 07:01:47 crc kubenswrapper[4826]: I0129 07:01:47.228342 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaa37e86-663c-41ba-a3cd-b144bf92805e-kube-api-access-f26n5" (OuterVolumeSpecName: "kube-api-access-f26n5") pod "aaa37e86-663c-41ba-a3cd-b144bf92805e" (UID: "aaa37e86-663c-41ba-a3cd-b144bf92805e"). InnerVolumeSpecName "kube-api-access-f26n5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:01:47 crc kubenswrapper[4826]: I0129 07:01:47.257641 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aaa37e86-663c-41ba-a3cd-b144bf92805e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aaa37e86-663c-41ba-a3cd-b144bf92805e" (UID: "aaa37e86-663c-41ba-a3cd-b144bf92805e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:01:47 crc kubenswrapper[4826]: I0129 07:01:47.257748 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aaa37e86-663c-41ba-a3cd-b144bf92805e-config" (OuterVolumeSpecName: "config") pod "aaa37e86-663c-41ba-a3cd-b144bf92805e" (UID: "aaa37e86-663c-41ba-a3cd-b144bf92805e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:01:47 crc kubenswrapper[4826]: I0129 07:01:47.327003 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aaa37e86-663c-41ba-a3cd-b144bf92805e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 07:01:47 crc kubenswrapper[4826]: I0129 07:01:47.327262 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f26n5\" (UniqueName: \"kubernetes.io/projected/aaa37e86-663c-41ba-a3cd-b144bf92805e-kube-api-access-f26n5\") on node \"crc\" DevicePath \"\"" Jan 29 07:01:47 crc kubenswrapper[4826]: I0129 07:01:47.327277 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaa37e86-663c-41ba-a3cd-b144bf92805e-config\") on node \"crc\" DevicePath \"\"" Jan 29 07:01:47 crc kubenswrapper[4826]: I0129 07:01:47.329720 4826 generic.go:334] "Generic (PLEG): container finished" podID="aaa37e86-663c-41ba-a3cd-b144bf92805e" containerID="3d162a6c65ae0f8a471283f0f949a5a0cac1c2982479fbd1fa241ef36b175d31" exitCode=0 Jan 29 07:01:47 crc kubenswrapper[4826]: I0129 07:01:47.329785 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c7cbb8f79-dxdmv" Jan 29 07:01:47 crc kubenswrapper[4826]: I0129 07:01:47.329822 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c7cbb8f79-dxdmv" event={"ID":"aaa37e86-663c-41ba-a3cd-b144bf92805e","Type":"ContainerDied","Data":"3d162a6c65ae0f8a471283f0f949a5a0cac1c2982479fbd1fa241ef36b175d31"} Jan 29 07:01:47 crc kubenswrapper[4826]: I0129 07:01:47.329863 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c7cbb8f79-dxdmv" event={"ID":"aaa37e86-663c-41ba-a3cd-b144bf92805e","Type":"ContainerDied","Data":"71b44553a62426e2103caacf5e61f03cc6dd3f0b79346df27779daf6f4dd215e"} Jan 29 07:01:47 crc kubenswrapper[4826]: I0129 07:01:47.329883 4826 scope.go:117] "RemoveContainer" containerID="3d162a6c65ae0f8a471283f0f949a5a0cac1c2982479fbd1fa241ef36b175d31" Jan 29 07:01:47 crc kubenswrapper[4826]: E0129 07:01:47.332633 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7" Jan 29 07:01:47 crc kubenswrapper[4826]: E0129 07:01:47.332829 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:n679h67fh57bhc9h75h5ffh698h5f6h656h59ch87h9hdh8ch686h56fh599h589h54h55fhdch74h65bh644h68bhffh67bh547h586h57ch5fdh9dq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdvdl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil
,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(af857248-0a50-4850-93dd-c8c4e5d8e5ea): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 07:01:47 crc kubenswrapper[4826]: E0129 07:01:47.334079 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-nb-0" podUID="af857248-0a50-4850-93dd-c8c4e5d8e5ea" Jan 29 07:01:47 crc kubenswrapper[4826]: E0129 07:01:47.343026 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7" Jan 29 07:01:47 crc kubenswrapper[4826]: E0129 07:01:47.343174 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:n577h548hc9h564h59dh57hb8h67bh54fh544h56fh58fh64bh695h66ch577hfbh5fdhf5h584h559h68bh6bh5d8h65ch657h54fh5c9h646h676h664h654q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f25lp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMo
unt:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-sb-0_openstack(3a20458e-fa0a-4aa2-a59a-70ebb523a3d9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 29 07:01:47 crc kubenswrapper[4826]: E0129 07:01:47.345019 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-sb-0" podUID="3a20458e-fa0a-4aa2-a59a-70ebb523a3d9"
Jan 29 07:01:47 crc kubenswrapper[4826]: I0129 07:01:47.350228 4826 scope.go:117] "RemoveContainer" containerID="d4b7e743f1f167bc06eccd90ab64842d89f331eb74d5cac0fcc189604ed95c17"
Jan 29 07:01:47 crc kubenswrapper[4826]: I0129 07:01:47.373137 4826 scope.go:117] "RemoveContainer" containerID="3d162a6c65ae0f8a471283f0f949a5a0cac1c2982479fbd1fa241ef36b175d31"
Jan 29 07:01:47 crc kubenswrapper[4826]: E0129 07:01:47.373540 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d162a6c65ae0f8a471283f0f949a5a0cac1c2982479fbd1fa241ef36b175d31\": container with ID starting with 3d162a6c65ae0f8a471283f0f949a5a0cac1c2982479fbd1fa241ef36b175d31 not found: ID does not exist" containerID="3d162a6c65ae0f8a471283f0f949a5a0cac1c2982479fbd1fa241ef36b175d31"
Jan 29 07:01:47 crc kubenswrapper[4826]: I0129 07:01:47.373574 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d162a6c65ae0f8a471283f0f949a5a0cac1c2982479fbd1fa241ef36b175d31"} err="failed to get container status \"3d162a6c65ae0f8a471283f0f949a5a0cac1c2982479fbd1fa241ef36b175d31\": rpc error: code = NotFound desc = could not find container \"3d162a6c65ae0f8a471283f0f949a5a0cac1c2982479fbd1fa241ef36b175d31\": container with ID starting with 3d162a6c65ae0f8a471283f0f949a5a0cac1c2982479fbd1fa241ef36b175d31 not found: ID does not exist"
Jan 29 07:01:47 crc kubenswrapper[4826]: I0129 07:01:47.373599 4826 scope.go:117] "RemoveContainer" containerID="d4b7e743f1f167bc06eccd90ab64842d89f331eb74d5cac0fcc189604ed95c17"
Jan 29 07:01:47 crc kubenswrapper[4826]: E0129 07:01:47.373992 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4b7e743f1f167bc06eccd90ab64842d89f331eb74d5cac0fcc189604ed95c17\": container with ID starting with d4b7e743f1f167bc06eccd90ab64842d89f331eb74d5cac0fcc189604ed95c17 not found: ID does not exist" containerID="d4b7e743f1f167bc06eccd90ab64842d89f331eb74d5cac0fcc189604ed95c17"
Jan 29 07:01:47 crc kubenswrapper[4826]: I0129 07:01:47.374018 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4b7e743f1f167bc06eccd90ab64842d89f331eb74d5cac0fcc189604ed95c17"} err="failed to get container status \"d4b7e743f1f167bc06eccd90ab64842d89f331eb74d5cac0fcc189604ed95c17\": rpc error: code = NotFound desc = could not find container \"d4b7e743f1f167bc06eccd90ab64842d89f331eb74d5cac0fcc189604ed95c17\": container with ID starting with d4b7e743f1f167bc06eccd90ab64842d89f331eb74d5cac0fcc189604ed95c17 not found: ID does not exist"
Jan 29 07:01:47 crc kubenswrapper[4826]: I0129 07:01:47.395237 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-dxdmv"]
Jan 29 07:01:47 crc kubenswrapper[4826]: I0129 07:01:47.411352 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-dxdmv"]
Jan 29 07:01:47 crc kubenswrapper[4826]: I0129 07:01:47.632953 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-etc-swift\") pod \"swift-storage-0\" (UID: \"85b51a36-8aa5-46e7-b8ab-a7e672c491d7\") " pod="openstack/swift-storage-0"
Jan 29 07:01:47 crc kubenswrapper[4826]: E0129 07:01:47.633209 4826 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 29 07:01:47 crc kubenswrapper[4826]: E0129 07:01:47.633247 4826 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 29 07:01:47 crc kubenswrapper[4826]: E0129 07:01:47.633344 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-etc-swift podName:85b51a36-8aa5-46e7-b8ab-a7e672c491d7 nodeName:}" failed. No retries permitted until 2026-01-29 07:01:48.63332093 +0000 UTC m=+1092.495113999 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-etc-swift") pod "swift-storage-0" (UID: "85b51a36-8aa5-46e7-b8ab-a7e672c491d7") : configmap "swift-ring-files" not found
Jan 29 07:01:47 crc kubenswrapper[4826]: W0129 07:01:47.720479 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8979d5de_6236_4b9e_a386_8befb05962dc.slice/crio-c5f842c397a2fdcc2cb58da0059a3df9f16d9e184688a18cdf6331a51ca0ba66 WatchSource:0}: Error finding container c5f842c397a2fdcc2cb58da0059a3df9f16d9e184688a18cdf6331a51ca0ba66: Status 404 returned error can't find the container with id c5f842c397a2fdcc2cb58da0059a3df9f16d9e184688a18cdf6331a51ca0ba66
Jan 29 07:01:47 crc kubenswrapper[4826]: I0129 07:01:47.721938 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f9f9f545f-j6bkp"]
Jan 29 07:01:48 crc kubenswrapper[4826]: I0129 07:01:48.347330 4826 generic.go:334] "Generic (PLEG): container finished" podID="454b9218-d564-4664-b1dd-4435fa9c60b7" containerID="9cb768d646eadba2cb5d45a808c8c8e8c1b8f03c854c19add211340ed6949a2d" exitCode=0
Jan 29 07:01:48 crc kubenswrapper[4826]: I0129 07:01:48.347834 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"454b9218-d564-4664-b1dd-4435fa9c60b7","Type":"ContainerDied","Data":"9cb768d646eadba2cb5d45a808c8c8e8c1b8f03c854c19add211340ed6949a2d"}
Jan 29 07:01:48 crc kubenswrapper[4826]: I0129 07:01:48.358960 4826 generic.go:334] "Generic (PLEG): container finished" podID="426997bd-6ba1-4ebb-b8d3-08be081add91" containerID="5507b9d03083bbcbf3f503fcf50eaf096b7739e47fb5b4546534bf1849f59544" exitCode=0
Jan 29 07:01:48 crc kubenswrapper[4826]: I0129 07:01:48.359038 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"426997bd-6ba1-4ebb-b8d3-08be081add91","Type":"ContainerDied","Data":"5507b9d03083bbcbf3f503fcf50eaf096b7739e47fb5b4546534bf1849f59544"}
Jan 29 07:01:48 crc kubenswrapper[4826]: I0129 07:01:48.367000 4826 generic.go:334] "Generic (PLEG): container finished" podID="8979d5de-6236-4b9e-a386-8befb05962dc" containerID="5cbde9d096ba385beca2424b79410b873fa531e0d494fdf19159d5dc6d081ed6" exitCode=0
Jan 29 07:01:48 crc kubenswrapper[4826]: I0129 07:01:48.367455 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9f9f545f-j6bkp" event={"ID":"8979d5de-6236-4b9e-a386-8befb05962dc","Type":"ContainerDied","Data":"5cbde9d096ba385beca2424b79410b873fa531e0d494fdf19159d5dc6d081ed6"}
Jan 29 07:01:48 crc kubenswrapper[4826]: I0129 07:01:48.367529 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9f9f545f-j6bkp" event={"ID":"8979d5de-6236-4b9e-a386-8befb05962dc","Type":"ContainerStarted","Data":"c5f842c397a2fdcc2cb58da0059a3df9f16d9e184688a18cdf6331a51ca0ba66"}
Jan 29 07:01:48 crc kubenswrapper[4826]: E0129 07:01:48.379007 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="af857248-0a50-4850-93dd-c8c4e5d8e5ea"
Jan 29 07:01:48 crc kubenswrapper[4826]: E0129 07:01:48.379648 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="3a20458e-fa0a-4aa2-a59a-70ebb523a3d9"
Jan 29 07:01:48 crc kubenswrapper[4826]: I0129 07:01:48.657497 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-etc-swift\") pod \"swift-storage-0\" (UID: \"85b51a36-8aa5-46e7-b8ab-a7e672c491d7\") " pod="openstack/swift-storage-0"
Jan 29 07:01:48 crc kubenswrapper[4826]: E0129 07:01:48.657677 4826 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 29 07:01:48 crc kubenswrapper[4826]: E0129 07:01:48.657704 4826 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 29 07:01:48 crc kubenswrapper[4826]: E0129 07:01:48.657791 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-etc-swift podName:85b51a36-8aa5-46e7-b8ab-a7e672c491d7 nodeName:}" failed. No retries permitted until 2026-01-29 07:01:50.657767143 +0000 UTC m=+1094.519560212 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-etc-swift") pod "swift-storage-0" (UID: "85b51a36-8aa5-46e7-b8ab-a7e672c491d7") : configmap "swift-ring-files" not found
Jan 29 07:01:48 crc kubenswrapper[4826]: I0129 07:01:48.817606 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaa37e86-663c-41ba-a3cd-b144bf92805e" path="/var/lib/kubelet/pods/aaa37e86-663c-41ba-a3cd-b144bf92805e/volumes"
Jan 29 07:01:48 crc kubenswrapper[4826]: I0129 07:01:48.911723 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Jan 29 07:01:48 crc kubenswrapper[4826]: I0129 07:01:48.951073 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Jan 29 07:01:49 crc kubenswrapper[4826]: I0129 07:01:49.384131 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"454b9218-d564-4664-b1dd-4435fa9c60b7","Type":"ContainerStarted","Data":"1937919f8dd64b752c871037ec07858c20e0540d1d4d7464eab4f0a0259be556"}
Jan 29 07:01:49 crc kubenswrapper[4826]: I0129 07:01:49.386469 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"426997bd-6ba1-4ebb-b8d3-08be081add91","Type":"ContainerStarted","Data":"efa12df27e8f1e42dacfe2b602d83d6ae2370e8bb4cb8ce8ab4abb28c9e997af"}
Jan 29 07:01:49 crc kubenswrapper[4826]: I0129 07:01:49.388766 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9f9f545f-j6bkp" event={"ID":"8979d5de-6236-4b9e-a386-8befb05962dc","Type":"ContainerStarted","Data":"d81e34c735dd1e66e5f7ce0d6cdeb2605e1156e273ea551cb1e59751d9cb4ccf"}
Jan 29 07:01:49 crc kubenswrapper[4826]: I0129 07:01:49.389054 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Jan 29 07:01:49 crc kubenswrapper[4826]: E0129 07:01:49.391108 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="af857248-0a50-4850-93dd-c8c4e5d8e5ea"
Jan 29 07:01:49 crc kubenswrapper[4826]: I0129 07:01:49.410200 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=33.983602985 podStartE2EDuration="47.410168132s" podCreationTimestamp="2026-01-29 07:01:02 +0000 UTC" firstStartedPulling="2026-01-29 07:01:20.483206502 +0000 UTC m=+1064.344999571" lastFinishedPulling="2026-01-29 07:01:33.909771639 +0000 UTC m=+1077.771564718" observedRunningTime="2026-01-29 07:01:49.404353045 +0000 UTC m=+1093.266146134" watchObservedRunningTime="2026-01-29 07:01:49.410168132 +0000 UTC m=+1093.271961201"
Jan 29 07:01:49 crc kubenswrapper[4826]: I0129 07:01:49.446501 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Jan 29 07:01:49 crc kubenswrapper[4826]: I0129 07:01:49.456725 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f9f9f545f-j6bkp" podStartSLOduration=4.456710808 podStartE2EDuration="4.456710808s" podCreationTimestamp="2026-01-29 07:01:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:01:49.451389823 +0000 UTC m=+1093.313182882" watchObservedRunningTime="2026-01-29 07:01:49.456710808 +0000 UTC m=+1093.318503877"
Jan 29 07:01:49 crc kubenswrapper[4826]: I0129 07:01:49.456903 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=38.206356125 podStartE2EDuration="49.456897782s" podCreationTimestamp="2026-01-29 07:01:00 +0000 UTC" firstStartedPulling="2026-01-29 07:01:19.895405262 +0000 UTC m=+1063.757198321" lastFinishedPulling="2026-01-29 07:01:31.145946879 +0000 UTC m=+1075.007739978" observedRunningTime="2026-01-29 07:01:49.431107891 +0000 UTC m=+1093.292900970" watchObservedRunningTime="2026-01-29 07:01:49.456897782 +0000 UTC m=+1093.318690851"
Jan 29 07:01:50 crc kubenswrapper[4826]: I0129 07:01:50.399789 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f9f9f545f-j6bkp"
Jan 29 07:01:50 crc kubenswrapper[4826]: I0129 07:01:50.401284 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Jan 29 07:01:50 crc kubenswrapper[4826]: E0129 07:01:50.403575 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="3a20458e-fa0a-4aa2-a59a-70ebb523a3d9"
Jan 29 07:01:50 crc kubenswrapper[4826]: E0129 07:01:50.403906 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="af857248-0a50-4850-93dd-c8c4e5d8e5ea"
Jan 29 07:01:50 crc kubenswrapper[4826]: I0129 07:01:50.466866 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Jan 29 07:01:50 crc kubenswrapper[4826]: I0129 07:01:50.702566 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-etc-swift\") pod \"swift-storage-0\" (UID: \"85b51a36-8aa5-46e7-b8ab-a7e672c491d7\") " pod="openstack/swift-storage-0"
Jan 29 07:01:50 crc kubenswrapper[4826]: E0129 07:01:50.703230 4826 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 29 07:01:50 crc kubenswrapper[4826]: E0129 07:01:50.703427 4826 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 29 07:01:50 crc kubenswrapper[4826]: E0129 07:01:50.703653 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-etc-swift podName:85b51a36-8aa5-46e7-b8ab-a7e672c491d7 nodeName:}" failed. No retries permitted until 2026-01-29 07:01:54.703613192 +0000 UTC m=+1098.565406301 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-etc-swift") pod "swift-storage-0" (UID: "85b51a36-8aa5-46e7-b8ab-a7e672c491d7") : configmap "swift-ring-files" not found
Jan 29 07:01:50 crc kubenswrapper[4826]: I0129 07:01:50.820330 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-nlxt2"]
Jan 29 07:01:50 crc kubenswrapper[4826]: E0129 07:01:50.820747 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaa37e86-663c-41ba-a3cd-b144bf92805e" containerName="init"
Jan 29 07:01:50 crc kubenswrapper[4826]: I0129 07:01:50.820769 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaa37e86-663c-41ba-a3cd-b144bf92805e" containerName="init"
Jan 29 07:01:50 crc kubenswrapper[4826]: E0129 07:01:50.820806 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaa37e86-663c-41ba-a3cd-b144bf92805e" containerName="dnsmasq-dns"
Jan 29 07:01:50 crc kubenswrapper[4826]: I0129 07:01:50.820815 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaa37e86-663c-41ba-a3cd-b144bf92805e" containerName="dnsmasq-dns"
Jan 29 07:01:50 crc kubenswrapper[4826]: I0129 07:01:50.821015 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaa37e86-663c-41ba-a3cd-b144bf92805e" containerName="dnsmasq-dns"
Jan 29 07:01:50 crc kubenswrapper[4826]: I0129 07:01:50.821626 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-nlxt2"]
Jan 29 07:01:50 crc kubenswrapper[4826]: I0129 07:01:50.821762 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-nlxt2"
Jan 29 07:01:50 crc kubenswrapper[4826]: I0129 07:01:50.823846 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Jan 29 07:01:50 crc kubenswrapper[4826]: I0129 07:01:50.824158 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Jan 29 07:01:50 crc kubenswrapper[4826]: I0129 07:01:50.824477 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Jan 29 07:01:50 crc kubenswrapper[4826]: I0129 07:01:50.906401 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjvxj\" (UniqueName: \"kubernetes.io/projected/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-kube-api-access-bjvxj\") pod \"swift-ring-rebalance-nlxt2\" (UID: \"34dddfc9-db4b-48c0-9ec0-6eceb641aa26\") " pod="openstack/swift-ring-rebalance-nlxt2"
Jan 29 07:01:50 crc kubenswrapper[4826]: I0129 07:01:50.906461 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-swiftconf\") pod \"swift-ring-rebalance-nlxt2\" (UID: \"34dddfc9-db4b-48c0-9ec0-6eceb641aa26\") " pod="openstack/swift-ring-rebalance-nlxt2"
Jan 29 07:01:50 crc kubenswrapper[4826]: I0129 07:01:50.906480 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-dispersionconf\") pod \"swift-ring-rebalance-nlxt2\" (UID: \"34dddfc9-db4b-48c0-9ec0-6eceb641aa26\") " pod="openstack/swift-ring-rebalance-nlxt2"
Jan 29 07:01:50 crc kubenswrapper[4826]: I0129 07:01:50.906545 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-scripts\") pod \"swift-ring-rebalance-nlxt2\" (UID: \"34dddfc9-db4b-48c0-9ec0-6eceb641aa26\") " pod="openstack/swift-ring-rebalance-nlxt2"
Jan 29 07:01:50 crc kubenswrapper[4826]: I0129 07:01:50.906598 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-etc-swift\") pod \"swift-ring-rebalance-nlxt2\" (UID: \"34dddfc9-db4b-48c0-9ec0-6eceb641aa26\") " pod="openstack/swift-ring-rebalance-nlxt2"
Jan 29 07:01:50 crc kubenswrapper[4826]: I0129 07:01:50.906613 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-combined-ca-bundle\") pod \"swift-ring-rebalance-nlxt2\" (UID: \"34dddfc9-db4b-48c0-9ec0-6eceb641aa26\") " pod="openstack/swift-ring-rebalance-nlxt2"
Jan 29 07:01:50 crc kubenswrapper[4826]: I0129 07:01:50.906635 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-ring-data-devices\") pod \"swift-ring-rebalance-nlxt2\" (UID: \"34dddfc9-db4b-48c0-9ec0-6eceb641aa26\") " pod="openstack/swift-ring-rebalance-nlxt2"
Jan 29 07:01:51 crc kubenswrapper[4826]: I0129 07:01:51.007854 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-scripts\") pod \"swift-ring-rebalance-nlxt2\" (UID: \"34dddfc9-db4b-48c0-9ec0-6eceb641aa26\") " pod="openstack/swift-ring-rebalance-nlxt2"
Jan 29 07:01:51 crc kubenswrapper[4826]: I0129 07:01:51.007940 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-etc-swift\") pod \"swift-ring-rebalance-nlxt2\" (UID: \"34dddfc9-db4b-48c0-9ec0-6eceb641aa26\") " pod="openstack/swift-ring-rebalance-nlxt2"
Jan 29 07:01:51 crc kubenswrapper[4826]: I0129 07:01:51.007970 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-combined-ca-bundle\") pod \"swift-ring-rebalance-nlxt2\" (UID: \"34dddfc9-db4b-48c0-9ec0-6eceb641aa26\") " pod="openstack/swift-ring-rebalance-nlxt2"
Jan 29 07:01:51 crc kubenswrapper[4826]: I0129 07:01:51.008005 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-ring-data-devices\") pod \"swift-ring-rebalance-nlxt2\" (UID: \"34dddfc9-db4b-48c0-9ec0-6eceb641aa26\") " pod="openstack/swift-ring-rebalance-nlxt2"
Jan 29 07:01:51 crc kubenswrapper[4826]: I0129 07:01:51.008089 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjvxj\" (UniqueName: \"kubernetes.io/projected/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-kube-api-access-bjvxj\") pod \"swift-ring-rebalance-nlxt2\" (UID: \"34dddfc9-db4b-48c0-9ec0-6eceb641aa26\") " pod="openstack/swift-ring-rebalance-nlxt2"
Jan 29 07:01:51 crc kubenswrapper[4826]: I0129 07:01:51.008125 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-swiftconf\") pod \"swift-ring-rebalance-nlxt2\" (UID: \"34dddfc9-db4b-48c0-9ec0-6eceb641aa26\") " pod="openstack/swift-ring-rebalance-nlxt2"
Jan 29 07:01:51 crc kubenswrapper[4826]: I0129 07:01:51.008145 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-dispersionconf\") pod \"swift-ring-rebalance-nlxt2\" (UID: \"34dddfc9-db4b-48c0-9ec0-6eceb641aa26\") " pod="openstack/swift-ring-rebalance-nlxt2"
Jan 29 07:01:51 crc kubenswrapper[4826]: I0129 07:01:51.009093 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-scripts\") pod \"swift-ring-rebalance-nlxt2\" (UID: \"34dddfc9-db4b-48c0-9ec0-6eceb641aa26\") " pod="openstack/swift-ring-rebalance-nlxt2"
Jan 29 07:01:51 crc kubenswrapper[4826]: I0129 07:01:51.009691 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-etc-swift\") pod \"swift-ring-rebalance-nlxt2\" (UID: \"34dddfc9-db4b-48c0-9ec0-6eceb641aa26\") " pod="openstack/swift-ring-rebalance-nlxt2"
Jan 29 07:01:51 crc kubenswrapper[4826]: I0129 07:01:51.009798 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-ring-data-devices\") pod \"swift-ring-rebalance-nlxt2\" (UID: \"34dddfc9-db4b-48c0-9ec0-6eceb641aa26\") " pod="openstack/swift-ring-rebalance-nlxt2"
Jan 29 07:01:51 crc kubenswrapper[4826]: I0129 07:01:51.014411 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-swiftconf\") pod \"swift-ring-rebalance-nlxt2\" (UID: \"34dddfc9-db4b-48c0-9ec0-6eceb641aa26\") " pod="openstack/swift-ring-rebalance-nlxt2"
Jan 29 07:01:51 crc kubenswrapper[4826]: I0129 07:01:51.014678 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-dispersionconf\") pod \"swift-ring-rebalance-nlxt2\" (UID: \"34dddfc9-db4b-48c0-9ec0-6eceb641aa26\") " pod="openstack/swift-ring-rebalance-nlxt2"
Jan 29 07:01:51 crc kubenswrapper[4826]: I0129 07:01:51.019951 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-combined-ca-bundle\") pod \"swift-ring-rebalance-nlxt2\" (UID: \"34dddfc9-db4b-48c0-9ec0-6eceb641aa26\") " pod="openstack/swift-ring-rebalance-nlxt2"
Jan 29 07:01:51 crc kubenswrapper[4826]: I0129 07:01:51.034052 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjvxj\" (UniqueName: \"kubernetes.io/projected/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-kube-api-access-bjvxj\") pod \"swift-ring-rebalance-nlxt2\" (UID: \"34dddfc9-db4b-48c0-9ec0-6eceb641aa26\") " pod="openstack/swift-ring-rebalance-nlxt2"
Jan 29 07:01:51 crc kubenswrapper[4826]: I0129 07:01:51.148005 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-nlxt2"
Jan 29 07:01:51 crc kubenswrapper[4826]: I0129 07:01:51.400371 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Jan 29 07:01:51 crc kubenswrapper[4826]: E0129 07:01:51.410625 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="3a20458e-fa0a-4aa2-a59a-70ebb523a3d9"
Jan 29 07:01:51 crc kubenswrapper[4826]: E0129 07:01:51.411154 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="af857248-0a50-4850-93dd-c8c4e5d8e5ea"
Jan 29 07:01:51 crc kubenswrapper[4826]: I0129 07:01:51.442822 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Jan 29 07:01:51 crc kubenswrapper[4826]: I0129 07:01:51.628796 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-nlxt2"]
Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.049712 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.049759 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Jan 29 07:01:52 crc kubenswrapper[4826]: E0129 07:01:52.121772 4826 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.173:52664->38.102.83.173:41327: write tcp 38.102.83.173:52664->38.102.83.173:41327: write: connection reset by peer
Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.304579 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-jmdtd"]
Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.305806 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-jmdtd"
Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.308952 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.318851 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-jmdtd"]
Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.421082 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nlxt2" event={"ID":"34dddfc9-db4b-48c0-9ec0-6eceb641aa26","Type":"ContainerStarted","Data":"d66e9eb5b12839b686adf0c04ad120289cd318683a00b6f60f5209ab099f7f09"}
Jan 29 07:01:52 crc kubenswrapper[4826]: E0129 07:01:52.436525 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="3a20458e-fa0a-4aa2-a59a-70ebb523a3d9"
Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.437526 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6b8cca1-3968-49aa-b4ed-88d9d4075223-combined-ca-bundle\") pod \"ovn-controller-metrics-jmdtd\" (UID: \"d6b8cca1-3968-49aa-b4ed-88d9d4075223\") " pod="openstack/ovn-controller-metrics-jmdtd"
Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.437611 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6b8cca1-3968-49aa-b4ed-88d9d4075223-config\") pod \"ovn-controller-metrics-jmdtd\" (UID: \"d6b8cca1-3968-49aa-b4ed-88d9d4075223\") " pod="openstack/ovn-controller-metrics-jmdtd"
Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.437693 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6b8cca1-3968-49aa-b4ed-88d9d4075223-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jmdtd\" (UID: \"d6b8cca1-3968-49aa-b4ed-88d9d4075223\") " pod="openstack/ovn-controller-metrics-jmdtd"
Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.437723 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d6b8cca1-3968-49aa-b4ed-88d9d4075223-ovn-rundir\") pod \"ovn-controller-metrics-jmdtd\" (UID: \"d6b8cca1-3968-49aa-b4ed-88d9d4075223\") " pod="openstack/ovn-controller-metrics-jmdtd"
Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.437777 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8jss\" (UniqueName: \"kubernetes.io/projected/d6b8cca1-3968-49aa-b4ed-88d9d4075223-kube-api-access-n8jss\") pod \"ovn-controller-metrics-jmdtd\" (UID: \"d6b8cca1-3968-49aa-b4ed-88d9d4075223\") " pod="openstack/ovn-controller-metrics-jmdtd"
Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.437816 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d6b8cca1-3968-49aa-b4ed-88d9d4075223-ovs-rundir\") pod \"ovn-controller-metrics-jmdtd\" (UID: \"d6b8cca1-3968-49aa-b4ed-88d9d4075223\") " pod="openstack/ovn-controller-metrics-jmdtd"
Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.462206 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f9f9f545f-j6bkp"]
Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.462488 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f9f9f545f-j6bkp" podUID="8979d5de-6236-4b9e-a386-8befb05962dc" containerName="dnsmasq-dns" containerID="cri-o://d81e34c735dd1e66e5f7ce0d6cdeb2605e1156e273ea551cb1e59751d9cb4ccf" gracePeriod=10
Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.488978 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8555945b55-7g6ww"]
Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.492277 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8555945b55-7g6ww"
Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.499457 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.506424 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8555945b55-7g6ww"]
Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.539498 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d6b8cca1-3968-49aa-b4ed-88d9d4075223-ovs-rundir\") pod \"ovn-controller-metrics-jmdtd\" (UID: \"d6b8cca1-3968-49aa-b4ed-88d9d4075223\") " pod="openstack/ovn-controller-metrics-jmdtd"
Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.539291 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d6b8cca1-3968-49aa-b4ed-88d9d4075223-ovs-rundir\") pod \"ovn-controller-metrics-jmdtd\" (UID: \"d6b8cca1-3968-49aa-b4ed-88d9d4075223\") " pod="openstack/ovn-controller-metrics-jmdtd"
Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.539611 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6b8cca1-3968-49aa-b4ed-88d9d4075223-combined-ca-bundle\") pod \"ovn-controller-metrics-jmdtd\" (UID: \"d6b8cca1-3968-49aa-b4ed-88d9d4075223\") " pod="openstack/ovn-controller-metrics-jmdtd"
Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.539962 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76dd8a30-1e6e-4f89-beb8-5000c5e93dac-ovsdbserver-sb\") pod \"dnsmasq-dns-8555945b55-7g6ww\" (UID: \"76dd8a30-1e6e-4f89-beb8-5000c5e93dac\") " pod="openstack/dnsmasq-dns-8555945b55-7g6ww"
Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.540009 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6b8cca1-3968-49aa-b4ed-88d9d4075223-config\") pod \"ovn-controller-metrics-jmdtd\" (UID: \"d6b8cca1-3968-49aa-b4ed-88d9d4075223\") " pod="openstack/ovn-controller-metrics-jmdtd"
Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.540024 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76dd8a30-1e6e-4f89-beb8-5000c5e93dac-dns-svc\") pod \"dnsmasq-dns-8555945b55-7g6ww\" (UID: \"76dd8a30-1e6e-4f89-beb8-5000c5e93dac\") " pod="openstack/dnsmasq-dns-8555945b55-7g6ww"
Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.540515 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6b8cca1-3968-49aa-b4ed-88d9d4075223-config\") pod \"ovn-controller-metrics-jmdtd\" (UID: \"d6b8cca1-3968-49aa-b4ed-88d9d4075223\") " pod="openstack/ovn-controller-metrics-jmdtd"
Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.540571 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76dd8a30-1e6e-4f89-beb8-5000c5e93dac-config\") pod \"dnsmasq-dns-8555945b55-7g6ww\" (UID: \"76dd8a30-1e6e-4f89-beb8-5000c5e93dac\") " pod="openstack/dnsmasq-dns-8555945b55-7g6ww"
Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.540654 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6b8cca1-3968-49aa-b4ed-88d9d4075223-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jmdtd\" (UID: \"d6b8cca1-3968-49aa-b4ed-88d9d4075223\") " pod="openstack/ovn-controller-metrics-jmdtd"
Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.540685 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d6b8cca1-3968-49aa-b4ed-88d9d4075223-ovn-rundir\") pod \"ovn-controller-metrics-jmdtd\" (UID: \"d6b8cca1-3968-49aa-b4ed-88d9d4075223\") " pod="openstack/ovn-controller-metrics-jmdtd"
Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.540728 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw2lg\" (UniqueName: \"kubernetes.io/projected/76dd8a30-1e6e-4f89-beb8-5000c5e93dac-kube-api-access-tw2lg\") pod \"dnsmasq-dns-8555945b55-7g6ww\" (UID: \"76dd8a30-1e6e-4f89-beb8-5000c5e93dac\") " pod="openstack/dnsmasq-dns-8555945b55-7g6ww"
Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.540766 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8jss\" (UniqueName: \"kubernetes.io/projected/d6b8cca1-3968-49aa-b4ed-88d9d4075223-kube-api-access-n8jss\") pod \"ovn-controller-metrics-jmdtd\" (UID: \"d6b8cca1-3968-49aa-b4ed-88d9d4075223\") " pod="openstack/ovn-controller-metrics-jmdtd"
Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.541247 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d6b8cca1-3968-49aa-b4ed-88d9d4075223-ovn-rundir\") pod \"ovn-controller-metrics-jmdtd\" (UID: \"d6b8cca1-3968-49aa-b4ed-88d9d4075223\") " pod="openstack/ovn-controller-metrics-jmdtd"
Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.545733 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6b8cca1-3968-49aa-b4ed-88d9d4075223-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jmdtd\" (UID: \"d6b8cca1-3968-49aa-b4ed-88d9d4075223\") " pod="openstack/ovn-controller-metrics-jmdtd"
Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.545881 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6b8cca1-3968-49aa-b4ed-88d9d4075223-combined-ca-bundle\") pod \"ovn-controller-metrics-jmdtd\" (UID: \"d6b8cca1-3968-49aa-b4ed-88d9d4075223\") " pod="openstack/ovn-controller-metrics-jmdtd"
Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.556905 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8jss\" (UniqueName: \"kubernetes.io/projected/d6b8cca1-3968-49aa-b4ed-88d9d4075223-kube-api-access-n8jss\") pod \"ovn-controller-metrics-jmdtd\" (UID: \"d6b8cca1-3968-49aa-b4ed-88d9d4075223\") " pod="openstack/ovn-controller-metrics-jmdtd"
Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.665618 4826 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/ovn-controller-metrics-jmdtd" Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.666182 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw2lg\" (UniqueName: \"kubernetes.io/projected/76dd8a30-1e6e-4f89-beb8-5000c5e93dac-kube-api-access-tw2lg\") pod \"dnsmasq-dns-8555945b55-7g6ww\" (UID: \"76dd8a30-1e6e-4f89-beb8-5000c5e93dac\") " pod="openstack/dnsmasq-dns-8555945b55-7g6ww" Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.666316 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76dd8a30-1e6e-4f89-beb8-5000c5e93dac-ovsdbserver-sb\") pod \"dnsmasq-dns-8555945b55-7g6ww\" (UID: \"76dd8a30-1e6e-4f89-beb8-5000c5e93dac\") " pod="openstack/dnsmasq-dns-8555945b55-7g6ww" Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.666357 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76dd8a30-1e6e-4f89-beb8-5000c5e93dac-dns-svc\") pod \"dnsmasq-dns-8555945b55-7g6ww\" (UID: \"76dd8a30-1e6e-4f89-beb8-5000c5e93dac\") " pod="openstack/dnsmasq-dns-8555945b55-7g6ww" Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.666380 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76dd8a30-1e6e-4f89-beb8-5000c5e93dac-config\") pod \"dnsmasq-dns-8555945b55-7g6ww\" (UID: \"76dd8a30-1e6e-4f89-beb8-5000c5e93dac\") " pod="openstack/dnsmasq-dns-8555945b55-7g6ww" Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.667356 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76dd8a30-1e6e-4f89-beb8-5000c5e93dac-ovsdbserver-sb\") pod \"dnsmasq-dns-8555945b55-7g6ww\" (UID: \"76dd8a30-1e6e-4f89-beb8-5000c5e93dac\") " pod="openstack/dnsmasq-dns-8555945b55-7g6ww" Jan 29 
07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.667442 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76dd8a30-1e6e-4f89-beb8-5000c5e93dac-config\") pod \"dnsmasq-dns-8555945b55-7g6ww\" (UID: \"76dd8a30-1e6e-4f89-beb8-5000c5e93dac\") " pod="openstack/dnsmasq-dns-8555945b55-7g6ww" Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.667494 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76dd8a30-1e6e-4f89-beb8-5000c5e93dac-dns-svc\") pod \"dnsmasq-dns-8555945b55-7g6ww\" (UID: \"76dd8a30-1e6e-4f89-beb8-5000c5e93dac\") " pod="openstack/dnsmasq-dns-8555945b55-7g6ww" Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.694645 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw2lg\" (UniqueName: \"kubernetes.io/projected/76dd8a30-1e6e-4f89-beb8-5000c5e93dac-kube-api-access-tw2lg\") pod \"dnsmasq-dns-8555945b55-7g6ww\" (UID: \"76dd8a30-1e6e-4f89-beb8-5000c5e93dac\") " pod="openstack/dnsmasq-dns-8555945b55-7g6ww" Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.710767 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8555945b55-7g6ww"] Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.713578 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8555945b55-7g6ww" Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.769225 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-47qwr"] Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.772020 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-47qwr" Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.774377 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.781599 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-47qwr"] Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.869677 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhvjh\" (UniqueName: \"kubernetes.io/projected/82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a-kube-api-access-nhvjh\") pod \"dnsmasq-dns-6cb545bd4c-47qwr\" (UID: \"82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a\") " pod="openstack/dnsmasq-dns-6cb545bd4c-47qwr" Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.869751 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb545bd4c-47qwr\" (UID: \"82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a\") " pod="openstack/dnsmasq-dns-6cb545bd4c-47qwr" Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.869773 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb545bd4c-47qwr\" (UID: \"82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a\") " pod="openstack/dnsmasq-dns-6cb545bd4c-47qwr" Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.869811 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a-config\") pod \"dnsmasq-dns-6cb545bd4c-47qwr\" (UID: \"82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a\") " 
pod="openstack/dnsmasq-dns-6cb545bd4c-47qwr" Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.869832 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a-dns-svc\") pod \"dnsmasq-dns-6cb545bd4c-47qwr\" (UID: \"82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a\") " pod="openstack/dnsmasq-dns-6cb545bd4c-47qwr" Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.944785 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f9f9f545f-j6bkp" Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.971690 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhvjh\" (UniqueName: \"kubernetes.io/projected/82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a-kube-api-access-nhvjh\") pod \"dnsmasq-dns-6cb545bd4c-47qwr\" (UID: \"82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a\") " pod="openstack/dnsmasq-dns-6cb545bd4c-47qwr" Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.972240 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb545bd4c-47qwr\" (UID: \"82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a\") " pod="openstack/dnsmasq-dns-6cb545bd4c-47qwr" Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.973195 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb545bd4c-47qwr\" (UID: \"82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a\") " pod="openstack/dnsmasq-dns-6cb545bd4c-47qwr" Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.973212 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb545bd4c-47qwr\" (UID: \"82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a\") " pod="openstack/dnsmasq-dns-6cb545bd4c-47qwr" Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.973261 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb545bd4c-47qwr\" (UID: \"82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a\") " pod="openstack/dnsmasq-dns-6cb545bd4c-47qwr" Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.973357 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a-config\") pod \"dnsmasq-dns-6cb545bd4c-47qwr\" (UID: \"82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a\") " pod="openstack/dnsmasq-dns-6cb545bd4c-47qwr" Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.973389 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a-dns-svc\") pod \"dnsmasq-dns-6cb545bd4c-47qwr\" (UID: \"82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a\") " pod="openstack/dnsmasq-dns-6cb545bd4c-47qwr" Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.973927 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a-dns-svc\") pod \"dnsmasq-dns-6cb545bd4c-47qwr\" (UID: \"82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a\") " pod="openstack/dnsmasq-dns-6cb545bd4c-47qwr" Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.973948 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a-config\") pod \"dnsmasq-dns-6cb545bd4c-47qwr\" (UID: 
\"82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a\") " pod="openstack/dnsmasq-dns-6cb545bd4c-47qwr" Jan 29 07:01:52 crc kubenswrapper[4826]: I0129 07:01:52.989079 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhvjh\" (UniqueName: \"kubernetes.io/projected/82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a-kube-api-access-nhvjh\") pod \"dnsmasq-dns-6cb545bd4c-47qwr\" (UID: \"82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a\") " pod="openstack/dnsmasq-dns-6cb545bd4c-47qwr" Jan 29 07:01:53 crc kubenswrapper[4826]: I0129 07:01:53.074839 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tflqn\" (UniqueName: \"kubernetes.io/projected/8979d5de-6236-4b9e-a386-8befb05962dc-kube-api-access-tflqn\") pod \"8979d5de-6236-4b9e-a386-8befb05962dc\" (UID: \"8979d5de-6236-4b9e-a386-8befb05962dc\") " Jan 29 07:01:53 crc kubenswrapper[4826]: I0129 07:01:53.075065 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8979d5de-6236-4b9e-a386-8befb05962dc-config\") pod \"8979d5de-6236-4b9e-a386-8befb05962dc\" (UID: \"8979d5de-6236-4b9e-a386-8befb05962dc\") " Jan 29 07:01:53 crc kubenswrapper[4826]: I0129 07:01:53.075115 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8979d5de-6236-4b9e-a386-8befb05962dc-dns-svc\") pod \"8979d5de-6236-4b9e-a386-8befb05962dc\" (UID: \"8979d5de-6236-4b9e-a386-8befb05962dc\") " Jan 29 07:01:53 crc kubenswrapper[4826]: I0129 07:01:53.078027 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8979d5de-6236-4b9e-a386-8befb05962dc-kube-api-access-tflqn" (OuterVolumeSpecName: "kube-api-access-tflqn") pod "8979d5de-6236-4b9e-a386-8befb05962dc" (UID: "8979d5de-6236-4b9e-a386-8befb05962dc"). InnerVolumeSpecName "kube-api-access-tflqn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:01:53 crc kubenswrapper[4826]: I0129 07:01:53.107938 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8979d5de-6236-4b9e-a386-8befb05962dc-config" (OuterVolumeSpecName: "config") pod "8979d5de-6236-4b9e-a386-8befb05962dc" (UID: "8979d5de-6236-4b9e-a386-8befb05962dc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:01:53 crc kubenswrapper[4826]: I0129 07:01:53.122528 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8979d5de-6236-4b9e-a386-8befb05962dc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8979d5de-6236-4b9e-a386-8befb05962dc" (UID: "8979d5de-6236-4b9e-a386-8befb05962dc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:01:53 crc kubenswrapper[4826]: I0129 07:01:53.164135 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-47qwr" Jan 29 07:01:53 crc kubenswrapper[4826]: I0129 07:01:53.177492 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tflqn\" (UniqueName: \"kubernetes.io/projected/8979d5de-6236-4b9e-a386-8befb05962dc-kube-api-access-tflqn\") on node \"crc\" DevicePath \"\"" Jan 29 07:01:53 crc kubenswrapper[4826]: I0129 07:01:53.177539 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8979d5de-6236-4b9e-a386-8befb05962dc-config\") on node \"crc\" DevicePath \"\"" Jan 29 07:01:53 crc kubenswrapper[4826]: I0129 07:01:53.177555 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8979d5de-6236-4b9e-a386-8befb05962dc-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 07:01:53 crc kubenswrapper[4826]: I0129 07:01:53.221962 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-jmdtd"] Jan 29 07:01:53 crc kubenswrapper[4826]: W0129 07:01:53.225266 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6b8cca1_3968_49aa_b4ed_88d9d4075223.slice/crio-98844d33f7081c3da1ac1cb223a390cca10ebb3f27f597cf8d6235f58cbd6d92 WatchSource:0}: Error finding container 98844d33f7081c3da1ac1cb223a390cca10ebb3f27f597cf8d6235f58cbd6d92: Status 404 returned error can't find the container with id 98844d33f7081c3da1ac1cb223a390cca10ebb3f27f597cf8d6235f58cbd6d92 Jan 29 07:01:53 crc kubenswrapper[4826]: I0129 07:01:53.317317 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8555945b55-7g6ww"] Jan 29 07:01:53 crc kubenswrapper[4826]: I0129 07:01:53.368422 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 29 07:01:53 crc kubenswrapper[4826]: I0129 07:01:53.368462 4826 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 29 07:01:53 crc kubenswrapper[4826]: W0129 07:01:53.401467 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76dd8a30_1e6e_4f89_beb8_5000c5e93dac.slice/crio-f25f24780ab18b3d138150c36f4d930c5a7211ad3350f24724cb32d638596f73 WatchSource:0}: Error finding container f25f24780ab18b3d138150c36f4d930c5a7211ad3350f24724cb32d638596f73: Status 404 returned error can't find the container with id f25f24780ab18b3d138150c36f4d930c5a7211ad3350f24724cb32d638596f73 Jan 29 07:01:53 crc kubenswrapper[4826]: I0129 07:01:53.432148 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8555945b55-7g6ww" event={"ID":"76dd8a30-1e6e-4f89-beb8-5000c5e93dac","Type":"ContainerStarted","Data":"f25f24780ab18b3d138150c36f4d930c5a7211ad3350f24724cb32d638596f73"} Jan 29 07:01:53 crc kubenswrapper[4826]: I0129 07:01:53.434126 4826 generic.go:334] "Generic (PLEG): container finished" podID="0da3bc6b-99a0-4de9-9479-5aaef8bfd81c" containerID="a618aff47b6a4f080c12ed436cdd2e152b8a7acc4f88c6d531c82c61bbd02d8c" exitCode=0 Jan 29 07:01:53 crc kubenswrapper[4826]: I0129 07:01:53.434206 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c","Type":"ContainerDied","Data":"a618aff47b6a4f080c12ed436cdd2e152b8a7acc4f88c6d531c82c61bbd02d8c"} Jan 29 07:01:53 crc kubenswrapper[4826]: I0129 07:01:53.435520 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-jmdtd" event={"ID":"d6b8cca1-3968-49aa-b4ed-88d9d4075223","Type":"ContainerStarted","Data":"98844d33f7081c3da1ac1cb223a390cca10ebb3f27f597cf8d6235f58cbd6d92"} Jan 29 07:01:53 crc kubenswrapper[4826]: I0129 07:01:53.437570 4826 generic.go:334] "Generic (PLEG): container finished" podID="8979d5de-6236-4b9e-a386-8befb05962dc" 
containerID="d81e34c735dd1e66e5f7ce0d6cdeb2605e1156e273ea551cb1e59751d9cb4ccf" exitCode=0 Jan 29 07:01:53 crc kubenswrapper[4826]: I0129 07:01:53.437624 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9f9f545f-j6bkp" event={"ID":"8979d5de-6236-4b9e-a386-8befb05962dc","Type":"ContainerDied","Data":"d81e34c735dd1e66e5f7ce0d6cdeb2605e1156e273ea551cb1e59751d9cb4ccf"} Jan 29 07:01:53 crc kubenswrapper[4826]: I0129 07:01:53.437626 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f9f9f545f-j6bkp" Jan 29 07:01:53 crc kubenswrapper[4826]: I0129 07:01:53.437642 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9f9f545f-j6bkp" event={"ID":"8979d5de-6236-4b9e-a386-8befb05962dc","Type":"ContainerDied","Data":"c5f842c397a2fdcc2cb58da0059a3df9f16d9e184688a18cdf6331a51ca0ba66"} Jan 29 07:01:53 crc kubenswrapper[4826]: I0129 07:01:53.437662 4826 scope.go:117] "RemoveContainer" containerID="d81e34c735dd1e66e5f7ce0d6cdeb2605e1156e273ea551cb1e59751d9cb4ccf" Jan 29 07:01:53 crc kubenswrapper[4826]: I0129 07:01:53.440259 4826 generic.go:334] "Generic (PLEG): container finished" podID="1794f620-102a-4b9c-9097-713579ec55ad" containerID="a285c0e82f869c096c5852cbe3ebb71f48bfdd919cd5f2aa2550ecf47c3da59f" exitCode=0 Jan 29 07:01:53 crc kubenswrapper[4826]: I0129 07:01:53.441233 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1794f620-102a-4b9c-9097-713579ec55ad","Type":"ContainerDied","Data":"a285c0e82f869c096c5852cbe3ebb71f48bfdd919cd5f2aa2550ecf47c3da59f"} Jan 29 07:01:53 crc kubenswrapper[4826]: I0129 07:01:53.441898 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 29 07:01:53 crc kubenswrapper[4826]: E0129 07:01:53.446926 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="3a20458e-fa0a-4aa2-a59a-70ebb523a3d9" Jan 29 07:01:53 crc kubenswrapper[4826]: I0129 07:01:53.477826 4826 scope.go:117] "RemoveContainer" containerID="5cbde9d096ba385beca2424b79410b873fa531e0d494fdf19159d5dc6d081ed6" Jan 29 07:01:53 crc kubenswrapper[4826]: I0129 07:01:53.512462 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f9f9f545f-j6bkp"] Jan 29 07:01:53 crc kubenswrapper[4826]: I0129 07:01:53.521480 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f9f9f545f-j6bkp"] Jan 29 07:01:53 crc kubenswrapper[4826]: I0129 07:01:53.544930 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 29 07:01:53 crc kubenswrapper[4826]: I0129 07:01:53.610211 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-47qwr"] Jan 29 07:01:54 crc kubenswrapper[4826]: I0129 07:01:54.751067 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-etc-swift\") pod \"swift-storage-0\" (UID: \"85b51a36-8aa5-46e7-b8ab-a7e672c491d7\") " pod="openstack/swift-storage-0" Jan 29 07:01:54 crc kubenswrapper[4826]: E0129 07:01:54.751274 4826 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 07:01:54 crc kubenswrapper[4826]: E0129 07:01:54.751519 4826 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 07:01:54 crc kubenswrapper[4826]: E0129 07:01:54.751574 4826 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/projected/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-etc-swift podName:85b51a36-8aa5-46e7-b8ab-a7e672c491d7 nodeName:}" failed. No retries permitted until 2026-01-29 07:02:02.751556844 +0000 UTC m=+1106.613349913 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-etc-swift") pod "swift-storage-0" (UID: "85b51a36-8aa5-46e7-b8ab-a7e672c491d7") : configmap "swift-ring-files" not found Jan 29 07:01:54 crc kubenswrapper[4826]: I0129 07:01:54.840779 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8979d5de-6236-4b9e-a386-8befb05962dc" path="/var/lib/kubelet/pods/8979d5de-6236-4b9e-a386-8befb05962dc/volumes" Jan 29 07:01:55 crc kubenswrapper[4826]: W0129 07:01:55.553601 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82fc89ef_0d6c_4f94_b36b_2d7b4ea3aa1a.slice/crio-32e65aaf7ada5214518d3d9472f41ebb6a12256462856e703bbf936b27354fe4 WatchSource:0}: Error finding container 32e65aaf7ada5214518d3d9472f41ebb6a12256462856e703bbf936b27354fe4: Status 404 returned error can't find the container with id 32e65aaf7ada5214518d3d9472f41ebb6a12256462856e703bbf936b27354fe4 Jan 29 07:01:55 crc kubenswrapper[4826]: I0129 07:01:55.583192 4826 scope.go:117] "RemoveContainer" containerID="d81e34c735dd1e66e5f7ce0d6cdeb2605e1156e273ea551cb1e59751d9cb4ccf" Jan 29 07:01:55 crc kubenswrapper[4826]: E0129 07:01:55.583928 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d81e34c735dd1e66e5f7ce0d6cdeb2605e1156e273ea551cb1e59751d9cb4ccf\": container with ID starting with d81e34c735dd1e66e5f7ce0d6cdeb2605e1156e273ea551cb1e59751d9cb4ccf not found: ID does not exist" containerID="d81e34c735dd1e66e5f7ce0d6cdeb2605e1156e273ea551cb1e59751d9cb4ccf" Jan 29 07:01:55 crc kubenswrapper[4826]: I0129 
07:01:55.583986 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d81e34c735dd1e66e5f7ce0d6cdeb2605e1156e273ea551cb1e59751d9cb4ccf"} err="failed to get container status \"d81e34c735dd1e66e5f7ce0d6cdeb2605e1156e273ea551cb1e59751d9cb4ccf\": rpc error: code = NotFound desc = could not find container \"d81e34c735dd1e66e5f7ce0d6cdeb2605e1156e273ea551cb1e59751d9cb4ccf\": container with ID starting with d81e34c735dd1e66e5f7ce0d6cdeb2605e1156e273ea551cb1e59751d9cb4ccf not found: ID does not exist" Jan 29 07:01:55 crc kubenswrapper[4826]: I0129 07:01:55.584022 4826 scope.go:117] "RemoveContainer" containerID="5cbde9d096ba385beca2424b79410b873fa531e0d494fdf19159d5dc6d081ed6" Jan 29 07:01:55 crc kubenswrapper[4826]: E0129 07:01:55.584581 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cbde9d096ba385beca2424b79410b873fa531e0d494fdf19159d5dc6d081ed6\": container with ID starting with 5cbde9d096ba385beca2424b79410b873fa531e0d494fdf19159d5dc6d081ed6 not found: ID does not exist" containerID="5cbde9d096ba385beca2424b79410b873fa531e0d494fdf19159d5dc6d081ed6" Jan 29 07:01:55 crc kubenswrapper[4826]: I0129 07:01:55.584630 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cbde9d096ba385beca2424b79410b873fa531e0d494fdf19159d5dc6d081ed6"} err="failed to get container status \"5cbde9d096ba385beca2424b79410b873fa531e0d494fdf19159d5dc6d081ed6\": rpc error: code = NotFound desc = could not find container \"5cbde9d096ba385beca2424b79410b873fa531e0d494fdf19159d5dc6d081ed6\": container with ID starting with 5cbde9d096ba385beca2424b79410b873fa531e0d494fdf19159d5dc6d081ed6 not found: ID does not exist" Jan 29 07:01:56 crc kubenswrapper[4826]: I0129 07:01:56.147535 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 29 07:01:56 crc kubenswrapper[4826]: 
I0129 07:01:56.224908 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 29 07:01:56 crc kubenswrapper[4826]: I0129 07:01:56.483955 4826 generic.go:334] "Generic (PLEG): container finished" podID="82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a" containerID="3ba3b1d66aae82bf152504940a914bd66dc28d5355f475f5e2d087451fddf4b5" exitCode=0 Jan 29 07:01:56 crc kubenswrapper[4826]: I0129 07:01:56.484030 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-47qwr" event={"ID":"82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a","Type":"ContainerDied","Data":"3ba3b1d66aae82bf152504940a914bd66dc28d5355f475f5e2d087451fddf4b5"} Jan 29 07:01:56 crc kubenswrapper[4826]: I0129 07:01:56.484059 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-47qwr" event={"ID":"82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a","Type":"ContainerStarted","Data":"32e65aaf7ada5214518d3d9472f41ebb6a12256462856e703bbf936b27354fe4"} Jan 29 07:01:56 crc kubenswrapper[4826]: I0129 07:01:56.486750 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1794f620-102a-4b9c-9097-713579ec55ad","Type":"ContainerStarted","Data":"f0ac6dfd0c3c53f1c0e6b9f3709b00a0bca023456e5256221df78b7693b4c9bf"} Jan 29 07:01:56 crc kubenswrapper[4826]: I0129 07:01:56.486949 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 29 07:01:56 crc kubenswrapper[4826]: I0129 07:01:56.491600 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nlxt2" event={"ID":"34dddfc9-db4b-48c0-9ec0-6eceb641aa26","Type":"ContainerStarted","Data":"41887c449ab701a5102ed17452b957ff0bc72de7e1198e0897cd0855d871173f"} Jan 29 07:01:56 crc kubenswrapper[4826]: I0129 07:01:56.494593 4826 generic.go:334] "Generic (PLEG): container finished" podID="76dd8a30-1e6e-4f89-beb8-5000c5e93dac" 
containerID="9f8d9b3a89f8f0ee9e1d29687dfe5a1bb1ec786706fc564f93e82c7b62dfb7e9" exitCode=0 Jan 29 07:01:56 crc kubenswrapper[4826]: I0129 07:01:56.494687 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8555945b55-7g6ww" event={"ID":"76dd8a30-1e6e-4f89-beb8-5000c5e93dac","Type":"ContainerDied","Data":"9f8d9b3a89f8f0ee9e1d29687dfe5a1bb1ec786706fc564f93e82c7b62dfb7e9"} Jan 29 07:01:56 crc kubenswrapper[4826]: I0129 07:01:56.498914 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-jmdtd" event={"ID":"d6b8cca1-3968-49aa-b4ed-88d9d4075223","Type":"ContainerStarted","Data":"e468868d139ea7d7683d5f9d96d634b091bed19d0c79b17142be195283c11a88"} Jan 29 07:01:56 crc kubenswrapper[4826]: I0129 07:01:56.500530 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c","Type":"ContainerStarted","Data":"561f44049eef8bcf9743aabf5fda4a13b2156ef6047f470fc4f0c9a570583cb1"} Jan 29 07:01:56 crc kubenswrapper[4826]: I0129 07:01:56.500985 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:01:56 crc kubenswrapper[4826]: I0129 07:01:56.534214 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=39.472621295 podStartE2EDuration="58.534194773s" podCreationTimestamp="2026-01-29 07:00:58 +0000 UTC" firstStartedPulling="2026-01-29 07:01:00.455119525 +0000 UTC m=+1044.316912594" lastFinishedPulling="2026-01-29 07:01:19.516692963 +0000 UTC m=+1063.378486072" observedRunningTime="2026-01-29 07:01:56.527803981 +0000 UTC m=+1100.389597060" watchObservedRunningTime="2026-01-29 07:01:56.534194773 +0000 UTC m=+1100.395987842" Jan 29 07:01:56 crc kubenswrapper[4826]: I0129 07:01:56.571679 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" 
podStartSLOduration=39.088729672 podStartE2EDuration="57.5716597s" podCreationTimestamp="2026-01-29 07:00:59 +0000 UTC" firstStartedPulling="2026-01-29 07:01:01.10716911 +0000 UTC m=+1044.968962179" lastFinishedPulling="2026-01-29 07:01:19.590099118 +0000 UTC m=+1063.451892207" observedRunningTime="2026-01-29 07:01:56.556141748 +0000 UTC m=+1100.417934837" watchObservedRunningTime="2026-01-29 07:01:56.5716597 +0000 UTC m=+1100.433452759" Jan 29 07:01:56 crc kubenswrapper[4826]: I0129 07:01:56.614436 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-nlxt2" podStartSLOduration=2.521245824 podStartE2EDuration="6.61441742s" podCreationTimestamp="2026-01-29 07:01:50 +0000 UTC" firstStartedPulling="2026-01-29 07:01:51.621871132 +0000 UTC m=+1095.483664211" lastFinishedPulling="2026-01-29 07:01:55.715042728 +0000 UTC m=+1099.576835807" observedRunningTime="2026-01-29 07:01:56.607336011 +0000 UTC m=+1100.469129080" watchObservedRunningTime="2026-01-29 07:01:56.61441742 +0000 UTC m=+1100.476210489" Jan 29 07:01:56 crc kubenswrapper[4826]: I0129 07:01:56.653120 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-jmdtd" podStartSLOduration=2.184743473 podStartE2EDuration="4.652285017s" podCreationTimestamp="2026-01-29 07:01:52 +0000 UTC" firstStartedPulling="2026-01-29 07:01:53.227429356 +0000 UTC m=+1097.089222425" lastFinishedPulling="2026-01-29 07:01:55.69497089 +0000 UTC m=+1099.556763969" observedRunningTime="2026-01-29 07:01:56.642593262 +0000 UTC m=+1100.504386331" watchObservedRunningTime="2026-01-29 07:01:56.652285017 +0000 UTC m=+1100.514078086" Jan 29 07:01:56 crc kubenswrapper[4826]: I0129 07:01:56.845565 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8555945b55-7g6ww" Jan 29 07:01:56 crc kubenswrapper[4826]: I0129 07:01:56.995409 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76dd8a30-1e6e-4f89-beb8-5000c5e93dac-dns-svc\") pod \"76dd8a30-1e6e-4f89-beb8-5000c5e93dac\" (UID: \"76dd8a30-1e6e-4f89-beb8-5000c5e93dac\") " Jan 29 07:01:56 crc kubenswrapper[4826]: I0129 07:01:56.995803 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76dd8a30-1e6e-4f89-beb8-5000c5e93dac-ovsdbserver-sb\") pod \"76dd8a30-1e6e-4f89-beb8-5000c5e93dac\" (UID: \"76dd8a30-1e6e-4f89-beb8-5000c5e93dac\") " Jan 29 07:01:56 crc kubenswrapper[4826]: I0129 07:01:56.995869 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tw2lg\" (UniqueName: \"kubernetes.io/projected/76dd8a30-1e6e-4f89-beb8-5000c5e93dac-kube-api-access-tw2lg\") pod \"76dd8a30-1e6e-4f89-beb8-5000c5e93dac\" (UID: \"76dd8a30-1e6e-4f89-beb8-5000c5e93dac\") " Jan 29 07:01:56 crc kubenswrapper[4826]: I0129 07:01:56.995929 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76dd8a30-1e6e-4f89-beb8-5000c5e93dac-config\") pod \"76dd8a30-1e6e-4f89-beb8-5000c5e93dac\" (UID: \"76dd8a30-1e6e-4f89-beb8-5000c5e93dac\") " Jan 29 07:01:57 crc kubenswrapper[4826]: I0129 07:01:57.002639 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76dd8a30-1e6e-4f89-beb8-5000c5e93dac-kube-api-access-tw2lg" (OuterVolumeSpecName: "kube-api-access-tw2lg") pod "76dd8a30-1e6e-4f89-beb8-5000c5e93dac" (UID: "76dd8a30-1e6e-4f89-beb8-5000c5e93dac"). InnerVolumeSpecName "kube-api-access-tw2lg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:01:57 crc kubenswrapper[4826]: I0129 07:01:57.016398 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76dd8a30-1e6e-4f89-beb8-5000c5e93dac-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "76dd8a30-1e6e-4f89-beb8-5000c5e93dac" (UID: "76dd8a30-1e6e-4f89-beb8-5000c5e93dac"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:01:57 crc kubenswrapper[4826]: I0129 07:01:57.016927 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76dd8a30-1e6e-4f89-beb8-5000c5e93dac-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "76dd8a30-1e6e-4f89-beb8-5000c5e93dac" (UID: "76dd8a30-1e6e-4f89-beb8-5000c5e93dac"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:01:57 crc kubenswrapper[4826]: I0129 07:01:57.018714 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76dd8a30-1e6e-4f89-beb8-5000c5e93dac-config" (OuterVolumeSpecName: "config") pod "76dd8a30-1e6e-4f89-beb8-5000c5e93dac" (UID: "76dd8a30-1e6e-4f89-beb8-5000c5e93dac"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:01:57 crc kubenswrapper[4826]: I0129 07:01:57.097779 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76dd8a30-1e6e-4f89-beb8-5000c5e93dac-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 07:01:57 crc kubenswrapper[4826]: I0129 07:01:57.097822 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76dd8a30-1e6e-4f89-beb8-5000c5e93dac-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 07:01:57 crc kubenswrapper[4826]: I0129 07:01:57.097836 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tw2lg\" (UniqueName: \"kubernetes.io/projected/76dd8a30-1e6e-4f89-beb8-5000c5e93dac-kube-api-access-tw2lg\") on node \"crc\" DevicePath \"\"" Jan 29 07:01:57 crc kubenswrapper[4826]: I0129 07:01:57.097849 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76dd8a30-1e6e-4f89-beb8-5000c5e93dac-config\") on node \"crc\" DevicePath \"\"" Jan 29 07:01:57 crc kubenswrapper[4826]: I0129 07:01:57.512918 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-47qwr" event={"ID":"82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a","Type":"ContainerStarted","Data":"e372534610518a0d89c9dff4aa3d8785b79ab67dd83f0a16b4a1b8ba01759085"} Jan 29 07:01:57 crc kubenswrapper[4826]: I0129 07:01:57.514814 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6cb545bd4c-47qwr" Jan 29 07:01:57 crc kubenswrapper[4826]: I0129 07:01:57.516108 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8555945b55-7g6ww" event={"ID":"76dd8a30-1e6e-4f89-beb8-5000c5e93dac","Type":"ContainerDied","Data":"f25f24780ab18b3d138150c36f4d930c5a7211ad3350f24724cb32d638596f73"} Jan 29 07:01:57 crc kubenswrapper[4826]: I0129 07:01:57.516138 4826 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8555945b55-7g6ww" Jan 29 07:01:57 crc kubenswrapper[4826]: I0129 07:01:57.516160 4826 scope.go:117] "RemoveContainer" containerID="9f8d9b3a89f8f0ee9e1d29687dfe5a1bb1ec786706fc564f93e82c7b62dfb7e9" Jan 29 07:01:57 crc kubenswrapper[4826]: I0129 07:01:57.544167 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6cb545bd4c-47qwr" podStartSLOduration=5.544143247 podStartE2EDuration="5.544143247s" podCreationTimestamp="2026-01-29 07:01:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:01:57.537631849 +0000 UTC m=+1101.399424918" watchObservedRunningTime="2026-01-29 07:01:57.544143247 +0000 UTC m=+1101.405936326" Jan 29 07:01:57 crc kubenswrapper[4826]: I0129 07:01:57.587223 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8555945b55-7g6ww"] Jan 29 07:01:57 crc kubenswrapper[4826]: I0129 07:01:57.593592 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8555945b55-7g6ww"] Jan 29 07:01:58 crc kubenswrapper[4826]: I0129 07:01:58.826241 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76dd8a30-1e6e-4f89-beb8-5000c5e93dac" path="/var/lib/kubelet/pods/76dd8a30-1e6e-4f89-beb8-5000c5e93dac/volumes" Jan 29 07:02:00 crc kubenswrapper[4826]: I0129 07:02:00.749628 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-7m444"] Jan 29 07:02:00 crc kubenswrapper[4826]: E0129 07:02:00.751392 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76dd8a30-1e6e-4f89-beb8-5000c5e93dac" containerName="init" Jan 29 07:02:00 crc kubenswrapper[4826]: I0129 07:02:00.751479 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="76dd8a30-1e6e-4f89-beb8-5000c5e93dac" containerName="init" Jan 29 07:02:00 crc 
kubenswrapper[4826]: E0129 07:02:00.751551 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8979d5de-6236-4b9e-a386-8befb05962dc" containerName="dnsmasq-dns" Jan 29 07:02:00 crc kubenswrapper[4826]: I0129 07:02:00.751607 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="8979d5de-6236-4b9e-a386-8befb05962dc" containerName="dnsmasq-dns" Jan 29 07:02:00 crc kubenswrapper[4826]: E0129 07:02:00.751679 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8979d5de-6236-4b9e-a386-8befb05962dc" containerName="init" Jan 29 07:02:00 crc kubenswrapper[4826]: I0129 07:02:00.751735 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="8979d5de-6236-4b9e-a386-8befb05962dc" containerName="init" Jan 29 07:02:00 crc kubenswrapper[4826]: I0129 07:02:00.751949 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="8979d5de-6236-4b9e-a386-8befb05962dc" containerName="dnsmasq-dns" Jan 29 07:02:00 crc kubenswrapper[4826]: I0129 07:02:00.752084 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="76dd8a30-1e6e-4f89-beb8-5000c5e93dac" containerName="init" Jan 29 07:02:00 crc kubenswrapper[4826]: I0129 07:02:00.752698 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-7m444" Jan 29 07:02:00 crc kubenswrapper[4826]: I0129 07:02:00.766263 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 29 07:02:00 crc kubenswrapper[4826]: I0129 07:02:00.780089 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-7m444"] Jan 29 07:02:00 crc kubenswrapper[4826]: I0129 07:02:00.865475 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m84js\" (UniqueName: \"kubernetes.io/projected/6c6477c0-6361-4904-9574-8aa028f87d8d-kube-api-access-m84js\") pod \"root-account-create-update-7m444\" (UID: \"6c6477c0-6361-4904-9574-8aa028f87d8d\") " pod="openstack/root-account-create-update-7m444" Jan 29 07:02:00 crc kubenswrapper[4826]: I0129 07:02:00.865625 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c6477c0-6361-4904-9574-8aa028f87d8d-operator-scripts\") pod \"root-account-create-update-7m444\" (UID: \"6c6477c0-6361-4904-9574-8aa028f87d8d\") " pod="openstack/root-account-create-update-7m444" Jan 29 07:02:00 crc kubenswrapper[4826]: I0129 07:02:00.966677 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m84js\" (UniqueName: \"kubernetes.io/projected/6c6477c0-6361-4904-9574-8aa028f87d8d-kube-api-access-m84js\") pod \"root-account-create-update-7m444\" (UID: \"6c6477c0-6361-4904-9574-8aa028f87d8d\") " pod="openstack/root-account-create-update-7m444" Jan 29 07:02:00 crc kubenswrapper[4826]: I0129 07:02:00.966862 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c6477c0-6361-4904-9574-8aa028f87d8d-operator-scripts\") pod \"root-account-create-update-7m444\" (UID: 
\"6c6477c0-6361-4904-9574-8aa028f87d8d\") " pod="openstack/root-account-create-update-7m444" Jan 29 07:02:00 crc kubenswrapper[4826]: I0129 07:02:00.967947 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c6477c0-6361-4904-9574-8aa028f87d8d-operator-scripts\") pod \"root-account-create-update-7m444\" (UID: \"6c6477c0-6361-4904-9574-8aa028f87d8d\") " pod="openstack/root-account-create-update-7m444" Jan 29 07:02:00 crc kubenswrapper[4826]: I0129 07:02:00.986858 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m84js\" (UniqueName: \"kubernetes.io/projected/6c6477c0-6361-4904-9574-8aa028f87d8d-kube-api-access-m84js\") pod \"root-account-create-update-7m444\" (UID: \"6c6477c0-6361-4904-9574-8aa028f87d8d\") " pod="openstack/root-account-create-update-7m444" Jan 29 07:02:01 crc kubenswrapper[4826]: I0129 07:02:01.070655 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-7m444" Jan 29 07:02:01 crc kubenswrapper[4826]: I0129 07:02:01.433389 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-7m444"] Jan 29 07:02:01 crc kubenswrapper[4826]: W0129 07:02:01.461422 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c6477c0_6361_4904_9574_8aa028f87d8d.slice/crio-f04887488af4d28176e0316ffbf4bb11d0923fea6f13d5eea48d09d5713090ee WatchSource:0}: Error finding container f04887488af4d28176e0316ffbf4bb11d0923fea6f13d5eea48d09d5713090ee: Status 404 returned error can't find the container with id f04887488af4d28176e0316ffbf4bb11d0923fea6f13d5eea48d09d5713090ee Jan 29 07:02:01 crc kubenswrapper[4826]: I0129 07:02:01.555035 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7m444" event={"ID":"6c6477c0-6361-4904-9574-8aa028f87d8d","Type":"ContainerStarted","Data":"f04887488af4d28176e0316ffbf4bb11d0923fea6f13d5eea48d09d5713090ee"} Jan 29 07:02:02 crc kubenswrapper[4826]: I0129 07:02:02.566109 4826 generic.go:334] "Generic (PLEG): container finished" podID="6c6477c0-6361-4904-9574-8aa028f87d8d" containerID="cb11fed4a94e5e8a15ea4adda925e0e38ecf713101a66a332eeccfc99758d0e6" exitCode=0 Jan 29 07:02:02 crc kubenswrapper[4826]: I0129 07:02:02.566187 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7m444" event={"ID":"6c6477c0-6361-4904-9574-8aa028f87d8d","Type":"ContainerDied","Data":"cb11fed4a94e5e8a15ea4adda925e0e38ecf713101a66a332eeccfc99758d0e6"} Jan 29 07:02:02 crc kubenswrapper[4826]: I0129 07:02:02.801731 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-etc-swift\") pod \"swift-storage-0\" (UID: \"85b51a36-8aa5-46e7-b8ab-a7e672c491d7\") " 
pod="openstack/swift-storage-0" Jan 29 07:02:02 crc kubenswrapper[4826]: E0129 07:02:02.802028 4826 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 07:02:02 crc kubenswrapper[4826]: E0129 07:02:02.802072 4826 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 07:02:02 crc kubenswrapper[4826]: E0129 07:02:02.802660 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-etc-swift podName:85b51a36-8aa5-46e7-b8ab-a7e672c491d7 nodeName:}" failed. No retries permitted until 2026-01-29 07:02:18.802634342 +0000 UTC m=+1122.664427411 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-etc-swift") pod "swift-storage-0" (UID: "85b51a36-8aa5-46e7-b8ab-a7e672c491d7") : configmap "swift-ring-files" not found Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.166513 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6cb545bd4c-47qwr" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.243957 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-vhzmk"] Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.244201 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-95f5f6995-vhzmk" podUID="68f776f5-5464-4259-9466-2ebe6093e673" containerName="dnsmasq-dns" containerID="cri-o://668d818a2ae9530c45edaf4c7de03be500561b99a866cd544011235c63fdb125" gracePeriod=10 Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.254447 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-r6wg8"] Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.255521 4826 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-r6wg8" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.286770 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-r6wg8"] Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.354498 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6fcd-account-create-update-fgtgt"] Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.358509 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6fcd-account-create-update-fgtgt" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.360387 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.368637 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6fcd-account-create-update-fgtgt"] Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.410886 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2621642-f600-4c3e-b641-9665ec72e213-operator-scripts\") pod \"keystone-db-create-r6wg8\" (UID: \"f2621642-f600-4c3e-b641-9665ec72e213\") " pod="openstack/keystone-db-create-r6wg8" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.411073 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj948\" (UniqueName: \"kubernetes.io/projected/f2621642-f600-4c3e-b641-9665ec72e213-kube-api-access-wj948\") pod \"keystone-db-create-r6wg8\" (UID: \"f2621642-f600-4c3e-b641-9665ec72e213\") " pod="openstack/keystone-db-create-r6wg8" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.512938 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj948\" (UniqueName: 
\"kubernetes.io/projected/f2621642-f600-4c3e-b641-9665ec72e213-kube-api-access-wj948\") pod \"keystone-db-create-r6wg8\" (UID: \"f2621642-f600-4c3e-b641-9665ec72e213\") " pod="openstack/keystone-db-create-r6wg8" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.513018 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2621642-f600-4c3e-b641-9665ec72e213-operator-scripts\") pod \"keystone-db-create-r6wg8\" (UID: \"f2621642-f600-4c3e-b641-9665ec72e213\") " pod="openstack/keystone-db-create-r6wg8" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.513091 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvb58\" (UniqueName: \"kubernetes.io/projected/c17bd300-9b3b-4b17-a668-ac2038915fc8-kube-api-access-rvb58\") pod \"keystone-6fcd-account-create-update-fgtgt\" (UID: \"c17bd300-9b3b-4b17-a668-ac2038915fc8\") " pod="openstack/keystone-6fcd-account-create-update-fgtgt" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.513114 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c17bd300-9b3b-4b17-a668-ac2038915fc8-operator-scripts\") pod \"keystone-6fcd-account-create-update-fgtgt\" (UID: \"c17bd300-9b3b-4b17-a668-ac2038915fc8\") " pod="openstack/keystone-6fcd-account-create-update-fgtgt" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.514050 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2621642-f600-4c3e-b641-9665ec72e213-operator-scripts\") pod \"keystone-db-create-r6wg8\" (UID: \"f2621642-f600-4c3e-b641-9665ec72e213\") " pod="openstack/keystone-db-create-r6wg8" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.540223 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wj948\" (UniqueName: \"kubernetes.io/projected/f2621642-f600-4c3e-b641-9665ec72e213-kube-api-access-wj948\") pod \"keystone-db-create-r6wg8\" (UID: \"f2621642-f600-4c3e-b641-9665ec72e213\") " pod="openstack/keystone-db-create-r6wg8" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.543772 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-d82nq"] Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.544727 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-d82nq" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.560256 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-d82nq"] Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.583618 4826 generic.go:334] "Generic (PLEG): container finished" podID="68f776f5-5464-4259-9466-2ebe6093e673" containerID="668d818a2ae9530c45edaf4c7de03be500561b99a866cd544011235c63fdb125" exitCode=0 Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.583707 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-vhzmk" event={"ID":"68f776f5-5464-4259-9466-2ebe6093e673","Type":"ContainerDied","Data":"668d818a2ae9530c45edaf4c7de03be500561b99a866cd544011235c63fdb125"} Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.586458 4826 generic.go:334] "Generic (PLEG): container finished" podID="34dddfc9-db4b-48c0-9ec0-6eceb641aa26" containerID="41887c449ab701a5102ed17452b957ff0bc72de7e1198e0897cd0855d871173f" exitCode=0 Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.586642 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nlxt2" event={"ID":"34dddfc9-db4b-48c0-9ec0-6eceb641aa26","Type":"ContainerDied","Data":"41887c449ab701a5102ed17452b957ff0bc72de7e1198e0897cd0855d871173f"} Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.606021 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-r6wg8" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.615020 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34ffc130-bd9e-4f9c-9d02-7f3cf93df9d2-operator-scripts\") pod \"placement-db-create-d82nq\" (UID: \"34ffc130-bd9e-4f9c-9d02-7f3cf93df9d2\") " pod="openstack/placement-db-create-d82nq" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.615090 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mkgx\" (UniqueName: \"kubernetes.io/projected/34ffc130-bd9e-4f9c-9d02-7f3cf93df9d2-kube-api-access-4mkgx\") pod \"placement-db-create-d82nq\" (UID: \"34ffc130-bd9e-4f9c-9d02-7f3cf93df9d2\") " pod="openstack/placement-db-create-d82nq" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.615190 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvb58\" (UniqueName: \"kubernetes.io/projected/c17bd300-9b3b-4b17-a668-ac2038915fc8-kube-api-access-rvb58\") pod \"keystone-6fcd-account-create-update-fgtgt\" (UID: \"c17bd300-9b3b-4b17-a668-ac2038915fc8\") " pod="openstack/keystone-6fcd-account-create-update-fgtgt" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.615289 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c17bd300-9b3b-4b17-a668-ac2038915fc8-operator-scripts\") pod \"keystone-6fcd-account-create-update-fgtgt\" (UID: \"c17bd300-9b3b-4b17-a668-ac2038915fc8\") " pod="openstack/keystone-6fcd-account-create-update-fgtgt" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.616234 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c17bd300-9b3b-4b17-a668-ac2038915fc8-operator-scripts\") pod 
\"keystone-6fcd-account-create-update-fgtgt\" (UID: \"c17bd300-9b3b-4b17-a668-ac2038915fc8\") " pod="openstack/keystone-6fcd-account-create-update-fgtgt" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.639218 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvb58\" (UniqueName: \"kubernetes.io/projected/c17bd300-9b3b-4b17-a668-ac2038915fc8-kube-api-access-rvb58\") pod \"keystone-6fcd-account-create-update-fgtgt\" (UID: \"c17bd300-9b3b-4b17-a668-ac2038915fc8\") " pod="openstack/keystone-6fcd-account-create-update-fgtgt" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.664189 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-3ce2-account-create-update-zgdts"] Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.665626 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3ce2-account-create-update-zgdts" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.676115 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.682634 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3ce2-account-create-update-zgdts"] Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.713513 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6fcd-account-create-update-fgtgt" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.716557 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34ffc130-bd9e-4f9c-9d02-7f3cf93df9d2-operator-scripts\") pod \"placement-db-create-d82nq\" (UID: \"34ffc130-bd9e-4f9c-9d02-7f3cf93df9d2\") " pod="openstack/placement-db-create-d82nq" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.716613 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mkgx\" (UniqueName: \"kubernetes.io/projected/34ffc130-bd9e-4f9c-9d02-7f3cf93df9d2-kube-api-access-4mkgx\") pod \"placement-db-create-d82nq\" (UID: \"34ffc130-bd9e-4f9c-9d02-7f3cf93df9d2\") " pod="openstack/placement-db-create-d82nq" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.717472 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34ffc130-bd9e-4f9c-9d02-7f3cf93df9d2-operator-scripts\") pod \"placement-db-create-d82nq\" (UID: \"34ffc130-bd9e-4f9c-9d02-7f3cf93df9d2\") " pod="openstack/placement-db-create-d82nq" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.757283 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mkgx\" (UniqueName: \"kubernetes.io/projected/34ffc130-bd9e-4f9c-9d02-7f3cf93df9d2-kube-api-access-4mkgx\") pod \"placement-db-create-d82nq\" (UID: \"34ffc130-bd9e-4f9c-9d02-7f3cf93df9d2\") " pod="openstack/placement-db-create-d82nq" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.788539 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-vhzmk" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.819227 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6c6f92b-fa9b-42bc-b43a-f4ee97d74c97-operator-scripts\") pod \"placement-3ce2-account-create-update-zgdts\" (UID: \"f6c6f92b-fa9b-42bc-b43a-f4ee97d74c97\") " pod="openstack/placement-3ce2-account-create-update-zgdts" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.819344 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nbdb\" (UniqueName: \"kubernetes.io/projected/f6c6f92b-fa9b-42bc-b43a-f4ee97d74c97-kube-api-access-5nbdb\") pod \"placement-3ce2-account-create-update-zgdts\" (UID: \"f6c6f92b-fa9b-42bc-b43a-f4ee97d74c97\") " pod="openstack/placement-3ce2-account-create-update-zgdts" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.868360 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-tjfkk"] Jan 29 07:02:03 crc kubenswrapper[4826]: E0129 07:02:03.868861 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68f776f5-5464-4259-9466-2ebe6093e673" containerName="dnsmasq-dns" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.868876 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="68f776f5-5464-4259-9466-2ebe6093e673" containerName="dnsmasq-dns" Jan 29 07:02:03 crc kubenswrapper[4826]: E0129 07:02:03.868908 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68f776f5-5464-4259-9466-2ebe6093e673" containerName="init" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.868914 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="68f776f5-5464-4259-9466-2ebe6093e673" containerName="init" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.869083 4826 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="68f776f5-5464-4259-9466-2ebe6093e673" containerName="dnsmasq-dns" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.869766 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tjfkk" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.884293 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-d82nq" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.896722 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-tjfkk"] Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.921275 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68f776f5-5464-4259-9466-2ebe6093e673-dns-svc\") pod \"68f776f5-5464-4259-9466-2ebe6093e673\" (UID: \"68f776f5-5464-4259-9466-2ebe6093e673\") " Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.921971 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4mwh\" (UniqueName: \"kubernetes.io/projected/68f776f5-5464-4259-9466-2ebe6093e673-kube-api-access-s4mwh\") pod \"68f776f5-5464-4259-9466-2ebe6093e673\" (UID: \"68f776f5-5464-4259-9466-2ebe6093e673\") " Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.922051 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68f776f5-5464-4259-9466-2ebe6093e673-config\") pod \"68f776f5-5464-4259-9466-2ebe6093e673\" (UID: \"68f776f5-5464-4259-9466-2ebe6093e673\") " Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.922487 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6c6f92b-fa9b-42bc-b43a-f4ee97d74c97-operator-scripts\") pod \"placement-3ce2-account-create-update-zgdts\" (UID: 
\"f6c6f92b-fa9b-42bc-b43a-f4ee97d74c97\") " pod="openstack/placement-3ce2-account-create-update-zgdts" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.922659 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nbdb\" (UniqueName: \"kubernetes.io/projected/f6c6f92b-fa9b-42bc-b43a-f4ee97d74c97-kube-api-access-5nbdb\") pod \"placement-3ce2-account-create-update-zgdts\" (UID: \"f6c6f92b-fa9b-42bc-b43a-f4ee97d74c97\") " pod="openstack/placement-3ce2-account-create-update-zgdts" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.924547 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6c6f92b-fa9b-42bc-b43a-f4ee97d74c97-operator-scripts\") pod \"placement-3ce2-account-create-update-zgdts\" (UID: \"f6c6f92b-fa9b-42bc-b43a-f4ee97d74c97\") " pod="openstack/placement-3ce2-account-create-update-zgdts" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.933214 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68f776f5-5464-4259-9466-2ebe6093e673-kube-api-access-s4mwh" (OuterVolumeSpecName: "kube-api-access-s4mwh") pod "68f776f5-5464-4259-9466-2ebe6093e673" (UID: "68f776f5-5464-4259-9466-2ebe6093e673"). InnerVolumeSpecName "kube-api-access-s4mwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.959615 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nbdb\" (UniqueName: \"kubernetes.io/projected/f6c6f92b-fa9b-42bc-b43a-f4ee97d74c97-kube-api-access-5nbdb\") pod \"placement-3ce2-account-create-update-zgdts\" (UID: \"f6c6f92b-fa9b-42bc-b43a-f4ee97d74c97\") " pod="openstack/placement-3ce2-account-create-update-zgdts" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.976870 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-97ad-account-create-update-xx9t5"] Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.983958 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-97ad-account-create-update-xx9t5"] Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.988457 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-97ad-account-create-update-xx9t5" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.991466 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 29 07:02:03 crc kubenswrapper[4826]: I0129 07:02:03.992980 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68f776f5-5464-4259-9466-2ebe6093e673-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "68f776f5-5464-4259-9466-2ebe6093e673" (UID: "68f776f5-5464-4259-9466-2ebe6093e673"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.007410 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68f776f5-5464-4259-9466-2ebe6093e673-config" (OuterVolumeSpecName: "config") pod "68f776f5-5464-4259-9466-2ebe6093e673" (UID: "68f776f5-5464-4259-9466-2ebe6093e673"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.023914 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6cs7\" (UniqueName: \"kubernetes.io/projected/30cb6345-09e5-421e-8cac-7223ff25731a-kube-api-access-l6cs7\") pod \"glance-db-create-tjfkk\" (UID: \"30cb6345-09e5-421e-8cac-7223ff25731a\") " pod="openstack/glance-db-create-tjfkk" Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.024033 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30cb6345-09e5-421e-8cac-7223ff25731a-operator-scripts\") pod \"glance-db-create-tjfkk\" (UID: \"30cb6345-09e5-421e-8cac-7223ff25731a\") " pod="openstack/glance-db-create-tjfkk" Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.024395 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68f776f5-5464-4259-9466-2ebe6093e673-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.024408 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4mwh\" (UniqueName: \"kubernetes.io/projected/68f776f5-5464-4259-9466-2ebe6093e673-kube-api-access-s4mwh\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.024427 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68f776f5-5464-4259-9466-2ebe6093e673-config\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.057629 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3ce2-account-create-update-zgdts" Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.062943 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-7m444" Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.125647 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m84js\" (UniqueName: \"kubernetes.io/projected/6c6477c0-6361-4904-9574-8aa028f87d8d-kube-api-access-m84js\") pod \"6c6477c0-6361-4904-9574-8aa028f87d8d\" (UID: \"6c6477c0-6361-4904-9574-8aa028f87d8d\") " Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.125880 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c6477c0-6361-4904-9574-8aa028f87d8d-operator-scripts\") pod \"6c6477c0-6361-4904-9574-8aa028f87d8d\" (UID: \"6c6477c0-6361-4904-9574-8aa028f87d8d\") " Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.126410 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjj9v\" (UniqueName: \"kubernetes.io/projected/bcdb469f-8122-4cba-a7e1-9e2e9e829263-kube-api-access-sjj9v\") pod \"glance-97ad-account-create-update-xx9t5\" (UID: \"bcdb469f-8122-4cba-a7e1-9e2e9e829263\") " pod="openstack/glance-97ad-account-create-update-xx9t5" Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.126611 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30cb6345-09e5-421e-8cac-7223ff25731a-operator-scripts\") pod \"glance-db-create-tjfkk\" (UID: \"30cb6345-09e5-421e-8cac-7223ff25731a\") " pod="openstack/glance-db-create-tjfkk" Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.127215 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30cb6345-09e5-421e-8cac-7223ff25731a-operator-scripts\") pod \"glance-db-create-tjfkk\" (UID: \"30cb6345-09e5-421e-8cac-7223ff25731a\") " pod="openstack/glance-db-create-tjfkk" 
Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.127538 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c6477c0-6361-4904-9574-8aa028f87d8d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6c6477c0-6361-4904-9574-8aa028f87d8d" (UID: "6c6477c0-6361-4904-9574-8aa028f87d8d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.128744 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcdb469f-8122-4cba-a7e1-9e2e9e829263-operator-scripts\") pod \"glance-97ad-account-create-update-xx9t5\" (UID: \"bcdb469f-8122-4cba-a7e1-9e2e9e829263\") " pod="openstack/glance-97ad-account-create-update-xx9t5" Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.128925 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6cs7\" (UniqueName: \"kubernetes.io/projected/30cb6345-09e5-421e-8cac-7223ff25731a-kube-api-access-l6cs7\") pod \"glance-db-create-tjfkk\" (UID: \"30cb6345-09e5-421e-8cac-7223ff25731a\") " pod="openstack/glance-db-create-tjfkk" Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.129102 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c6477c0-6361-4904-9574-8aa028f87d8d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.171532 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c6477c0-6361-4904-9574-8aa028f87d8d-kube-api-access-m84js" (OuterVolumeSpecName: "kube-api-access-m84js") pod "6c6477c0-6361-4904-9574-8aa028f87d8d" (UID: "6c6477c0-6361-4904-9574-8aa028f87d8d"). InnerVolumeSpecName "kube-api-access-m84js". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.192198 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6cs7\" (UniqueName: \"kubernetes.io/projected/30cb6345-09e5-421e-8cac-7223ff25731a-kube-api-access-l6cs7\") pod \"glance-db-create-tjfkk\" (UID: \"30cb6345-09e5-421e-8cac-7223ff25731a\") " pod="openstack/glance-db-create-tjfkk" Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.196316 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tjfkk" Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.230489 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjj9v\" (UniqueName: \"kubernetes.io/projected/bcdb469f-8122-4cba-a7e1-9e2e9e829263-kube-api-access-sjj9v\") pod \"glance-97ad-account-create-update-xx9t5\" (UID: \"bcdb469f-8122-4cba-a7e1-9e2e9e829263\") " pod="openstack/glance-97ad-account-create-update-xx9t5" Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.230633 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcdb469f-8122-4cba-a7e1-9e2e9e829263-operator-scripts\") pod \"glance-97ad-account-create-update-xx9t5\" (UID: \"bcdb469f-8122-4cba-a7e1-9e2e9e829263\") " pod="openstack/glance-97ad-account-create-update-xx9t5" Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.230714 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m84js\" (UniqueName: \"kubernetes.io/projected/6c6477c0-6361-4904-9574-8aa028f87d8d-kube-api-access-m84js\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.232103 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcdb469f-8122-4cba-a7e1-9e2e9e829263-operator-scripts\") pod 
\"glance-97ad-account-create-update-xx9t5\" (UID: \"bcdb469f-8122-4cba-a7e1-9e2e9e829263\") " pod="openstack/glance-97ad-account-create-update-xx9t5" Jan 29 07:02:04 crc kubenswrapper[4826]: W0129 07:02:04.246843 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2621642_f600_4c3e_b641_9665ec72e213.slice/crio-5b08aee47f5c3e932b1154e96d9d37b93263512bc7053ffa54b435bd8f63261c WatchSource:0}: Error finding container 5b08aee47f5c3e932b1154e96d9d37b93263512bc7053ffa54b435bd8f63261c: Status 404 returned error can't find the container with id 5b08aee47f5c3e932b1154e96d9d37b93263512bc7053ffa54b435bd8f63261c Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.250048 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-r6wg8"] Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.255688 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjj9v\" (UniqueName: \"kubernetes.io/projected/bcdb469f-8122-4cba-a7e1-9e2e9e829263-kube-api-access-sjj9v\") pod \"glance-97ad-account-create-update-xx9t5\" (UID: \"bcdb469f-8122-4cba-a7e1-9e2e9e829263\") " pod="openstack/glance-97ad-account-create-update-xx9t5" Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.309009 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-97ad-account-create-update-xx9t5" Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.369522 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6fcd-account-create-update-fgtgt"] Jan 29 07:02:04 crc kubenswrapper[4826]: W0129 07:02:04.387634 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc17bd300_9b3b_4b17_a668_ac2038915fc8.slice/crio-48ecc676f2b9876c91450948d240b40dae33ed4740bf8b0295072a6a3e5c83a9 WatchSource:0}: Error finding container 48ecc676f2b9876c91450948d240b40dae33ed4740bf8b0295072a6a3e5c83a9: Status 404 returned error can't find the container with id 48ecc676f2b9876c91450948d240b40dae33ed4740bf8b0295072a6a3e5c83a9 Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.447075 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3ce2-account-create-update-zgdts"] Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.495107 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-d82nq"] Jan 29 07:02:04 crc kubenswrapper[4826]: W0129 07:02:04.532599 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34ffc130_bd9e_4f9c_9d02_7f3cf93df9d2.slice/crio-0e3fdce4df7db01219787d2fe880c5d47af20c29701ec1024d0a7190ce1116fe WatchSource:0}: Error finding container 0e3fdce4df7db01219787d2fe880c5d47af20c29701ec1024d0a7190ce1116fe: Status 404 returned error can't find the container with id 0e3fdce4df7db01219787d2fe880c5d47af20c29701ec1024d0a7190ce1116fe Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.597023 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-vhzmk" event={"ID":"68f776f5-5464-4259-9466-2ebe6093e673","Type":"ContainerDied","Data":"aa69480dcef889206f4c72fd660e436b6a371db29ad9a516de96efe0206220c9"} Jan 29 
07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.597067 4826 scope.go:117] "RemoveContainer" containerID="668d818a2ae9530c45edaf4c7de03be500561b99a866cd544011235c63fdb125" Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.597252 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-vhzmk" Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.609285 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-r6wg8" event={"ID":"f2621642-f600-4c3e-b641-9665ec72e213","Type":"ContainerStarted","Data":"f36d662feb9b70757b17b28dd352459437ead99aa4d2976e18151a94e375a7af"} Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.609379 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-r6wg8" event={"ID":"f2621642-f600-4c3e-b641-9665ec72e213","Type":"ContainerStarted","Data":"5b08aee47f5c3e932b1154e96d9d37b93263512bc7053ffa54b435bd8f63261c"} Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.613750 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7m444" event={"ID":"6c6477c0-6361-4904-9574-8aa028f87d8d","Type":"ContainerDied","Data":"f04887488af4d28176e0316ffbf4bb11d0923fea6f13d5eea48d09d5713090ee"} Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.613783 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f04887488af4d28176e0316ffbf4bb11d0923fea6f13d5eea48d09d5713090ee" Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.613840 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-7m444" Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.621981 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-d82nq" event={"ID":"34ffc130-bd9e-4f9c-9d02-7f3cf93df9d2","Type":"ContainerStarted","Data":"0e3fdce4df7db01219787d2fe880c5d47af20c29701ec1024d0a7190ce1116fe"} Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.633020 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3ce2-account-create-update-zgdts" event={"ID":"f6c6f92b-fa9b-42bc-b43a-f4ee97d74c97","Type":"ContainerStarted","Data":"0f75b29fa3036ad911b9efda9d9c5b04b748c7f0b9d3c8a069306e302f772941"} Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.639452 4826 scope.go:117] "RemoveContainer" containerID="cb011f49261f0d8dd2dc4149fa9657613140e4e34877de153bcc4fcadf64a1ac" Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.649566 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-vhzmk"] Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.650022 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6fcd-account-create-update-fgtgt" event={"ID":"c17bd300-9b3b-4b17-a668-ac2038915fc8","Type":"ContainerStarted","Data":"48ecc676f2b9876c91450948d240b40dae33ed4740bf8b0295072a6a3e5c83a9"} Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.659564 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-vhzmk"] Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.713092 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-tjfkk"] Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.781573 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-4zwtm" podUID="bbbbec70-be7f-4a31-9f97-76d5c78b1cd0" containerName="ovn-controller" probeResult="failure" output=< Jan 29 07:02:04 crc 
kubenswrapper[4826]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 29 07:02:04 crc kubenswrapper[4826]: > Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.830019 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68f776f5-5464-4259-9466-2ebe6093e673" path="/var/lib/kubelet/pods/68f776f5-5464-4259-9466-2ebe6093e673/volumes" Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.832335 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-97ad-account-create-update-xx9t5"] Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.899409 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-nlxt2" Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.946379 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-scripts\") pod \"34dddfc9-db4b-48c0-9ec0-6eceb641aa26\" (UID: \"34dddfc9-db4b-48c0-9ec0-6eceb641aa26\") " Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.946465 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-combined-ca-bundle\") pod \"34dddfc9-db4b-48c0-9ec0-6eceb641aa26\" (UID: \"34dddfc9-db4b-48c0-9ec0-6eceb641aa26\") " Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.946517 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-dispersionconf\") pod \"34dddfc9-db4b-48c0-9ec0-6eceb641aa26\" (UID: \"34dddfc9-db4b-48c0-9ec0-6eceb641aa26\") " Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.946549 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: 
\"kubernetes.io/secret/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-swiftconf\") pod \"34dddfc9-db4b-48c0-9ec0-6eceb641aa26\" (UID: \"34dddfc9-db4b-48c0-9ec0-6eceb641aa26\") " Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.946575 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-etc-swift\") pod \"34dddfc9-db4b-48c0-9ec0-6eceb641aa26\" (UID: \"34dddfc9-db4b-48c0-9ec0-6eceb641aa26\") " Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.946625 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjvxj\" (UniqueName: \"kubernetes.io/projected/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-kube-api-access-bjvxj\") pod \"34dddfc9-db4b-48c0-9ec0-6eceb641aa26\" (UID: \"34dddfc9-db4b-48c0-9ec0-6eceb641aa26\") " Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.946657 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-ring-data-devices\") pod \"34dddfc9-db4b-48c0-9ec0-6eceb641aa26\" (UID: \"34dddfc9-db4b-48c0-9ec0-6eceb641aa26\") " Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.947879 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "34dddfc9-db4b-48c0-9ec0-6eceb641aa26" (UID: "34dddfc9-db4b-48c0-9ec0-6eceb641aa26"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.948505 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "34dddfc9-db4b-48c0-9ec0-6eceb641aa26" (UID: "34dddfc9-db4b-48c0-9ec0-6eceb641aa26"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.951764 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-kube-api-access-bjvxj" (OuterVolumeSpecName: "kube-api-access-bjvxj") pod "34dddfc9-db4b-48c0-9ec0-6eceb641aa26" (UID: "34dddfc9-db4b-48c0-9ec0-6eceb641aa26"). InnerVolumeSpecName "kube-api-access-bjvxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.953497 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "34dddfc9-db4b-48c0-9ec0-6eceb641aa26" (UID: "34dddfc9-db4b-48c0-9ec0-6eceb641aa26"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.975835 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-scripts" (OuterVolumeSpecName: "scripts") pod "34dddfc9-db4b-48c0-9ec0-6eceb641aa26" (UID: "34dddfc9-db4b-48c0-9ec0-6eceb641aa26"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:02:04 crc kubenswrapper[4826]: I0129 07:02:04.997193 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "34dddfc9-db4b-48c0-9ec0-6eceb641aa26" (UID: "34dddfc9-db4b-48c0-9ec0-6eceb641aa26"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:02:05 crc kubenswrapper[4826]: I0129 07:02:05.000142 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34dddfc9-db4b-48c0-9ec0-6eceb641aa26" (UID: "34dddfc9-db4b-48c0-9ec0-6eceb641aa26"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:02:05 crc kubenswrapper[4826]: I0129 07:02:05.048804 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:05 crc kubenswrapper[4826]: I0129 07:02:05.048863 4826 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:05 crc kubenswrapper[4826]: I0129 07:02:05.048876 4826 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:05 crc kubenswrapper[4826]: I0129 07:02:05.048889 4826 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:05 crc 
kubenswrapper[4826]: I0129 07:02:05.048902 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjvxj\" (UniqueName: \"kubernetes.io/projected/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-kube-api-access-bjvxj\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:05 crc kubenswrapper[4826]: I0129 07:02:05.048915 4826 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:05 crc kubenswrapper[4826]: I0129 07:02:05.048929 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/34dddfc9-db4b-48c0-9ec0-6eceb641aa26-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:05 crc kubenswrapper[4826]: I0129 07:02:05.663764 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tjfkk" event={"ID":"30cb6345-09e5-421e-8cac-7223ff25731a","Type":"ContainerStarted","Data":"abd209ee5b4766c1445380ec8af0704045df0a1a4d72c3edf9a087f6a1dc1dc5"} Jan 29 07:02:05 crc kubenswrapper[4826]: I0129 07:02:05.666132 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-97ad-account-create-update-xx9t5" event={"ID":"bcdb469f-8122-4cba-a7e1-9e2e9e829263","Type":"ContainerStarted","Data":"9cac120bfe3aecbcab57f0603594c4db07b163bd22743c5da977483e167644be"} Jan 29 07:02:05 crc kubenswrapper[4826]: I0129 07:02:05.672934 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-nlxt2" Jan 29 07:02:05 crc kubenswrapper[4826]: I0129 07:02:05.677462 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nlxt2" event={"ID":"34dddfc9-db4b-48c0-9ec0-6eceb641aa26","Type":"ContainerDied","Data":"d66e9eb5b12839b686adf0c04ad120289cd318683a00b6f60f5209ab099f7f09"} Jan 29 07:02:05 crc kubenswrapper[4826]: I0129 07:02:05.677511 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d66e9eb5b12839b686adf0c04ad120289cd318683a00b6f60f5209ab099f7f09" Jan 29 07:02:05 crc kubenswrapper[4826]: I0129 07:02:05.716967 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-r6wg8" podStartSLOduration=2.7169376290000002 podStartE2EDuration="2.716937629s" podCreationTimestamp="2026-01-29 07:02:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:02:05.70109849 +0000 UTC m=+1109.562891599" watchObservedRunningTime="2026-01-29 07:02:05.716937629 +0000 UTC m=+1109.578730738" Jan 29 07:02:06 crc kubenswrapper[4826]: I0129 07:02:06.683418 4826 generic.go:334] "Generic (PLEG): container finished" podID="f2621642-f600-4c3e-b641-9665ec72e213" containerID="f36d662feb9b70757b17b28dd352459437ead99aa4d2976e18151a94e375a7af" exitCode=0 Jan 29 07:02:06 crc kubenswrapper[4826]: I0129 07:02:06.683462 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-r6wg8" event={"ID":"f2621642-f600-4c3e-b641-9665ec72e213","Type":"ContainerDied","Data":"f36d662feb9b70757b17b28dd352459437ead99aa4d2976e18151a94e375a7af"} Jan 29 07:02:06 crc kubenswrapper[4826]: I0129 07:02:06.687655 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9","Type":"ContainerStarted","Data":"566e075f705cbbe67baf7d3552d44286bc465cf556a9a4ef3b74748293dbc37a"} Jan 29 07:02:06 crc kubenswrapper[4826]: I0129 07:02:06.690459 4826 generic.go:334] "Generic (PLEG): container finished" podID="34ffc130-bd9e-4f9c-9d02-7f3cf93df9d2" containerID="bf86fb4ad75b42745b9024c2a242dcc8628452687b71ca3b6937b01bb71646c9" exitCode=0 Jan 29 07:02:06 crc kubenswrapper[4826]: I0129 07:02:06.690563 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-d82nq" event={"ID":"34ffc130-bd9e-4f9c-9d02-7f3cf93df9d2","Type":"ContainerDied","Data":"bf86fb4ad75b42745b9024c2a242dcc8628452687b71ca3b6937b01bb71646c9"} Jan 29 07:02:06 crc kubenswrapper[4826]: I0129 07:02:06.693594 4826 generic.go:334] "Generic (PLEG): container finished" podID="f6c6f92b-fa9b-42bc-b43a-f4ee97d74c97" containerID="e5181413a8e5faf280cefd33bbfd96ed69dcab6f05230cff35710cfaa85a847f" exitCode=0 Jan 29 07:02:06 crc kubenswrapper[4826]: I0129 07:02:06.693723 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3ce2-account-create-update-zgdts" event={"ID":"f6c6f92b-fa9b-42bc-b43a-f4ee97d74c97","Type":"ContainerDied","Data":"e5181413a8e5faf280cefd33bbfd96ed69dcab6f05230cff35710cfaa85a847f"} Jan 29 07:02:06 crc kubenswrapper[4826]: I0129 07:02:06.707519 4826 generic.go:334] "Generic (PLEG): container finished" podID="c17bd300-9b3b-4b17-a668-ac2038915fc8" containerID="b8c90ffd14ba63fdc1141218b4b95071de44fe71869b70a72ff68b6e207fdeab" exitCode=0 Jan 29 07:02:06 crc kubenswrapper[4826]: I0129 07:02:06.707656 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6fcd-account-create-update-fgtgt" event={"ID":"c17bd300-9b3b-4b17-a668-ac2038915fc8","Type":"ContainerDied","Data":"b8c90ffd14ba63fdc1141218b4b95071de44fe71869b70a72ff68b6e207fdeab"} Jan 29 07:02:06 crc kubenswrapper[4826]: I0129 07:02:06.712391 4826 generic.go:334] "Generic (PLEG): container 
finished" podID="30cb6345-09e5-421e-8cac-7223ff25731a" containerID="3951b0758c239a5b0edda6fe5d77c334c3b619b6fe73c93f7cc738f9168885ad" exitCode=0 Jan 29 07:02:06 crc kubenswrapper[4826]: I0129 07:02:06.712492 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tjfkk" event={"ID":"30cb6345-09e5-421e-8cac-7223ff25731a","Type":"ContainerDied","Data":"3951b0758c239a5b0edda6fe5d77c334c3b619b6fe73c93f7cc738f9168885ad"} Jan 29 07:02:06 crc kubenswrapper[4826]: I0129 07:02:06.718780 4826 generic.go:334] "Generic (PLEG): container finished" podID="bcdb469f-8122-4cba-a7e1-9e2e9e829263" containerID="c064964aba665f0e4b9b54a6929bd5f3e07cb3ed36b2f733b8bc30cf4991d621" exitCode=0 Jan 29 07:02:06 crc kubenswrapper[4826]: I0129 07:02:06.718849 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-97ad-account-create-update-xx9t5" event={"ID":"bcdb469f-8122-4cba-a7e1-9e2e9e829263","Type":"ContainerDied","Data":"c064964aba665f0e4b9b54a6929bd5f3e07cb3ed36b2f733b8bc30cf4991d621"} Jan 29 07:02:06 crc kubenswrapper[4826]: I0129 07:02:06.756017 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=43.959743427 podStartE2EDuration="57.755994344s" podCreationTimestamp="2026-01-29 07:01:09 +0000 UTC" firstStartedPulling="2026-01-29 07:01:20.601716656 +0000 UTC m=+1064.463509735" lastFinishedPulling="2026-01-29 07:01:34.397967563 +0000 UTC m=+1078.259760652" observedRunningTime="2026-01-29 07:02:06.750841231 +0000 UTC m=+1110.612634300" watchObservedRunningTime="2026-01-29 07:02:06.755994344 +0000 UTC m=+1110.617787433" Jan 29 07:02:07 crc kubenswrapper[4826]: I0129 07:02:07.050008 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-7m444"] Jan 29 07:02:07 crc kubenswrapper[4826]: I0129 07:02:07.058751 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-7m444"] Jan 29 07:02:07 
crc kubenswrapper[4826]: I0129 07:02:07.732968 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"af857248-0a50-4850-93dd-c8c4e5d8e5ea","Type":"ContainerStarted","Data":"fbe7c7ae52eeb239791fe0ed601efbb5177d39b8ea9d0057b13b2f3c5c72b647"} Jan 29 07:02:07 crc kubenswrapper[4826]: I0129 07:02:07.783027 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=43.922924143 podStartE2EDuration="56.782987807s" podCreationTimestamp="2026-01-29 07:01:11 +0000 UTC" firstStartedPulling="2026-01-29 07:01:21.539220783 +0000 UTC m=+1065.401013852" lastFinishedPulling="2026-01-29 07:01:34.399284447 +0000 UTC m=+1078.261077516" observedRunningTime="2026-01-29 07:02:07.765584568 +0000 UTC m=+1111.627377727" watchObservedRunningTime="2026-01-29 07:02:07.782987807 +0000 UTC m=+1111.644780916" Jan 29 07:02:07 crc kubenswrapper[4826]: I0129 07:02:07.919075 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 29 07:02:07 crc kubenswrapper[4826]: E0129 07:02:07.930181 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c6477c0-6361-4904-9574-8aa028f87d8d" containerName="mariadb-account-create-update" Jan 29 07:02:07 crc kubenswrapper[4826]: I0129 07:02:07.930210 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c6477c0-6361-4904-9574-8aa028f87d8d" containerName="mariadb-account-create-update" Jan 29 07:02:07 crc kubenswrapper[4826]: E0129 07:02:07.930235 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34dddfc9-db4b-48c0-9ec0-6eceb641aa26" containerName="swift-ring-rebalance" Jan 29 07:02:07 crc kubenswrapper[4826]: I0129 07:02:07.930242 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="34dddfc9-db4b-48c0-9ec0-6eceb641aa26" containerName="swift-ring-rebalance" Jan 29 07:02:07 crc kubenswrapper[4826]: I0129 07:02:07.930715 4826 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="34dddfc9-db4b-48c0-9ec0-6eceb641aa26" containerName="swift-ring-rebalance" Jan 29 07:02:07 crc kubenswrapper[4826]: I0129 07:02:07.930738 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c6477c0-6361-4904-9574-8aa028f87d8d" containerName="mariadb-account-create-update" Jan 29 07:02:07 crc kubenswrapper[4826]: I0129 07:02:07.945489 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 29 07:02:07 crc kubenswrapper[4826]: I0129 07:02:07.954393 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 29 07:02:07 crc kubenswrapper[4826]: I0129 07:02:07.955416 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 29 07:02:07 crc kubenswrapper[4826]: I0129 07:02:07.970184 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-s9vvx" Jan 29 07:02:07 crc kubenswrapper[4826]: I0129 07:02:07.991823 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:07.997792 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.021636 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbxlf\" (UniqueName: \"kubernetes.io/projected/17dd6ec1-84fb-4bb3-8700-c8691f059937-kube-api-access-cbxlf\") pod \"ovn-northd-0\" (UID: \"17dd6ec1-84fb-4bb3-8700-c8691f059937\") " pod="openstack/ovn-northd-0" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.021793 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17dd6ec1-84fb-4bb3-8700-c8691f059937-scripts\") pod \"ovn-northd-0\" (UID: 
\"17dd6ec1-84fb-4bb3-8700-c8691f059937\") " pod="openstack/ovn-northd-0" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.021833 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/17dd6ec1-84fb-4bb3-8700-c8691f059937-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"17dd6ec1-84fb-4bb3-8700-c8691f059937\") " pod="openstack/ovn-northd-0" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.021915 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/17dd6ec1-84fb-4bb3-8700-c8691f059937-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"17dd6ec1-84fb-4bb3-8700-c8691f059937\") " pod="openstack/ovn-northd-0" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.021937 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/17dd6ec1-84fb-4bb3-8700-c8691f059937-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"17dd6ec1-84fb-4bb3-8700-c8691f059937\") " pod="openstack/ovn-northd-0" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.021983 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17dd6ec1-84fb-4bb3-8700-c8691f059937-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"17dd6ec1-84fb-4bb3-8700-c8691f059937\") " pod="openstack/ovn-northd-0" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.022094 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17dd6ec1-84fb-4bb3-8700-c8691f059937-config\") pod \"ovn-northd-0\" (UID: \"17dd6ec1-84fb-4bb3-8700-c8691f059937\") " pod="openstack/ovn-northd-0" Jan 29 07:02:08 crc kubenswrapper[4826]: 
I0129 07:02:08.125430 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17dd6ec1-84fb-4bb3-8700-c8691f059937-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"17dd6ec1-84fb-4bb3-8700-c8691f059937\") " pod="openstack/ovn-northd-0" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.125733 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17dd6ec1-84fb-4bb3-8700-c8691f059937-config\") pod \"ovn-northd-0\" (UID: \"17dd6ec1-84fb-4bb3-8700-c8691f059937\") " pod="openstack/ovn-northd-0" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.125776 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbxlf\" (UniqueName: \"kubernetes.io/projected/17dd6ec1-84fb-4bb3-8700-c8691f059937-kube-api-access-cbxlf\") pod \"ovn-northd-0\" (UID: \"17dd6ec1-84fb-4bb3-8700-c8691f059937\") " pod="openstack/ovn-northd-0" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.125822 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17dd6ec1-84fb-4bb3-8700-c8691f059937-scripts\") pod \"ovn-northd-0\" (UID: \"17dd6ec1-84fb-4bb3-8700-c8691f059937\") " pod="openstack/ovn-northd-0" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.125842 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/17dd6ec1-84fb-4bb3-8700-c8691f059937-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"17dd6ec1-84fb-4bb3-8700-c8691f059937\") " pod="openstack/ovn-northd-0" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.125876 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/17dd6ec1-84fb-4bb3-8700-c8691f059937-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"17dd6ec1-84fb-4bb3-8700-c8691f059937\") " pod="openstack/ovn-northd-0" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.125894 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/17dd6ec1-84fb-4bb3-8700-c8691f059937-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"17dd6ec1-84fb-4bb3-8700-c8691f059937\") " pod="openstack/ovn-northd-0" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.126950 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17dd6ec1-84fb-4bb3-8700-c8691f059937-scripts\") pod \"ovn-northd-0\" (UID: \"17dd6ec1-84fb-4bb3-8700-c8691f059937\") " pod="openstack/ovn-northd-0" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.127492 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17dd6ec1-84fb-4bb3-8700-c8691f059937-config\") pod \"ovn-northd-0\" (UID: \"17dd6ec1-84fb-4bb3-8700-c8691f059937\") " pod="openstack/ovn-northd-0" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.131774 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/17dd6ec1-84fb-4bb3-8700-c8691f059937-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"17dd6ec1-84fb-4bb3-8700-c8691f059937\") " pod="openstack/ovn-northd-0" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.139215 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/17dd6ec1-84fb-4bb3-8700-c8691f059937-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"17dd6ec1-84fb-4bb3-8700-c8691f059937\") " pod="openstack/ovn-northd-0" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.141167 4826 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/17dd6ec1-84fb-4bb3-8700-c8691f059937-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"17dd6ec1-84fb-4bb3-8700-c8691f059937\") " pod="openstack/ovn-northd-0" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.141348 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17dd6ec1-84fb-4bb3-8700-c8691f059937-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"17dd6ec1-84fb-4bb3-8700-c8691f059937\") " pod="openstack/ovn-northd-0" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.144807 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbxlf\" (UniqueName: \"kubernetes.io/projected/17dd6ec1-84fb-4bb3-8700-c8691f059937-kube-api-access-cbxlf\") pod \"ovn-northd-0\" (UID: \"17dd6ec1-84fb-4bb3-8700-c8691f059937\") " pod="openstack/ovn-northd-0" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.331697 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.426413 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-d82nq" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.428261 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6fcd-account-create-update-fgtgt" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.432221 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-r6wg8" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.436795 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-tjfkk" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.458709 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3ce2-account-create-update-zgdts" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.526595 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-97ad-account-create-update-xx9t5" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.534263 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nbdb\" (UniqueName: \"kubernetes.io/projected/f6c6f92b-fa9b-42bc-b43a-f4ee97d74c97-kube-api-access-5nbdb\") pod \"f6c6f92b-fa9b-42bc-b43a-f4ee97d74c97\" (UID: \"f6c6f92b-fa9b-42bc-b43a-f4ee97d74c97\") " Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.534476 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34ffc130-bd9e-4f9c-9d02-7f3cf93df9d2-operator-scripts\") pod \"34ffc130-bd9e-4f9c-9d02-7f3cf93df9d2\" (UID: \"34ffc130-bd9e-4f9c-9d02-7f3cf93df9d2\") " Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.534671 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6cs7\" (UniqueName: \"kubernetes.io/projected/30cb6345-09e5-421e-8cac-7223ff25731a-kube-api-access-l6cs7\") pod \"30cb6345-09e5-421e-8cac-7223ff25731a\" (UID: \"30cb6345-09e5-421e-8cac-7223ff25731a\") " Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.535201 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34ffc130-bd9e-4f9c-9d02-7f3cf93df9d2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "34ffc130-bd9e-4f9c-9d02-7f3cf93df9d2" (UID: "34ffc130-bd9e-4f9c-9d02-7f3cf93df9d2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.535505 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6c6f92b-fa9b-42bc-b43a-f4ee97d74c97-operator-scripts\") pod \"f6c6f92b-fa9b-42bc-b43a-f4ee97d74c97\" (UID: \"f6c6f92b-fa9b-42bc-b43a-f4ee97d74c97\") " Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.535562 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2621642-f600-4c3e-b641-9665ec72e213-operator-scripts\") pod \"f2621642-f600-4c3e-b641-9665ec72e213\" (UID: \"f2621642-f600-4c3e-b641-9665ec72e213\") " Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.535582 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30cb6345-09e5-421e-8cac-7223ff25731a-operator-scripts\") pod \"30cb6345-09e5-421e-8cac-7223ff25731a\" (UID: \"30cb6345-09e5-421e-8cac-7223ff25731a\") " Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.535607 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mkgx\" (UniqueName: \"kubernetes.io/projected/34ffc130-bd9e-4f9c-9d02-7f3cf93df9d2-kube-api-access-4mkgx\") pod \"34ffc130-bd9e-4f9c-9d02-7f3cf93df9d2\" (UID: \"34ffc130-bd9e-4f9c-9d02-7f3cf93df9d2\") " Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.535628 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj948\" (UniqueName: \"kubernetes.io/projected/f2621642-f600-4c3e-b641-9665ec72e213-kube-api-access-wj948\") pod \"f2621642-f600-4c3e-b641-9665ec72e213\" (UID: \"f2621642-f600-4c3e-b641-9665ec72e213\") " Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.535665 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c17bd300-9b3b-4b17-a668-ac2038915fc8-operator-scripts\") pod \"c17bd300-9b3b-4b17-a668-ac2038915fc8\" (UID: \"c17bd300-9b3b-4b17-a668-ac2038915fc8\") " Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.535686 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvb58\" (UniqueName: \"kubernetes.io/projected/c17bd300-9b3b-4b17-a668-ac2038915fc8-kube-api-access-rvb58\") pod \"c17bd300-9b3b-4b17-a668-ac2038915fc8\" (UID: \"c17bd300-9b3b-4b17-a668-ac2038915fc8\") " Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.536084 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34ffc130-bd9e-4f9c-9d02-7f3cf93df9d2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.536619 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6c6f92b-fa9b-42bc-b43a-f4ee97d74c97-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f6c6f92b-fa9b-42bc-b43a-f4ee97d74c97" (UID: "f6c6f92b-fa9b-42bc-b43a-f4ee97d74c97"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.536769 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30cb6345-09e5-421e-8cac-7223ff25731a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "30cb6345-09e5-421e-8cac-7223ff25731a" (UID: "30cb6345-09e5-421e-8cac-7223ff25731a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.536970 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c17bd300-9b3b-4b17-a668-ac2038915fc8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c17bd300-9b3b-4b17-a668-ac2038915fc8" (UID: "c17bd300-9b3b-4b17-a668-ac2038915fc8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.537022 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2621642-f600-4c3e-b641-9665ec72e213-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f2621642-f600-4c3e-b641-9665ec72e213" (UID: "f2621642-f600-4c3e-b641-9665ec72e213"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.540183 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6c6f92b-fa9b-42bc-b43a-f4ee97d74c97-kube-api-access-5nbdb" (OuterVolumeSpecName: "kube-api-access-5nbdb") pod "f6c6f92b-fa9b-42bc-b43a-f4ee97d74c97" (UID: "f6c6f92b-fa9b-42bc-b43a-f4ee97d74c97"). InnerVolumeSpecName "kube-api-access-5nbdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.542208 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34ffc130-bd9e-4f9c-9d02-7f3cf93df9d2-kube-api-access-4mkgx" (OuterVolumeSpecName: "kube-api-access-4mkgx") pod "34ffc130-bd9e-4f9c-9d02-7f3cf93df9d2" (UID: "34ffc130-bd9e-4f9c-9d02-7f3cf93df9d2"). InnerVolumeSpecName "kube-api-access-4mkgx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.543273 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2621642-f600-4c3e-b641-9665ec72e213-kube-api-access-wj948" (OuterVolumeSpecName: "kube-api-access-wj948") pod "f2621642-f600-4c3e-b641-9665ec72e213" (UID: "f2621642-f600-4c3e-b641-9665ec72e213"). InnerVolumeSpecName "kube-api-access-wj948". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.543436 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c17bd300-9b3b-4b17-a668-ac2038915fc8-kube-api-access-rvb58" (OuterVolumeSpecName: "kube-api-access-rvb58") pod "c17bd300-9b3b-4b17-a668-ac2038915fc8" (UID: "c17bd300-9b3b-4b17-a668-ac2038915fc8"). InnerVolumeSpecName "kube-api-access-rvb58". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.548249 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30cb6345-09e5-421e-8cac-7223ff25731a-kube-api-access-l6cs7" (OuterVolumeSpecName: "kube-api-access-l6cs7") pod "30cb6345-09e5-421e-8cac-7223ff25731a" (UID: "30cb6345-09e5-421e-8cac-7223ff25731a"). InnerVolumeSpecName "kube-api-access-l6cs7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.637434 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcdb469f-8122-4cba-a7e1-9e2e9e829263-operator-scripts\") pod \"bcdb469f-8122-4cba-a7e1-9e2e9e829263\" (UID: \"bcdb469f-8122-4cba-a7e1-9e2e9e829263\") " Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.637506 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjj9v\" (UniqueName: \"kubernetes.io/projected/bcdb469f-8122-4cba-a7e1-9e2e9e829263-kube-api-access-sjj9v\") pod \"bcdb469f-8122-4cba-a7e1-9e2e9e829263\" (UID: \"bcdb469f-8122-4cba-a7e1-9e2e9e829263\") " Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.637911 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcdb469f-8122-4cba-a7e1-9e2e9e829263-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bcdb469f-8122-4cba-a7e1-9e2e9e829263" (UID: "bcdb469f-8122-4cba-a7e1-9e2e9e829263"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.637983 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c17bd300-9b3b-4b17-a668-ac2038915fc8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.638000 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvb58\" (UniqueName: \"kubernetes.io/projected/c17bd300-9b3b-4b17-a668-ac2038915fc8-kube-api-access-rvb58\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.638012 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nbdb\" (UniqueName: \"kubernetes.io/projected/f6c6f92b-fa9b-42bc-b43a-f4ee97d74c97-kube-api-access-5nbdb\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.638022 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6cs7\" (UniqueName: \"kubernetes.io/projected/30cb6345-09e5-421e-8cac-7223ff25731a-kube-api-access-l6cs7\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.638032 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6c6f92b-fa9b-42bc-b43a-f4ee97d74c97-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.638061 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2621642-f600-4c3e-b641-9665ec72e213-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.638071 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30cb6345-09e5-421e-8cac-7223ff25731a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 
07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.638081 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mkgx\" (UniqueName: \"kubernetes.io/projected/34ffc130-bd9e-4f9c-9d02-7f3cf93df9d2-kube-api-access-4mkgx\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.638091 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wj948\" (UniqueName: \"kubernetes.io/projected/f2621642-f600-4c3e-b641-9665ec72e213-kube-api-access-wj948\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.640769 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcdb469f-8122-4cba-a7e1-9e2e9e829263-kube-api-access-sjj9v" (OuterVolumeSpecName: "kube-api-access-sjj9v") pod "bcdb469f-8122-4cba-a7e1-9e2e9e829263" (UID: "bcdb469f-8122-4cba-a7e1-9e2e9e829263"). InnerVolumeSpecName "kube-api-access-sjj9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.739966 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcdb469f-8122-4cba-a7e1-9e2e9e829263-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.740360 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjj9v\" (UniqueName: \"kubernetes.io/projected/bcdb469f-8122-4cba-a7e1-9e2e9e829263-kube-api-access-sjj9v\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.749734 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6fcd-account-create-update-fgtgt" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.749787 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6fcd-account-create-update-fgtgt" event={"ID":"c17bd300-9b3b-4b17-a668-ac2038915fc8","Type":"ContainerDied","Data":"48ecc676f2b9876c91450948d240b40dae33ed4740bf8b0295072a6a3e5c83a9"} Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.749860 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48ecc676f2b9876c91450948d240b40dae33ed4740bf8b0295072a6a3e5c83a9" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.751747 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tjfkk" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.751777 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tjfkk" event={"ID":"30cb6345-09e5-421e-8cac-7223ff25731a","Type":"ContainerDied","Data":"abd209ee5b4766c1445380ec8af0704045df0a1a4d72c3edf9a087f6a1dc1dc5"} Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.751951 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abd209ee5b4766c1445380ec8af0704045df0a1a4d72c3edf9a087f6a1dc1dc5" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.753973 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-97ad-account-create-update-xx9t5" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.753997 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-97ad-account-create-update-xx9t5" event={"ID":"bcdb469f-8122-4cba-a7e1-9e2e9e829263","Type":"ContainerDied","Data":"9cac120bfe3aecbcab57f0603594c4db07b163bd22743c5da977483e167644be"} Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.754050 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cac120bfe3aecbcab57f0603594c4db07b163bd22743c5da977483e167644be" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.755742 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-r6wg8" event={"ID":"f2621642-f600-4c3e-b641-9665ec72e213","Type":"ContainerDied","Data":"5b08aee47f5c3e932b1154e96d9d37b93263512bc7053ffa54b435bd8f63261c"} Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.755773 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b08aee47f5c3e932b1154e96d9d37b93263512bc7053ffa54b435bd8f63261c" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.755807 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-r6wg8" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.757348 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-d82nq" event={"ID":"34ffc130-bd9e-4f9c-9d02-7f3cf93df9d2","Type":"ContainerDied","Data":"0e3fdce4df7db01219787d2fe880c5d47af20c29701ec1024d0a7190ce1116fe"} Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.757377 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e3fdce4df7db01219787d2fe880c5d47af20c29701ec1024d0a7190ce1116fe" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.757440 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-d82nq" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.760724 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3ce2-account-create-update-zgdts" event={"ID":"f6c6f92b-fa9b-42bc-b43a-f4ee97d74c97","Type":"ContainerDied","Data":"0f75b29fa3036ad911b9efda9d9c5b04b748c7f0b9d3c8a069306e302f772941"} Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.760771 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f75b29fa3036ad911b9efda9d9c5b04b748c7f0b9d3c8a069306e302f772941" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.760840 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3ce2-account-create-update-zgdts" Jan 29 07:02:08 crc kubenswrapper[4826]: W0129 07:02:08.835115 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17dd6ec1_84fb_4bb3_8700_c8691f059937.slice/crio-1095560d66bb32e4b21ddb8c2db45cc2e6b9c9229ce2399fd7ef5d5440cc36be WatchSource:0}: Error finding container 1095560d66bb32e4b21ddb8c2db45cc2e6b9c9229ce2399fd7ef5d5440cc36be: Status 404 returned error can't find the container with id 1095560d66bb32e4b21ddb8c2db45cc2e6b9c9229ce2399fd7ef5d5440cc36be Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.837972 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c6477c0-6361-4904-9574-8aa028f87d8d" path="/var/lib/kubelet/pods/6c6477c0-6361-4904-9574-8aa028f87d8d/volumes" Jan 29 07:02:08 crc kubenswrapper[4826]: I0129 07:02:08.840275 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 07:02:09 crc kubenswrapper[4826]: I0129 07:02:09.776761 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"17dd6ec1-84fb-4bb3-8700-c8691f059937","Type":"ContainerStarted","Data":"1095560d66bb32e4b21ddb8c2db45cc2e6b9c9229ce2399fd7ef5d5440cc36be"} Jan 29 07:02:09 crc kubenswrapper[4826]: I0129 07:02:09.785002 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-4zwtm" podUID="bbbbec70-be7f-4a31-9f97-76d5c78b1cd0" containerName="ovn-controller" probeResult="failure" output=< Jan 29 07:02:09 crc kubenswrapper[4826]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 29 07:02:09 crc kubenswrapper[4826]: > Jan 29 07:02:09 crc kubenswrapper[4826]: I0129 07:02:09.826183 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-m2d2v" Jan 29 07:02:09 crc kubenswrapper[4826]: I0129 07:02:09.832779 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-m2d2v" Jan 29 07:02:09 crc kubenswrapper[4826]: I0129 07:02:09.896554 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:02:10 crc kubenswrapper[4826]: I0129 07:02:10.051023 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-4zwtm-config-cgtwg"] Jan 29 07:02:10 crc kubenswrapper[4826]: E0129 07:02:10.054518 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c17bd300-9b3b-4b17-a668-ac2038915fc8" containerName="mariadb-account-create-update" Jan 29 07:02:10 crc kubenswrapper[4826]: I0129 07:02:10.054569 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="c17bd300-9b3b-4b17-a668-ac2038915fc8" containerName="mariadb-account-create-update" Jan 29 07:02:10 crc kubenswrapper[4826]: E0129 07:02:10.054614 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6c6f92b-fa9b-42bc-b43a-f4ee97d74c97" containerName="mariadb-account-create-update" Jan 29 07:02:10 crc kubenswrapper[4826]: I0129 07:02:10.054621 
4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6c6f92b-fa9b-42bc-b43a-f4ee97d74c97" containerName="mariadb-account-create-update" Jan 29 07:02:10 crc kubenswrapper[4826]: E0129 07:02:10.054645 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2621642-f600-4c3e-b641-9665ec72e213" containerName="mariadb-database-create" Jan 29 07:02:10 crc kubenswrapper[4826]: I0129 07:02:10.054651 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2621642-f600-4c3e-b641-9665ec72e213" containerName="mariadb-database-create" Jan 29 07:02:10 crc kubenswrapper[4826]: E0129 07:02:10.054661 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ffc130-bd9e-4f9c-9d02-7f3cf93df9d2" containerName="mariadb-database-create" Jan 29 07:02:10 crc kubenswrapper[4826]: I0129 07:02:10.054666 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ffc130-bd9e-4f9c-9d02-7f3cf93df9d2" containerName="mariadb-database-create" Jan 29 07:02:10 crc kubenswrapper[4826]: E0129 07:02:10.054673 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30cb6345-09e5-421e-8cac-7223ff25731a" containerName="mariadb-database-create" Jan 29 07:02:10 crc kubenswrapper[4826]: I0129 07:02:10.054679 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="30cb6345-09e5-421e-8cac-7223ff25731a" containerName="mariadb-database-create" Jan 29 07:02:10 crc kubenswrapper[4826]: E0129 07:02:10.054686 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcdb469f-8122-4cba-a7e1-9e2e9e829263" containerName="mariadb-account-create-update" Jan 29 07:02:10 crc kubenswrapper[4826]: I0129 07:02:10.054692 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcdb469f-8122-4cba-a7e1-9e2e9e829263" containerName="mariadb-account-create-update" Jan 29 07:02:10 crc kubenswrapper[4826]: I0129 07:02:10.054972 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2621642-f600-4c3e-b641-9665ec72e213" 
containerName="mariadb-database-create" Jan 29 07:02:10 crc kubenswrapper[4826]: I0129 07:02:10.054987 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="30cb6345-09e5-421e-8cac-7223ff25731a" containerName="mariadb-database-create" Jan 29 07:02:10 crc kubenswrapper[4826]: I0129 07:02:10.055001 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcdb469f-8122-4cba-a7e1-9e2e9e829263" containerName="mariadb-account-create-update" Jan 29 07:02:10 crc kubenswrapper[4826]: I0129 07:02:10.055011 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6c6f92b-fa9b-42bc-b43a-f4ee97d74c97" containerName="mariadb-account-create-update" Jan 29 07:02:10 crc kubenswrapper[4826]: I0129 07:02:10.055037 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="c17bd300-9b3b-4b17-a668-ac2038915fc8" containerName="mariadb-account-create-update" Jan 29 07:02:10 crc kubenswrapper[4826]: I0129 07:02:10.055048 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ffc130-bd9e-4f9c-9d02-7f3cf93df9d2" containerName="mariadb-database-create" Jan 29 07:02:10 crc kubenswrapper[4826]: I0129 07:02:10.055965 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-4zwtm-config-cgtwg" Jan 29 07:02:10 crc kubenswrapper[4826]: I0129 07:02:10.058678 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 29 07:02:10 crc kubenswrapper[4826]: I0129 07:02:10.060103 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4zwtm-config-cgtwg"] Jan 29 07:02:10 crc kubenswrapper[4826]: I0129 07:02:10.169138 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005-var-run-ovn\") pod \"ovn-controller-4zwtm-config-cgtwg\" (UID: \"0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005\") " pod="openstack/ovn-controller-4zwtm-config-cgtwg" Jan 29 07:02:10 crc kubenswrapper[4826]: I0129 07:02:10.169190 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005-var-log-ovn\") pod \"ovn-controller-4zwtm-config-cgtwg\" (UID: \"0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005\") " pod="openstack/ovn-controller-4zwtm-config-cgtwg" Jan 29 07:02:10 crc kubenswrapper[4826]: I0129 07:02:10.169221 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005-var-run\") pod \"ovn-controller-4zwtm-config-cgtwg\" (UID: \"0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005\") " pod="openstack/ovn-controller-4zwtm-config-cgtwg" Jan 29 07:02:10 crc kubenswrapper[4826]: I0129 07:02:10.170466 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005-additional-scripts\") pod \"ovn-controller-4zwtm-config-cgtwg\" (UID: 
\"0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005\") " pod="openstack/ovn-controller-4zwtm-config-cgtwg" Jan 29 07:02:10 crc kubenswrapper[4826]: I0129 07:02:10.170577 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005-scripts\") pod \"ovn-controller-4zwtm-config-cgtwg\" (UID: \"0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005\") " pod="openstack/ovn-controller-4zwtm-config-cgtwg" Jan 29 07:02:10 crc kubenswrapper[4826]: I0129 07:02:10.170700 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6xw6\" (UniqueName: \"kubernetes.io/projected/0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005-kube-api-access-k6xw6\") pod \"ovn-controller-4zwtm-config-cgtwg\" (UID: \"0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005\") " pod="openstack/ovn-controller-4zwtm-config-cgtwg" Jan 29 07:02:10 crc kubenswrapper[4826]: I0129 07:02:10.271644 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005-additional-scripts\") pod \"ovn-controller-4zwtm-config-cgtwg\" (UID: \"0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005\") " pod="openstack/ovn-controller-4zwtm-config-cgtwg" Jan 29 07:02:10 crc kubenswrapper[4826]: I0129 07:02:10.271703 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005-scripts\") pod \"ovn-controller-4zwtm-config-cgtwg\" (UID: \"0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005\") " pod="openstack/ovn-controller-4zwtm-config-cgtwg" Jan 29 07:02:10 crc kubenswrapper[4826]: I0129 07:02:10.271754 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6xw6\" (UniqueName: \"kubernetes.io/projected/0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005-kube-api-access-k6xw6\") pod 
\"ovn-controller-4zwtm-config-cgtwg\" (UID: \"0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005\") " pod="openstack/ovn-controller-4zwtm-config-cgtwg" Jan 29 07:02:10 crc kubenswrapper[4826]: I0129 07:02:10.271783 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005-var-run-ovn\") pod \"ovn-controller-4zwtm-config-cgtwg\" (UID: \"0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005\") " pod="openstack/ovn-controller-4zwtm-config-cgtwg" Jan 29 07:02:10 crc kubenswrapper[4826]: I0129 07:02:10.271802 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005-var-log-ovn\") pod \"ovn-controller-4zwtm-config-cgtwg\" (UID: \"0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005\") " pod="openstack/ovn-controller-4zwtm-config-cgtwg" Jan 29 07:02:10 crc kubenswrapper[4826]: I0129 07:02:10.271820 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005-var-run\") pod \"ovn-controller-4zwtm-config-cgtwg\" (UID: \"0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005\") " pod="openstack/ovn-controller-4zwtm-config-cgtwg" Jan 29 07:02:10 crc kubenswrapper[4826]: I0129 07:02:10.272154 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005-var-run\") pod \"ovn-controller-4zwtm-config-cgtwg\" (UID: \"0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005\") " pod="openstack/ovn-controller-4zwtm-config-cgtwg" Jan 29 07:02:10 crc kubenswrapper[4826]: I0129 07:02:10.272904 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005-additional-scripts\") pod \"ovn-controller-4zwtm-config-cgtwg\" (UID: 
\"0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005\") " pod="openstack/ovn-controller-4zwtm-config-cgtwg" Jan 29 07:02:10 crc kubenswrapper[4826]: I0129 07:02:10.273012 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005-var-run-ovn\") pod \"ovn-controller-4zwtm-config-cgtwg\" (UID: \"0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005\") " pod="openstack/ovn-controller-4zwtm-config-cgtwg" Jan 29 07:02:10 crc kubenswrapper[4826]: I0129 07:02:10.273063 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005-var-log-ovn\") pod \"ovn-controller-4zwtm-config-cgtwg\" (UID: \"0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005\") " pod="openstack/ovn-controller-4zwtm-config-cgtwg" Jan 29 07:02:10 crc kubenswrapper[4826]: I0129 07:02:10.274592 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005-scripts\") pod \"ovn-controller-4zwtm-config-cgtwg\" (UID: \"0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005\") " pod="openstack/ovn-controller-4zwtm-config-cgtwg" Jan 29 07:02:10 crc kubenswrapper[4826]: I0129 07:02:10.296199 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6xw6\" (UniqueName: \"kubernetes.io/projected/0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005-kube-api-access-k6xw6\") pod \"ovn-controller-4zwtm-config-cgtwg\" (UID: \"0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005\") " pod="openstack/ovn-controller-4zwtm-config-cgtwg" Jan 29 07:02:10 crc kubenswrapper[4826]: I0129 07:02:10.376858 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-4zwtm-config-cgtwg" Jan 29 07:02:10 crc kubenswrapper[4826]: I0129 07:02:10.403571 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 29 07:02:11 crc kubenswrapper[4826]: W0129 07:02:11.186837 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d9e4fdf_ad83_4f6e_abbd_55d3a6acf005.slice/crio-83a7ca9ae0d825a0033d26407e00694c0f474c674988d224d7803877a07b7421 WatchSource:0}: Error finding container 83a7ca9ae0d825a0033d26407e00694c0f474c674988d224d7803877a07b7421: Status 404 returned error can't find the container with id 83a7ca9ae0d825a0033d26407e00694c0f474c674988d224d7803877a07b7421 Jan 29 07:02:11 crc kubenswrapper[4826]: I0129 07:02:11.188562 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4zwtm-config-cgtwg"] Jan 29 07:02:11 crc kubenswrapper[4826]: I0129 07:02:11.629439 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-27drr"] Jan 29 07:02:11 crc kubenswrapper[4826]: I0129 07:02:11.630644 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-27drr" Jan 29 07:02:11 crc kubenswrapper[4826]: I0129 07:02:11.641730 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-2f64-account-create-update-2n4t7"] Jan 29 07:02:11 crc kubenswrapper[4826]: I0129 07:02:11.643029 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-2f64-account-create-update-2n4t7" Jan 29 07:02:11 crc kubenswrapper[4826]: I0129 07:02:11.645126 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 29 07:02:11 crc kubenswrapper[4826]: I0129 07:02:11.656080 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-27drr"] Jan 29 07:02:11 crc kubenswrapper[4826]: I0129 07:02:11.678387 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-2f64-account-create-update-2n4t7"] Jan 29 07:02:11 crc kubenswrapper[4826]: I0129 07:02:11.698291 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt6sl\" (UniqueName: \"kubernetes.io/projected/4ddbe0a6-485b-4bde-b08a-8745e551bbf6-kube-api-access-lt6sl\") pod \"cinder-db-create-27drr\" (UID: \"4ddbe0a6-485b-4bde-b08a-8745e551bbf6\") " pod="openstack/cinder-db-create-27drr" Jan 29 07:02:11 crc kubenswrapper[4826]: I0129 07:02:11.698496 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/812c49e0-db30-4cfe-9722-6ddb5931a2c0-operator-scripts\") pod \"barbican-2f64-account-create-update-2n4t7\" (UID: \"812c49e0-db30-4cfe-9722-6ddb5931a2c0\") " pod="openstack/barbican-2f64-account-create-update-2n4t7" Jan 29 07:02:11 crc kubenswrapper[4826]: I0129 07:02:11.698528 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ddbe0a6-485b-4bde-b08a-8745e551bbf6-operator-scripts\") pod \"cinder-db-create-27drr\" (UID: \"4ddbe0a6-485b-4bde-b08a-8745e551bbf6\") " pod="openstack/cinder-db-create-27drr" Jan 29 07:02:11 crc kubenswrapper[4826]: I0129 07:02:11.698590 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-z5f8c\" (UniqueName: \"kubernetes.io/projected/812c49e0-db30-4cfe-9722-6ddb5931a2c0-kube-api-access-z5f8c\") pod \"barbican-2f64-account-create-update-2n4t7\" (UID: \"812c49e0-db30-4cfe-9722-6ddb5931a2c0\") " pod="openstack/barbican-2f64-account-create-update-2n4t7" Jan 29 07:02:11 crc kubenswrapper[4826]: I0129 07:02:11.800043 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"17dd6ec1-84fb-4bb3-8700-c8691f059937","Type":"ContainerStarted","Data":"2231f0fb6e7d9114a6e1ce628c3f775fddfa8bbe31c0d18c4ebf95440d9a1023"} Jan 29 07:02:11 crc kubenswrapper[4826]: I0129 07:02:11.800109 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"17dd6ec1-84fb-4bb3-8700-c8691f059937","Type":"ContainerStarted","Data":"7ba9ac04e0850886e890e608a55f11c50e9ca3d5994419279b8b8fc19be3fbd4"} Jan 29 07:02:11 crc kubenswrapper[4826]: I0129 07:02:11.801860 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 29 07:02:11 crc kubenswrapper[4826]: I0129 07:02:11.803557 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4zwtm-config-cgtwg" event={"ID":"0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005","Type":"ContainerStarted","Data":"cf2944944f9f21b3ba12ebc08b9e24adaea20d8108a210e674d22e24563e0980"} Jan 29 07:02:11 crc kubenswrapper[4826]: I0129 07:02:11.803584 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4zwtm-config-cgtwg" event={"ID":"0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005","Type":"ContainerStarted","Data":"83a7ca9ae0d825a0033d26407e00694c0f474c674988d224d7803877a07b7421"} Jan 29 07:02:11 crc kubenswrapper[4826]: I0129 07:02:11.898996 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5f8c\" (UniqueName: \"kubernetes.io/projected/812c49e0-db30-4cfe-9722-6ddb5931a2c0-kube-api-access-z5f8c\") pod 
\"barbican-2f64-account-create-update-2n4t7\" (UID: \"812c49e0-db30-4cfe-9722-6ddb5931a2c0\") " pod="openstack/barbican-2f64-account-create-update-2n4t7" Jan 29 07:02:11 crc kubenswrapper[4826]: I0129 07:02:11.899460 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt6sl\" (UniqueName: \"kubernetes.io/projected/4ddbe0a6-485b-4bde-b08a-8745e551bbf6-kube-api-access-lt6sl\") pod \"cinder-db-create-27drr\" (UID: \"4ddbe0a6-485b-4bde-b08a-8745e551bbf6\") " pod="openstack/cinder-db-create-27drr" Jan 29 07:02:11 crc kubenswrapper[4826]: I0129 07:02:11.899615 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/812c49e0-db30-4cfe-9722-6ddb5931a2c0-operator-scripts\") pod \"barbican-2f64-account-create-update-2n4t7\" (UID: \"812c49e0-db30-4cfe-9722-6ddb5931a2c0\") " pod="openstack/barbican-2f64-account-create-update-2n4t7" Jan 29 07:02:11 crc kubenswrapper[4826]: I0129 07:02:11.899640 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ddbe0a6-485b-4bde-b08a-8745e551bbf6-operator-scripts\") pod \"cinder-db-create-27drr\" (UID: \"4ddbe0a6-485b-4bde-b08a-8745e551bbf6\") " pod="openstack/cinder-db-create-27drr" Jan 29 07:02:11 crc kubenswrapper[4826]: I0129 07:02:11.900425 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ddbe0a6-485b-4bde-b08a-8745e551bbf6-operator-scripts\") pod \"cinder-db-create-27drr\" (UID: \"4ddbe0a6-485b-4bde-b08a-8745e551bbf6\") " pod="openstack/cinder-db-create-27drr" Jan 29 07:02:11 crc kubenswrapper[4826]: I0129 07:02:11.901853 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/812c49e0-db30-4cfe-9722-6ddb5931a2c0-operator-scripts\") pod 
\"barbican-2f64-account-create-update-2n4t7\" (UID: \"812c49e0-db30-4cfe-9722-6ddb5931a2c0\") " pod="openstack/barbican-2f64-account-create-update-2n4t7" Jan 29 07:02:11 crc kubenswrapper[4826]: I0129 07:02:11.935283 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt6sl\" (UniqueName: \"kubernetes.io/projected/4ddbe0a6-485b-4bde-b08a-8745e551bbf6-kube-api-access-lt6sl\") pod \"cinder-db-create-27drr\" (UID: \"4ddbe0a6-485b-4bde-b08a-8745e551bbf6\") " pod="openstack/cinder-db-create-27drr" Jan 29 07:02:11 crc kubenswrapper[4826]: I0129 07:02:11.941369 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-8bhcx"] Jan 29 07:02:11 crc kubenswrapper[4826]: I0129 07:02:11.942955 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8bhcx" Jan 29 07:02:11 crc kubenswrapper[4826]: I0129 07:02:11.945155 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-27drr" Jan 29 07:02:11 crc kubenswrapper[4826]: I0129 07:02:11.949644 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-4cd8-account-create-update-5swv2"] Jan 29 07:02:11 crc kubenswrapper[4826]: I0129 07:02:11.951283 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-4cd8-account-create-update-5swv2" Jan 29 07:02:11 crc kubenswrapper[4826]: I0129 07:02:11.955711 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 29 07:02:11 crc kubenswrapper[4826]: I0129 07:02:11.958997 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5f8c\" (UniqueName: \"kubernetes.io/projected/812c49e0-db30-4cfe-9722-6ddb5931a2c0-kube-api-access-z5f8c\") pod \"barbican-2f64-account-create-update-2n4t7\" (UID: \"812c49e0-db30-4cfe-9722-6ddb5931a2c0\") " pod="openstack/barbican-2f64-account-create-update-2n4t7" Jan 29 07:02:11 crc kubenswrapper[4826]: I0129 07:02:11.968815 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-8bhcx"] Jan 29 07:02:11 crc kubenswrapper[4826]: I0129 07:02:11.975866 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.059657491 podStartE2EDuration="4.975844381s" podCreationTimestamp="2026-01-29 07:02:07 +0000 UTC" firstStartedPulling="2026-01-29 07:02:08.838413015 +0000 UTC m=+1112.700206114" lastFinishedPulling="2026-01-29 07:02:10.754599935 +0000 UTC m=+1114.616393004" observedRunningTime="2026-01-29 07:02:11.97038138 +0000 UTC m=+1115.832174469" watchObservedRunningTime="2026-01-29 07:02:11.975844381 +0000 UTC m=+1115.837637450" Jan 29 07:02:11 crc kubenswrapper[4826]: I0129 07:02:11.984359 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-4cd8-account-create-update-5swv2"] Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.001086 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18c89e1f-a128-4289-ad90-49c11f2c89cd-operator-scripts\") pod \"cinder-4cd8-account-create-update-5swv2\" (UID: \"18c89e1f-a128-4289-ad90-49c11f2c89cd\") " 
pod="openstack/cinder-4cd8-account-create-update-5swv2" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.001153 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79bda2ef-7c8b-4970-b194-4f17c6c2c4cd-operator-scripts\") pod \"barbican-db-create-8bhcx\" (UID: \"79bda2ef-7c8b-4970-b194-4f17c6c2c4cd\") " pod="openstack/barbican-db-create-8bhcx" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.001201 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r78m\" (UniqueName: \"kubernetes.io/projected/79bda2ef-7c8b-4970-b194-4f17c6c2c4cd-kube-api-access-8r78m\") pod \"barbican-db-create-8bhcx\" (UID: \"79bda2ef-7c8b-4970-b194-4f17c6c2c4cd\") " pod="openstack/barbican-db-create-8bhcx" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.001226 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz642\" (UniqueName: \"kubernetes.io/projected/18c89e1f-a128-4289-ad90-49c11f2c89cd-kube-api-access-kz642\") pod \"cinder-4cd8-account-create-update-5swv2\" (UID: \"18c89e1f-a128-4289-ad90-49c11f2c89cd\") " pod="openstack/cinder-4cd8-account-create-update-5swv2" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.014468 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-4zwtm-config-cgtwg" podStartSLOduration=2.014448968 podStartE2EDuration="2.014448968s" podCreationTimestamp="2026-01-29 07:02:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:02:12.010486795 +0000 UTC m=+1115.872279864" watchObservedRunningTime="2026-01-29 07:02:12.014448968 +0000 UTC m=+1115.876242027" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.102768 4826 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18c89e1f-a128-4289-ad90-49c11f2c89cd-operator-scripts\") pod \"cinder-4cd8-account-create-update-5swv2\" (UID: \"18c89e1f-a128-4289-ad90-49c11f2c89cd\") " pod="openstack/cinder-4cd8-account-create-update-5swv2" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.103356 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79bda2ef-7c8b-4970-b194-4f17c6c2c4cd-operator-scripts\") pod \"barbican-db-create-8bhcx\" (UID: \"79bda2ef-7c8b-4970-b194-4f17c6c2c4cd\") " pod="openstack/barbican-db-create-8bhcx" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.103429 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r78m\" (UniqueName: \"kubernetes.io/projected/79bda2ef-7c8b-4970-b194-4f17c6c2c4cd-kube-api-access-8r78m\") pod \"barbican-db-create-8bhcx\" (UID: \"79bda2ef-7c8b-4970-b194-4f17c6c2c4cd\") " pod="openstack/barbican-db-create-8bhcx" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.103466 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz642\" (UniqueName: \"kubernetes.io/projected/18c89e1f-a128-4289-ad90-49c11f2c89cd-kube-api-access-kz642\") pod \"cinder-4cd8-account-create-update-5swv2\" (UID: \"18c89e1f-a128-4289-ad90-49c11f2c89cd\") " pod="openstack/cinder-4cd8-account-create-update-5swv2" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.104496 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79bda2ef-7c8b-4970-b194-4f17c6c2c4cd-operator-scripts\") pod \"barbican-db-create-8bhcx\" (UID: \"79bda2ef-7c8b-4970-b194-4f17c6c2c4cd\") " pod="openstack/barbican-db-create-8bhcx" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.104809 4826 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18c89e1f-a128-4289-ad90-49c11f2c89cd-operator-scripts\") pod \"cinder-4cd8-account-create-update-5swv2\" (UID: \"18c89e1f-a128-4289-ad90-49c11f2c89cd\") " pod="openstack/cinder-4cd8-account-create-update-5swv2" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.158937 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz642\" (UniqueName: \"kubernetes.io/projected/18c89e1f-a128-4289-ad90-49c11f2c89cd-kube-api-access-kz642\") pod \"cinder-4cd8-account-create-update-5swv2\" (UID: \"18c89e1f-a128-4289-ad90-49c11f2c89cd\") " pod="openstack/cinder-4cd8-account-create-update-5swv2" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.174048 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r78m\" (UniqueName: \"kubernetes.io/projected/79bda2ef-7c8b-4970-b194-4f17c6c2c4cd-kube-api-access-8r78m\") pod \"barbican-db-create-8bhcx\" (UID: \"79bda2ef-7c8b-4970-b194-4f17c6c2c4cd\") " pod="openstack/barbican-db-create-8bhcx" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.243473 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-npv6r"] Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.253119 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-npv6r" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.256776 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-2f64-account-create-update-2n4t7" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.268165 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-2845-account-create-update-ll4xp"] Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.274350 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-2845-account-create-update-ll4xp" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.276972 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-npv6r"] Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.279496 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.300772 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-2845-account-create-update-ll4xp"] Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.314833 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/644e2cae-40d8-4a56-a2d8-941d1b61efa4-operator-scripts\") pod \"neutron-db-create-npv6r\" (UID: \"644e2cae-40d8-4a56-a2d8-941d1b61efa4\") " pod="openstack/neutron-db-create-npv6r" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.314939 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67xwg\" (UniqueName: \"kubernetes.io/projected/644e2cae-40d8-4a56-a2d8-941d1b61efa4-kube-api-access-67xwg\") pod \"neutron-db-create-npv6r\" (UID: \"644e2cae-40d8-4a56-a2d8-941d1b61efa4\") " pod="openstack/neutron-db-create-npv6r" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.352121 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-54vbh"] Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.353408 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-54vbh" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.358408 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.364651 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-54vbh"] Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.382255 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-27drr"] Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.418107 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67xwg\" (UniqueName: \"kubernetes.io/projected/644e2cae-40d8-4a56-a2d8-941d1b61efa4-kube-api-access-67xwg\") pod \"neutron-db-create-npv6r\" (UID: \"644e2cae-40d8-4a56-a2d8-941d1b61efa4\") " pod="openstack/neutron-db-create-npv6r" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.418576 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjslv\" (UniqueName: \"kubernetes.io/projected/0ae2b787-9c3f-40ca-91e2-10a4257a1df1-kube-api-access-tjslv\") pod \"neutron-2845-account-create-update-ll4xp\" (UID: \"0ae2b787-9c3f-40ca-91e2-10a4257a1df1\") " pod="openstack/neutron-2845-account-create-update-ll4xp" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.418638 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/644e2cae-40d8-4a56-a2d8-941d1b61efa4-operator-scripts\") pod \"neutron-db-create-npv6r\" (UID: \"644e2cae-40d8-4a56-a2d8-941d1b61efa4\") " pod="openstack/neutron-db-create-npv6r" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.418707 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/0ae2b787-9c3f-40ca-91e2-10a4257a1df1-operator-scripts\") pod \"neutron-2845-account-create-update-ll4xp\" (UID: \"0ae2b787-9c3f-40ca-91e2-10a4257a1df1\") " pod="openstack/neutron-2845-account-create-update-ll4xp" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.418735 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e98edf07-b03d-47bd-bb1a-b942a1e66fc3-operator-scripts\") pod \"root-account-create-update-54vbh\" (UID: \"e98edf07-b03d-47bd-bb1a-b942a1e66fc3\") " pod="openstack/root-account-create-update-54vbh" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.418756 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rtct\" (UniqueName: \"kubernetes.io/projected/e98edf07-b03d-47bd-bb1a-b942a1e66fc3-kube-api-access-7rtct\") pod \"root-account-create-update-54vbh\" (UID: \"e98edf07-b03d-47bd-bb1a-b942a1e66fc3\") " pod="openstack/root-account-create-update-54vbh" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.419732 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/644e2cae-40d8-4a56-a2d8-941d1b61efa4-operator-scripts\") pod \"neutron-db-create-npv6r\" (UID: \"644e2cae-40d8-4a56-a2d8-941d1b61efa4\") " pod="openstack/neutron-db-create-npv6r" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.435001 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67xwg\" (UniqueName: \"kubernetes.io/projected/644e2cae-40d8-4a56-a2d8-941d1b61efa4-kube-api-access-67xwg\") pod \"neutron-db-create-npv6r\" (UID: \"644e2cae-40d8-4a56-a2d8-941d1b61efa4\") " pod="openstack/neutron-db-create-npv6r" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.445621 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-8bhcx" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.458506 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-4cd8-account-create-update-5swv2" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.520898 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ae2b787-9c3f-40ca-91e2-10a4257a1df1-operator-scripts\") pod \"neutron-2845-account-create-update-ll4xp\" (UID: \"0ae2b787-9c3f-40ca-91e2-10a4257a1df1\") " pod="openstack/neutron-2845-account-create-update-ll4xp" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.520947 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e98edf07-b03d-47bd-bb1a-b942a1e66fc3-operator-scripts\") pod \"root-account-create-update-54vbh\" (UID: \"e98edf07-b03d-47bd-bb1a-b942a1e66fc3\") " pod="openstack/root-account-create-update-54vbh" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.520988 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rtct\" (UniqueName: \"kubernetes.io/projected/e98edf07-b03d-47bd-bb1a-b942a1e66fc3-kube-api-access-7rtct\") pod \"root-account-create-update-54vbh\" (UID: \"e98edf07-b03d-47bd-bb1a-b942a1e66fc3\") " pod="openstack/root-account-create-update-54vbh" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.521041 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjslv\" (UniqueName: \"kubernetes.io/projected/0ae2b787-9c3f-40ca-91e2-10a4257a1df1-kube-api-access-tjslv\") pod \"neutron-2845-account-create-update-ll4xp\" (UID: \"0ae2b787-9c3f-40ca-91e2-10a4257a1df1\") " pod="openstack/neutron-2845-account-create-update-ll4xp" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.521898 4826 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e98edf07-b03d-47bd-bb1a-b942a1e66fc3-operator-scripts\") pod \"root-account-create-update-54vbh\" (UID: \"e98edf07-b03d-47bd-bb1a-b942a1e66fc3\") " pod="openstack/root-account-create-update-54vbh" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.534071 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ae2b787-9c3f-40ca-91e2-10a4257a1df1-operator-scripts\") pod \"neutron-2845-account-create-update-ll4xp\" (UID: \"0ae2b787-9c3f-40ca-91e2-10a4257a1df1\") " pod="openstack/neutron-2845-account-create-update-ll4xp" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.552221 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rtct\" (UniqueName: \"kubernetes.io/projected/e98edf07-b03d-47bd-bb1a-b942a1e66fc3-kube-api-access-7rtct\") pod \"root-account-create-update-54vbh\" (UID: \"e98edf07-b03d-47bd-bb1a-b942a1e66fc3\") " pod="openstack/root-account-create-update-54vbh" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.555395 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjslv\" (UniqueName: \"kubernetes.io/projected/0ae2b787-9c3f-40ca-91e2-10a4257a1df1-kube-api-access-tjslv\") pod \"neutron-2845-account-create-update-ll4xp\" (UID: \"0ae2b787-9c3f-40ca-91e2-10a4257a1df1\") " pod="openstack/neutron-2845-account-create-update-ll4xp" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.574238 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-2f64-account-create-update-2n4t7"] Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.595969 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-npv6r" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.644671 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2845-account-create-update-ll4xp" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.673428 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-54vbh" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.823078 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2f64-account-create-update-2n4t7" event={"ID":"812c49e0-db30-4cfe-9722-6ddb5931a2c0","Type":"ContainerStarted","Data":"137afef9d47f0a6a5614d4f2d7c8e1432be4bfbe5c3da69b6db4f80ef575253a"} Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.823757 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2f64-account-create-update-2n4t7" event={"ID":"812c49e0-db30-4cfe-9722-6ddb5931a2c0","Type":"ContainerStarted","Data":"b761f8cf4cc1bdb2dc355933b100c6d9fb05e5d74e56180e73b3a03ea3f3c400"} Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.826291 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-27drr" event={"ID":"4ddbe0a6-485b-4bde-b08a-8745e551bbf6","Type":"ContainerStarted","Data":"845f1e3e68f92d66cad36f3caa9cbd051acb175e0581aa74b30ede6cbc6d5c03"} Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.826353 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-27drr" event={"ID":"4ddbe0a6-485b-4bde-b08a-8745e551bbf6","Type":"ContainerStarted","Data":"6e1a2882c5626ffba31ef89d13d55d0d13a847245e1ec1571e0b704ba5a99072"} Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.828349 4826 generic.go:334] "Generic (PLEG): container finished" podID="0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005" containerID="cf2944944f9f21b3ba12ebc08b9e24adaea20d8108a210e674d22e24563e0980" exitCode=0 Jan 29 07:02:12 crc 
kubenswrapper[4826]: I0129 07:02:12.828691 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4zwtm-config-cgtwg" event={"ID":"0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005","Type":"ContainerDied","Data":"cf2944944f9f21b3ba12ebc08b9e24adaea20d8108a210e674d22e24563e0980"} Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.847976 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-2f64-account-create-update-2n4t7" podStartSLOduration=1.847958628 podStartE2EDuration="1.847958628s" podCreationTimestamp="2026-01-29 07:02:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:02:12.842083566 +0000 UTC m=+1116.703876635" watchObservedRunningTime="2026-01-29 07:02:12.847958628 +0000 UTC m=+1116.709751697" Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.867844 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-27drr" podStartSLOduration=1.86782743 podStartE2EDuration="1.86782743s" podCreationTimestamp="2026-01-29 07:02:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:02:12.867291147 +0000 UTC m=+1116.729084216" watchObservedRunningTime="2026-01-29 07:02:12.86782743 +0000 UTC m=+1116.729620499" Jan 29 07:02:12 crc kubenswrapper[4826]: E0129 07:02:12.981700 4826 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.173:45870->38.102.83.173:41327: write tcp 38.102.83.173:45870->38.102.83.173:41327: write: broken pipe Jan 29 07:02:12 crc kubenswrapper[4826]: I0129 07:02:12.982094 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-8bhcx"] Jan 29 07:02:13 crc kubenswrapper[4826]: I0129 07:02:13.100624 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/cinder-4cd8-account-create-update-5swv2"] Jan 29 07:02:13 crc kubenswrapper[4826]: W0129 07:02:13.132176 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18c89e1f_a128_4289_ad90_49c11f2c89cd.slice/crio-e044727dac87afd658ecbf5db802aaec04f5a76de6dc2e1b6869a63324c7d809 WatchSource:0}: Error finding container e044727dac87afd658ecbf5db802aaec04f5a76de6dc2e1b6869a63324c7d809: Status 404 returned error can't find the container with id e044727dac87afd658ecbf5db802aaec04f5a76de6dc2e1b6869a63324c7d809 Jan 29 07:02:13 crc kubenswrapper[4826]: I0129 07:02:13.154693 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-npv6r"] Jan 29 07:02:13 crc kubenswrapper[4826]: W0129 07:02:13.163711 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod644e2cae_40d8_4a56_a2d8_941d1b61efa4.slice/crio-57b31e2dd74f9eda329d10315d221da86f9b3d1dbd43a528e2bae4f357c2a2c7 WatchSource:0}: Error finding container 57b31e2dd74f9eda329d10315d221da86f9b3d1dbd43a528e2bae4f357c2a2c7: Status 404 returned error can't find the container with id 57b31e2dd74f9eda329d10315d221da86f9b3d1dbd43a528e2bae4f357c2a2c7 Jan 29 07:02:13 crc kubenswrapper[4826]: I0129 07:02:13.226038 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-2845-account-create-update-ll4xp"] Jan 29 07:02:13 crc kubenswrapper[4826]: I0129 07:02:13.238471 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-54vbh"] Jan 29 07:02:13 crc kubenswrapper[4826]: W0129 07:02:13.241528 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ae2b787_9c3f_40ca_91e2_10a4257a1df1.slice/crio-e204f1c5f5eab37c1c17ed9c5a0ecb51fd47d4b65d7e447a47a1a568406da5c2 WatchSource:0}: Error finding container 
e204f1c5f5eab37c1c17ed9c5a0ecb51fd47d4b65d7e447a47a1a568406da5c2: Status 404 returned error can't find the container with id e204f1c5f5eab37c1c17ed9c5a0ecb51fd47d4b65d7e447a47a1a568406da5c2 Jan 29 07:02:13 crc kubenswrapper[4826]: W0129 07:02:13.257152 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode98edf07_b03d_47bd_bb1a_b942a1e66fc3.slice/crio-0688b552d3bfbe9c57ece1eb7e97fe6391fc22274c295c2a24c04db232e9f3ac WatchSource:0}: Error finding container 0688b552d3bfbe9c57ece1eb7e97fe6391fc22274c295c2a24c04db232e9f3ac: Status 404 returned error can't find the container with id 0688b552d3bfbe9c57ece1eb7e97fe6391fc22274c295c2a24c04db232e9f3ac Jan 29 07:02:13 crc kubenswrapper[4826]: I0129 07:02:13.867588 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2845-account-create-update-ll4xp" event={"ID":"0ae2b787-9c3f-40ca-91e2-10a4257a1df1","Type":"ContainerStarted","Data":"d549f4b596986e823d47d770b85f06655aed211e20d2127f7baf8c70f10f5971"} Jan 29 07:02:13 crc kubenswrapper[4826]: I0129 07:02:13.867645 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2845-account-create-update-ll4xp" event={"ID":"0ae2b787-9c3f-40ca-91e2-10a4257a1df1","Type":"ContainerStarted","Data":"e204f1c5f5eab37c1c17ed9c5a0ecb51fd47d4b65d7e447a47a1a568406da5c2"} Jan 29 07:02:13 crc kubenswrapper[4826]: I0129 07:02:13.871139 4826 generic.go:334] "Generic (PLEG): container finished" podID="4ddbe0a6-485b-4bde-b08a-8745e551bbf6" containerID="845f1e3e68f92d66cad36f3caa9cbd051acb175e0581aa74b30ede6cbc6d5c03" exitCode=0 Jan 29 07:02:13 crc kubenswrapper[4826]: I0129 07:02:13.871270 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-27drr" event={"ID":"4ddbe0a6-485b-4bde-b08a-8745e551bbf6","Type":"ContainerDied","Data":"845f1e3e68f92d66cad36f3caa9cbd051acb175e0581aa74b30ede6cbc6d5c03"} Jan 29 07:02:13 crc kubenswrapper[4826]: I0129 
07:02:13.873327 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-54vbh" event={"ID":"e98edf07-b03d-47bd-bb1a-b942a1e66fc3","Type":"ContainerStarted","Data":"d2b2d9c846c9f62ce0471ed8392344e2eed633a6d6946652cf52b4e532c79a51"} Jan 29 07:02:13 crc kubenswrapper[4826]: I0129 07:02:13.873381 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-54vbh" event={"ID":"e98edf07-b03d-47bd-bb1a-b942a1e66fc3","Type":"ContainerStarted","Data":"0688b552d3bfbe9c57ece1eb7e97fe6391fc22274c295c2a24c04db232e9f3ac"} Jan 29 07:02:13 crc kubenswrapper[4826]: I0129 07:02:13.875225 4826 generic.go:334] "Generic (PLEG): container finished" podID="644e2cae-40d8-4a56-a2d8-941d1b61efa4" containerID="ff2e9d1be46f0b2258c946e148a06f55582dc3fdd26410fbc6021f87bfbe528f" exitCode=0 Jan 29 07:02:13 crc kubenswrapper[4826]: I0129 07:02:13.875455 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-npv6r" event={"ID":"644e2cae-40d8-4a56-a2d8-941d1b61efa4","Type":"ContainerDied","Data":"ff2e9d1be46f0b2258c946e148a06f55582dc3fdd26410fbc6021f87bfbe528f"} Jan 29 07:02:13 crc kubenswrapper[4826]: I0129 07:02:13.875484 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-npv6r" event={"ID":"644e2cae-40d8-4a56-a2d8-941d1b61efa4","Type":"ContainerStarted","Data":"57b31e2dd74f9eda329d10315d221da86f9b3d1dbd43a528e2bae4f357c2a2c7"} Jan 29 07:02:13 crc kubenswrapper[4826]: I0129 07:02:13.881224 4826 generic.go:334] "Generic (PLEG): container finished" podID="18c89e1f-a128-4289-ad90-49c11f2c89cd" containerID="3b16dc91c4cd7ce31b4ff6c7a4831ff1ec0c0e0999a5d166cdaf61d18f96c236" exitCode=0 Jan 29 07:02:13 crc kubenswrapper[4826]: I0129 07:02:13.881314 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-4cd8-account-create-update-5swv2" 
event={"ID":"18c89e1f-a128-4289-ad90-49c11f2c89cd","Type":"ContainerDied","Data":"3b16dc91c4cd7ce31b4ff6c7a4831ff1ec0c0e0999a5d166cdaf61d18f96c236"} Jan 29 07:02:13 crc kubenswrapper[4826]: I0129 07:02:13.881341 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-4cd8-account-create-update-5swv2" event={"ID":"18c89e1f-a128-4289-ad90-49c11f2c89cd","Type":"ContainerStarted","Data":"e044727dac87afd658ecbf5db802aaec04f5a76de6dc2e1b6869a63324c7d809"} Jan 29 07:02:13 crc kubenswrapper[4826]: I0129 07:02:13.882893 4826 generic.go:334] "Generic (PLEG): container finished" podID="79bda2ef-7c8b-4970-b194-4f17c6c2c4cd" containerID="ee88680810e029e7435b67f77ba4c16aee4d06f22ee550823bd4f344ca8762f9" exitCode=0 Jan 29 07:02:13 crc kubenswrapper[4826]: I0129 07:02:13.882954 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8bhcx" event={"ID":"79bda2ef-7c8b-4970-b194-4f17c6c2c4cd","Type":"ContainerDied","Data":"ee88680810e029e7435b67f77ba4c16aee4d06f22ee550823bd4f344ca8762f9"} Jan 29 07:02:13 crc kubenswrapper[4826]: I0129 07:02:13.882974 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8bhcx" event={"ID":"79bda2ef-7c8b-4970-b194-4f17c6c2c4cd","Type":"ContainerStarted","Data":"fdba327a0e6fbf8ea4e81e4e561b33fb04e38b75b74bc3ab38ac647a33546e2d"} Jan 29 07:02:13 crc kubenswrapper[4826]: I0129 07:02:13.889233 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-2845-account-create-update-ll4xp" podStartSLOduration=1.889210808 podStartE2EDuration="1.889210808s" podCreationTimestamp="2026-01-29 07:02:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:02:13.882979908 +0000 UTC m=+1117.744772987" watchObservedRunningTime="2026-01-29 07:02:13.889210808 +0000 UTC m=+1117.751003887" Jan 29 07:02:13 crc kubenswrapper[4826]: I0129 07:02:13.889786 4826 
generic.go:334] "Generic (PLEG): container finished" podID="812c49e0-db30-4cfe-9722-6ddb5931a2c0" containerID="137afef9d47f0a6a5614d4f2d7c8e1432be4bfbe5c3da69b6db4f80ef575253a" exitCode=0 Jan 29 07:02:13 crc kubenswrapper[4826]: I0129 07:02:13.889917 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2f64-account-create-update-2n4t7" event={"ID":"812c49e0-db30-4cfe-9722-6ddb5931a2c0","Type":"ContainerDied","Data":"137afef9d47f0a6a5614d4f2d7c8e1432be4bfbe5c3da69b6db4f80ef575253a"} Jan 29 07:02:13 crc kubenswrapper[4826]: I0129 07:02:13.930115 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-hbzx2"] Jan 29 07:02:13 crc kubenswrapper[4826]: I0129 07:02:13.931197 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-hbzx2" Jan 29 07:02:13 crc kubenswrapper[4826]: I0129 07:02:13.935348 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 07:02:13 crc kubenswrapper[4826]: I0129 07:02:13.935573 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-vphpw" Jan 29 07:02:13 crc kubenswrapper[4826]: I0129 07:02:13.935875 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 07:02:13 crc kubenswrapper[4826]: I0129 07:02:13.936976 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 07:02:13 crc kubenswrapper[4826]: I0129 07:02:13.948219 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-hbzx2"] Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.023197 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-54vbh" podStartSLOduration=2.023177276 podStartE2EDuration="2.023177276s" podCreationTimestamp="2026-01-29 07:02:12 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:02:14.01597873 +0000 UTC m=+1117.877771789" watchObservedRunningTime="2026-01-29 07:02:14.023177276 +0000 UTC m=+1117.884970335" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.057089 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4b5abc4-73e1-4908-a776-0162b26a6d30-combined-ca-bundle\") pod \"keystone-db-sync-hbzx2\" (UID: \"d4b5abc4-73e1-4908-a776-0162b26a6d30\") " pod="openstack/keystone-db-sync-hbzx2" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.057165 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4b5abc4-73e1-4908-a776-0162b26a6d30-config-data\") pod \"keystone-db-sync-hbzx2\" (UID: \"d4b5abc4-73e1-4908-a776-0162b26a6d30\") " pod="openstack/keystone-db-sync-hbzx2" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.057262 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp6fs\" (UniqueName: \"kubernetes.io/projected/d4b5abc4-73e1-4908-a776-0162b26a6d30-kube-api-access-mp6fs\") pod \"keystone-db-sync-hbzx2\" (UID: \"d4b5abc4-73e1-4908-a776-0162b26a6d30\") " pod="openstack/keystone-db-sync-hbzx2" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.159183 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4b5abc4-73e1-4908-a776-0162b26a6d30-combined-ca-bundle\") pod \"keystone-db-sync-hbzx2\" (UID: \"d4b5abc4-73e1-4908-a776-0162b26a6d30\") " pod="openstack/keystone-db-sync-hbzx2" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.159243 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d4b5abc4-73e1-4908-a776-0162b26a6d30-config-data\") pod \"keystone-db-sync-hbzx2\" (UID: \"d4b5abc4-73e1-4908-a776-0162b26a6d30\") " pod="openstack/keystone-db-sync-hbzx2" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.159338 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp6fs\" (UniqueName: \"kubernetes.io/projected/d4b5abc4-73e1-4908-a776-0162b26a6d30-kube-api-access-mp6fs\") pod \"keystone-db-sync-hbzx2\" (UID: \"d4b5abc4-73e1-4908-a776-0162b26a6d30\") " pod="openstack/keystone-db-sync-hbzx2" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.166024 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4b5abc4-73e1-4908-a776-0162b26a6d30-combined-ca-bundle\") pod \"keystone-db-sync-hbzx2\" (UID: \"d4b5abc4-73e1-4908-a776-0162b26a6d30\") " pod="openstack/keystone-db-sync-hbzx2" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.166265 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4b5abc4-73e1-4908-a776-0162b26a6d30-config-data\") pod \"keystone-db-sync-hbzx2\" (UID: \"d4b5abc4-73e1-4908-a776-0162b26a6d30\") " pod="openstack/keystone-db-sync-hbzx2" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.175111 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp6fs\" (UniqueName: \"kubernetes.io/projected/d4b5abc4-73e1-4908-a776-0162b26a6d30-kube-api-access-mp6fs\") pod \"keystone-db-sync-hbzx2\" (UID: \"d4b5abc4-73e1-4908-a776-0162b26a6d30\") " pod="openstack/keystone-db-sync-hbzx2" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.288267 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-4zwtm-config-cgtwg" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.305892 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-hbzx2" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.341202 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-gnxn7"] Jan 29 07:02:14 crc kubenswrapper[4826]: E0129 07:02:14.341776 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005" containerName="ovn-config" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.341800 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005" containerName="ovn-config" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.342055 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005" containerName="ovn-config" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.342786 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-gnxn7" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.347534 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-z4mwb" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.347780 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.351983 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-gnxn7"] Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.361282 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005-var-log-ovn\") pod \"0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005\" (UID: \"0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005\") " Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.361344 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005-additional-scripts\") pod \"0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005\" (UID: \"0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005\") " Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.361410 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005-var-run\") pod \"0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005\" (UID: \"0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005\") " Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.361462 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6xw6\" (UniqueName: \"kubernetes.io/projected/0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005-kube-api-access-k6xw6\") pod \"0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005\" (UID: \"0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005\") " Jan 29 07:02:14 crc 
kubenswrapper[4826]: I0129 07:02:14.361543 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005-scripts\") pod \"0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005\" (UID: \"0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005\") " Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.361573 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005-var-run-ovn\") pod \"0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005\" (UID: \"0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005\") " Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.361985 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005" (UID: "0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.362022 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005" (UID: "0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.362912 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005" (UID: "0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.362947 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005-var-run" (OuterVolumeSpecName: "var-run") pod "0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005" (UID: "0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.364219 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005-scripts" (OuterVolumeSpecName: "scripts") pod "0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005" (UID: "0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.367946 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005-kube-api-access-k6xw6" (OuterVolumeSpecName: "kube-api-access-k6xw6") pod "0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005" (UID: "0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005"). InnerVolumeSpecName "kube-api-access-k6xw6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.463991 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c-db-sync-config-data\") pod \"glance-db-sync-gnxn7\" (UID: \"b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c\") " pod="openstack/glance-db-sync-gnxn7" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.464570 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c-combined-ca-bundle\") pod \"glance-db-sync-gnxn7\" (UID: \"b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c\") " pod="openstack/glance-db-sync-gnxn7" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.464623 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c-config-data\") pod \"glance-db-sync-gnxn7\" (UID: \"b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c\") " pod="openstack/glance-db-sync-gnxn7" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.464665 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksrgh\" (UniqueName: \"kubernetes.io/projected/b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c-kube-api-access-ksrgh\") pod \"glance-db-sync-gnxn7\" (UID: \"b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c\") " pod="openstack/glance-db-sync-gnxn7" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.465210 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.465245 4826 reconciler_common.go:293] "Volume 
detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.465260 4826 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.465271 4826 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.465285 4826 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005-var-run\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.465326 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6xw6\" (UniqueName: \"kubernetes.io/projected/0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005-kube-api-access-k6xw6\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.567694 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c-db-sync-config-data\") pod \"glance-db-sync-gnxn7\" (UID: \"b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c\") " pod="openstack/glance-db-sync-gnxn7" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.567790 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c-combined-ca-bundle\") pod \"glance-db-sync-gnxn7\" (UID: \"b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c\") " pod="openstack/glance-db-sync-gnxn7" Jan 29 
07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.567823 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c-config-data\") pod \"glance-db-sync-gnxn7\" (UID: \"b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c\") " pod="openstack/glance-db-sync-gnxn7" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.567860 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksrgh\" (UniqueName: \"kubernetes.io/projected/b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c-kube-api-access-ksrgh\") pod \"glance-db-sync-gnxn7\" (UID: \"b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c\") " pod="openstack/glance-db-sync-gnxn7" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.574338 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c-combined-ca-bundle\") pod \"glance-db-sync-gnxn7\" (UID: \"b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c\") " pod="openstack/glance-db-sync-gnxn7" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.574379 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c-db-sync-config-data\") pod \"glance-db-sync-gnxn7\" (UID: \"b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c\") " pod="openstack/glance-db-sync-gnxn7" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.575605 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c-config-data\") pod \"glance-db-sync-gnxn7\" (UID: \"b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c\") " pod="openstack/glance-db-sync-gnxn7" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.587555 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ksrgh\" (UniqueName: \"kubernetes.io/projected/b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c-kube-api-access-ksrgh\") pod \"glance-db-sync-gnxn7\" (UID: \"b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c\") " pod="openstack/glance-db-sync-gnxn7" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.756473 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-gnxn7" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.797962 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-4zwtm" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.803653 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-hbzx2"] Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.900861 4826 generic.go:334] "Generic (PLEG): container finished" podID="0ae2b787-9c3f-40ca-91e2-10a4257a1df1" containerID="d549f4b596986e823d47d770b85f06655aed211e20d2127f7baf8c70f10f5971" exitCode=0 Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.901064 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2845-account-create-update-ll4xp" event={"ID":"0ae2b787-9c3f-40ca-91e2-10a4257a1df1","Type":"ContainerDied","Data":"d549f4b596986e823d47d770b85f06655aed211e20d2127f7baf8c70f10f5971"} Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.904996 4826 generic.go:334] "Generic (PLEG): container finished" podID="e98edf07-b03d-47bd-bb1a-b942a1e66fc3" containerID="d2b2d9c846c9f62ce0471ed8392344e2eed633a6d6946652cf52b4e532c79a51" exitCode=0 Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.905114 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-54vbh" event={"ID":"e98edf07-b03d-47bd-bb1a-b942a1e66fc3","Type":"ContainerDied","Data":"d2b2d9c846c9f62ce0471ed8392344e2eed633a6d6946652cf52b4e532c79a51"} Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.922664 4826 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ovn-controller-4zwtm-config-cgtwg" event={"ID":"0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005","Type":"ContainerDied","Data":"83a7ca9ae0d825a0033d26407e00694c0f474c674988d224d7803877a07b7421"} Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.922732 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83a7ca9ae0d825a0033d26407e00694c0f474c674988d224d7803877a07b7421" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.922674 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4zwtm-config-cgtwg" Jan 29 07:02:14 crc kubenswrapper[4826]: I0129 07:02:14.925422 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-hbzx2" event={"ID":"d4b5abc4-73e1-4908-a776-0162b26a6d30","Type":"ContainerStarted","Data":"5855845210dd2d450ce3b25763d547efe4e41076adda3d9f604af75724a7443d"} Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.449749 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-4zwtm-config-cgtwg"] Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.455461 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-4zwtm-config-cgtwg"] Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.672354 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8bhcx" Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.680864 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-27drr" Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.689206 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-2f64-account-create-update-2n4t7" Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.700986 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-4cd8-account-create-update-5swv2" Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.785957 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-npv6r" Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.804145 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79bda2ef-7c8b-4970-b194-4f17c6c2c4cd-operator-scripts\") pod \"79bda2ef-7c8b-4970-b194-4f17c6c2c4cd\" (UID: \"79bda2ef-7c8b-4970-b194-4f17c6c2c4cd\") " Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.804224 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/812c49e0-db30-4cfe-9722-6ddb5931a2c0-operator-scripts\") pod \"812c49e0-db30-4cfe-9722-6ddb5931a2c0\" (UID: \"812c49e0-db30-4cfe-9722-6ddb5931a2c0\") " Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.804357 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8r78m\" (UniqueName: \"kubernetes.io/projected/79bda2ef-7c8b-4970-b194-4f17c6c2c4cd-kube-api-access-8r78m\") pod \"79bda2ef-7c8b-4970-b194-4f17c6c2c4cd\" (UID: \"79bda2ef-7c8b-4970-b194-4f17c6c2c4cd\") " Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.804430 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lt6sl\" (UniqueName: \"kubernetes.io/projected/4ddbe0a6-485b-4bde-b08a-8745e551bbf6-kube-api-access-lt6sl\") pod \"4ddbe0a6-485b-4bde-b08a-8745e551bbf6\" (UID: \"4ddbe0a6-485b-4bde-b08a-8745e551bbf6\") " Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.804452 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ddbe0a6-485b-4bde-b08a-8745e551bbf6-operator-scripts\") pod 
\"4ddbe0a6-485b-4bde-b08a-8745e551bbf6\" (UID: \"4ddbe0a6-485b-4bde-b08a-8745e551bbf6\") " Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.804472 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5f8c\" (UniqueName: \"kubernetes.io/projected/812c49e0-db30-4cfe-9722-6ddb5931a2c0-kube-api-access-z5f8c\") pod \"812c49e0-db30-4cfe-9722-6ddb5931a2c0\" (UID: \"812c49e0-db30-4cfe-9722-6ddb5931a2c0\") " Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.804520 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18c89e1f-a128-4289-ad90-49c11f2c89cd-operator-scripts\") pod \"18c89e1f-a128-4289-ad90-49c11f2c89cd\" (UID: \"18c89e1f-a128-4289-ad90-49c11f2c89cd\") " Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.804534 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kz642\" (UniqueName: \"kubernetes.io/projected/18c89e1f-a128-4289-ad90-49c11f2c89cd-kube-api-access-kz642\") pod \"18c89e1f-a128-4289-ad90-49c11f2c89cd\" (UID: \"18c89e1f-a128-4289-ad90-49c11f2c89cd\") " Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.805021 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79bda2ef-7c8b-4970-b194-4f17c6c2c4cd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "79bda2ef-7c8b-4970-b194-4f17c6c2c4cd" (UID: "79bda2ef-7c8b-4970-b194-4f17c6c2c4cd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.806350 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/812c49e0-db30-4cfe-9722-6ddb5931a2c0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "812c49e0-db30-4cfe-9722-6ddb5931a2c0" (UID: "812c49e0-db30-4cfe-9722-6ddb5931a2c0"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.807697 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18c89e1f-a128-4289-ad90-49c11f2c89cd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "18c89e1f-a128-4289-ad90-49c11f2c89cd" (UID: "18c89e1f-a128-4289-ad90-49c11f2c89cd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.808231 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ddbe0a6-485b-4bde-b08a-8745e551bbf6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4ddbe0a6-485b-4bde-b08a-8745e551bbf6" (UID: "4ddbe0a6-485b-4bde-b08a-8745e551bbf6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.812652 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79bda2ef-7c8b-4970-b194-4f17c6c2c4cd-kube-api-access-8r78m" (OuterVolumeSpecName: "kube-api-access-8r78m") pod "79bda2ef-7c8b-4970-b194-4f17c6c2c4cd" (UID: "79bda2ef-7c8b-4970-b194-4f17c6c2c4cd"). InnerVolumeSpecName "kube-api-access-8r78m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.814086 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/812c49e0-db30-4cfe-9722-6ddb5931a2c0-kube-api-access-z5f8c" (OuterVolumeSpecName: "kube-api-access-z5f8c") pod "812c49e0-db30-4cfe-9722-6ddb5931a2c0" (UID: "812c49e0-db30-4cfe-9722-6ddb5931a2c0"). InnerVolumeSpecName "kube-api-access-z5f8c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.814518 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18c89e1f-a128-4289-ad90-49c11f2c89cd-kube-api-access-kz642" (OuterVolumeSpecName: "kube-api-access-kz642") pod "18c89e1f-a128-4289-ad90-49c11f2c89cd" (UID: "18c89e1f-a128-4289-ad90-49c11f2c89cd"). InnerVolumeSpecName "kube-api-access-kz642". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.814627 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ddbe0a6-485b-4bde-b08a-8745e551bbf6-kube-api-access-lt6sl" (OuterVolumeSpecName: "kube-api-access-lt6sl") pod "4ddbe0a6-485b-4bde-b08a-8745e551bbf6" (UID: "4ddbe0a6-485b-4bde-b08a-8745e551bbf6"). InnerVolumeSpecName "kube-api-access-lt6sl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.883667 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-gnxn7"] Jan 29 07:02:15 crc kubenswrapper[4826]: W0129 07:02:15.900391 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4a11db9_4057_4f8b_bd9e_6b92e1c9fd8c.slice/crio-88c6347c00acb0b845ffbd9f4bf5e56ebe6445afc0069d262295b21da7a6a8ce WatchSource:0}: Error finding container 88c6347c00acb0b845ffbd9f4bf5e56ebe6445afc0069d262295b21da7a6a8ce: Status 404 returned error can't find the container with id 88c6347c00acb0b845ffbd9f4bf5e56ebe6445afc0069d262295b21da7a6a8ce Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.905883 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/644e2cae-40d8-4a56-a2d8-941d1b61efa4-operator-scripts\") pod \"644e2cae-40d8-4a56-a2d8-941d1b61efa4\" (UID: 
\"644e2cae-40d8-4a56-a2d8-941d1b61efa4\") " Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.906061 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67xwg\" (UniqueName: \"kubernetes.io/projected/644e2cae-40d8-4a56-a2d8-941d1b61efa4-kube-api-access-67xwg\") pod \"644e2cae-40d8-4a56-a2d8-941d1b61efa4\" (UID: \"644e2cae-40d8-4a56-a2d8-941d1b61efa4\") " Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.906410 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/644e2cae-40d8-4a56-a2d8-941d1b61efa4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "644e2cae-40d8-4a56-a2d8-941d1b61efa4" (UID: "644e2cae-40d8-4a56-a2d8-941d1b61efa4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.906620 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8r78m\" (UniqueName: \"kubernetes.io/projected/79bda2ef-7c8b-4970-b194-4f17c6c2c4cd-kube-api-access-8r78m\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.906637 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lt6sl\" (UniqueName: \"kubernetes.io/projected/4ddbe0a6-485b-4bde-b08a-8745e551bbf6-kube-api-access-lt6sl\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.906647 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ddbe0a6-485b-4bde-b08a-8745e551bbf6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.906655 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5f8c\" (UniqueName: \"kubernetes.io/projected/812c49e0-db30-4cfe-9722-6ddb5931a2c0-kube-api-access-z5f8c\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:15 crc 
kubenswrapper[4826]: I0129 07:02:15.906665 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/644e2cae-40d8-4a56-a2d8-941d1b61efa4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.906673 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18c89e1f-a128-4289-ad90-49c11f2c89cd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.906682 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kz642\" (UniqueName: \"kubernetes.io/projected/18c89e1f-a128-4289-ad90-49c11f2c89cd-kube-api-access-kz642\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.906691 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79bda2ef-7c8b-4970-b194-4f17c6c2c4cd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.906699 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/812c49e0-db30-4cfe-9722-6ddb5931a2c0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.908503 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/644e2cae-40d8-4a56-a2d8-941d1b61efa4-kube-api-access-67xwg" (OuterVolumeSpecName: "kube-api-access-67xwg") pod "644e2cae-40d8-4a56-a2d8-941d1b61efa4" (UID: "644e2cae-40d8-4a56-a2d8-941d1b61efa4"). InnerVolumeSpecName "kube-api-access-67xwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.933781 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-27drr" Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.933841 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-27drr" event={"ID":"4ddbe0a6-485b-4bde-b08a-8745e551bbf6","Type":"ContainerDied","Data":"6e1a2882c5626ffba31ef89d13d55d0d13a847245e1ec1571e0b704ba5a99072"} Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.933899 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e1a2882c5626ffba31ef89d13d55d0d13a847245e1ec1571e0b704ba5a99072" Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.938928 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-npv6r" Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.938949 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-npv6r" event={"ID":"644e2cae-40d8-4a56-a2d8-941d1b61efa4","Type":"ContainerDied","Data":"57b31e2dd74f9eda329d10315d221da86f9b3d1dbd43a528e2bae4f357c2a2c7"} Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.938991 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57b31e2dd74f9eda329d10315d221da86f9b3d1dbd43a528e2bae4f357c2a2c7" Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.940756 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-4cd8-account-create-update-5swv2" event={"ID":"18c89e1f-a128-4289-ad90-49c11f2c89cd","Type":"ContainerDied","Data":"e044727dac87afd658ecbf5db802aaec04f5a76de6dc2e1b6869a63324c7d809"} Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.940795 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e044727dac87afd658ecbf5db802aaec04f5a76de6dc2e1b6869a63324c7d809" Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.940789 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-4cd8-account-create-update-5swv2" Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.942193 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gnxn7" event={"ID":"b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c","Type":"ContainerStarted","Data":"88c6347c00acb0b845ffbd9f4bf5e56ebe6445afc0069d262295b21da7a6a8ce"} Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.943585 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8bhcx" event={"ID":"79bda2ef-7c8b-4970-b194-4f17c6c2c4cd","Type":"ContainerDied","Data":"fdba327a0e6fbf8ea4e81e4e561b33fb04e38b75b74bc3ab38ac647a33546e2d"} Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.943628 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdba327a0e6fbf8ea4e81e4e561b33fb04e38b75b74bc3ab38ac647a33546e2d" Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.943657 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8bhcx" Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.944736 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2f64-account-create-update-2n4t7" event={"ID":"812c49e0-db30-4cfe-9722-6ddb5931a2c0","Type":"ContainerDied","Data":"b761f8cf4cc1bdb2dc355933b100c6d9fb05e5d74e56180e73b3a03ea3f3c400"} Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.944774 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b761f8cf4cc1bdb2dc355933b100c6d9fb05e5d74e56180e73b3a03ea3f3c400" Jan 29 07:02:15 crc kubenswrapper[4826]: I0129 07:02:15.944745 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-2f64-account-create-update-2n4t7" Jan 29 07:02:16 crc kubenswrapper[4826]: I0129 07:02:16.007744 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67xwg\" (UniqueName: \"kubernetes.io/projected/644e2cae-40d8-4a56-a2d8-941d1b61efa4-kube-api-access-67xwg\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:16 crc kubenswrapper[4826]: I0129 07:02:16.318600 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-54vbh" Jan 29 07:02:16 crc kubenswrapper[4826]: I0129 07:02:16.427525 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2845-account-create-update-ll4xp" Jan 29 07:02:16 crc kubenswrapper[4826]: I0129 07:02:16.438362 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e98edf07-b03d-47bd-bb1a-b942a1e66fc3-operator-scripts\") pod \"e98edf07-b03d-47bd-bb1a-b942a1e66fc3\" (UID: \"e98edf07-b03d-47bd-bb1a-b942a1e66fc3\") " Jan 29 07:02:16 crc kubenswrapper[4826]: I0129 07:02:16.438602 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rtct\" (UniqueName: \"kubernetes.io/projected/e98edf07-b03d-47bd-bb1a-b942a1e66fc3-kube-api-access-7rtct\") pod \"e98edf07-b03d-47bd-bb1a-b942a1e66fc3\" (UID: \"e98edf07-b03d-47bd-bb1a-b942a1e66fc3\") " Jan 29 07:02:16 crc kubenswrapper[4826]: I0129 07:02:16.440242 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e98edf07-b03d-47bd-bb1a-b942a1e66fc3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e98edf07-b03d-47bd-bb1a-b942a1e66fc3" (UID: "e98edf07-b03d-47bd-bb1a-b942a1e66fc3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:02:16 crc kubenswrapper[4826]: I0129 07:02:16.454034 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e98edf07-b03d-47bd-bb1a-b942a1e66fc3-kube-api-access-7rtct" (OuterVolumeSpecName: "kube-api-access-7rtct") pod "e98edf07-b03d-47bd-bb1a-b942a1e66fc3" (UID: "e98edf07-b03d-47bd-bb1a-b942a1e66fc3"). InnerVolumeSpecName "kube-api-access-7rtct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:02:16 crc kubenswrapper[4826]: I0129 07:02:16.540360 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjslv\" (UniqueName: \"kubernetes.io/projected/0ae2b787-9c3f-40ca-91e2-10a4257a1df1-kube-api-access-tjslv\") pod \"0ae2b787-9c3f-40ca-91e2-10a4257a1df1\" (UID: \"0ae2b787-9c3f-40ca-91e2-10a4257a1df1\") " Jan 29 07:02:16 crc kubenswrapper[4826]: I0129 07:02:16.540411 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ae2b787-9c3f-40ca-91e2-10a4257a1df1-operator-scripts\") pod \"0ae2b787-9c3f-40ca-91e2-10a4257a1df1\" (UID: \"0ae2b787-9c3f-40ca-91e2-10a4257a1df1\") " Jan 29 07:02:16 crc kubenswrapper[4826]: I0129 07:02:16.540956 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e98edf07-b03d-47bd-bb1a-b942a1e66fc3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:16 crc kubenswrapper[4826]: I0129 07:02:16.540970 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rtct\" (UniqueName: \"kubernetes.io/projected/e98edf07-b03d-47bd-bb1a-b942a1e66fc3-kube-api-access-7rtct\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:16 crc kubenswrapper[4826]: I0129 07:02:16.540986 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/0ae2b787-9c3f-40ca-91e2-10a4257a1df1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0ae2b787-9c3f-40ca-91e2-10a4257a1df1" (UID: "0ae2b787-9c3f-40ca-91e2-10a4257a1df1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:02:16 crc kubenswrapper[4826]: I0129 07:02:16.545227 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ae2b787-9c3f-40ca-91e2-10a4257a1df1-kube-api-access-tjslv" (OuterVolumeSpecName: "kube-api-access-tjslv") pod "0ae2b787-9c3f-40ca-91e2-10a4257a1df1" (UID: "0ae2b787-9c3f-40ca-91e2-10a4257a1df1"). InnerVolumeSpecName "kube-api-access-tjslv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:02:16 crc kubenswrapper[4826]: I0129 07:02:16.644650 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjslv\" (UniqueName: \"kubernetes.io/projected/0ae2b787-9c3f-40ca-91e2-10a4257a1df1-kube-api-access-tjslv\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:16 crc kubenswrapper[4826]: I0129 07:02:16.644716 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ae2b787-9c3f-40ca-91e2-10a4257a1df1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:16 crc kubenswrapper[4826]: I0129 07:02:16.826031 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005" path="/var/lib/kubelet/pods/0d9e4fdf-ad83-4f6e-abbd-55d3a6acf005/volumes" Jan 29 07:02:16 crc kubenswrapper[4826]: I0129 07:02:16.956799 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2845-account-create-update-ll4xp" event={"ID":"0ae2b787-9c3f-40ca-91e2-10a4257a1df1","Type":"ContainerDied","Data":"e204f1c5f5eab37c1c17ed9c5a0ecb51fd47d4b65d7e447a47a1a568406da5c2"} Jan 29 07:02:16 crc kubenswrapper[4826]: I0129 07:02:16.956849 4826 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/neutron-2845-account-create-update-ll4xp" Jan 29 07:02:16 crc kubenswrapper[4826]: I0129 07:02:16.956875 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e204f1c5f5eab37c1c17ed9c5a0ecb51fd47d4b65d7e447a47a1a568406da5c2" Jan 29 07:02:16 crc kubenswrapper[4826]: I0129 07:02:16.961820 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-54vbh" event={"ID":"e98edf07-b03d-47bd-bb1a-b942a1e66fc3","Type":"ContainerDied","Data":"0688b552d3bfbe9c57ece1eb7e97fe6391fc22274c295c2a24c04db232e9f3ac"} Jan 29 07:02:16 crc kubenswrapper[4826]: I0129 07:02:16.961861 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0688b552d3bfbe9c57ece1eb7e97fe6391fc22274c295c2a24c04db232e9f3ac" Jan 29 07:02:16 crc kubenswrapper[4826]: I0129 07:02:16.961898 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-54vbh" Jan 29 07:02:18 crc kubenswrapper[4826]: I0129 07:02:18.885463 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-etc-swift\") pod \"swift-storage-0\" (UID: \"85b51a36-8aa5-46e7-b8ab-a7e672c491d7\") " pod="openstack/swift-storage-0" Jan 29 07:02:18 crc kubenswrapper[4826]: I0129 07:02:18.897043 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-etc-swift\") pod \"swift-storage-0\" (UID: \"85b51a36-8aa5-46e7-b8ab-a7e672c491d7\") " pod="openstack/swift-storage-0" Jan 29 07:02:19 crc kubenswrapper[4826]: I0129 07:02:19.017704 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 29 07:02:20 crc kubenswrapper[4826]: I0129 07:02:20.439537 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 29 07:02:20 crc kubenswrapper[4826]: W0129 07:02:20.446738 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85b51a36_8aa5_46e7_b8ab_a7e672c491d7.slice/crio-c5ffe9f77bd5baddc552c700e4feec61734365d73b6268a0736162bddafa3c78 WatchSource:0}: Error finding container c5ffe9f77bd5baddc552c700e4feec61734365d73b6268a0736162bddafa3c78: Status 404 returned error can't find the container with id c5ffe9f77bd5baddc552c700e4feec61734365d73b6268a0736162bddafa3c78 Jan 29 07:02:21 crc kubenswrapper[4826]: I0129 07:02:21.016993 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"85b51a36-8aa5-46e7-b8ab-a7e672c491d7","Type":"ContainerStarted","Data":"c5ffe9f77bd5baddc552c700e4feec61734365d73b6268a0736162bddafa3c78"} Jan 29 07:02:21 crc kubenswrapper[4826]: I0129 07:02:21.018412 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-hbzx2" event={"ID":"d4b5abc4-73e1-4908-a776-0162b26a6d30","Type":"ContainerStarted","Data":"8a1a13c0fa6aa99b5ae28c30e096d59872f199829e1f0dd92715f6a3e51161b6"} Jan 29 07:02:21 crc kubenswrapper[4826]: I0129 07:02:21.041618 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-hbzx2" podStartSLOduration=2.8877615739999998 podStartE2EDuration="8.041602498s" podCreationTimestamp="2026-01-29 07:02:13 +0000 UTC" firstStartedPulling="2026-01-29 07:02:14.841030962 +0000 UTC m=+1118.702824021" lastFinishedPulling="2026-01-29 07:02:19.994871866 +0000 UTC m=+1123.856664945" observedRunningTime="2026-01-29 07:02:21.03858038 +0000 UTC m=+1124.900373449" watchObservedRunningTime="2026-01-29 07:02:21.041602498 +0000 UTC m=+1124.903395567" Jan 29 
07:02:24 crc kubenswrapper[4826]: I0129 07:02:24.050029 4826 generic.go:334] "Generic (PLEG): container finished" podID="d4b5abc4-73e1-4908-a776-0162b26a6d30" containerID="8a1a13c0fa6aa99b5ae28c30e096d59872f199829e1f0dd92715f6a3e51161b6" exitCode=0
Jan 29 07:02:24 crc kubenswrapper[4826]: I0129 07:02:24.050610 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-hbzx2" event={"ID":"d4b5abc4-73e1-4908-a776-0162b26a6d30","Type":"ContainerDied","Data":"8a1a13c0fa6aa99b5ae28c30e096d59872f199829e1f0dd92715f6a3e51161b6"}
Jan 29 07:02:27 crc kubenswrapper[4826]: I0129 07:02:27.976471 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-hbzx2"
Jan 29 07:02:28 crc kubenswrapper[4826]: I0129 07:02:28.087095 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-hbzx2" event={"ID":"d4b5abc4-73e1-4908-a776-0162b26a6d30","Type":"ContainerDied","Data":"5855845210dd2d450ce3b25763d547efe4e41076adda3d9f604af75724a7443d"}
Jan 29 07:02:28 crc kubenswrapper[4826]: I0129 07:02:28.087141 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5855845210dd2d450ce3b25763d547efe4e41076adda3d9f604af75724a7443d"
Jan 29 07:02:28 crc kubenswrapper[4826]: I0129 07:02:28.087209 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-hbzx2"
Jan 29 07:02:28 crc kubenswrapper[4826]: I0129 07:02:28.154055 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4b5abc4-73e1-4908-a776-0162b26a6d30-config-data\") pod \"d4b5abc4-73e1-4908-a776-0162b26a6d30\" (UID: \"d4b5abc4-73e1-4908-a776-0162b26a6d30\") "
Jan 29 07:02:28 crc kubenswrapper[4826]: I0129 07:02:28.154181 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4b5abc4-73e1-4908-a776-0162b26a6d30-combined-ca-bundle\") pod \"d4b5abc4-73e1-4908-a776-0162b26a6d30\" (UID: \"d4b5abc4-73e1-4908-a776-0162b26a6d30\") "
Jan 29 07:02:28 crc kubenswrapper[4826]: I0129 07:02:28.154320 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mp6fs\" (UniqueName: \"kubernetes.io/projected/d4b5abc4-73e1-4908-a776-0162b26a6d30-kube-api-access-mp6fs\") pod \"d4b5abc4-73e1-4908-a776-0162b26a6d30\" (UID: \"d4b5abc4-73e1-4908-a776-0162b26a6d30\") "
Jan 29 07:02:28 crc kubenswrapper[4826]: I0129 07:02:28.180214 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4b5abc4-73e1-4908-a776-0162b26a6d30-kube-api-access-mp6fs" (OuterVolumeSpecName: "kube-api-access-mp6fs") pod "d4b5abc4-73e1-4908-a776-0162b26a6d30" (UID: "d4b5abc4-73e1-4908-a776-0162b26a6d30"). InnerVolumeSpecName "kube-api-access-mp6fs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 07:02:28 crc kubenswrapper[4826]: I0129 07:02:28.184940 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4b5abc4-73e1-4908-a776-0162b26a6d30-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4b5abc4-73e1-4908-a776-0162b26a6d30" (UID: "d4b5abc4-73e1-4908-a776-0162b26a6d30"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:02:28 crc kubenswrapper[4826]: I0129 07:02:28.237710 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4b5abc4-73e1-4908-a776-0162b26a6d30-config-data" (OuterVolumeSpecName: "config-data") pod "d4b5abc4-73e1-4908-a776-0162b26a6d30" (UID: "d4b5abc4-73e1-4908-a776-0162b26a6d30"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:02:28 crc kubenswrapper[4826]: I0129 07:02:28.256349 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4b5abc4-73e1-4908-a776-0162b26a6d30-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 07:02:28 crc kubenswrapper[4826]: I0129 07:02:28.256407 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4b5abc4-73e1-4908-a776-0162b26a6d30-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 07:02:28 crc kubenswrapper[4826]: I0129 07:02:28.256433 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mp6fs\" (UniqueName: \"kubernetes.io/projected/d4b5abc4-73e1-4908-a776-0162b26a6d30-kube-api-access-mp6fs\") on node \"crc\" DevicePath \"\""
Jan 29 07:02:28 crc kubenswrapper[4826]: I0129 07:02:28.428849 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.095661 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gnxn7" event={"ID":"b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c","Type":"ContainerStarted","Data":"727b87c675e49b86f65ef5c3d398c7e86e56f8ef827c047c69e48782eb0c3773"}
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.099793 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"85b51a36-8aa5-46e7-b8ab-a7e672c491d7","Type":"ContainerStarted","Data":"44307ca66666a832b9005e4a358a9e508afcfb3595db492ed5623df991aeca7a"}
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.099855 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"85b51a36-8aa5-46e7-b8ab-a7e672c491d7","Type":"ContainerStarted","Data":"a26eddb89adfadac24b26e3ff2294f83b0c4d77a9551462e60d51f7c34fff67e"}
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.099870 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"85b51a36-8aa5-46e7-b8ab-a7e672c491d7","Type":"ContainerStarted","Data":"3db554d1986fc0fb902da044bdce99172798bdf604c6dcbc79f7a7a8d3cf339a"}
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.099881 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"85b51a36-8aa5-46e7-b8ab-a7e672c491d7","Type":"ContainerStarted","Data":"6f8be9e7ae68c9cdb46477974e5f9323232d52e97765253a593be96d962264d1"}
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.122395 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-gnxn7" podStartSLOduration=2.692558665 podStartE2EDuration="15.122371766s" podCreationTimestamp="2026-01-29 07:02:14 +0000 UTC" firstStartedPulling="2026-01-29 07:02:15.903867021 +0000 UTC m=+1119.765660090" lastFinishedPulling="2026-01-29 07:02:28.333680122 +0000 UTC m=+1132.195473191" observedRunningTime="2026-01-29 07:02:29.120067576 +0000 UTC m=+1132.981860665" watchObservedRunningTime="2026-01-29 07:02:29.122371766 +0000 UTC m=+1132.984164855"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.246443 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bd87b98bc-m626b"]
Jan 29 07:02:29 crc kubenswrapper[4826]: E0129 07:02:29.247110 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="812c49e0-db30-4cfe-9722-6ddb5931a2c0" containerName="mariadb-account-create-update"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.247130 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="812c49e0-db30-4cfe-9722-6ddb5931a2c0" containerName="mariadb-account-create-update"
Jan 29 07:02:29 crc kubenswrapper[4826]: E0129 07:02:29.247141 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c89e1f-a128-4289-ad90-49c11f2c89cd" containerName="mariadb-account-create-update"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.247147 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c89e1f-a128-4289-ad90-49c11f2c89cd" containerName="mariadb-account-create-update"
Jan 29 07:02:29 crc kubenswrapper[4826]: E0129 07:02:29.247157 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79bda2ef-7c8b-4970-b194-4f17c6c2c4cd" containerName="mariadb-database-create"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.247163 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="79bda2ef-7c8b-4970-b194-4f17c6c2c4cd" containerName="mariadb-database-create"
Jan 29 07:02:29 crc kubenswrapper[4826]: E0129 07:02:29.247177 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4b5abc4-73e1-4908-a776-0162b26a6d30" containerName="keystone-db-sync"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.247183 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b5abc4-73e1-4908-a776-0162b26a6d30" containerName="keystone-db-sync"
Jan 29 07:02:29 crc kubenswrapper[4826]: E0129 07:02:29.247190 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ddbe0a6-485b-4bde-b08a-8745e551bbf6" containerName="mariadb-database-create"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.247196 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ddbe0a6-485b-4bde-b08a-8745e551bbf6" containerName="mariadb-database-create"
Jan 29 07:02:29 crc kubenswrapper[4826]: E0129 07:02:29.247208 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e98edf07-b03d-47bd-bb1a-b942a1e66fc3" containerName="mariadb-account-create-update"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.247215 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="e98edf07-b03d-47bd-bb1a-b942a1e66fc3" containerName="mariadb-account-create-update"
Jan 29 07:02:29 crc kubenswrapper[4826]: E0129 07:02:29.247235 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ae2b787-9c3f-40ca-91e2-10a4257a1df1" containerName="mariadb-account-create-update"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.247241 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ae2b787-9c3f-40ca-91e2-10a4257a1df1" containerName="mariadb-account-create-update"
Jan 29 07:02:29 crc kubenswrapper[4826]: E0129 07:02:29.247252 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="644e2cae-40d8-4a56-a2d8-941d1b61efa4" containerName="mariadb-database-create"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.247257 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="644e2cae-40d8-4a56-a2d8-941d1b61efa4" containerName="mariadb-database-create"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.247421 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="18c89e1f-a128-4289-ad90-49c11f2c89cd" containerName="mariadb-account-create-update"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.247431 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4b5abc4-73e1-4908-a776-0162b26a6d30" containerName="keystone-db-sync"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.247443 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="e98edf07-b03d-47bd-bb1a-b942a1e66fc3" containerName="mariadb-account-create-update"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.247449 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="812c49e0-db30-4cfe-9722-6ddb5931a2c0" containerName="mariadb-account-create-update"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.247460 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="79bda2ef-7c8b-4970-b194-4f17c6c2c4cd" containerName="mariadb-database-create"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.247469 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ddbe0a6-485b-4bde-b08a-8745e551bbf6" containerName="mariadb-database-create"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.247476 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="644e2cae-40d8-4a56-a2d8-941d1b61efa4" containerName="mariadb-database-create"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.247487 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ae2b787-9c3f-40ca-91e2-10a4257a1df1" containerName="mariadb-account-create-update"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.248273 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bd87b98bc-m626b"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.275540 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bd87b98bc-m626b"]
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.312043 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-l5498"]
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.313248 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-l5498"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.315689 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.316866 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.317048 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.318234 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-vphpw"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.318701 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-l5498"]
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.321779 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.378425 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8hpp\" (UniqueName: \"kubernetes.io/projected/ad14a23d-71a9-4348-9e06-61db9b024821-kube-api-access-q8hpp\") pod \"dnsmasq-dns-7bd87b98bc-m626b\" (UID: \"ad14a23d-71a9-4348-9e06-61db9b024821\") " pod="openstack/dnsmasq-dns-7bd87b98bc-m626b"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.378466 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad14a23d-71a9-4348-9e06-61db9b024821-ovsdbserver-sb\") pod \"dnsmasq-dns-7bd87b98bc-m626b\" (UID: \"ad14a23d-71a9-4348-9e06-61db9b024821\") " pod="openstack/dnsmasq-dns-7bd87b98bc-m626b"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.378516 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad14a23d-71a9-4348-9e06-61db9b024821-dns-svc\") pod \"dnsmasq-dns-7bd87b98bc-m626b\" (UID: \"ad14a23d-71a9-4348-9e06-61db9b024821\") " pod="openstack/dnsmasq-dns-7bd87b98bc-m626b"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.378590 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad14a23d-71a9-4348-9e06-61db9b024821-config\") pod \"dnsmasq-dns-7bd87b98bc-m626b\" (UID: \"ad14a23d-71a9-4348-9e06-61db9b024821\") " pod="openstack/dnsmasq-dns-7bd87b98bc-m626b"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.378612 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad14a23d-71a9-4348-9e06-61db9b024821-ovsdbserver-nb\") pod \"dnsmasq-dns-7bd87b98bc-m626b\" (UID: \"ad14a23d-71a9-4348-9e06-61db9b024821\") " pod="openstack/dnsmasq-dns-7bd87b98bc-m626b"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.462525 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-8qxjd"]
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.463658 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-8qxjd"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.467332 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.467665 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-pr242"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.467802 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.470425 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-8qxjd"]
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.476414 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.479095 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.479442 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8hpp\" (UniqueName: \"kubernetes.io/projected/ad14a23d-71a9-4348-9e06-61db9b024821-kube-api-access-q8hpp\") pod \"dnsmasq-dns-7bd87b98bc-m626b\" (UID: \"ad14a23d-71a9-4348-9e06-61db9b024821\") " pod="openstack/dnsmasq-dns-7bd87b98bc-m626b"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.479479 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad14a23d-71a9-4348-9e06-61db9b024821-ovsdbserver-sb\") pod \"dnsmasq-dns-7bd87b98bc-m626b\" (UID: \"ad14a23d-71a9-4348-9e06-61db9b024821\") " pod="openstack/dnsmasq-dns-7bd87b98bc-m626b"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.479516 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/863b254d-c246-4b2c-b687-c6314dae3841-credential-keys\") pod \"keystone-bootstrap-l5498\" (UID: \"863b254d-c246-4b2c-b687-c6314dae3841\") " pod="openstack/keystone-bootstrap-l5498"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.479540 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad14a23d-71a9-4348-9e06-61db9b024821-dns-svc\") pod \"dnsmasq-dns-7bd87b98bc-m626b\" (UID: \"ad14a23d-71a9-4348-9e06-61db9b024821\") " pod="openstack/dnsmasq-dns-7bd87b98bc-m626b"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.479563 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/863b254d-c246-4b2c-b687-c6314dae3841-combined-ca-bundle\") pod \"keystone-bootstrap-l5498\" (UID: \"863b254d-c246-4b2c-b687-c6314dae3841\") " pod="openstack/keystone-bootstrap-l5498"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.479584 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/863b254d-c246-4b2c-b687-c6314dae3841-scripts\") pod \"keystone-bootstrap-l5498\" (UID: \"863b254d-c246-4b2c-b687-c6314dae3841\") " pod="openstack/keystone-bootstrap-l5498"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.479600 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/863b254d-c246-4b2c-b687-c6314dae3841-fernet-keys\") pod \"keystone-bootstrap-l5498\" (UID: \"863b254d-c246-4b2c-b687-c6314dae3841\") " pod="openstack/keystone-bootstrap-l5498"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.479620 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/863b254d-c246-4b2c-b687-c6314dae3841-config-data\") pod \"keystone-bootstrap-l5498\" (UID: \"863b254d-c246-4b2c-b687-c6314dae3841\") " pod="openstack/keystone-bootstrap-l5498"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.479672 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad14a23d-71a9-4348-9e06-61db9b024821-config\") pod \"dnsmasq-dns-7bd87b98bc-m626b\" (UID: \"ad14a23d-71a9-4348-9e06-61db9b024821\") " pod="openstack/dnsmasq-dns-7bd87b98bc-m626b"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.479695 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz7vp\" (UniqueName: \"kubernetes.io/projected/863b254d-c246-4b2c-b687-c6314dae3841-kube-api-access-zz7vp\") pod \"keystone-bootstrap-l5498\" (UID: \"863b254d-c246-4b2c-b687-c6314dae3841\") " pod="openstack/keystone-bootstrap-l5498"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.479715 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad14a23d-71a9-4348-9e06-61db9b024821-ovsdbserver-nb\") pod \"dnsmasq-dns-7bd87b98bc-m626b\" (UID: \"ad14a23d-71a9-4348-9e06-61db9b024821\") " pod="openstack/dnsmasq-dns-7bd87b98bc-m626b"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.480499 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad14a23d-71a9-4348-9e06-61db9b024821-ovsdbserver-nb\") pod \"dnsmasq-dns-7bd87b98bc-m626b\" (UID: \"ad14a23d-71a9-4348-9e06-61db9b024821\") " pod="openstack/dnsmasq-dns-7bd87b98bc-m626b"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.481062 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad14a23d-71a9-4348-9e06-61db9b024821-config\") pod \"dnsmasq-dns-7bd87b98bc-m626b\" (UID: \"ad14a23d-71a9-4348-9e06-61db9b024821\") " pod="openstack/dnsmasq-dns-7bd87b98bc-m626b"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.481120 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad14a23d-71a9-4348-9e06-61db9b024821-dns-svc\") pod \"dnsmasq-dns-7bd87b98bc-m626b\" (UID: \"ad14a23d-71a9-4348-9e06-61db9b024821\") " pod="openstack/dnsmasq-dns-7bd87b98bc-m626b"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.480928 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad14a23d-71a9-4348-9e06-61db9b024821-ovsdbserver-sb\") pod \"dnsmasq-dns-7bd87b98bc-m626b\" (UID: \"ad14a23d-71a9-4348-9e06-61db9b024821\") " pod="openstack/dnsmasq-dns-7bd87b98bc-m626b"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.483475 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.483914 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.501255 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.535139 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8hpp\" (UniqueName: \"kubernetes.io/projected/ad14a23d-71a9-4348-9e06-61db9b024821-kube-api-access-q8hpp\") pod \"dnsmasq-dns-7bd87b98bc-m626b\" (UID: \"ad14a23d-71a9-4348-9e06-61db9b024821\") " pod="openstack/dnsmasq-dns-7bd87b98bc-m626b"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.568191 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-vgqpr"]
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.569244 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-vgqpr"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.571676 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bd87b98bc-m626b"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.573702 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.573966 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-f6t52"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.579099 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-sqz8v"]
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.582102 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b459d81c-6b13-40f7-a460-524b0082a05d-config-data\") pod \"ceilometer-0\" (UID: \"b459d81c-6b13-40f7-a460-524b0082a05d\") " pod="openstack/ceilometer-0"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.582140 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a1157a3-fc93-4d73-8200-b55bfa626a09-scripts\") pod \"cinder-db-sync-8qxjd\" (UID: \"4a1157a3-fc93-4d73-8200-b55bfa626a09\") " pod="openstack/cinder-db-sync-8qxjd"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.582189 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b459d81c-6b13-40f7-a460-524b0082a05d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b459d81c-6b13-40f7-a460-524b0082a05d\") " pod="openstack/ceilometer-0"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.582206 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4a1157a3-fc93-4d73-8200-b55bfa626a09-db-sync-config-data\") pod \"cinder-db-sync-8qxjd\" (UID: \"4a1157a3-fc93-4d73-8200-b55bfa626a09\") " pod="openstack/cinder-db-sync-8qxjd"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.582242 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b459d81c-6b13-40f7-a460-524b0082a05d-log-httpd\") pod \"ceilometer-0\" (UID: \"b459d81c-6b13-40f7-a460-524b0082a05d\") " pod="openstack/ceilometer-0"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.582274 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a1157a3-fc93-4d73-8200-b55bfa626a09-config-data\") pod \"cinder-db-sync-8qxjd\" (UID: \"4a1157a3-fc93-4d73-8200-b55bfa626a09\") " pod="openstack/cinder-db-sync-8qxjd"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.582289 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a1157a3-fc93-4d73-8200-b55bfa626a09-combined-ca-bundle\") pod \"cinder-db-sync-8qxjd\" (UID: \"4a1157a3-fc93-4d73-8200-b55bfa626a09\") " pod="openstack/cinder-db-sync-8qxjd"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.582335 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz7vp\" (UniqueName: \"kubernetes.io/projected/863b254d-c246-4b2c-b687-c6314dae3841-kube-api-access-zz7vp\") pod \"keystone-bootstrap-l5498\" (UID: \"863b254d-c246-4b2c-b687-c6314dae3841\") " pod="openstack/keystone-bootstrap-l5498"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.582365 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txm99\" (UniqueName: \"kubernetes.io/projected/b459d81c-6b13-40f7-a460-524b0082a05d-kube-api-access-txm99\") pod \"ceilometer-0\" (UID: \"b459d81c-6b13-40f7-a460-524b0082a05d\") " pod="openstack/ceilometer-0"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.582397 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4a1157a3-fc93-4d73-8200-b55bfa626a09-etc-machine-id\") pod \"cinder-db-sync-8qxjd\" (UID: \"4a1157a3-fc93-4d73-8200-b55bfa626a09\") " pod="openstack/cinder-db-sync-8qxjd"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.582423 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxj8d\" (UniqueName: \"kubernetes.io/projected/4a1157a3-fc93-4d73-8200-b55bfa626a09-kube-api-access-xxj8d\") pod \"cinder-db-sync-8qxjd\" (UID: \"4a1157a3-fc93-4d73-8200-b55bfa626a09\") " pod="openstack/cinder-db-sync-8qxjd"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.582478 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/863b254d-c246-4b2c-b687-c6314dae3841-credential-keys\") pod \"keystone-bootstrap-l5498\" (UID: \"863b254d-c246-4b2c-b687-c6314dae3841\") " pod="openstack/keystone-bootstrap-l5498"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.582503 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b459d81c-6b13-40f7-a460-524b0082a05d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b459d81c-6b13-40f7-a460-524b0082a05d\") " pod="openstack/ceilometer-0"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.582563 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/863b254d-c246-4b2c-b687-c6314dae3841-combined-ca-bundle\") pod \"keystone-bootstrap-l5498\" (UID: \"863b254d-c246-4b2c-b687-c6314dae3841\") " pod="openstack/keystone-bootstrap-l5498"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.582583 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b459d81c-6b13-40f7-a460-524b0082a05d-run-httpd\") pod \"ceilometer-0\" (UID: \"b459d81c-6b13-40f7-a460-524b0082a05d\") " pod="openstack/ceilometer-0"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.582602 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/863b254d-c246-4b2c-b687-c6314dae3841-scripts\") pod \"keystone-bootstrap-l5498\" (UID: \"863b254d-c246-4b2c-b687-c6314dae3841\") " pod="openstack/keystone-bootstrap-l5498"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.582635 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/863b254d-c246-4b2c-b687-c6314dae3841-fernet-keys\") pod \"keystone-bootstrap-l5498\" (UID: \"863b254d-c246-4b2c-b687-c6314dae3841\") " pod="openstack/keystone-bootstrap-l5498"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.582661 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/863b254d-c246-4b2c-b687-c6314dae3841-config-data\") pod \"keystone-bootstrap-l5498\" (UID: \"863b254d-c246-4b2c-b687-c6314dae3841\") " pod="openstack/keystone-bootstrap-l5498"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.582678 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b459d81c-6b13-40f7-a460-524b0082a05d-scripts\") pod \"ceilometer-0\" (UID: \"b459d81c-6b13-40f7-a460-524b0082a05d\") " pod="openstack/ceilometer-0"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.585774 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-sqz8v"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.598648 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.599022 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-cggbm"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.599144 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.617164 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-vgqpr"]
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.640224 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/863b254d-c246-4b2c-b687-c6314dae3841-fernet-keys\") pod \"keystone-bootstrap-l5498\" (UID: \"863b254d-c246-4b2c-b687-c6314dae3841\") " pod="openstack/keystone-bootstrap-l5498"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.645054 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/863b254d-c246-4b2c-b687-c6314dae3841-config-data\") pod \"keystone-bootstrap-l5498\" (UID: \"863b254d-c246-4b2c-b687-c6314dae3841\") " pod="openstack/keystone-bootstrap-l5498"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.645754 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/863b254d-c246-4b2c-b687-c6314dae3841-scripts\") pod \"keystone-bootstrap-l5498\" (UID: \"863b254d-c246-4b2c-b687-c6314dae3841\") " pod="openstack/keystone-bootstrap-l5498"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.650039 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/863b254d-c246-4b2c-b687-c6314dae3841-combined-ca-bundle\") pod \"keystone-bootstrap-l5498\" (UID: \"863b254d-c246-4b2c-b687-c6314dae3841\") " pod="openstack/keystone-bootstrap-l5498"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.651399 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/863b254d-c246-4b2c-b687-c6314dae3841-credential-keys\") pod \"keystone-bootstrap-l5498\" (UID: \"863b254d-c246-4b2c-b687-c6314dae3841\") " pod="openstack/keystone-bootstrap-l5498"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.651409 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-sqz8v"]
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.652421 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz7vp\" (UniqueName: \"kubernetes.io/projected/863b254d-c246-4b2c-b687-c6314dae3841-kube-api-access-zz7vp\") pod \"keystone-bootstrap-l5498\" (UID: \"863b254d-c246-4b2c-b687-c6314dae3841\") " pod="openstack/keystone-bootstrap-l5498"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.685173 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bd87b98bc-m626b"]
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.685489 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b459d81c-6b13-40f7-a460-524b0082a05d-config-data\") pod \"ceilometer-0\" (UID: \"b459d81c-6b13-40f7-a460-524b0082a05d\") " pod="openstack/ceilometer-0"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.685525 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a1157a3-fc93-4d73-8200-b55bfa626a09-scripts\") pod \"cinder-db-sync-8qxjd\" (UID: \"4a1157a3-fc93-4d73-8200-b55bfa626a09\") " pod="openstack/cinder-db-sync-8qxjd"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.685557 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b459d81c-6b13-40f7-a460-524b0082a05d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b459d81c-6b13-40f7-a460-524b0082a05d\") " pod="openstack/ceilometer-0"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.685577 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4a1157a3-fc93-4d73-8200-b55bfa626a09-db-sync-config-data\") pod \"cinder-db-sync-8qxjd\" (UID: \"4a1157a3-fc93-4d73-8200-b55bfa626a09\") " pod="openstack/cinder-db-sync-8qxjd"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.685594 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b459d81c-6b13-40f7-a460-524b0082a05d-log-httpd\") pod \"ceilometer-0\" (UID: \"b459d81c-6b13-40f7-a460-524b0082a05d\") " pod="openstack/ceilometer-0"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.685622 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a1157a3-fc93-4d73-8200-b55bfa626a09-config-data\") pod \"cinder-db-sync-8qxjd\" (UID: \"4a1157a3-fc93-4d73-8200-b55bfa626a09\") " pod="openstack/cinder-db-sync-8qxjd"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.685637 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a1157a3-fc93-4d73-8200-b55bfa626a09-combined-ca-bundle\") pod \"cinder-db-sync-8qxjd\" (UID: \"4a1157a3-fc93-4d73-8200-b55bfa626a09\") " pod="openstack/cinder-db-sync-8qxjd"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.685669 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txm99\" (UniqueName: \"kubernetes.io/projected/b459d81c-6b13-40f7-a460-524b0082a05d-kube-api-access-txm99\") pod \"ceilometer-0\" (UID: \"b459d81c-6b13-40f7-a460-524b0082a05d\") " pod="openstack/ceilometer-0"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.685687 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4a1157a3-fc93-4d73-8200-b55bfa626a09-etc-machine-id\") pod \"cinder-db-sync-8qxjd\" (UID: \"4a1157a3-fc93-4d73-8200-b55bfa626a09\") " pod="openstack/cinder-db-sync-8qxjd"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.685710 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff69218e-ec33-4818-86be-a9ff92d3f40d-combined-ca-bundle\") pod \"barbican-db-sync-vgqpr\" (UID: \"ff69218e-ec33-4818-86be-a9ff92d3f40d\") " pod="openstack/barbican-db-sync-vgqpr"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.685749 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxj8d\" (UniqueName: \"kubernetes.io/projected/4a1157a3-fc93-4d73-8200-b55bfa626a09-kube-api-access-xxj8d\") pod \"cinder-db-sync-8qxjd\" (UID: \"4a1157a3-fc93-4d73-8200-b55bfa626a09\") " pod="openstack/cinder-db-sync-8qxjd"
Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.685777 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ff69218e-ec33-4818-86be-a9ff92d3f40d-db-sync-config-data\") pod \"barbican-db-sync-vgqpr\" (UID: \"ff69218e-ec33-4818-86be-a9ff92d3f40d\") " pod="openstack/barbican-db-sync-vgqpr"
Jan 29 07:02:29 crc
kubenswrapper[4826]: I0129 07:02:29.685811 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b2hc\" (UniqueName: \"kubernetes.io/projected/5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06-kube-api-access-2b2hc\") pod \"neutron-db-sync-sqz8v\" (UID: \"5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06\") " pod="openstack/neutron-db-sync-sqz8v" Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.685830 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b459d81c-6b13-40f7-a460-524b0082a05d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b459d81c-6b13-40f7-a460-524b0082a05d\") " pod="openstack/ceilometer-0" Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.685849 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06-combined-ca-bundle\") pod \"neutron-db-sync-sqz8v\" (UID: \"5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06\") " pod="openstack/neutron-db-sync-sqz8v" Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.685872 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwxms\" (UniqueName: \"kubernetes.io/projected/ff69218e-ec33-4818-86be-a9ff92d3f40d-kube-api-access-rwxms\") pod \"barbican-db-sync-vgqpr\" (UID: \"ff69218e-ec33-4818-86be-a9ff92d3f40d\") " pod="openstack/barbican-db-sync-vgqpr" Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.685889 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b459d81c-6b13-40f7-a460-524b0082a05d-run-httpd\") pod \"ceilometer-0\" (UID: \"b459d81c-6b13-40f7-a460-524b0082a05d\") " pod="openstack/ceilometer-0" Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.685916 4826 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b459d81c-6b13-40f7-a460-524b0082a05d-scripts\") pod \"ceilometer-0\" (UID: \"b459d81c-6b13-40f7-a460-524b0082a05d\") " pod="openstack/ceilometer-0" Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.685934 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06-config\") pod \"neutron-db-sync-sqz8v\" (UID: \"5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06\") " pod="openstack/neutron-db-sync-sqz8v" Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.697017 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a1157a3-fc93-4d73-8200-b55bfa626a09-scripts\") pod \"cinder-db-sync-8qxjd\" (UID: \"4a1157a3-fc93-4d73-8200-b55bfa626a09\") " pod="openstack/cinder-db-sync-8qxjd" Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.698743 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b459d81c-6b13-40f7-a460-524b0082a05d-config-data\") pod \"ceilometer-0\" (UID: \"b459d81c-6b13-40f7-a460-524b0082a05d\") " pod="openstack/ceilometer-0" Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.701572 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b459d81c-6b13-40f7-a460-524b0082a05d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b459d81c-6b13-40f7-a460-524b0082a05d\") " pod="openstack/ceilometer-0" Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.702246 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b459d81c-6b13-40f7-a460-524b0082a05d-run-httpd\") pod \"ceilometer-0\" (UID: \"b459d81c-6b13-40f7-a460-524b0082a05d\") " 
pod="openstack/ceilometer-0" Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.703965 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b459d81c-6b13-40f7-a460-524b0082a05d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b459d81c-6b13-40f7-a460-524b0082a05d\") " pod="openstack/ceilometer-0" Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.704027 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4a1157a3-fc93-4d73-8200-b55bfa626a09-etc-machine-id\") pod \"cinder-db-sync-8qxjd\" (UID: \"4a1157a3-fc93-4d73-8200-b55bfa626a09\") " pod="openstack/cinder-db-sync-8qxjd" Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.704276 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b459d81c-6b13-40f7-a460-524b0082a05d-log-httpd\") pod \"ceilometer-0\" (UID: \"b459d81c-6b13-40f7-a460-524b0082a05d\") " pod="openstack/ceilometer-0" Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.712213 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a1157a3-fc93-4d73-8200-b55bfa626a09-config-data\") pod \"cinder-db-sync-8qxjd\" (UID: \"4a1157a3-fc93-4d73-8200-b55bfa626a09\") " pod="openstack/cinder-db-sync-8qxjd" Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.716913 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b459d81c-6b13-40f7-a460-524b0082a05d-scripts\") pod \"ceilometer-0\" (UID: \"b459d81c-6b13-40f7-a460-524b0082a05d\") " pod="openstack/ceilometer-0" Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.723597 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4a1157a3-fc93-4d73-8200-b55bfa626a09-combined-ca-bundle\") pod \"cinder-db-sync-8qxjd\" (UID: \"4a1157a3-fc93-4d73-8200-b55bfa626a09\") " pod="openstack/cinder-db-sync-8qxjd" Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.736999 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4a1157a3-fc93-4d73-8200-b55bfa626a09-db-sync-config-data\") pod \"cinder-db-sync-8qxjd\" (UID: \"4a1157a3-fc93-4d73-8200-b55bfa626a09\") " pod="openstack/cinder-db-sync-8qxjd" Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.788908 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b2hc\" (UniqueName: \"kubernetes.io/projected/5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06-kube-api-access-2b2hc\") pod \"neutron-db-sync-sqz8v\" (UID: \"5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06\") " pod="openstack/neutron-db-sync-sqz8v" Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.788967 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06-combined-ca-bundle\") pod \"neutron-db-sync-sqz8v\" (UID: \"5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06\") " pod="openstack/neutron-db-sync-sqz8v" Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.788997 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwxms\" (UniqueName: \"kubernetes.io/projected/ff69218e-ec33-4818-86be-a9ff92d3f40d-kube-api-access-rwxms\") pod \"barbican-db-sync-vgqpr\" (UID: \"ff69218e-ec33-4818-86be-a9ff92d3f40d\") " pod="openstack/barbican-db-sync-vgqpr" Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.789037 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06-config\") pod \"neutron-db-sync-sqz8v\" 
(UID: \"5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06\") " pod="openstack/neutron-db-sync-sqz8v" Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.789128 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff69218e-ec33-4818-86be-a9ff92d3f40d-combined-ca-bundle\") pod \"barbican-db-sync-vgqpr\" (UID: \"ff69218e-ec33-4818-86be-a9ff92d3f40d\") " pod="openstack/barbican-db-sync-vgqpr" Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.789176 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ff69218e-ec33-4818-86be-a9ff92d3f40d-db-sync-config-data\") pod \"barbican-db-sync-vgqpr\" (UID: \"ff69218e-ec33-4818-86be-a9ff92d3f40d\") " pod="openstack/barbican-db-sync-vgqpr" Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.808236 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txm99\" (UniqueName: \"kubernetes.io/projected/b459d81c-6b13-40f7-a460-524b0082a05d-kube-api-access-txm99\") pod \"ceilometer-0\" (UID: \"b459d81c-6b13-40f7-a460-524b0082a05d\") " pod="openstack/ceilometer-0" Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.809020 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxj8d\" (UniqueName: \"kubernetes.io/projected/4a1157a3-fc93-4d73-8200-b55bfa626a09-kube-api-access-xxj8d\") pod \"cinder-db-sync-8qxjd\" (UID: \"4a1157a3-fc93-4d73-8200-b55bfa626a09\") " pod="openstack/cinder-db-sync-8qxjd" Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.809820 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ff69218e-ec33-4818-86be-a9ff92d3f40d-db-sync-config-data\") pod \"barbican-db-sync-vgqpr\" (UID: \"ff69218e-ec33-4818-86be-a9ff92d3f40d\") " pod="openstack/barbican-db-sync-vgqpr" Jan 29 07:02:29 crc 
kubenswrapper[4826]: I0129 07:02:29.811071 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff69218e-ec33-4818-86be-a9ff92d3f40d-combined-ca-bundle\") pod \"barbican-db-sync-vgqpr\" (UID: \"ff69218e-ec33-4818-86be-a9ff92d3f40d\") " pod="openstack/barbican-db-sync-vgqpr" Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.811930 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06-config\") pod \"neutron-db-sync-sqz8v\" (UID: \"5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06\") " pod="openstack/neutron-db-sync-sqz8v" Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.812139 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06-combined-ca-bundle\") pod \"neutron-db-sync-sqz8v\" (UID: \"5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06\") " pod="openstack/neutron-db-sync-sqz8v" Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.830468 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b2hc\" (UniqueName: \"kubernetes.io/projected/5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06-kube-api-access-2b2hc\") pod \"neutron-db-sync-sqz8v\" (UID: \"5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06\") " pod="openstack/neutron-db-sync-sqz8v" Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.842441 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwxms\" (UniqueName: \"kubernetes.io/projected/ff69218e-ec33-4818-86be-a9ff92d3f40d-kube-api-access-rwxms\") pod \"barbican-db-sync-vgqpr\" (UID: \"ff69218e-ec33-4818-86be-a9ff92d3f40d\") " pod="openstack/barbican-db-sync-vgqpr" Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.842217 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-lbrr5"] Jan 
29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.889317 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-vgqpr" Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.910074 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b56f9767-4458r"] Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.910777 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-lbrr5" Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.923074 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b56f9767-4458r" Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.942411 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-l5498" Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.945219 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.945840 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-ctkq4" Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.946146 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.979897 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-sqz8v" Jan 29 07:02:29 crc kubenswrapper[4826]: I0129 07:02:29.983875 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-lbrr5"] Jan 29 07:02:30 crc kubenswrapper[4826]: I0129 07:02:30.003250 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3ec8717-30a1-46a8-9224-4802c2b1c3e6-combined-ca-bundle\") pod \"placement-db-sync-lbrr5\" (UID: \"e3ec8717-30a1-46a8-9224-4802c2b1c3e6\") " pod="openstack/placement-db-sync-lbrr5" Jan 29 07:02:30 crc kubenswrapper[4826]: I0129 07:02:30.003351 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3ec8717-30a1-46a8-9224-4802c2b1c3e6-config-data\") pod \"placement-db-sync-lbrr5\" (UID: \"e3ec8717-30a1-46a8-9224-4802c2b1c3e6\") " pod="openstack/placement-db-sync-lbrr5" Jan 29 07:02:30 crc kubenswrapper[4826]: I0129 07:02:30.003381 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d89e9402-7e3d-42c5-9f2a-b219113f9b2f-ovsdbserver-sb\") pod \"dnsmasq-dns-8b56f9767-4458r\" (UID: \"d89e9402-7e3d-42c5-9f2a-b219113f9b2f\") " pod="openstack/dnsmasq-dns-8b56f9767-4458r" Jan 29 07:02:30 crc kubenswrapper[4826]: I0129 07:02:30.003416 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d89e9402-7e3d-42c5-9f2a-b219113f9b2f-dns-svc\") pod \"dnsmasq-dns-8b56f9767-4458r\" (UID: \"d89e9402-7e3d-42c5-9f2a-b219113f9b2f\") " pod="openstack/dnsmasq-dns-8b56f9767-4458r" Jan 29 07:02:30 crc kubenswrapper[4826]: I0129 07:02:30.003480 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-cfnzn\" (UniqueName: \"kubernetes.io/projected/d89e9402-7e3d-42c5-9f2a-b219113f9b2f-kube-api-access-cfnzn\") pod \"dnsmasq-dns-8b56f9767-4458r\" (UID: \"d89e9402-7e3d-42c5-9f2a-b219113f9b2f\") " pod="openstack/dnsmasq-dns-8b56f9767-4458r" Jan 29 07:02:30 crc kubenswrapper[4826]: I0129 07:02:30.003608 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcmvz\" (UniqueName: \"kubernetes.io/projected/e3ec8717-30a1-46a8-9224-4802c2b1c3e6-kube-api-access-qcmvz\") pod \"placement-db-sync-lbrr5\" (UID: \"e3ec8717-30a1-46a8-9224-4802c2b1c3e6\") " pod="openstack/placement-db-sync-lbrr5" Jan 29 07:02:30 crc kubenswrapper[4826]: I0129 07:02:30.004340 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3ec8717-30a1-46a8-9224-4802c2b1c3e6-logs\") pod \"placement-db-sync-lbrr5\" (UID: \"e3ec8717-30a1-46a8-9224-4802c2b1c3e6\") " pod="openstack/placement-db-sync-lbrr5" Jan 29 07:02:30 crc kubenswrapper[4826]: I0129 07:02:30.004371 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d89e9402-7e3d-42c5-9f2a-b219113f9b2f-config\") pod \"dnsmasq-dns-8b56f9767-4458r\" (UID: \"d89e9402-7e3d-42c5-9f2a-b219113f9b2f\") " pod="openstack/dnsmasq-dns-8b56f9767-4458r" Jan 29 07:02:30 crc kubenswrapper[4826]: I0129 07:02:30.004668 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d89e9402-7e3d-42c5-9f2a-b219113f9b2f-ovsdbserver-nb\") pod \"dnsmasq-dns-8b56f9767-4458r\" (UID: \"d89e9402-7e3d-42c5-9f2a-b219113f9b2f\") " pod="openstack/dnsmasq-dns-8b56f9767-4458r" Jan 29 07:02:30 crc kubenswrapper[4826]: I0129 07:02:30.004702 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3ec8717-30a1-46a8-9224-4802c2b1c3e6-scripts\") pod \"placement-db-sync-lbrr5\" (UID: \"e3ec8717-30a1-46a8-9224-4802c2b1c3e6\") " pod="openstack/placement-db-sync-lbrr5" Jan 29 07:02:30 crc kubenswrapper[4826]: I0129 07:02:30.041728 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b56f9767-4458r"] Jan 29 07:02:30 crc kubenswrapper[4826]: I0129 07:02:30.078539 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-8qxjd" Jan 29 07:02:30 crc kubenswrapper[4826]: I0129 07:02:30.094923 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 07:02:30 crc kubenswrapper[4826]: I0129 07:02:30.109890 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3ec8717-30a1-46a8-9224-4802c2b1c3e6-logs\") pod \"placement-db-sync-lbrr5\" (UID: \"e3ec8717-30a1-46a8-9224-4802c2b1c3e6\") " pod="openstack/placement-db-sync-lbrr5" Jan 29 07:02:30 crc kubenswrapper[4826]: I0129 07:02:30.109924 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d89e9402-7e3d-42c5-9f2a-b219113f9b2f-config\") pod \"dnsmasq-dns-8b56f9767-4458r\" (UID: \"d89e9402-7e3d-42c5-9f2a-b219113f9b2f\") " pod="openstack/dnsmasq-dns-8b56f9767-4458r" Jan 29 07:02:30 crc kubenswrapper[4826]: I0129 07:02:30.109953 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d89e9402-7e3d-42c5-9f2a-b219113f9b2f-ovsdbserver-nb\") pod \"dnsmasq-dns-8b56f9767-4458r\" (UID: \"d89e9402-7e3d-42c5-9f2a-b219113f9b2f\") " pod="openstack/dnsmasq-dns-8b56f9767-4458r" Jan 29 07:02:30 crc kubenswrapper[4826]: I0129 07:02:30.109971 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/e3ec8717-30a1-46a8-9224-4802c2b1c3e6-scripts\") pod \"placement-db-sync-lbrr5\" (UID: \"e3ec8717-30a1-46a8-9224-4802c2b1c3e6\") " pod="openstack/placement-db-sync-lbrr5" Jan 29 07:02:30 crc kubenswrapper[4826]: I0129 07:02:30.110023 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3ec8717-30a1-46a8-9224-4802c2b1c3e6-combined-ca-bundle\") pod \"placement-db-sync-lbrr5\" (UID: \"e3ec8717-30a1-46a8-9224-4802c2b1c3e6\") " pod="openstack/placement-db-sync-lbrr5" Jan 29 07:02:30 crc kubenswrapper[4826]: I0129 07:02:30.110052 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3ec8717-30a1-46a8-9224-4802c2b1c3e6-config-data\") pod \"placement-db-sync-lbrr5\" (UID: \"e3ec8717-30a1-46a8-9224-4802c2b1c3e6\") " pod="openstack/placement-db-sync-lbrr5" Jan 29 07:02:30 crc kubenswrapper[4826]: I0129 07:02:30.110072 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d89e9402-7e3d-42c5-9f2a-b219113f9b2f-ovsdbserver-sb\") pod \"dnsmasq-dns-8b56f9767-4458r\" (UID: \"d89e9402-7e3d-42c5-9f2a-b219113f9b2f\") " pod="openstack/dnsmasq-dns-8b56f9767-4458r" Jan 29 07:02:30 crc kubenswrapper[4826]: I0129 07:02:30.110092 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d89e9402-7e3d-42c5-9f2a-b219113f9b2f-dns-svc\") pod \"dnsmasq-dns-8b56f9767-4458r\" (UID: \"d89e9402-7e3d-42c5-9f2a-b219113f9b2f\") " pod="openstack/dnsmasq-dns-8b56f9767-4458r" Jan 29 07:02:30 crc kubenswrapper[4826]: I0129 07:02:30.110125 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfnzn\" (UniqueName: 
\"kubernetes.io/projected/d89e9402-7e3d-42c5-9f2a-b219113f9b2f-kube-api-access-cfnzn\") pod \"dnsmasq-dns-8b56f9767-4458r\" (UID: \"d89e9402-7e3d-42c5-9f2a-b219113f9b2f\") " pod="openstack/dnsmasq-dns-8b56f9767-4458r" Jan 29 07:02:30 crc kubenswrapper[4826]: I0129 07:02:30.110184 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcmvz\" (UniqueName: \"kubernetes.io/projected/e3ec8717-30a1-46a8-9224-4802c2b1c3e6-kube-api-access-qcmvz\") pod \"placement-db-sync-lbrr5\" (UID: \"e3ec8717-30a1-46a8-9224-4802c2b1c3e6\") " pod="openstack/placement-db-sync-lbrr5" Jan 29 07:02:30 crc kubenswrapper[4826]: I0129 07:02:30.111497 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d89e9402-7e3d-42c5-9f2a-b219113f9b2f-config\") pod \"dnsmasq-dns-8b56f9767-4458r\" (UID: \"d89e9402-7e3d-42c5-9f2a-b219113f9b2f\") " pod="openstack/dnsmasq-dns-8b56f9767-4458r" Jan 29 07:02:30 crc kubenswrapper[4826]: I0129 07:02:30.111927 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3ec8717-30a1-46a8-9224-4802c2b1c3e6-logs\") pod \"placement-db-sync-lbrr5\" (UID: \"e3ec8717-30a1-46a8-9224-4802c2b1c3e6\") " pod="openstack/placement-db-sync-lbrr5" Jan 29 07:02:30 crc kubenswrapper[4826]: I0129 07:02:30.112056 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d89e9402-7e3d-42c5-9f2a-b219113f9b2f-ovsdbserver-sb\") pod \"dnsmasq-dns-8b56f9767-4458r\" (UID: \"d89e9402-7e3d-42c5-9f2a-b219113f9b2f\") " pod="openstack/dnsmasq-dns-8b56f9767-4458r" Jan 29 07:02:30 crc kubenswrapper[4826]: I0129 07:02:30.112393 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d89e9402-7e3d-42c5-9f2a-b219113f9b2f-dns-svc\") pod \"dnsmasq-dns-8b56f9767-4458r\" (UID: 
\"d89e9402-7e3d-42c5-9f2a-b219113f9b2f\") " pod="openstack/dnsmasq-dns-8b56f9767-4458r" Jan 29 07:02:30 crc kubenswrapper[4826]: I0129 07:02:30.121045 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d89e9402-7e3d-42c5-9f2a-b219113f9b2f-ovsdbserver-nb\") pod \"dnsmasq-dns-8b56f9767-4458r\" (UID: \"d89e9402-7e3d-42c5-9f2a-b219113f9b2f\") " pod="openstack/dnsmasq-dns-8b56f9767-4458r" Jan 29 07:02:30 crc kubenswrapper[4826]: I0129 07:02:30.122151 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3ec8717-30a1-46a8-9224-4802c2b1c3e6-scripts\") pod \"placement-db-sync-lbrr5\" (UID: \"e3ec8717-30a1-46a8-9224-4802c2b1c3e6\") " pod="openstack/placement-db-sync-lbrr5" Jan 29 07:02:30 crc kubenswrapper[4826]: I0129 07:02:30.130200 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3ec8717-30a1-46a8-9224-4802c2b1c3e6-config-data\") pod \"placement-db-sync-lbrr5\" (UID: \"e3ec8717-30a1-46a8-9224-4802c2b1c3e6\") " pod="openstack/placement-db-sync-lbrr5" Jan 29 07:02:30 crc kubenswrapper[4826]: I0129 07:02:30.146095 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfnzn\" (UniqueName: \"kubernetes.io/projected/d89e9402-7e3d-42c5-9f2a-b219113f9b2f-kube-api-access-cfnzn\") pod \"dnsmasq-dns-8b56f9767-4458r\" (UID: \"d89e9402-7e3d-42c5-9f2a-b219113f9b2f\") " pod="openstack/dnsmasq-dns-8b56f9767-4458r" Jan 29 07:02:30 crc kubenswrapper[4826]: I0129 07:02:30.148233 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcmvz\" (UniqueName: \"kubernetes.io/projected/e3ec8717-30a1-46a8-9224-4802c2b1c3e6-kube-api-access-qcmvz\") pod \"placement-db-sync-lbrr5\" (UID: \"e3ec8717-30a1-46a8-9224-4802c2b1c3e6\") " pod="openstack/placement-db-sync-lbrr5" Jan 29 07:02:30 crc 
kubenswrapper[4826]: I0129 07:02:30.153205 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3ec8717-30a1-46a8-9224-4802c2b1c3e6-combined-ca-bundle\") pod \"placement-db-sync-lbrr5\" (UID: \"e3ec8717-30a1-46a8-9224-4802c2b1c3e6\") " pod="openstack/placement-db-sync-lbrr5" Jan 29 07:02:30 crc kubenswrapper[4826]: I0129 07:02:30.385582 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-lbrr5" Jan 29 07:02:30 crc kubenswrapper[4826]: I0129 07:02:30.393839 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bd87b98bc-m626b"] Jan 29 07:02:30 crc kubenswrapper[4826]: I0129 07:02:30.410552 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b56f9767-4458r" Jan 29 07:02:30 crc kubenswrapper[4826]: W0129 07:02:30.447290 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad14a23d_71a9_4348_9e06_61db9b024821.slice/crio-32221b2b22f27820444f88b1c4d786966450cc34e86964c6b693b53b6785cd38 WatchSource:0}: Error finding container 32221b2b22f27820444f88b1c4d786966450cc34e86964c6b693b53b6785cd38: Status 404 returned error can't find the container with id 32221b2b22f27820444f88b1c4d786966450cc34e86964c6b693b53b6785cd38 Jan 29 07:02:30 crc kubenswrapper[4826]: I0129 07:02:30.578949 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-l5498"] Jan 29 07:02:30 crc kubenswrapper[4826]: W0129 07:02:30.649864 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod863b254d_c246_4b2c_b687_c6314dae3841.slice/crio-6f20b25d465ce7d6105060b7229a533c93567f76c60698d8adfe535fc155821a WatchSource:0}: Error finding container 6f20b25d465ce7d6105060b7229a533c93567f76c60698d8adfe535fc155821a: 
Status 404 returned error can't find the container with id 6f20b25d465ce7d6105060b7229a533c93567f76c60698d8adfe535fc155821a
Jan 29 07:02:30 crc kubenswrapper[4826]: I0129 07:02:30.769676 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-sqz8v"]
Jan 29 07:02:30 crc kubenswrapper[4826]: W0129 07:02:30.800308 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff69218e_ec33_4818_86be_a9ff92d3f40d.slice/crio-ddebc548afcf68c456135e1dc58471c66b4194c9895fa526947a2a81bfcbf70a WatchSource:0}: Error finding container ddebc548afcf68c456135e1dc58471c66b4194c9895fa526947a2a81bfcbf70a: Status 404 returned error can't find the container with id ddebc548afcf68c456135e1dc58471c66b4194c9895fa526947a2a81bfcbf70a
Jan 29 07:02:30 crc kubenswrapper[4826]: I0129 07:02:30.826537 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-vgqpr"]
Jan 29 07:02:30 crc kubenswrapper[4826]: I0129 07:02:30.844191 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-8qxjd"]
Jan 29 07:02:30 crc kubenswrapper[4826]: I0129 07:02:30.858517 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 07:02:31 crc kubenswrapper[4826]: I0129 07:02:31.172889 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b459d81c-6b13-40f7-a460-524b0082a05d","Type":"ContainerStarted","Data":"785fada7912bea4a1ccef43f26d7364f8e26d4055765206142242f171e1d9795"}
Jan 29 07:02:31 crc kubenswrapper[4826]: I0129 07:02:31.174667 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8qxjd" event={"ID":"4a1157a3-fc93-4d73-8200-b55bfa626a09","Type":"ContainerStarted","Data":"7e2d68686e89f835fcf89962aabea00a0c0d8dcfd85c5615ccd6219908e00b0b"}
Jan 29 07:02:31 crc kubenswrapper[4826]: I0129 07:02:31.176778 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l5498" event={"ID":"863b254d-c246-4b2c-b687-c6314dae3841","Type":"ContainerStarted","Data":"6f20b25d465ce7d6105060b7229a533c93567f76c60698d8adfe535fc155821a"}
Jan 29 07:02:31 crc kubenswrapper[4826]: I0129 07:02:31.190056 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b56f9767-4458r"]
Jan 29 07:02:31 crc kubenswrapper[4826]: I0129 07:02:31.199186 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-vgqpr" event={"ID":"ff69218e-ec33-4818-86be-a9ff92d3f40d","Type":"ContainerStarted","Data":"ddebc548afcf68c456135e1dc58471c66b4194c9895fa526947a2a81bfcbf70a"}
Jan 29 07:02:31 crc kubenswrapper[4826]: I0129 07:02:31.201769 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bd87b98bc-m626b" event={"ID":"ad14a23d-71a9-4348-9e06-61db9b024821","Type":"ContainerStarted","Data":"32221b2b22f27820444f88b1c4d786966450cc34e86964c6b693b53b6785cd38"}
Jan 29 07:02:31 crc kubenswrapper[4826]: I0129 07:02:31.205663 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sqz8v" event={"ID":"5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06","Type":"ContainerStarted","Data":"600d92178d75e55b585ade507b3d221b36cb27a023845c6e7bdb9c6f0fdcbf2c"}
Jan 29 07:02:31 crc kubenswrapper[4826]: I0129 07:02:31.365409 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-lbrr5"]
Jan 29 07:02:32 crc kubenswrapper[4826]: I0129 07:02:32.195372 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 07:02:32 crc kubenswrapper[4826]: I0129 07:02:32.215398 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b56f9767-4458r" event={"ID":"d89e9402-7e3d-42c5-9f2a-b219113f9b2f","Type":"ContainerStarted","Data":"8e4242a7204b98c5e497469639c4377815d1af9dfb13e7f3ddef0da3d84ff683"}
Jan 29 07:02:34 crc kubenswrapper[4826]: W0129 07:02:34.349436 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3ec8717_30a1_46a8_9224_4802c2b1c3e6.slice/crio-9df5ed2c58f31197e9cbdd1939dea04d33b3fc18d4a78ab7a64dcc151ce633d6 WatchSource:0}: Error finding container 9df5ed2c58f31197e9cbdd1939dea04d33b3fc18d4a78ab7a64dcc151ce633d6: Status 404 returned error can't find the container with id 9df5ed2c58f31197e9cbdd1939dea04d33b3fc18d4a78ab7a64dcc151ce633d6
Jan 29 07:02:35 crc kubenswrapper[4826]: I0129 07:02:35.243437 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lbrr5" event={"ID":"e3ec8717-30a1-46a8-9224-4802c2b1c3e6","Type":"ContainerStarted","Data":"9df5ed2c58f31197e9cbdd1939dea04d33b3fc18d4a78ab7a64dcc151ce633d6"}
Jan 29 07:02:35 crc kubenswrapper[4826]: I0129 07:02:35.246013 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bd87b98bc-m626b" event={"ID":"ad14a23d-71a9-4348-9e06-61db9b024821","Type":"ContainerStarted","Data":"a2cfe254b466736a3a9915b734e4aa4865f2e53fc5a7ad41a4a33f8db52188be"}
Jan 29 07:02:35 crc kubenswrapper[4826]: I0129 07:02:35.656636 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 07:02:35 crc kubenswrapper[4826]: I0129 07:02:35.656716 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 07:02:36 crc kubenswrapper[4826]: I0129 07:02:36.265272 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l5498" event={"ID":"863b254d-c246-4b2c-b687-c6314dae3841","Type":"ContainerStarted","Data":"a480955a0615881272dba83f8f5857808fefb8de24dbded4f680c4605d685e59"}
Jan 29 07:02:36 crc kubenswrapper[4826]: I0129 07:02:36.288351 4826 generic.go:334] "Generic (PLEG): container finished" podID="ad14a23d-71a9-4348-9e06-61db9b024821" containerID="a2cfe254b466736a3a9915b734e4aa4865f2e53fc5a7ad41a4a33f8db52188be" exitCode=0
Jan 29 07:02:36 crc kubenswrapper[4826]: I0129 07:02:36.288434 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bd87b98bc-m626b" event={"ID":"ad14a23d-71a9-4348-9e06-61db9b024821","Type":"ContainerDied","Data":"a2cfe254b466736a3a9915b734e4aa4865f2e53fc5a7ad41a4a33f8db52188be"}
Jan 29 07:02:36 crc kubenswrapper[4826]: I0129 07:02:36.296269 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-l5498" podStartSLOduration=7.296254439 podStartE2EDuration="7.296254439s" podCreationTimestamp="2026-01-29 07:02:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:02:36.293805606 +0000 UTC m=+1140.155598665" watchObservedRunningTime="2026-01-29 07:02:36.296254439 +0000 UTC m=+1140.158047508"
Jan 29 07:02:36 crc kubenswrapper[4826]: I0129 07:02:36.301999 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sqz8v" event={"ID":"5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06","Type":"ContainerStarted","Data":"ee90b5e3403529b38e18ca6e7743dca447faa8f7d746c8a4314b599953d2bdc2"}
Jan 29 07:02:36 crc kubenswrapper[4826]: I0129 07:02:36.311182 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"85b51a36-8aa5-46e7-b8ab-a7e672c491d7","Type":"ContainerStarted","Data":"d88ad576d400eb2984c89463da7988fbb988847d02187cc4b64734122dc40271"}
Jan 29 07:02:36 crc kubenswrapper[4826]: I0129 07:02:36.326475 4826 generic.go:334] "Generic (PLEG): container finished" podID="d89e9402-7e3d-42c5-9f2a-b219113f9b2f" containerID="dacaf576740c1b7320658b7b467611c39833cdd967e3888267beee657d040b71" exitCode=0
Jan 29 07:02:36 crc kubenswrapper[4826]: I0129 07:02:36.326526 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b56f9767-4458r" event={"ID":"d89e9402-7e3d-42c5-9f2a-b219113f9b2f","Type":"ContainerDied","Data":"dacaf576740c1b7320658b7b467611c39833cdd967e3888267beee657d040b71"}
Jan 29 07:02:36 crc kubenswrapper[4826]: I0129 07:02:36.349930 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-sqz8v" podStartSLOduration=7.349910214 podStartE2EDuration="7.349910214s" podCreationTimestamp="2026-01-29 07:02:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:02:36.344492104 +0000 UTC m=+1140.206285173" watchObservedRunningTime="2026-01-29 07:02:36.349910214 +0000 UTC m=+1140.211703283"
Jan 29 07:02:36 crc kubenswrapper[4826]: I0129 07:02:36.658694 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bd87b98bc-m626b"
Jan 29 07:02:36 crc kubenswrapper[4826]: I0129 07:02:36.709468 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad14a23d-71a9-4348-9e06-61db9b024821-ovsdbserver-sb\") pod \"ad14a23d-71a9-4348-9e06-61db9b024821\" (UID: \"ad14a23d-71a9-4348-9e06-61db9b024821\") "
Jan 29 07:02:36 crc kubenswrapper[4826]: I0129 07:02:36.709531 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad14a23d-71a9-4348-9e06-61db9b024821-config\") pod \"ad14a23d-71a9-4348-9e06-61db9b024821\" (UID: \"ad14a23d-71a9-4348-9e06-61db9b024821\") "
Jan 29 07:02:36 crc kubenswrapper[4826]: I0129 07:02:36.709589 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad14a23d-71a9-4348-9e06-61db9b024821-ovsdbserver-nb\") pod \"ad14a23d-71a9-4348-9e06-61db9b024821\" (UID: \"ad14a23d-71a9-4348-9e06-61db9b024821\") "
Jan 29 07:02:36 crc kubenswrapper[4826]: I0129 07:02:36.709752 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad14a23d-71a9-4348-9e06-61db9b024821-dns-svc\") pod \"ad14a23d-71a9-4348-9e06-61db9b024821\" (UID: \"ad14a23d-71a9-4348-9e06-61db9b024821\") "
Jan 29 07:02:36 crc kubenswrapper[4826]: I0129 07:02:36.709842 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8hpp\" (UniqueName: \"kubernetes.io/projected/ad14a23d-71a9-4348-9e06-61db9b024821-kube-api-access-q8hpp\") pod \"ad14a23d-71a9-4348-9e06-61db9b024821\" (UID: \"ad14a23d-71a9-4348-9e06-61db9b024821\") "
Jan 29 07:02:36 crc kubenswrapper[4826]: I0129 07:02:36.713801 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad14a23d-71a9-4348-9e06-61db9b024821-kube-api-access-q8hpp" (OuterVolumeSpecName: "kube-api-access-q8hpp") pod "ad14a23d-71a9-4348-9e06-61db9b024821" (UID: "ad14a23d-71a9-4348-9e06-61db9b024821"). InnerVolumeSpecName "kube-api-access-q8hpp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 07:02:36 crc kubenswrapper[4826]: I0129 07:02:36.729882 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad14a23d-71a9-4348-9e06-61db9b024821-config" (OuterVolumeSpecName: "config") pod "ad14a23d-71a9-4348-9e06-61db9b024821" (UID: "ad14a23d-71a9-4348-9e06-61db9b024821"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 07:02:36 crc kubenswrapper[4826]: I0129 07:02:36.740140 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad14a23d-71a9-4348-9e06-61db9b024821-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ad14a23d-71a9-4348-9e06-61db9b024821" (UID: "ad14a23d-71a9-4348-9e06-61db9b024821"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 07:02:36 crc kubenswrapper[4826]: I0129 07:02:36.742945 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad14a23d-71a9-4348-9e06-61db9b024821-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ad14a23d-71a9-4348-9e06-61db9b024821" (UID: "ad14a23d-71a9-4348-9e06-61db9b024821"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 07:02:36 crc kubenswrapper[4826]: I0129 07:02:36.746347 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad14a23d-71a9-4348-9e06-61db9b024821-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ad14a23d-71a9-4348-9e06-61db9b024821" (UID: "ad14a23d-71a9-4348-9e06-61db9b024821"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 07:02:36 crc kubenswrapper[4826]: I0129 07:02:36.811923 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8hpp\" (UniqueName: \"kubernetes.io/projected/ad14a23d-71a9-4348-9e06-61db9b024821-kube-api-access-q8hpp\") on node \"crc\" DevicePath \"\""
Jan 29 07:02:36 crc kubenswrapper[4826]: I0129 07:02:36.811972 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad14a23d-71a9-4348-9e06-61db9b024821-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 29 07:02:36 crc kubenswrapper[4826]: I0129 07:02:36.811982 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad14a23d-71a9-4348-9e06-61db9b024821-config\") on node \"crc\" DevicePath \"\""
Jan 29 07:02:36 crc kubenswrapper[4826]: I0129 07:02:36.811991 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad14a23d-71a9-4348-9e06-61db9b024821-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 29 07:02:36 crc kubenswrapper[4826]: I0129 07:02:36.812002 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad14a23d-71a9-4348-9e06-61db9b024821-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 29 07:02:37 crc kubenswrapper[4826]: I0129 07:02:37.339851 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bd87b98bc-m626b" event={"ID":"ad14a23d-71a9-4348-9e06-61db9b024821","Type":"ContainerDied","Data":"32221b2b22f27820444f88b1c4d786966450cc34e86964c6b693b53b6785cd38"}
Jan 29 07:02:37 crc kubenswrapper[4826]: I0129 07:02:37.340450 4826 scope.go:117] "RemoveContainer" containerID="a2cfe254b466736a3a9915b734e4aa4865f2e53fc5a7ad41a4a33f8db52188be"
Jan 29 07:02:37 crc kubenswrapper[4826]: I0129 07:02:37.340161 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bd87b98bc-m626b"
Jan 29 07:02:37 crc kubenswrapper[4826]: I0129 07:02:37.356292 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"85b51a36-8aa5-46e7-b8ab-a7e672c491d7","Type":"ContainerStarted","Data":"4b4aa223c22b7eac65d63841c37b12133011befa1024ace90050e7ee1a72c510"}
Jan 29 07:02:37 crc kubenswrapper[4826]: I0129 07:02:37.356386 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"85b51a36-8aa5-46e7-b8ab-a7e672c491d7","Type":"ContainerStarted","Data":"af48e0d4ae8e00f830fb238c9c15333c7f43281d0531755b7ffe26f6fbf4c8c6"}
Jan 29 07:02:37 crc kubenswrapper[4826]: I0129 07:02:37.356397 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"85b51a36-8aa5-46e7-b8ab-a7e672c491d7","Type":"ContainerStarted","Data":"1aa4a9af19d077a22d1e3b460ff6f966acbc829b62c52982dbfc8cc5b918e542"}
Jan 29 07:02:37 crc kubenswrapper[4826]: I0129 07:02:37.361738 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b56f9767-4458r" event={"ID":"d89e9402-7e3d-42c5-9f2a-b219113f9b2f","Type":"ContainerStarted","Data":"e11025ebf0c23d357e172a87e100c47b44d7562a5e48cebef71c0eba8494e5e5"}
Jan 29 07:02:37 crc kubenswrapper[4826]: I0129 07:02:37.361816 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b56f9767-4458r"
Jan 29 07:02:37 crc kubenswrapper[4826]: I0129 07:02:37.406378 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bd87b98bc-m626b"]
Jan 29 07:02:37 crc kubenswrapper[4826]: I0129 07:02:37.422486 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bd87b98bc-m626b"]
Jan 29 07:02:37 crc kubenswrapper[4826]: I0129 07:02:37.436381 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b56f9767-4458r" podStartSLOduration=8.436359652 podStartE2EDuration="8.436359652s" podCreationTimestamp="2026-01-29 07:02:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:02:37.406105891 +0000 UTC m=+1141.267898970" watchObservedRunningTime="2026-01-29 07:02:37.436359652 +0000 UTC m=+1141.298152721"
Jan 29 07:02:38 crc kubenswrapper[4826]: I0129 07:02:38.817107 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad14a23d-71a9-4348-9e06-61db9b024821" path="/var/lib/kubelet/pods/ad14a23d-71a9-4348-9e06-61db9b024821/volumes"
Jan 29 07:02:39 crc kubenswrapper[4826]: I0129 07:02:39.390735 4826 generic.go:334] "Generic (PLEG): container finished" podID="863b254d-c246-4b2c-b687-c6314dae3841" containerID="a480955a0615881272dba83f8f5857808fefb8de24dbded4f680c4605d685e59" exitCode=0
Jan 29 07:02:39 crc kubenswrapper[4826]: I0129 07:02:39.391035 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l5498" event={"ID":"863b254d-c246-4b2c-b687-c6314dae3841","Type":"ContainerDied","Data":"a480955a0615881272dba83f8f5857808fefb8de24dbded4f680c4605d685e59"}
Jan 29 07:02:45 crc kubenswrapper[4826]: E0129 07:02:45.068982 4826 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad14a23d_71a9_4348_9e06_61db9b024821.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad14a23d_71a9_4348_9e06_61db9b024821.slice/crio-32221b2b22f27820444f88b1c4d786966450cc34e86964c6b693b53b6785cd38\": RecentStats: unable to find data in memory cache]"
Jan 29 07:02:45 crc kubenswrapper[4826]: I0129 07:02:45.412642 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b56f9767-4458r"
Jan 29 07:02:45 crc kubenswrapper[4826]: I0129 07:02:45.505184 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-47qwr"]
Jan 29 07:02:45 crc kubenswrapper[4826]: I0129 07:02:45.505487 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6cb545bd4c-47qwr" podUID="82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a" containerName="dnsmasq-dns" containerID="cri-o://e372534610518a0d89c9dff4aa3d8785b79ab67dd83f0a16b4a1b8ba01759085" gracePeriod=10
Jan 29 07:02:46 crc kubenswrapper[4826]: I0129 07:02:46.511078 4826 generic.go:334] "Generic (PLEG): container finished" podID="82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a" containerID="e372534610518a0d89c9dff4aa3d8785b79ab67dd83f0a16b4a1b8ba01759085" exitCode=0
Jan 29 07:02:46 crc kubenswrapper[4826]: I0129 07:02:46.511156 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-47qwr" event={"ID":"82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a","Type":"ContainerDied","Data":"e372534610518a0d89c9dff4aa3d8785b79ab67dd83f0a16b4a1b8ba01759085"}
Jan 29 07:02:46 crc kubenswrapper[4826]: I0129 07:02:46.513807 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l5498" event={"ID":"863b254d-c246-4b2c-b687-c6314dae3841","Type":"ContainerDied","Data":"6f20b25d465ce7d6105060b7229a533c93567f76c60698d8adfe535fc155821a"}
Jan 29 07:02:46 crc kubenswrapper[4826]: I0129 07:02:46.513861 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f20b25d465ce7d6105060b7229a533c93567f76c60698d8adfe535fc155821a"
Jan 29 07:02:46 crc kubenswrapper[4826]: I0129 07:02:46.532002 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-l5498"
Jan 29 07:02:46 crc kubenswrapper[4826]: I0129 07:02:46.626638 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zz7vp\" (UniqueName: \"kubernetes.io/projected/863b254d-c246-4b2c-b687-c6314dae3841-kube-api-access-zz7vp\") pod \"863b254d-c246-4b2c-b687-c6314dae3841\" (UID: \"863b254d-c246-4b2c-b687-c6314dae3841\") "
Jan 29 07:02:46 crc kubenswrapper[4826]: I0129 07:02:46.627033 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/863b254d-c246-4b2c-b687-c6314dae3841-fernet-keys\") pod \"863b254d-c246-4b2c-b687-c6314dae3841\" (UID: \"863b254d-c246-4b2c-b687-c6314dae3841\") "
Jan 29 07:02:46 crc kubenswrapper[4826]: I0129 07:02:46.627174 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/863b254d-c246-4b2c-b687-c6314dae3841-config-data\") pod \"863b254d-c246-4b2c-b687-c6314dae3841\" (UID: \"863b254d-c246-4b2c-b687-c6314dae3841\") "
Jan 29 07:02:46 crc kubenswrapper[4826]: I0129 07:02:46.627256 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/863b254d-c246-4b2c-b687-c6314dae3841-scripts\") pod \"863b254d-c246-4b2c-b687-c6314dae3841\" (UID: \"863b254d-c246-4b2c-b687-c6314dae3841\") "
Jan 29 07:02:46 crc kubenswrapper[4826]: I0129 07:02:46.627389 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/863b254d-c246-4b2c-b687-c6314dae3841-combined-ca-bundle\") pod \"863b254d-c246-4b2c-b687-c6314dae3841\" (UID: \"863b254d-c246-4b2c-b687-c6314dae3841\") "
Jan 29 07:02:46 crc kubenswrapper[4826]: I0129 07:02:46.627432 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/863b254d-c246-4b2c-b687-c6314dae3841-credential-keys\") pod \"863b254d-c246-4b2c-b687-c6314dae3841\" (UID: \"863b254d-c246-4b2c-b687-c6314dae3841\") "
Jan 29 07:02:46 crc kubenswrapper[4826]: I0129 07:02:46.633858 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/863b254d-c246-4b2c-b687-c6314dae3841-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "863b254d-c246-4b2c-b687-c6314dae3841" (UID: "863b254d-c246-4b2c-b687-c6314dae3841"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:02:46 crc kubenswrapper[4826]: I0129 07:02:46.633907 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/863b254d-c246-4b2c-b687-c6314dae3841-kube-api-access-zz7vp" (OuterVolumeSpecName: "kube-api-access-zz7vp") pod "863b254d-c246-4b2c-b687-c6314dae3841" (UID: "863b254d-c246-4b2c-b687-c6314dae3841"). InnerVolumeSpecName "kube-api-access-zz7vp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 07:02:46 crc kubenswrapper[4826]: I0129 07:02:46.634845 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zz7vp\" (UniqueName: \"kubernetes.io/projected/863b254d-c246-4b2c-b687-c6314dae3841-kube-api-access-zz7vp\") on node \"crc\" DevicePath \"\""
Jan 29 07:02:46 crc kubenswrapper[4826]: I0129 07:02:46.634873 4826 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/863b254d-c246-4b2c-b687-c6314dae3841-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 29 07:02:46 crc kubenswrapper[4826]: I0129 07:02:46.656537 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/863b254d-c246-4b2c-b687-c6314dae3841-scripts" (OuterVolumeSpecName: "scripts") pod "863b254d-c246-4b2c-b687-c6314dae3841" (UID: "863b254d-c246-4b2c-b687-c6314dae3841"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:02:46 crc kubenswrapper[4826]: I0129 07:02:46.657845 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/863b254d-c246-4b2c-b687-c6314dae3841-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "863b254d-c246-4b2c-b687-c6314dae3841" (UID: "863b254d-c246-4b2c-b687-c6314dae3841"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:02:46 crc kubenswrapper[4826]: I0129 07:02:46.664922 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/863b254d-c246-4b2c-b687-c6314dae3841-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "863b254d-c246-4b2c-b687-c6314dae3841" (UID: "863b254d-c246-4b2c-b687-c6314dae3841"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:02:46 crc kubenswrapper[4826]: I0129 07:02:46.671778 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/863b254d-c246-4b2c-b687-c6314dae3841-config-data" (OuterVolumeSpecName: "config-data") pod "863b254d-c246-4b2c-b687-c6314dae3841" (UID: "863b254d-c246-4b2c-b687-c6314dae3841"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:02:46 crc kubenswrapper[4826]: I0129 07:02:46.737721 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/863b254d-c246-4b2c-b687-c6314dae3841-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 07:02:46 crc kubenswrapper[4826]: I0129 07:02:46.737762 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/863b254d-c246-4b2c-b687-c6314dae3841-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 07:02:46 crc kubenswrapper[4826]: I0129 07:02:46.737775 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/863b254d-c246-4b2c-b687-c6314dae3841-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 07:02:46 crc kubenswrapper[4826]: I0129 07:02:46.737790 4826 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/863b254d-c246-4b2c-b687-c6314dae3841-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 29 07:02:47 crc kubenswrapper[4826]: E0129 07:02:47.484981 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16"
Jan 29 07:02:47 crc kubenswrapper[4826]: E0129 07:02:47.485158 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rwxms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-vgqpr_openstack(ff69218e-ec33-4818-86be-a9ff92d3f40d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 29 07:02:47 crc kubenswrapper[4826]: E0129 07:02:47.486331 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-vgqpr" podUID="ff69218e-ec33-4818-86be-a9ff92d3f40d"
Jan 29 07:02:47 crc kubenswrapper[4826]: I0129 07:02:47.521061 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-l5498"
Jan 29 07:02:47 crc kubenswrapper[4826]: E0129 07:02:47.524327 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16\\\"\"" pod="openstack/barbican-db-sync-vgqpr" podUID="ff69218e-ec33-4818-86be-a9ff92d3f40d"
Jan 29 07:02:47 crc kubenswrapper[4826]: I0129 07:02:47.619244 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-l5498"]
Jan 29 07:02:47 crc kubenswrapper[4826]: I0129 07:02:47.625246 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-l5498"]
Jan 29 07:02:47 crc kubenswrapper[4826]: I0129 07:02:47.723431 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-4tptt"]
Jan 29 07:02:47 crc kubenswrapper[4826]: E0129 07:02:47.723933 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="863b254d-c246-4b2c-b687-c6314dae3841" containerName="keystone-bootstrap"
Jan 29 07:02:47 crc kubenswrapper[4826]: I0129 07:02:47.723956 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="863b254d-c246-4b2c-b687-c6314dae3841" containerName="keystone-bootstrap"
Jan 29 07:02:47 crc kubenswrapper[4826]: E0129 07:02:47.723990 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad14a23d-71a9-4348-9e06-61db9b024821" containerName="init"
Jan 29 07:02:47 crc kubenswrapper[4826]: I0129 07:02:47.724004 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad14a23d-71a9-4348-9e06-61db9b024821" containerName="init"
Jan 29 07:02:47 crc kubenswrapper[4826]: I0129 07:02:47.724325 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="863b254d-c246-4b2c-b687-c6314dae3841" containerName="keystone-bootstrap"
Jan 29 07:02:47 crc kubenswrapper[4826]: I0129 07:02:47.724365 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad14a23d-71a9-4348-9e06-61db9b024821" containerName="init"
Jan 29 07:02:47 crc kubenswrapper[4826]: I0129 07:02:47.725139 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4tptt"
Jan 29 07:02:47 crc kubenswrapper[4826]: I0129 07:02:47.730573 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 29 07:02:47 crc kubenswrapper[4826]: I0129 07:02:47.730637 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 29 07:02:47 crc kubenswrapper[4826]: I0129 07:02:47.730956 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-vphpw"
Jan 29 07:02:47 crc kubenswrapper[4826]: I0129 07:02:47.731065 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 29 07:02:47 crc kubenswrapper[4826]: I0129 07:02:47.731080 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 29 07:02:47 crc kubenswrapper[4826]: I0129 07:02:47.741135 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4tptt"]
Jan 29 07:02:47 crc kubenswrapper[4826]: I0129 07:02:47.860159 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fde1114c-0d7b-4f87-9203-2dff8fd98201-fernet-keys\") pod \"keystone-bootstrap-4tptt\" (UID: \"fde1114c-0d7b-4f87-9203-2dff8fd98201\") " pod="openstack/keystone-bootstrap-4tptt"
Jan 29 07:02:47 crc kubenswrapper[4826]: I0129 07:02:47.860214 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde1114c-0d7b-4f87-9203-2dff8fd98201-combined-ca-bundle\") pod \"keystone-bootstrap-4tptt\" (UID: \"fde1114c-0d7b-4f87-9203-2dff8fd98201\") " pod="openstack/keystone-bootstrap-4tptt"
Jan 29 07:02:47 crc kubenswrapper[4826]: I0129 07:02:47.860244 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp4sm\" (UniqueName: \"kubernetes.io/projected/fde1114c-0d7b-4f87-9203-2dff8fd98201-kube-api-access-vp4sm\") pod \"keystone-bootstrap-4tptt\" (UID: \"fde1114c-0d7b-4f87-9203-2dff8fd98201\") " pod="openstack/keystone-bootstrap-4tptt"
Jan 29 07:02:47 crc kubenswrapper[4826]: I0129 07:02:47.860351 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fde1114c-0d7b-4f87-9203-2dff8fd98201-credential-keys\") pod \"keystone-bootstrap-4tptt\" (UID: \"fde1114c-0d7b-4f87-9203-2dff8fd98201\") " pod="openstack/keystone-bootstrap-4tptt"
Jan 29 07:02:47 crc kubenswrapper[4826]: I0129 07:02:47.860384 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fde1114c-0d7b-4f87-9203-2dff8fd98201-config-data\") pod \"keystone-bootstrap-4tptt\" (UID: \"fde1114c-0d7b-4f87-9203-2dff8fd98201\") " pod="openstack/keystone-bootstrap-4tptt"
Jan 29 07:02:47 crc kubenswrapper[4826]: I0129 07:02:47.860401 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fde1114c-0d7b-4f87-9203-2dff8fd98201-scripts\") pod \"keystone-bootstrap-4tptt\" (UID: \"fde1114c-0d7b-4f87-9203-2dff8fd98201\") " pod="openstack/keystone-bootstrap-4tptt"
Jan 29 07:02:47 crc kubenswrapper[4826]: I0129 07:02:47.961559 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde1114c-0d7b-4f87-9203-2dff8fd98201-combined-ca-bundle\") pod \"keystone-bootstrap-4tptt\" (UID: \"fde1114c-0d7b-4f87-9203-2dff8fd98201\") " pod="openstack/keystone-bootstrap-4tptt"
Jan 29 07:02:47 crc kubenswrapper[4826]: I0129 07:02:47.961604 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp4sm\" (UniqueName: \"kubernetes.io/projected/fde1114c-0d7b-4f87-9203-2dff8fd98201-kube-api-access-vp4sm\") pod \"keystone-bootstrap-4tptt\" (UID: \"fde1114c-0d7b-4f87-9203-2dff8fd98201\") " pod="openstack/keystone-bootstrap-4tptt"
Jan 29 07:02:47 crc kubenswrapper[4826]: I0129 07:02:47.961690 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fde1114c-0d7b-4f87-9203-2dff8fd98201-credential-keys\") pod \"keystone-bootstrap-4tptt\" (UID: \"fde1114c-0d7b-4f87-9203-2dff8fd98201\") " pod="openstack/keystone-bootstrap-4tptt"
Jan 29 07:02:47 crc kubenswrapper[4826]: I0129 07:02:47.961754 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fde1114c-0d7b-4f87-9203-2dff8fd98201-config-data\") pod \"keystone-bootstrap-4tptt\" (UID: \"fde1114c-0d7b-4f87-9203-2dff8fd98201\") " pod="openstack/keystone-bootstrap-4tptt"
Jan 29 07:02:47 crc kubenswrapper[4826]: I0129 07:02:47.962037 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fde1114c-0d7b-4f87-9203-2dff8fd98201-scripts\") pod \"keystone-bootstrap-4tptt\" (UID: \"fde1114c-0d7b-4f87-9203-2dff8fd98201\") " pod="openstack/keystone-bootstrap-4tptt"
Jan 29 07:02:47 crc kubenswrapper[4826]: I0129 07:02:47.962403 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fde1114c-0d7b-4f87-9203-2dff8fd98201-fernet-keys\") pod \"keystone-bootstrap-4tptt\" (UID: \"fde1114c-0d7b-4f87-9203-2dff8fd98201\") " pod="openstack/keystone-bootstrap-4tptt"
Jan 29 07:02:47 crc kubenswrapper[4826]: I0129 07:02:47.967120 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fde1114c-0d7b-4f87-9203-2dff8fd98201-scripts\") pod \"keystone-bootstrap-4tptt\" (UID: \"fde1114c-0d7b-4f87-9203-2dff8fd98201\") " pod="openstack/keystone-bootstrap-4tptt"
Jan 29 07:02:47 crc kubenswrapper[4826]: I0129 07:02:47.968976 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fde1114c-0d7b-4f87-9203-2dff8fd98201-config-data\") pod \"keystone-bootstrap-4tptt\" (UID: \"fde1114c-0d7b-4f87-9203-2dff8fd98201\") " pod="openstack/keystone-bootstrap-4tptt"
Jan 29 07:02:47 crc kubenswrapper[4826]: I0129 07:02:47.970199 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fde1114c-0d7b-4f87-9203-2dff8fd98201-fernet-keys\") pod \"keystone-bootstrap-4tptt\" (UID: \"fde1114c-0d7b-4f87-9203-2dff8fd98201\") " pod="openstack/keystone-bootstrap-4tptt"
Jan 29 07:02:47 crc kubenswrapper[4826]: I0129 07:02:47.976420 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fde1114c-0d7b-4f87-9203-2dff8fd98201-credential-keys\") pod \"keystone-bootstrap-4tptt\" (UID: \"fde1114c-0d7b-4f87-9203-2dff8fd98201\") " pod="openstack/keystone-bootstrap-4tptt"
Jan 29 07:02:47 crc kubenswrapper[4826]: I0129 07:02:47.989790 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde1114c-0d7b-4f87-9203-2dff8fd98201-combined-ca-bundle\") pod \"keystone-bootstrap-4tptt\" (UID: \"fde1114c-0d7b-4f87-9203-2dff8fd98201\") "
pod="openstack/keystone-bootstrap-4tptt" Jan 29 07:02:47 crc kubenswrapper[4826]: I0129 07:02:47.989925 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp4sm\" (UniqueName: \"kubernetes.io/projected/fde1114c-0d7b-4f87-9203-2dff8fd98201-kube-api-access-vp4sm\") pod \"keystone-bootstrap-4tptt\" (UID: \"fde1114c-0d7b-4f87-9203-2dff8fd98201\") " pod="openstack/keystone-bootstrap-4tptt" Jan 29 07:02:48 crc kubenswrapper[4826]: I0129 07:02:48.048027 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4tptt" Jan 29 07:02:48 crc kubenswrapper[4826]: I0129 07:02:48.165334 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6cb545bd4c-47qwr" podUID="82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: connect: connection refused" Jan 29 07:02:48 crc kubenswrapper[4826]: I0129 07:02:48.821361 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="863b254d-c246-4b2c-b687-c6314dae3841" path="/var/lib/kubelet/pods/863b254d-c246-4b2c-b687-c6314dae3841/volumes" Jan 29 07:02:53 crc kubenswrapper[4826]: I0129 07:02:53.165166 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6cb545bd4c-47qwr" podUID="82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: connect: connection refused" Jan 29 07:02:54 crc kubenswrapper[4826]: I0129 07:02:54.588592 4826 generic.go:334] "Generic (PLEG): container finished" podID="b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c" containerID="727b87c675e49b86f65ef5c3d398c7e86e56f8ef827c047c69e48782eb0c3773" exitCode=0 Jan 29 07:02:54 crc kubenswrapper[4826]: I0129 07:02:54.588664 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gnxn7" 
event={"ID":"b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c","Type":"ContainerDied","Data":"727b87c675e49b86f65ef5c3d398c7e86e56f8ef827c047c69e48782eb0c3773"} Jan 29 07:02:55 crc kubenswrapper[4826]: E0129 07:02:55.295908 4826 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad14a23d_71a9_4348_9e06_61db9b024821.slice/crio-32221b2b22f27820444f88b1c4d786966450cc34e86964c6b693b53b6785cd38\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad14a23d_71a9_4348_9e06_61db9b024821.slice\": RecentStats: unable to find data in memory cache]" Jan 29 07:02:55 crc kubenswrapper[4826]: E0129 07:02:55.785317 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49" Jan 29 07:02:55 crc kubenswrapper[4826]: E0129 07:02:55.785743 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xxj8d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-8qxjd_openstack(4a1157a3-fc93-4d73-8200-b55bfa626a09): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 07:02:55 crc kubenswrapper[4826]: E0129 07:02:55.786915 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-8qxjd" podUID="4a1157a3-fc93-4d73-8200-b55bfa626a09" Jan 29 07:02:55 crc kubenswrapper[4826]: I0129 07:02:55.835658 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-47qwr" Jan 29 07:02:55 crc kubenswrapper[4826]: I0129 07:02:55.977923 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a-ovsdbserver-nb\") pod \"82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a\" (UID: \"82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a\") " Jan 29 07:02:55 crc kubenswrapper[4826]: I0129 07:02:55.977994 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a-ovsdbserver-sb\") pod \"82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a\" (UID: \"82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a\") " Jan 29 07:02:55 crc kubenswrapper[4826]: I0129 07:02:55.978061 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a-dns-svc\") pod \"82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a\" (UID: \"82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a\") " Jan 29 07:02:55 crc 
kubenswrapper[4826]: I0129 07:02:55.978121 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a-config\") pod \"82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a\" (UID: \"82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a\") " Jan 29 07:02:55 crc kubenswrapper[4826]: I0129 07:02:55.978154 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhvjh\" (UniqueName: \"kubernetes.io/projected/82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a-kube-api-access-nhvjh\") pod \"82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a\" (UID: \"82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a\") " Jan 29 07:02:55 crc kubenswrapper[4826]: I0129 07:02:55.983646 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a-kube-api-access-nhvjh" (OuterVolumeSpecName: "kube-api-access-nhvjh") pod "82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a" (UID: "82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a"). InnerVolumeSpecName "kube-api-access-nhvjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:02:56 crc kubenswrapper[4826]: I0129 07:02:56.020649 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a" (UID: "82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:02:56 crc kubenswrapper[4826]: I0129 07:02:56.031897 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a" (UID: "82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:02:56 crc kubenswrapper[4826]: I0129 07:02:56.034722 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a-config" (OuterVolumeSpecName: "config") pod "82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a" (UID: "82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:02:56 crc kubenswrapper[4826]: I0129 07:02:56.034801 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a" (UID: "82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:02:56 crc kubenswrapper[4826]: I0129 07:02:56.080857 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:56 crc kubenswrapper[4826]: I0129 07:02:56.080894 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:56 crc kubenswrapper[4826]: I0129 07:02:56.080906 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:56 crc kubenswrapper[4826]: I0129 07:02:56.080915 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a-config\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:56 crc kubenswrapper[4826]: 
I0129 07:02:56.080925 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhvjh\" (UniqueName: \"kubernetes.io/projected/82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a-kube-api-access-nhvjh\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:56 crc kubenswrapper[4826]: I0129 07:02:56.127211 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-gnxn7" Jan 29 07:02:56 crc kubenswrapper[4826]: I0129 07:02:56.284459 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c-config-data\") pod \"b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c\" (UID: \"b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c\") " Jan 29 07:02:56 crc kubenswrapper[4826]: I0129 07:02:56.284624 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c-db-sync-config-data\") pod \"b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c\" (UID: \"b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c\") " Jan 29 07:02:56 crc kubenswrapper[4826]: I0129 07:02:56.285631 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c-combined-ca-bundle\") pod \"b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c\" (UID: \"b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c\") " Jan 29 07:02:56 crc kubenswrapper[4826]: I0129 07:02:56.285854 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksrgh\" (UniqueName: \"kubernetes.io/projected/b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c-kube-api-access-ksrgh\") pod \"b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c\" (UID: \"b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c\") " Jan 29 07:02:56 crc kubenswrapper[4826]: I0129 07:02:56.289262 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c-kube-api-access-ksrgh" (OuterVolumeSpecName: "kube-api-access-ksrgh") pod "b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c" (UID: "b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c"). InnerVolumeSpecName "kube-api-access-ksrgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:02:56 crc kubenswrapper[4826]: I0129 07:02:56.289362 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c" (UID: "b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:02:56 crc kubenswrapper[4826]: I0129 07:02:56.315260 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c" (UID: "b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:02:56 crc kubenswrapper[4826]: I0129 07:02:56.391062 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksrgh\" (UniqueName: \"kubernetes.io/projected/b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c-kube-api-access-ksrgh\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:56 crc kubenswrapper[4826]: I0129 07:02:56.391482 4826 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:56 crc kubenswrapper[4826]: I0129 07:02:56.391498 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:56 crc kubenswrapper[4826]: I0129 07:02:56.398263 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c-config-data" (OuterVolumeSpecName: "config-data") pod "b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c" (UID: "b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:02:56 crc kubenswrapper[4826]: I0129 07:02:56.493231 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:02:56 crc kubenswrapper[4826]: I0129 07:02:56.557442 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4tptt"] Jan 29 07:02:56 crc kubenswrapper[4826]: W0129 07:02:56.560949 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfde1114c_0d7b_4f87_9203_2dff8fd98201.slice/crio-269476f29444fc78b2f59580c86020a93d913a5868baa8d54935a9aa5f2b166b WatchSource:0}: Error finding container 269476f29444fc78b2f59580c86020a93d913a5868baa8d54935a9aa5f2b166b: Status 404 returned error can't find the container with id 269476f29444fc78b2f59580c86020a93d913a5868baa8d54935a9aa5f2b166b Jan 29 07:02:56 crc kubenswrapper[4826]: I0129 07:02:56.616937 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"85b51a36-8aa5-46e7-b8ab-a7e672c491d7","Type":"ContainerStarted","Data":"38125c7287003d38ece9a28cd533530a5bf461facb796759edb517b616c413a5"} Jan 29 07:02:56 crc kubenswrapper[4826]: I0129 07:02:56.616983 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"85b51a36-8aa5-46e7-b8ab-a7e672c491d7","Type":"ContainerStarted","Data":"b530291b6bb170ceb4af6e542a3feb436b2acdf8fb99e834d63a533292236ca3"} Jan 29 07:02:56 crc kubenswrapper[4826]: I0129 07:02:56.619727 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lbrr5" event={"ID":"e3ec8717-30a1-46a8-9224-4802c2b1c3e6","Type":"ContainerStarted","Data":"e1e42bde75bc8599c637749a61aa4b362868edadf725f9ac179d9cfb49bf7db4"} Jan 29 07:02:56 crc kubenswrapper[4826]: I0129 07:02:56.623390 4826 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4tptt" event={"ID":"fde1114c-0d7b-4f87-9203-2dff8fd98201","Type":"ContainerStarted","Data":"269476f29444fc78b2f59580c86020a93d913a5868baa8d54935a9aa5f2b166b"} Jan 29 07:02:56 crc kubenswrapper[4826]: I0129 07:02:56.625045 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b459d81c-6b13-40f7-a460-524b0082a05d","Type":"ContainerStarted","Data":"8228f51af815f47f376f9cb3bea8904c10ecbbbee7393c8f84f57f8e9aa9614d"} Jan 29 07:02:56 crc kubenswrapper[4826]: I0129 07:02:56.633269 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-gnxn7" Jan 29 07:02:56 crc kubenswrapper[4826]: I0129 07:02:56.634632 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gnxn7" event={"ID":"b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c","Type":"ContainerDied","Data":"88c6347c00acb0b845ffbd9f4bf5e56ebe6445afc0069d262295b21da7a6a8ce"} Jan 29 07:02:56 crc kubenswrapper[4826]: I0129 07:02:56.634683 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88c6347c00acb0b845ffbd9f4bf5e56ebe6445afc0069d262295b21da7a6a8ce" Jan 29 07:02:56 crc kubenswrapper[4826]: I0129 07:02:56.641402 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-47qwr" Jan 29 07:02:56 crc kubenswrapper[4826]: I0129 07:02:56.641996 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-47qwr" event={"ID":"82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a","Type":"ContainerDied","Data":"32e65aaf7ada5214518d3d9472f41ebb6a12256462856e703bbf936b27354fe4"} Jan 29 07:02:56 crc kubenswrapper[4826]: I0129 07:02:56.642071 4826 scope.go:117] "RemoveContainer" containerID="e372534610518a0d89c9dff4aa3d8785b79ab67dd83f0a16b4a1b8ba01759085" Jan 29 07:02:56 crc kubenswrapper[4826]: I0129 07:02:56.648148 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-lbrr5" podStartSLOduration=5.893171951 podStartE2EDuration="27.648128163s" podCreationTimestamp="2026-01-29 07:02:29 +0000 UTC" firstStartedPulling="2026-01-29 07:02:34.352767035 +0000 UTC m=+1138.214560144" lastFinishedPulling="2026-01-29 07:02:56.107723297 +0000 UTC m=+1159.969516356" observedRunningTime="2026-01-29 07:02:56.636941904 +0000 UTC m=+1160.498734973" watchObservedRunningTime="2026-01-29 07:02:56.648128163 +0000 UTC m=+1160.509921232" Jan 29 07:02:56 crc kubenswrapper[4826]: E0129 07:02:56.649108 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49\\\"\"" pod="openstack/cinder-db-sync-8qxjd" podUID="4a1157a3-fc93-4d73-8200-b55bfa626a09" Jan 29 07:02:56 crc kubenswrapper[4826]: I0129 07:02:56.688329 4826 scope.go:117] "RemoveContainer" containerID="3ba3b1d66aae82bf152504940a914bd66dc28d5355f475f5e2d087451fddf4b5" Jan 29 07:02:56 crc kubenswrapper[4826]: I0129 07:02:56.713271 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-47qwr"] Jan 29 07:02:56 crc 
kubenswrapper[4826]: I0129 07:02:56.720551 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-47qwr"] Jan 29 07:02:56 crc kubenswrapper[4826]: I0129 07:02:56.831407 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a" path="/var/lib/kubelet/pods/82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a/volumes" Jan 29 07:02:57 crc kubenswrapper[4826]: I0129 07:02:57.034112 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74c95c887-hwqvr"] Jan 29 07:02:57 crc kubenswrapper[4826]: E0129 07:02:57.035900 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c" containerName="glance-db-sync" Jan 29 07:02:57 crc kubenswrapper[4826]: I0129 07:02:57.035921 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c" containerName="glance-db-sync" Jan 29 07:02:57 crc kubenswrapper[4826]: E0129 07:02:57.035932 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a" containerName="init" Jan 29 07:02:57 crc kubenswrapper[4826]: I0129 07:02:57.035939 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a" containerName="init" Jan 29 07:02:57 crc kubenswrapper[4826]: E0129 07:02:57.035948 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a" containerName="dnsmasq-dns" Jan 29 07:02:57 crc kubenswrapper[4826]: I0129 07:02:57.035973 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a" containerName="dnsmasq-dns" Jan 29 07:02:57 crc kubenswrapper[4826]: I0129 07:02:57.036250 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="82fc89ef-0d6c-4f94-b36b-2d7b4ea3aa1a" containerName="dnsmasq-dns" Jan 29 07:02:57 crc kubenswrapper[4826]: I0129 07:02:57.036365 4826 
memory_manager.go:354] "RemoveStaleState removing state" podUID="b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c" containerName="glance-db-sync" Jan 29 07:02:57 crc kubenswrapper[4826]: I0129 07:02:57.043488 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74c95c887-hwqvr" Jan 29 07:02:57 crc kubenswrapper[4826]: I0129 07:02:57.056728 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74c95c887-hwqvr"] Jan 29 07:02:57 crc kubenswrapper[4826]: I0129 07:02:57.118356 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33-dns-svc\") pod \"dnsmasq-dns-74c95c887-hwqvr\" (UID: \"5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33\") " pod="openstack/dnsmasq-dns-74c95c887-hwqvr" Jan 29 07:02:57 crc kubenswrapper[4826]: I0129 07:02:57.118421 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv8m4\" (UniqueName: \"kubernetes.io/projected/5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33-kube-api-access-hv8m4\") pod \"dnsmasq-dns-74c95c887-hwqvr\" (UID: \"5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33\") " pod="openstack/dnsmasq-dns-74c95c887-hwqvr" Jan 29 07:02:57 crc kubenswrapper[4826]: I0129 07:02:57.118455 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33-ovsdbserver-nb\") pod \"dnsmasq-dns-74c95c887-hwqvr\" (UID: \"5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33\") " pod="openstack/dnsmasq-dns-74c95c887-hwqvr" Jan 29 07:02:57 crc kubenswrapper[4826]: I0129 07:02:57.118557 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33-ovsdbserver-sb\") pod 
\"dnsmasq-dns-74c95c887-hwqvr\" (UID: \"5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33\") " pod="openstack/dnsmasq-dns-74c95c887-hwqvr" Jan 29 07:02:57 crc kubenswrapper[4826]: I0129 07:02:57.118579 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33-config\") pod \"dnsmasq-dns-74c95c887-hwqvr\" (UID: \"5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33\") " pod="openstack/dnsmasq-dns-74c95c887-hwqvr" Jan 29 07:02:57 crc kubenswrapper[4826]: I0129 07:02:57.220257 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33-ovsdbserver-sb\") pod \"dnsmasq-dns-74c95c887-hwqvr\" (UID: \"5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33\") " pod="openstack/dnsmasq-dns-74c95c887-hwqvr" Jan 29 07:02:57 crc kubenswrapper[4826]: I0129 07:02:57.220495 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33-config\") pod \"dnsmasq-dns-74c95c887-hwqvr\" (UID: \"5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33\") " pod="openstack/dnsmasq-dns-74c95c887-hwqvr" Jan 29 07:02:57 crc kubenswrapper[4826]: I0129 07:02:57.220589 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33-dns-svc\") pod \"dnsmasq-dns-74c95c887-hwqvr\" (UID: \"5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33\") " pod="openstack/dnsmasq-dns-74c95c887-hwqvr" Jan 29 07:02:57 crc kubenswrapper[4826]: I0129 07:02:57.220660 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv8m4\" (UniqueName: \"kubernetes.io/projected/5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33-kube-api-access-hv8m4\") pod \"dnsmasq-dns-74c95c887-hwqvr\" (UID: 
\"5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33\") " pod="openstack/dnsmasq-dns-74c95c887-hwqvr" Jan 29 07:02:57 crc kubenswrapper[4826]: I0129 07:02:57.220736 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33-ovsdbserver-nb\") pod \"dnsmasq-dns-74c95c887-hwqvr\" (UID: \"5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33\") " pod="openstack/dnsmasq-dns-74c95c887-hwqvr" Jan 29 07:02:57 crc kubenswrapper[4826]: I0129 07:02:57.221187 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33-ovsdbserver-sb\") pod \"dnsmasq-dns-74c95c887-hwqvr\" (UID: \"5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33\") " pod="openstack/dnsmasq-dns-74c95c887-hwqvr" Jan 29 07:02:57 crc kubenswrapper[4826]: I0129 07:02:57.221644 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33-ovsdbserver-nb\") pod \"dnsmasq-dns-74c95c887-hwqvr\" (UID: \"5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33\") " pod="openstack/dnsmasq-dns-74c95c887-hwqvr" Jan 29 07:02:57 crc kubenswrapper[4826]: I0129 07:02:57.222098 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33-dns-svc\") pod \"dnsmasq-dns-74c95c887-hwqvr\" (UID: \"5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33\") " pod="openstack/dnsmasq-dns-74c95c887-hwqvr" Jan 29 07:02:57 crc kubenswrapper[4826]: I0129 07:02:57.222492 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33-config\") pod \"dnsmasq-dns-74c95c887-hwqvr\" (UID: \"5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33\") " pod="openstack/dnsmasq-dns-74c95c887-hwqvr" Jan 29 07:02:57 crc 
kubenswrapper[4826]: I0129 07:02:57.243427 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv8m4\" (UniqueName: \"kubernetes.io/projected/5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33-kube-api-access-hv8m4\") pod \"dnsmasq-dns-74c95c887-hwqvr\" (UID: \"5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33\") " pod="openstack/dnsmasq-dns-74c95c887-hwqvr" Jan 29 07:02:57 crc kubenswrapper[4826]: I0129 07:02:57.383855 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74c95c887-hwqvr" Jan 29 07:02:57 crc kubenswrapper[4826]: I0129 07:02:57.664950 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"85b51a36-8aa5-46e7-b8ab-a7e672c491d7","Type":"ContainerStarted","Data":"b8b353bcfe9ffab3fe12e51fc1add01cbe0b68e3df21ff9ba62958988fa40c6a"} Jan 29 07:02:57 crc kubenswrapper[4826]: I0129 07:02:57.665345 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"85b51a36-8aa5-46e7-b8ab-a7e672c491d7","Type":"ContainerStarted","Data":"efc661e13d0a3ae031a931433c46d983a09a4a66496a4085e3c7846558e05913"} Jan 29 07:02:57 crc kubenswrapper[4826]: I0129 07:02:57.665358 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"85b51a36-8aa5-46e7-b8ab-a7e672c491d7","Type":"ContainerStarted","Data":"18c3f34b023f38c953270c50875c33014ac0c890dbc279a1a0f7ee0521285e95"} Jan 29 07:02:57 crc kubenswrapper[4826]: I0129 07:02:57.691815 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4tptt" event={"ID":"fde1114c-0d7b-4f87-9203-2dff8fd98201","Type":"ContainerStarted","Data":"413d8ba5077330d1cb894502ce4b2de16c2c1508b66a60974ea40c424b79ef12"} Jan 29 07:02:57 crc kubenswrapper[4826]: I0129 07:02:57.714903 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-4tptt" podStartSLOduration=10.714883421 
podStartE2EDuration="10.714883421s" podCreationTimestamp="2026-01-29 07:02:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:02:57.707791258 +0000 UTC m=+1161.569584327" watchObservedRunningTime="2026-01-29 07:02:57.714883421 +0000 UTC m=+1161.576676490" Jan 29 07:02:57 crc kubenswrapper[4826]: I0129 07:02:57.955599 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 07:02:57 crc kubenswrapper[4826]: I0129 07:02:57.974440 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 07:02:57 crc kubenswrapper[4826]: I0129 07:02:57.982853 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 29 07:02:57 crc kubenswrapper[4826]: I0129 07:02:57.984029 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 29 07:02:57 crc kubenswrapper[4826]: I0129 07:02:57.985561 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-z4mwb" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.021671 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.040125 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74c95c887-hwqvr"] Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.143268 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/775143aa-5dd0-4e34-bb39-d0246bb86fed-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"775143aa-5dd0-4e34-bb39-d0246bb86fed\") " pod="openstack/glance-default-external-api-0" Jan 29 07:02:58 crc 
kubenswrapper[4826]: I0129 07:02:58.143672 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nb5m\" (UniqueName: \"kubernetes.io/projected/775143aa-5dd0-4e34-bb39-d0246bb86fed-kube-api-access-4nb5m\") pod \"glance-default-external-api-0\" (UID: \"775143aa-5dd0-4e34-bb39-d0246bb86fed\") " pod="openstack/glance-default-external-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.143692 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/775143aa-5dd0-4e34-bb39-d0246bb86fed-logs\") pod \"glance-default-external-api-0\" (UID: \"775143aa-5dd0-4e34-bb39-d0246bb86fed\") " pod="openstack/glance-default-external-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.143733 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/775143aa-5dd0-4e34-bb39-d0246bb86fed-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"775143aa-5dd0-4e34-bb39-d0246bb86fed\") " pod="openstack/glance-default-external-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.143755 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/775143aa-5dd0-4e34-bb39-d0246bb86fed-scripts\") pod \"glance-default-external-api-0\" (UID: \"775143aa-5dd0-4e34-bb39-d0246bb86fed\") " pod="openstack/glance-default-external-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.143933 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/775143aa-5dd0-4e34-bb39-d0246bb86fed-config-data\") pod \"glance-default-external-api-0\" (UID: \"775143aa-5dd0-4e34-bb39-d0246bb86fed\") " pod="openstack/glance-default-external-api-0" Jan 29 
07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.144178 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"775143aa-5dd0-4e34-bb39-d0246bb86fed\") " pod="openstack/glance-default-external-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.190413 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.192171 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.199411 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.199850 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.245538 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"775143aa-5dd0-4e34-bb39-d0246bb86fed\") " pod="openstack/glance-default-external-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.245640 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/775143aa-5dd0-4e34-bb39-d0246bb86fed-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"775143aa-5dd0-4e34-bb39-d0246bb86fed\") " pod="openstack/glance-default-external-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.245662 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4nb5m\" (UniqueName: \"kubernetes.io/projected/775143aa-5dd0-4e34-bb39-d0246bb86fed-kube-api-access-4nb5m\") pod \"glance-default-external-api-0\" (UID: \"775143aa-5dd0-4e34-bb39-d0246bb86fed\") " pod="openstack/glance-default-external-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.245680 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/775143aa-5dd0-4e34-bb39-d0246bb86fed-logs\") pod \"glance-default-external-api-0\" (UID: \"775143aa-5dd0-4e34-bb39-d0246bb86fed\") " pod="openstack/glance-default-external-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.245717 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/775143aa-5dd0-4e34-bb39-d0246bb86fed-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"775143aa-5dd0-4e34-bb39-d0246bb86fed\") " pod="openstack/glance-default-external-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.245737 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/775143aa-5dd0-4e34-bb39-d0246bb86fed-scripts\") pod \"glance-default-external-api-0\" (UID: \"775143aa-5dd0-4e34-bb39-d0246bb86fed\") " pod="openstack/glance-default-external-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.245776 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/775143aa-5dd0-4e34-bb39-d0246bb86fed-config-data\") pod \"glance-default-external-api-0\" (UID: \"775143aa-5dd0-4e34-bb39-d0246bb86fed\") " pod="openstack/glance-default-external-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.246717 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") 
pod \"glance-default-external-api-0\" (UID: \"775143aa-5dd0-4e34-bb39-d0246bb86fed\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.247072 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/775143aa-5dd0-4e34-bb39-d0246bb86fed-logs\") pod \"glance-default-external-api-0\" (UID: \"775143aa-5dd0-4e34-bb39-d0246bb86fed\") " pod="openstack/glance-default-external-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.247096 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/775143aa-5dd0-4e34-bb39-d0246bb86fed-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"775143aa-5dd0-4e34-bb39-d0246bb86fed\") " pod="openstack/glance-default-external-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.259945 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/775143aa-5dd0-4e34-bb39-d0246bb86fed-config-data\") pod \"glance-default-external-api-0\" (UID: \"775143aa-5dd0-4e34-bb39-d0246bb86fed\") " pod="openstack/glance-default-external-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.260573 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/775143aa-5dd0-4e34-bb39-d0246bb86fed-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"775143aa-5dd0-4e34-bb39-d0246bb86fed\") " pod="openstack/glance-default-external-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.261098 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/775143aa-5dd0-4e34-bb39-d0246bb86fed-scripts\") pod \"glance-default-external-api-0\" (UID: \"775143aa-5dd0-4e34-bb39-d0246bb86fed\") " 
pod="openstack/glance-default-external-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.270951 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nb5m\" (UniqueName: \"kubernetes.io/projected/775143aa-5dd0-4e34-bb39-d0246bb86fed-kube-api-access-4nb5m\") pod \"glance-default-external-api-0\" (UID: \"775143aa-5dd0-4e34-bb39-d0246bb86fed\") " pod="openstack/glance-default-external-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.282423 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"775143aa-5dd0-4e34-bb39-d0246bb86fed\") " pod="openstack/glance-default-external-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.331653 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.347562 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ff3b927-9cd2-45f7-ac9e-abc9bcf63171-logs\") pod \"glance-default-internal-api-0\" (UID: \"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.347609 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ff3b927-9cd2-45f7-ac9e-abc9bcf63171-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.347630 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5ff3b927-9cd2-45f7-ac9e-abc9bcf63171-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.347653 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5ff3b927-9cd2-45f7-ac9e-abc9bcf63171-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.347674 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.347699 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ff3b927-9cd2-45f7-ac9e-abc9bcf63171-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.347723 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsswh\" (UniqueName: \"kubernetes.io/projected/5ff3b927-9cd2-45f7-ac9e-abc9bcf63171-kube-api-access-jsswh\") pod \"glance-default-internal-api-0\" (UID: \"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.449918 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/5ff3b927-9cd2-45f7-ac9e-abc9bcf63171-logs\") pod \"glance-default-internal-api-0\" (UID: \"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.449989 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ff3b927-9cd2-45f7-ac9e-abc9bcf63171-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.450015 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ff3b927-9cd2-45f7-ac9e-abc9bcf63171-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.450042 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5ff3b927-9cd2-45f7-ac9e-abc9bcf63171-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.450072 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.450114 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ff3b927-9cd2-45f7-ac9e-abc9bcf63171-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.450151 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsswh\" (UniqueName: \"kubernetes.io/projected/5ff3b927-9cd2-45f7-ac9e-abc9bcf63171-kube-api-access-jsswh\") pod \"glance-default-internal-api-0\" (UID: \"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.450373 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ff3b927-9cd2-45f7-ac9e-abc9bcf63171-logs\") pod \"glance-default-internal-api-0\" (UID: \"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.450596 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5ff3b927-9cd2-45f7-ac9e-abc9bcf63171-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.450833 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.455253 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ff3b927-9cd2-45f7-ac9e-abc9bcf63171-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171\") " 
pod="openstack/glance-default-internal-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.457459 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ff3b927-9cd2-45f7-ac9e-abc9bcf63171-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.464437 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ff3b927-9cd2-45f7-ac9e-abc9bcf63171-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.476769 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsswh\" (UniqueName: \"kubernetes.io/projected/5ff3b927-9cd2-45f7-ac9e-abc9bcf63171-kube-api-access-jsswh\") pod \"glance-default-internal-api-0\" (UID: \"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.492987 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.510920 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.702657 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74c95c887-hwqvr" event={"ID":"5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33","Type":"ContainerStarted","Data":"943eb9237afaee75f79de939feaaeeb80943fac24e4a3e6419bbe43f26e5c40e"} Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.724961 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"85b51a36-8aa5-46e7-b8ab-a7e672c491d7","Type":"ContainerStarted","Data":"c55f19f9268bb7c5e036640126bfced1629e2a36cd93c83c41856348935b8605"} Jan 29 07:02:58 crc kubenswrapper[4826]: I0129 07:02:58.725000 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"85b51a36-8aa5-46e7-b8ab-a7e672c491d7","Type":"ContainerStarted","Data":"68c20d8c7fd999c8a7abcb16322015c670f6cd5b3f9b312e5fcdd9e5080bab19"} Jan 29 07:02:59 crc kubenswrapper[4826]: I0129 07:02:59.351772 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 07:02:59 crc kubenswrapper[4826]: I0129 07:02:59.654651 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 07:02:59 crc kubenswrapper[4826]: I0129 07:02:59.730074 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 07:02:59 crc kubenswrapper[4826]: I0129 07:02:59.733673 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171","Type":"ContainerStarted","Data":"2e90269d3c6f0e1608a325f406dc9e7f28f7fe0a1cddd1fdd7330a2f29500651"} Jan 29 07:02:59 crc kubenswrapper[4826]: I0129 07:02:59.735551 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74c95c887-hwqvr" 
event={"ID":"5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33","Type":"ContainerStarted","Data":"1f4d560bfedfe2f5f1b61ec8429dc568b6f5c3273715bfd6cf5af2ad7bcb1790"} Jan 29 07:03:00 crc kubenswrapper[4826]: I0129 07:03:00.169996 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 07:03:00 crc kubenswrapper[4826]: I0129 07:03:00.753716 4826 generic.go:334] "Generic (PLEG): container finished" podID="5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33" containerID="1f4d560bfedfe2f5f1b61ec8429dc568b6f5c3273715bfd6cf5af2ad7bcb1790" exitCode=0 Jan 29 07:03:00 crc kubenswrapper[4826]: I0129 07:03:00.753814 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74c95c887-hwqvr" event={"ID":"5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33","Type":"ContainerDied","Data":"1f4d560bfedfe2f5f1b61ec8429dc568b6f5c3273715bfd6cf5af2ad7bcb1790"} Jan 29 07:03:00 crc kubenswrapper[4826]: I0129 07:03:00.759412 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"775143aa-5dd0-4e34-bb39-d0246bb86fed","Type":"ContainerStarted","Data":"571125d14d443ce985ee2dfbd9622a73fe88e6d5132451e2dc890abb03b08ebe"} Jan 29 07:03:00 crc kubenswrapper[4826]: I0129 07:03:00.821162 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=40.06391611 podStartE2EDuration="1m15.821118583s" podCreationTimestamp="2026-01-29 07:01:45 +0000 UTC" firstStartedPulling="2026-01-29 07:02:20.44956028 +0000 UTC m=+1124.311353349" lastFinishedPulling="2026-01-29 07:02:56.206762753 +0000 UTC m=+1160.068555822" observedRunningTime="2026-01-29 07:03:00.811220497 +0000 UTC m=+1164.673013566" watchObservedRunningTime="2026-01-29 07:03:00.821118583 +0000 UTC m=+1164.682911652" Jan 29 07:03:01 crc kubenswrapper[4826]: I0129 07:03:01.121962 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74c95c887-hwqvr"] Jan 29 07:03:01 crc 
kubenswrapper[4826]: I0129 07:03:01.155571 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-w9nrn"] Jan 29 07:03:01 crc kubenswrapper[4826]: I0129 07:03:01.156892 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-w9nrn" Jan 29 07:03:01 crc kubenswrapper[4826]: I0129 07:03:01.161615 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 29 07:03:01 crc kubenswrapper[4826]: I0129 07:03:01.173136 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-w9nrn"] Jan 29 07:03:01 crc kubenswrapper[4826]: I0129 07:03:01.317504 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f17521c8-b15d-42ac-b28d-692acd3063ef-dns-svc\") pod \"dnsmasq-dns-5dc4fcdbc-w9nrn\" (UID: \"f17521c8-b15d-42ac-b28d-692acd3063ef\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-w9nrn" Jan 29 07:03:01 crc kubenswrapper[4826]: I0129 07:03:01.317565 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f17521c8-b15d-42ac-b28d-692acd3063ef-config\") pod \"dnsmasq-dns-5dc4fcdbc-w9nrn\" (UID: \"f17521c8-b15d-42ac-b28d-692acd3063ef\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-w9nrn" Jan 29 07:03:01 crc kubenswrapper[4826]: I0129 07:03:01.317599 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvzjs\" (UniqueName: \"kubernetes.io/projected/f17521c8-b15d-42ac-b28d-692acd3063ef-kube-api-access-bvzjs\") pod \"dnsmasq-dns-5dc4fcdbc-w9nrn\" (UID: \"f17521c8-b15d-42ac-b28d-692acd3063ef\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-w9nrn" Jan 29 07:03:01 crc kubenswrapper[4826]: I0129 07:03:01.317644 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f17521c8-b15d-42ac-b28d-692acd3063ef-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc4fcdbc-w9nrn\" (UID: \"f17521c8-b15d-42ac-b28d-692acd3063ef\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-w9nrn" Jan 29 07:03:01 crc kubenswrapper[4826]: I0129 07:03:01.317820 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f17521c8-b15d-42ac-b28d-692acd3063ef-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc4fcdbc-w9nrn\" (UID: \"f17521c8-b15d-42ac-b28d-692acd3063ef\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-w9nrn" Jan 29 07:03:01 crc kubenswrapper[4826]: I0129 07:03:01.317927 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f17521c8-b15d-42ac-b28d-692acd3063ef-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc4fcdbc-w9nrn\" (UID: \"f17521c8-b15d-42ac-b28d-692acd3063ef\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-w9nrn" Jan 29 07:03:01 crc kubenswrapper[4826]: I0129 07:03:01.419719 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f17521c8-b15d-42ac-b28d-692acd3063ef-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc4fcdbc-w9nrn\" (UID: \"f17521c8-b15d-42ac-b28d-692acd3063ef\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-w9nrn" Jan 29 07:03:01 crc kubenswrapper[4826]: I0129 07:03:01.420420 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f17521c8-b15d-42ac-b28d-692acd3063ef-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc4fcdbc-w9nrn\" (UID: \"f17521c8-b15d-42ac-b28d-692acd3063ef\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-w9nrn" Jan 29 07:03:01 crc kubenswrapper[4826]: I0129 07:03:01.420603 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f17521c8-b15d-42ac-b28d-692acd3063ef-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc4fcdbc-w9nrn\" (UID: \"f17521c8-b15d-42ac-b28d-692acd3063ef\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-w9nrn" Jan 29 07:03:01 crc kubenswrapper[4826]: I0129 07:03:01.420786 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f17521c8-b15d-42ac-b28d-692acd3063ef-dns-svc\") pod \"dnsmasq-dns-5dc4fcdbc-w9nrn\" (UID: \"f17521c8-b15d-42ac-b28d-692acd3063ef\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-w9nrn" Jan 29 07:03:01 crc kubenswrapper[4826]: I0129 07:03:01.420898 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f17521c8-b15d-42ac-b28d-692acd3063ef-config\") pod \"dnsmasq-dns-5dc4fcdbc-w9nrn\" (UID: \"f17521c8-b15d-42ac-b28d-692acd3063ef\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-w9nrn" Jan 29 07:03:01 crc kubenswrapper[4826]: I0129 07:03:01.420939 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvzjs\" (UniqueName: \"kubernetes.io/projected/f17521c8-b15d-42ac-b28d-692acd3063ef-kube-api-access-bvzjs\") pod \"dnsmasq-dns-5dc4fcdbc-w9nrn\" (UID: \"f17521c8-b15d-42ac-b28d-692acd3063ef\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-w9nrn" Jan 29 07:03:01 crc kubenswrapper[4826]: I0129 07:03:01.421079 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f17521c8-b15d-42ac-b28d-692acd3063ef-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc4fcdbc-w9nrn\" (UID: \"f17521c8-b15d-42ac-b28d-692acd3063ef\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-w9nrn" Jan 29 07:03:01 crc kubenswrapper[4826]: I0129 07:03:01.421576 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/f17521c8-b15d-42ac-b28d-692acd3063ef-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc4fcdbc-w9nrn\" (UID: \"f17521c8-b15d-42ac-b28d-692acd3063ef\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-w9nrn" Jan 29 07:03:01 crc kubenswrapper[4826]: I0129 07:03:01.421969 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f17521c8-b15d-42ac-b28d-692acd3063ef-config\") pod \"dnsmasq-dns-5dc4fcdbc-w9nrn\" (UID: \"f17521c8-b15d-42ac-b28d-692acd3063ef\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-w9nrn" Jan 29 07:03:01 crc kubenswrapper[4826]: I0129 07:03:01.422504 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f17521c8-b15d-42ac-b28d-692acd3063ef-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc4fcdbc-w9nrn\" (UID: \"f17521c8-b15d-42ac-b28d-692acd3063ef\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-w9nrn" Jan 29 07:03:01 crc kubenswrapper[4826]: I0129 07:03:01.422627 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f17521c8-b15d-42ac-b28d-692acd3063ef-dns-svc\") pod \"dnsmasq-dns-5dc4fcdbc-w9nrn\" (UID: \"f17521c8-b15d-42ac-b28d-692acd3063ef\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-w9nrn" Jan 29 07:03:01 crc kubenswrapper[4826]: I0129 07:03:01.446978 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvzjs\" (UniqueName: \"kubernetes.io/projected/f17521c8-b15d-42ac-b28d-692acd3063ef-kube-api-access-bvzjs\") pod \"dnsmasq-dns-5dc4fcdbc-w9nrn\" (UID: \"f17521c8-b15d-42ac-b28d-692acd3063ef\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-w9nrn" Jan 29 07:03:01 crc kubenswrapper[4826]: I0129 07:03:01.488369 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-w9nrn" Jan 29 07:03:01 crc kubenswrapper[4826]: I0129 07:03:01.772310 4826 generic.go:334] "Generic (PLEG): container finished" podID="fde1114c-0d7b-4f87-9203-2dff8fd98201" containerID="413d8ba5077330d1cb894502ce4b2de16c2c1508b66a60974ea40c424b79ef12" exitCode=0 Jan 29 07:03:01 crc kubenswrapper[4826]: I0129 07:03:01.772453 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4tptt" event={"ID":"fde1114c-0d7b-4f87-9203-2dff8fd98201","Type":"ContainerDied","Data":"413d8ba5077330d1cb894502ce4b2de16c2c1508b66a60974ea40c424b79ef12"} Jan 29 07:03:01 crc kubenswrapper[4826]: I0129 07:03:01.776508 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b459d81c-6b13-40f7-a460-524b0082a05d","Type":"ContainerStarted","Data":"df52ac6f977bbeb210e63377b69a96a0c4c93e4a270aad2ac19139ea7c47727e"} Jan 29 07:03:01 crc kubenswrapper[4826]: I0129 07:03:01.779155 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"775143aa-5dd0-4e34-bb39-d0246bb86fed","Type":"ContainerStarted","Data":"d58bcef8ecec359412c453edfdb86194575f98e21ab68c876f49aa50585bc691"} Jan 29 07:03:01 crc kubenswrapper[4826]: I0129 07:03:01.781922 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171","Type":"ContainerStarted","Data":"d93dda2774193b92acd8ca5b3ef6dfc8815cf6ad892e4d5c6003a965fa350bac"} Jan 29 07:03:01 crc kubenswrapper[4826]: I0129 07:03:01.785644 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-vgqpr" event={"ID":"ff69218e-ec33-4818-86be-a9ff92d3f40d","Type":"ContainerStarted","Data":"c8f387f435d969f8cf85b78f6f85ede7c54c71cb9b6213cda2569b918c6e2d3b"} Jan 29 07:03:01 crc kubenswrapper[4826]: I0129 07:03:01.796589 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-74c95c887-hwqvr" event={"ID":"5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33","Type":"ContainerStarted","Data":"7a3a768af9fbd872c25141acad85da9d3fc38d83feabb6065d494b53a0c5e709"} Jan 29 07:03:01 crc kubenswrapper[4826]: I0129 07:03:01.796760 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74c95c887-hwqvr" podUID="5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33" containerName="dnsmasq-dns" containerID="cri-o://7a3a768af9fbd872c25141acad85da9d3fc38d83feabb6065d494b53a0c5e709" gracePeriod=10 Jan 29 07:03:01 crc kubenswrapper[4826]: I0129 07:03:01.796829 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74c95c887-hwqvr" Jan 29 07:03:01 crc kubenswrapper[4826]: I0129 07:03:01.816289 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-vgqpr" podStartSLOduration=2.14758372 podStartE2EDuration="32.816264554s" podCreationTimestamp="2026-01-29 07:02:29 +0000 UTC" firstStartedPulling="2026-01-29 07:02:30.814071493 +0000 UTC m=+1134.675864552" lastFinishedPulling="2026-01-29 07:03:01.482752317 +0000 UTC m=+1165.344545386" observedRunningTime="2026-01-29 07:03:01.805713402 +0000 UTC m=+1165.667506471" watchObservedRunningTime="2026-01-29 07:03:01.816264554 +0000 UTC m=+1165.678057623" Jan 29 07:03:01 crc kubenswrapper[4826]: I0129 07:03:01.829991 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74c95c887-hwqvr" podStartSLOduration=4.829968408 podStartE2EDuration="4.829968408s" podCreationTimestamp="2026-01-29 07:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:03:01.820854253 +0000 UTC m=+1165.682647322" watchObservedRunningTime="2026-01-29 07:03:01.829968408 +0000 UTC m=+1165.691761477" Jan 29 07:03:02 crc kubenswrapper[4826]: I0129 07:03:02.048536 4826 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-w9nrn"] Jan 29 07:03:02 crc kubenswrapper[4826]: W0129 07:03:02.070217 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf17521c8_b15d_42ac_b28d_692acd3063ef.slice/crio-5aaa428e405eb0c9eb8454f2e671bb1f8744777c8316d7054e8c1d536e6b3448 WatchSource:0}: Error finding container 5aaa428e405eb0c9eb8454f2e671bb1f8744777c8316d7054e8c1d536e6b3448: Status 404 returned error can't find the container with id 5aaa428e405eb0c9eb8454f2e671bb1f8744777c8316d7054e8c1d536e6b3448 Jan 29 07:03:02 crc kubenswrapper[4826]: I0129 07:03:02.259596 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74c95c887-hwqvr" Jan 29 07:03:02 crc kubenswrapper[4826]: I0129 07:03:02.343344 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33-config\") pod \"5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33\" (UID: \"5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33\") " Jan 29 07:03:02 crc kubenswrapper[4826]: I0129 07:03:02.343875 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33-dns-svc\") pod \"5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33\" (UID: \"5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33\") " Jan 29 07:03:02 crc kubenswrapper[4826]: I0129 07:03:02.343941 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33-ovsdbserver-nb\") pod \"5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33\" (UID: \"5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33\") " Jan 29 07:03:02 crc kubenswrapper[4826]: I0129 07:03:02.344036 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33-ovsdbserver-sb\") pod \"5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33\" (UID: \"5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33\") " Jan 29 07:03:02 crc kubenswrapper[4826]: I0129 07:03:02.344063 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hv8m4\" (UniqueName: \"kubernetes.io/projected/5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33-kube-api-access-hv8m4\") pod \"5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33\" (UID: \"5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33\") " Jan 29 07:03:02 crc kubenswrapper[4826]: I0129 07:03:02.349909 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33-kube-api-access-hv8m4" (OuterVolumeSpecName: "kube-api-access-hv8m4") pod "5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33" (UID: "5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33"). InnerVolumeSpecName "kube-api-access-hv8m4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:03:02 crc kubenswrapper[4826]: I0129 07:03:02.397554 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33-config" (OuterVolumeSpecName: "config") pod "5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33" (UID: "5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:03:02 crc kubenswrapper[4826]: I0129 07:03:02.397793 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33" (UID: "5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:03:02 crc kubenswrapper[4826]: I0129 07:03:02.407018 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33" (UID: "5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:03:02 crc kubenswrapper[4826]: I0129 07:03:02.411111 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33" (UID: "5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:03:02 crc kubenswrapper[4826]: I0129 07:03:02.445615 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:02 crc kubenswrapper[4826]: I0129 07:03:02.445643 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:02 crc kubenswrapper[4826]: I0129 07:03:02.445652 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:02 crc kubenswrapper[4826]: I0129 07:03:02.445660 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hv8m4\" (UniqueName: \"kubernetes.io/projected/5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33-kube-api-access-hv8m4\") on node \"crc\" DevicePath \"\"" Jan 29 
07:03:02 crc kubenswrapper[4826]: I0129 07:03:02.445672 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33-config\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:02 crc kubenswrapper[4826]: I0129 07:03:02.830158 4826 generic.go:334] "Generic (PLEG): container finished" podID="e3ec8717-30a1-46a8-9224-4802c2b1c3e6" containerID="e1e42bde75bc8599c637749a61aa4b362868edadf725f9ac179d9cfb49bf7db4" exitCode=0 Jan 29 07:03:02 crc kubenswrapper[4826]: I0129 07:03:02.837670 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5ff3b927-9cd2-45f7-ac9e-abc9bcf63171" containerName="glance-log" containerID="cri-o://d93dda2774193b92acd8ca5b3ef6dfc8815cf6ad892e4d5c6003a965fa350bac" gracePeriod=30 Jan 29 07:03:02 crc kubenswrapper[4826]: I0129 07:03:02.837783 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lbrr5" event={"ID":"e3ec8717-30a1-46a8-9224-4802c2b1c3e6","Type":"ContainerDied","Data":"e1e42bde75bc8599c637749a61aa4b362868edadf725f9ac179d9cfb49bf7db4"} Jan 29 07:03:02 crc kubenswrapper[4826]: I0129 07:03:02.837820 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171","Type":"ContainerStarted","Data":"10528f9af18e031d36d0c74e8719c81944c9ed6619091c98794c4215f82d3ca8"} Jan 29 07:03:02 crc kubenswrapper[4826]: I0129 07:03:02.837805 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5ff3b927-9cd2-45f7-ac9e-abc9bcf63171" containerName="glance-httpd" containerID="cri-o://10528f9af18e031d36d0c74e8719c81944c9ed6619091c98794c4215f82d3ca8" gracePeriod=30 Jan 29 07:03:02 crc kubenswrapper[4826]: I0129 07:03:02.846270 4826 generic.go:334] "Generic (PLEG): container finished" 
podID="5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33" containerID="7a3a768af9fbd872c25141acad85da9d3fc38d83feabb6065d494b53a0c5e709" exitCode=0 Jan 29 07:03:02 crc kubenswrapper[4826]: I0129 07:03:02.846386 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74c95c887-hwqvr" event={"ID":"5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33","Type":"ContainerDied","Data":"7a3a768af9fbd872c25141acad85da9d3fc38d83feabb6065d494b53a0c5e709"} Jan 29 07:03:02 crc kubenswrapper[4826]: I0129 07:03:02.846417 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74c95c887-hwqvr" event={"ID":"5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33","Type":"ContainerDied","Data":"943eb9237afaee75f79de939feaaeeb80943fac24e4a3e6419bbe43f26e5c40e"} Jan 29 07:03:02 crc kubenswrapper[4826]: I0129 07:03:02.846433 4826 scope.go:117] "RemoveContainer" containerID="7a3a768af9fbd872c25141acad85da9d3fc38d83feabb6065d494b53a0c5e709" Jan 29 07:03:02 crc kubenswrapper[4826]: I0129 07:03:02.846443 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74c95c887-hwqvr" Jan 29 07:03:02 crc kubenswrapper[4826]: I0129 07:03:02.855799 4826 generic.go:334] "Generic (PLEG): container finished" podID="f17521c8-b15d-42ac-b28d-692acd3063ef" containerID="651cfac3940c8449d8f5850ae4afe52c361938cb37193122cae92e991f5bc493" exitCode=0 Jan 29 07:03:02 crc kubenswrapper[4826]: I0129 07:03:02.855886 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-w9nrn" event={"ID":"f17521c8-b15d-42ac-b28d-692acd3063ef","Type":"ContainerDied","Data":"651cfac3940c8449d8f5850ae4afe52c361938cb37193122cae92e991f5bc493"} Jan 29 07:03:02 crc kubenswrapper[4826]: I0129 07:03:02.855916 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-w9nrn" event={"ID":"f17521c8-b15d-42ac-b28d-692acd3063ef","Type":"ContainerStarted","Data":"5aaa428e405eb0c9eb8454f2e671bb1f8744777c8316d7054e8c1d536e6b3448"} Jan 29 07:03:02 crc kubenswrapper[4826]: I0129 07:03:02.870398 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.870381498 podStartE2EDuration="5.870381498s" podCreationTimestamp="2026-01-29 07:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:03:02.863518141 +0000 UTC m=+1166.725311220" watchObservedRunningTime="2026-01-29 07:03:02.870381498 +0000 UTC m=+1166.732174567" Jan 29 07:03:02 crc kubenswrapper[4826]: I0129 07:03:02.873904 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="775143aa-5dd0-4e34-bb39-d0246bb86fed" containerName="glance-log" containerID="cri-o://d58bcef8ecec359412c453edfdb86194575f98e21ab68c876f49aa50585bc691" gracePeriod=30 Jan 29 07:03:02 crc kubenswrapper[4826]: I0129 07:03:02.874041 4826 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/glance-default-external-api-0" podUID="775143aa-5dd0-4e34-bb39-d0246bb86fed" containerName="glance-httpd" containerID="cri-o://b0df0cf1754d8922de02e44bf3832702d7d6bb554c61dd6b0c8e325797de1a9b" gracePeriod=30 Jan 29 07:03:02 crc kubenswrapper[4826]: I0129 07:03:02.874279 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"775143aa-5dd0-4e34-bb39-d0246bb86fed","Type":"ContainerStarted","Data":"b0df0cf1754d8922de02e44bf3832702d7d6bb554c61dd6b0c8e325797de1a9b"} Jan 29 07:03:02 crc kubenswrapper[4826]: I0129 07:03:02.883901 4826 scope.go:117] "RemoveContainer" containerID="1f4d560bfedfe2f5f1b61ec8429dc568b6f5c3273715bfd6cf5af2ad7bcb1790" Jan 29 07:03:02 crc kubenswrapper[4826]: I0129 07:03:02.936618 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74c95c887-hwqvr"] Jan 29 07:03:02 crc kubenswrapper[4826]: I0129 07:03:02.946591 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74c95c887-hwqvr"] Jan 29 07:03:02 crc kubenswrapper[4826]: I0129 07:03:02.947224 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.94720664 podStartE2EDuration="6.94720664s" podCreationTimestamp="2026-01-29 07:02:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:03:02.94213622 +0000 UTC m=+1166.803929289" watchObservedRunningTime="2026-01-29 07:03:02.94720664 +0000 UTC m=+1166.808999709" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.166108 4826 scope.go:117] "RemoveContainer" containerID="7a3a768af9fbd872c25141acad85da9d3fc38d83feabb6065d494b53a0c5e709" Jan 29 07:03:03 crc kubenswrapper[4826]: E0129 07:03:03.172013 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7a3a768af9fbd872c25141acad85da9d3fc38d83feabb6065d494b53a0c5e709\": container with ID starting with 7a3a768af9fbd872c25141acad85da9d3fc38d83feabb6065d494b53a0c5e709 not found: ID does not exist" containerID="7a3a768af9fbd872c25141acad85da9d3fc38d83feabb6065d494b53a0c5e709" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.172058 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a3a768af9fbd872c25141acad85da9d3fc38d83feabb6065d494b53a0c5e709"} err="failed to get container status \"7a3a768af9fbd872c25141acad85da9d3fc38d83feabb6065d494b53a0c5e709\": rpc error: code = NotFound desc = could not find container \"7a3a768af9fbd872c25141acad85da9d3fc38d83feabb6065d494b53a0c5e709\": container with ID starting with 7a3a768af9fbd872c25141acad85da9d3fc38d83feabb6065d494b53a0c5e709 not found: ID does not exist" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.172086 4826 scope.go:117] "RemoveContainer" containerID="1f4d560bfedfe2f5f1b61ec8429dc568b6f5c3273715bfd6cf5af2ad7bcb1790" Jan 29 07:03:03 crc kubenswrapper[4826]: E0129 07:03:03.172809 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f4d560bfedfe2f5f1b61ec8429dc568b6f5c3273715bfd6cf5af2ad7bcb1790\": container with ID starting with 1f4d560bfedfe2f5f1b61ec8429dc568b6f5c3273715bfd6cf5af2ad7bcb1790 not found: ID does not exist" containerID="1f4d560bfedfe2f5f1b61ec8429dc568b6f5c3273715bfd6cf5af2ad7bcb1790" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.172834 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f4d560bfedfe2f5f1b61ec8429dc568b6f5c3273715bfd6cf5af2ad7bcb1790"} err="failed to get container status \"1f4d560bfedfe2f5f1b61ec8429dc568b6f5c3273715bfd6cf5af2ad7bcb1790\": rpc error: code = NotFound desc = could not find container \"1f4d560bfedfe2f5f1b61ec8429dc568b6f5c3273715bfd6cf5af2ad7bcb1790\": container with ID 
starting with 1f4d560bfedfe2f5f1b61ec8429dc568b6f5c3273715bfd6cf5af2ad7bcb1790 not found: ID does not exist" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.211657 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4tptt" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.358231 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp4sm\" (UniqueName: \"kubernetes.io/projected/fde1114c-0d7b-4f87-9203-2dff8fd98201-kube-api-access-vp4sm\") pod \"fde1114c-0d7b-4f87-9203-2dff8fd98201\" (UID: \"fde1114c-0d7b-4f87-9203-2dff8fd98201\") " Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.358584 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fde1114c-0d7b-4f87-9203-2dff8fd98201-config-data\") pod \"fde1114c-0d7b-4f87-9203-2dff8fd98201\" (UID: \"fde1114c-0d7b-4f87-9203-2dff8fd98201\") " Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.358702 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fde1114c-0d7b-4f87-9203-2dff8fd98201-scripts\") pod \"fde1114c-0d7b-4f87-9203-2dff8fd98201\" (UID: \"fde1114c-0d7b-4f87-9203-2dff8fd98201\") " Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.358738 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fde1114c-0d7b-4f87-9203-2dff8fd98201-fernet-keys\") pod \"fde1114c-0d7b-4f87-9203-2dff8fd98201\" (UID: \"fde1114c-0d7b-4f87-9203-2dff8fd98201\") " Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.358754 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fde1114c-0d7b-4f87-9203-2dff8fd98201-credential-keys\") pod \"fde1114c-0d7b-4f87-9203-2dff8fd98201\" (UID: 
\"fde1114c-0d7b-4f87-9203-2dff8fd98201\") " Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.358802 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde1114c-0d7b-4f87-9203-2dff8fd98201-combined-ca-bundle\") pod \"fde1114c-0d7b-4f87-9203-2dff8fd98201\" (UID: \"fde1114c-0d7b-4f87-9203-2dff8fd98201\") " Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.365209 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fde1114c-0d7b-4f87-9203-2dff8fd98201-kube-api-access-vp4sm" (OuterVolumeSpecName: "kube-api-access-vp4sm") pod "fde1114c-0d7b-4f87-9203-2dff8fd98201" (UID: "fde1114c-0d7b-4f87-9203-2dff8fd98201"). InnerVolumeSpecName "kube-api-access-vp4sm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.366357 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde1114c-0d7b-4f87-9203-2dff8fd98201-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "fde1114c-0d7b-4f87-9203-2dff8fd98201" (UID: "fde1114c-0d7b-4f87-9203-2dff8fd98201"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.366751 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde1114c-0d7b-4f87-9203-2dff8fd98201-scripts" (OuterVolumeSpecName: "scripts") pod "fde1114c-0d7b-4f87-9203-2dff8fd98201" (UID: "fde1114c-0d7b-4f87-9203-2dff8fd98201"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.370555 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde1114c-0d7b-4f87-9203-2dff8fd98201-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "fde1114c-0d7b-4f87-9203-2dff8fd98201" (UID: "fde1114c-0d7b-4f87-9203-2dff8fd98201"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.384898 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde1114c-0d7b-4f87-9203-2dff8fd98201-config-data" (OuterVolumeSpecName: "config-data") pod "fde1114c-0d7b-4f87-9203-2dff8fd98201" (UID: "fde1114c-0d7b-4f87-9203-2dff8fd98201"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.390862 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde1114c-0d7b-4f87-9203-2dff8fd98201-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fde1114c-0d7b-4f87-9203-2dff8fd98201" (UID: "fde1114c-0d7b-4f87-9203-2dff8fd98201"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.452758 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.460871 4826 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fde1114c-0d7b-4f87-9203-2dff8fd98201-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.460896 4826 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fde1114c-0d7b-4f87-9203-2dff8fd98201-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.460908 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde1114c-0d7b-4f87-9203-2dff8fd98201-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.460918 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vp4sm\" (UniqueName: \"kubernetes.io/projected/fde1114c-0d7b-4f87-9203-2dff8fd98201-kube-api-access-vp4sm\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.460938 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fde1114c-0d7b-4f87-9203-2dff8fd98201-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.460974 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fde1114c-0d7b-4f87-9203-2dff8fd98201-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.526914 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.561544 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ff3b927-9cd2-45f7-ac9e-abc9bcf63171-config-data\") pod \"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171\" (UID: \"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171\") " Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.561630 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsswh\" (UniqueName: \"kubernetes.io/projected/5ff3b927-9cd2-45f7-ac9e-abc9bcf63171-kube-api-access-jsswh\") pod \"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171\" (UID: \"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171\") " Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.561661 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5ff3b927-9cd2-45f7-ac9e-abc9bcf63171-httpd-run\") pod \"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171\" (UID: \"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171\") " Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.561719 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171\" (UID: \"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171\") " Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.561743 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ff3b927-9cd2-45f7-ac9e-abc9bcf63171-logs\") pod \"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171\" (UID: \"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171\") " Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.561788 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5ff3b927-9cd2-45f7-ac9e-abc9bcf63171-scripts\") pod \"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171\" (UID: \"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171\") " Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.561877 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ff3b927-9cd2-45f7-ac9e-abc9bcf63171-combined-ca-bundle\") pod \"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171\" (UID: \"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171\") " Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.562958 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ff3b927-9cd2-45f7-ac9e-abc9bcf63171-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5ff3b927-9cd2-45f7-ac9e-abc9bcf63171" (UID: "5ff3b927-9cd2-45f7-ac9e-abc9bcf63171"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.563660 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ff3b927-9cd2-45f7-ac9e-abc9bcf63171-logs" (OuterVolumeSpecName: "logs") pod "5ff3b927-9cd2-45f7-ac9e-abc9bcf63171" (UID: "5ff3b927-9cd2-45f7-ac9e-abc9bcf63171"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.566404 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ff3b927-9cd2-45f7-ac9e-abc9bcf63171-kube-api-access-jsswh" (OuterVolumeSpecName: "kube-api-access-jsswh") pod "5ff3b927-9cd2-45f7-ac9e-abc9bcf63171" (UID: "5ff3b927-9cd2-45f7-ac9e-abc9bcf63171"). InnerVolumeSpecName "kube-api-access-jsswh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.566763 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "5ff3b927-9cd2-45f7-ac9e-abc9bcf63171" (UID: "5ff3b927-9cd2-45f7-ac9e-abc9bcf63171"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.566915 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ff3b927-9cd2-45f7-ac9e-abc9bcf63171-scripts" (OuterVolumeSpecName: "scripts") pod "5ff3b927-9cd2-45f7-ac9e-abc9bcf63171" (UID: "5ff3b927-9cd2-45f7-ac9e-abc9bcf63171"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.590044 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ff3b927-9cd2-45f7-ac9e-abc9bcf63171-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5ff3b927-9cd2-45f7-ac9e-abc9bcf63171" (UID: "5ff3b927-9cd2-45f7-ac9e-abc9bcf63171"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.608564 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ff3b927-9cd2-45f7-ac9e-abc9bcf63171-config-data" (OuterVolumeSpecName: "config-data") pod "5ff3b927-9cd2-45f7-ac9e-abc9bcf63171" (UID: "5ff3b927-9cd2-45f7-ac9e-abc9bcf63171"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.663830 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nb5m\" (UniqueName: \"kubernetes.io/projected/775143aa-5dd0-4e34-bb39-d0246bb86fed-kube-api-access-4nb5m\") pod \"775143aa-5dd0-4e34-bb39-d0246bb86fed\" (UID: \"775143aa-5dd0-4e34-bb39-d0246bb86fed\") " Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.663928 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/775143aa-5dd0-4e34-bb39-d0246bb86fed-scripts\") pod \"775143aa-5dd0-4e34-bb39-d0246bb86fed\" (UID: \"775143aa-5dd0-4e34-bb39-d0246bb86fed\") " Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.663963 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"775143aa-5dd0-4e34-bb39-d0246bb86fed\" (UID: \"775143aa-5dd0-4e34-bb39-d0246bb86fed\") " Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.663982 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/775143aa-5dd0-4e34-bb39-d0246bb86fed-combined-ca-bundle\") pod \"775143aa-5dd0-4e34-bb39-d0246bb86fed\" (UID: \"775143aa-5dd0-4e34-bb39-d0246bb86fed\") " Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.664144 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/775143aa-5dd0-4e34-bb39-d0246bb86fed-logs\") pod \"775143aa-5dd0-4e34-bb39-d0246bb86fed\" (UID: \"775143aa-5dd0-4e34-bb39-d0246bb86fed\") " Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.664161 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/775143aa-5dd0-4e34-bb39-d0246bb86fed-httpd-run\") pod \"775143aa-5dd0-4e34-bb39-d0246bb86fed\" (UID: \"775143aa-5dd0-4e34-bb39-d0246bb86fed\") " Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.664180 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/775143aa-5dd0-4e34-bb39-d0246bb86fed-config-data\") pod \"775143aa-5dd0-4e34-bb39-d0246bb86fed\" (UID: \"775143aa-5dd0-4e34-bb39-d0246bb86fed\") " Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.664549 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ff3b927-9cd2-45f7-ac9e-abc9bcf63171-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.664567 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsswh\" (UniqueName: \"kubernetes.io/projected/5ff3b927-9cd2-45f7-ac9e-abc9bcf63171-kube-api-access-jsswh\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.664577 4826 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5ff3b927-9cd2-45f7-ac9e-abc9bcf63171-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.664598 4826 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.664607 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ff3b927-9cd2-45f7-ac9e-abc9bcf63171-logs\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.664615 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5ff3b927-9cd2-45f7-ac9e-abc9bcf63171-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.664623 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ff3b927-9cd2-45f7-ac9e-abc9bcf63171-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.664939 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/775143aa-5dd0-4e34-bb39-d0246bb86fed-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "775143aa-5dd0-4e34-bb39-d0246bb86fed" (UID: "775143aa-5dd0-4e34-bb39-d0246bb86fed"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.665161 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/775143aa-5dd0-4e34-bb39-d0246bb86fed-logs" (OuterVolumeSpecName: "logs") pod "775143aa-5dd0-4e34-bb39-d0246bb86fed" (UID: "775143aa-5dd0-4e34-bb39-d0246bb86fed"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.667847 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/775143aa-5dd0-4e34-bb39-d0246bb86fed-kube-api-access-4nb5m" (OuterVolumeSpecName: "kube-api-access-4nb5m") pod "775143aa-5dd0-4e34-bb39-d0246bb86fed" (UID: "775143aa-5dd0-4e34-bb39-d0246bb86fed"). InnerVolumeSpecName "kube-api-access-4nb5m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.668077 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/775143aa-5dd0-4e34-bb39-d0246bb86fed-scripts" (OuterVolumeSpecName: "scripts") pod "775143aa-5dd0-4e34-bb39-d0246bb86fed" (UID: "775143aa-5dd0-4e34-bb39-d0246bb86fed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.669419 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "775143aa-5dd0-4e34-bb39-d0246bb86fed" (UID: "775143aa-5dd0-4e34-bb39-d0246bb86fed"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.688084 4826 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.690199 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/775143aa-5dd0-4e34-bb39-d0246bb86fed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "775143aa-5dd0-4e34-bb39-d0246bb86fed" (UID: "775143aa-5dd0-4e34-bb39-d0246bb86fed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.713760 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/775143aa-5dd0-4e34-bb39-d0246bb86fed-config-data" (OuterVolumeSpecName: "config-data") pod "775143aa-5dd0-4e34-bb39-d0246bb86fed" (UID: "775143aa-5dd0-4e34-bb39-d0246bb86fed"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.765899 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/775143aa-5dd0-4e34-bb39-d0246bb86fed-logs\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.765969 4826 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/775143aa-5dd0-4e34-bb39-d0246bb86fed-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.765978 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/775143aa-5dd0-4e34-bb39-d0246bb86fed-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.765994 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nb5m\" (UniqueName: \"kubernetes.io/projected/775143aa-5dd0-4e34-bb39-d0246bb86fed-kube-api-access-4nb5m\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.766004 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/775143aa-5dd0-4e34-bb39-d0246bb86fed-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.766037 4826 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.766046 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/775143aa-5dd0-4e34-bb39-d0246bb86fed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.766056 4826 reconciler_common.go:293] "Volume detached for volume 
\"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.785771 4826 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.859276 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-86487d6456-mmjgq"] Jan 29 07:03:03 crc kubenswrapper[4826]: E0129 07:03:03.860829 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ff3b927-9cd2-45f7-ac9e-abc9bcf63171" containerName="glance-httpd" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.861122 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ff3b927-9cd2-45f7-ac9e-abc9bcf63171" containerName="glance-httpd" Jan 29 07:03:03 crc kubenswrapper[4826]: E0129 07:03:03.861180 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="775143aa-5dd0-4e34-bb39-d0246bb86fed" containerName="glance-log" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.861226 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="775143aa-5dd0-4e34-bb39-d0246bb86fed" containerName="glance-log" Jan 29 07:03:03 crc kubenswrapper[4826]: E0129 07:03:03.861276 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fde1114c-0d7b-4f87-9203-2dff8fd98201" containerName="keystone-bootstrap" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.861337 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="fde1114c-0d7b-4f87-9203-2dff8fd98201" containerName="keystone-bootstrap" Jan 29 07:03:03 crc kubenswrapper[4826]: E0129 07:03:03.861394 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ff3b927-9cd2-45f7-ac9e-abc9bcf63171" containerName="glance-log" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.861448 4826 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="5ff3b927-9cd2-45f7-ac9e-abc9bcf63171" containerName="glance-log" Jan 29 07:03:03 crc kubenswrapper[4826]: E0129 07:03:03.861501 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33" containerName="dnsmasq-dns" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.861545 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33" containerName="dnsmasq-dns" Jan 29 07:03:03 crc kubenswrapper[4826]: E0129 07:03:03.861593 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33" containerName="init" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.861637 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33" containerName="init" Jan 29 07:03:03 crc kubenswrapper[4826]: E0129 07:03:03.861681 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="775143aa-5dd0-4e34-bb39-d0246bb86fed" containerName="glance-httpd" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.861726 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="775143aa-5dd0-4e34-bb39-d0246bb86fed" containerName="glance-httpd" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.861982 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="fde1114c-0d7b-4f87-9203-2dff8fd98201" containerName="keystone-bootstrap" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.862047 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ff3b927-9cd2-45f7-ac9e-abc9bcf63171" containerName="glance-httpd" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.862106 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ff3b927-9cd2-45f7-ac9e-abc9bcf63171" containerName="glance-log" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.862157 4826 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="775143aa-5dd0-4e34-bb39-d0246bb86fed" containerName="glance-httpd" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.862216 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="775143aa-5dd0-4e34-bb39-d0246bb86fed" containerName="glance-log" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.862277 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33" containerName="dnsmasq-dns" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.862903 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-86487d6456-mmjgq" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.866013 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.867266 4826 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.867691 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.879042 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-86487d6456-mmjgq"] Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.897382 4826 generic.go:334] "Generic (PLEG): container finished" podID="5ff3b927-9cd2-45f7-ac9e-abc9bcf63171" containerID="10528f9af18e031d36d0c74e8719c81944c9ed6619091c98794c4215f82d3ca8" exitCode=0 Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.897424 4826 generic.go:334] "Generic (PLEG): container finished" podID="5ff3b927-9cd2-45f7-ac9e-abc9bcf63171" containerID="d93dda2774193b92acd8ca5b3ef6dfc8815cf6ad892e4d5c6003a965fa350bac" exitCode=143 Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.897472 4826 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.897521 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171","Type":"ContainerDied","Data":"10528f9af18e031d36d0c74e8719c81944c9ed6619091c98794c4215f82d3ca8"} Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.897578 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171","Type":"ContainerDied","Data":"d93dda2774193b92acd8ca5b3ef6dfc8815cf6ad892e4d5c6003a965fa350bac"} Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.897590 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5ff3b927-9cd2-45f7-ac9e-abc9bcf63171","Type":"ContainerDied","Data":"2e90269d3c6f0e1608a325f406dc9e7f28f7fe0a1cddd1fdd7330a2f29500651"} Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.897610 4826 scope.go:117] "RemoveContainer" containerID="10528f9af18e031d36d0c74e8719c81944c9ed6619091c98794c4215f82d3ca8" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.901609 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-w9nrn" event={"ID":"f17521c8-b15d-42ac-b28d-692acd3063ef","Type":"ContainerStarted","Data":"9813ded057206a93193655b78747d9dd0b05f6eebc6a55ef94e1161f52274619"} Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.902871 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5dc4fcdbc-w9nrn" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.905386 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4tptt" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.905396 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4tptt" event={"ID":"fde1114c-0d7b-4f87-9203-2dff8fd98201","Type":"ContainerDied","Data":"269476f29444fc78b2f59580c86020a93d913a5868baa8d54935a9aa5f2b166b"} Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.905432 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="269476f29444fc78b2f59580c86020a93d913a5868baa8d54935a9aa5f2b166b" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.909766 4826 generic.go:334] "Generic (PLEG): container finished" podID="775143aa-5dd0-4e34-bb39-d0246bb86fed" containerID="b0df0cf1754d8922de02e44bf3832702d7d6bb554c61dd6b0c8e325797de1a9b" exitCode=0 Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.909800 4826 generic.go:334] "Generic (PLEG): container finished" podID="775143aa-5dd0-4e34-bb39-d0246bb86fed" containerID="d58bcef8ecec359412c453edfdb86194575f98e21ab68c876f49aa50585bc691" exitCode=143 Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.910021 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.910465 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"775143aa-5dd0-4e34-bb39-d0246bb86fed","Type":"ContainerDied","Data":"b0df0cf1754d8922de02e44bf3832702d7d6bb554c61dd6b0c8e325797de1a9b"} Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.910499 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"775143aa-5dd0-4e34-bb39-d0246bb86fed","Type":"ContainerDied","Data":"d58bcef8ecec359412c453edfdb86194575f98e21ab68c876f49aa50585bc691"} Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.910530 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"775143aa-5dd0-4e34-bb39-d0246bb86fed","Type":"ContainerDied","Data":"571125d14d443ce985ee2dfbd9622a73fe88e6d5132451e2dc890abb03b08ebe"} Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.930100 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5dc4fcdbc-w9nrn" podStartSLOduration=2.930080514 podStartE2EDuration="2.930080514s" podCreationTimestamp="2026-01-29 07:03:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:03:03.924035658 +0000 UTC m=+1167.785828727" watchObservedRunningTime="2026-01-29 07:03:03.930080514 +0000 UTC m=+1167.791873583" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.968700 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-config-data\") pod \"keystone-86487d6456-mmjgq\" (UID: \"d9016472-5ff0-4849-bc8a-c1d815d27931\") " pod="openstack/keystone-86487d6456-mmjgq" Jan 29 07:03:03 crc 
kubenswrapper[4826]: I0129 07:03:03.968744 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-internal-tls-certs\") pod \"keystone-86487d6456-mmjgq\" (UID: \"d9016472-5ff0-4849-bc8a-c1d815d27931\") " pod="openstack/keystone-86487d6456-mmjgq" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.968792 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-combined-ca-bundle\") pod \"keystone-86487d6456-mmjgq\" (UID: \"d9016472-5ff0-4849-bc8a-c1d815d27931\") " pod="openstack/keystone-86487d6456-mmjgq" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.968908 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-scripts\") pod \"keystone-86487d6456-mmjgq\" (UID: \"d9016472-5ff0-4849-bc8a-c1d815d27931\") " pod="openstack/keystone-86487d6456-mmjgq" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.968955 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98tzx\" (UniqueName: \"kubernetes.io/projected/d9016472-5ff0-4849-bc8a-c1d815d27931-kube-api-access-98tzx\") pod \"keystone-86487d6456-mmjgq\" (UID: \"d9016472-5ff0-4849-bc8a-c1d815d27931\") " pod="openstack/keystone-86487d6456-mmjgq" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.969041 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-fernet-keys\") pod \"keystone-86487d6456-mmjgq\" (UID: \"d9016472-5ff0-4849-bc8a-c1d815d27931\") " pod="openstack/keystone-86487d6456-mmjgq" Jan 29 
07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.969120 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-credential-keys\") pod \"keystone-86487d6456-mmjgq\" (UID: \"d9016472-5ff0-4849-bc8a-c1d815d27931\") " pod="openstack/keystone-86487d6456-mmjgq" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.969286 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-public-tls-certs\") pod \"keystone-86487d6456-mmjgq\" (UID: \"d9016472-5ff0-4849-bc8a-c1d815d27931\") " pod="openstack/keystone-86487d6456-mmjgq" Jan 29 07:03:03 crc kubenswrapper[4826]: I0129 07:03:03.985750 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.000215 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.012705 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.041918 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.043555 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.045758 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.046117 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-z4mwb" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.046364 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.046516 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.058699 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.071272 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-combined-ca-bundle\") pod \"keystone-86487d6456-mmjgq\" (UID: \"d9016472-5ff0-4849-bc8a-c1d815d27931\") " pod="openstack/keystone-86487d6456-mmjgq" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.071350 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-scripts\") pod \"keystone-86487d6456-mmjgq\" (UID: \"d9016472-5ff0-4849-bc8a-c1d815d27931\") " pod="openstack/keystone-86487d6456-mmjgq" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.071383 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98tzx\" (UniqueName: \"kubernetes.io/projected/d9016472-5ff0-4849-bc8a-c1d815d27931-kube-api-access-98tzx\") pod \"keystone-86487d6456-mmjgq\" (UID: 
\"d9016472-5ff0-4849-bc8a-c1d815d27931\") " pod="openstack/keystone-86487d6456-mmjgq" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.071430 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-fernet-keys\") pod \"keystone-86487d6456-mmjgq\" (UID: \"d9016472-5ff0-4849-bc8a-c1d815d27931\") " pod="openstack/keystone-86487d6456-mmjgq" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.071470 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-credential-keys\") pod \"keystone-86487d6456-mmjgq\" (UID: \"d9016472-5ff0-4849-bc8a-c1d815d27931\") " pod="openstack/keystone-86487d6456-mmjgq" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.071503 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-public-tls-certs\") pod \"keystone-86487d6456-mmjgq\" (UID: \"d9016472-5ff0-4849-bc8a-c1d815d27931\") " pod="openstack/keystone-86487d6456-mmjgq" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.071564 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-config-data\") pod \"keystone-86487d6456-mmjgq\" (UID: \"d9016472-5ff0-4849-bc8a-c1d815d27931\") " pod="openstack/keystone-86487d6456-mmjgq" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.071588 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-internal-tls-certs\") pod \"keystone-86487d6456-mmjgq\" (UID: \"d9016472-5ff0-4849-bc8a-c1d815d27931\") " pod="openstack/keystone-86487d6456-mmjgq" Jan 29 
07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.082763 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-internal-tls-certs\") pod \"keystone-86487d6456-mmjgq\" (UID: \"d9016472-5ff0-4849-bc8a-c1d815d27931\") " pod="openstack/keystone-86487d6456-mmjgq" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.083477 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-credential-keys\") pod \"keystone-86487d6456-mmjgq\" (UID: \"d9016472-5ff0-4849-bc8a-c1d815d27931\") " pod="openstack/keystone-86487d6456-mmjgq" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.089176 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-fernet-keys\") pod \"keystone-86487d6456-mmjgq\" (UID: \"d9016472-5ff0-4849-bc8a-c1d815d27931\") " pod="openstack/keystone-86487d6456-mmjgq" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.093716 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.097246 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-scripts\") pod \"keystone-86487d6456-mmjgq\" (UID: \"d9016472-5ff0-4849-bc8a-c1d815d27931\") " pod="openstack/keystone-86487d6456-mmjgq" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.098075 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-config-data\") pod \"keystone-86487d6456-mmjgq\" (UID: \"d9016472-5ff0-4849-bc8a-c1d815d27931\") " 
pod="openstack/keystone-86487d6456-mmjgq" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.106311 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-public-tls-certs\") pod \"keystone-86487d6456-mmjgq\" (UID: \"d9016472-5ff0-4849-bc8a-c1d815d27931\") " pod="openstack/keystone-86487d6456-mmjgq" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.109275 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98tzx\" (UniqueName: \"kubernetes.io/projected/d9016472-5ff0-4849-bc8a-c1d815d27931-kube-api-access-98tzx\") pod \"keystone-86487d6456-mmjgq\" (UID: \"d9016472-5ff0-4849-bc8a-c1d815d27931\") " pod="openstack/keystone-86487d6456-mmjgq" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.133537 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.143687 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.150649 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.152077 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.155002 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-combined-ca-bundle\") pod \"keystone-86487d6456-mmjgq\" (UID: \"d9016472-5ff0-4849-bc8a-c1d815d27931\") " pod="openstack/keystone-86487d6456-mmjgq" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.155920 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.184393 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cbf55335-feed-4467-9375-9543d111bc55-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cbf55335-feed-4467-9375-9543d111bc55\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.187100 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbf55335-feed-4467-9375-9543d111bc55-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cbf55335-feed-4467-9375-9543d111bc55\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.187435 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4xzt\" (UniqueName: 
\"kubernetes.io/projected/cbf55335-feed-4467-9375-9543d111bc55-kube-api-access-c4xzt\") pod \"glance-default-internal-api-0\" (UID: \"cbf55335-feed-4467-9375-9543d111bc55\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.187566 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbf55335-feed-4467-9375-9543d111bc55-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cbf55335-feed-4467-9375-9543d111bc55\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.192214 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"cbf55335-feed-4467-9375-9543d111bc55\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.192259 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbf55335-feed-4467-9375-9543d111bc55-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cbf55335-feed-4467-9375-9543d111bc55\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.192473 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbf55335-feed-4467-9375-9543d111bc55-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cbf55335-feed-4467-9375-9543d111bc55\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.192510 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbf55335-feed-4467-9375-9543d111bc55-logs\") pod \"glance-default-internal-api-0\" (UID: \"cbf55335-feed-4467-9375-9543d111bc55\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.193737 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-86487d6456-mmjgq" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.294817 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/017465e9-fb85-458e-8eca-109192bf47e7-config-data\") pod \"glance-default-external-api-0\" (UID: \"017465e9-fb85-458e-8eca-109192bf47e7\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.294871 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbf55335-feed-4467-9375-9543d111bc55-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cbf55335-feed-4467-9375-9543d111bc55\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.294895 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbf55335-feed-4467-9375-9543d111bc55-logs\") pod \"glance-default-internal-api-0\" (UID: \"cbf55335-feed-4467-9375-9543d111bc55\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.294929 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/017465e9-fb85-458e-8eca-109192bf47e7-logs\") pod \"glance-default-external-api-0\" (UID: \"017465e9-fb85-458e-8eca-109192bf47e7\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:04 crc 
kubenswrapper[4826]: I0129 07:03:04.294950 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cbf55335-feed-4467-9375-9543d111bc55-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cbf55335-feed-4467-9375-9543d111bc55\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.294986 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbf55335-feed-4467-9375-9543d111bc55-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cbf55335-feed-4467-9375-9543d111bc55\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.295008 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/017465e9-fb85-458e-8eca-109192bf47e7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"017465e9-fb85-458e-8eca-109192bf47e7\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.295043 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/017465e9-fb85-458e-8eca-109192bf47e7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"017465e9-fb85-458e-8eca-109192bf47e7\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.295076 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/017465e9-fb85-458e-8eca-109192bf47e7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"017465e9-fb85-458e-8eca-109192bf47e7\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:04 crc 
kubenswrapper[4826]: I0129 07:03:04.295169 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4xzt\" (UniqueName: \"kubernetes.io/projected/cbf55335-feed-4467-9375-9543d111bc55-kube-api-access-c4xzt\") pod \"glance-default-internal-api-0\" (UID: \"cbf55335-feed-4467-9375-9543d111bc55\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.295205 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbf55335-feed-4467-9375-9543d111bc55-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cbf55335-feed-4467-9375-9543d111bc55\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.295231 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/017465e9-fb85-458e-8eca-109192bf47e7-scripts\") pod \"glance-default-external-api-0\" (UID: \"017465e9-fb85-458e-8eca-109192bf47e7\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.295251 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"cbf55335-feed-4467-9375-9543d111bc55\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.295266 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"017465e9-fb85-458e-8eca-109192bf47e7\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.295284 4826 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbf55335-feed-4467-9375-9543d111bc55-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cbf55335-feed-4467-9375-9543d111bc55\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.295316 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hkgx\" (UniqueName: \"kubernetes.io/projected/017465e9-fb85-458e-8eca-109192bf47e7-kube-api-access-7hkgx\") pod \"glance-default-external-api-0\" (UID: \"017465e9-fb85-458e-8eca-109192bf47e7\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.295780 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cbf55335-feed-4467-9375-9543d111bc55-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cbf55335-feed-4467-9375-9543d111bc55\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.295997 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"cbf55335-feed-4467-9375-9543d111bc55\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.296215 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbf55335-feed-4467-9375-9543d111bc55-logs\") pod \"glance-default-internal-api-0\" (UID: \"cbf55335-feed-4467-9375-9543d111bc55\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.302657 4826 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbf55335-feed-4467-9375-9543d111bc55-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cbf55335-feed-4467-9375-9543d111bc55\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.303253 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbf55335-feed-4467-9375-9543d111bc55-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cbf55335-feed-4467-9375-9543d111bc55\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.304896 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbf55335-feed-4467-9375-9543d111bc55-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cbf55335-feed-4467-9375-9543d111bc55\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.321976 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4xzt\" (UniqueName: \"kubernetes.io/projected/cbf55335-feed-4467-9375-9543d111bc55-kube-api-access-c4xzt\") pod \"glance-default-internal-api-0\" (UID: \"cbf55335-feed-4467-9375-9543d111bc55\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.323055 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbf55335-feed-4467-9375-9543d111bc55-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cbf55335-feed-4467-9375-9543d111bc55\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.331521 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"cbf55335-feed-4467-9375-9543d111bc55\") " pod="openstack/glance-default-internal-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.397578 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/017465e9-fb85-458e-8eca-109192bf47e7-scripts\") pod \"glance-default-external-api-0\" (UID: \"017465e9-fb85-458e-8eca-109192bf47e7\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.397623 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"017465e9-fb85-458e-8eca-109192bf47e7\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.397650 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hkgx\" (UniqueName: \"kubernetes.io/projected/017465e9-fb85-458e-8eca-109192bf47e7-kube-api-access-7hkgx\") pod \"glance-default-external-api-0\" (UID: \"017465e9-fb85-458e-8eca-109192bf47e7\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.397721 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/017465e9-fb85-458e-8eca-109192bf47e7-config-data\") pod \"glance-default-external-api-0\" (UID: \"017465e9-fb85-458e-8eca-109192bf47e7\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.398019 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/017465e9-fb85-458e-8eca-109192bf47e7-logs\") pod \"glance-default-external-api-0\" (UID: 
\"017465e9-fb85-458e-8eca-109192bf47e7\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.398060 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/017465e9-fb85-458e-8eca-109192bf47e7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"017465e9-fb85-458e-8eca-109192bf47e7\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.398092 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/017465e9-fb85-458e-8eca-109192bf47e7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"017465e9-fb85-458e-8eca-109192bf47e7\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.398116 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/017465e9-fb85-458e-8eca-109192bf47e7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"017465e9-fb85-458e-8eca-109192bf47e7\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.399913 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/017465e9-fb85-458e-8eca-109192bf47e7-logs\") pod \"glance-default-external-api-0\" (UID: \"017465e9-fb85-458e-8eca-109192bf47e7\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.399904 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/017465e9-fb85-458e-8eca-109192bf47e7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"017465e9-fb85-458e-8eca-109192bf47e7\") " pod="openstack/glance-default-external-api-0" Jan 
29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.400185 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"017465e9-fb85-458e-8eca-109192bf47e7\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.402793 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/017465e9-fb85-458e-8eca-109192bf47e7-scripts\") pod \"glance-default-external-api-0\" (UID: \"017465e9-fb85-458e-8eca-109192bf47e7\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.405840 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/017465e9-fb85-458e-8eca-109192bf47e7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"017465e9-fb85-458e-8eca-109192bf47e7\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.415878 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/017465e9-fb85-458e-8eca-109192bf47e7-config-data\") pod \"glance-default-external-api-0\" (UID: \"017465e9-fb85-458e-8eca-109192bf47e7\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.416374 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/017465e9-fb85-458e-8eca-109192bf47e7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"017465e9-fb85-458e-8eca-109192bf47e7\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.421388 4826 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hkgx\" (UniqueName: \"kubernetes.io/projected/017465e9-fb85-458e-8eca-109192bf47e7-kube-api-access-7hkgx\") pod \"glance-default-external-api-0\" (UID: \"017465e9-fb85-458e-8eca-109192bf47e7\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.433579 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"017465e9-fb85-458e-8eca-109192bf47e7\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.470989 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.499727 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.822060 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33" path="/var/lib/kubelet/pods/5ebc1e81-f0ce-49f2-94bb-4b3f8358ec33/volumes" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.822911 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ff3b927-9cd2-45f7-ac9e-abc9bcf63171" path="/var/lib/kubelet/pods/5ff3b927-9cd2-45f7-ac9e-abc9bcf63171/volumes" Jan 29 07:03:04 crc kubenswrapper[4826]: I0129 07:03:04.823606 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="775143aa-5dd0-4e34-bb39-d0246bb86fed" path="/var/lib/kubelet/pods/775143aa-5dd0-4e34-bb39-d0246bb86fed/volumes" Jan 29 07:03:05 crc kubenswrapper[4826]: E0129 07:03:05.617153 4826 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad14a23d_71a9_4348_9e06_61db9b024821.slice/crio-32221b2b22f27820444f88b1c4d786966450cc34e86964c6b693b53b6785cd38\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad14a23d_71a9_4348_9e06_61db9b024821.slice\": RecentStats: unable to find data in memory cache]" Jan 29 07:03:05 crc kubenswrapper[4826]: I0129 07:03:05.656819 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:03:05 crc kubenswrapper[4826]: I0129 07:03:05.656882 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:03:06 crc kubenswrapper[4826]: I0129 07:03:06.932871 4826 generic.go:334] "Generic (PLEG): container finished" podID="ff69218e-ec33-4818-86be-a9ff92d3f40d" containerID="c8f387f435d969f8cf85b78f6f85ede7c54c71cb9b6213cda2569b918c6e2d3b" exitCode=0 Jan 29 07:03:06 crc kubenswrapper[4826]: I0129 07:03:06.932947 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-vgqpr" event={"ID":"ff69218e-ec33-4818-86be-a9ff92d3f40d","Type":"ContainerDied","Data":"c8f387f435d969f8cf85b78f6f85ede7c54c71cb9b6213cda2569b918c6e2d3b"} Jan 29 07:03:07 crc kubenswrapper[4826]: I0129 07:03:07.371018 4826 scope.go:117] "RemoveContainer" containerID="d93dda2774193b92acd8ca5b3ef6dfc8815cf6ad892e4d5c6003a965fa350bac" Jan 29 07:03:07 crc kubenswrapper[4826]: I0129 07:03:07.526308 4826 scope.go:117] "RemoveContainer" 
containerID="10528f9af18e031d36d0c74e8719c81944c9ed6619091c98794c4215f82d3ca8" Jan 29 07:03:07 crc kubenswrapper[4826]: E0129 07:03:07.527652 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10528f9af18e031d36d0c74e8719c81944c9ed6619091c98794c4215f82d3ca8\": container with ID starting with 10528f9af18e031d36d0c74e8719c81944c9ed6619091c98794c4215f82d3ca8 not found: ID does not exist" containerID="10528f9af18e031d36d0c74e8719c81944c9ed6619091c98794c4215f82d3ca8" Jan 29 07:03:07 crc kubenswrapper[4826]: I0129 07:03:07.527710 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10528f9af18e031d36d0c74e8719c81944c9ed6619091c98794c4215f82d3ca8"} err="failed to get container status \"10528f9af18e031d36d0c74e8719c81944c9ed6619091c98794c4215f82d3ca8\": rpc error: code = NotFound desc = could not find container \"10528f9af18e031d36d0c74e8719c81944c9ed6619091c98794c4215f82d3ca8\": container with ID starting with 10528f9af18e031d36d0c74e8719c81944c9ed6619091c98794c4215f82d3ca8 not found: ID does not exist" Jan 29 07:03:07 crc kubenswrapper[4826]: I0129 07:03:07.527756 4826 scope.go:117] "RemoveContainer" containerID="d93dda2774193b92acd8ca5b3ef6dfc8815cf6ad892e4d5c6003a965fa350bac" Jan 29 07:03:07 crc kubenswrapper[4826]: E0129 07:03:07.529488 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d93dda2774193b92acd8ca5b3ef6dfc8815cf6ad892e4d5c6003a965fa350bac\": container with ID starting with d93dda2774193b92acd8ca5b3ef6dfc8815cf6ad892e4d5c6003a965fa350bac not found: ID does not exist" containerID="d93dda2774193b92acd8ca5b3ef6dfc8815cf6ad892e4d5c6003a965fa350bac" Jan 29 07:03:07 crc kubenswrapper[4826]: I0129 07:03:07.529509 4826 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d93dda2774193b92acd8ca5b3ef6dfc8815cf6ad892e4d5c6003a965fa350bac"} err="failed to get container status \"d93dda2774193b92acd8ca5b3ef6dfc8815cf6ad892e4d5c6003a965fa350bac\": rpc error: code = NotFound desc = could not find container \"d93dda2774193b92acd8ca5b3ef6dfc8815cf6ad892e4d5c6003a965fa350bac\": container with ID starting with d93dda2774193b92acd8ca5b3ef6dfc8815cf6ad892e4d5c6003a965fa350bac not found: ID does not exist" Jan 29 07:03:07 crc kubenswrapper[4826]: I0129 07:03:07.529523 4826 scope.go:117] "RemoveContainer" containerID="10528f9af18e031d36d0c74e8719c81944c9ed6619091c98794c4215f82d3ca8" Jan 29 07:03:07 crc kubenswrapper[4826]: I0129 07:03:07.529814 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10528f9af18e031d36d0c74e8719c81944c9ed6619091c98794c4215f82d3ca8"} err="failed to get container status \"10528f9af18e031d36d0c74e8719c81944c9ed6619091c98794c4215f82d3ca8\": rpc error: code = NotFound desc = could not find container \"10528f9af18e031d36d0c74e8719c81944c9ed6619091c98794c4215f82d3ca8\": container with ID starting with 10528f9af18e031d36d0c74e8719c81944c9ed6619091c98794c4215f82d3ca8 not found: ID does not exist" Jan 29 07:03:07 crc kubenswrapper[4826]: I0129 07:03:07.529835 4826 scope.go:117] "RemoveContainer" containerID="d93dda2774193b92acd8ca5b3ef6dfc8815cf6ad892e4d5c6003a965fa350bac" Jan 29 07:03:07 crc kubenswrapper[4826]: I0129 07:03:07.530189 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d93dda2774193b92acd8ca5b3ef6dfc8815cf6ad892e4d5c6003a965fa350bac"} err="failed to get container status \"d93dda2774193b92acd8ca5b3ef6dfc8815cf6ad892e4d5c6003a965fa350bac\": rpc error: code = NotFound desc = could not find container \"d93dda2774193b92acd8ca5b3ef6dfc8815cf6ad892e4d5c6003a965fa350bac\": container with ID starting with d93dda2774193b92acd8ca5b3ef6dfc8815cf6ad892e4d5c6003a965fa350bac not found: ID does not 
exist" Jan 29 07:03:07 crc kubenswrapper[4826]: I0129 07:03:07.530230 4826 scope.go:117] "RemoveContainer" containerID="b0df0cf1754d8922de02e44bf3832702d7d6bb554c61dd6b0c8e325797de1a9b" Jan 29 07:03:07 crc kubenswrapper[4826]: I0129 07:03:07.556230 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-lbrr5" Jan 29 07:03:07 crc kubenswrapper[4826]: I0129 07:03:07.567836 4826 scope.go:117] "RemoveContainer" containerID="d58bcef8ecec359412c453edfdb86194575f98e21ab68c876f49aa50585bc691" Jan 29 07:03:07 crc kubenswrapper[4826]: I0129 07:03:07.573069 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3ec8717-30a1-46a8-9224-4802c2b1c3e6-combined-ca-bundle\") pod \"e3ec8717-30a1-46a8-9224-4802c2b1c3e6\" (UID: \"e3ec8717-30a1-46a8-9224-4802c2b1c3e6\") " Jan 29 07:03:07 crc kubenswrapper[4826]: I0129 07:03:07.573140 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3ec8717-30a1-46a8-9224-4802c2b1c3e6-config-data\") pod \"e3ec8717-30a1-46a8-9224-4802c2b1c3e6\" (UID: \"e3ec8717-30a1-46a8-9224-4802c2b1c3e6\") " Jan 29 07:03:07 crc kubenswrapper[4826]: I0129 07:03:07.573170 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcmvz\" (UniqueName: \"kubernetes.io/projected/e3ec8717-30a1-46a8-9224-4802c2b1c3e6-kube-api-access-qcmvz\") pod \"e3ec8717-30a1-46a8-9224-4802c2b1c3e6\" (UID: \"e3ec8717-30a1-46a8-9224-4802c2b1c3e6\") " Jan 29 07:03:07 crc kubenswrapper[4826]: I0129 07:03:07.573204 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3ec8717-30a1-46a8-9224-4802c2b1c3e6-scripts\") pod \"e3ec8717-30a1-46a8-9224-4802c2b1c3e6\" (UID: \"e3ec8717-30a1-46a8-9224-4802c2b1c3e6\") " Jan 29 07:03:07 crc kubenswrapper[4826]: 
I0129 07:03:07.573306 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3ec8717-30a1-46a8-9224-4802c2b1c3e6-logs\") pod \"e3ec8717-30a1-46a8-9224-4802c2b1c3e6\" (UID: \"e3ec8717-30a1-46a8-9224-4802c2b1c3e6\") " Jan 29 07:03:07 crc kubenswrapper[4826]: I0129 07:03:07.573940 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3ec8717-30a1-46a8-9224-4802c2b1c3e6-logs" (OuterVolumeSpecName: "logs") pod "e3ec8717-30a1-46a8-9224-4802c2b1c3e6" (UID: "e3ec8717-30a1-46a8-9224-4802c2b1c3e6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:03:07 crc kubenswrapper[4826]: I0129 07:03:07.587370 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3ec8717-30a1-46a8-9224-4802c2b1c3e6-scripts" (OuterVolumeSpecName: "scripts") pod "e3ec8717-30a1-46a8-9224-4802c2b1c3e6" (UID: "e3ec8717-30a1-46a8-9224-4802c2b1c3e6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:07 crc kubenswrapper[4826]: I0129 07:03:07.598142 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3ec8717-30a1-46a8-9224-4802c2b1c3e6-kube-api-access-qcmvz" (OuterVolumeSpecName: "kube-api-access-qcmvz") pod "e3ec8717-30a1-46a8-9224-4802c2b1c3e6" (UID: "e3ec8717-30a1-46a8-9224-4802c2b1c3e6"). InnerVolumeSpecName "kube-api-access-qcmvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:03:07 crc kubenswrapper[4826]: I0129 07:03:07.610772 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3ec8717-30a1-46a8-9224-4802c2b1c3e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3ec8717-30a1-46a8-9224-4802c2b1c3e6" (UID: "e3ec8717-30a1-46a8-9224-4802c2b1c3e6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:07 crc kubenswrapper[4826]: I0129 07:03:07.648430 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3ec8717-30a1-46a8-9224-4802c2b1c3e6-config-data" (OuterVolumeSpecName: "config-data") pod "e3ec8717-30a1-46a8-9224-4802c2b1c3e6" (UID: "e3ec8717-30a1-46a8-9224-4802c2b1c3e6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:07 crc kubenswrapper[4826]: I0129 07:03:07.675236 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3ec8717-30a1-46a8-9224-4802c2b1c3e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:07 crc kubenswrapper[4826]: I0129 07:03:07.675270 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3ec8717-30a1-46a8-9224-4802c2b1c3e6-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:07 crc kubenswrapper[4826]: I0129 07:03:07.675279 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcmvz\" (UniqueName: \"kubernetes.io/projected/e3ec8717-30a1-46a8-9224-4802c2b1c3e6-kube-api-access-qcmvz\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:07 crc kubenswrapper[4826]: I0129 07:03:07.675289 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3ec8717-30a1-46a8-9224-4802c2b1c3e6-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:07 crc kubenswrapper[4826]: I0129 07:03:07.675336 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3ec8717-30a1-46a8-9224-4802c2b1c3e6-logs\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:07 crc kubenswrapper[4826]: I0129 07:03:07.699351 4826 scope.go:117] "RemoveContainer" containerID="b0df0cf1754d8922de02e44bf3832702d7d6bb554c61dd6b0c8e325797de1a9b" Jan 29 07:03:07 crc 
kubenswrapper[4826]: E0129 07:03:07.699726 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0df0cf1754d8922de02e44bf3832702d7d6bb554c61dd6b0c8e325797de1a9b\": container with ID starting with b0df0cf1754d8922de02e44bf3832702d7d6bb554c61dd6b0c8e325797de1a9b not found: ID does not exist" containerID="b0df0cf1754d8922de02e44bf3832702d7d6bb554c61dd6b0c8e325797de1a9b" Jan 29 07:03:07 crc kubenswrapper[4826]: I0129 07:03:07.699770 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0df0cf1754d8922de02e44bf3832702d7d6bb554c61dd6b0c8e325797de1a9b"} err="failed to get container status \"b0df0cf1754d8922de02e44bf3832702d7d6bb554c61dd6b0c8e325797de1a9b\": rpc error: code = NotFound desc = could not find container \"b0df0cf1754d8922de02e44bf3832702d7d6bb554c61dd6b0c8e325797de1a9b\": container with ID starting with b0df0cf1754d8922de02e44bf3832702d7d6bb554c61dd6b0c8e325797de1a9b not found: ID does not exist" Jan 29 07:03:07 crc kubenswrapper[4826]: I0129 07:03:07.699797 4826 scope.go:117] "RemoveContainer" containerID="d58bcef8ecec359412c453edfdb86194575f98e21ab68c876f49aa50585bc691" Jan 29 07:03:07 crc kubenswrapper[4826]: E0129 07:03:07.700060 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d58bcef8ecec359412c453edfdb86194575f98e21ab68c876f49aa50585bc691\": container with ID starting with d58bcef8ecec359412c453edfdb86194575f98e21ab68c876f49aa50585bc691 not found: ID does not exist" containerID="d58bcef8ecec359412c453edfdb86194575f98e21ab68c876f49aa50585bc691" Jan 29 07:03:07 crc kubenswrapper[4826]: I0129 07:03:07.700087 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d58bcef8ecec359412c453edfdb86194575f98e21ab68c876f49aa50585bc691"} err="failed to get container status 
\"d58bcef8ecec359412c453edfdb86194575f98e21ab68c876f49aa50585bc691\": rpc error: code = NotFound desc = could not find container \"d58bcef8ecec359412c453edfdb86194575f98e21ab68c876f49aa50585bc691\": container with ID starting with d58bcef8ecec359412c453edfdb86194575f98e21ab68c876f49aa50585bc691 not found: ID does not exist" Jan 29 07:03:07 crc kubenswrapper[4826]: I0129 07:03:07.700102 4826 scope.go:117] "RemoveContainer" containerID="b0df0cf1754d8922de02e44bf3832702d7d6bb554c61dd6b0c8e325797de1a9b" Jan 29 07:03:07 crc kubenswrapper[4826]: I0129 07:03:07.700286 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0df0cf1754d8922de02e44bf3832702d7d6bb554c61dd6b0c8e325797de1a9b"} err="failed to get container status \"b0df0cf1754d8922de02e44bf3832702d7d6bb554c61dd6b0c8e325797de1a9b\": rpc error: code = NotFound desc = could not find container \"b0df0cf1754d8922de02e44bf3832702d7d6bb554c61dd6b0c8e325797de1a9b\": container with ID starting with b0df0cf1754d8922de02e44bf3832702d7d6bb554c61dd6b0c8e325797de1a9b not found: ID does not exist" Jan 29 07:03:07 crc kubenswrapper[4826]: I0129 07:03:07.700318 4826 scope.go:117] "RemoveContainer" containerID="d58bcef8ecec359412c453edfdb86194575f98e21ab68c876f49aa50585bc691" Jan 29 07:03:07 crc kubenswrapper[4826]: I0129 07:03:07.700489 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d58bcef8ecec359412c453edfdb86194575f98e21ab68c876f49aa50585bc691"} err="failed to get container status \"d58bcef8ecec359412c453edfdb86194575f98e21ab68c876f49aa50585bc691\": rpc error: code = NotFound desc = could not find container \"d58bcef8ecec359412c453edfdb86194575f98e21ab68c876f49aa50585bc691\": container with ID starting with d58bcef8ecec359412c453edfdb86194575f98e21ab68c876f49aa50585bc691 not found: ID does not exist" Jan 29 07:03:07 crc kubenswrapper[4826]: I0129 07:03:07.948636 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-db-sync-lbrr5" event={"ID":"e3ec8717-30a1-46a8-9224-4802c2b1c3e6","Type":"ContainerDied","Data":"9df5ed2c58f31197e9cbdd1939dea04d33b3fc18d4a78ab7a64dcc151ce633d6"} Jan 29 07:03:07 crc kubenswrapper[4826]: I0129 07:03:07.948894 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9df5ed2c58f31197e9cbdd1939dea04d33b3fc18d4a78ab7a64dcc151ce633d6" Jan 29 07:03:07 crc kubenswrapper[4826]: I0129 07:03:07.948697 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-lbrr5" Jan 29 07:03:07 crc kubenswrapper[4826]: I0129 07:03:07.962398 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b459d81c-6b13-40f7-a460-524b0082a05d","Type":"ContainerStarted","Data":"9300df9956cc05af505ba0f1eca70bce92e138296b32ab0e9465d64a0af2bc46"} Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.008239 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 07:03:08 crc kubenswrapper[4826]: W0129 07:03:08.018320 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod017465e9_fb85_458e_8eca_109192bf47e7.slice/crio-389b3d025aec4080b309b392d3a76c1f5eab4a2b380ccad5f64d117da1f3fa96 WatchSource:0}: Error finding container 389b3d025aec4080b309b392d3a76c1f5eab4a2b380ccad5f64d117da1f3fa96: Status 404 returned error can't find the container with id 389b3d025aec4080b309b392d3a76c1f5eab4a2b380ccad5f64d117da1f3fa96 Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.024424 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-86487d6456-mmjgq"] Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.237178 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-vgqpr" Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.286720 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ff69218e-ec33-4818-86be-a9ff92d3f40d-db-sync-config-data\") pod \"ff69218e-ec33-4818-86be-a9ff92d3f40d\" (UID: \"ff69218e-ec33-4818-86be-a9ff92d3f40d\") " Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.286951 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwxms\" (UniqueName: \"kubernetes.io/projected/ff69218e-ec33-4818-86be-a9ff92d3f40d-kube-api-access-rwxms\") pod \"ff69218e-ec33-4818-86be-a9ff92d3f40d\" (UID: \"ff69218e-ec33-4818-86be-a9ff92d3f40d\") " Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.287497 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff69218e-ec33-4818-86be-a9ff92d3f40d-combined-ca-bundle\") pod \"ff69218e-ec33-4818-86be-a9ff92d3f40d\" (UID: \"ff69218e-ec33-4818-86be-a9ff92d3f40d\") " Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.291580 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff69218e-ec33-4818-86be-a9ff92d3f40d-kube-api-access-rwxms" (OuterVolumeSpecName: "kube-api-access-rwxms") pod "ff69218e-ec33-4818-86be-a9ff92d3f40d" (UID: "ff69218e-ec33-4818-86be-a9ff92d3f40d"). InnerVolumeSpecName "kube-api-access-rwxms". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.292419 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff69218e-ec33-4818-86be-a9ff92d3f40d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ff69218e-ec33-4818-86be-a9ff92d3f40d" (UID: "ff69218e-ec33-4818-86be-a9ff92d3f40d"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.292716 4826 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ff69218e-ec33-4818-86be-a9ff92d3f40d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.292739 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwxms\" (UniqueName: \"kubernetes.io/projected/ff69218e-ec33-4818-86be-a9ff92d3f40d-kube-api-access-rwxms\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.321906 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff69218e-ec33-4818-86be-a9ff92d3f40d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff69218e-ec33-4818-86be-a9ff92d3f40d" (UID: "ff69218e-ec33-4818-86be-a9ff92d3f40d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.394694 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff69218e-ec33-4818-86be-a9ff92d3f40d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.691555 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7c8d5fc944-9m8wp"] Jan 29 07:03:08 crc kubenswrapper[4826]: E0129 07:03:08.691970 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff69218e-ec33-4818-86be-a9ff92d3f40d" containerName="barbican-db-sync" Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.691987 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff69218e-ec33-4818-86be-a9ff92d3f40d" containerName="barbican-db-sync" Jan 29 07:03:08 crc kubenswrapper[4826]: E0129 07:03:08.692009 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3ec8717-30a1-46a8-9224-4802c2b1c3e6" containerName="placement-db-sync" Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.692017 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3ec8717-30a1-46a8-9224-4802c2b1c3e6" containerName="placement-db-sync" Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.692190 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff69218e-ec33-4818-86be-a9ff92d3f40d" containerName="barbican-db-sync" Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.692210 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3ec8717-30a1-46a8-9224-4802c2b1c3e6" containerName="placement-db-sync" Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.693344 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7c8d5fc944-9m8wp" Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.697460 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.697737 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.697893 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.698714 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.698958 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-ctkq4" Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.719183 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7c8d5fc944-9m8wp"] Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.776423 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 07:03:08 crc kubenswrapper[4826]: W0129 07:03:08.783705 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbf55335_feed_4467_9375_9543d111bc55.slice/crio-aabb7bf5dbd735aa0be2bb411b1859ac593328e2447df4235ed017bf57b85db0 WatchSource:0}: Error finding container aabb7bf5dbd735aa0be2bb411b1859ac593328e2447df4235ed017bf57b85db0: Status 404 returned error can't find the container with id aabb7bf5dbd735aa0be2bb411b1859ac593328e2447df4235ed017bf57b85db0 Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.800091 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-config-data\") pod \"placement-7c8d5fc944-9m8wp\" (UID: \"1beb9e09-4039-4ce6-a33f-0d34e10b1cfe\") " pod="openstack/placement-7c8d5fc944-9m8wp" Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.800130 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-combined-ca-bundle\") pod \"placement-7c8d5fc944-9m8wp\" (UID: \"1beb9e09-4039-4ce6-a33f-0d34e10b1cfe\") " pod="openstack/placement-7c8d5fc944-9m8wp" Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.800201 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-logs\") pod \"placement-7c8d5fc944-9m8wp\" (UID: \"1beb9e09-4039-4ce6-a33f-0d34e10b1cfe\") " pod="openstack/placement-7c8d5fc944-9m8wp" Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.800261 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-public-tls-certs\") pod \"placement-7c8d5fc944-9m8wp\" (UID: \"1beb9e09-4039-4ce6-a33f-0d34e10b1cfe\") " pod="openstack/placement-7c8d5fc944-9m8wp" Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.800285 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-internal-tls-certs\") pod \"placement-7c8d5fc944-9m8wp\" (UID: \"1beb9e09-4039-4ce6-a33f-0d34e10b1cfe\") " pod="openstack/placement-7c8d5fc944-9m8wp" Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.800331 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmg5b\" 
(UniqueName: \"kubernetes.io/projected/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-kube-api-access-kmg5b\") pod \"placement-7c8d5fc944-9m8wp\" (UID: \"1beb9e09-4039-4ce6-a33f-0d34e10b1cfe\") " pod="openstack/placement-7c8d5fc944-9m8wp" Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.800377 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-scripts\") pod \"placement-7c8d5fc944-9m8wp\" (UID: \"1beb9e09-4039-4ce6-a33f-0d34e10b1cfe\") " pod="openstack/placement-7c8d5fc944-9m8wp" Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.902245 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-logs\") pod \"placement-7c8d5fc944-9m8wp\" (UID: \"1beb9e09-4039-4ce6-a33f-0d34e10b1cfe\") " pod="openstack/placement-7c8d5fc944-9m8wp" Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.902318 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-public-tls-certs\") pod \"placement-7c8d5fc944-9m8wp\" (UID: \"1beb9e09-4039-4ce6-a33f-0d34e10b1cfe\") " pod="openstack/placement-7c8d5fc944-9m8wp" Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.902344 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-internal-tls-certs\") pod \"placement-7c8d5fc944-9m8wp\" (UID: \"1beb9e09-4039-4ce6-a33f-0d34e10b1cfe\") " pod="openstack/placement-7c8d5fc944-9m8wp" Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.902386 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmg5b\" (UniqueName: 
\"kubernetes.io/projected/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-kube-api-access-kmg5b\") pod \"placement-7c8d5fc944-9m8wp\" (UID: \"1beb9e09-4039-4ce6-a33f-0d34e10b1cfe\") " pod="openstack/placement-7c8d5fc944-9m8wp" Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.902415 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-scripts\") pod \"placement-7c8d5fc944-9m8wp\" (UID: \"1beb9e09-4039-4ce6-a33f-0d34e10b1cfe\") " pod="openstack/placement-7c8d5fc944-9m8wp" Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.902450 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-config-data\") pod \"placement-7c8d5fc944-9m8wp\" (UID: \"1beb9e09-4039-4ce6-a33f-0d34e10b1cfe\") " pod="openstack/placement-7c8d5fc944-9m8wp" Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.902472 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-combined-ca-bundle\") pod \"placement-7c8d5fc944-9m8wp\" (UID: \"1beb9e09-4039-4ce6-a33f-0d34e10b1cfe\") " pod="openstack/placement-7c8d5fc944-9m8wp" Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.902827 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-logs\") pod \"placement-7c8d5fc944-9m8wp\" (UID: \"1beb9e09-4039-4ce6-a33f-0d34e10b1cfe\") " pod="openstack/placement-7c8d5fc944-9m8wp" Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.906860 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-scripts\") pod \"placement-7c8d5fc944-9m8wp\" (UID: 
\"1beb9e09-4039-4ce6-a33f-0d34e10b1cfe\") " pod="openstack/placement-7c8d5fc944-9m8wp" Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.906916 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-public-tls-certs\") pod \"placement-7c8d5fc944-9m8wp\" (UID: \"1beb9e09-4039-4ce6-a33f-0d34e10b1cfe\") " pod="openstack/placement-7c8d5fc944-9m8wp" Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.907811 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-internal-tls-certs\") pod \"placement-7c8d5fc944-9m8wp\" (UID: \"1beb9e09-4039-4ce6-a33f-0d34e10b1cfe\") " pod="openstack/placement-7c8d5fc944-9m8wp" Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.909180 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-combined-ca-bundle\") pod \"placement-7c8d5fc944-9m8wp\" (UID: \"1beb9e09-4039-4ce6-a33f-0d34e10b1cfe\") " pod="openstack/placement-7c8d5fc944-9m8wp" Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.910885 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-config-data\") pod \"placement-7c8d5fc944-9m8wp\" (UID: \"1beb9e09-4039-4ce6-a33f-0d34e10b1cfe\") " pod="openstack/placement-7c8d5fc944-9m8wp" Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.917459 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmg5b\" (UniqueName: \"kubernetes.io/projected/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-kube-api-access-kmg5b\") pod \"placement-7c8d5fc944-9m8wp\" (UID: \"1beb9e09-4039-4ce6-a33f-0d34e10b1cfe\") " pod="openstack/placement-7c8d5fc944-9m8wp" Jan 29 
07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.996425 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-86487d6456-mmjgq" event={"ID":"d9016472-5ff0-4849-bc8a-c1d815d27931","Type":"ContainerStarted","Data":"8a6671e350fa8c53b6824335f80df2067d73193d492de3abd1a20ab459f41264"} Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.996477 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-86487d6456-mmjgq" event={"ID":"d9016472-5ff0-4849-bc8a-c1d815d27931","Type":"ContainerStarted","Data":"efb8fd134a87551d54e3fa93081ddbe23f586ddcad219eda821f4bc217938f12"} Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.997706 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-86487d6456-mmjgq" Jan 29 07:03:08 crc kubenswrapper[4826]: I0129 07:03:08.998569 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cbf55335-feed-4467-9375-9543d111bc55","Type":"ContainerStarted","Data":"aabb7bf5dbd735aa0be2bb411b1859ac593328e2447df4235ed017bf57b85db0"} Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.005843 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-vgqpr" event={"ID":"ff69218e-ec33-4818-86be-a9ff92d3f40d","Type":"ContainerDied","Data":"ddebc548afcf68c456135e1dc58471c66b4194c9895fa526947a2a81bfcbf70a"} Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.005896 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddebc548afcf68c456135e1dc58471c66b4194c9895fa526947a2a81bfcbf70a" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.006612 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-vgqpr" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.008643 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"017465e9-fb85-458e-8eca-109192bf47e7","Type":"ContainerStarted","Data":"02fd3aeeeceec4c65c45b602c7ab22ebc9bd69ec465aea4e012b65c127ca4a4a"} Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.008774 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"017465e9-fb85-458e-8eca-109192bf47e7","Type":"ContainerStarted","Data":"389b3d025aec4080b309b392d3a76c1f5eab4a2b380ccad5f64d117da1f3fa96"} Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.014175 4826 generic.go:334] "Generic (PLEG): container finished" podID="5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06" containerID="ee90b5e3403529b38e18ca6e7743dca447faa8f7d746c8a4314b599953d2bdc2" exitCode=0 Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.014252 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sqz8v" event={"ID":"5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06","Type":"ContainerDied","Data":"ee90b5e3403529b38e18ca6e7743dca447faa8f7d746c8a4314b599953d2bdc2"} Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.015904 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-86487d6456-mmjgq" podStartSLOduration=6.015885292 podStartE2EDuration="6.015885292s" podCreationTimestamp="2026-01-29 07:03:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:03:09.013351237 +0000 UTC m=+1172.875144306" watchObservedRunningTime="2026-01-29 07:03:09.015885292 +0000 UTC m=+1172.877678361" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.049651 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7c8d5fc944-9m8wp" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.202610 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6797f89db9-wjtvh"] Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.204050 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6797f89db9-wjtvh" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.212152 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-f6t52" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.212756 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.212983 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.222344 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6797f89db9-wjtvh"] Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.251742 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-c497f4886-n5gtr"] Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.253173 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-c497f4886-n5gtr" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.257020 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.267841 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-c497f4886-n5gtr"] Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.314003 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/216e612b-abc2-4d7c-8b10-28a595de5302-config-data-custom\") pod \"barbican-keystone-listener-c497f4886-n5gtr\" (UID: \"216e612b-abc2-4d7c-8b10-28a595de5302\") " pod="openstack/barbican-keystone-listener-c497f4886-n5gtr" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.314052 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/426fe450-4a4d-4048-8ea4-422d39482ceb-logs\") pod \"barbican-worker-6797f89db9-wjtvh\" (UID: \"426fe450-4a4d-4048-8ea4-422d39482ceb\") " pod="openstack/barbican-worker-6797f89db9-wjtvh" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.314090 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/216e612b-abc2-4d7c-8b10-28a595de5302-logs\") pod \"barbican-keystone-listener-c497f4886-n5gtr\" (UID: \"216e612b-abc2-4d7c-8b10-28a595de5302\") " pod="openstack/barbican-keystone-listener-c497f4886-n5gtr" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.314115 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/216e612b-abc2-4d7c-8b10-28a595de5302-config-data\") pod 
\"barbican-keystone-listener-c497f4886-n5gtr\" (UID: \"216e612b-abc2-4d7c-8b10-28a595de5302\") " pod="openstack/barbican-keystone-listener-c497f4886-n5gtr" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.314151 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/426fe450-4a4d-4048-8ea4-422d39482ceb-config-data-custom\") pod \"barbican-worker-6797f89db9-wjtvh\" (UID: \"426fe450-4a4d-4048-8ea4-422d39482ceb\") " pod="openstack/barbican-worker-6797f89db9-wjtvh" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.314177 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/216e612b-abc2-4d7c-8b10-28a595de5302-combined-ca-bundle\") pod \"barbican-keystone-listener-c497f4886-n5gtr\" (UID: \"216e612b-abc2-4d7c-8b10-28a595de5302\") " pod="openstack/barbican-keystone-listener-c497f4886-n5gtr" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.314223 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/426fe450-4a4d-4048-8ea4-422d39482ceb-combined-ca-bundle\") pod \"barbican-worker-6797f89db9-wjtvh\" (UID: \"426fe450-4a4d-4048-8ea4-422d39482ceb\") " pod="openstack/barbican-worker-6797f89db9-wjtvh" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.314261 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwthd\" (UniqueName: \"kubernetes.io/projected/216e612b-abc2-4d7c-8b10-28a595de5302-kube-api-access-vwthd\") pod \"barbican-keystone-listener-c497f4886-n5gtr\" (UID: \"216e612b-abc2-4d7c-8b10-28a595de5302\") " pod="openstack/barbican-keystone-listener-c497f4886-n5gtr" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.314288 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/426fe450-4a4d-4048-8ea4-422d39482ceb-config-data\") pod \"barbican-worker-6797f89db9-wjtvh\" (UID: \"426fe450-4a4d-4048-8ea4-422d39482ceb\") " pod="openstack/barbican-worker-6797f89db9-wjtvh" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.314329 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d92xq\" (UniqueName: \"kubernetes.io/projected/426fe450-4a4d-4048-8ea4-422d39482ceb-kube-api-access-d92xq\") pod \"barbican-worker-6797f89db9-wjtvh\" (UID: \"426fe450-4a4d-4048-8ea4-422d39482ceb\") " pod="openstack/barbican-worker-6797f89db9-wjtvh" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.316015 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-w9nrn"] Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.316244 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5dc4fcdbc-w9nrn" podUID="f17521c8-b15d-42ac-b28d-692acd3063ef" containerName="dnsmasq-dns" containerID="cri-o://9813ded057206a93193655b78747d9dd0b05f6eebc6a55ef94e1161f52274619" gracePeriod=10 Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.317445 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5dc4fcdbc-w9nrn" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.358620 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6554f656b5-64cn4"] Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.360152 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6554f656b5-64cn4" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.374607 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6554f656b5-64cn4"] Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.416001 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/426fe450-4a4d-4048-8ea4-422d39482ceb-logs\") pod \"barbican-worker-6797f89db9-wjtvh\" (UID: \"426fe450-4a4d-4048-8ea4-422d39482ceb\") " pod="openstack/barbican-worker-6797f89db9-wjtvh" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.416266 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d00b8435-987c-47a3-b8d2-6f17d4fd91b0-dns-swift-storage-0\") pod \"dnsmasq-dns-6554f656b5-64cn4\" (UID: \"d00b8435-987c-47a3-b8d2-6f17d4fd91b0\") " pod="openstack/dnsmasq-dns-6554f656b5-64cn4" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.416292 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d00b8435-987c-47a3-b8d2-6f17d4fd91b0-dns-svc\") pod \"dnsmasq-dns-6554f656b5-64cn4\" (UID: \"d00b8435-987c-47a3-b8d2-6f17d4fd91b0\") " pod="openstack/dnsmasq-dns-6554f656b5-64cn4" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.416337 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/216e612b-abc2-4d7c-8b10-28a595de5302-logs\") pod \"barbican-keystone-listener-c497f4886-n5gtr\" (UID: \"216e612b-abc2-4d7c-8b10-28a595de5302\") " pod="openstack/barbican-keystone-listener-c497f4886-n5gtr" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.416366 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/216e612b-abc2-4d7c-8b10-28a595de5302-config-data\") pod \"barbican-keystone-listener-c497f4886-n5gtr\" (UID: \"216e612b-abc2-4d7c-8b10-28a595de5302\") " pod="openstack/barbican-keystone-listener-c497f4886-n5gtr" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.416400 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/426fe450-4a4d-4048-8ea4-422d39482ceb-config-data-custom\") pod \"barbican-worker-6797f89db9-wjtvh\" (UID: \"426fe450-4a4d-4048-8ea4-422d39482ceb\") " pod="openstack/barbican-worker-6797f89db9-wjtvh" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.416425 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/216e612b-abc2-4d7c-8b10-28a595de5302-combined-ca-bundle\") pod \"barbican-keystone-listener-c497f4886-n5gtr\" (UID: \"216e612b-abc2-4d7c-8b10-28a595de5302\") " pod="openstack/barbican-keystone-listener-c497f4886-n5gtr" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.416462 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/426fe450-4a4d-4048-8ea4-422d39482ceb-combined-ca-bundle\") pod \"barbican-worker-6797f89db9-wjtvh\" (UID: \"426fe450-4a4d-4048-8ea4-422d39482ceb\") " pod="openstack/barbican-worker-6797f89db9-wjtvh" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.416486 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlqv2\" (UniqueName: \"kubernetes.io/projected/d00b8435-987c-47a3-b8d2-6f17d4fd91b0-kube-api-access-xlqv2\") pod \"dnsmasq-dns-6554f656b5-64cn4\" (UID: \"d00b8435-987c-47a3-b8d2-6f17d4fd91b0\") " pod="openstack/dnsmasq-dns-6554f656b5-64cn4" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.416503 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d00b8435-987c-47a3-b8d2-6f17d4fd91b0-ovsdbserver-nb\") pod \"dnsmasq-dns-6554f656b5-64cn4\" (UID: \"d00b8435-987c-47a3-b8d2-6f17d4fd91b0\") " pod="openstack/dnsmasq-dns-6554f656b5-64cn4" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.416528 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d00b8435-987c-47a3-b8d2-6f17d4fd91b0-ovsdbserver-sb\") pod \"dnsmasq-dns-6554f656b5-64cn4\" (UID: \"d00b8435-987c-47a3-b8d2-6f17d4fd91b0\") " pod="openstack/dnsmasq-dns-6554f656b5-64cn4" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.416548 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwthd\" (UniqueName: \"kubernetes.io/projected/216e612b-abc2-4d7c-8b10-28a595de5302-kube-api-access-vwthd\") pod \"barbican-keystone-listener-c497f4886-n5gtr\" (UID: \"216e612b-abc2-4d7c-8b10-28a595de5302\") " pod="openstack/barbican-keystone-listener-c497f4886-n5gtr" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.416578 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/426fe450-4a4d-4048-8ea4-422d39482ceb-config-data\") pod \"barbican-worker-6797f89db9-wjtvh\" (UID: \"426fe450-4a4d-4048-8ea4-422d39482ceb\") " pod="openstack/barbican-worker-6797f89db9-wjtvh" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.416600 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d00b8435-987c-47a3-b8d2-6f17d4fd91b0-config\") pod \"dnsmasq-dns-6554f656b5-64cn4\" (UID: \"d00b8435-987c-47a3-b8d2-6f17d4fd91b0\") " pod="openstack/dnsmasq-dns-6554f656b5-64cn4" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.416626 
4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d92xq\" (UniqueName: \"kubernetes.io/projected/426fe450-4a4d-4048-8ea4-422d39482ceb-kube-api-access-d92xq\") pod \"barbican-worker-6797f89db9-wjtvh\" (UID: \"426fe450-4a4d-4048-8ea4-422d39482ceb\") " pod="openstack/barbican-worker-6797f89db9-wjtvh" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.416643 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/216e612b-abc2-4d7c-8b10-28a595de5302-config-data-custom\") pod \"barbican-keystone-listener-c497f4886-n5gtr\" (UID: \"216e612b-abc2-4d7c-8b10-28a595de5302\") " pod="openstack/barbican-keystone-listener-c497f4886-n5gtr" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.417178 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/216e612b-abc2-4d7c-8b10-28a595de5302-logs\") pod \"barbican-keystone-listener-c497f4886-n5gtr\" (UID: \"216e612b-abc2-4d7c-8b10-28a595de5302\") " pod="openstack/barbican-keystone-listener-c497f4886-n5gtr" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.417468 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/426fe450-4a4d-4048-8ea4-422d39482ceb-logs\") pod \"barbican-worker-6797f89db9-wjtvh\" (UID: \"426fe450-4a4d-4048-8ea4-422d39482ceb\") " pod="openstack/barbican-worker-6797f89db9-wjtvh" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.425933 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/426fe450-4a4d-4048-8ea4-422d39482ceb-combined-ca-bundle\") pod \"barbican-worker-6797f89db9-wjtvh\" (UID: \"426fe450-4a4d-4048-8ea4-422d39482ceb\") " pod="openstack/barbican-worker-6797f89db9-wjtvh" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.429290 4826 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/216e612b-abc2-4d7c-8b10-28a595de5302-config-data-custom\") pod \"barbican-keystone-listener-c497f4886-n5gtr\" (UID: \"216e612b-abc2-4d7c-8b10-28a595de5302\") " pod="openstack/barbican-keystone-listener-c497f4886-n5gtr" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.433079 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/426fe450-4a4d-4048-8ea4-422d39482ceb-config-data-custom\") pod \"barbican-worker-6797f89db9-wjtvh\" (UID: \"426fe450-4a4d-4048-8ea4-422d39482ceb\") " pod="openstack/barbican-worker-6797f89db9-wjtvh" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.436387 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/216e612b-abc2-4d7c-8b10-28a595de5302-config-data\") pod \"barbican-keystone-listener-c497f4886-n5gtr\" (UID: \"216e612b-abc2-4d7c-8b10-28a595de5302\") " pod="openstack/barbican-keystone-listener-c497f4886-n5gtr" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.437779 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/426fe450-4a4d-4048-8ea4-422d39482ceb-config-data\") pod \"barbican-worker-6797f89db9-wjtvh\" (UID: \"426fe450-4a4d-4048-8ea4-422d39482ceb\") " pod="openstack/barbican-worker-6797f89db9-wjtvh" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.441720 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/216e612b-abc2-4d7c-8b10-28a595de5302-combined-ca-bundle\") pod \"barbican-keystone-listener-c497f4886-n5gtr\" (UID: \"216e612b-abc2-4d7c-8b10-28a595de5302\") " pod="openstack/barbican-keystone-listener-c497f4886-n5gtr" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.442038 
4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwthd\" (UniqueName: \"kubernetes.io/projected/216e612b-abc2-4d7c-8b10-28a595de5302-kube-api-access-vwthd\") pod \"barbican-keystone-listener-c497f4886-n5gtr\" (UID: \"216e612b-abc2-4d7c-8b10-28a595de5302\") " pod="openstack/barbican-keystone-listener-c497f4886-n5gtr" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.449604 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5bbdf69cb4-gr8s9"] Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.451456 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d92xq\" (UniqueName: \"kubernetes.io/projected/426fe450-4a4d-4048-8ea4-422d39482ceb-kube-api-access-d92xq\") pod \"barbican-worker-6797f89db9-wjtvh\" (UID: \"426fe450-4a4d-4048-8ea4-422d39482ceb\") " pod="openstack/barbican-worker-6797f89db9-wjtvh" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.455236 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5bbdf69cb4-gr8s9" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.457214 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.487714 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5bbdf69cb4-gr8s9"] Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.519030 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlqv2\" (UniqueName: \"kubernetes.io/projected/d00b8435-987c-47a3-b8d2-6f17d4fd91b0-kube-api-access-xlqv2\") pod \"dnsmasq-dns-6554f656b5-64cn4\" (UID: \"d00b8435-987c-47a3-b8d2-6f17d4fd91b0\") " pod="openstack/dnsmasq-dns-6554f656b5-64cn4" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.519088 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d00b8435-987c-47a3-b8d2-6f17d4fd91b0-ovsdbserver-nb\") pod \"dnsmasq-dns-6554f656b5-64cn4\" (UID: \"d00b8435-987c-47a3-b8d2-6f17d4fd91b0\") " pod="openstack/dnsmasq-dns-6554f656b5-64cn4" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.519125 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d00b8435-987c-47a3-b8d2-6f17d4fd91b0-ovsdbserver-sb\") pod \"dnsmasq-dns-6554f656b5-64cn4\" (UID: \"d00b8435-987c-47a3-b8d2-6f17d4fd91b0\") " pod="openstack/dnsmasq-dns-6554f656b5-64cn4" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.519169 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc-logs\") pod \"barbican-api-5bbdf69cb4-gr8s9\" (UID: \"496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc\") " pod="openstack/barbican-api-5bbdf69cb4-gr8s9" Jan 29 07:03:09 crc 
kubenswrapper[4826]: I0129 07:03:09.519244 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d00b8435-987c-47a3-b8d2-6f17d4fd91b0-config\") pod \"dnsmasq-dns-6554f656b5-64cn4\" (UID: \"d00b8435-987c-47a3-b8d2-6f17d4fd91b0\") " pod="openstack/dnsmasq-dns-6554f656b5-64cn4" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.519266 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc-config-data\") pod \"barbican-api-5bbdf69cb4-gr8s9\" (UID: \"496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc\") " pod="openstack/barbican-api-5bbdf69cb4-gr8s9" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.519353 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d00b8435-987c-47a3-b8d2-6f17d4fd91b0-dns-swift-storage-0\") pod \"dnsmasq-dns-6554f656b5-64cn4\" (UID: \"d00b8435-987c-47a3-b8d2-6f17d4fd91b0\") " pod="openstack/dnsmasq-dns-6554f656b5-64cn4" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.519380 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d00b8435-987c-47a3-b8d2-6f17d4fd91b0-dns-svc\") pod \"dnsmasq-dns-6554f656b5-64cn4\" (UID: \"d00b8435-987c-47a3-b8d2-6f17d4fd91b0\") " pod="openstack/dnsmasq-dns-6554f656b5-64cn4" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.519442 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc-combined-ca-bundle\") pod \"barbican-api-5bbdf69cb4-gr8s9\" (UID: \"496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc\") " pod="openstack/barbican-api-5bbdf69cb4-gr8s9" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 
07:03:09.519486 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc-config-data-custom\") pod \"barbican-api-5bbdf69cb4-gr8s9\" (UID: \"496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc\") " pod="openstack/barbican-api-5bbdf69cb4-gr8s9" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.519529 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgfrd\" (UniqueName: \"kubernetes.io/projected/496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc-kube-api-access-jgfrd\") pod \"barbican-api-5bbdf69cb4-gr8s9\" (UID: \"496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc\") " pod="openstack/barbican-api-5bbdf69cb4-gr8s9" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.522136 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d00b8435-987c-47a3-b8d2-6f17d4fd91b0-ovsdbserver-sb\") pod \"dnsmasq-dns-6554f656b5-64cn4\" (UID: \"d00b8435-987c-47a3-b8d2-6f17d4fd91b0\") " pod="openstack/dnsmasq-dns-6554f656b5-64cn4" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.522986 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d00b8435-987c-47a3-b8d2-6f17d4fd91b0-config\") pod \"dnsmasq-dns-6554f656b5-64cn4\" (UID: \"d00b8435-987c-47a3-b8d2-6f17d4fd91b0\") " pod="openstack/dnsmasq-dns-6554f656b5-64cn4" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.523211 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d00b8435-987c-47a3-b8d2-6f17d4fd91b0-dns-swift-storage-0\") pod \"dnsmasq-dns-6554f656b5-64cn4\" (UID: \"d00b8435-987c-47a3-b8d2-6f17d4fd91b0\") " pod="openstack/dnsmasq-dns-6554f656b5-64cn4" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.523653 4826 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d00b8435-987c-47a3-b8d2-6f17d4fd91b0-dns-svc\") pod \"dnsmasq-dns-6554f656b5-64cn4\" (UID: \"d00b8435-987c-47a3-b8d2-6f17d4fd91b0\") " pod="openstack/dnsmasq-dns-6554f656b5-64cn4" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.523851 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d00b8435-987c-47a3-b8d2-6f17d4fd91b0-ovsdbserver-nb\") pod \"dnsmasq-dns-6554f656b5-64cn4\" (UID: \"d00b8435-987c-47a3-b8d2-6f17d4fd91b0\") " pod="openstack/dnsmasq-dns-6554f656b5-64cn4" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.560726 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6797f89db9-wjtvh" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.561939 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlqv2\" (UniqueName: \"kubernetes.io/projected/d00b8435-987c-47a3-b8d2-6f17d4fd91b0-kube-api-access-xlqv2\") pod \"dnsmasq-dns-6554f656b5-64cn4\" (UID: \"d00b8435-987c-47a3-b8d2-6f17d4fd91b0\") " pod="openstack/dnsmasq-dns-6554f656b5-64cn4" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.580960 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-c497f4886-n5gtr" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.622139 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc-config-data\") pod \"barbican-api-5bbdf69cb4-gr8s9\" (UID: \"496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc\") " pod="openstack/barbican-api-5bbdf69cb4-gr8s9" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.622259 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc-combined-ca-bundle\") pod \"barbican-api-5bbdf69cb4-gr8s9\" (UID: \"496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc\") " pod="openstack/barbican-api-5bbdf69cb4-gr8s9" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.622287 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc-config-data-custom\") pod \"barbican-api-5bbdf69cb4-gr8s9\" (UID: \"496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc\") " pod="openstack/barbican-api-5bbdf69cb4-gr8s9" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.622328 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgfrd\" (UniqueName: \"kubernetes.io/projected/496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc-kube-api-access-jgfrd\") pod \"barbican-api-5bbdf69cb4-gr8s9\" (UID: \"496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc\") " pod="openstack/barbican-api-5bbdf69cb4-gr8s9" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.622441 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc-logs\") pod \"barbican-api-5bbdf69cb4-gr8s9\" (UID: \"496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc\") " 
pod="openstack/barbican-api-5bbdf69cb4-gr8s9" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.623218 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc-logs\") pod \"barbican-api-5bbdf69cb4-gr8s9\" (UID: \"496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc\") " pod="openstack/barbican-api-5bbdf69cb4-gr8s9" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.626842 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc-combined-ca-bundle\") pod \"barbican-api-5bbdf69cb4-gr8s9\" (UID: \"496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc\") " pod="openstack/barbican-api-5bbdf69cb4-gr8s9" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.627606 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc-config-data\") pod \"barbican-api-5bbdf69cb4-gr8s9\" (UID: \"496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc\") " pod="openstack/barbican-api-5bbdf69cb4-gr8s9" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.630111 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc-config-data-custom\") pod \"barbican-api-5bbdf69cb4-gr8s9\" (UID: \"496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc\") " pod="openstack/barbican-api-5bbdf69cb4-gr8s9" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.638857 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgfrd\" (UniqueName: \"kubernetes.io/projected/496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc-kube-api-access-jgfrd\") pod \"barbican-api-5bbdf69cb4-gr8s9\" (UID: \"496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc\") " pod="openstack/barbican-api-5bbdf69cb4-gr8s9" Jan 29 07:03:09 crc kubenswrapper[4826]: 
I0129 07:03:09.653759 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6554f656b5-64cn4" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.660111 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5bbdf69cb4-gr8s9" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.773193 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7c8d5fc944-9m8wp"] Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.854378 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-w9nrn" Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.954602 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f17521c8-b15d-42ac-b28d-692acd3063ef-ovsdbserver-nb\") pod \"f17521c8-b15d-42ac-b28d-692acd3063ef\" (UID: \"f17521c8-b15d-42ac-b28d-692acd3063ef\") " Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.954718 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f17521c8-b15d-42ac-b28d-692acd3063ef-ovsdbserver-sb\") pod \"f17521c8-b15d-42ac-b28d-692acd3063ef\" (UID: \"f17521c8-b15d-42ac-b28d-692acd3063ef\") " Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.954753 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f17521c8-b15d-42ac-b28d-692acd3063ef-dns-svc\") pod \"f17521c8-b15d-42ac-b28d-692acd3063ef\" (UID: \"f17521c8-b15d-42ac-b28d-692acd3063ef\") " Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.954796 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvzjs\" (UniqueName: \"kubernetes.io/projected/f17521c8-b15d-42ac-b28d-692acd3063ef-kube-api-access-bvzjs\") 
pod \"f17521c8-b15d-42ac-b28d-692acd3063ef\" (UID: \"f17521c8-b15d-42ac-b28d-692acd3063ef\") " Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.954827 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f17521c8-b15d-42ac-b28d-692acd3063ef-dns-swift-storage-0\") pod \"f17521c8-b15d-42ac-b28d-692acd3063ef\" (UID: \"f17521c8-b15d-42ac-b28d-692acd3063ef\") " Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.954879 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f17521c8-b15d-42ac-b28d-692acd3063ef-config\") pod \"f17521c8-b15d-42ac-b28d-692acd3063ef\" (UID: \"f17521c8-b15d-42ac-b28d-692acd3063ef\") " Jan 29 07:03:09 crc kubenswrapper[4826]: I0129 07:03:09.973889 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f17521c8-b15d-42ac-b28d-692acd3063ef-kube-api-access-bvzjs" (OuterVolumeSpecName: "kube-api-access-bvzjs") pod "f17521c8-b15d-42ac-b28d-692acd3063ef" (UID: "f17521c8-b15d-42ac-b28d-692acd3063ef"). InnerVolumeSpecName "kube-api-access-bvzjs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.049746 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cbf55335-feed-4467-9375-9543d111bc55","Type":"ContainerStarted","Data":"284517534f5a17580a2a91b0b750cf4fe7d6b57ad048cb55c376801303f8f9ff"} Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.053664 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"017465e9-fb85-458e-8eca-109192bf47e7","Type":"ContainerStarted","Data":"41219edc8dc3d793f63b17ea4b062b042171564ccc15cff867a70046f0e09625"} Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.057091 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvzjs\" (UniqueName: \"kubernetes.io/projected/f17521c8-b15d-42ac-b28d-692acd3063ef-kube-api-access-bvzjs\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.058790 4826 generic.go:334] "Generic (PLEG): container finished" podID="f17521c8-b15d-42ac-b28d-692acd3063ef" containerID="9813ded057206a93193655b78747d9dd0b05f6eebc6a55ef94e1161f52274619" exitCode=0 Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.058851 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-w9nrn" Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.058924 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-w9nrn" event={"ID":"f17521c8-b15d-42ac-b28d-692acd3063ef","Type":"ContainerDied","Data":"9813ded057206a93193655b78747d9dd0b05f6eebc6a55ef94e1161f52274619"} Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.059030 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-w9nrn" event={"ID":"f17521c8-b15d-42ac-b28d-692acd3063ef","Type":"ContainerDied","Data":"5aaa428e405eb0c9eb8454f2e671bb1f8744777c8316d7054e8c1d536e6b3448"} Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.059104 4826 scope.go:117] "RemoveContainer" containerID="9813ded057206a93193655b78747d9dd0b05f6eebc6a55ef94e1161f52274619" Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.062261 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7c8d5fc944-9m8wp" event={"ID":"1beb9e09-4039-4ce6-a33f-0d34e10b1cfe","Type":"ContainerStarted","Data":"31478edadab8f16a7bb95ef42169e72c4ddf2d292e43ee2ebbf72df8acd2d7c6"} Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.087120 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.087095867 podStartE2EDuration="7.087095867s" podCreationTimestamp="2026-01-29 07:03:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:03:10.08642987 +0000 UTC m=+1173.948222939" watchObservedRunningTime="2026-01-29 07:03:10.087095867 +0000 UTC m=+1173.948888936" Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.104277 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f17521c8-b15d-42ac-b28d-692acd3063ef-dns-svc" (OuterVolumeSpecName: "dns-svc") 
pod "f17521c8-b15d-42ac-b28d-692acd3063ef" (UID: "f17521c8-b15d-42ac-b28d-692acd3063ef"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.106398 4826 scope.go:117] "RemoveContainer" containerID="651cfac3940c8449d8f5850ae4afe52c361938cb37193122cae92e991f5bc493" Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.111093 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f17521c8-b15d-42ac-b28d-692acd3063ef-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f17521c8-b15d-42ac-b28d-692acd3063ef" (UID: "f17521c8-b15d-42ac-b28d-692acd3063ef"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.111774 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f17521c8-b15d-42ac-b28d-692acd3063ef-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f17521c8-b15d-42ac-b28d-692acd3063ef" (UID: "f17521c8-b15d-42ac-b28d-692acd3063ef"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.126960 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f17521c8-b15d-42ac-b28d-692acd3063ef-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f17521c8-b15d-42ac-b28d-692acd3063ef" (UID: "f17521c8-b15d-42ac-b28d-692acd3063ef"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.132776 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f17521c8-b15d-42ac-b28d-692acd3063ef-config" (OuterVolumeSpecName: "config") pod "f17521c8-b15d-42ac-b28d-692acd3063ef" (UID: "f17521c8-b15d-42ac-b28d-692acd3063ef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.156364 4826 scope.go:117] "RemoveContainer" containerID="9813ded057206a93193655b78747d9dd0b05f6eebc6a55ef94e1161f52274619" Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.159595 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f17521c8-b15d-42ac-b28d-692acd3063ef-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.159636 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f17521c8-b15d-42ac-b28d-692acd3063ef-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.159651 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f17521c8-b15d-42ac-b28d-692acd3063ef-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.159660 4826 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f17521c8-b15d-42ac-b28d-692acd3063ef-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.159670 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f17521c8-b15d-42ac-b28d-692acd3063ef-config\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:10 crc 
kubenswrapper[4826]: E0129 07:03:10.161537 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9813ded057206a93193655b78747d9dd0b05f6eebc6a55ef94e1161f52274619\": container with ID starting with 9813ded057206a93193655b78747d9dd0b05f6eebc6a55ef94e1161f52274619 not found: ID does not exist" containerID="9813ded057206a93193655b78747d9dd0b05f6eebc6a55ef94e1161f52274619" Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.161571 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9813ded057206a93193655b78747d9dd0b05f6eebc6a55ef94e1161f52274619"} err="failed to get container status \"9813ded057206a93193655b78747d9dd0b05f6eebc6a55ef94e1161f52274619\": rpc error: code = NotFound desc = could not find container \"9813ded057206a93193655b78747d9dd0b05f6eebc6a55ef94e1161f52274619\": container with ID starting with 9813ded057206a93193655b78747d9dd0b05f6eebc6a55ef94e1161f52274619 not found: ID does not exist" Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.161591 4826 scope.go:117] "RemoveContainer" containerID="651cfac3940c8449d8f5850ae4afe52c361938cb37193122cae92e991f5bc493" Jan 29 07:03:10 crc kubenswrapper[4826]: E0129 07:03:10.162609 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"651cfac3940c8449d8f5850ae4afe52c361938cb37193122cae92e991f5bc493\": container with ID starting with 651cfac3940c8449d8f5850ae4afe52c361938cb37193122cae92e991f5bc493 not found: ID does not exist" containerID="651cfac3940c8449d8f5850ae4afe52c361938cb37193122cae92e991f5bc493" Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.162708 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"651cfac3940c8449d8f5850ae4afe52c361938cb37193122cae92e991f5bc493"} err="failed to get container status 
\"651cfac3940c8449d8f5850ae4afe52c361938cb37193122cae92e991f5bc493\": rpc error: code = NotFound desc = could not find container \"651cfac3940c8449d8f5850ae4afe52c361938cb37193122cae92e991f5bc493\": container with ID starting with 651cfac3940c8449d8f5850ae4afe52c361938cb37193122cae92e991f5bc493 not found: ID does not exist" Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.237024 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6797f89db9-wjtvh"] Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.317526 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6554f656b5-64cn4"] Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.410228 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5bbdf69cb4-gr8s9"] Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.428030 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-w9nrn"] Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.437614 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-w9nrn"] Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.437679 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-c497f4886-n5gtr"] Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.730469 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-sqz8v" Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.802387 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06-combined-ca-bundle\") pod \"5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06\" (UID: \"5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06\") " Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.802536 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06-config\") pod \"5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06\" (UID: \"5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06\") " Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.802707 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2b2hc\" (UniqueName: \"kubernetes.io/projected/5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06-kube-api-access-2b2hc\") pod \"5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06\" (UID: \"5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06\") " Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.808821 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06-kube-api-access-2b2hc" (OuterVolumeSpecName: "kube-api-access-2b2hc") pod "5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06" (UID: "5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06"). InnerVolumeSpecName "kube-api-access-2b2hc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.831264 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f17521c8-b15d-42ac-b28d-692acd3063ef" path="/var/lib/kubelet/pods/f17521c8-b15d-42ac-b28d-692acd3063ef/volumes" Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.831559 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06-config" (OuterVolumeSpecName: "config") pod "5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06" (UID: "5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.838457 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06" (UID: "5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.905503 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2b2hc\" (UniqueName: \"kubernetes.io/projected/5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06-kube-api-access-2b2hc\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.905539 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:10 crc kubenswrapper[4826]: I0129 07:03:10.905553 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06-config\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.093628 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5bbdf69cb4-gr8s9" event={"ID":"496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc","Type":"ContainerStarted","Data":"8d001fc08bdfca6298a0cb1f0aafa07394f9cc78483f3edba7f193904b53e8ea"} Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.098611 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7c8d5fc944-9m8wp" event={"ID":"1beb9e09-4039-4ce6-a33f-0d34e10b1cfe","Type":"ContainerStarted","Data":"f1db27671e7941a3fe5f409f368278faebc2c9c102ed39040c62e814c55b33f6"} Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.098696 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7c8d5fc944-9m8wp" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.098713 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7c8d5fc944-9m8wp" event={"ID":"1beb9e09-4039-4ce6-a33f-0d34e10b1cfe","Type":"ContainerStarted","Data":"4d9bbe77efa1079486e055eb220a04a3be411395b46dcaaf31c558f3d4ccb6f8"} Jan 29 07:03:11 crc 
kubenswrapper[4826]: I0129 07:03:11.098727 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7c8d5fc944-9m8wp" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.100041 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6554f656b5-64cn4" event={"ID":"d00b8435-987c-47a3-b8d2-6f17d4fd91b0","Type":"ContainerStarted","Data":"d979517e6813376ce4d873700c617d57a49a6f6f1f2a6609788a93e7c18a7f6c"} Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.102551 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-c497f4886-n5gtr" event={"ID":"216e612b-abc2-4d7c-8b10-28a595de5302","Type":"ContainerStarted","Data":"2aacfd948fe1cf28fbe85300411e8fa41adc5b1b90180e6edc493406d891f327"} Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.106085 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6797f89db9-wjtvh" event={"ID":"426fe450-4a4d-4048-8ea4-422d39482ceb","Type":"ContainerStarted","Data":"06232c89b34b48939244698fda2e55d6ecd53ac19a0d80fadf84228fe6965098"} Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.114083 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cbf55335-feed-4467-9375-9543d111bc55","Type":"ContainerStarted","Data":"e705f84552b82451981b2667b42d642a119793c5b4249ad22f285c72e2bd3759"} Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.130894 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sqz8v" event={"ID":"5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06","Type":"ContainerDied","Data":"600d92178d75e55b585ade507b3d221b36cb27a023845c6e7bdb9c6f0fdcbf2c"} Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.130956 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="600d92178d75e55b585ade507b3d221b36cb27a023845c6e7bdb9c6f0fdcbf2c" Jan 29 07:03:11 crc kubenswrapper[4826]: 
I0129 07:03:11.131075 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-sqz8v" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.204272 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7c8d5fc944-9m8wp" podStartSLOduration=3.204246976 podStartE2EDuration="3.204246976s" podCreationTimestamp="2026-01-29 07:03:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:03:11.142791 +0000 UTC m=+1175.004584069" watchObservedRunningTime="2026-01-29 07:03:11.204246976 +0000 UTC m=+1175.066040045" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.228800 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.22649433 podStartE2EDuration="8.22649433s" podCreationTimestamp="2026-01-29 07:03:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:03:11.199462813 +0000 UTC m=+1175.061255882" watchObservedRunningTime="2026-01-29 07:03:11.22649433 +0000 UTC m=+1175.088287399" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.239834 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6554f656b5-64cn4"] Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.262277 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-bcxcz"] Jan 29 07:03:11 crc kubenswrapper[4826]: E0129 07:03:11.263028 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f17521c8-b15d-42ac-b28d-692acd3063ef" containerName="init" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.263052 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f17521c8-b15d-42ac-b28d-692acd3063ef" containerName="init" Jan 29 07:03:11 crc 
kubenswrapper[4826]: E0129 07:03:11.263070 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f17521c8-b15d-42ac-b28d-692acd3063ef" containerName="dnsmasq-dns" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.263077 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f17521c8-b15d-42ac-b28d-692acd3063ef" containerName="dnsmasq-dns" Jan 29 07:03:11 crc kubenswrapper[4826]: E0129 07:03:11.263096 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06" containerName="neutron-db-sync" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.263101 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06" containerName="neutron-db-sync" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.263269 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f17521c8-b15d-42ac-b28d-692acd3063ef" containerName="dnsmasq-dns" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.263287 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06" containerName="neutron-db-sync" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.264165 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bdf86f46f-bcxcz" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.278003 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-bcxcz"] Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.319553 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9a33d25-bc6c-4615-9997-2ce3b421499f-dns-svc\") pod \"dnsmasq-dns-7bdf86f46f-bcxcz\" (UID: \"a9a33d25-bc6c-4615-9997-2ce3b421499f\") " pod="openstack/dnsmasq-dns-7bdf86f46f-bcxcz" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.319598 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9a33d25-bc6c-4615-9997-2ce3b421499f-ovsdbserver-sb\") pod \"dnsmasq-dns-7bdf86f46f-bcxcz\" (UID: \"a9a33d25-bc6c-4615-9997-2ce3b421499f\") " pod="openstack/dnsmasq-dns-7bdf86f46f-bcxcz" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.319664 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99jkp\" (UniqueName: \"kubernetes.io/projected/a9a33d25-bc6c-4615-9997-2ce3b421499f-kube-api-access-99jkp\") pod \"dnsmasq-dns-7bdf86f46f-bcxcz\" (UID: \"a9a33d25-bc6c-4615-9997-2ce3b421499f\") " pod="openstack/dnsmasq-dns-7bdf86f46f-bcxcz" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.319696 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9a33d25-bc6c-4615-9997-2ce3b421499f-config\") pod \"dnsmasq-dns-7bdf86f46f-bcxcz\" (UID: \"a9a33d25-bc6c-4615-9997-2ce3b421499f\") " pod="openstack/dnsmasq-dns-7bdf86f46f-bcxcz" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.319770 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9a33d25-bc6c-4615-9997-2ce3b421499f-ovsdbserver-nb\") pod \"dnsmasq-dns-7bdf86f46f-bcxcz\" (UID: \"a9a33d25-bc6c-4615-9997-2ce3b421499f\") " pod="openstack/dnsmasq-dns-7bdf86f46f-bcxcz" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.319793 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9a33d25-bc6c-4615-9997-2ce3b421499f-dns-swift-storage-0\") pod \"dnsmasq-dns-7bdf86f46f-bcxcz\" (UID: \"a9a33d25-bc6c-4615-9997-2ce3b421499f\") " pod="openstack/dnsmasq-dns-7bdf86f46f-bcxcz" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.366423 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-589db6d64d-xbt79"] Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.368195 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-589db6d64d-xbt79" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.370783 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.371052 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.371171 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.371281 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-cggbm" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.393316 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-589db6d64d-xbt79"] Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.421440 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-kfl56\" (UniqueName: \"kubernetes.io/projected/7fadc18f-3e45-4d36-ae10-f4ee19d24cf8-kube-api-access-kfl56\") pod \"neutron-589db6d64d-xbt79\" (UID: \"7fadc18f-3e45-4d36-ae10-f4ee19d24cf8\") " pod="openstack/neutron-589db6d64d-xbt79" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.421487 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7fadc18f-3e45-4d36-ae10-f4ee19d24cf8-config\") pod \"neutron-589db6d64d-xbt79\" (UID: \"7fadc18f-3e45-4d36-ae10-f4ee19d24cf8\") " pod="openstack/neutron-589db6d64d-xbt79" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.421536 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9a33d25-bc6c-4615-9997-2ce3b421499f-ovsdbserver-nb\") pod \"dnsmasq-dns-7bdf86f46f-bcxcz\" (UID: \"a9a33d25-bc6c-4615-9997-2ce3b421499f\") " pod="openstack/dnsmasq-dns-7bdf86f46f-bcxcz" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.421559 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9a33d25-bc6c-4615-9997-2ce3b421499f-dns-swift-storage-0\") pod \"dnsmasq-dns-7bdf86f46f-bcxcz\" (UID: \"a9a33d25-bc6c-4615-9997-2ce3b421499f\") " pod="openstack/dnsmasq-dns-7bdf86f46f-bcxcz" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.421581 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fadc18f-3e45-4d36-ae10-f4ee19d24cf8-ovndb-tls-certs\") pod \"neutron-589db6d64d-xbt79\" (UID: \"7fadc18f-3e45-4d36-ae10-f4ee19d24cf8\") " pod="openstack/neutron-589db6d64d-xbt79" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.421603 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"httpd-config\" (UniqueName: \"kubernetes.io/secret/7fadc18f-3e45-4d36-ae10-f4ee19d24cf8-httpd-config\") pod \"neutron-589db6d64d-xbt79\" (UID: \"7fadc18f-3e45-4d36-ae10-f4ee19d24cf8\") " pod="openstack/neutron-589db6d64d-xbt79" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.421623 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9a33d25-bc6c-4615-9997-2ce3b421499f-dns-svc\") pod \"dnsmasq-dns-7bdf86f46f-bcxcz\" (UID: \"a9a33d25-bc6c-4615-9997-2ce3b421499f\") " pod="openstack/dnsmasq-dns-7bdf86f46f-bcxcz" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.421644 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9a33d25-bc6c-4615-9997-2ce3b421499f-ovsdbserver-sb\") pod \"dnsmasq-dns-7bdf86f46f-bcxcz\" (UID: \"a9a33d25-bc6c-4615-9997-2ce3b421499f\") " pod="openstack/dnsmasq-dns-7bdf86f46f-bcxcz" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.421707 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99jkp\" (UniqueName: \"kubernetes.io/projected/a9a33d25-bc6c-4615-9997-2ce3b421499f-kube-api-access-99jkp\") pod \"dnsmasq-dns-7bdf86f46f-bcxcz\" (UID: \"a9a33d25-bc6c-4615-9997-2ce3b421499f\") " pod="openstack/dnsmasq-dns-7bdf86f46f-bcxcz" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.421742 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9a33d25-bc6c-4615-9997-2ce3b421499f-config\") pod \"dnsmasq-dns-7bdf86f46f-bcxcz\" (UID: \"a9a33d25-bc6c-4615-9997-2ce3b421499f\") " pod="openstack/dnsmasq-dns-7bdf86f46f-bcxcz" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.422233 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7fadc18f-3e45-4d36-ae10-f4ee19d24cf8-combined-ca-bundle\") pod \"neutron-589db6d64d-xbt79\" (UID: \"7fadc18f-3e45-4d36-ae10-f4ee19d24cf8\") " pod="openstack/neutron-589db6d64d-xbt79" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.422440 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9a33d25-bc6c-4615-9997-2ce3b421499f-ovsdbserver-nb\") pod \"dnsmasq-dns-7bdf86f46f-bcxcz\" (UID: \"a9a33d25-bc6c-4615-9997-2ce3b421499f\") " pod="openstack/dnsmasq-dns-7bdf86f46f-bcxcz" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.424707 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9a33d25-bc6c-4615-9997-2ce3b421499f-dns-swift-storage-0\") pod \"dnsmasq-dns-7bdf86f46f-bcxcz\" (UID: \"a9a33d25-bc6c-4615-9997-2ce3b421499f\") " pod="openstack/dnsmasq-dns-7bdf86f46f-bcxcz" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.425245 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9a33d25-bc6c-4615-9997-2ce3b421499f-dns-svc\") pod \"dnsmasq-dns-7bdf86f46f-bcxcz\" (UID: \"a9a33d25-bc6c-4615-9997-2ce3b421499f\") " pod="openstack/dnsmasq-dns-7bdf86f46f-bcxcz" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.430000 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9a33d25-bc6c-4615-9997-2ce3b421499f-ovsdbserver-sb\") pod \"dnsmasq-dns-7bdf86f46f-bcxcz\" (UID: \"a9a33d25-bc6c-4615-9997-2ce3b421499f\") " pod="openstack/dnsmasq-dns-7bdf86f46f-bcxcz" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.431664 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9a33d25-bc6c-4615-9997-2ce3b421499f-config\") pod \"dnsmasq-dns-7bdf86f46f-bcxcz\" 
(UID: \"a9a33d25-bc6c-4615-9997-2ce3b421499f\") " pod="openstack/dnsmasq-dns-7bdf86f46f-bcxcz" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.442928 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99jkp\" (UniqueName: \"kubernetes.io/projected/a9a33d25-bc6c-4615-9997-2ce3b421499f-kube-api-access-99jkp\") pod \"dnsmasq-dns-7bdf86f46f-bcxcz\" (UID: \"a9a33d25-bc6c-4615-9997-2ce3b421499f\") " pod="openstack/dnsmasq-dns-7bdf86f46f-bcxcz" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.523413 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fadc18f-3e45-4d36-ae10-f4ee19d24cf8-combined-ca-bundle\") pod \"neutron-589db6d64d-xbt79\" (UID: \"7fadc18f-3e45-4d36-ae10-f4ee19d24cf8\") " pod="openstack/neutron-589db6d64d-xbt79" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.523502 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfl56\" (UniqueName: \"kubernetes.io/projected/7fadc18f-3e45-4d36-ae10-f4ee19d24cf8-kube-api-access-kfl56\") pod \"neutron-589db6d64d-xbt79\" (UID: \"7fadc18f-3e45-4d36-ae10-f4ee19d24cf8\") " pod="openstack/neutron-589db6d64d-xbt79" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.523528 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7fadc18f-3e45-4d36-ae10-f4ee19d24cf8-config\") pod \"neutron-589db6d64d-xbt79\" (UID: \"7fadc18f-3e45-4d36-ae10-f4ee19d24cf8\") " pod="openstack/neutron-589db6d64d-xbt79" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.523563 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fadc18f-3e45-4d36-ae10-f4ee19d24cf8-ovndb-tls-certs\") pod \"neutron-589db6d64d-xbt79\" (UID: \"7fadc18f-3e45-4d36-ae10-f4ee19d24cf8\") " 
pod="openstack/neutron-589db6d64d-xbt79" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.523585 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7fadc18f-3e45-4d36-ae10-f4ee19d24cf8-httpd-config\") pod \"neutron-589db6d64d-xbt79\" (UID: \"7fadc18f-3e45-4d36-ae10-f4ee19d24cf8\") " pod="openstack/neutron-589db6d64d-xbt79" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.527922 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7fadc18f-3e45-4d36-ae10-f4ee19d24cf8-httpd-config\") pod \"neutron-589db6d64d-xbt79\" (UID: \"7fadc18f-3e45-4d36-ae10-f4ee19d24cf8\") " pod="openstack/neutron-589db6d64d-xbt79" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.529938 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fadc18f-3e45-4d36-ae10-f4ee19d24cf8-combined-ca-bundle\") pod \"neutron-589db6d64d-xbt79\" (UID: \"7fadc18f-3e45-4d36-ae10-f4ee19d24cf8\") " pod="openstack/neutron-589db6d64d-xbt79" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.530662 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7fadc18f-3e45-4d36-ae10-f4ee19d24cf8-config\") pod \"neutron-589db6d64d-xbt79\" (UID: \"7fadc18f-3e45-4d36-ae10-f4ee19d24cf8\") " pod="openstack/neutron-589db6d64d-xbt79" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.533117 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fadc18f-3e45-4d36-ae10-f4ee19d24cf8-ovndb-tls-certs\") pod \"neutron-589db6d64d-xbt79\" (UID: \"7fadc18f-3e45-4d36-ae10-f4ee19d24cf8\") " pod="openstack/neutron-589db6d64d-xbt79" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.546143 4826 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-kfl56\" (UniqueName: \"kubernetes.io/projected/7fadc18f-3e45-4d36-ae10-f4ee19d24cf8-kube-api-access-kfl56\") pod \"neutron-589db6d64d-xbt79\" (UID: \"7fadc18f-3e45-4d36-ae10-f4ee19d24cf8\") " pod="openstack/neutron-589db6d64d-xbt79" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.601153 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdf86f46f-bcxcz" Jan 29 07:03:11 crc kubenswrapper[4826]: I0129 07:03:11.706128 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-589db6d64d-xbt79" Jan 29 07:03:12 crc kubenswrapper[4826]: I0129 07:03:12.080054 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-bcxcz"] Jan 29 07:03:12 crc kubenswrapper[4826]: W0129 07:03:12.118470 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9a33d25_bc6c_4615_9997_2ce3b421499f.slice/crio-8ca59e938e97ce896b8e1438e372918f8b26cf9e28c95b57f77c81c199fe321c WatchSource:0}: Error finding container 8ca59e938e97ce896b8e1438e372918f8b26cf9e28c95b57f77c81c199fe321c: Status 404 returned error can't find the container with id 8ca59e938e97ce896b8e1438e372918f8b26cf9e28c95b57f77c81c199fe321c Jan 29 07:03:12 crc kubenswrapper[4826]: I0129 07:03:12.185432 4826 generic.go:334] "Generic (PLEG): container finished" podID="d00b8435-987c-47a3-b8d2-6f17d4fd91b0" containerID="ff3de29241ae33a012acfef279bd6f2585e7d74f4fd0d43a038149deee4671ea" exitCode=0 Jan 29 07:03:12 crc kubenswrapper[4826]: I0129 07:03:12.185619 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6554f656b5-64cn4" event={"ID":"d00b8435-987c-47a3-b8d2-6f17d4fd91b0","Type":"ContainerDied","Data":"ff3de29241ae33a012acfef279bd6f2585e7d74f4fd0d43a038149deee4671ea"} Jan 29 07:03:12 crc kubenswrapper[4826]: I0129 07:03:12.194956 4826 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-bcxcz" event={"ID":"a9a33d25-bc6c-4615-9997-2ce3b421499f","Type":"ContainerStarted","Data":"8ca59e938e97ce896b8e1438e372918f8b26cf9e28c95b57f77c81c199fe321c"} Jan 29 07:03:12 crc kubenswrapper[4826]: I0129 07:03:12.199115 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5bbdf69cb4-gr8s9" event={"ID":"496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc","Type":"ContainerStarted","Data":"40dee22b22b805d753c85a95814c2121692adf4e2cda244989185957f128d1ee"} Jan 29 07:03:12 crc kubenswrapper[4826]: I0129 07:03:12.199139 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5bbdf69cb4-gr8s9" event={"ID":"496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc","Type":"ContainerStarted","Data":"89d9b65a6513ec2ac9c95c0c32c9367a352eb2b4baa44d3869f66002ef08a2f2"} Jan 29 07:03:12 crc kubenswrapper[4826]: I0129 07:03:12.199340 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5bbdf69cb4-gr8s9" Jan 29 07:03:12 crc kubenswrapper[4826]: I0129 07:03:12.199380 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5bbdf69cb4-gr8s9" Jan 29 07:03:12 crc kubenswrapper[4826]: I0129 07:03:12.233777 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5bbdf69cb4-gr8s9" podStartSLOduration=3.233750905 podStartE2EDuration="3.233750905s" podCreationTimestamp="2026-01-29 07:03:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:03:12.23162467 +0000 UTC m=+1176.093417749" watchObservedRunningTime="2026-01-29 07:03:12.233750905 +0000 UTC m=+1176.095543974" Jan 29 07:03:12 crc kubenswrapper[4826]: I0129 07:03:12.467198 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-589db6d64d-xbt79"] Jan 29 07:03:12 crc kubenswrapper[4826]: W0129 
07:03:12.475489 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fadc18f_3e45_4d36_ae10_f4ee19d24cf8.slice/crio-f35795da78d4628e02199d662b2f71badcd8a0ea9936ea558515b78be502314d WatchSource:0}: Error finding container f35795da78d4628e02199d662b2f71badcd8a0ea9936ea558515b78be502314d: Status 404 returned error can't find the container with id f35795da78d4628e02199d662b2f71badcd8a0ea9936ea558515b78be502314d Jan 29 07:03:12 crc kubenswrapper[4826]: I0129 07:03:12.932616 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6554f656b5-64cn4" Jan 29 07:03:12 crc kubenswrapper[4826]: I0129 07:03:12.973476 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d00b8435-987c-47a3-b8d2-6f17d4fd91b0-dns-swift-storage-0\") pod \"d00b8435-987c-47a3-b8d2-6f17d4fd91b0\" (UID: \"d00b8435-987c-47a3-b8d2-6f17d4fd91b0\") " Jan 29 07:03:12 crc kubenswrapper[4826]: I0129 07:03:12.973533 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d00b8435-987c-47a3-b8d2-6f17d4fd91b0-ovsdbserver-sb\") pod \"d00b8435-987c-47a3-b8d2-6f17d4fd91b0\" (UID: \"d00b8435-987c-47a3-b8d2-6f17d4fd91b0\") " Jan 29 07:03:12 crc kubenswrapper[4826]: I0129 07:03:12.973576 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlqv2\" (UniqueName: \"kubernetes.io/projected/d00b8435-987c-47a3-b8d2-6f17d4fd91b0-kube-api-access-xlqv2\") pod \"d00b8435-987c-47a3-b8d2-6f17d4fd91b0\" (UID: \"d00b8435-987c-47a3-b8d2-6f17d4fd91b0\") " Jan 29 07:03:12 crc kubenswrapper[4826]: I0129 07:03:12.973668 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/d00b8435-987c-47a3-b8d2-6f17d4fd91b0-dns-svc\") pod \"d00b8435-987c-47a3-b8d2-6f17d4fd91b0\" (UID: \"d00b8435-987c-47a3-b8d2-6f17d4fd91b0\") " Jan 29 07:03:12 crc kubenswrapper[4826]: I0129 07:03:12.973712 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d00b8435-987c-47a3-b8d2-6f17d4fd91b0-config\") pod \"d00b8435-987c-47a3-b8d2-6f17d4fd91b0\" (UID: \"d00b8435-987c-47a3-b8d2-6f17d4fd91b0\") " Jan 29 07:03:12 crc kubenswrapper[4826]: I0129 07:03:12.973740 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d00b8435-987c-47a3-b8d2-6f17d4fd91b0-ovsdbserver-nb\") pod \"d00b8435-987c-47a3-b8d2-6f17d4fd91b0\" (UID: \"d00b8435-987c-47a3-b8d2-6f17d4fd91b0\") " Jan 29 07:03:12 crc kubenswrapper[4826]: I0129 07:03:12.997666 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d00b8435-987c-47a3-b8d2-6f17d4fd91b0-kube-api-access-xlqv2" (OuterVolumeSpecName: "kube-api-access-xlqv2") pod "d00b8435-987c-47a3-b8d2-6f17d4fd91b0" (UID: "d00b8435-987c-47a3-b8d2-6f17d4fd91b0"). InnerVolumeSpecName "kube-api-access-xlqv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:03:13 crc kubenswrapper[4826]: I0129 07:03:13.022965 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d00b8435-987c-47a3-b8d2-6f17d4fd91b0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d00b8435-987c-47a3-b8d2-6f17d4fd91b0" (UID: "d00b8435-987c-47a3-b8d2-6f17d4fd91b0"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:03:13 crc kubenswrapper[4826]: I0129 07:03:13.026623 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d00b8435-987c-47a3-b8d2-6f17d4fd91b0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d00b8435-987c-47a3-b8d2-6f17d4fd91b0" (UID: "d00b8435-987c-47a3-b8d2-6f17d4fd91b0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:03:13 crc kubenswrapper[4826]: I0129 07:03:13.033719 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d00b8435-987c-47a3-b8d2-6f17d4fd91b0-config" (OuterVolumeSpecName: "config") pod "d00b8435-987c-47a3-b8d2-6f17d4fd91b0" (UID: "d00b8435-987c-47a3-b8d2-6f17d4fd91b0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:03:13 crc kubenswrapper[4826]: I0129 07:03:13.035717 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d00b8435-987c-47a3-b8d2-6f17d4fd91b0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d00b8435-987c-47a3-b8d2-6f17d4fd91b0" (UID: "d00b8435-987c-47a3-b8d2-6f17d4fd91b0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:03:13 crc kubenswrapper[4826]: I0129 07:03:13.037566 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d00b8435-987c-47a3-b8d2-6f17d4fd91b0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d00b8435-987c-47a3-b8d2-6f17d4fd91b0" (UID: "d00b8435-987c-47a3-b8d2-6f17d4fd91b0"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:03:13 crc kubenswrapper[4826]: I0129 07:03:13.076007 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d00b8435-987c-47a3-b8d2-6f17d4fd91b0-config\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:13 crc kubenswrapper[4826]: I0129 07:03:13.076039 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d00b8435-987c-47a3-b8d2-6f17d4fd91b0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:13 crc kubenswrapper[4826]: I0129 07:03:13.076071 4826 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d00b8435-987c-47a3-b8d2-6f17d4fd91b0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:13 crc kubenswrapper[4826]: I0129 07:03:13.076080 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d00b8435-987c-47a3-b8d2-6f17d4fd91b0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:13 crc kubenswrapper[4826]: I0129 07:03:13.076089 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlqv2\" (UniqueName: \"kubernetes.io/projected/d00b8435-987c-47a3-b8d2-6f17d4fd91b0-kube-api-access-xlqv2\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:13 crc kubenswrapper[4826]: I0129 07:03:13.076099 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d00b8435-987c-47a3-b8d2-6f17d4fd91b0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:13 crc kubenswrapper[4826]: I0129 07:03:13.207955 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-589db6d64d-xbt79" event={"ID":"7fadc18f-3e45-4d36-ae10-f4ee19d24cf8","Type":"ContainerStarted","Data":"caaa9065a7eec3d3207f7501f03a024e35fc55d3a277d53adee97f01fb2a3514"} Jan 29 07:03:13 crc 
kubenswrapper[4826]: I0129 07:03:13.207996 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-589db6d64d-xbt79" event={"ID":"7fadc18f-3e45-4d36-ae10-f4ee19d24cf8","Type":"ContainerStarted","Data":"f35795da78d4628e02199d662b2f71badcd8a0ea9936ea558515b78be502314d"} Jan 29 07:03:13 crc kubenswrapper[4826]: I0129 07:03:13.209918 4826 generic.go:334] "Generic (PLEG): container finished" podID="a9a33d25-bc6c-4615-9997-2ce3b421499f" containerID="accf2a866cee7804315032498e741ee2d80532fada34cbc8f12e6ca88b80a3d1" exitCode=0 Jan 29 07:03:13 crc kubenswrapper[4826]: I0129 07:03:13.211565 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-bcxcz" event={"ID":"a9a33d25-bc6c-4615-9997-2ce3b421499f","Type":"ContainerDied","Data":"accf2a866cee7804315032498e741ee2d80532fada34cbc8f12e6ca88b80a3d1"} Jan 29 07:03:13 crc kubenswrapper[4826]: I0129 07:03:13.211595 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8qxjd" event={"ID":"4a1157a3-fc93-4d73-8200-b55bfa626a09","Type":"ContainerStarted","Data":"581b263ab80d201cfd6ee41c67707a16c2ed78463b0e46ab2d2fbca441113216"} Jan 29 07:03:13 crc kubenswrapper[4826]: I0129 07:03:13.213097 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6554f656b5-64cn4" event={"ID":"d00b8435-987c-47a3-b8d2-6f17d4fd91b0","Type":"ContainerDied","Data":"d979517e6813376ce4d873700c617d57a49a6f6f1f2a6609788a93e7c18a7f6c"} Jan 29 07:03:13 crc kubenswrapper[4826]: I0129 07:03:13.213153 4826 scope.go:117] "RemoveContainer" containerID="ff3de29241ae33a012acfef279bd6f2585e7d74f4fd0d43a038149deee4671ea" Jan 29 07:03:13 crc kubenswrapper[4826]: I0129 07:03:13.213109 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6554f656b5-64cn4" Jan 29 07:03:13 crc kubenswrapper[4826]: I0129 07:03:13.245551 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-8qxjd" podStartSLOduration=3.523589261 podStartE2EDuration="44.245533946s" podCreationTimestamp="2026-01-29 07:02:29 +0000 UTC" firstStartedPulling="2026-01-29 07:02:30.858818798 +0000 UTC m=+1134.720611867" lastFinishedPulling="2026-01-29 07:03:11.580763483 +0000 UTC m=+1175.442556552" observedRunningTime="2026-01-29 07:03:13.244496969 +0000 UTC m=+1177.106290038" watchObservedRunningTime="2026-01-29 07:03:13.245533946 +0000 UTC m=+1177.107327015" Jan 29 07:03:13 crc kubenswrapper[4826]: I0129 07:03:13.305845 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6554f656b5-64cn4"] Jan 29 07:03:13 crc kubenswrapper[4826]: I0129 07:03:13.317089 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6554f656b5-64cn4"] Jan 29 07:03:14 crc kubenswrapper[4826]: I0129 07:03:14.475614 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 07:03:14 crc kubenswrapper[4826]: I0129 07:03:14.478092 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 07:03:14 crc kubenswrapper[4826]: I0129 07:03:14.500685 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 07:03:14 crc kubenswrapper[4826]: I0129 07:03:14.500745 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 07:03:14 crc kubenswrapper[4826]: I0129 07:03:14.508572 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 07:03:14 crc kubenswrapper[4826]: I0129 
07:03:14.544041 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 07:03:14 crc kubenswrapper[4826]: I0129 07:03:14.632720 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 07:03:14 crc kubenswrapper[4826]: I0129 07:03:14.632960 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 07:03:14 crc kubenswrapper[4826]: I0129 07:03:14.829798 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d00b8435-987c-47a3-b8d2-6f17d4fd91b0" path="/var/lib/kubelet/pods/d00b8435-987c-47a3-b8d2-6f17d4fd91b0/volumes" Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.232999 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-c497f4886-n5gtr" event={"ID":"216e612b-abc2-4d7c-8b10-28a595de5302","Type":"ContainerStarted","Data":"3bfc6875edaf5adbf81402c08f2ba55583e837eda4f1c32161d763c93ada8c24"} Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.233562 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-c497f4886-n5gtr" event={"ID":"216e612b-abc2-4d7c-8b10-28a595de5302","Type":"ContainerStarted","Data":"0bbf6d62e960b5a682af4aa5e41584da96c6f57e7c9e6126855743ce70f35375"} Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.238872 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-589db6d64d-xbt79" event={"ID":"7fadc18f-3e45-4d36-ae10-f4ee19d24cf8","Type":"ContainerStarted","Data":"d01e3c5369a5de9d7a1e41286fbd0292a5532a8b46a78005b8868dce2183bf7e"} Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.239179 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-589db6d64d-xbt79" Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.240610 4826 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/barbican-worker-6797f89db9-wjtvh" event={"ID":"426fe450-4a4d-4048-8ea4-422d39482ceb","Type":"ContainerStarted","Data":"b3e85ce7ed4e1d2758772a2b894824794e2f18c81bade5dce37e78c8548d5969"} Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.240635 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6797f89db9-wjtvh" event={"ID":"426fe450-4a4d-4048-8ea4-422d39482ceb","Type":"ContainerStarted","Data":"c3be21ae398eff67e22c85ef224b354ffbbd7d7ab54b85ce0a04e1584c9e8486"} Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.244128 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-bcxcz" event={"ID":"a9a33d25-bc6c-4615-9997-2ce3b421499f","Type":"ContainerStarted","Data":"beb9ef7ee57ddf401eae74af871fe7cd6e0befc2f2b5340f33b2e08e5fad94a7"} Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.244154 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.244166 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7bdf86f46f-bcxcz" Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.244943 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.244962 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.244971 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.258078 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-c497f4886-n5gtr" podStartSLOduration=2.631830891 
podStartE2EDuration="6.258061642s" podCreationTimestamp="2026-01-29 07:03:09 +0000 UTC" firstStartedPulling="2026-01-29 07:03:10.460445242 +0000 UTC m=+1174.322238311" lastFinishedPulling="2026-01-29 07:03:14.086675993 +0000 UTC m=+1177.948469062" observedRunningTime="2026-01-29 07:03:15.253004791 +0000 UTC m=+1179.114797860" watchObservedRunningTime="2026-01-29 07:03:15.258061642 +0000 UTC m=+1179.119854711" Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.284545 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7bdf86f46f-bcxcz" podStartSLOduration=4.284524505 podStartE2EDuration="4.284524505s" podCreationTimestamp="2026-01-29 07:03:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:03:15.278794167 +0000 UTC m=+1179.140587236" watchObservedRunningTime="2026-01-29 07:03:15.284524505 +0000 UTC m=+1179.146317574" Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.337673 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-589db6d64d-xbt79" podStartSLOduration=4.337653596 podStartE2EDuration="4.337653596s" podCreationTimestamp="2026-01-29 07:03:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:03:15.310828533 +0000 UTC m=+1179.172621602" watchObservedRunningTime="2026-01-29 07:03:15.337653596 +0000 UTC m=+1179.199446665" Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.340195 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6797f89db9-wjtvh" podStartSLOduration=2.494484226 podStartE2EDuration="6.340187491s" podCreationTimestamp="2026-01-29 07:03:09 +0000 UTC" firstStartedPulling="2026-01-29 07:03:10.24222579 +0000 UTC m=+1174.104018859" lastFinishedPulling="2026-01-29 07:03:14.087929055 +0000 UTC 
m=+1177.949722124" observedRunningTime="2026-01-29 07:03:15.331033265 +0000 UTC m=+1179.192826334" watchObservedRunningTime="2026-01-29 07:03:15.340187491 +0000 UTC m=+1179.201980560" Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.613541 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-669bb5748f-zjsxt"] Jan 29 07:03:15 crc kubenswrapper[4826]: E0129 07:03:15.613958 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d00b8435-987c-47a3-b8d2-6f17d4fd91b0" containerName="init" Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.613970 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="d00b8435-987c-47a3-b8d2-6f17d4fd91b0" containerName="init" Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.614159 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="d00b8435-987c-47a3-b8d2-6f17d4fd91b0" containerName="init" Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.615337 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-669bb5748f-zjsxt" Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.623206 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.623477 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.640610 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-669bb5748f-zjsxt"] Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.737439 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp7z9\" (UniqueName: \"kubernetes.io/projected/258e4d75-ecca-4001-9f56-aeb39557b326-kube-api-access-tp7z9\") pod \"neutron-669bb5748f-zjsxt\" (UID: \"258e4d75-ecca-4001-9f56-aeb39557b326\") " pod="openstack/neutron-669bb5748f-zjsxt" Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.737532 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/258e4d75-ecca-4001-9f56-aeb39557b326-httpd-config\") pod \"neutron-669bb5748f-zjsxt\" (UID: \"258e4d75-ecca-4001-9f56-aeb39557b326\") " pod="openstack/neutron-669bb5748f-zjsxt" Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.737579 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/258e4d75-ecca-4001-9f56-aeb39557b326-config\") pod \"neutron-669bb5748f-zjsxt\" (UID: \"258e4d75-ecca-4001-9f56-aeb39557b326\") " pod="openstack/neutron-669bb5748f-zjsxt" Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.737615 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/258e4d75-ecca-4001-9f56-aeb39557b326-public-tls-certs\") pod \"neutron-669bb5748f-zjsxt\" (UID: \"258e4d75-ecca-4001-9f56-aeb39557b326\") " pod="openstack/neutron-669bb5748f-zjsxt" Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.737646 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/258e4d75-ecca-4001-9f56-aeb39557b326-combined-ca-bundle\") pod \"neutron-669bb5748f-zjsxt\" (UID: \"258e4d75-ecca-4001-9f56-aeb39557b326\") " pod="openstack/neutron-669bb5748f-zjsxt" Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.737665 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/258e4d75-ecca-4001-9f56-aeb39557b326-ovndb-tls-certs\") pod \"neutron-669bb5748f-zjsxt\" (UID: \"258e4d75-ecca-4001-9f56-aeb39557b326\") " pod="openstack/neutron-669bb5748f-zjsxt" Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.737720 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/258e4d75-ecca-4001-9f56-aeb39557b326-internal-tls-certs\") pod \"neutron-669bb5748f-zjsxt\" (UID: \"258e4d75-ecca-4001-9f56-aeb39557b326\") " pod="openstack/neutron-669bb5748f-zjsxt" Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.847517 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/258e4d75-ecca-4001-9f56-aeb39557b326-combined-ca-bundle\") pod \"neutron-669bb5748f-zjsxt\" (UID: \"258e4d75-ecca-4001-9f56-aeb39557b326\") " pod="openstack/neutron-669bb5748f-zjsxt" Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.847896 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/258e4d75-ecca-4001-9f56-aeb39557b326-ovndb-tls-certs\") pod \"neutron-669bb5748f-zjsxt\" (UID: \"258e4d75-ecca-4001-9f56-aeb39557b326\") " pod="openstack/neutron-669bb5748f-zjsxt" Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.847992 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/258e4d75-ecca-4001-9f56-aeb39557b326-internal-tls-certs\") pod \"neutron-669bb5748f-zjsxt\" (UID: \"258e4d75-ecca-4001-9f56-aeb39557b326\") " pod="openstack/neutron-669bb5748f-zjsxt" Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.848024 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp7z9\" (UniqueName: \"kubernetes.io/projected/258e4d75-ecca-4001-9f56-aeb39557b326-kube-api-access-tp7z9\") pod \"neutron-669bb5748f-zjsxt\" (UID: \"258e4d75-ecca-4001-9f56-aeb39557b326\") " pod="openstack/neutron-669bb5748f-zjsxt" Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.848098 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/258e4d75-ecca-4001-9f56-aeb39557b326-httpd-config\") pod \"neutron-669bb5748f-zjsxt\" (UID: \"258e4d75-ecca-4001-9f56-aeb39557b326\") " pod="openstack/neutron-669bb5748f-zjsxt" Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.848126 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/258e4d75-ecca-4001-9f56-aeb39557b326-config\") pod \"neutron-669bb5748f-zjsxt\" (UID: \"258e4d75-ecca-4001-9f56-aeb39557b326\") " pod="openstack/neutron-669bb5748f-zjsxt" Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.848179 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/258e4d75-ecca-4001-9f56-aeb39557b326-public-tls-certs\") pod 
\"neutron-669bb5748f-zjsxt\" (UID: \"258e4d75-ecca-4001-9f56-aeb39557b326\") " pod="openstack/neutron-669bb5748f-zjsxt" Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.863985 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/258e4d75-ecca-4001-9f56-aeb39557b326-config\") pod \"neutron-669bb5748f-zjsxt\" (UID: \"258e4d75-ecca-4001-9f56-aeb39557b326\") " pod="openstack/neutron-669bb5748f-zjsxt" Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.875912 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/258e4d75-ecca-4001-9f56-aeb39557b326-ovndb-tls-certs\") pod \"neutron-669bb5748f-zjsxt\" (UID: \"258e4d75-ecca-4001-9f56-aeb39557b326\") " pod="openstack/neutron-669bb5748f-zjsxt" Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.877986 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/258e4d75-ecca-4001-9f56-aeb39557b326-httpd-config\") pod \"neutron-669bb5748f-zjsxt\" (UID: \"258e4d75-ecca-4001-9f56-aeb39557b326\") " pod="openstack/neutron-669bb5748f-zjsxt" Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.879348 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/258e4d75-ecca-4001-9f56-aeb39557b326-public-tls-certs\") pod \"neutron-669bb5748f-zjsxt\" (UID: \"258e4d75-ecca-4001-9f56-aeb39557b326\") " pod="openstack/neutron-669bb5748f-zjsxt" Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.883909 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp7z9\" (UniqueName: \"kubernetes.io/projected/258e4d75-ecca-4001-9f56-aeb39557b326-kube-api-access-tp7z9\") pod \"neutron-669bb5748f-zjsxt\" (UID: \"258e4d75-ecca-4001-9f56-aeb39557b326\") " pod="openstack/neutron-669bb5748f-zjsxt" Jan 29 07:03:15 crc 
kubenswrapper[4826]: I0129 07:03:15.886021 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/258e4d75-ecca-4001-9f56-aeb39557b326-combined-ca-bundle\") pod \"neutron-669bb5748f-zjsxt\" (UID: \"258e4d75-ecca-4001-9f56-aeb39557b326\") " pod="openstack/neutron-669bb5748f-zjsxt" Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.886523 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/258e4d75-ecca-4001-9f56-aeb39557b326-internal-tls-certs\") pod \"neutron-669bb5748f-zjsxt\" (UID: \"258e4d75-ecca-4001-9f56-aeb39557b326\") " pod="openstack/neutron-669bb5748f-zjsxt" Jan 29 07:03:15 crc kubenswrapper[4826]: E0129 07:03:15.908071 4826 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad14a23d_71a9_4348_9e06_61db9b024821.slice/crio-32221b2b22f27820444f88b1c4d786966450cc34e86964c6b693b53b6785cd38\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad14a23d_71a9_4348_9e06_61db9b024821.slice\": RecentStats: unable to find data in memory cache]" Jan 29 07:03:15 crc kubenswrapper[4826]: I0129 07:03:15.941912 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-669bb5748f-zjsxt" Jan 29 07:03:16 crc kubenswrapper[4826]: I0129 07:03:16.022961 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-596b9c7d4-2m8gc"] Jan 29 07:03:16 crc kubenswrapper[4826]: I0129 07:03:16.024707 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-596b9c7d4-2m8gc" Jan 29 07:03:16 crc kubenswrapper[4826]: I0129 07:03:16.028790 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 29 07:03:16 crc kubenswrapper[4826]: I0129 07:03:16.028840 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 29 07:03:16 crc kubenswrapper[4826]: I0129 07:03:16.051369 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-596b9c7d4-2m8gc"] Jan 29 07:03:16 crc kubenswrapper[4826]: I0129 07:03:16.166553 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pqlw\" (UniqueName: \"kubernetes.io/projected/ca39ae08-94df-4778-8203-bcff5806eff0-kube-api-access-6pqlw\") pod \"barbican-api-596b9c7d4-2m8gc\" (UID: \"ca39ae08-94df-4778-8203-bcff5806eff0\") " pod="openstack/barbican-api-596b9c7d4-2m8gc" Jan 29 07:03:16 crc kubenswrapper[4826]: I0129 07:03:16.166603 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca39ae08-94df-4778-8203-bcff5806eff0-config-data-custom\") pod \"barbican-api-596b9c7d4-2m8gc\" (UID: \"ca39ae08-94df-4778-8203-bcff5806eff0\") " pod="openstack/barbican-api-596b9c7d4-2m8gc" Jan 29 07:03:16 crc kubenswrapper[4826]: I0129 07:03:16.166635 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca39ae08-94df-4778-8203-bcff5806eff0-internal-tls-certs\") pod \"barbican-api-596b9c7d4-2m8gc\" (UID: \"ca39ae08-94df-4778-8203-bcff5806eff0\") " pod="openstack/barbican-api-596b9c7d4-2m8gc" Jan 29 07:03:16 crc kubenswrapper[4826]: I0129 07:03:16.166868 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/ca39ae08-94df-4778-8203-bcff5806eff0-logs\") pod \"barbican-api-596b9c7d4-2m8gc\" (UID: \"ca39ae08-94df-4778-8203-bcff5806eff0\") " pod="openstack/barbican-api-596b9c7d4-2m8gc" Jan 29 07:03:16 crc kubenswrapper[4826]: I0129 07:03:16.166986 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca39ae08-94df-4778-8203-bcff5806eff0-config-data\") pod \"barbican-api-596b9c7d4-2m8gc\" (UID: \"ca39ae08-94df-4778-8203-bcff5806eff0\") " pod="openstack/barbican-api-596b9c7d4-2m8gc" Jan 29 07:03:16 crc kubenswrapper[4826]: I0129 07:03:16.167123 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca39ae08-94df-4778-8203-bcff5806eff0-public-tls-certs\") pod \"barbican-api-596b9c7d4-2m8gc\" (UID: \"ca39ae08-94df-4778-8203-bcff5806eff0\") " pod="openstack/barbican-api-596b9c7d4-2m8gc" Jan 29 07:03:16 crc kubenswrapper[4826]: I0129 07:03:16.167210 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca39ae08-94df-4778-8203-bcff5806eff0-combined-ca-bundle\") pod \"barbican-api-596b9c7d4-2m8gc\" (UID: \"ca39ae08-94df-4778-8203-bcff5806eff0\") " pod="openstack/barbican-api-596b9c7d4-2m8gc" Jan 29 07:03:16 crc kubenswrapper[4826]: I0129 07:03:16.268402 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pqlw\" (UniqueName: \"kubernetes.io/projected/ca39ae08-94df-4778-8203-bcff5806eff0-kube-api-access-6pqlw\") pod \"barbican-api-596b9c7d4-2m8gc\" (UID: \"ca39ae08-94df-4778-8203-bcff5806eff0\") " pod="openstack/barbican-api-596b9c7d4-2m8gc" Jan 29 07:03:16 crc kubenswrapper[4826]: I0129 07:03:16.268459 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca39ae08-94df-4778-8203-bcff5806eff0-config-data-custom\") pod \"barbican-api-596b9c7d4-2m8gc\" (UID: \"ca39ae08-94df-4778-8203-bcff5806eff0\") " pod="openstack/barbican-api-596b9c7d4-2m8gc" Jan 29 07:03:16 crc kubenswrapper[4826]: I0129 07:03:16.268486 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca39ae08-94df-4778-8203-bcff5806eff0-internal-tls-certs\") pod \"barbican-api-596b9c7d4-2m8gc\" (UID: \"ca39ae08-94df-4778-8203-bcff5806eff0\") " pod="openstack/barbican-api-596b9c7d4-2m8gc" Jan 29 07:03:16 crc kubenswrapper[4826]: I0129 07:03:16.268539 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca39ae08-94df-4778-8203-bcff5806eff0-logs\") pod \"barbican-api-596b9c7d4-2m8gc\" (UID: \"ca39ae08-94df-4778-8203-bcff5806eff0\") " pod="openstack/barbican-api-596b9c7d4-2m8gc" Jan 29 07:03:16 crc kubenswrapper[4826]: I0129 07:03:16.268580 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca39ae08-94df-4778-8203-bcff5806eff0-config-data\") pod \"barbican-api-596b9c7d4-2m8gc\" (UID: \"ca39ae08-94df-4778-8203-bcff5806eff0\") " pod="openstack/barbican-api-596b9c7d4-2m8gc" Jan 29 07:03:16 crc kubenswrapper[4826]: I0129 07:03:16.268598 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca39ae08-94df-4778-8203-bcff5806eff0-public-tls-certs\") pod \"barbican-api-596b9c7d4-2m8gc\" (UID: \"ca39ae08-94df-4778-8203-bcff5806eff0\") " pod="openstack/barbican-api-596b9c7d4-2m8gc" Jan 29 07:03:16 crc kubenswrapper[4826]: I0129 07:03:16.268618 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ca39ae08-94df-4778-8203-bcff5806eff0-combined-ca-bundle\") pod \"barbican-api-596b9c7d4-2m8gc\" (UID: \"ca39ae08-94df-4778-8203-bcff5806eff0\") " pod="openstack/barbican-api-596b9c7d4-2m8gc" Jan 29 07:03:16 crc kubenswrapper[4826]: I0129 07:03:16.269433 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca39ae08-94df-4778-8203-bcff5806eff0-logs\") pod \"barbican-api-596b9c7d4-2m8gc\" (UID: \"ca39ae08-94df-4778-8203-bcff5806eff0\") " pod="openstack/barbican-api-596b9c7d4-2m8gc" Jan 29 07:03:16 crc kubenswrapper[4826]: I0129 07:03:16.276826 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca39ae08-94df-4778-8203-bcff5806eff0-config-data\") pod \"barbican-api-596b9c7d4-2m8gc\" (UID: \"ca39ae08-94df-4778-8203-bcff5806eff0\") " pod="openstack/barbican-api-596b9c7d4-2m8gc" Jan 29 07:03:16 crc kubenswrapper[4826]: I0129 07:03:16.279756 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca39ae08-94df-4778-8203-bcff5806eff0-internal-tls-certs\") pod \"barbican-api-596b9c7d4-2m8gc\" (UID: \"ca39ae08-94df-4778-8203-bcff5806eff0\") " pod="openstack/barbican-api-596b9c7d4-2m8gc" Jan 29 07:03:16 crc kubenswrapper[4826]: I0129 07:03:16.280941 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca39ae08-94df-4778-8203-bcff5806eff0-combined-ca-bundle\") pod \"barbican-api-596b9c7d4-2m8gc\" (UID: \"ca39ae08-94df-4778-8203-bcff5806eff0\") " pod="openstack/barbican-api-596b9c7d4-2m8gc" Jan 29 07:03:16 crc kubenswrapper[4826]: I0129 07:03:16.281310 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca39ae08-94df-4778-8203-bcff5806eff0-config-data-custom\") pod 
\"barbican-api-596b9c7d4-2m8gc\" (UID: \"ca39ae08-94df-4778-8203-bcff5806eff0\") " pod="openstack/barbican-api-596b9c7d4-2m8gc" Jan 29 07:03:16 crc kubenswrapper[4826]: I0129 07:03:16.288495 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca39ae08-94df-4778-8203-bcff5806eff0-public-tls-certs\") pod \"barbican-api-596b9c7d4-2m8gc\" (UID: \"ca39ae08-94df-4778-8203-bcff5806eff0\") " pod="openstack/barbican-api-596b9c7d4-2m8gc" Jan 29 07:03:16 crc kubenswrapper[4826]: I0129 07:03:16.296050 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pqlw\" (UniqueName: \"kubernetes.io/projected/ca39ae08-94df-4778-8203-bcff5806eff0-kube-api-access-6pqlw\") pod \"barbican-api-596b9c7d4-2m8gc\" (UID: \"ca39ae08-94df-4778-8203-bcff5806eff0\") " pod="openstack/barbican-api-596b9c7d4-2m8gc" Jan 29 07:03:16 crc kubenswrapper[4826]: I0129 07:03:16.364875 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-596b9c7d4-2m8gc" Jan 29 07:03:17 crc kubenswrapper[4826]: I0129 07:03:17.260688 4826 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 07:03:17 crc kubenswrapper[4826]: I0129 07:03:17.260726 4826 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 07:03:17 crc kubenswrapper[4826]: I0129 07:03:17.260708 4826 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 07:03:17 crc kubenswrapper[4826]: I0129 07:03:17.261699 4826 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 07:03:17 crc kubenswrapper[4826]: I0129 07:03:17.508421 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 07:03:17 crc kubenswrapper[4826]: I0129 07:03:17.650116 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 07:03:17 crc kubenswrapper[4826]: I0129 07:03:17.714375 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 07:03:17 crc kubenswrapper[4826]: I0129 07:03:17.731340 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 07:03:20 crc kubenswrapper[4826]: I0129 07:03:20.300718 4826 generic.go:334] "Generic (PLEG): container finished" podID="4a1157a3-fc93-4d73-8200-b55bfa626a09" containerID="581b263ab80d201cfd6ee41c67707a16c2ed78463b0e46ab2d2fbca441113216" exitCode=0 Jan 29 07:03:20 crc kubenswrapper[4826]: I0129 07:03:20.300809 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8qxjd" event={"ID":"4a1157a3-fc93-4d73-8200-b55bfa626a09","Type":"ContainerDied","Data":"581b263ab80d201cfd6ee41c67707a16c2ed78463b0e46ab2d2fbca441113216"} Jan 29 07:03:21 crc kubenswrapper[4826]: I0129 07:03:21.274468 4826 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5bbdf69cb4-gr8s9" Jan 29 07:03:21 crc kubenswrapper[4826]: I0129 07:03:21.313250 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b459d81c-6b13-40f7-a460-524b0082a05d","Type":"ContainerStarted","Data":"b2c2e63a3bd83d1403d7cf7e391df590b210b9eb16bd39e883e4d3cd4a61c6f9"} Jan 29 07:03:21 crc kubenswrapper[4826]: I0129 07:03:21.313333 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b459d81c-6b13-40f7-a460-524b0082a05d" containerName="ceilometer-central-agent" containerID="cri-o://8228f51af815f47f376f9cb3bea8904c10ecbbbee7393c8f84f57f8e9aa9614d" gracePeriod=30 Jan 29 07:03:21 crc kubenswrapper[4826]: I0129 07:03:21.313380 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 07:03:21 crc kubenswrapper[4826]: I0129 07:03:21.313506 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b459d81c-6b13-40f7-a460-524b0082a05d" containerName="proxy-httpd" containerID="cri-o://b2c2e63a3bd83d1403d7cf7e391df590b210b9eb16bd39e883e4d3cd4a61c6f9" gracePeriod=30 Jan 29 07:03:21 crc kubenswrapper[4826]: I0129 07:03:21.313547 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b459d81c-6b13-40f7-a460-524b0082a05d" containerName="sg-core" containerID="cri-o://9300df9956cc05af505ba0f1eca70bce92e138296b32ab0e9465d64a0af2bc46" gracePeriod=30 Jan 29 07:03:21 crc kubenswrapper[4826]: I0129 07:03:21.313585 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b459d81c-6b13-40f7-a460-524b0082a05d" containerName="ceilometer-notification-agent" containerID="cri-o://df52ac6f977bbeb210e63377b69a96a0c4c93e4a270aad2ac19139ea7c47727e" gracePeriod=30 Jan 29 07:03:21 crc 
kubenswrapper[4826]: W0129 07:03:21.355356 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca39ae08_94df_4778_8203_bcff5806eff0.slice/crio-147c8f4f8d013adf16e3865c378c30844d579a6c359678d682e8226a1f076502 WatchSource:0}: Error finding container 147c8f4f8d013adf16e3865c378c30844d579a6c359678d682e8226a1f076502: Status 404 returned error can't find the container with id 147c8f4f8d013adf16e3865c378c30844d579a6c359678d682e8226a1f076502 Jan 29 07:03:21 crc kubenswrapper[4826]: I0129 07:03:21.359443 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-596b9c7d4-2m8gc"] Jan 29 07:03:21 crc kubenswrapper[4826]: I0129 07:03:21.370205 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.415974217 podStartE2EDuration="52.370178886s" podCreationTimestamp="2026-01-29 07:02:29 +0000 UTC" firstStartedPulling="2026-01-29 07:02:30.947361643 +0000 UTC m=+1134.809154702" lastFinishedPulling="2026-01-29 07:03:20.901566302 +0000 UTC m=+1184.763359371" observedRunningTime="2026-01-29 07:03:21.339780751 +0000 UTC m=+1185.201573820" watchObservedRunningTime="2026-01-29 07:03:21.370178886 +0000 UTC m=+1185.231971975" Jan 29 07:03:21 crc kubenswrapper[4826]: I0129 07:03:21.411683 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5bbdf69cb4-gr8s9" Jan 29 07:03:21 crc kubenswrapper[4826]: I0129 07:03:21.507721 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-669bb5748f-zjsxt"] Jan 29 07:03:21 crc kubenswrapper[4826]: W0129 07:03:21.522408 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod258e4d75_ecca_4001_9f56_aeb39557b326.slice/crio-3693f8212ab3bdea973eefc6771205df90b07ff7f38ccabae234cd26fecefbff WatchSource:0}: Error finding container 
3693f8212ab3bdea973eefc6771205df90b07ff7f38ccabae234cd26fecefbff: Status 404 returned error can't find the container with id 3693f8212ab3bdea973eefc6771205df90b07ff7f38ccabae234cd26fecefbff Jan 29 07:03:21 crc kubenswrapper[4826]: I0129 07:03:21.602913 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7bdf86f46f-bcxcz" Jan 29 07:03:21 crc kubenswrapper[4826]: I0129 07:03:21.687734 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b56f9767-4458r"] Jan 29 07:03:21 crc kubenswrapper[4826]: I0129 07:03:21.687974 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b56f9767-4458r" podUID="d89e9402-7e3d-42c5-9f2a-b219113f9b2f" containerName="dnsmasq-dns" containerID="cri-o://e11025ebf0c23d357e172a87e100c47b44d7562a5e48cebef71c0eba8494e5e5" gracePeriod=10 Jan 29 07:03:21 crc kubenswrapper[4826]: I0129 07:03:21.754335 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-8qxjd" Jan 29 07:03:21 crc kubenswrapper[4826]: I0129 07:03:21.915628 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a1157a3-fc93-4d73-8200-b55bfa626a09-scripts\") pod \"4a1157a3-fc93-4d73-8200-b55bfa626a09\" (UID: \"4a1157a3-fc93-4d73-8200-b55bfa626a09\") " Jan 29 07:03:21 crc kubenswrapper[4826]: I0129 07:03:21.915689 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4a1157a3-fc93-4d73-8200-b55bfa626a09-etc-machine-id\") pod \"4a1157a3-fc93-4d73-8200-b55bfa626a09\" (UID: \"4a1157a3-fc93-4d73-8200-b55bfa626a09\") " Jan 29 07:03:21 crc kubenswrapper[4826]: I0129 07:03:21.915712 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a1157a3-fc93-4d73-8200-b55bfa626a09-config-data\") pod \"4a1157a3-fc93-4d73-8200-b55bfa626a09\" (UID: \"4a1157a3-fc93-4d73-8200-b55bfa626a09\") " Jan 29 07:03:21 crc kubenswrapper[4826]: I0129 07:03:21.915735 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a1157a3-fc93-4d73-8200-b55bfa626a09-combined-ca-bundle\") pod \"4a1157a3-fc93-4d73-8200-b55bfa626a09\" (UID: \"4a1157a3-fc93-4d73-8200-b55bfa626a09\") " Jan 29 07:03:21 crc kubenswrapper[4826]: I0129 07:03:21.915765 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4a1157a3-fc93-4d73-8200-b55bfa626a09-db-sync-config-data\") pod \"4a1157a3-fc93-4d73-8200-b55bfa626a09\" (UID: \"4a1157a3-fc93-4d73-8200-b55bfa626a09\") " Jan 29 07:03:21 crc kubenswrapper[4826]: I0129 07:03:21.915807 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxj8d\" 
(UniqueName: \"kubernetes.io/projected/4a1157a3-fc93-4d73-8200-b55bfa626a09-kube-api-access-xxj8d\") pod \"4a1157a3-fc93-4d73-8200-b55bfa626a09\" (UID: \"4a1157a3-fc93-4d73-8200-b55bfa626a09\") " Jan 29 07:03:21 crc kubenswrapper[4826]: I0129 07:03:21.917528 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a1157a3-fc93-4d73-8200-b55bfa626a09-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4a1157a3-fc93-4d73-8200-b55bfa626a09" (UID: "4a1157a3-fc93-4d73-8200-b55bfa626a09"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 07:03:21 crc kubenswrapper[4826]: I0129 07:03:21.927429 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a1157a3-fc93-4d73-8200-b55bfa626a09-scripts" (OuterVolumeSpecName: "scripts") pod "4a1157a3-fc93-4d73-8200-b55bfa626a09" (UID: "4a1157a3-fc93-4d73-8200-b55bfa626a09"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:21 crc kubenswrapper[4826]: I0129 07:03:21.934158 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a1157a3-fc93-4d73-8200-b55bfa626a09-kube-api-access-xxj8d" (OuterVolumeSpecName: "kube-api-access-xxj8d") pod "4a1157a3-fc93-4d73-8200-b55bfa626a09" (UID: "4a1157a3-fc93-4d73-8200-b55bfa626a09"). InnerVolumeSpecName "kube-api-access-xxj8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:03:21 crc kubenswrapper[4826]: I0129 07:03:21.938449 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a1157a3-fc93-4d73-8200-b55bfa626a09-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4a1157a3-fc93-4d73-8200-b55bfa626a09" (UID: "4a1157a3-fc93-4d73-8200-b55bfa626a09"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.012291 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a1157a3-fc93-4d73-8200-b55bfa626a09-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a1157a3-fc93-4d73-8200-b55bfa626a09" (UID: "4a1157a3-fc93-4d73-8200-b55bfa626a09"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.017493 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a1157a3-fc93-4d73-8200-b55bfa626a09-config-data" (OuterVolumeSpecName: "config-data") pod "4a1157a3-fc93-4d73-8200-b55bfa626a09" (UID: "4a1157a3-fc93-4d73-8200-b55bfa626a09"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.018825 4826 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4a1157a3-fc93-4d73-8200-b55bfa626a09-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.018858 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a1157a3-fc93-4d73-8200-b55bfa626a09-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.018871 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a1157a3-fc93-4d73-8200-b55bfa626a09-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.018880 4826 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4a1157a3-fc93-4d73-8200-b55bfa626a09-db-sync-config-data\") on node \"crc\" DevicePath 
\"\"" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.018891 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxj8d\" (UniqueName: \"kubernetes.io/projected/4a1157a3-fc93-4d73-8200-b55bfa626a09-kube-api-access-xxj8d\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.018900 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a1157a3-fc93-4d73-8200-b55bfa626a09-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.252797 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b56f9767-4458r" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.330287 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8qxjd" event={"ID":"4a1157a3-fc93-4d73-8200-b55bfa626a09","Type":"ContainerDied","Data":"7e2d68686e89f835fcf89962aabea00a0c0d8dcfd85c5615ccd6219908e00b0b"} Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.330345 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e2d68686e89f835fcf89962aabea00a0c0d8dcfd85c5615ccd6219908e00b0b" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.330423 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-8qxjd" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.340402 4826 generic.go:334] "Generic (PLEG): container finished" podID="d89e9402-7e3d-42c5-9f2a-b219113f9b2f" containerID="e11025ebf0c23d357e172a87e100c47b44d7562a5e48cebef71c0eba8494e5e5" exitCode=0 Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.340485 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b56f9767-4458r" event={"ID":"d89e9402-7e3d-42c5-9f2a-b219113f9b2f","Type":"ContainerDied","Data":"e11025ebf0c23d357e172a87e100c47b44d7562a5e48cebef71c0eba8494e5e5"} Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.340622 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b56f9767-4458r" event={"ID":"d89e9402-7e3d-42c5-9f2a-b219113f9b2f","Type":"ContainerDied","Data":"8e4242a7204b98c5e497469639c4377815d1af9dfb13e7f3ddef0da3d84ff683"} Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.340645 4826 scope.go:117] "RemoveContainer" containerID="e11025ebf0c23d357e172a87e100c47b44d7562a5e48cebef71c0eba8494e5e5" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.340826 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b56f9767-4458r" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.354372 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-596b9c7d4-2m8gc" event={"ID":"ca39ae08-94df-4778-8203-bcff5806eff0","Type":"ContainerStarted","Data":"f60ca07e9e25fa742e12d26cd7318095e6e3dc16fac1e585a5581d0cf9693fdd"} Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.354425 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-596b9c7d4-2m8gc" event={"ID":"ca39ae08-94df-4778-8203-bcff5806eff0","Type":"ContainerStarted","Data":"639fc19de1ca53b22e5ad7ab867d42fad4024f7c5011feb4c18fae207a13d7e7"} Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.354438 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-596b9c7d4-2m8gc" event={"ID":"ca39ae08-94df-4778-8203-bcff5806eff0","Type":"ContainerStarted","Data":"147c8f4f8d013adf16e3865c378c30844d579a6c359678d682e8226a1f076502"} Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.355814 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-596b9c7d4-2m8gc" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.355844 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-596b9c7d4-2m8gc" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.359582 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-669bb5748f-zjsxt" event={"ID":"258e4d75-ecca-4001-9f56-aeb39557b326","Type":"ContainerStarted","Data":"9dfce637d6a3b6a7342be469a0635e6d8e5afd174f4e993540d7e2a69349016b"} Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.359615 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-669bb5748f-zjsxt" event={"ID":"258e4d75-ecca-4001-9f56-aeb39557b326","Type":"ContainerStarted","Data":"5ef7b36d03b1ef462470c22e1ad57d72fd7e0094c00a4af390d706239a6a169c"} Jan 29 
07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.359628 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-669bb5748f-zjsxt" event={"ID":"258e4d75-ecca-4001-9f56-aeb39557b326","Type":"ContainerStarted","Data":"3693f8212ab3bdea973eefc6771205df90b07ff7f38ccabae234cd26fecefbff"} Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.359919 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-669bb5748f-zjsxt" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.383144 4826 scope.go:117] "RemoveContainer" containerID="dacaf576740c1b7320658b7b467611c39833cdd967e3888267beee657d040b71" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.385879 4826 generic.go:334] "Generic (PLEG): container finished" podID="b459d81c-6b13-40f7-a460-524b0082a05d" containerID="b2c2e63a3bd83d1403d7cf7e391df590b210b9eb16bd39e883e4d3cd4a61c6f9" exitCode=0 Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.385927 4826 generic.go:334] "Generic (PLEG): container finished" podID="b459d81c-6b13-40f7-a460-524b0082a05d" containerID="9300df9956cc05af505ba0f1eca70bce92e138296b32ab0e9465d64a0af2bc46" exitCode=2 Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.385936 4826 generic.go:334] "Generic (PLEG): container finished" podID="b459d81c-6b13-40f7-a460-524b0082a05d" containerID="8228f51af815f47f376f9cb3bea8904c10ecbbbee7393c8f84f57f8e9aa9614d" exitCode=0 Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.385968 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b459d81c-6b13-40f7-a460-524b0082a05d","Type":"ContainerDied","Data":"b2c2e63a3bd83d1403d7cf7e391df590b210b9eb16bd39e883e4d3cd4a61c6f9"} Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.386001 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b459d81c-6b13-40f7-a460-524b0082a05d","Type":"ContainerDied","Data":"9300df9956cc05af505ba0f1eca70bce92e138296b32ab0e9465d64a0af2bc46"} Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.386043 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b459d81c-6b13-40f7-a460-524b0082a05d","Type":"ContainerDied","Data":"8228f51af815f47f376f9cb3bea8904c10ecbbbee7393c8f84f57f8e9aa9614d"} Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.417603 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-596b9c7d4-2m8gc" podStartSLOduration=7.417562236 podStartE2EDuration="7.417562236s" podCreationTimestamp="2026-01-29 07:03:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:03:22.377731868 +0000 UTC m=+1186.239524937" watchObservedRunningTime="2026-01-29 07:03:22.417562236 +0000 UTC m=+1186.279355295" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.423928 4826 scope.go:117] "RemoveContainer" containerID="e11025ebf0c23d357e172a87e100c47b44d7562a5e48cebef71c0eba8494e5e5" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.426204 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d89e9402-7e3d-42c5-9f2a-b219113f9b2f-ovsdbserver-nb\") pod \"d89e9402-7e3d-42c5-9f2a-b219113f9b2f\" (UID: \"d89e9402-7e3d-42c5-9f2a-b219113f9b2f\") " Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.426401 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfnzn\" (UniqueName: \"kubernetes.io/projected/d89e9402-7e3d-42c5-9f2a-b219113f9b2f-kube-api-access-cfnzn\") pod \"d89e9402-7e3d-42c5-9f2a-b219113f9b2f\" (UID: \"d89e9402-7e3d-42c5-9f2a-b219113f9b2f\") " Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.426524 4826 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d89e9402-7e3d-42c5-9f2a-b219113f9b2f-dns-svc\") pod \"d89e9402-7e3d-42c5-9f2a-b219113f9b2f\" (UID: \"d89e9402-7e3d-42c5-9f2a-b219113f9b2f\") " Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.426609 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d89e9402-7e3d-42c5-9f2a-b219113f9b2f-config\") pod \"d89e9402-7e3d-42c5-9f2a-b219113f9b2f\" (UID: \"d89e9402-7e3d-42c5-9f2a-b219113f9b2f\") " Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.426698 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d89e9402-7e3d-42c5-9f2a-b219113f9b2f-ovsdbserver-sb\") pod \"d89e9402-7e3d-42c5-9f2a-b219113f9b2f\" (UID: \"d89e9402-7e3d-42c5-9f2a-b219113f9b2f\") " Jan 29 07:03:22 crc kubenswrapper[4826]: E0129 07:03:22.427261 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e11025ebf0c23d357e172a87e100c47b44d7562a5e48cebef71c0eba8494e5e5\": container with ID starting with e11025ebf0c23d357e172a87e100c47b44d7562a5e48cebef71c0eba8494e5e5 not found: ID does not exist" containerID="e11025ebf0c23d357e172a87e100c47b44d7562a5e48cebef71c0eba8494e5e5" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.427400 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e11025ebf0c23d357e172a87e100c47b44d7562a5e48cebef71c0eba8494e5e5"} err="failed to get container status \"e11025ebf0c23d357e172a87e100c47b44d7562a5e48cebef71c0eba8494e5e5\": rpc error: code = NotFound desc = could not find container \"e11025ebf0c23d357e172a87e100c47b44d7562a5e48cebef71c0eba8494e5e5\": container with ID starting with e11025ebf0c23d357e172a87e100c47b44d7562a5e48cebef71c0eba8494e5e5 not 
found: ID does not exist" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.427490 4826 scope.go:117] "RemoveContainer" containerID="dacaf576740c1b7320658b7b467611c39833cdd967e3888267beee657d040b71" Jan 29 07:03:22 crc kubenswrapper[4826]: E0129 07:03:22.433733 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dacaf576740c1b7320658b7b467611c39833cdd967e3888267beee657d040b71\": container with ID starting with dacaf576740c1b7320658b7b467611c39833cdd967e3888267beee657d040b71 not found: ID does not exist" containerID="dacaf576740c1b7320658b7b467611c39833cdd967e3888267beee657d040b71" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.433863 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dacaf576740c1b7320658b7b467611c39833cdd967e3888267beee657d040b71"} err="failed to get container status \"dacaf576740c1b7320658b7b467611c39833cdd967e3888267beee657d040b71\": rpc error: code = NotFound desc = could not find container \"dacaf576740c1b7320658b7b467611c39833cdd967e3888267beee657d040b71\": container with ID starting with dacaf576740c1b7320658b7b467611c39833cdd967e3888267beee657d040b71 not found: ID does not exist" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.435888 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-669bb5748f-zjsxt" podStartSLOduration=7.435873868 podStartE2EDuration="7.435873868s" podCreationTimestamp="2026-01-29 07:03:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:03:22.397850847 +0000 UTC m=+1186.259643916" watchObservedRunningTime="2026-01-29 07:03:22.435873868 +0000 UTC m=+1186.297666937" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.440355 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/d89e9402-7e3d-42c5-9f2a-b219113f9b2f-kube-api-access-cfnzn" (OuterVolumeSpecName: "kube-api-access-cfnzn") pod "d89e9402-7e3d-42c5-9f2a-b219113f9b2f" (UID: "d89e9402-7e3d-42c5-9f2a-b219113f9b2f"). InnerVolumeSpecName "kube-api-access-cfnzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.490171 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d89e9402-7e3d-42c5-9f2a-b219113f9b2f-config" (OuterVolumeSpecName: "config") pod "d89e9402-7e3d-42c5-9f2a-b219113f9b2f" (UID: "d89e9402-7e3d-42c5-9f2a-b219113f9b2f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.490586 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d89e9402-7e3d-42c5-9f2a-b219113f9b2f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d89e9402-7e3d-42c5-9f2a-b219113f9b2f" (UID: "d89e9402-7e3d-42c5-9f2a-b219113f9b2f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.506609 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d89e9402-7e3d-42c5-9f2a-b219113f9b2f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d89e9402-7e3d-42c5-9f2a-b219113f9b2f" (UID: "d89e9402-7e3d-42c5-9f2a-b219113f9b2f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.509210 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d89e9402-7e3d-42c5-9f2a-b219113f9b2f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d89e9402-7e3d-42c5-9f2a-b219113f9b2f" (UID: "d89e9402-7e3d-42c5-9f2a-b219113f9b2f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.529447 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d89e9402-7e3d-42c5-9f2a-b219113f9b2f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.529636 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfnzn\" (UniqueName: \"kubernetes.io/projected/d89e9402-7e3d-42c5-9f2a-b219113f9b2f-kube-api-access-cfnzn\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.529705 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d89e9402-7e3d-42c5-9f2a-b219113f9b2f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.529771 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d89e9402-7e3d-42c5-9f2a-b219113f9b2f-config\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.529832 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d89e9402-7e3d-42c5-9f2a-b219113f9b2f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.671846 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 07:03:22 crc kubenswrapper[4826]: E0129 07:03:22.672322 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d89e9402-7e3d-42c5-9f2a-b219113f9b2f" containerName="init" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.672341 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="d89e9402-7e3d-42c5-9f2a-b219113f9b2f" containerName="init" Jan 29 07:03:22 crc kubenswrapper[4826]: E0129 07:03:22.672367 4826 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="d89e9402-7e3d-42c5-9f2a-b219113f9b2f" containerName="dnsmasq-dns" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.672375 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="d89e9402-7e3d-42c5-9f2a-b219113f9b2f" containerName="dnsmasq-dns" Jan 29 07:03:22 crc kubenswrapper[4826]: E0129 07:03:22.672401 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a1157a3-fc93-4d73-8200-b55bfa626a09" containerName="cinder-db-sync" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.672410 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a1157a3-fc93-4d73-8200-b55bfa626a09" containerName="cinder-db-sync" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.672641 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a1157a3-fc93-4d73-8200-b55bfa626a09" containerName="cinder-db-sync" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.672674 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="d89e9402-7e3d-42c5-9f2a-b219113f9b2f" containerName="dnsmasq-dns" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.673912 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.677773 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.679672 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.680286 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.684759 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b56f9767-4458r"] Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.685319 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-pr242" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.707385 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b56f9767-4458r"] Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.716226 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.739620 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8\") " pod="openstack/cinder-scheduler-0" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.739876 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8\") " pod="openstack/cinder-scheduler-0" Jan 29 
07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.739963 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8-scripts\") pod \"cinder-scheduler-0\" (UID: \"1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8\") " pod="openstack/cinder-scheduler-0" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.740078 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8\") " pod="openstack/cinder-scheduler-0" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.740141 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8-config-data\") pod \"cinder-scheduler-0\" (UID: \"1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8\") " pod="openstack/cinder-scheduler-0" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.740227 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcbp2\" (UniqueName: \"kubernetes.io/projected/1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8-kube-api-access-lcbp2\") pod \"cinder-scheduler-0\" (UID: \"1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8\") " pod="openstack/cinder-scheduler-0" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.830238 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d89e9402-7e3d-42c5-9f2a-b219113f9b2f" path="/var/lib/kubelet/pods/d89e9402-7e3d-42c5-9f2a-b219113f9b2f/volumes" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.849041 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-6jddr"] Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 
07:03:22.850458 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8\") " pod="openstack/cinder-scheduler-0" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.850563 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-6jddr" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.850579 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8-config-data\") pod \"cinder-scheduler-0\" (UID: \"1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8\") " pod="openstack/cinder-scheduler-0" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.851056 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcbp2\" (UniqueName: \"kubernetes.io/projected/1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8-kube-api-access-lcbp2\") pod \"cinder-scheduler-0\" (UID: \"1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8\") " pod="openstack/cinder-scheduler-0" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.851317 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8\") " pod="openstack/cinder-scheduler-0" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.851358 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8\") " pod="openstack/cinder-scheduler-0" 
Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.851439 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8-scripts\") pod \"cinder-scheduler-0\" (UID: \"1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8\") " pod="openstack/cinder-scheduler-0" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.852181 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8\") " pod="openstack/cinder-scheduler-0" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.862284 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8\") " pod="openstack/cinder-scheduler-0" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.863241 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8\") " pod="openstack/cinder-scheduler-0" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.864133 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8-config-data\") pod \"cinder-scheduler-0\" (UID: \"1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8\") " pod="openstack/cinder-scheduler-0" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.865878 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8-scripts\") pod \"cinder-scheduler-0\" (UID: \"1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8\") " pod="openstack/cinder-scheduler-0" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.893914 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcbp2\" (UniqueName: \"kubernetes.io/projected/1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8-kube-api-access-lcbp2\") pod \"cinder-scheduler-0\" (UID: \"1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8\") " pod="openstack/cinder-scheduler-0" Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.913095 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-6jddr"] Jan 29 07:03:22 crc kubenswrapper[4826]: I0129 07:03:22.993054 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.040235 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.056093 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb2b2\" (UniqueName: \"kubernetes.io/projected/3313021d-864e-430f-acda-e4641355dd75-kube-api-access-sb2b2\") pod \"dnsmasq-dns-75bfc9b94f-6jddr\" (UID: \"3313021d-864e-430f-acda-e4641355dd75\") " pod="openstack/dnsmasq-dns-75bfc9b94f-6jddr" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.056200 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.056219 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3313021d-864e-430f-acda-e4641355dd75-dns-swift-storage-0\") pod \"dnsmasq-dns-75bfc9b94f-6jddr\" (UID: \"3313021d-864e-430f-acda-e4641355dd75\") " pod="openstack/dnsmasq-dns-75bfc9b94f-6jddr" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.056266 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3313021d-864e-430f-acda-e4641355dd75-config\") pod \"dnsmasq-dns-75bfc9b94f-6jddr\" (UID: \"3313021d-864e-430f-acda-e4641355dd75\") " pod="openstack/dnsmasq-dns-75bfc9b94f-6jddr" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.056321 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3313021d-864e-430f-acda-e4641355dd75-ovsdbserver-sb\") pod \"dnsmasq-dns-75bfc9b94f-6jddr\" (UID: \"3313021d-864e-430f-acda-e4641355dd75\") " pod="openstack/dnsmasq-dns-75bfc9b94f-6jddr" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.056344 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3313021d-864e-430f-acda-e4641355dd75-dns-svc\") pod \"dnsmasq-dns-75bfc9b94f-6jddr\" (UID: \"3313021d-864e-430f-acda-e4641355dd75\") " pod="openstack/dnsmasq-dns-75bfc9b94f-6jddr" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.056463 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3313021d-864e-430f-acda-e4641355dd75-ovsdbserver-nb\") pod \"dnsmasq-dns-75bfc9b94f-6jddr\" (UID: 
\"3313021d-864e-430f-acda-e4641355dd75\") " pod="openstack/dnsmasq-dns-75bfc9b94f-6jddr" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.056857 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.060918 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.163783 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kwgn\" (UniqueName: \"kubernetes.io/projected/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-kube-api-access-2kwgn\") pod \"cinder-api-0\" (UID: \"273a5f68-9084-4b33-ab4a-0b578c2bb7d1\") " pod="openstack/cinder-api-0" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.164086 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-config-data\") pod \"cinder-api-0\" (UID: \"273a5f68-9084-4b33-ab4a-0b578c2bb7d1\") " pod="openstack/cinder-api-0" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.164163 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"273a5f68-9084-4b33-ab4a-0b578c2bb7d1\") " pod="openstack/cinder-api-0" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.164230 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"273a5f68-9084-4b33-ab4a-0b578c2bb7d1\") " pod="openstack/cinder-api-0" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.164314 4826 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-scripts\") pod \"cinder-api-0\" (UID: \"273a5f68-9084-4b33-ab4a-0b578c2bb7d1\") " pod="openstack/cinder-api-0" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.164417 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3313021d-864e-430f-acda-e4641355dd75-dns-swift-storage-0\") pod \"dnsmasq-dns-75bfc9b94f-6jddr\" (UID: \"3313021d-864e-430f-acda-e4641355dd75\") " pod="openstack/dnsmasq-dns-75bfc9b94f-6jddr" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.164611 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3313021d-864e-430f-acda-e4641355dd75-config\") pod \"dnsmasq-dns-75bfc9b94f-6jddr\" (UID: \"3313021d-864e-430f-acda-e4641355dd75\") " pod="openstack/dnsmasq-dns-75bfc9b94f-6jddr" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.164699 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3313021d-864e-430f-acda-e4641355dd75-ovsdbserver-sb\") pod \"dnsmasq-dns-75bfc9b94f-6jddr\" (UID: \"3313021d-864e-430f-acda-e4641355dd75\") " pod="openstack/dnsmasq-dns-75bfc9b94f-6jddr" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.164773 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3313021d-864e-430f-acda-e4641355dd75-dns-svc\") pod \"dnsmasq-dns-75bfc9b94f-6jddr\" (UID: \"3313021d-864e-430f-acda-e4641355dd75\") " pod="openstack/dnsmasq-dns-75bfc9b94f-6jddr" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.164847 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/3313021d-864e-430f-acda-e4641355dd75-ovsdbserver-nb\") pod \"dnsmasq-dns-75bfc9b94f-6jddr\" (UID: \"3313021d-864e-430f-acda-e4641355dd75\") " pod="openstack/dnsmasq-dns-75bfc9b94f-6jddr" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.164917 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-logs\") pod \"cinder-api-0\" (UID: \"273a5f68-9084-4b33-ab4a-0b578c2bb7d1\") " pod="openstack/cinder-api-0" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.164982 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-config-data-custom\") pod \"cinder-api-0\" (UID: \"273a5f68-9084-4b33-ab4a-0b578c2bb7d1\") " pod="openstack/cinder-api-0" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.165056 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb2b2\" (UniqueName: \"kubernetes.io/projected/3313021d-864e-430f-acda-e4641355dd75-kube-api-access-sb2b2\") pod \"dnsmasq-dns-75bfc9b94f-6jddr\" (UID: \"3313021d-864e-430f-acda-e4641355dd75\") " pod="openstack/dnsmasq-dns-75bfc9b94f-6jddr" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.166120 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3313021d-864e-430f-acda-e4641355dd75-dns-swift-storage-0\") pod \"dnsmasq-dns-75bfc9b94f-6jddr\" (UID: \"3313021d-864e-430f-acda-e4641355dd75\") " pod="openstack/dnsmasq-dns-75bfc9b94f-6jddr" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.166750 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3313021d-864e-430f-acda-e4641355dd75-config\") pod 
\"dnsmasq-dns-75bfc9b94f-6jddr\" (UID: \"3313021d-864e-430f-acda-e4641355dd75\") " pod="openstack/dnsmasq-dns-75bfc9b94f-6jddr" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.167330 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3313021d-864e-430f-acda-e4641355dd75-ovsdbserver-sb\") pod \"dnsmasq-dns-75bfc9b94f-6jddr\" (UID: \"3313021d-864e-430f-acda-e4641355dd75\") " pod="openstack/dnsmasq-dns-75bfc9b94f-6jddr" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.167351 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3313021d-864e-430f-acda-e4641355dd75-ovsdbserver-nb\") pod \"dnsmasq-dns-75bfc9b94f-6jddr\" (UID: \"3313021d-864e-430f-acda-e4641355dd75\") " pod="openstack/dnsmasq-dns-75bfc9b94f-6jddr" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.167483 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3313021d-864e-430f-acda-e4641355dd75-dns-svc\") pod \"dnsmasq-dns-75bfc9b94f-6jddr\" (UID: \"3313021d-864e-430f-acda-e4641355dd75\") " pod="openstack/dnsmasq-dns-75bfc9b94f-6jddr" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.196331 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb2b2\" (UniqueName: \"kubernetes.io/projected/3313021d-864e-430f-acda-e4641355dd75-kube-api-access-sb2b2\") pod \"dnsmasq-dns-75bfc9b94f-6jddr\" (UID: \"3313021d-864e-430f-acda-e4641355dd75\") " pod="openstack/dnsmasq-dns-75bfc9b94f-6jddr" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.267437 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-logs\") pod \"cinder-api-0\" (UID: \"273a5f68-9084-4b33-ab4a-0b578c2bb7d1\") " pod="openstack/cinder-api-0" Jan 29 
07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.267483 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-config-data-custom\") pod \"cinder-api-0\" (UID: \"273a5f68-9084-4b33-ab4a-0b578c2bb7d1\") " pod="openstack/cinder-api-0" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.267554 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kwgn\" (UniqueName: \"kubernetes.io/projected/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-kube-api-access-2kwgn\") pod \"cinder-api-0\" (UID: \"273a5f68-9084-4b33-ab4a-0b578c2bb7d1\") " pod="openstack/cinder-api-0" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.267584 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-config-data\") pod \"cinder-api-0\" (UID: \"273a5f68-9084-4b33-ab4a-0b578c2bb7d1\") " pod="openstack/cinder-api-0" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.267600 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"273a5f68-9084-4b33-ab4a-0b578c2bb7d1\") " pod="openstack/cinder-api-0" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.267618 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"273a5f68-9084-4b33-ab4a-0b578c2bb7d1\") " pod="openstack/cinder-api-0" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.267636 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-scripts\") pod \"cinder-api-0\" (UID: \"273a5f68-9084-4b33-ab4a-0b578c2bb7d1\") " pod="openstack/cinder-api-0" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.267941 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-logs\") pod \"cinder-api-0\" (UID: \"273a5f68-9084-4b33-ab4a-0b578c2bb7d1\") " pod="openstack/cinder-api-0" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.268144 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"273a5f68-9084-4b33-ab4a-0b578c2bb7d1\") " pod="openstack/cinder-api-0" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.274050 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"273a5f68-9084-4b33-ab4a-0b578c2bb7d1\") " pod="openstack/cinder-api-0" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.276773 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-scripts\") pod \"cinder-api-0\" (UID: \"273a5f68-9084-4b33-ab4a-0b578c2bb7d1\") " pod="openstack/cinder-api-0" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.279172 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-config-data\") pod \"cinder-api-0\" (UID: \"273a5f68-9084-4b33-ab4a-0b578c2bb7d1\") " pod="openstack/cinder-api-0" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.281872 4826 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-config-data-custom\") pod \"cinder-api-0\" (UID: \"273a5f68-9084-4b33-ab4a-0b578c2bb7d1\") " pod="openstack/cinder-api-0" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.292397 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-6jddr" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.318394 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kwgn\" (UniqueName: \"kubernetes.io/projected/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-kube-api-access-2kwgn\") pod \"cinder-api-0\" (UID: \"273a5f68-9084-4b33-ab4a-0b578c2bb7d1\") " pod="openstack/cinder-api-0" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.434939 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.703193 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 07:03:23 crc kubenswrapper[4826]: W0129 07:03:23.704753 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a281f08_83e6_478f_8c1b_3cc0c7bfd7f8.slice/crio-676f445d4436e3b457fb9015a404a823afe1115faf3e0044b9a386eae0c8ea0f WatchSource:0}: Error finding container 676f445d4436e3b457fb9015a404a823afe1115faf3e0044b9a386eae0c8ea0f: Status 404 returned error can't find the container with id 676f445d4436e3b457fb9015a404a823afe1115faf3e0044b9a386eae0c8ea0f Jan 29 07:03:23 crc kubenswrapper[4826]: W0129 07:03:23.785491 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3313021d_864e_430f_acda_e4641355dd75.slice/crio-a8c034e222578c6a1716797a7b81422350d30a6d28b6fc673d56ce6b7985fa71 WatchSource:0}: Error finding container 
a8c034e222578c6a1716797a7b81422350d30a6d28b6fc673d56ce6b7985fa71: Status 404 returned error can't find the container with id a8c034e222578c6a1716797a7b81422350d30a6d28b6fc673d56ce6b7985fa71 Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.786206 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-6jddr"] Jan 29 07:03:23 crc kubenswrapper[4826]: I0129 07:03:23.946003 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 07:03:23 crc kubenswrapper[4826]: W0129 07:03:23.962526 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod273a5f68_9084_4b33_ab4a_0b578c2bb7d1.slice/crio-ea03d6f6b23759fab4f4c2f860b9bf3e3468706d4c8d98692abfe9957593321b WatchSource:0}: Error finding container ea03d6f6b23759fab4f4c2f860b9bf3e3468706d4c8d98692abfe9957593321b: Status 404 returned error can't find the container with id ea03d6f6b23759fab4f4c2f860b9bf3e3468706d4c8d98692abfe9957593321b Jan 29 07:03:24 crc kubenswrapper[4826]: I0129 07:03:24.416759 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8","Type":"ContainerStarted","Data":"676f445d4436e3b457fb9015a404a823afe1115faf3e0044b9a386eae0c8ea0f"} Jan 29 07:03:24 crc kubenswrapper[4826]: I0129 07:03:24.418505 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"273a5f68-9084-4b33-ab4a-0b578c2bb7d1","Type":"ContainerStarted","Data":"ea03d6f6b23759fab4f4c2f860b9bf3e3468706d4c8d98692abfe9957593321b"} Jan 29 07:03:24 crc kubenswrapper[4826]: I0129 07:03:24.422759 4826 generic.go:334] "Generic (PLEG): container finished" podID="3313021d-864e-430f-acda-e4641355dd75" containerID="5f878ff6a21842a4ece617c1339449f3b4ff0d535f8c0ab4b4081e7c617e27f1" exitCode=0 Jan 29 07:03:24 crc kubenswrapper[4826]: I0129 07:03:24.424196 4826 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-6jddr" event={"ID":"3313021d-864e-430f-acda-e4641355dd75","Type":"ContainerDied","Data":"5f878ff6a21842a4ece617c1339449f3b4ff0d535f8c0ab4b4081e7c617e27f1"} Jan 29 07:03:24 crc kubenswrapper[4826]: I0129 07:03:24.424227 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-6jddr" event={"ID":"3313021d-864e-430f-acda-e4641355dd75","Type":"ContainerStarted","Data":"a8c034e222578c6a1716797a7b81422350d30a6d28b6fc673d56ce6b7985fa71"} Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.448985 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8","Type":"ContainerStarted","Data":"0a83eaefc6ec463e58f41c35181e65dbcfe6ffba20a2211848e4580d733d53d8"} Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.450139 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.451680 4826 generic.go:334] "Generic (PLEG): container finished" podID="b459d81c-6b13-40f7-a460-524b0082a05d" containerID="df52ac6f977bbeb210e63377b69a96a0c4c93e4a270aad2ac19139ea7c47727e" exitCode=0 Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.451724 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b459d81c-6b13-40f7-a460-524b0082a05d","Type":"ContainerDied","Data":"df52ac6f977bbeb210e63377b69a96a0c4c93e4a270aad2ac19139ea7c47727e"} Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.451740 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b459d81c-6b13-40f7-a460-524b0082a05d","Type":"ContainerDied","Data":"785fada7912bea4a1ccef43f26d7364f8e26d4055765206142242f171e1d9795"} Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.451756 4826 scope.go:117] "RemoveContainer" 
containerID="b2c2e63a3bd83d1403d7cf7e391df590b210b9eb16bd39e883e4d3cd4a61c6f9" Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.456824 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"273a5f68-9084-4b33-ab4a-0b578c2bb7d1","Type":"ContainerStarted","Data":"8ea16af9bc15b5fe28ae6cbe085a5edcd03e1a2e170f2808a96af9269f5fe383"} Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.460327 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-6jddr" event={"ID":"3313021d-864e-430f-acda-e4641355dd75","Type":"ContainerStarted","Data":"02d4a65c2248f6909e6e137019714be7d0e23232d76da80adf01c2b971465ddc"} Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.461367 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75bfc9b94f-6jddr" Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.467608 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.497370 4826 scope.go:117] "RemoveContainer" containerID="9300df9956cc05af505ba0f1eca70bce92e138296b32ab0e9465d64a0af2bc46" Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.501639 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75bfc9b94f-6jddr" podStartSLOduration=3.501622455 podStartE2EDuration="3.501622455s" podCreationTimestamp="2026-01-29 07:03:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:03:25.498757081 +0000 UTC m=+1189.360550150" watchObservedRunningTime="2026-01-29 07:03:25.501622455 +0000 UTC m=+1189.363415524" Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.525463 4826 scope.go:117] "RemoveContainer" containerID="df52ac6f977bbeb210e63377b69a96a0c4c93e4a270aad2ac19139ea7c47727e" Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 
07:03:25.557354 4826 scope.go:117] "RemoveContainer" containerID="8228f51af815f47f376f9cb3bea8904c10ecbbbee7393c8f84f57f8e9aa9614d" Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.592825 4826 scope.go:117] "RemoveContainer" containerID="b2c2e63a3bd83d1403d7cf7e391df590b210b9eb16bd39e883e4d3cd4a61c6f9" Jan 29 07:03:25 crc kubenswrapper[4826]: E0129 07:03:25.593274 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2c2e63a3bd83d1403d7cf7e391df590b210b9eb16bd39e883e4d3cd4a61c6f9\": container with ID starting with b2c2e63a3bd83d1403d7cf7e391df590b210b9eb16bd39e883e4d3cd4a61c6f9 not found: ID does not exist" containerID="b2c2e63a3bd83d1403d7cf7e391df590b210b9eb16bd39e883e4d3cd4a61c6f9" Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.593343 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2c2e63a3bd83d1403d7cf7e391df590b210b9eb16bd39e883e4d3cd4a61c6f9"} err="failed to get container status \"b2c2e63a3bd83d1403d7cf7e391df590b210b9eb16bd39e883e4d3cd4a61c6f9\": rpc error: code = NotFound desc = could not find container \"b2c2e63a3bd83d1403d7cf7e391df590b210b9eb16bd39e883e4d3cd4a61c6f9\": container with ID starting with b2c2e63a3bd83d1403d7cf7e391df590b210b9eb16bd39e883e4d3cd4a61c6f9 not found: ID does not exist" Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.593370 4826 scope.go:117] "RemoveContainer" containerID="9300df9956cc05af505ba0f1eca70bce92e138296b32ab0e9465d64a0af2bc46" Jan 29 07:03:25 crc kubenswrapper[4826]: E0129 07:03:25.594208 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9300df9956cc05af505ba0f1eca70bce92e138296b32ab0e9465d64a0af2bc46\": container with ID starting with 9300df9956cc05af505ba0f1eca70bce92e138296b32ab0e9465d64a0af2bc46 not found: ID does not exist" 
containerID="9300df9956cc05af505ba0f1eca70bce92e138296b32ab0e9465d64a0af2bc46" Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.594231 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9300df9956cc05af505ba0f1eca70bce92e138296b32ab0e9465d64a0af2bc46"} err="failed to get container status \"9300df9956cc05af505ba0f1eca70bce92e138296b32ab0e9465d64a0af2bc46\": rpc error: code = NotFound desc = could not find container \"9300df9956cc05af505ba0f1eca70bce92e138296b32ab0e9465d64a0af2bc46\": container with ID starting with 9300df9956cc05af505ba0f1eca70bce92e138296b32ab0e9465d64a0af2bc46 not found: ID does not exist" Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.594244 4826 scope.go:117] "RemoveContainer" containerID="df52ac6f977bbeb210e63377b69a96a0c4c93e4a270aad2ac19139ea7c47727e" Jan 29 07:03:25 crc kubenswrapper[4826]: E0129 07:03:25.594666 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df52ac6f977bbeb210e63377b69a96a0c4c93e4a270aad2ac19139ea7c47727e\": container with ID starting with df52ac6f977bbeb210e63377b69a96a0c4c93e4a270aad2ac19139ea7c47727e not found: ID does not exist" containerID="df52ac6f977bbeb210e63377b69a96a0c4c93e4a270aad2ac19139ea7c47727e" Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.594690 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df52ac6f977bbeb210e63377b69a96a0c4c93e4a270aad2ac19139ea7c47727e"} err="failed to get container status \"df52ac6f977bbeb210e63377b69a96a0c4c93e4a270aad2ac19139ea7c47727e\": rpc error: code = NotFound desc = could not find container \"df52ac6f977bbeb210e63377b69a96a0c4c93e4a270aad2ac19139ea7c47727e\": container with ID starting with df52ac6f977bbeb210e63377b69a96a0c4c93e4a270aad2ac19139ea7c47727e not found: ID does not exist" Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.594706 4826 scope.go:117] 
"RemoveContainer" containerID="8228f51af815f47f376f9cb3bea8904c10ecbbbee7393c8f84f57f8e9aa9614d" Jan 29 07:03:25 crc kubenswrapper[4826]: E0129 07:03:25.594989 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8228f51af815f47f376f9cb3bea8904c10ecbbbee7393c8f84f57f8e9aa9614d\": container with ID starting with 8228f51af815f47f376f9cb3bea8904c10ecbbbee7393c8f84f57f8e9aa9614d not found: ID does not exist" containerID="8228f51af815f47f376f9cb3bea8904c10ecbbbee7393c8f84f57f8e9aa9614d" Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.595010 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8228f51af815f47f376f9cb3bea8904c10ecbbbee7393c8f84f57f8e9aa9614d"} err="failed to get container status \"8228f51af815f47f376f9cb3bea8904c10ecbbbee7393c8f84f57f8e9aa9614d\": rpc error: code = NotFound desc = could not find container \"8228f51af815f47f376f9cb3bea8904c10ecbbbee7393c8f84f57f8e9aa9614d\": container with ID starting with 8228f51af815f47f376f9cb3bea8904c10ecbbbee7393c8f84f57f8e9aa9614d not found: ID does not exist" Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.609902 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b459d81c-6b13-40f7-a460-524b0082a05d-combined-ca-bundle\") pod \"b459d81c-6b13-40f7-a460-524b0082a05d\" (UID: \"b459d81c-6b13-40f7-a460-524b0082a05d\") " Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.610010 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b459d81c-6b13-40f7-a460-524b0082a05d-log-httpd\") pod \"b459d81c-6b13-40f7-a460-524b0082a05d\" (UID: \"b459d81c-6b13-40f7-a460-524b0082a05d\") " Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.610049 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b459d81c-6b13-40f7-a460-524b0082a05d-config-data\") pod \"b459d81c-6b13-40f7-a460-524b0082a05d\" (UID: \"b459d81c-6b13-40f7-a460-524b0082a05d\") " Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.610151 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b459d81c-6b13-40f7-a460-524b0082a05d-sg-core-conf-yaml\") pod \"b459d81c-6b13-40f7-a460-524b0082a05d\" (UID: \"b459d81c-6b13-40f7-a460-524b0082a05d\") " Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.610237 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txm99\" (UniqueName: \"kubernetes.io/projected/b459d81c-6b13-40f7-a460-524b0082a05d-kube-api-access-txm99\") pod \"b459d81c-6b13-40f7-a460-524b0082a05d\" (UID: \"b459d81c-6b13-40f7-a460-524b0082a05d\") " Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.610259 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b459d81c-6b13-40f7-a460-524b0082a05d-run-httpd\") pod \"b459d81c-6b13-40f7-a460-524b0082a05d\" (UID: \"b459d81c-6b13-40f7-a460-524b0082a05d\") " Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.610314 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b459d81c-6b13-40f7-a460-524b0082a05d-scripts\") pod \"b459d81c-6b13-40f7-a460-524b0082a05d\" (UID: \"b459d81c-6b13-40f7-a460-524b0082a05d\") " Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.610820 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b459d81c-6b13-40f7-a460-524b0082a05d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b459d81c-6b13-40f7-a460-524b0082a05d" (UID: "b459d81c-6b13-40f7-a460-524b0082a05d"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.610965 4826 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b459d81c-6b13-40f7-a460-524b0082a05d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.611863 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b459d81c-6b13-40f7-a460-524b0082a05d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b459d81c-6b13-40f7-a460-524b0082a05d" (UID: "b459d81c-6b13-40f7-a460-524b0082a05d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.616507 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b459d81c-6b13-40f7-a460-524b0082a05d-scripts" (OuterVolumeSpecName: "scripts") pod "b459d81c-6b13-40f7-a460-524b0082a05d" (UID: "b459d81c-6b13-40f7-a460-524b0082a05d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.619931 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b459d81c-6b13-40f7-a460-524b0082a05d-kube-api-access-txm99" (OuterVolumeSpecName: "kube-api-access-txm99") pod "b459d81c-6b13-40f7-a460-524b0082a05d" (UID: "b459d81c-6b13-40f7-a460-524b0082a05d"). InnerVolumeSpecName "kube-api-access-txm99". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.640488 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b459d81c-6b13-40f7-a460-524b0082a05d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b459d81c-6b13-40f7-a460-524b0082a05d" (UID: "b459d81c-6b13-40f7-a460-524b0082a05d"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.712442 4826 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b459d81c-6b13-40f7-a460-524b0082a05d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.712472 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txm99\" (UniqueName: \"kubernetes.io/projected/b459d81c-6b13-40f7-a460-524b0082a05d-kube-api-access-txm99\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.712483 4826 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b459d81c-6b13-40f7-a460-524b0082a05d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.712496 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b459d81c-6b13-40f7-a460-524b0082a05d-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.712732 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b459d81c-6b13-40f7-a460-524b0082a05d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b459d81c-6b13-40f7-a460-524b0082a05d" (UID: "b459d81c-6b13-40f7-a460-524b0082a05d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.722334 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b459d81c-6b13-40f7-a460-524b0082a05d-config-data" (OuterVolumeSpecName: "config-data") pod "b459d81c-6b13-40f7-a460-524b0082a05d" (UID: "b459d81c-6b13-40f7-a460-524b0082a05d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.814172 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b459d81c-6b13-40f7-a460-524b0082a05d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:25 crc kubenswrapper[4826]: I0129 07:03:25.814789 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b459d81c-6b13-40f7-a460-524b0082a05d-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:26 crc kubenswrapper[4826]: E0129 07:03:26.179901 4826 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad14a23d_71a9_4348_9e06_61db9b024821.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad14a23d_71a9_4348_9e06_61db9b024821.slice/crio-32221b2b22f27820444f88b1c4d786966450cc34e86964c6b693b53b6785cd38\": RecentStats: unable to find data in memory cache]" Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.471196 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8","Type":"ContainerStarted","Data":"e8918c774c6ac07e9dbc0b59ee8d45ef4a3ff1352e2291e3505d79b3f39bdf4c"} Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.473055 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.475464 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"273a5f68-9084-4b33-ab4a-0b578c2bb7d1","Type":"ContainerStarted","Data":"d2aca3693358750a6887ebdf91a118f00b3bdf9fd97802e365c856feb1aa812a"} Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.475746 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="273a5f68-9084-4b33-ab4a-0b578c2bb7d1" containerName="cinder-api-log" containerID="cri-o://8ea16af9bc15b5fe28ae6cbe085a5edcd03e1a2e170f2808a96af9269f5fe383" gracePeriod=30 Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.475766 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="273a5f68-9084-4b33-ab4a-0b578c2bb7d1" containerName="cinder-api" containerID="cri-o://d2aca3693358750a6887ebdf91a118f00b3bdf9fd97802e365c856feb1aa812a" gracePeriod=30 Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.494541 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.653423073 podStartE2EDuration="4.494522979s" podCreationTimestamp="2026-01-29 07:03:22 +0000 UTC" firstStartedPulling="2026-01-29 07:03:23.707016813 +0000 UTC m=+1187.568809872" lastFinishedPulling="2026-01-29 07:03:24.548116689 +0000 UTC m=+1188.409909778" observedRunningTime="2026-01-29 07:03:26.489486889 +0000 UTC m=+1190.351279958" watchObservedRunningTime="2026-01-29 07:03:26.494522979 +0000 UTC m=+1190.356316048" Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.510791 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.510777118 podStartE2EDuration="3.510777118s" podCreationTimestamp="2026-01-29 07:03:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:03:26.508088479 +0000 UTC m=+1190.369881548" watchObservedRunningTime="2026-01-29 07:03:26.510777118 +0000 UTC m=+1190.372570187" Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.527580 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.535588 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.555249 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:03:26 crc kubenswrapper[4826]: E0129 07:03:26.555605 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b459d81c-6b13-40f7-a460-524b0082a05d" containerName="ceilometer-notification-agent" Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.555622 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="b459d81c-6b13-40f7-a460-524b0082a05d" containerName="ceilometer-notification-agent" Jan 29 07:03:26 crc kubenswrapper[4826]: E0129 07:03:26.555632 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b459d81c-6b13-40f7-a460-524b0082a05d" containerName="sg-core" Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.555638 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="b459d81c-6b13-40f7-a460-524b0082a05d" containerName="sg-core" Jan 29 07:03:26 crc kubenswrapper[4826]: E0129 07:03:26.555659 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b459d81c-6b13-40f7-a460-524b0082a05d" containerName="ceilometer-central-agent" Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.555665 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="b459d81c-6b13-40f7-a460-524b0082a05d" containerName="ceilometer-central-agent" Jan 29 07:03:26 crc kubenswrapper[4826]: E0129 07:03:26.555690 4826 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="b459d81c-6b13-40f7-a460-524b0082a05d" containerName="proxy-httpd" Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.555696 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="b459d81c-6b13-40f7-a460-524b0082a05d" containerName="proxy-httpd" Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.555836 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="b459d81c-6b13-40f7-a460-524b0082a05d" containerName="proxy-httpd" Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.555861 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="b459d81c-6b13-40f7-a460-524b0082a05d" containerName="ceilometer-central-agent" Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.555875 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="b459d81c-6b13-40f7-a460-524b0082a05d" containerName="sg-core" Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.555887 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="b459d81c-6b13-40f7-a460-524b0082a05d" containerName="ceilometer-notification-agent" Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.557349 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.558950 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.559143 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.571459 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.730246 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-log-httpd\") pod \"ceilometer-0\" (UID: \"aa3e839c-aa4e-42a3-9c30-5bba595f1aad\") " pod="openstack/ceilometer-0" Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.730293 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-scripts\") pod \"ceilometer-0\" (UID: \"aa3e839c-aa4e-42a3-9c30-5bba595f1aad\") " pod="openstack/ceilometer-0" Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.730347 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa3e839c-aa4e-42a3-9c30-5bba595f1aad\") " pod="openstack/ceilometer-0" Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.730545 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-run-httpd\") pod \"ceilometer-0\" (UID: \"aa3e839c-aa4e-42a3-9c30-5bba595f1aad\") " 
pod="openstack/ceilometer-0" Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.730595 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq85p\" (UniqueName: \"kubernetes.io/projected/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-kube-api-access-bq85p\") pod \"ceilometer-0\" (UID: \"aa3e839c-aa4e-42a3-9c30-5bba595f1aad\") " pod="openstack/ceilometer-0" Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.730646 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-config-data\") pod \"ceilometer-0\" (UID: \"aa3e839c-aa4e-42a3-9c30-5bba595f1aad\") " pod="openstack/ceilometer-0" Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.730665 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa3e839c-aa4e-42a3-9c30-5bba595f1aad\") " pod="openstack/ceilometer-0" Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.820548 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b459d81c-6b13-40f7-a460-524b0082a05d" path="/var/lib/kubelet/pods/b459d81c-6b13-40f7-a460-524b0082a05d/volumes" Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.832222 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-log-httpd\") pod \"ceilometer-0\" (UID: \"aa3e839c-aa4e-42a3-9c30-5bba595f1aad\") " pod="openstack/ceilometer-0" Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.832275 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-scripts\") pod \"ceilometer-0\" (UID: \"aa3e839c-aa4e-42a3-9c30-5bba595f1aad\") " pod="openstack/ceilometer-0" Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.832332 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa3e839c-aa4e-42a3-9c30-5bba595f1aad\") " pod="openstack/ceilometer-0" Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.832404 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq85p\" (UniqueName: \"kubernetes.io/projected/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-kube-api-access-bq85p\") pod \"ceilometer-0\" (UID: \"aa3e839c-aa4e-42a3-9c30-5bba595f1aad\") " pod="openstack/ceilometer-0" Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.832430 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-run-httpd\") pod \"ceilometer-0\" (UID: \"aa3e839c-aa4e-42a3-9c30-5bba595f1aad\") " pod="openstack/ceilometer-0" Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.832462 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-config-data\") pod \"ceilometer-0\" (UID: \"aa3e839c-aa4e-42a3-9c30-5bba595f1aad\") " pod="openstack/ceilometer-0" Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.832484 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa3e839c-aa4e-42a3-9c30-5bba595f1aad\") " pod="openstack/ceilometer-0" Jan 29 07:03:26 crc kubenswrapper[4826]: 
I0129 07:03:26.832712 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-log-httpd\") pod \"ceilometer-0\" (UID: \"aa3e839c-aa4e-42a3-9c30-5bba595f1aad\") " pod="openstack/ceilometer-0" Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.836455 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-run-httpd\") pod \"ceilometer-0\" (UID: \"aa3e839c-aa4e-42a3-9c30-5bba595f1aad\") " pod="openstack/ceilometer-0" Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.839165 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa3e839c-aa4e-42a3-9c30-5bba595f1aad\") " pod="openstack/ceilometer-0" Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.841885 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa3e839c-aa4e-42a3-9c30-5bba595f1aad\") " pod="openstack/ceilometer-0" Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.845368 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-scripts\") pod \"ceilometer-0\" (UID: \"aa3e839c-aa4e-42a3-9c30-5bba595f1aad\") " pod="openstack/ceilometer-0" Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.845584 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-config-data\") pod \"ceilometer-0\" (UID: \"aa3e839c-aa4e-42a3-9c30-5bba595f1aad\") " 
pod="openstack/ceilometer-0" Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.849453 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq85p\" (UniqueName: \"kubernetes.io/projected/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-kube-api-access-bq85p\") pod \"ceilometer-0\" (UID: \"aa3e839c-aa4e-42a3-9c30-5bba595f1aad\") " pod="openstack/ceilometer-0" Jan 29 07:03:26 crc kubenswrapper[4826]: I0129 07:03:26.873157 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.141336 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.238612 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-scripts\") pod \"273a5f68-9084-4b33-ab4a-0b578c2bb7d1\" (UID: \"273a5f68-9084-4b33-ab4a-0b578c2bb7d1\") " Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.238704 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-combined-ca-bundle\") pod \"273a5f68-9084-4b33-ab4a-0b578c2bb7d1\" (UID: \"273a5f68-9084-4b33-ab4a-0b578c2bb7d1\") " Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.238782 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-config-data-custom\") pod \"273a5f68-9084-4b33-ab4a-0b578c2bb7d1\" (UID: \"273a5f68-9084-4b33-ab4a-0b578c2bb7d1\") " Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.238828 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-config-data\") pod \"273a5f68-9084-4b33-ab4a-0b578c2bb7d1\" (UID: \"273a5f68-9084-4b33-ab4a-0b578c2bb7d1\") " Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.238848 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kwgn\" (UniqueName: \"kubernetes.io/projected/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-kube-api-access-2kwgn\") pod \"273a5f68-9084-4b33-ab4a-0b578c2bb7d1\" (UID: \"273a5f68-9084-4b33-ab4a-0b578c2bb7d1\") " Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.238881 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-logs\") pod \"273a5f68-9084-4b33-ab4a-0b578c2bb7d1\" (UID: \"273a5f68-9084-4b33-ab4a-0b578c2bb7d1\") " Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.238901 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-etc-machine-id\") pod \"273a5f68-9084-4b33-ab4a-0b578c2bb7d1\" (UID: \"273a5f68-9084-4b33-ab4a-0b578c2bb7d1\") " Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.239266 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "273a5f68-9084-4b33-ab4a-0b578c2bb7d1" (UID: "273a5f68-9084-4b33-ab4a-0b578c2bb7d1"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.243178 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-scripts" (OuterVolumeSpecName: "scripts") pod "273a5f68-9084-4b33-ab4a-0b578c2bb7d1" (UID: "273a5f68-9084-4b33-ab4a-0b578c2bb7d1"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.243755 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-logs" (OuterVolumeSpecName: "logs") pod "273a5f68-9084-4b33-ab4a-0b578c2bb7d1" (UID: "273a5f68-9084-4b33-ab4a-0b578c2bb7d1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.247410 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "273a5f68-9084-4b33-ab4a-0b578c2bb7d1" (UID: "273a5f68-9084-4b33-ab4a-0b578c2bb7d1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.247508 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-kube-api-access-2kwgn" (OuterVolumeSpecName: "kube-api-access-2kwgn") pod "273a5f68-9084-4b33-ab4a-0b578c2bb7d1" (UID: "273a5f68-9084-4b33-ab4a-0b578c2bb7d1"). InnerVolumeSpecName "kube-api-access-2kwgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.265236 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "273a5f68-9084-4b33-ab4a-0b578c2bb7d1" (UID: "273a5f68-9084-4b33-ab4a-0b578c2bb7d1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.298463 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-config-data" (OuterVolumeSpecName: "config-data") pod "273a5f68-9084-4b33-ab4a-0b578c2bb7d1" (UID: "273a5f68-9084-4b33-ab4a-0b578c2bb7d1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.341027 4826 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.341059 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.341070 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kwgn\" (UniqueName: \"kubernetes.io/projected/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-kube-api-access-2kwgn\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.341080 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-logs\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.341088 4826 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.341097 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.341105 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/273a5f68-9084-4b33-ab4a-0b578c2bb7d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.355656 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:03:27 crc kubenswrapper[4826]: W0129 07:03:27.361963 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa3e839c_aa4e_42a3_9c30_5bba595f1aad.slice/crio-8042992323eda57480d36e08b65f8f985ed8b6260ceadb0cb5b70bb868fa7788 WatchSource:0}: Error finding container 8042992323eda57480d36e08b65f8f985ed8b6260ceadb0cb5b70bb868fa7788: Status 404 returned error can't find the container with id 8042992323eda57480d36e08b65f8f985ed8b6260ceadb0cb5b70bb868fa7788 Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.488292 4826 generic.go:334] "Generic (PLEG): container finished" podID="273a5f68-9084-4b33-ab4a-0b578c2bb7d1" containerID="d2aca3693358750a6887ebdf91a118f00b3bdf9fd97802e365c856feb1aa812a" exitCode=0 Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.488374 4826 generic.go:334] "Generic (PLEG): container finished" podID="273a5f68-9084-4b33-ab4a-0b578c2bb7d1" containerID="8ea16af9bc15b5fe28ae6cbe085a5edcd03e1a2e170f2808a96af9269f5fe383" exitCode=143 Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.488477 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"273a5f68-9084-4b33-ab4a-0b578c2bb7d1","Type":"ContainerDied","Data":"d2aca3693358750a6887ebdf91a118f00b3bdf9fd97802e365c856feb1aa812a"} Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.488538 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-api-0" event={"ID":"273a5f68-9084-4b33-ab4a-0b578c2bb7d1","Type":"ContainerDied","Data":"8ea16af9bc15b5fe28ae6cbe085a5edcd03e1a2e170f2808a96af9269f5fe383"}
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.488560 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"273a5f68-9084-4b33-ab4a-0b578c2bb7d1","Type":"ContainerDied","Data":"ea03d6f6b23759fab4f4c2f860b9bf3e3468706d4c8d98692abfe9957593321b"}
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.488591 4826 scope.go:117] "RemoveContainer" containerID="d2aca3693358750a6887ebdf91a118f00b3bdf9fd97802e365c856feb1aa812a"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.490127 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.490232 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa3e839c-aa4e-42a3-9c30-5bba595f1aad","Type":"ContainerStarted","Data":"8042992323eda57480d36e08b65f8f985ed8b6260ceadb0cb5b70bb868fa7788"}
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.511939 4826 scope.go:117] "RemoveContainer" containerID="8ea16af9bc15b5fe28ae6cbe085a5edcd03e1a2e170f2808a96af9269f5fe383"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.533485 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.551253 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.558719 4826 scope.go:117] "RemoveContainer" containerID="d2aca3693358750a6887ebdf91a118f00b3bdf9fd97802e365c856feb1aa812a"
Jan 29 07:03:27 crc kubenswrapper[4826]: E0129 07:03:27.559093 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2aca3693358750a6887ebdf91a118f00b3bdf9fd97802e365c856feb1aa812a\": container with ID starting with d2aca3693358750a6887ebdf91a118f00b3bdf9fd97802e365c856feb1aa812a not found: ID does not exist" containerID="d2aca3693358750a6887ebdf91a118f00b3bdf9fd97802e365c856feb1aa812a"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.559134 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2aca3693358750a6887ebdf91a118f00b3bdf9fd97802e365c856feb1aa812a"} err="failed to get container status \"d2aca3693358750a6887ebdf91a118f00b3bdf9fd97802e365c856feb1aa812a\": rpc error: code = NotFound desc = could not find container \"d2aca3693358750a6887ebdf91a118f00b3bdf9fd97802e365c856feb1aa812a\": container with ID starting with d2aca3693358750a6887ebdf91a118f00b3bdf9fd97802e365c856feb1aa812a not found: ID does not exist"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.559158 4826 scope.go:117] "RemoveContainer" containerID="8ea16af9bc15b5fe28ae6cbe085a5edcd03e1a2e170f2808a96af9269f5fe383"
Jan 29 07:03:27 crc kubenswrapper[4826]: E0129 07:03:27.559749 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ea16af9bc15b5fe28ae6cbe085a5edcd03e1a2e170f2808a96af9269f5fe383\": container with ID starting with 8ea16af9bc15b5fe28ae6cbe085a5edcd03e1a2e170f2808a96af9269f5fe383 not found: ID does not exist" containerID="8ea16af9bc15b5fe28ae6cbe085a5edcd03e1a2e170f2808a96af9269f5fe383"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.559789 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ea16af9bc15b5fe28ae6cbe085a5edcd03e1a2e170f2808a96af9269f5fe383"} err="failed to get container status \"8ea16af9bc15b5fe28ae6cbe085a5edcd03e1a2e170f2808a96af9269f5fe383\": rpc error: code = NotFound desc = could not find container \"8ea16af9bc15b5fe28ae6cbe085a5edcd03e1a2e170f2808a96af9269f5fe383\": container with ID starting with 8ea16af9bc15b5fe28ae6cbe085a5edcd03e1a2e170f2808a96af9269f5fe383 not found: ID does not exist"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.559817 4826 scope.go:117] "RemoveContainer" containerID="d2aca3693358750a6887ebdf91a118f00b3bdf9fd97802e365c856feb1aa812a"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.563539 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Jan 29 07:03:27 crc kubenswrapper[4826]: E0129 07:03:27.563906 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="273a5f68-9084-4b33-ab4a-0b578c2bb7d1" containerName="cinder-api-log"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.563923 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="273a5f68-9084-4b33-ab4a-0b578c2bb7d1" containerName="cinder-api-log"
Jan 29 07:03:27 crc kubenswrapper[4826]: E0129 07:03:27.563939 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="273a5f68-9084-4b33-ab4a-0b578c2bb7d1" containerName="cinder-api"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.563945 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="273a5f68-9084-4b33-ab4a-0b578c2bb7d1" containerName="cinder-api"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.564115 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="273a5f68-9084-4b33-ab4a-0b578c2bb7d1" containerName="cinder-api-log"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.564131 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="273a5f68-9084-4b33-ab4a-0b578c2bb7d1" containerName="cinder-api"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.565578 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2aca3693358750a6887ebdf91a118f00b3bdf9fd97802e365c856feb1aa812a"} err="failed to get container status \"d2aca3693358750a6887ebdf91a118f00b3bdf9fd97802e365c856feb1aa812a\": rpc error: code = NotFound desc = could not find container \"d2aca3693358750a6887ebdf91a118f00b3bdf9fd97802e365c856feb1aa812a\": container with ID starting with d2aca3693358750a6887ebdf91a118f00b3bdf9fd97802e365c856feb1aa812a not found: ID does not exist"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.565892 4826 scope.go:117] "RemoveContainer" containerID="8ea16af9bc15b5fe28ae6cbe085a5edcd03e1a2e170f2808a96af9269f5fe383"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.566501 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ea16af9bc15b5fe28ae6cbe085a5edcd03e1a2e170f2808a96af9269f5fe383"} err="failed to get container status \"8ea16af9bc15b5fe28ae6cbe085a5edcd03e1a2e170f2808a96af9269f5fe383\": rpc error: code = NotFound desc = could not find container \"8ea16af9bc15b5fe28ae6cbe085a5edcd03e1a2e170f2808a96af9269f5fe383\": container with ID starting with 8ea16af9bc15b5fe28ae6cbe085a5edcd03e1a2e170f2808a96af9269f5fe383 not found: ID does not exist"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.567976 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.572078 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.572380 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.578043 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.600481 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.746844 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/082eb821-de0c-462e-9653-b1c80c8e1d2c-scripts\") pod \"cinder-api-0\" (UID: \"082eb821-de0c-462e-9653-b1c80c8e1d2c\") " pod="openstack/cinder-api-0"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.746935 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/082eb821-de0c-462e-9653-b1c80c8e1d2c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"082eb821-de0c-462e-9653-b1c80c8e1d2c\") " pod="openstack/cinder-api-0"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.746979 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/082eb821-de0c-462e-9653-b1c80c8e1d2c-config-data-custom\") pod \"cinder-api-0\" (UID: \"082eb821-de0c-462e-9653-b1c80c8e1d2c\") " pod="openstack/cinder-api-0"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.747011 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/082eb821-de0c-462e-9653-b1c80c8e1d2c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"082eb821-de0c-462e-9653-b1c80c8e1d2c\") " pod="openstack/cinder-api-0"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.747212 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/082eb821-de0c-462e-9653-b1c80c8e1d2c-public-tls-certs\") pod \"cinder-api-0\" (UID: \"082eb821-de0c-462e-9653-b1c80c8e1d2c\") " pod="openstack/cinder-api-0"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.747271 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/082eb821-de0c-462e-9653-b1c80c8e1d2c-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"082eb821-de0c-462e-9653-b1c80c8e1d2c\") " pod="openstack/cinder-api-0"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.747351 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/082eb821-de0c-462e-9653-b1c80c8e1d2c-logs\") pod \"cinder-api-0\" (UID: \"082eb821-de0c-462e-9653-b1c80c8e1d2c\") " pod="openstack/cinder-api-0"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.747533 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4lpp\" (UniqueName: \"kubernetes.io/projected/082eb821-de0c-462e-9653-b1c80c8e1d2c-kube-api-access-g4lpp\") pod \"cinder-api-0\" (UID: \"082eb821-de0c-462e-9653-b1c80c8e1d2c\") " pod="openstack/cinder-api-0"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.747610 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/082eb821-de0c-462e-9653-b1c80c8e1d2c-config-data\") pod \"cinder-api-0\" (UID: \"082eb821-de0c-462e-9653-b1c80c8e1d2c\") " pod="openstack/cinder-api-0"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.849553 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/082eb821-de0c-462e-9653-b1c80c8e1d2c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"082eb821-de0c-462e-9653-b1c80c8e1d2c\") " pod="openstack/cinder-api-0"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.849599 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/082eb821-de0c-462e-9653-b1c80c8e1d2c-config-data-custom\") pod \"cinder-api-0\" (UID: \"082eb821-de0c-462e-9653-b1c80c8e1d2c\") " pod="openstack/cinder-api-0"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.849625 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/082eb821-de0c-462e-9653-b1c80c8e1d2c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"082eb821-de0c-462e-9653-b1c80c8e1d2c\") " pod="openstack/cinder-api-0"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.849704 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/082eb821-de0c-462e-9653-b1c80c8e1d2c-public-tls-certs\") pod \"cinder-api-0\" (UID: \"082eb821-de0c-462e-9653-b1c80c8e1d2c\") " pod="openstack/cinder-api-0"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.849722 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/082eb821-de0c-462e-9653-b1c80c8e1d2c-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"082eb821-de0c-462e-9653-b1c80c8e1d2c\") " pod="openstack/cinder-api-0"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.849740 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/082eb821-de0c-462e-9653-b1c80c8e1d2c-logs\") pod \"cinder-api-0\" (UID: \"082eb821-de0c-462e-9653-b1c80c8e1d2c\") " pod="openstack/cinder-api-0"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.849911 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4lpp\" (UniqueName: \"kubernetes.io/projected/082eb821-de0c-462e-9653-b1c80c8e1d2c-kube-api-access-g4lpp\") pod \"cinder-api-0\" (UID: \"082eb821-de0c-462e-9653-b1c80c8e1d2c\") " pod="openstack/cinder-api-0"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.850243 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/082eb821-de0c-462e-9653-b1c80c8e1d2c-config-data\") pod \"cinder-api-0\" (UID: \"082eb821-de0c-462e-9653-b1c80c8e1d2c\") " pod="openstack/cinder-api-0"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.850253 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/082eb821-de0c-462e-9653-b1c80c8e1d2c-logs\") pod \"cinder-api-0\" (UID: \"082eb821-de0c-462e-9653-b1c80c8e1d2c\") " pod="openstack/cinder-api-0"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.850284 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/082eb821-de0c-462e-9653-b1c80c8e1d2c-scripts\") pod \"cinder-api-0\" (UID: \"082eb821-de0c-462e-9653-b1c80c8e1d2c\") " pod="openstack/cinder-api-0"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.850877 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/082eb821-de0c-462e-9653-b1c80c8e1d2c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"082eb821-de0c-462e-9653-b1c80c8e1d2c\") " pod="openstack/cinder-api-0"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.853229 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/082eb821-de0c-462e-9653-b1c80c8e1d2c-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"082eb821-de0c-462e-9653-b1c80c8e1d2c\") " pod="openstack/cinder-api-0"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.854196 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/082eb821-de0c-462e-9653-b1c80c8e1d2c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"082eb821-de0c-462e-9653-b1c80c8e1d2c\") " pod="openstack/cinder-api-0"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.854443 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/082eb821-de0c-462e-9653-b1c80c8e1d2c-scripts\") pod \"cinder-api-0\" (UID: \"082eb821-de0c-462e-9653-b1c80c8e1d2c\") " pod="openstack/cinder-api-0"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.855750 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/082eb821-de0c-462e-9653-b1c80c8e1d2c-config-data-custom\") pod \"cinder-api-0\" (UID: \"082eb821-de0c-462e-9653-b1c80c8e1d2c\") " pod="openstack/cinder-api-0"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.857835 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/082eb821-de0c-462e-9653-b1c80c8e1d2c-public-tls-certs\") pod \"cinder-api-0\" (UID: \"082eb821-de0c-462e-9653-b1c80c8e1d2c\") " pod="openstack/cinder-api-0"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.865065 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/082eb821-de0c-462e-9653-b1c80c8e1d2c-config-data\") pod \"cinder-api-0\" (UID: \"082eb821-de0c-462e-9653-b1c80c8e1d2c\") " pod="openstack/cinder-api-0"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.867959 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4lpp\" (UniqueName: \"kubernetes.io/projected/082eb821-de0c-462e-9653-b1c80c8e1d2c-kube-api-access-g4lpp\") pod \"cinder-api-0\" (UID: \"082eb821-de0c-462e-9653-b1c80c8e1d2c\") " pod="openstack/cinder-api-0"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.898145 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 29 07:03:27 crc kubenswrapper[4826]: I0129 07:03:27.995918 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 29 07:03:28 crc kubenswrapper[4826]: I0129 07:03:28.096508 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-596b9c7d4-2m8gc"
Jan 29 07:03:28 crc kubenswrapper[4826]: I0129 07:03:28.271972 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-596b9c7d4-2m8gc"
Jan 29 07:03:28 crc kubenswrapper[4826]: I0129 07:03:28.362765 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5bbdf69cb4-gr8s9"]
Jan 29 07:03:28 crc kubenswrapper[4826]: I0129 07:03:28.363233 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5bbdf69cb4-gr8s9" podUID="496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc" containerName="barbican-api-log" containerID="cri-o://89d9b65a6513ec2ac9c95c0c32c9367a352eb2b4baa44d3869f66002ef08a2f2" gracePeriod=30
Jan 29 07:03:28 crc kubenswrapper[4826]: I0129 07:03:28.363586 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5bbdf69cb4-gr8s9" podUID="496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc" containerName="barbican-api" containerID="cri-o://40dee22b22b805d753c85a95814c2121692adf4e2cda244989185957f128d1ee" gracePeriod=30
Jan 29 07:03:28 crc kubenswrapper[4826]: I0129 07:03:28.438088 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 29 07:03:28 crc kubenswrapper[4826]: I0129 07:03:28.504799 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa3e839c-aa4e-42a3-9c30-5bba595f1aad","Type":"ContainerStarted","Data":"534570455bc7f28c999f94982da918b4d6103cc6c3fff68a2a134c4ea2d71822"}
Jan 29 07:03:28 crc kubenswrapper[4826]: I0129 07:03:28.506597 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"082eb821-de0c-462e-9653-b1c80c8e1d2c","Type":"ContainerStarted","Data":"8cf0d9aa18257f6410a4185a2aeaa709cc8c2c2e64eca6bc4a11b545c2d14b02"}
Jan 29 07:03:28 crc kubenswrapper[4826]: I0129 07:03:28.520815 4826 generic.go:334] "Generic (PLEG): container finished" podID="496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc" containerID="89d9b65a6513ec2ac9c95c0c32c9367a352eb2b4baa44d3869f66002ef08a2f2" exitCode=143
Jan 29 07:03:28 crc kubenswrapper[4826]: I0129 07:03:28.521659 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5bbdf69cb4-gr8s9" event={"ID":"496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc","Type":"ContainerDied","Data":"89d9b65a6513ec2ac9c95c0c32c9367a352eb2b4baa44d3869f66002ef08a2f2"}
Jan 29 07:03:28 crc kubenswrapper[4826]: I0129 07:03:28.834878 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="273a5f68-9084-4b33-ab4a-0b578c2bb7d1" path="/var/lib/kubelet/pods/273a5f68-9084-4b33-ab4a-0b578c2bb7d1/volumes"
Jan 29 07:03:29 crc kubenswrapper[4826]: I0129 07:03:29.531428 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa3e839c-aa4e-42a3-9c30-5bba595f1aad","Type":"ContainerStarted","Data":"9340955341f541231fbfa0a1c462ae086c08549c43bbb99f49dc523952932dc4"}
Jan 29 07:03:29 crc kubenswrapper[4826]: I0129 07:03:29.531778 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa3e839c-aa4e-42a3-9c30-5bba595f1aad","Type":"ContainerStarted","Data":"f30f857579c6e8464fe523290c094e1e775babc999174c164bf9a985768dff66"}
Jan 29 07:03:29 crc kubenswrapper[4826]: I0129 07:03:29.534612 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"082eb821-de0c-462e-9653-b1c80c8e1d2c","Type":"ContainerStarted","Data":"a0acd49fff6ea3dad01d0e7ca858e9d1043795014ae1ed463aec93ec682db76b"}
Jan 29 07:03:30 crc kubenswrapper[4826]: I0129 07:03:30.566546 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"082eb821-de0c-462e-9653-b1c80c8e1d2c","Type":"ContainerStarted","Data":"3a8934b6ad3dfc62e2d7a18edca3b16ec92c490c12b90aeaef855441c2f63a2c"}
Jan 29 07:03:30 crc kubenswrapper[4826]: I0129 07:03:30.567543 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Jan 29 07:03:30 crc kubenswrapper[4826]: I0129 07:03:30.621200 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.621170444 podStartE2EDuration="3.621170444s" podCreationTimestamp="2026-01-29 07:03:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:03:30.596102457 +0000 UTC m=+1194.457895566" watchObservedRunningTime="2026-01-29 07:03:30.621170444 +0000 UTC m=+1194.482963553"
Jan 29 07:03:31 crc kubenswrapper[4826]: I0129 07:03:31.534475 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5bbdf69cb4-gr8s9" podUID="496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.155:9311/healthcheck\": read tcp 10.217.0.2:34384->10.217.0.155:9311: read: connection reset by peer"
Jan 29 07:03:31 crc kubenswrapper[4826]: I0129 07:03:31.534574 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5bbdf69cb4-gr8s9" podUID="496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.155:9311/healthcheck\": read tcp 10.217.0.2:34370->10.217.0.155:9311: read: connection reset by peer"
Jan 29 07:03:31 crc kubenswrapper[4826]: I0129 07:03:31.578808 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa3e839c-aa4e-42a3-9c30-5bba595f1aad","Type":"ContainerStarted","Data":"0757037f51b34dd43829be352006b82b17f47465b5d4fa6ebf752641b255056b"}
Jan 29 07:03:31 crc kubenswrapper[4826]: I0129 07:03:31.609238 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.736588593 podStartE2EDuration="5.609212992s" podCreationTimestamp="2026-01-29 07:03:26 +0000 UTC" firstStartedPulling="2026-01-29 07:03:27.364717196 +0000 UTC m=+1191.226510275" lastFinishedPulling="2026-01-29 07:03:31.237341595 +0000 UTC m=+1195.099134674" observedRunningTime="2026-01-29 07:03:31.606352348 +0000 UTC m=+1195.468145417" watchObservedRunningTime="2026-01-29 07:03:31.609212992 +0000 UTC m=+1195.471006071"
Jan 29 07:03:31 crc kubenswrapper[4826]: I0129 07:03:31.944598 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5bbdf69cb4-gr8s9"
Jan 29 07:03:32 crc kubenswrapper[4826]: I0129 07:03:32.039521 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc-logs\") pod \"496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc\" (UID: \"496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc\") "
Jan 29 07:03:32 crc kubenswrapper[4826]: I0129 07:03:32.039676 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc-combined-ca-bundle\") pod \"496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc\" (UID: \"496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc\") "
Jan 29 07:03:32 crc kubenswrapper[4826]: I0129 07:03:32.039740 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgfrd\" (UniqueName: \"kubernetes.io/projected/496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc-kube-api-access-jgfrd\") pod \"496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc\" (UID: \"496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc\") "
Jan 29 07:03:32 crc kubenswrapper[4826]: I0129 07:03:32.039772 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc-config-data\") pod \"496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc\" (UID: \"496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc\") "
Jan 29 07:03:32 crc kubenswrapper[4826]: I0129 07:03:32.039801 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc-config-data-custom\") pod \"496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc\" (UID: \"496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc\") "
Jan 29 07:03:32 crc kubenswrapper[4826]: I0129 07:03:32.040123 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc-logs" (OuterVolumeSpecName: "logs") pod "496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc" (UID: "496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 07:03:32 crc kubenswrapper[4826]: I0129 07:03:32.040622 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc-logs\") on node \"crc\" DevicePath \"\""
Jan 29 07:03:32 crc kubenswrapper[4826]: I0129 07:03:32.044984 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc-kube-api-access-jgfrd" (OuterVolumeSpecName: "kube-api-access-jgfrd") pod "496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc" (UID: "496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc"). InnerVolumeSpecName "kube-api-access-jgfrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 07:03:32 crc kubenswrapper[4826]: I0129 07:03:32.050681 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc" (UID: "496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:03:32 crc kubenswrapper[4826]: I0129 07:03:32.096385 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc" (UID: "496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:03:32 crc kubenswrapper[4826]: I0129 07:03:32.115476 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc-config-data" (OuterVolumeSpecName: "config-data") pod "496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc" (UID: "496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:03:32 crc kubenswrapper[4826]: I0129 07:03:32.142695 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 07:03:32 crc kubenswrapper[4826]: I0129 07:03:32.142727 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgfrd\" (UniqueName: \"kubernetes.io/projected/496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc-kube-api-access-jgfrd\") on node \"crc\" DevicePath \"\""
Jan 29 07:03:32 crc kubenswrapper[4826]: I0129 07:03:32.142738 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 07:03:32 crc kubenswrapper[4826]: I0129 07:03:32.142747 4826 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 29 07:03:32 crc kubenswrapper[4826]: I0129 07:03:32.589616 4826 generic.go:334] "Generic (PLEG): container finished" podID="496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc" containerID="40dee22b22b805d753c85a95814c2121692adf4e2cda244989185957f128d1ee" exitCode=0
Jan 29 07:03:32 crc kubenswrapper[4826]: I0129 07:03:32.589694 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5bbdf69cb4-gr8s9" event={"ID":"496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc","Type":"ContainerDied","Data":"40dee22b22b805d753c85a95814c2121692adf4e2cda244989185957f128d1ee"}
Jan 29 07:03:32 crc kubenswrapper[4826]: I0129 07:03:32.589699 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5bbdf69cb4-gr8s9"
Jan 29 07:03:32 crc kubenswrapper[4826]: I0129 07:03:32.589726 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5bbdf69cb4-gr8s9" event={"ID":"496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc","Type":"ContainerDied","Data":"8d001fc08bdfca6298a0cb1f0aafa07394f9cc78483f3edba7f193904b53e8ea"}
Jan 29 07:03:32 crc kubenswrapper[4826]: I0129 07:03:32.589745 4826 scope.go:117] "RemoveContainer" containerID="40dee22b22b805d753c85a95814c2121692adf4e2cda244989185957f128d1ee"
Jan 29 07:03:32 crc kubenswrapper[4826]: I0129 07:03:32.593800 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 29 07:03:32 crc kubenswrapper[4826]: I0129 07:03:32.632635 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5bbdf69cb4-gr8s9"]
Jan 29 07:03:32 crc kubenswrapper[4826]: I0129 07:03:32.641222 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5bbdf69cb4-gr8s9"]
Jan 29 07:03:32 crc kubenswrapper[4826]: I0129 07:03:32.652922 4826 scope.go:117] "RemoveContainer" containerID="89d9b65a6513ec2ac9c95c0c32c9367a352eb2b4baa44d3869f66002ef08a2f2"
Jan 29 07:03:32 crc kubenswrapper[4826]: I0129 07:03:32.683233 4826 scope.go:117] "RemoveContainer" containerID="40dee22b22b805d753c85a95814c2121692adf4e2cda244989185957f128d1ee"
Jan 29 07:03:32 crc kubenswrapper[4826]: E0129 07:03:32.683766 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40dee22b22b805d753c85a95814c2121692adf4e2cda244989185957f128d1ee\": container with ID starting with 40dee22b22b805d753c85a95814c2121692adf4e2cda244989185957f128d1ee not found: ID does not exist" containerID="40dee22b22b805d753c85a95814c2121692adf4e2cda244989185957f128d1ee"
Jan 29 07:03:32 crc kubenswrapper[4826]: I0129 07:03:32.683840 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40dee22b22b805d753c85a95814c2121692adf4e2cda244989185957f128d1ee"} err="failed to get container status \"40dee22b22b805d753c85a95814c2121692adf4e2cda244989185957f128d1ee\": rpc error: code = NotFound desc = could not find container \"40dee22b22b805d753c85a95814c2121692adf4e2cda244989185957f128d1ee\": container with ID starting with 40dee22b22b805d753c85a95814c2121692adf4e2cda244989185957f128d1ee not found: ID does not exist"
Jan 29 07:03:32 crc kubenswrapper[4826]: I0129 07:03:32.683889 4826 scope.go:117] "RemoveContainer" containerID="89d9b65a6513ec2ac9c95c0c32c9367a352eb2b4baa44d3869f66002ef08a2f2"
Jan 29 07:03:32 crc kubenswrapper[4826]: E0129 07:03:32.684233 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89d9b65a6513ec2ac9c95c0c32c9367a352eb2b4baa44d3869f66002ef08a2f2\": container with ID starting with 89d9b65a6513ec2ac9c95c0c32c9367a352eb2b4baa44d3869f66002ef08a2f2 not found: ID does not exist" containerID="89d9b65a6513ec2ac9c95c0c32c9367a352eb2b4baa44d3869f66002ef08a2f2"
Jan 29 07:03:32 crc kubenswrapper[4826]: I0129 07:03:32.684266 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89d9b65a6513ec2ac9c95c0c32c9367a352eb2b4baa44d3869f66002ef08a2f2"} err="failed to get container status \"89d9b65a6513ec2ac9c95c0c32c9367a352eb2b4baa44d3869f66002ef08a2f2\": rpc error: code = NotFound desc = could not find container \"89d9b65a6513ec2ac9c95c0c32c9367a352eb2b4baa44d3869f66002ef08a2f2\": container with ID starting with 89d9b65a6513ec2ac9c95c0c32c9367a352eb2b4baa44d3869f66002ef08a2f2 not found: ID does not exist"
Jan 29 07:03:32 crc kubenswrapper[4826]: I0129 07:03:32.844329 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc" path="/var/lib/kubelet/pods/496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc/volumes"
Jan 29 07:03:33 crc kubenswrapper[4826]: I0129 07:03:33.295750 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75bfc9b94f-6jddr"
Jan 29 07:03:33 crc kubenswrapper[4826]: I0129 07:03:33.408971 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-bcxcz"]
Jan 29 07:03:33 crc kubenswrapper[4826]: I0129 07:03:33.409465 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7bdf86f46f-bcxcz" podUID="a9a33d25-bc6c-4615-9997-2ce3b421499f" containerName="dnsmasq-dns" containerID="cri-o://beb9ef7ee57ddf401eae74af871fe7cd6e0befc2f2b5340f33b2e08e5fad94a7" gracePeriod=10
Jan 29 07:03:33 crc kubenswrapper[4826]: I0129 07:03:33.601844 4826 generic.go:334] "Generic (PLEG): container finished" podID="a9a33d25-bc6c-4615-9997-2ce3b421499f" containerID="beb9ef7ee57ddf401eae74af871fe7cd6e0befc2f2b5340f33b2e08e5fad94a7" exitCode=0
Jan 29 07:03:33 crc kubenswrapper[4826]: I0129 07:03:33.601910 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-bcxcz" event={"ID":"a9a33d25-bc6c-4615-9997-2ce3b421499f","Type":"ContainerDied","Data":"beb9ef7ee57ddf401eae74af871fe7cd6e0befc2f2b5340f33b2e08e5fad94a7"}
Jan 29 07:03:33 crc kubenswrapper[4826]: I0129 07:03:33.640025 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Jan 29 07:03:33 crc kubenswrapper[4826]: I0129 07:03:33.685919 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 29 07:03:33 crc kubenswrapper[4826]: I0129 07:03:33.988704 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdf86f46f-bcxcz"
Jan 29 07:03:34 crc kubenswrapper[4826]: I0129 07:03:34.086358 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99jkp\" (UniqueName: \"kubernetes.io/projected/a9a33d25-bc6c-4615-9997-2ce3b421499f-kube-api-access-99jkp\") pod \"a9a33d25-bc6c-4615-9997-2ce3b421499f\" (UID: \"a9a33d25-bc6c-4615-9997-2ce3b421499f\") "
Jan 29 07:03:34 crc kubenswrapper[4826]: I0129 07:03:34.086456 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9a33d25-bc6c-4615-9997-2ce3b421499f-ovsdbserver-nb\") pod \"a9a33d25-bc6c-4615-9997-2ce3b421499f\" (UID: \"a9a33d25-bc6c-4615-9997-2ce3b421499f\") "
Jan 29 07:03:34 crc kubenswrapper[4826]: I0129 07:03:34.086500 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9a33d25-bc6c-4615-9997-2ce3b421499f-dns-svc\") pod \"a9a33d25-bc6c-4615-9997-2ce3b421499f\" (UID: \"a9a33d25-bc6c-4615-9997-2ce3b421499f\") "
Jan 29 07:03:34 crc kubenswrapper[4826]: I0129 07:03:34.086557 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9a33d25-bc6c-4615-9997-2ce3b421499f-config\") pod \"a9a33d25-bc6c-4615-9997-2ce3b421499f\" (UID: \"a9a33d25-bc6c-4615-9997-2ce3b421499f\") "
Jan 29 07:03:34 crc kubenswrapper[4826]: I0129 07:03:34.086624 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9a33d25-bc6c-4615-9997-2ce3b421499f-dns-swift-storage-0\") pod \"a9a33d25-bc6c-4615-9997-2ce3b421499f\" (UID: \"a9a33d25-bc6c-4615-9997-2ce3b421499f\") "
Jan 29 07:03:34 crc kubenswrapper[4826]: I0129 07:03:34.086661 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\"
(UniqueName: \"kubernetes.io/configmap/a9a33d25-bc6c-4615-9997-2ce3b421499f-ovsdbserver-sb\") pod \"a9a33d25-bc6c-4615-9997-2ce3b421499f\" (UID: \"a9a33d25-bc6c-4615-9997-2ce3b421499f\") " Jan 29 07:03:34 crc kubenswrapper[4826]: I0129 07:03:34.103516 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9a33d25-bc6c-4615-9997-2ce3b421499f-kube-api-access-99jkp" (OuterVolumeSpecName: "kube-api-access-99jkp") pod "a9a33d25-bc6c-4615-9997-2ce3b421499f" (UID: "a9a33d25-bc6c-4615-9997-2ce3b421499f"). InnerVolumeSpecName "kube-api-access-99jkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:03:34 crc kubenswrapper[4826]: I0129 07:03:34.143900 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9a33d25-bc6c-4615-9997-2ce3b421499f-config" (OuterVolumeSpecName: "config") pod "a9a33d25-bc6c-4615-9997-2ce3b421499f" (UID: "a9a33d25-bc6c-4615-9997-2ce3b421499f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:03:34 crc kubenswrapper[4826]: I0129 07:03:34.155383 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9a33d25-bc6c-4615-9997-2ce3b421499f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a9a33d25-bc6c-4615-9997-2ce3b421499f" (UID: "a9a33d25-bc6c-4615-9997-2ce3b421499f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:03:34 crc kubenswrapper[4826]: I0129 07:03:34.167927 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9a33d25-bc6c-4615-9997-2ce3b421499f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a9a33d25-bc6c-4615-9997-2ce3b421499f" (UID: "a9a33d25-bc6c-4615-9997-2ce3b421499f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:03:34 crc kubenswrapper[4826]: I0129 07:03:34.180068 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9a33d25-bc6c-4615-9997-2ce3b421499f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a9a33d25-bc6c-4615-9997-2ce3b421499f" (UID: "a9a33d25-bc6c-4615-9997-2ce3b421499f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:03:34 crc kubenswrapper[4826]: I0129 07:03:34.180555 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9a33d25-bc6c-4615-9997-2ce3b421499f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a9a33d25-bc6c-4615-9997-2ce3b421499f" (UID: "a9a33d25-bc6c-4615-9997-2ce3b421499f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:03:34 crc kubenswrapper[4826]: I0129 07:03:34.189294 4826 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9a33d25-bc6c-4615-9997-2ce3b421499f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:34 crc kubenswrapper[4826]: I0129 07:03:34.189494 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9a33d25-bc6c-4615-9997-2ce3b421499f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:34 crc kubenswrapper[4826]: I0129 07:03:34.189656 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99jkp\" (UniqueName: \"kubernetes.io/projected/a9a33d25-bc6c-4615-9997-2ce3b421499f-kube-api-access-99jkp\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:34 crc kubenswrapper[4826]: I0129 07:03:34.189789 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/a9a33d25-bc6c-4615-9997-2ce3b421499f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:34 crc kubenswrapper[4826]: I0129 07:03:34.189903 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9a33d25-bc6c-4615-9997-2ce3b421499f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:34 crc kubenswrapper[4826]: I0129 07:03:34.190015 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9a33d25-bc6c-4615-9997-2ce3b421499f-config\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:34 crc kubenswrapper[4826]: I0129 07:03:34.618522 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-bcxcz" event={"ID":"a9a33d25-bc6c-4615-9997-2ce3b421499f","Type":"ContainerDied","Data":"8ca59e938e97ce896b8e1438e372918f8b26cf9e28c95b57f77c81c199fe321c"} Jan 29 07:03:34 crc kubenswrapper[4826]: I0129 07:03:34.619243 4826 scope.go:117] "RemoveContainer" containerID="beb9ef7ee57ddf401eae74af871fe7cd6e0befc2f2b5340f33b2e08e5fad94a7" Jan 29 07:03:34 crc kubenswrapper[4826]: I0129 07:03:34.618707 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8" containerName="cinder-scheduler" containerID="cri-o://0a83eaefc6ec463e58f41c35181e65dbcfe6ffba20a2211848e4580d733d53d8" gracePeriod=30 Jan 29 07:03:34 crc kubenswrapper[4826]: I0129 07:03:34.618841 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8" containerName="probe" containerID="cri-o://e8918c774c6ac07e9dbc0b59ee8d45ef4a3ff1352e2291e3505d79b3f39bdf4c" gracePeriod=30 Jan 29 07:03:34 crc kubenswrapper[4826]: I0129 07:03:34.618552 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bdf86f46f-bcxcz" Jan 29 07:03:34 crc kubenswrapper[4826]: I0129 07:03:34.657611 4826 scope.go:117] "RemoveContainer" containerID="accf2a866cee7804315032498e741ee2d80532fada34cbc8f12e6ca88b80a3d1" Jan 29 07:03:34 crc kubenswrapper[4826]: I0129 07:03:34.674764 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-bcxcz"] Jan 29 07:03:34 crc kubenswrapper[4826]: I0129 07:03:34.690525 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-bcxcz"] Jan 29 07:03:34 crc kubenswrapper[4826]: I0129 07:03:34.819285 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9a33d25-bc6c-4615-9997-2ce3b421499f" path="/var/lib/kubelet/pods/a9a33d25-bc6c-4615-9997-2ce3b421499f/volumes" Jan 29 07:03:35 crc kubenswrapper[4826]: I0129 07:03:35.634403 4826 generic.go:334] "Generic (PLEG): container finished" podID="1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8" containerID="e8918c774c6ac07e9dbc0b59ee8d45ef4a3ff1352e2291e3505d79b3f39bdf4c" exitCode=0 Jan 29 07:03:35 crc kubenswrapper[4826]: I0129 07:03:35.634494 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8","Type":"ContainerDied","Data":"e8918c774c6ac07e9dbc0b59ee8d45ef4a3ff1352e2291e3505d79b3f39bdf4c"} Jan 29 07:03:35 crc kubenswrapper[4826]: I0129 07:03:35.656450 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:03:35 crc kubenswrapper[4826]: I0129 07:03:35.656543 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:03:35 crc kubenswrapper[4826]: I0129 07:03:35.656612 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" Jan 29 07:03:35 crc kubenswrapper[4826]: I0129 07:03:35.657737 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2746c36a8cbae641f39bc5b503c4b8bd16a73e3034bddd5ca4705c812e26566f"} pod="openshift-machine-config-operator/machine-config-daemon-llzmh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 07:03:35 crc kubenswrapper[4826]: I0129 07:03:35.657843 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" containerID="cri-o://2746c36a8cbae641f39bc5b503c4b8bd16a73e3034bddd5ca4705c812e26566f" gracePeriod=600 Jan 29 07:03:35 crc kubenswrapper[4826]: I0129 07:03:35.779887 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-86487d6456-mmjgq" Jan 29 07:03:36 crc kubenswrapper[4826]: E0129 07:03:36.433389 4826 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad14a23d_71a9_4348_9e06_61db9b024821.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad14a23d_71a9_4348_9e06_61db9b024821.slice/crio-32221b2b22f27820444f88b1c4d786966450cc34e86964c6b693b53b6785cd38\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a281f08_83e6_478f_8c1b_3cc0c7bfd7f8.slice/crio-0a83eaefc6ec463e58f41c35181e65dbcfe6ffba20a2211848e4580d733d53d8.scope\": RecentStats: unable to find data in memory cache]" Jan 29 07:03:36 crc kubenswrapper[4826]: I0129 07:03:36.644981 4826 generic.go:334] "Generic (PLEG): container finished" podID="1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8" containerID="0a83eaefc6ec463e58f41c35181e65dbcfe6ffba20a2211848e4580d733d53d8" exitCode=0 Jan 29 07:03:36 crc kubenswrapper[4826]: I0129 07:03:36.645050 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8","Type":"ContainerDied","Data":"0a83eaefc6ec463e58f41c35181e65dbcfe6ffba20a2211848e4580d733d53d8"} Jan 29 07:03:36 crc kubenswrapper[4826]: I0129 07:03:36.647362 4826 generic.go:334] "Generic (PLEG): container finished" podID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerID="2746c36a8cbae641f39bc5b503c4b8bd16a73e3034bddd5ca4705c812e26566f" exitCode=0 Jan 29 07:03:36 crc kubenswrapper[4826]: I0129 07:03:36.647396 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerDied","Data":"2746c36a8cbae641f39bc5b503c4b8bd16a73e3034bddd5ca4705c812e26566f"} Jan 29 07:03:36 crc kubenswrapper[4826]: I0129 07:03:36.647417 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerStarted","Data":"577b176493c80a578b39974191ea87b611ce451ac0e7d53efe3f736b701ffd68"} Jan 29 07:03:36 crc kubenswrapper[4826]: I0129 07:03:36.647435 4826 scope.go:117] "RemoveContainer" containerID="d3303398d9dd82a2bcfef8e8991ab372b491761ad48de0e25a106a7c53d77566" Jan 29 07:03:36 crc kubenswrapper[4826]: E0129 07:03:36.855224 4826 fsHandler.go:119] failed 
to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a8c5a5f3154af4957d73dbd9795f90dd846b436148a779ef011ac9025efc4c60/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a8c5a5f3154af4957d73dbd9795f90dd846b436148a779ef011ac9025efc4c60/diff: no such file or directory, extraDiskErr: Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.218253 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.368606 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8-config-data-custom\") pod \"1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8\" (UID: \"1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8\") " Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.368807 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8-combined-ca-bundle\") pod \"1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8\" (UID: \"1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8\") " Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.368826 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8-scripts\") pod \"1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8\" (UID: \"1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8\") " Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.368859 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8-etc-machine-id\") pod \"1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8\" (UID: \"1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8\") " Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.368879 4826 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8-config-data\") pod \"1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8\" (UID: \"1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8\") " Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.368956 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcbp2\" (UniqueName: \"kubernetes.io/projected/1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8-kube-api-access-lcbp2\") pod \"1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8\" (UID: \"1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8\") " Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.368983 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8" (UID: "1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.369282 4826 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.388810 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8-scripts" (OuterVolumeSpecName: "scripts") pod "1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8" (UID: "1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.389356 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8-kube-api-access-lcbp2" (OuterVolumeSpecName: "kube-api-access-lcbp2") pod "1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8" (UID: "1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8"). InnerVolumeSpecName "kube-api-access-lcbp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.389379 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8" (UID: "1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.470883 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.470910 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcbp2\" (UniqueName: \"kubernetes.io/projected/1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8-kube-api-access-lcbp2\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.470935 4826 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.475504 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8-config-data" 
(OuterVolumeSpecName: "config-data") pod "1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8" (UID: "1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.490492 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8" (UID: "1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.572658 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.572683 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.660488 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8","Type":"ContainerDied","Data":"676f445d4436e3b457fb9015a404a823afe1115faf3e0044b9a386eae0c8ea0f"} Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.660545 4826 scope.go:117] "RemoveContainer" containerID="e8918c774c6ac07e9dbc0b59ee8d45ef4a3ff1352e2291e3505d79b3f39bdf4c" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.660544 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.694177 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.697608 4826 scope.go:117] "RemoveContainer" containerID="0a83eaefc6ec463e58f41c35181e65dbcfe6ffba20a2211848e4580d733d53d8" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.704561 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.726447 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 07:03:37 crc kubenswrapper[4826]: E0129 07:03:37.727018 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc" containerName="barbican-api-log" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.727036 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc" containerName="barbican-api-log" Jan 29 07:03:37 crc kubenswrapper[4826]: E0129 07:03:37.727057 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9a33d25-bc6c-4615-9997-2ce3b421499f" containerName="dnsmasq-dns" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.727064 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9a33d25-bc6c-4615-9997-2ce3b421499f" containerName="dnsmasq-dns" Jan 29 07:03:37 crc kubenswrapper[4826]: E0129 07:03:37.727074 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8" containerName="cinder-scheduler" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.727080 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8" containerName="cinder-scheduler" Jan 29 07:03:37 crc kubenswrapper[4826]: E0129 07:03:37.727092 4826 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="a9a33d25-bc6c-4615-9997-2ce3b421499f" containerName="init" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.727098 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9a33d25-bc6c-4615-9997-2ce3b421499f" containerName="init" Jan 29 07:03:37 crc kubenswrapper[4826]: E0129 07:03:37.727117 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8" containerName="probe" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.727122 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8" containerName="probe" Jan 29 07:03:37 crc kubenswrapper[4826]: E0129 07:03:37.727136 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc" containerName="barbican-api" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.727143 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc" containerName="barbican-api" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.727442 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8" containerName="cinder-scheduler" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.727456 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9a33d25-bc6c-4615-9997-2ce3b421499f" containerName="dnsmasq-dns" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.727469 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc" containerName="barbican-api-log" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.727491 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8" containerName="probe" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.727497 4826 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="496fc5ba-d9dd-4a85-a5a9-20ee6695a5cc" containerName="barbican-api" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.728314 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.731754 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.756742 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.878520 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea529cf3-184e-446a-9c6a-759cf1bab14c-scripts\") pod \"cinder-scheduler-0\" (UID: \"ea529cf3-184e-446a-9c6a-759cf1bab14c\") " pod="openstack/cinder-scheduler-0" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.878622 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ea529cf3-184e-446a-9c6a-759cf1bab14c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ea529cf3-184e-446a-9c6a-759cf1bab14c\") " pod="openstack/cinder-scheduler-0" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.878689 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea529cf3-184e-446a-9c6a-759cf1bab14c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ea529cf3-184e-446a-9c6a-759cf1bab14c\") " pod="openstack/cinder-scheduler-0" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.878746 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea529cf3-184e-446a-9c6a-759cf1bab14c-config-data\") pod 
\"cinder-scheduler-0\" (UID: \"ea529cf3-184e-446a-9c6a-759cf1bab14c\") " pod="openstack/cinder-scheduler-0" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.878868 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea529cf3-184e-446a-9c6a-759cf1bab14c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ea529cf3-184e-446a-9c6a-759cf1bab14c\") " pod="openstack/cinder-scheduler-0" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.878898 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbq9q\" (UniqueName: \"kubernetes.io/projected/ea529cf3-184e-446a-9c6a-759cf1bab14c-kube-api-access-pbq9q\") pod \"cinder-scheduler-0\" (UID: \"ea529cf3-184e-446a-9c6a-759cf1bab14c\") " pod="openstack/cinder-scheduler-0" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.982560 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea529cf3-184e-446a-9c6a-759cf1bab14c-scripts\") pod \"cinder-scheduler-0\" (UID: \"ea529cf3-184e-446a-9c6a-759cf1bab14c\") " pod="openstack/cinder-scheduler-0" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.982776 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ea529cf3-184e-446a-9c6a-759cf1bab14c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ea529cf3-184e-446a-9c6a-759cf1bab14c\") " pod="openstack/cinder-scheduler-0" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.982956 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea529cf3-184e-446a-9c6a-759cf1bab14c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ea529cf3-184e-446a-9c6a-759cf1bab14c\") " pod="openstack/cinder-scheduler-0" 
Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.983085 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea529cf3-184e-446a-9c6a-759cf1bab14c-config-data\") pod \"cinder-scheduler-0\" (UID: \"ea529cf3-184e-446a-9c6a-759cf1bab14c\") " pod="openstack/cinder-scheduler-0" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.983252 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea529cf3-184e-446a-9c6a-759cf1bab14c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ea529cf3-184e-446a-9c6a-759cf1bab14c\") " pod="openstack/cinder-scheduler-0" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.983372 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbq9q\" (UniqueName: \"kubernetes.io/projected/ea529cf3-184e-446a-9c6a-759cf1bab14c-kube-api-access-pbq9q\") pod \"cinder-scheduler-0\" (UID: \"ea529cf3-184e-446a-9c6a-759cf1bab14c\") " pod="openstack/cinder-scheduler-0" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.984479 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ea529cf3-184e-446a-9c6a-759cf1bab14c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ea529cf3-184e-446a-9c6a-759cf1bab14c\") " pod="openstack/cinder-scheduler-0" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.988908 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea529cf3-184e-446a-9c6a-759cf1bab14c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ea529cf3-184e-446a-9c6a-759cf1bab14c\") " pod="openstack/cinder-scheduler-0" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.989535 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/ea529cf3-184e-446a-9c6a-759cf1bab14c-scripts\") pod \"cinder-scheduler-0\" (UID: \"ea529cf3-184e-446a-9c6a-759cf1bab14c\") " pod="openstack/cinder-scheduler-0" Jan 29 07:03:37 crc kubenswrapper[4826]: I0129 07:03:37.996828 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea529cf3-184e-446a-9c6a-759cf1bab14c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ea529cf3-184e-446a-9c6a-759cf1bab14c\") " pod="openstack/cinder-scheduler-0" Jan 29 07:03:38 crc kubenswrapper[4826]: I0129 07:03:38.000432 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea529cf3-184e-446a-9c6a-759cf1bab14c-config-data\") pod \"cinder-scheduler-0\" (UID: \"ea529cf3-184e-446a-9c6a-759cf1bab14c\") " pod="openstack/cinder-scheduler-0" Jan 29 07:03:38 crc kubenswrapper[4826]: I0129 07:03:38.017573 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbq9q\" (UniqueName: \"kubernetes.io/projected/ea529cf3-184e-446a-9c6a-759cf1bab14c-kube-api-access-pbq9q\") pod \"cinder-scheduler-0\" (UID: \"ea529cf3-184e-446a-9c6a-759cf1bab14c\") " pod="openstack/cinder-scheduler-0" Jan 29 07:03:38 crc kubenswrapper[4826]: I0129 07:03:38.063863 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 07:03:38 crc kubenswrapper[4826]: I0129 07:03:38.554985 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 07:03:38 crc kubenswrapper[4826]: W0129 07:03:38.560675 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea529cf3_184e_446a_9c6a_759cf1bab14c.slice/crio-0ed54943a2f0d03803174d5aa0d9b5792403d7c5ac6c9867d99b149d33f7fb5b WatchSource:0}: Error finding container 0ed54943a2f0d03803174d5aa0d9b5792403d7c5ac6c9867d99b149d33f7fb5b: Status 404 returned error can't find the container with id 0ed54943a2f0d03803174d5aa0d9b5792403d7c5ac6c9867d99b149d33f7fb5b Jan 29 07:03:38 crc kubenswrapper[4826]: I0129 07:03:38.678885 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ea529cf3-184e-446a-9c6a-759cf1bab14c","Type":"ContainerStarted","Data":"0ed54943a2f0d03803174d5aa0d9b5792403d7c5ac6c9867d99b149d33f7fb5b"} Jan 29 07:03:38 crc kubenswrapper[4826]: I0129 07:03:38.829450 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8" path="/var/lib/kubelet/pods/1a281f08-83e6-478f-8c1b-3cc0c7bfd7f8/volumes" Jan 29 07:03:39 crc kubenswrapper[4826]: I0129 07:03:39.690211 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ea529cf3-184e-446a-9c6a-759cf1bab14c","Type":"ContainerStarted","Data":"da6172474a5804740243a88717e98452c0421876963c421e7935fba689bdc058"} Jan 29 07:03:39 crc kubenswrapper[4826]: I0129 07:03:39.770995 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 29 07:03:39 crc kubenswrapper[4826]: I0129 07:03:39.772627 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 29 07:03:39 crc kubenswrapper[4826]: I0129 07:03:39.782439 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 29 07:03:39 crc kubenswrapper[4826]: I0129 07:03:39.784990 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 29 07:03:39 crc kubenswrapper[4826]: I0129 07:03:39.785198 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-hdbq2" Jan 29 07:03:39 crc kubenswrapper[4826]: I0129 07:03:39.785598 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 29 07:03:39 crc kubenswrapper[4826]: I0129 07:03:39.921267 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsq28\" (UniqueName: \"kubernetes.io/projected/071ae6a1-9762-465d-8499-69ec962f08b7-kube-api-access-rsq28\") pod \"openstackclient\" (UID: \"071ae6a1-9762-465d-8499-69ec962f08b7\") " pod="openstack/openstackclient" Jan 29 07:03:39 crc kubenswrapper[4826]: I0129 07:03:39.921338 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/071ae6a1-9762-465d-8499-69ec962f08b7-openstack-config\") pod \"openstackclient\" (UID: \"071ae6a1-9762-465d-8499-69ec962f08b7\") " pod="openstack/openstackclient" Jan 29 07:03:39 crc kubenswrapper[4826]: I0129 07:03:39.921376 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/071ae6a1-9762-465d-8499-69ec962f08b7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"071ae6a1-9762-465d-8499-69ec962f08b7\") " pod="openstack/openstackclient" Jan 29 07:03:39 crc kubenswrapper[4826]: I0129 07:03:39.921404 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/071ae6a1-9762-465d-8499-69ec962f08b7-openstack-config-secret\") pod \"openstackclient\" (UID: \"071ae6a1-9762-465d-8499-69ec962f08b7\") " pod="openstack/openstackclient" Jan 29 07:03:40 crc kubenswrapper[4826]: I0129 07:03:40.023125 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsq28\" (UniqueName: \"kubernetes.io/projected/071ae6a1-9762-465d-8499-69ec962f08b7-kube-api-access-rsq28\") pod \"openstackclient\" (UID: \"071ae6a1-9762-465d-8499-69ec962f08b7\") " pod="openstack/openstackclient" Jan 29 07:03:40 crc kubenswrapper[4826]: I0129 07:03:40.023179 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/071ae6a1-9762-465d-8499-69ec962f08b7-openstack-config\") pod \"openstackclient\" (UID: \"071ae6a1-9762-465d-8499-69ec962f08b7\") " pod="openstack/openstackclient" Jan 29 07:03:40 crc kubenswrapper[4826]: I0129 07:03:40.023217 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/071ae6a1-9762-465d-8499-69ec962f08b7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"071ae6a1-9762-465d-8499-69ec962f08b7\") " pod="openstack/openstackclient" Jan 29 07:03:40 crc kubenswrapper[4826]: I0129 07:03:40.023251 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/071ae6a1-9762-465d-8499-69ec962f08b7-openstack-config-secret\") pod \"openstackclient\" (UID: \"071ae6a1-9762-465d-8499-69ec962f08b7\") " pod="openstack/openstackclient" Jan 29 07:03:40 crc kubenswrapper[4826]: I0129 07:03:40.024934 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/071ae6a1-9762-465d-8499-69ec962f08b7-openstack-config\") pod \"openstackclient\" (UID: \"071ae6a1-9762-465d-8499-69ec962f08b7\") " pod="openstack/openstackclient" Jan 29 07:03:40 crc kubenswrapper[4826]: I0129 07:03:40.028517 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/071ae6a1-9762-465d-8499-69ec962f08b7-openstack-config-secret\") pod \"openstackclient\" (UID: \"071ae6a1-9762-465d-8499-69ec962f08b7\") " pod="openstack/openstackclient" Jan 29 07:03:40 crc kubenswrapper[4826]: I0129 07:03:40.029170 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/071ae6a1-9762-465d-8499-69ec962f08b7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"071ae6a1-9762-465d-8499-69ec962f08b7\") " pod="openstack/openstackclient" Jan 29 07:03:40 crc kubenswrapper[4826]: I0129 07:03:40.048041 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsq28\" (UniqueName: \"kubernetes.io/projected/071ae6a1-9762-465d-8499-69ec962f08b7-kube-api-access-rsq28\") pod \"openstackclient\" (UID: \"071ae6a1-9762-465d-8499-69ec962f08b7\") " pod="openstack/openstackclient" Jan 29 07:03:40 crc kubenswrapper[4826]: I0129 07:03:40.089959 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 29 07:03:40 crc kubenswrapper[4826]: I0129 07:03:40.108626 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 29 07:03:40 crc kubenswrapper[4826]: I0129 07:03:40.637719 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 29 07:03:40 crc kubenswrapper[4826]: W0129 07:03:40.652667 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod071ae6a1_9762_465d_8499_69ec962f08b7.slice/crio-9578b2af5d81ce695f0cdf6793a3bf227894dbf70d282756866fbd43b551d114 WatchSource:0}: Error finding container 9578b2af5d81ce695f0cdf6793a3bf227894dbf70d282756866fbd43b551d114: Status 404 returned error can't find the container with id 9578b2af5d81ce695f0cdf6793a3bf227894dbf70d282756866fbd43b551d114 Jan 29 07:03:40 crc kubenswrapper[4826]: I0129 07:03:40.701765 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"071ae6a1-9762-465d-8499-69ec962f08b7","Type":"ContainerStarted","Data":"9578b2af5d81ce695f0cdf6793a3bf227894dbf70d282756866fbd43b551d114"} Jan 29 07:03:40 crc kubenswrapper[4826]: I0129 07:03:40.705137 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ea529cf3-184e-446a-9c6a-759cf1bab14c","Type":"ContainerStarted","Data":"16b794547221f9aaaeba5d721b9f980ce5d5698a660a295e46a0036d996a08a9"} Jan 29 07:03:40 crc kubenswrapper[4826]: I0129 07:03:40.724995 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.724975309 podStartE2EDuration="3.724975309s" podCreationTimestamp="2026-01-29 07:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:03:40.722671019 +0000 UTC m=+1204.584464088" watchObservedRunningTime="2026-01-29 07:03:40.724975309 +0000 UTC m=+1204.586768378" Jan 29 07:03:40 crc kubenswrapper[4826]: I0129 07:03:40.742872 4826 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7c8d5fc944-9m8wp" Jan 29 07:03:41 crc kubenswrapper[4826]: I0129 07:03:41.703182 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7c8d5fc944-9m8wp" Jan 29 07:03:41 crc kubenswrapper[4826]: I0129 07:03:41.720793 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-589db6d64d-xbt79" Jan 29 07:03:43 crc kubenswrapper[4826]: I0129 07:03:43.064782 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 29 07:03:44 crc kubenswrapper[4826]: I0129 07:03:44.539737 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-55957b69d9-prlpm"] Jan 29 07:03:44 crc kubenswrapper[4826]: I0129 07:03:44.541416 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-55957b69d9-prlpm" Jan 29 07:03:44 crc kubenswrapper[4826]: I0129 07:03:44.543527 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 29 07:03:44 crc kubenswrapper[4826]: I0129 07:03:44.543755 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 29 07:03:44 crc kubenswrapper[4826]: I0129 07:03:44.543871 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 29 07:03:44 crc kubenswrapper[4826]: I0129 07:03:44.549237 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-55957b69d9-prlpm"] Jan 29 07:03:44 crc kubenswrapper[4826]: I0129 07:03:44.638012 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm8sf\" (UniqueName: \"kubernetes.io/projected/b0f5ad8c-072d-4994-acb3-e898c0981eef-kube-api-access-hm8sf\") pod \"swift-proxy-55957b69d9-prlpm\" (UID: 
\"b0f5ad8c-072d-4994-acb3-e898c0981eef\") " pod="openstack/swift-proxy-55957b69d9-prlpm" Jan 29 07:03:44 crc kubenswrapper[4826]: I0129 07:03:44.638092 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0f5ad8c-072d-4994-acb3-e898c0981eef-log-httpd\") pod \"swift-proxy-55957b69d9-prlpm\" (UID: \"b0f5ad8c-072d-4994-acb3-e898c0981eef\") " pod="openstack/swift-proxy-55957b69d9-prlpm" Jan 29 07:03:44 crc kubenswrapper[4826]: I0129 07:03:44.638292 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0f5ad8c-072d-4994-acb3-e898c0981eef-run-httpd\") pod \"swift-proxy-55957b69d9-prlpm\" (UID: \"b0f5ad8c-072d-4994-acb3-e898c0981eef\") " pod="openstack/swift-proxy-55957b69d9-prlpm" Jan 29 07:03:44 crc kubenswrapper[4826]: I0129 07:03:44.638355 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0f5ad8c-072d-4994-acb3-e898c0981eef-config-data\") pod \"swift-proxy-55957b69d9-prlpm\" (UID: \"b0f5ad8c-072d-4994-acb3-e898c0981eef\") " pod="openstack/swift-proxy-55957b69d9-prlpm" Jan 29 07:03:44 crc kubenswrapper[4826]: I0129 07:03:44.638465 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0f5ad8c-072d-4994-acb3-e898c0981eef-public-tls-certs\") pod \"swift-proxy-55957b69d9-prlpm\" (UID: \"b0f5ad8c-072d-4994-acb3-e898c0981eef\") " pod="openstack/swift-proxy-55957b69d9-prlpm" Jan 29 07:03:44 crc kubenswrapper[4826]: I0129 07:03:44.638497 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b0f5ad8c-072d-4994-acb3-e898c0981eef-etc-swift\") pod \"swift-proxy-55957b69d9-prlpm\" 
(UID: \"b0f5ad8c-072d-4994-acb3-e898c0981eef\") " pod="openstack/swift-proxy-55957b69d9-prlpm" Jan 29 07:03:44 crc kubenswrapper[4826]: I0129 07:03:44.638522 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0f5ad8c-072d-4994-acb3-e898c0981eef-internal-tls-certs\") pod \"swift-proxy-55957b69d9-prlpm\" (UID: \"b0f5ad8c-072d-4994-acb3-e898c0981eef\") " pod="openstack/swift-proxy-55957b69d9-prlpm" Jan 29 07:03:44 crc kubenswrapper[4826]: I0129 07:03:44.638627 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0f5ad8c-072d-4994-acb3-e898c0981eef-combined-ca-bundle\") pod \"swift-proxy-55957b69d9-prlpm\" (UID: \"b0f5ad8c-072d-4994-acb3-e898c0981eef\") " pod="openstack/swift-proxy-55957b69d9-prlpm" Jan 29 07:03:44 crc kubenswrapper[4826]: I0129 07:03:44.740076 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0f5ad8c-072d-4994-acb3-e898c0981eef-combined-ca-bundle\") pod \"swift-proxy-55957b69d9-prlpm\" (UID: \"b0f5ad8c-072d-4994-acb3-e898c0981eef\") " pod="openstack/swift-proxy-55957b69d9-prlpm" Jan 29 07:03:44 crc kubenswrapper[4826]: I0129 07:03:44.740131 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm8sf\" (UniqueName: \"kubernetes.io/projected/b0f5ad8c-072d-4994-acb3-e898c0981eef-kube-api-access-hm8sf\") pod \"swift-proxy-55957b69d9-prlpm\" (UID: \"b0f5ad8c-072d-4994-acb3-e898c0981eef\") " pod="openstack/swift-proxy-55957b69d9-prlpm" Jan 29 07:03:44 crc kubenswrapper[4826]: I0129 07:03:44.740203 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0f5ad8c-072d-4994-acb3-e898c0981eef-log-httpd\") pod 
\"swift-proxy-55957b69d9-prlpm\" (UID: \"b0f5ad8c-072d-4994-acb3-e898c0981eef\") " pod="openstack/swift-proxy-55957b69d9-prlpm" Jan 29 07:03:44 crc kubenswrapper[4826]: I0129 07:03:44.740272 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0f5ad8c-072d-4994-acb3-e898c0981eef-run-httpd\") pod \"swift-proxy-55957b69d9-prlpm\" (UID: \"b0f5ad8c-072d-4994-acb3-e898c0981eef\") " pod="openstack/swift-proxy-55957b69d9-prlpm" Jan 29 07:03:44 crc kubenswrapper[4826]: I0129 07:03:44.740322 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0f5ad8c-072d-4994-acb3-e898c0981eef-config-data\") pod \"swift-proxy-55957b69d9-prlpm\" (UID: \"b0f5ad8c-072d-4994-acb3-e898c0981eef\") " pod="openstack/swift-proxy-55957b69d9-prlpm" Jan 29 07:03:44 crc kubenswrapper[4826]: I0129 07:03:44.740380 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0f5ad8c-072d-4994-acb3-e898c0981eef-public-tls-certs\") pod \"swift-proxy-55957b69d9-prlpm\" (UID: \"b0f5ad8c-072d-4994-acb3-e898c0981eef\") " pod="openstack/swift-proxy-55957b69d9-prlpm" Jan 29 07:03:44 crc kubenswrapper[4826]: I0129 07:03:44.740409 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b0f5ad8c-072d-4994-acb3-e898c0981eef-etc-swift\") pod \"swift-proxy-55957b69d9-prlpm\" (UID: \"b0f5ad8c-072d-4994-acb3-e898c0981eef\") " pod="openstack/swift-proxy-55957b69d9-prlpm" Jan 29 07:03:44 crc kubenswrapper[4826]: I0129 07:03:44.740440 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0f5ad8c-072d-4994-acb3-e898c0981eef-internal-tls-certs\") pod \"swift-proxy-55957b69d9-prlpm\" (UID: 
\"b0f5ad8c-072d-4994-acb3-e898c0981eef\") " pod="openstack/swift-proxy-55957b69d9-prlpm" Jan 29 07:03:44 crc kubenswrapper[4826]: I0129 07:03:44.740768 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0f5ad8c-072d-4994-acb3-e898c0981eef-log-httpd\") pod \"swift-proxy-55957b69d9-prlpm\" (UID: \"b0f5ad8c-072d-4994-acb3-e898c0981eef\") " pod="openstack/swift-proxy-55957b69d9-prlpm" Jan 29 07:03:44 crc kubenswrapper[4826]: I0129 07:03:44.740917 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0f5ad8c-072d-4994-acb3-e898c0981eef-run-httpd\") pod \"swift-proxy-55957b69d9-prlpm\" (UID: \"b0f5ad8c-072d-4994-acb3-e898c0981eef\") " pod="openstack/swift-proxy-55957b69d9-prlpm" Jan 29 07:03:44 crc kubenswrapper[4826]: I0129 07:03:44.747322 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0f5ad8c-072d-4994-acb3-e898c0981eef-config-data\") pod \"swift-proxy-55957b69d9-prlpm\" (UID: \"b0f5ad8c-072d-4994-acb3-e898c0981eef\") " pod="openstack/swift-proxy-55957b69d9-prlpm" Jan 29 07:03:44 crc kubenswrapper[4826]: I0129 07:03:44.748451 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b0f5ad8c-072d-4994-acb3-e898c0981eef-etc-swift\") pod \"swift-proxy-55957b69d9-prlpm\" (UID: \"b0f5ad8c-072d-4994-acb3-e898c0981eef\") " pod="openstack/swift-proxy-55957b69d9-prlpm" Jan 29 07:03:44 crc kubenswrapper[4826]: I0129 07:03:44.750880 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0f5ad8c-072d-4994-acb3-e898c0981eef-internal-tls-certs\") pod \"swift-proxy-55957b69d9-prlpm\" (UID: \"b0f5ad8c-072d-4994-acb3-e898c0981eef\") " pod="openstack/swift-proxy-55957b69d9-prlpm" Jan 29 07:03:44 crc 
kubenswrapper[4826]: I0129 07:03:44.751873 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0f5ad8c-072d-4994-acb3-e898c0981eef-public-tls-certs\") pod \"swift-proxy-55957b69d9-prlpm\" (UID: \"b0f5ad8c-072d-4994-acb3-e898c0981eef\") " pod="openstack/swift-proxy-55957b69d9-prlpm" Jan 29 07:03:44 crc kubenswrapper[4826]: I0129 07:03:44.763006 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0f5ad8c-072d-4994-acb3-e898c0981eef-combined-ca-bundle\") pod \"swift-proxy-55957b69d9-prlpm\" (UID: \"b0f5ad8c-072d-4994-acb3-e898c0981eef\") " pod="openstack/swift-proxy-55957b69d9-prlpm" Jan 29 07:03:44 crc kubenswrapper[4826]: I0129 07:03:44.763403 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm8sf\" (UniqueName: \"kubernetes.io/projected/b0f5ad8c-072d-4994-acb3-e898c0981eef-kube-api-access-hm8sf\") pod \"swift-proxy-55957b69d9-prlpm\" (UID: \"b0f5ad8c-072d-4994-acb3-e898c0981eef\") " pod="openstack/swift-proxy-55957b69d9-prlpm" Jan 29 07:03:44 crc kubenswrapper[4826]: I0129 07:03:44.926927 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-55957b69d9-prlpm" Jan 29 07:03:45 crc kubenswrapper[4826]: I0129 07:03:45.293237 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:03:45 crc kubenswrapper[4826]: I0129 07:03:45.293528 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa3e839c-aa4e-42a3-9c30-5bba595f1aad" containerName="ceilometer-central-agent" containerID="cri-o://534570455bc7f28c999f94982da918b4d6103cc6c3fff68a2a134c4ea2d71822" gracePeriod=30 Jan 29 07:03:45 crc kubenswrapper[4826]: I0129 07:03:45.293619 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa3e839c-aa4e-42a3-9c30-5bba595f1aad" containerName="sg-core" containerID="cri-o://9340955341f541231fbfa0a1c462ae086c08549c43bbb99f49dc523952932dc4" gracePeriod=30 Jan 29 07:03:45 crc kubenswrapper[4826]: I0129 07:03:45.293683 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa3e839c-aa4e-42a3-9c30-5bba595f1aad" containerName="proxy-httpd" containerID="cri-o://0757037f51b34dd43829be352006b82b17f47465b5d4fa6ebf752641b255056b" gracePeriod=30 Jan 29 07:03:45 crc kubenswrapper[4826]: I0129 07:03:45.293623 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa3e839c-aa4e-42a3-9c30-5bba595f1aad" containerName="ceilometer-notification-agent" containerID="cri-o://f30f857579c6e8464fe523290c094e1e775babc999174c164bf9a985768dff66" gracePeriod=30 Jan 29 07:03:45 crc kubenswrapper[4826]: I0129 07:03:45.308129 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="aa3e839c-aa4e-42a3-9c30-5bba595f1aad" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.163:3000/\": EOF" Jan 29 07:03:45 crc kubenswrapper[4826]: I0129 07:03:45.791495 4826 generic.go:334] 
"Generic (PLEG): container finished" podID="aa3e839c-aa4e-42a3-9c30-5bba595f1aad" containerID="0757037f51b34dd43829be352006b82b17f47465b5d4fa6ebf752641b255056b" exitCode=0 Jan 29 07:03:45 crc kubenswrapper[4826]: I0129 07:03:45.791527 4826 generic.go:334] "Generic (PLEG): container finished" podID="aa3e839c-aa4e-42a3-9c30-5bba595f1aad" containerID="9340955341f541231fbfa0a1c462ae086c08549c43bbb99f49dc523952932dc4" exitCode=2 Jan 29 07:03:45 crc kubenswrapper[4826]: I0129 07:03:45.791536 4826 generic.go:334] "Generic (PLEG): container finished" podID="aa3e839c-aa4e-42a3-9c30-5bba595f1aad" containerID="534570455bc7f28c999f94982da918b4d6103cc6c3fff68a2a134c4ea2d71822" exitCode=0 Jan 29 07:03:45 crc kubenswrapper[4826]: I0129 07:03:45.791557 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa3e839c-aa4e-42a3-9c30-5bba595f1aad","Type":"ContainerDied","Data":"0757037f51b34dd43829be352006b82b17f47465b5d4fa6ebf752641b255056b"} Jan 29 07:03:45 crc kubenswrapper[4826]: I0129 07:03:45.791585 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa3e839c-aa4e-42a3-9c30-5bba595f1aad","Type":"ContainerDied","Data":"9340955341f541231fbfa0a1c462ae086c08549c43bbb99f49dc523952932dc4"} Jan 29 07:03:45 crc kubenswrapper[4826]: I0129 07:03:45.791596 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa3e839c-aa4e-42a3-9c30-5bba595f1aad","Type":"ContainerDied","Data":"534570455bc7f28c999f94982da918b4d6103cc6c3fff68a2a134c4ea2d71822"} Jan 29 07:03:45 crc kubenswrapper[4826]: I0129 07:03:45.958066 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-669bb5748f-zjsxt" Jan 29 07:03:46 crc kubenswrapper[4826]: I0129 07:03:46.019921 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-589db6d64d-xbt79"] Jan 29 07:03:46 crc kubenswrapper[4826]: I0129 07:03:46.020183 4826 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-589db6d64d-xbt79" podUID="7fadc18f-3e45-4d36-ae10-f4ee19d24cf8" containerName="neutron-api" containerID="cri-o://caaa9065a7eec3d3207f7501f03a024e35fc55d3a277d53adee97f01fb2a3514" gracePeriod=30 Jan 29 07:03:46 crc kubenswrapper[4826]: I0129 07:03:46.020618 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-589db6d64d-xbt79" podUID="7fadc18f-3e45-4d36-ae10-f4ee19d24cf8" containerName="neutron-httpd" containerID="cri-o://d01e3c5369a5de9d7a1e41286fbd0292a5532a8b46a78005b8868dce2183bf7e" gracePeriod=30 Jan 29 07:03:46 crc kubenswrapper[4826]: I0129 07:03:46.800608 4826 generic.go:334] "Generic (PLEG): container finished" podID="7fadc18f-3e45-4d36-ae10-f4ee19d24cf8" containerID="d01e3c5369a5de9d7a1e41286fbd0292a5532a8b46a78005b8868dce2183bf7e" exitCode=0 Jan 29 07:03:46 crc kubenswrapper[4826]: I0129 07:03:46.800684 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-589db6d64d-xbt79" event={"ID":"7fadc18f-3e45-4d36-ae10-f4ee19d24cf8","Type":"ContainerDied","Data":"d01e3c5369a5de9d7a1e41286fbd0292a5532a8b46a78005b8868dce2183bf7e"} Jan 29 07:03:48 crc kubenswrapper[4826]: I0129 07:03:48.313220 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 29 07:03:49 crc kubenswrapper[4826]: I0129 07:03:49.796715 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 07:03:49 crc kubenswrapper[4826]: I0129 07:03:49.797248 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="017465e9-fb85-458e-8eca-109192bf47e7" containerName="glance-log" containerID="cri-o://02fd3aeeeceec4c65c45b602c7ab22ebc9bd69ec465aea4e012b65c127ca4a4a" gracePeriod=30 Jan 29 07:03:49 crc kubenswrapper[4826]: I0129 07:03:49.797438 4826 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="017465e9-fb85-458e-8eca-109192bf47e7" containerName="glance-httpd" containerID="cri-o://41219edc8dc3d793f63b17ea4b062b042171564ccc15cff867a70046f0e09625" gracePeriod=30 Jan 29 07:03:49 crc kubenswrapper[4826]: I0129 07:03:49.841661 4826 generic.go:334] "Generic (PLEG): container finished" podID="aa3e839c-aa4e-42a3-9c30-5bba595f1aad" containerID="f30f857579c6e8464fe523290c094e1e775babc999174c164bf9a985768dff66" exitCode=0 Jan 29 07:03:49 crc kubenswrapper[4826]: I0129 07:03:49.841719 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa3e839c-aa4e-42a3-9c30-5bba595f1aad","Type":"ContainerDied","Data":"f30f857579c6e8464fe523290c094e1e775babc999174c164bf9a985768dff66"} Jan 29 07:03:49 crc kubenswrapper[4826]: I0129 07:03:49.847058 4826 generic.go:334] "Generic (PLEG): container finished" podID="7fadc18f-3e45-4d36-ae10-f4ee19d24cf8" containerID="caaa9065a7eec3d3207f7501f03a024e35fc55d3a277d53adee97f01fb2a3514" exitCode=0 Jan 29 07:03:49 crc kubenswrapper[4826]: I0129 07:03:49.847157 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-589db6d64d-xbt79" event={"ID":"7fadc18f-3e45-4d36-ae10-f4ee19d24cf8","Type":"ContainerDied","Data":"caaa9065a7eec3d3207f7501f03a024e35fc55d3a277d53adee97f01fb2a3514"} Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.441345 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.486794 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-run-httpd\") pod \"aa3e839c-aa4e-42a3-9c30-5bba595f1aad\" (UID: \"aa3e839c-aa4e-42a3-9c30-5bba595f1aad\") " Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.486884 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-config-data\") pod \"aa3e839c-aa4e-42a3-9c30-5bba595f1aad\" (UID: \"aa3e839c-aa4e-42a3-9c30-5bba595f1aad\") " Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.486918 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-combined-ca-bundle\") pod \"aa3e839c-aa4e-42a3-9c30-5bba595f1aad\" (UID: \"aa3e839c-aa4e-42a3-9c30-5bba595f1aad\") " Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.486955 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-scripts\") pod \"aa3e839c-aa4e-42a3-9c30-5bba595f1aad\" (UID: \"aa3e839c-aa4e-42a3-9c30-5bba595f1aad\") " Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.486991 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-sg-core-conf-yaml\") pod \"aa3e839c-aa4e-42a3-9c30-5bba595f1aad\" (UID: \"aa3e839c-aa4e-42a3-9c30-5bba595f1aad\") " Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.487096 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-log-httpd\") pod \"aa3e839c-aa4e-42a3-9c30-5bba595f1aad\" (UID: \"aa3e839c-aa4e-42a3-9c30-5bba595f1aad\") " Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.487124 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bq85p\" (UniqueName: \"kubernetes.io/projected/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-kube-api-access-bq85p\") pod \"aa3e839c-aa4e-42a3-9c30-5bba595f1aad\" (UID: \"aa3e839c-aa4e-42a3-9c30-5bba595f1aad\") " Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.488289 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "aa3e839c-aa4e-42a3-9c30-5bba595f1aad" (UID: "aa3e839c-aa4e-42a3-9c30-5bba595f1aad"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.488715 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "aa3e839c-aa4e-42a3-9c30-5bba595f1aad" (UID: "aa3e839c-aa4e-42a3-9c30-5bba595f1aad"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.493044 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-scripts" (OuterVolumeSpecName: "scripts") pod "aa3e839c-aa4e-42a3-9c30-5bba595f1aad" (UID: "aa3e839c-aa4e-42a3-9c30-5bba595f1aad"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.493525 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-kube-api-access-bq85p" (OuterVolumeSpecName: "kube-api-access-bq85p") pod "aa3e839c-aa4e-42a3-9c30-5bba595f1aad" (UID: "aa3e839c-aa4e-42a3-9c30-5bba595f1aad"). InnerVolumeSpecName "kube-api-access-bq85p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.511908 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-589db6d64d-xbt79" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.557195 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "aa3e839c-aa4e-42a3-9c30-5bba595f1aad" (UID: "aa3e839c-aa4e-42a3-9c30-5bba595f1aad"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.575254 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa3e839c-aa4e-42a3-9c30-5bba595f1aad" (UID: "aa3e839c-aa4e-42a3-9c30-5bba595f1aad"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.588194 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7fadc18f-3e45-4d36-ae10-f4ee19d24cf8-httpd-config\") pod \"7fadc18f-3e45-4d36-ae10-f4ee19d24cf8\" (UID: \"7fadc18f-3e45-4d36-ae10-f4ee19d24cf8\") " Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.588240 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fadc18f-3e45-4d36-ae10-f4ee19d24cf8-combined-ca-bundle\") pod \"7fadc18f-3e45-4d36-ae10-f4ee19d24cf8\" (UID: \"7fadc18f-3e45-4d36-ae10-f4ee19d24cf8\") " Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.588469 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fadc18f-3e45-4d36-ae10-f4ee19d24cf8-ovndb-tls-certs\") pod \"7fadc18f-3e45-4d36-ae10-f4ee19d24cf8\" (UID: \"7fadc18f-3e45-4d36-ae10-f4ee19d24cf8\") " Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.588584 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7fadc18f-3e45-4d36-ae10-f4ee19d24cf8-config\") pod \"7fadc18f-3e45-4d36-ae10-f4ee19d24cf8\" (UID: \"7fadc18f-3e45-4d36-ae10-f4ee19d24cf8\") " Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.588659 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfl56\" (UniqueName: \"kubernetes.io/projected/7fadc18f-3e45-4d36-ae10-f4ee19d24cf8-kube-api-access-kfl56\") pod \"7fadc18f-3e45-4d36-ae10-f4ee19d24cf8\" (UID: \"7fadc18f-3e45-4d36-ae10-f4ee19d24cf8\") " Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.589242 4826 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.589259 4826 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.589268 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bq85p\" (UniqueName: \"kubernetes.io/projected/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-kube-api-access-bq85p\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.589277 4826 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.589286 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.589309 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.594443 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fadc18f-3e45-4d36-ae10-f4ee19d24cf8-kube-api-access-kfl56" (OuterVolumeSpecName: "kube-api-access-kfl56") pod "7fadc18f-3e45-4d36-ae10-f4ee19d24cf8" (UID: "7fadc18f-3e45-4d36-ae10-f4ee19d24cf8"). InnerVolumeSpecName "kube-api-access-kfl56". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.594646 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fadc18f-3e45-4d36-ae10-f4ee19d24cf8-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "7fadc18f-3e45-4d36-ae10-f4ee19d24cf8" (UID: "7fadc18f-3e45-4d36-ae10-f4ee19d24cf8"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.661044 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-config-data" (OuterVolumeSpecName: "config-data") pod "aa3e839c-aa4e-42a3-9c30-5bba595f1aad" (UID: "aa3e839c-aa4e-42a3-9c30-5bba595f1aad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.668921 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fadc18f-3e45-4d36-ae10-f4ee19d24cf8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7fadc18f-3e45-4d36-ae10-f4ee19d24cf8" (UID: "7fadc18f-3e45-4d36-ae10-f4ee19d24cf8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.683595 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fadc18f-3e45-4d36-ae10-f4ee19d24cf8-config" (OuterVolumeSpecName: "config") pod "7fadc18f-3e45-4d36-ae10-f4ee19d24cf8" (UID: "7fadc18f-3e45-4d36-ae10-f4ee19d24cf8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.690250 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/7fadc18f-3e45-4d36-ae10-f4ee19d24cf8-config\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.690278 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfl56\" (UniqueName: \"kubernetes.io/projected/7fadc18f-3e45-4d36-ae10-f4ee19d24cf8-kube-api-access-kfl56\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.690290 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa3e839c-aa4e-42a3-9c30-5bba595f1aad-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.690313 4826 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7fadc18f-3e45-4d36-ae10-f4ee19d24cf8-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.690324 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fadc18f-3e45-4d36-ae10-f4ee19d24cf8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.717309 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.717537 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="cbf55335-feed-4467-9375-9543d111bc55" containerName="glance-log" containerID="cri-o://284517534f5a17580a2a91b0b750cf4fe7d6b57ad048cb55c376801303f8f9ff" gracePeriod=30 Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.717679 4826 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="cbf55335-feed-4467-9375-9543d111bc55" containerName="glance-httpd" containerID="cri-o://e705f84552b82451981b2667b42d642a119793c5b4249ad22f285c72e2bd3759" gracePeriod=30 Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.718569 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fadc18f-3e45-4d36-ae10-f4ee19d24cf8-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "7fadc18f-3e45-4d36-ae10-f4ee19d24cf8" (UID: "7fadc18f-3e45-4d36-ae10-f4ee19d24cf8"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.779170 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-55957b69d9-prlpm"] Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.791033 4826 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fadc18f-3e45-4d36-ae10-f4ee19d24cf8-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.866956 4826 generic.go:334] "Generic (PLEG): container finished" podID="017465e9-fb85-458e-8eca-109192bf47e7" containerID="02fd3aeeeceec4c65c45b602c7ab22ebc9bd69ec465aea4e012b65c127ca4a4a" exitCode=143 Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.867056 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"017465e9-fb85-458e-8eca-109192bf47e7","Type":"ContainerDied","Data":"02fd3aeeeceec4c65c45b602c7ab22ebc9bd69ec465aea4e012b65c127ca4a4a"} Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.870816 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa3e839c-aa4e-42a3-9c30-5bba595f1aad","Type":"ContainerDied","Data":"8042992323eda57480d36e08b65f8f985ed8b6260ceadb0cb5b70bb868fa7788"} 
Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.870850 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.870948 4826 scope.go:117] "RemoveContainer" containerID="0757037f51b34dd43829be352006b82b17f47465b5d4fa6ebf752641b255056b" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.873866 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-589db6d64d-xbt79" event={"ID":"7fadc18f-3e45-4d36-ae10-f4ee19d24cf8","Type":"ContainerDied","Data":"f35795da78d4628e02199d662b2f71badcd8a0ea9936ea558515b78be502314d"} Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.873874 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-589db6d64d-xbt79" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.876005 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-55957b69d9-prlpm" event={"ID":"b0f5ad8c-072d-4994-acb3-e898c0981eef","Type":"ContainerStarted","Data":"f9b7bf9412422f2b31d3bfeb5e4e73dc504a7b77c6e6f1d010e49868c7b50853"} Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.879553 4826 generic.go:334] "Generic (PLEG): container finished" podID="cbf55335-feed-4467-9375-9543d111bc55" containerID="284517534f5a17580a2a91b0b750cf4fe7d6b57ad048cb55c376801303f8f9ff" exitCode=143 Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.879609 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cbf55335-feed-4467-9375-9543d111bc55","Type":"ContainerDied","Data":"284517534f5a17580a2a91b0b750cf4fe7d6b57ad048cb55c376801303f8f9ff"} Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.882902 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" 
event={"ID":"071ae6a1-9762-465d-8499-69ec962f08b7","Type":"ContainerStarted","Data":"65213368cc65f805bf38badc08c83a4c18280b76d4a7b5a339c6c9e7d8454ef3"} Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.902398 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.911046 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.914045 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.479517841 podStartE2EDuration="11.914026454s" podCreationTimestamp="2026-01-29 07:03:39 +0000 UTC" firstStartedPulling="2026-01-29 07:03:40.655192928 +0000 UTC m=+1204.516985997" lastFinishedPulling="2026-01-29 07:03:50.089701541 +0000 UTC m=+1213.951494610" observedRunningTime="2026-01-29 07:03:50.907656199 +0000 UTC m=+1214.769449268" watchObservedRunningTime="2026-01-29 07:03:50.914026454 +0000 UTC m=+1214.775819523" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.923548 4826 scope.go:117] "RemoveContainer" containerID="9340955341f541231fbfa0a1c462ae086c08549c43bbb99f49dc523952932dc4" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.928810 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:03:50 crc kubenswrapper[4826]: E0129 07:03:50.930643 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fadc18f-3e45-4d36-ae10-f4ee19d24cf8" containerName="neutron-httpd" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.930722 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fadc18f-3e45-4d36-ae10-f4ee19d24cf8" containerName="neutron-httpd" Jan 29 07:03:50 crc kubenswrapper[4826]: E0129 07:03:50.930779 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa3e839c-aa4e-42a3-9c30-5bba595f1aad" containerName="ceilometer-notification-agent" 
Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.930829 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa3e839c-aa4e-42a3-9c30-5bba595f1aad" containerName="ceilometer-notification-agent" Jan 29 07:03:50 crc kubenswrapper[4826]: E0129 07:03:50.930898 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa3e839c-aa4e-42a3-9c30-5bba595f1aad" containerName="ceilometer-central-agent" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.930950 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa3e839c-aa4e-42a3-9c30-5bba595f1aad" containerName="ceilometer-central-agent" Jan 29 07:03:50 crc kubenswrapper[4826]: E0129 07:03:50.931023 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa3e839c-aa4e-42a3-9c30-5bba595f1aad" containerName="proxy-httpd" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.931081 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa3e839c-aa4e-42a3-9c30-5bba595f1aad" containerName="proxy-httpd" Jan 29 07:03:50 crc kubenswrapper[4826]: E0129 07:03:50.931144 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa3e839c-aa4e-42a3-9c30-5bba595f1aad" containerName="sg-core" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.931194 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa3e839c-aa4e-42a3-9c30-5bba595f1aad" containerName="sg-core" Jan 29 07:03:50 crc kubenswrapper[4826]: E0129 07:03:50.931359 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fadc18f-3e45-4d36-ae10-f4ee19d24cf8" containerName="neutron-api" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.931432 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fadc18f-3e45-4d36-ae10-f4ee19d24cf8" containerName="neutron-api" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.931708 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa3e839c-aa4e-42a3-9c30-5bba595f1aad" containerName="proxy-httpd" Jan 29 07:03:50 crc 
kubenswrapper[4826]: I0129 07:03:50.931781 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa3e839c-aa4e-42a3-9c30-5bba595f1aad" containerName="ceilometer-notification-agent" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.931848 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa3e839c-aa4e-42a3-9c30-5bba595f1aad" containerName="sg-core" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.931900 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fadc18f-3e45-4d36-ae10-f4ee19d24cf8" containerName="neutron-api" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.931960 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa3e839c-aa4e-42a3-9c30-5bba595f1aad" containerName="ceilometer-central-agent" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.932069 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fadc18f-3e45-4d36-ae10-f4ee19d24cf8" containerName="neutron-httpd" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.935803 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.936116 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.938382 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.959107 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-589db6d64d-xbt79"] Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.969050 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.976631 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-589db6d64d-xbt79"] Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.996402 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4426z\" (UniqueName: \"kubernetes.io/projected/949145dc-8a4e-4715-bf58-22395853ed14-kube-api-access-4426z\") pod \"ceilometer-0\" (UID: \"949145dc-8a4e-4715-bf58-22395853ed14\") " pod="openstack/ceilometer-0" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.996450 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/949145dc-8a4e-4715-bf58-22395853ed14-config-data\") pod \"ceilometer-0\" (UID: \"949145dc-8a4e-4715-bf58-22395853ed14\") " pod="openstack/ceilometer-0" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.996472 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/949145dc-8a4e-4715-bf58-22395853ed14-run-httpd\") pod \"ceilometer-0\" (UID: \"949145dc-8a4e-4715-bf58-22395853ed14\") " pod="openstack/ceilometer-0" Jan 29 07:03:50 crc 
kubenswrapper[4826]: I0129 07:03:50.996501 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/949145dc-8a4e-4715-bf58-22395853ed14-scripts\") pod \"ceilometer-0\" (UID: \"949145dc-8a4e-4715-bf58-22395853ed14\") " pod="openstack/ceilometer-0" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.996526 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/949145dc-8a4e-4715-bf58-22395853ed14-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"949145dc-8a4e-4715-bf58-22395853ed14\") " pod="openstack/ceilometer-0" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.996549 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/949145dc-8a4e-4715-bf58-22395853ed14-log-httpd\") pod \"ceilometer-0\" (UID: \"949145dc-8a4e-4715-bf58-22395853ed14\") " pod="openstack/ceilometer-0" Jan 29 07:03:50 crc kubenswrapper[4826]: I0129 07:03:50.996628 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/949145dc-8a4e-4715-bf58-22395853ed14-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"949145dc-8a4e-4715-bf58-22395853ed14\") " pod="openstack/ceilometer-0" Jan 29 07:03:51 crc kubenswrapper[4826]: I0129 07:03:51.050731 4826 scope.go:117] "RemoveContainer" containerID="f30f857579c6e8464fe523290c094e1e775babc999174c164bf9a985768dff66" Jan 29 07:03:51 crc kubenswrapper[4826]: I0129 07:03:51.097938 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4426z\" (UniqueName: \"kubernetes.io/projected/949145dc-8a4e-4715-bf58-22395853ed14-kube-api-access-4426z\") pod \"ceilometer-0\" (UID: \"949145dc-8a4e-4715-bf58-22395853ed14\") " 
pod="openstack/ceilometer-0" Jan 29 07:03:51 crc kubenswrapper[4826]: I0129 07:03:51.098196 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/949145dc-8a4e-4715-bf58-22395853ed14-config-data\") pod \"ceilometer-0\" (UID: \"949145dc-8a4e-4715-bf58-22395853ed14\") " pod="openstack/ceilometer-0" Jan 29 07:03:51 crc kubenswrapper[4826]: I0129 07:03:51.098214 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/949145dc-8a4e-4715-bf58-22395853ed14-run-httpd\") pod \"ceilometer-0\" (UID: \"949145dc-8a4e-4715-bf58-22395853ed14\") " pod="openstack/ceilometer-0" Jan 29 07:03:51 crc kubenswrapper[4826]: I0129 07:03:51.098249 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/949145dc-8a4e-4715-bf58-22395853ed14-scripts\") pod \"ceilometer-0\" (UID: \"949145dc-8a4e-4715-bf58-22395853ed14\") " pod="openstack/ceilometer-0" Jan 29 07:03:51 crc kubenswrapper[4826]: I0129 07:03:51.098273 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/949145dc-8a4e-4715-bf58-22395853ed14-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"949145dc-8a4e-4715-bf58-22395853ed14\") " pod="openstack/ceilometer-0" Jan 29 07:03:51 crc kubenswrapper[4826]: I0129 07:03:51.098309 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/949145dc-8a4e-4715-bf58-22395853ed14-log-httpd\") pod \"ceilometer-0\" (UID: \"949145dc-8a4e-4715-bf58-22395853ed14\") " pod="openstack/ceilometer-0" Jan 29 07:03:51 crc kubenswrapper[4826]: I0129 07:03:51.098389 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/949145dc-8a4e-4715-bf58-22395853ed14-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"949145dc-8a4e-4715-bf58-22395853ed14\") " pod="openstack/ceilometer-0" Jan 29 07:03:51 crc kubenswrapper[4826]: I0129 07:03:51.099443 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/949145dc-8a4e-4715-bf58-22395853ed14-run-httpd\") pod \"ceilometer-0\" (UID: \"949145dc-8a4e-4715-bf58-22395853ed14\") " pod="openstack/ceilometer-0" Jan 29 07:03:51 crc kubenswrapper[4826]: I0129 07:03:51.101326 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/949145dc-8a4e-4715-bf58-22395853ed14-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"949145dc-8a4e-4715-bf58-22395853ed14\") " pod="openstack/ceilometer-0" Jan 29 07:03:51 crc kubenswrapper[4826]: I0129 07:03:51.101714 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/949145dc-8a4e-4715-bf58-22395853ed14-log-httpd\") pod \"ceilometer-0\" (UID: \"949145dc-8a4e-4715-bf58-22395853ed14\") " pod="openstack/ceilometer-0" Jan 29 07:03:51 crc kubenswrapper[4826]: I0129 07:03:51.103075 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/949145dc-8a4e-4715-bf58-22395853ed14-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"949145dc-8a4e-4715-bf58-22395853ed14\") " pod="openstack/ceilometer-0" Jan 29 07:03:51 crc kubenswrapper[4826]: I0129 07:03:51.103922 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/949145dc-8a4e-4715-bf58-22395853ed14-scripts\") pod \"ceilometer-0\" (UID: \"949145dc-8a4e-4715-bf58-22395853ed14\") " pod="openstack/ceilometer-0" Jan 29 07:03:51 crc kubenswrapper[4826]: I0129 07:03:51.112524 4826 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/949145dc-8a4e-4715-bf58-22395853ed14-config-data\") pod \"ceilometer-0\" (UID: \"949145dc-8a4e-4715-bf58-22395853ed14\") " pod="openstack/ceilometer-0" Jan 29 07:03:51 crc kubenswrapper[4826]: I0129 07:03:51.120243 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4426z\" (UniqueName: \"kubernetes.io/projected/949145dc-8a4e-4715-bf58-22395853ed14-kube-api-access-4426z\") pod \"ceilometer-0\" (UID: \"949145dc-8a4e-4715-bf58-22395853ed14\") " pod="openstack/ceilometer-0" Jan 29 07:03:51 crc kubenswrapper[4826]: I0129 07:03:51.203194 4826 scope.go:117] "RemoveContainer" containerID="534570455bc7f28c999f94982da918b4d6103cc6c3fff68a2a134c4ea2d71822" Jan 29 07:03:51 crc kubenswrapper[4826]: I0129 07:03:51.248057 4826 scope.go:117] "RemoveContainer" containerID="d01e3c5369a5de9d7a1e41286fbd0292a5532a8b46a78005b8868dce2183bf7e" Jan 29 07:03:51 crc kubenswrapper[4826]: I0129 07:03:51.276187 4826 scope.go:117] "RemoveContainer" containerID="caaa9065a7eec3d3207f7501f03a024e35fc55d3a277d53adee97f01fb2a3514" Jan 29 07:03:51 crc kubenswrapper[4826]: I0129 07:03:51.353796 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 07:03:51 crc kubenswrapper[4826]: I0129 07:03:51.862767 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:03:51 crc kubenswrapper[4826]: I0129 07:03:51.895808 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-55957b69d9-prlpm" event={"ID":"b0f5ad8c-072d-4994-acb3-e898c0981eef","Type":"ContainerStarted","Data":"b6663b83c1263d117ea6eb33e490b5c99d06f2282c774e74f6560f8ad6d5bc74"} Jan 29 07:03:51 crc kubenswrapper[4826]: I0129 07:03:51.895847 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-55957b69d9-prlpm" event={"ID":"b0f5ad8c-072d-4994-acb3-e898c0981eef","Type":"ContainerStarted","Data":"bbf08c9f4b99f6403fa0c9818497bc87b482aca717f1d209262d98a1ff4df06d"} Jan 29 07:03:51 crc kubenswrapper[4826]: I0129 07:03:51.896045 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-55957b69d9-prlpm" Jan 29 07:03:51 crc kubenswrapper[4826]: I0129 07:03:51.896092 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-55957b69d9-prlpm" Jan 29 07:03:51 crc kubenswrapper[4826]: I0129 07:03:51.898115 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"949145dc-8a4e-4715-bf58-22395853ed14","Type":"ContainerStarted","Data":"48e3076674c7b53b49d6f5bebb5c68073dc206b9402681292b43a0d3c5c015f7"} Jan 29 07:03:51 crc kubenswrapper[4826]: I0129 07:03:51.925852 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-55957b69d9-prlpm" podStartSLOduration=7.925832975 podStartE2EDuration="7.925832975s" podCreationTimestamp="2026-01-29 07:03:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:03:51.916146225 +0000 UTC m=+1215.777939304" 
watchObservedRunningTime="2026-01-29 07:03:51.925832975 +0000 UTC m=+1215.787626034" Jan 29 07:03:52 crc kubenswrapper[4826]: I0129 07:03:52.819382 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fadc18f-3e45-4d36-ae10-f4ee19d24cf8" path="/var/lib/kubelet/pods/7fadc18f-3e45-4d36-ae10-f4ee19d24cf8/volumes" Jan 29 07:03:52 crc kubenswrapper[4826]: I0129 07:03:52.820331 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa3e839c-aa4e-42a3-9c30-5bba595f1aad" path="/var/lib/kubelet/pods/aa3e839c-aa4e-42a3-9c30-5bba595f1aad/volumes" Jan 29 07:03:52 crc kubenswrapper[4826]: I0129 07:03:52.908782 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"949145dc-8a4e-4715-bf58-22395853ed14","Type":"ContainerStarted","Data":"ea2155d850be5d9399ee7421588707f9ed0c5dc2dff45d86a761598a2074604c"} Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.520428 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.575831 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.656552 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/017465e9-fb85-458e-8eca-109192bf47e7-config-data\") pod \"017465e9-fb85-458e-8eca-109192bf47e7\" (UID: \"017465e9-fb85-458e-8eca-109192bf47e7\") " Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.657315 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"017465e9-fb85-458e-8eca-109192bf47e7\" (UID: \"017465e9-fb85-458e-8eca-109192bf47e7\") " Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.657643 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/017465e9-fb85-458e-8eca-109192bf47e7-logs\") pod \"017465e9-fb85-458e-8eca-109192bf47e7\" (UID: \"017465e9-fb85-458e-8eca-109192bf47e7\") " Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.657687 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/017465e9-fb85-458e-8eca-109192bf47e7-public-tls-certs\") pod \"017465e9-fb85-458e-8eca-109192bf47e7\" (UID: \"017465e9-fb85-458e-8eca-109192bf47e7\") " Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.657781 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/017465e9-fb85-458e-8eca-109192bf47e7-httpd-run\") pod \"017465e9-fb85-458e-8eca-109192bf47e7\" (UID: \"017465e9-fb85-458e-8eca-109192bf47e7\") " Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.657862 4826 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/017465e9-fb85-458e-8eca-109192bf47e7-combined-ca-bundle\") pod \"017465e9-fb85-458e-8eca-109192bf47e7\" (UID: \"017465e9-fb85-458e-8eca-109192bf47e7\") " Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.657892 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hkgx\" (UniqueName: \"kubernetes.io/projected/017465e9-fb85-458e-8eca-109192bf47e7-kube-api-access-7hkgx\") pod \"017465e9-fb85-458e-8eca-109192bf47e7\" (UID: \"017465e9-fb85-458e-8eca-109192bf47e7\") " Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.657974 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/017465e9-fb85-458e-8eca-109192bf47e7-scripts\") pod \"017465e9-fb85-458e-8eca-109192bf47e7\" (UID: \"017465e9-fb85-458e-8eca-109192bf47e7\") " Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.661035 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/017465e9-fb85-458e-8eca-109192bf47e7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "017465e9-fb85-458e-8eca-109192bf47e7" (UID: "017465e9-fb85-458e-8eca-109192bf47e7"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.661376 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/017465e9-fb85-458e-8eca-109192bf47e7-logs" (OuterVolumeSpecName: "logs") pod "017465e9-fb85-458e-8eca-109192bf47e7" (UID: "017465e9-fb85-458e-8eca-109192bf47e7"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.662434 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/017465e9-fb85-458e-8eca-109192bf47e7-scripts" (OuterVolumeSpecName: "scripts") pod "017465e9-fb85-458e-8eca-109192bf47e7" (UID: "017465e9-fb85-458e-8eca-109192bf47e7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.668469 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "017465e9-fb85-458e-8eca-109192bf47e7" (UID: "017465e9-fb85-458e-8eca-109192bf47e7"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.672375 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/017465e9-fb85-458e-8eca-109192bf47e7-kube-api-access-7hkgx" (OuterVolumeSpecName: "kube-api-access-7hkgx") pod "017465e9-fb85-458e-8eca-109192bf47e7" (UID: "017465e9-fb85-458e-8eca-109192bf47e7"). InnerVolumeSpecName "kube-api-access-7hkgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.700837 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/017465e9-fb85-458e-8eca-109192bf47e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "017465e9-fb85-458e-8eca-109192bf47e7" (UID: "017465e9-fb85-458e-8eca-109192bf47e7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.721592 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/017465e9-fb85-458e-8eca-109192bf47e7-config-data" (OuterVolumeSpecName: "config-data") pod "017465e9-fb85-458e-8eca-109192bf47e7" (UID: "017465e9-fb85-458e-8eca-109192bf47e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.760238 4826 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/017465e9-fb85-458e-8eca-109192bf47e7-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.760264 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/017465e9-fb85-458e-8eca-109192bf47e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.760279 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hkgx\" (UniqueName: \"kubernetes.io/projected/017465e9-fb85-458e-8eca-109192bf47e7-kube-api-access-7hkgx\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.760299 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/017465e9-fb85-458e-8eca-109192bf47e7-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.760318 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/017465e9-fb85-458e-8eca-109192bf47e7-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.760345 4826 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.760357 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/017465e9-fb85-458e-8eca-109192bf47e7-logs\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.773477 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/017465e9-fb85-458e-8eca-109192bf47e7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "017465e9-fb85-458e-8eca-109192bf47e7" (UID: "017465e9-fb85-458e-8eca-109192bf47e7"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.783874 4826 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.860949 4826 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.860975 4826 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/017465e9-fb85-458e-8eca-109192bf47e7-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.923995 4826 generic.go:334] "Generic (PLEG): container finished" podID="017465e9-fb85-458e-8eca-109192bf47e7" containerID="41219edc8dc3d793f63b17ea4b062b042171564ccc15cff867a70046f0e09625" exitCode=0 Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.924073 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"017465e9-fb85-458e-8eca-109192bf47e7","Type":"ContainerDied","Data":"41219edc8dc3d793f63b17ea4b062b042171564ccc15cff867a70046f0e09625"} Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.924101 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"017465e9-fb85-458e-8eca-109192bf47e7","Type":"ContainerDied","Data":"389b3d025aec4080b309b392d3a76c1f5eab4a2b380ccad5f64d117da1f3fa96"} Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.924118 4826 scope.go:117] "RemoveContainer" containerID="41219edc8dc3d793f63b17ea4b062b042171564ccc15cff867a70046f0e09625" Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.924238 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.929709 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"949145dc-8a4e-4715-bf58-22395853ed14","Type":"ContainerStarted","Data":"eada752438475dee942cc8aca12fbf0fee177f0e457bd66d055cec2310616f09"} Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.929757 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"949145dc-8a4e-4715-bf58-22395853ed14","Type":"ContainerStarted","Data":"fe568e1bafcbdf0935eb5bb9a9fda5df2d252e12c2d9ce98b008ed2271d62778"} Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.935489 4826 generic.go:334] "Generic (PLEG): container finished" podID="cbf55335-feed-4467-9375-9543d111bc55" containerID="e705f84552b82451981b2667b42d642a119793c5b4249ad22f285c72e2bd3759" exitCode=0 Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.935542 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cbf55335-feed-4467-9375-9543d111bc55","Type":"ContainerDied","Data":"e705f84552b82451981b2667b42d642a119793c5b4249ad22f285c72e2bd3759"} 
Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.962861 4826 scope.go:117] "RemoveContainer" containerID="02fd3aeeeceec4c65c45b602c7ab22ebc9bd69ec465aea4e012b65c127ca4a4a" Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.969369 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.980908 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.990624 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 07:03:53 crc kubenswrapper[4826]: E0129 07:03:53.991171 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="017465e9-fb85-458e-8eca-109192bf47e7" containerName="glance-log" Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.991187 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="017465e9-fb85-458e-8eca-109192bf47e7" containerName="glance-log" Jan 29 07:03:53 crc kubenswrapper[4826]: E0129 07:03:53.991232 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="017465e9-fb85-458e-8eca-109192bf47e7" containerName="glance-httpd" Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.991239 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="017465e9-fb85-458e-8eca-109192bf47e7" containerName="glance-httpd" Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.991530 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="017465e9-fb85-458e-8eca-109192bf47e7" containerName="glance-httpd" Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.991558 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="017465e9-fb85-458e-8eca-109192bf47e7" containerName="glance-log" Jan 29 07:03:53 crc kubenswrapper[4826]: I0129 07:03:53.993034 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.004812 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.005102 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.012822 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.041423 4826 scope.go:117] "RemoveContainer" containerID="41219edc8dc3d793f63b17ea4b062b042171564ccc15cff867a70046f0e09625" Jan 29 07:03:54 crc kubenswrapper[4826]: E0129 07:03:54.045117 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41219edc8dc3d793f63b17ea4b062b042171564ccc15cff867a70046f0e09625\": container with ID starting with 41219edc8dc3d793f63b17ea4b062b042171564ccc15cff867a70046f0e09625 not found: ID does not exist" containerID="41219edc8dc3d793f63b17ea4b062b042171564ccc15cff867a70046f0e09625" Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.045176 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41219edc8dc3d793f63b17ea4b062b042171564ccc15cff867a70046f0e09625"} err="failed to get container status \"41219edc8dc3d793f63b17ea4b062b042171564ccc15cff867a70046f0e09625\": rpc error: code = NotFound desc = could not find container \"41219edc8dc3d793f63b17ea4b062b042171564ccc15cff867a70046f0e09625\": container with ID starting with 41219edc8dc3d793f63b17ea4b062b042171564ccc15cff867a70046f0e09625 not found: ID does not exist" Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.045209 4826 scope.go:117] "RemoveContainer" 
containerID="02fd3aeeeceec4c65c45b602c7ab22ebc9bd69ec465aea4e012b65c127ca4a4a" Jan 29 07:03:54 crc kubenswrapper[4826]: E0129 07:03:54.046183 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02fd3aeeeceec4c65c45b602c7ab22ebc9bd69ec465aea4e012b65c127ca4a4a\": container with ID starting with 02fd3aeeeceec4c65c45b602c7ab22ebc9bd69ec465aea4e012b65c127ca4a4a not found: ID does not exist" containerID="02fd3aeeeceec4c65c45b602c7ab22ebc9bd69ec465aea4e012b65c127ca4a4a" Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.046223 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02fd3aeeeceec4c65c45b602c7ab22ebc9bd69ec465aea4e012b65c127ca4a4a"} err="failed to get container status \"02fd3aeeeceec4c65c45b602c7ab22ebc9bd69ec465aea4e012b65c127ca4a4a\": rpc error: code = NotFound desc = could not find container \"02fd3aeeeceec4c65c45b602c7ab22ebc9bd69ec465aea4e012b65c127ca4a4a\": container with ID starting with 02fd3aeeeceec4c65c45b602c7ab22ebc9bd69ec465aea4e012b65c127ca4a4a not found: ID does not exist" Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.168380 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c080b978-6895-4067-9dd5-2c23d4d68518-config-data\") pod \"glance-default-external-api-0\" (UID: \"c080b978-6895-4067-9dd5-2c23d4d68518\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.168420 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c080b978-6895-4067-9dd5-2c23d4d68518-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c080b978-6895-4067-9dd5-2c23d4d68518\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 
07:03:54.168479 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c080b978-6895-4067-9dd5-2c23d4d68518-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c080b978-6895-4067-9dd5-2c23d4d68518\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.168496 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c080b978-6895-4067-9dd5-2c23d4d68518-logs\") pod \"glance-default-external-api-0\" (UID: \"c080b978-6895-4067-9dd5-2c23d4d68518\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.168742 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c080b978-6895-4067-9dd5-2c23d4d68518-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c080b978-6895-4067-9dd5-2c23d4d68518\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.168805 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swjn5\" (UniqueName: \"kubernetes.io/projected/c080b978-6895-4067-9dd5-2c23d4d68518-kube-api-access-swjn5\") pod \"glance-default-external-api-0\" (UID: \"c080b978-6895-4067-9dd5-2c23d4d68518\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.168883 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"c080b978-6895-4067-9dd5-2c23d4d68518\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:54 crc 
kubenswrapper[4826]: I0129 07:03:54.168921 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c080b978-6895-4067-9dd5-2c23d4d68518-scripts\") pod \"glance-default-external-api-0\" (UID: \"c080b978-6895-4067-9dd5-2c23d4d68518\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.270950 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c080b978-6895-4067-9dd5-2c23d4d68518-config-data\") pod \"glance-default-external-api-0\" (UID: \"c080b978-6895-4067-9dd5-2c23d4d68518\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.271003 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c080b978-6895-4067-9dd5-2c23d4d68518-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c080b978-6895-4067-9dd5-2c23d4d68518\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.271051 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c080b978-6895-4067-9dd5-2c23d4d68518-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c080b978-6895-4067-9dd5-2c23d4d68518\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.271073 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c080b978-6895-4067-9dd5-2c23d4d68518-logs\") pod \"glance-default-external-api-0\" (UID: \"c080b978-6895-4067-9dd5-2c23d4d68518\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.271114 4826 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c080b978-6895-4067-9dd5-2c23d4d68518-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c080b978-6895-4067-9dd5-2c23d4d68518\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.271140 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swjn5\" (UniqueName: \"kubernetes.io/projected/c080b978-6895-4067-9dd5-2c23d4d68518-kube-api-access-swjn5\") pod \"glance-default-external-api-0\" (UID: \"c080b978-6895-4067-9dd5-2c23d4d68518\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.271177 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"c080b978-6895-4067-9dd5-2c23d4d68518\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.271204 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c080b978-6895-4067-9dd5-2c23d4d68518-scripts\") pod \"glance-default-external-api-0\" (UID: \"c080b978-6895-4067-9dd5-2c23d4d68518\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.271636 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c080b978-6895-4067-9dd5-2c23d4d68518-logs\") pod \"glance-default-external-api-0\" (UID: \"c080b978-6895-4067-9dd5-2c23d4d68518\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.271634 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"c080b978-6895-4067-9dd5-2c23d4d68518\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.271972 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c080b978-6895-4067-9dd5-2c23d4d68518-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c080b978-6895-4067-9dd5-2c23d4d68518\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.276316 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c080b978-6895-4067-9dd5-2c23d4d68518-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c080b978-6895-4067-9dd5-2c23d4d68518\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.276408 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c080b978-6895-4067-9dd5-2c23d4d68518-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c080b978-6895-4067-9dd5-2c23d4d68518\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.277903 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c080b978-6895-4067-9dd5-2c23d4d68518-config-data\") pod \"glance-default-external-api-0\" (UID: \"c080b978-6895-4067-9dd5-2c23d4d68518\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.278967 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c080b978-6895-4067-9dd5-2c23d4d68518-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"c080b978-6895-4067-9dd5-2c23d4d68518\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.292659 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swjn5\" (UniqueName: \"kubernetes.io/projected/c080b978-6895-4067-9dd5-2c23d4d68518-kube-api-access-swjn5\") pod \"glance-default-external-api-0\" (UID: \"c080b978-6895-4067-9dd5-2c23d4d68518\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.293908 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"c080b978-6895-4067-9dd5-2c23d4d68518\") " pod="openstack/glance-default-external-api-0" Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.338756 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.360022 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.475015 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"cbf55335-feed-4467-9375-9543d111bc55\" (UID: \"cbf55335-feed-4467-9375-9543d111bc55\") " Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.475079 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbf55335-feed-4467-9375-9543d111bc55-scripts\") pod \"cbf55335-feed-4467-9375-9543d111bc55\" (UID: \"cbf55335-feed-4467-9375-9543d111bc55\") " Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.475106 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cbf55335-feed-4467-9375-9543d111bc55-httpd-run\") pod \"cbf55335-feed-4467-9375-9543d111bc55\" (UID: \"cbf55335-feed-4467-9375-9543d111bc55\") " Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.475229 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4xzt\" (UniqueName: \"kubernetes.io/projected/cbf55335-feed-4467-9375-9543d111bc55-kube-api-access-c4xzt\") pod \"cbf55335-feed-4467-9375-9543d111bc55\" (UID: \"cbf55335-feed-4467-9375-9543d111bc55\") " Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.475251 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbf55335-feed-4467-9375-9543d111bc55-logs\") pod \"cbf55335-feed-4467-9375-9543d111bc55\" (UID: \"cbf55335-feed-4467-9375-9543d111bc55\") " Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.475336 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cbf55335-feed-4467-9375-9543d111bc55-combined-ca-bundle\") pod \"cbf55335-feed-4467-9375-9543d111bc55\" (UID: \"cbf55335-feed-4467-9375-9543d111bc55\") " Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.475373 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbf55335-feed-4467-9375-9543d111bc55-internal-tls-certs\") pod \"cbf55335-feed-4467-9375-9543d111bc55\" (UID: \"cbf55335-feed-4467-9375-9543d111bc55\") " Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.475414 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbf55335-feed-4467-9375-9543d111bc55-config-data\") pod \"cbf55335-feed-4467-9375-9543d111bc55\" (UID: \"cbf55335-feed-4467-9375-9543d111bc55\") " Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.478032 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbf55335-feed-4467-9375-9543d111bc55-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "cbf55335-feed-4467-9375-9543d111bc55" (UID: "cbf55335-feed-4467-9375-9543d111bc55"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.478249 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbf55335-feed-4467-9375-9543d111bc55-logs" (OuterVolumeSpecName: "logs") pod "cbf55335-feed-4467-9375-9543d111bc55" (UID: "cbf55335-feed-4467-9375-9543d111bc55"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.482632 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbf55335-feed-4467-9375-9543d111bc55-scripts" (OuterVolumeSpecName: "scripts") pod "cbf55335-feed-4467-9375-9543d111bc55" (UID: "cbf55335-feed-4467-9375-9543d111bc55"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.484968 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "cbf55335-feed-4467-9375-9543d111bc55" (UID: "cbf55335-feed-4467-9375-9543d111bc55"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.485870 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbf55335-feed-4467-9375-9543d111bc55-kube-api-access-c4xzt" (OuterVolumeSpecName: "kube-api-access-c4xzt") pod "cbf55335-feed-4467-9375-9543d111bc55" (UID: "cbf55335-feed-4467-9375-9543d111bc55"). InnerVolumeSpecName "kube-api-access-c4xzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.515595 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbf55335-feed-4467-9375-9543d111bc55-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cbf55335-feed-4467-9375-9543d111bc55" (UID: "cbf55335-feed-4467-9375-9543d111bc55"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.536653 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbf55335-feed-4467-9375-9543d111bc55-config-data" (OuterVolumeSpecName: "config-data") pod "cbf55335-feed-4467-9375-9543d111bc55" (UID: "cbf55335-feed-4467-9375-9543d111bc55"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.548121 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbf55335-feed-4467-9375-9543d111bc55-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "cbf55335-feed-4467-9375-9543d111bc55" (UID: "cbf55335-feed-4467-9375-9543d111bc55"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.577914 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbf55335-feed-4467-9375-9543d111bc55-logs\") on node \"crc\" DevicePath \"\""
Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.577951 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4xzt\" (UniqueName: \"kubernetes.io/projected/cbf55335-feed-4467-9375-9543d111bc55-kube-api-access-c4xzt\") on node \"crc\" DevicePath \"\""
Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.577968 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbf55335-feed-4467-9375-9543d111bc55-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.577980 4826 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbf55335-feed-4467-9375-9543d111bc55-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.577992 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbf55335-feed-4467-9375-9543d111bc55-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.578027 4826 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" "
Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.578039 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbf55335-feed-4467-9375-9543d111bc55-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.578049 4826 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cbf55335-feed-4467-9375-9543d111bc55-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.604585 4826 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc"
Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.679766 4826 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\""
Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.820238 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="017465e9-fb85-458e-8eca-109192bf47e7" path="/var/lib/kubelet/pods/017465e9-fb85-458e-8eca-109192bf47e7/volumes"
Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.895491 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 07:03:54 crc kubenswrapper[4826]: W0129 07:03:54.905524 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc080b978_6895_4067_9dd5_2c23d4d68518.slice/crio-7bea36e3d6f592921ffc6a06e593f52844df17f3699dbcefb0c6ce1458942911 WatchSource:0}: Error finding container 7bea36e3d6f592921ffc6a06e593f52844df17f3699dbcefb0c6ce1458942911: Status 404 returned error can't find the container with id 7bea36e3d6f592921ffc6a06e593f52844df17f3699dbcefb0c6ce1458942911
Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.945687 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c080b978-6895-4067-9dd5-2c23d4d68518","Type":"ContainerStarted","Data":"7bea36e3d6f592921ffc6a06e593f52844df17f3699dbcefb0c6ce1458942911"}
Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.947698 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cbf55335-feed-4467-9375-9543d111bc55","Type":"ContainerDied","Data":"aabb7bf5dbd735aa0be2bb411b1859ac593328e2447df4235ed017bf57b85db0"}
Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.947762 4826 scope.go:117] "RemoveContainer" containerID="e705f84552b82451981b2667b42d642a119793c5b4249ad22f285c72e2bd3759"
Jan 29 07:03:54 crc kubenswrapper[4826]: I0129 07:03:54.947925 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 29 07:03:55 crc kubenswrapper[4826]: I0129 07:03:55.040614 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 29 07:03:55 crc kubenswrapper[4826]: I0129 07:03:55.050054 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 29 07:03:55 crc kubenswrapper[4826]: I0129 07:03:55.066677 4826 scope.go:117] "RemoveContainer" containerID="284517534f5a17580a2a91b0b750cf4fe7d6b57ad048cb55c376801303f8f9ff"
Jan 29 07:03:55 crc kubenswrapper[4826]: I0129 07:03:55.067862 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 29 07:03:55 crc kubenswrapper[4826]: E0129 07:03:55.069736 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbf55335-feed-4467-9375-9543d111bc55" containerName="glance-httpd"
Jan 29 07:03:55 crc kubenswrapper[4826]: I0129 07:03:55.069758 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf55335-feed-4467-9375-9543d111bc55" containerName="glance-httpd"
Jan 29 07:03:55 crc kubenswrapper[4826]: E0129 07:03:55.069777 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbf55335-feed-4467-9375-9543d111bc55" containerName="glance-log"
Jan 29 07:03:55 crc kubenswrapper[4826]: I0129 07:03:55.069789 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf55335-feed-4467-9375-9543d111bc55" containerName="glance-log"
Jan 29 07:03:55 crc kubenswrapper[4826]: I0129 07:03:55.069985 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbf55335-feed-4467-9375-9543d111bc55" containerName="glance-httpd"
Jan 29 07:03:55 crc kubenswrapper[4826]: I0129 07:03:55.069998 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbf55335-feed-4467-9375-9543d111bc55" containerName="glance-log"
Jan 29 07:03:55 crc kubenswrapper[4826]: I0129 07:03:55.070960 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 29 07:03:55 crc kubenswrapper[4826]: I0129 07:03:55.073353 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 29 07:03:55 crc kubenswrapper[4826]: I0129 07:03:55.074202 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 29 07:03:55 crc kubenswrapper[4826]: I0129 07:03:55.089777 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 29 07:03:55 crc kubenswrapper[4826]: I0129 07:03:55.187909 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5378dab4-ad0c-4259-a7d2-d3f7e784a142-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5378dab4-ad0c-4259-a7d2-d3f7e784a142\") " pod="openstack/glance-default-internal-api-0"
Jan 29 07:03:55 crc kubenswrapper[4826]: I0129 07:03:55.187978 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5378dab4-ad0c-4259-a7d2-d3f7e784a142-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5378dab4-ad0c-4259-a7d2-d3f7e784a142\") " pod="openstack/glance-default-internal-api-0"
Jan 29 07:03:55 crc kubenswrapper[4826]: I0129 07:03:55.188077 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5378dab4-ad0c-4259-a7d2-d3f7e784a142-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5378dab4-ad0c-4259-a7d2-d3f7e784a142\") " pod="openstack/glance-default-internal-api-0"
Jan 29 07:03:55 crc kubenswrapper[4826]: I0129 07:03:55.188133 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"5378dab4-ad0c-4259-a7d2-d3f7e784a142\") " pod="openstack/glance-default-internal-api-0"
Jan 29 07:03:55 crc kubenswrapper[4826]: I0129 07:03:55.188182 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5378dab4-ad0c-4259-a7d2-d3f7e784a142-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5378dab4-ad0c-4259-a7d2-d3f7e784a142\") " pod="openstack/glance-default-internal-api-0"
Jan 29 07:03:55 crc kubenswrapper[4826]: I0129 07:03:55.188214 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5378dab4-ad0c-4259-a7d2-d3f7e784a142-logs\") pod \"glance-default-internal-api-0\" (UID: \"5378dab4-ad0c-4259-a7d2-d3f7e784a142\") " pod="openstack/glance-default-internal-api-0"
Jan 29 07:03:55 crc kubenswrapper[4826]: I0129 07:03:55.188257 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqslr\" (UniqueName: \"kubernetes.io/projected/5378dab4-ad0c-4259-a7d2-d3f7e784a142-kube-api-access-gqslr\") pod \"glance-default-internal-api-0\" (UID: \"5378dab4-ad0c-4259-a7d2-d3f7e784a142\") " pod="openstack/glance-default-internal-api-0"
Jan 29 07:03:55 crc kubenswrapper[4826]: I0129 07:03:55.188300 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5378dab4-ad0c-4259-a7d2-d3f7e784a142-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5378dab4-ad0c-4259-a7d2-d3f7e784a142\") " pod="openstack/glance-default-internal-api-0"
Jan 29 07:03:55 crc kubenswrapper[4826]: I0129 07:03:55.290041 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5378dab4-ad0c-4259-a7d2-d3f7e784a142-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5378dab4-ad0c-4259-a7d2-d3f7e784a142\") " pod="openstack/glance-default-internal-api-0"
Jan 29 07:03:55 crc kubenswrapper[4826]: I0129 07:03:55.290104 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5378dab4-ad0c-4259-a7d2-d3f7e784a142-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5378dab4-ad0c-4259-a7d2-d3f7e784a142\") " pod="openstack/glance-default-internal-api-0"
Jan 29 07:03:55 crc kubenswrapper[4826]: I0129 07:03:55.290150 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5378dab4-ad0c-4259-a7d2-d3f7e784a142-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5378dab4-ad0c-4259-a7d2-d3f7e784a142\") " pod="openstack/glance-default-internal-api-0"
Jan 29 07:03:55 crc kubenswrapper[4826]: I0129 07:03:55.290182 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"5378dab4-ad0c-4259-a7d2-d3f7e784a142\") " pod="openstack/glance-default-internal-api-0"
Jan 29 07:03:55 crc kubenswrapper[4826]: I0129 07:03:55.290215 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5378dab4-ad0c-4259-a7d2-d3f7e784a142-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5378dab4-ad0c-4259-a7d2-d3f7e784a142\") " pod="openstack/glance-default-internal-api-0"
Jan 29 07:03:55 crc kubenswrapper[4826]: I0129 07:03:55.290242 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5378dab4-ad0c-4259-a7d2-d3f7e784a142-logs\") pod \"glance-default-internal-api-0\" (UID: \"5378dab4-ad0c-4259-a7d2-d3f7e784a142\") " pod="openstack/glance-default-internal-api-0"
Jan 29 07:03:55 crc kubenswrapper[4826]: I0129 07:03:55.290279 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqslr\" (UniqueName: \"kubernetes.io/projected/5378dab4-ad0c-4259-a7d2-d3f7e784a142-kube-api-access-gqslr\") pod \"glance-default-internal-api-0\" (UID: \"5378dab4-ad0c-4259-a7d2-d3f7e784a142\") " pod="openstack/glance-default-internal-api-0"
Jan 29 07:03:55 crc kubenswrapper[4826]: I0129 07:03:55.290314 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5378dab4-ad0c-4259-a7d2-d3f7e784a142-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5378dab4-ad0c-4259-a7d2-d3f7e784a142\") " pod="openstack/glance-default-internal-api-0"
Jan 29 07:03:55 crc kubenswrapper[4826]: I0129 07:03:55.290840 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5378dab4-ad0c-4259-a7d2-d3f7e784a142-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5378dab4-ad0c-4259-a7d2-d3f7e784a142\") " pod="openstack/glance-default-internal-api-0"
Jan 29 07:03:55 crc kubenswrapper[4826]: I0129 07:03:55.291710 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"5378dab4-ad0c-4259-a7d2-d3f7e784a142\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0"
Jan 29 07:03:55 crc kubenswrapper[4826]: I0129 07:03:55.291738 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5378dab4-ad0c-4259-a7d2-d3f7e784a142-logs\") pod \"glance-default-internal-api-0\" (UID: \"5378dab4-ad0c-4259-a7d2-d3f7e784a142\") " pod="openstack/glance-default-internal-api-0"
Jan 29 07:03:55 crc kubenswrapper[4826]: I0129 07:03:55.295943 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5378dab4-ad0c-4259-a7d2-d3f7e784a142-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5378dab4-ad0c-4259-a7d2-d3f7e784a142\") " pod="openstack/glance-default-internal-api-0"
Jan 29 07:03:55 crc kubenswrapper[4826]: I0129 07:03:55.308737 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5378dab4-ad0c-4259-a7d2-d3f7e784a142-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5378dab4-ad0c-4259-a7d2-d3f7e784a142\") " pod="openstack/glance-default-internal-api-0"
Jan 29 07:03:55 crc kubenswrapper[4826]: I0129 07:03:55.310509 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5378dab4-ad0c-4259-a7d2-d3f7e784a142-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5378dab4-ad0c-4259-a7d2-d3f7e784a142\") " pod="openstack/glance-default-internal-api-0"
Jan 29 07:03:55 crc kubenswrapper[4826]: I0129 07:03:55.311223 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5378dab4-ad0c-4259-a7d2-d3f7e784a142-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5378dab4-ad0c-4259-a7d2-d3f7e784a142\") " pod="openstack/glance-default-internal-api-0"
Jan 29 07:03:55 crc kubenswrapper[4826]: I0129 07:03:55.315319 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqslr\" (UniqueName: \"kubernetes.io/projected/5378dab4-ad0c-4259-a7d2-d3f7e784a142-kube-api-access-gqslr\") pod \"glance-default-internal-api-0\" (UID: \"5378dab4-ad0c-4259-a7d2-d3f7e784a142\") " pod="openstack/glance-default-internal-api-0"
Jan 29 07:03:55 crc kubenswrapper[4826]: I0129 07:03:55.340845 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"5378dab4-ad0c-4259-a7d2-d3f7e784a142\") " pod="openstack/glance-default-internal-api-0"
Jan 29 07:03:55 crc kubenswrapper[4826]: I0129 07:03:55.393733 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 29 07:03:56 crc kubenswrapper[4826]: I0129 07:03:56.121090 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 29 07:03:56 crc kubenswrapper[4826]: W0129 07:03:56.134029 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5378dab4_ad0c_4259_a7d2_d3f7e784a142.slice/crio-ab263a483e51b5e0215aae333c6a2ec2607b598a5c44587311b9c8ae95994fc9 WatchSource:0}: Error finding container ab263a483e51b5e0215aae333c6a2ec2607b598a5c44587311b9c8ae95994fc9: Status 404 returned error can't find the container with id ab263a483e51b5e0215aae333c6a2ec2607b598a5c44587311b9c8ae95994fc9
Jan 29 07:03:56 crc kubenswrapper[4826]: I0129 07:03:56.823928 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbf55335-feed-4467-9375-9543d111bc55" path="/var/lib/kubelet/pods/cbf55335-feed-4467-9375-9543d111bc55/volumes"
Jan 29 07:03:56 crc kubenswrapper[4826]: I0129 07:03:56.975526 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5378dab4-ad0c-4259-a7d2-d3f7e784a142","Type":"ContainerStarted","Data":"ab263a483e51b5e0215aae333c6a2ec2607b598a5c44587311b9c8ae95994fc9"}
Jan 29 07:03:56 crc kubenswrapper[4826]: I0129 07:03:56.977243 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c080b978-6895-4067-9dd5-2c23d4d68518","Type":"ContainerStarted","Data":"b086320849aa987d26d49d964fbefd0fcd5dd9b1184d3344ea085e5c42fc14d1"}
Jan 29 07:03:58 crc kubenswrapper[4826]: I0129 07:03:58.008054 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5378dab4-ad0c-4259-a7d2-d3f7e784a142","Type":"ContainerStarted","Data":"3817b6b8e4ea595c7ced258dd6bfd2af338287753b307239e9914dc8d293a791"}
Jan 29 07:03:59 crc kubenswrapper[4826]: I0129 07:03:59.937751 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-55957b69d9-prlpm"
Jan 29 07:03:59 crc kubenswrapper[4826]: I0129 07:03:59.938395 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-55957b69d9-prlpm"
Jan 29 07:04:00 crc kubenswrapper[4826]: I0129 07:04:00.028206 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5378dab4-ad0c-4259-a7d2-d3f7e784a142","Type":"ContainerStarted","Data":"31b30318fc91eafcd3b97afe85a5e0965b844ee6e31ce721180f3fef71409d0b"}
Jan 29 07:04:00 crc kubenswrapper[4826]: I0129 07:04:00.046800 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="949145dc-8a4e-4715-bf58-22395853ed14" containerName="ceilometer-central-agent" containerID="cri-o://ea2155d850be5d9399ee7421588707f9ed0c5dc2dff45d86a761598a2074604c" gracePeriod=30
Jan 29 07:04:00 crc kubenswrapper[4826]: I0129 07:04:00.047086 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"949145dc-8a4e-4715-bf58-22395853ed14","Type":"ContainerStarted","Data":"df1308edb9ac92d762c5745e435d14d3a812a4a537d758c981482ddfd629b39d"}
Jan 29 07:04:00 crc kubenswrapper[4826]: I0129 07:04:00.047133 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="949145dc-8a4e-4715-bf58-22395853ed14" containerName="proxy-httpd" containerID="cri-o://df1308edb9ac92d762c5745e435d14d3a812a4a537d758c981482ddfd629b39d" gracePeriod=30
Jan 29 07:04:00 crc kubenswrapper[4826]: I0129 07:04:00.047183 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="949145dc-8a4e-4715-bf58-22395853ed14" containerName="sg-core" containerID="cri-o://eada752438475dee942cc8aca12fbf0fee177f0e457bd66d055cec2310616f09" gracePeriod=30
Jan 29 07:04:00 crc kubenswrapper[4826]: I0129 07:04:00.047191 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 29 07:04:00 crc kubenswrapper[4826]: I0129 07:04:00.047219 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="949145dc-8a4e-4715-bf58-22395853ed14" containerName="ceilometer-notification-agent" containerID="cri-o://fe568e1bafcbdf0935eb5bb9a9fda5df2d252e12c2d9ce98b008ed2271d62778" gracePeriod=30
Jan 29 07:04:00 crc kubenswrapper[4826]: I0129 07:04:00.075390 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.075367888 podStartE2EDuration="5.075367888s" podCreationTimestamp="2026-01-29 07:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:04:00.071454687 +0000 UTC m=+1223.933247756" watchObservedRunningTime="2026-01-29 07:04:00.075367888 +0000 UTC m=+1223.937160957"
Jan 29 07:04:00 crc kubenswrapper[4826]: I0129 07:04:00.077553 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c080b978-6895-4067-9dd5-2c23d4d68518","Type":"ContainerStarted","Data":"18ae489dd61bb2195354b5d8f5f9b6e0f384329f3a4ac858dde0d3feffe4b202"}
Jan 29 07:04:00 crc kubenswrapper[4826]: I0129 07:04:00.108201 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.045397345 podStartE2EDuration="10.108183275s" podCreationTimestamp="2026-01-29 07:03:50 +0000 UTC" firstStartedPulling="2026-01-29 07:03:51.862546302 +0000 UTC m=+1215.724339371" lastFinishedPulling="2026-01-29 07:03:57.925332232 +0000 UTC m=+1221.787125301" observedRunningTime="2026-01-29 07:04:00.093033244 +0000 UTC m=+1223.954826313" watchObservedRunningTime="2026-01-29 07:04:00.108183275 +0000 UTC m=+1223.969976334"
Jan 29 07:04:00 crc kubenswrapper[4826]: I0129 07:04:00.134466 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.134447932 podStartE2EDuration="7.134447932s" podCreationTimestamp="2026-01-29 07:03:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:04:00.111978862 +0000 UTC m=+1223.973771931" watchObservedRunningTime="2026-01-29 07:04:00.134447932 +0000 UTC m=+1223.996241001"
Jan 29 07:04:01 crc kubenswrapper[4826]: I0129 07:04:01.087827 4826 generic.go:334] "Generic (PLEG): container finished" podID="949145dc-8a4e-4715-bf58-22395853ed14" containerID="df1308edb9ac92d762c5745e435d14d3a812a4a537d758c981482ddfd629b39d" exitCode=0
Jan 29 07:04:01 crc kubenswrapper[4826]: I0129 07:04:01.087861 4826 generic.go:334] "Generic (PLEG): container finished" podID="949145dc-8a4e-4715-bf58-22395853ed14" containerID="eada752438475dee942cc8aca12fbf0fee177f0e457bd66d055cec2310616f09" exitCode=2
Jan 29 07:04:01 crc kubenswrapper[4826]: I0129 07:04:01.087868 4826 generic.go:334] "Generic (PLEG): container finished" podID="949145dc-8a4e-4715-bf58-22395853ed14" containerID="fe568e1bafcbdf0935eb5bb9a9fda5df2d252e12c2d9ce98b008ed2271d62778" exitCode=0
Jan 29 07:04:01 crc kubenswrapper[4826]: I0129 07:04:01.088689 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"949145dc-8a4e-4715-bf58-22395853ed14","Type":"ContainerDied","Data":"df1308edb9ac92d762c5745e435d14d3a812a4a537d758c981482ddfd629b39d"}
Jan 29 07:04:01 crc kubenswrapper[4826]: I0129 07:04:01.088713 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"949145dc-8a4e-4715-bf58-22395853ed14","Type":"ContainerDied","Data":"eada752438475dee942cc8aca12fbf0fee177f0e457bd66d055cec2310616f09"}
Jan 29 07:04:01 crc kubenswrapper[4826]: I0129 07:04:01.088723 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"949145dc-8a4e-4715-bf58-22395853ed14","Type":"ContainerDied","Data":"fe568e1bafcbdf0935eb5bb9a9fda5df2d252e12c2d9ce98b008ed2271d62778"}
Jan 29 07:04:04 crc kubenswrapper[4826]: I0129 07:04:04.339726 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 29 07:04:04 crc kubenswrapper[4826]: I0129 07:04:04.340368 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 29 07:04:04 crc kubenswrapper[4826]: I0129 07:04:04.380014 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 29 07:04:04 crc kubenswrapper[4826]: I0129 07:04:04.409163 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 29 07:04:04 crc kubenswrapper[4826]: I0129 07:04:04.635562 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-c48cv"]
Jan 29 07:04:04 crc kubenswrapper[4826]: I0129 07:04:04.636800 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-c48cv"
Jan 29 07:04:04 crc kubenswrapper[4826]: I0129 07:04:04.645421 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-c48cv"]
Jan 29 07:04:04 crc kubenswrapper[4826]: I0129 07:04:04.729749 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-84d9-account-create-update-rrjq4"]
Jan 29 07:04:04 crc kubenswrapper[4826]: I0129 07:04:04.735981 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-84d9-account-create-update-rrjq4"
Jan 29 07:04:04 crc kubenswrapper[4826]: I0129 07:04:04.738757 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Jan 29 07:04:04 crc kubenswrapper[4826]: I0129 07:04:04.771746 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-84d9-account-create-update-rrjq4"]
Jan 29 07:04:04 crc kubenswrapper[4826]: I0129 07:04:04.782885 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-45cdw"]
Jan 29 07:04:04 crc kubenswrapper[4826]: I0129 07:04:04.784133 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-45cdw"
Jan 29 07:04:04 crc kubenswrapper[4826]: I0129 07:04:04.799855 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c292348-77b1-4f3e-9a58-aaecfbaa43e9-operator-scripts\") pod \"nova-api-84d9-account-create-update-rrjq4\" (UID: \"0c292348-77b1-4f3e-9a58-aaecfbaa43e9\") " pod="openstack/nova-api-84d9-account-create-update-rrjq4"
Jan 29 07:04:04 crc kubenswrapper[4826]: I0129 07:04:04.801198 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvxdd\" (UniqueName: \"kubernetes.io/projected/4d989bde-5508-4800-9910-76c04d308f3e-kube-api-access-hvxdd\") pod \"nova-api-db-create-c48cv\" (UID: \"4d989bde-5508-4800-9910-76c04d308f3e\") " pod="openstack/nova-api-db-create-c48cv"
Jan 29 07:04:04 crc kubenswrapper[4826]: I0129 07:04:04.801241 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d989bde-5508-4800-9910-76c04d308f3e-operator-scripts\") pod \"nova-api-db-create-c48cv\" (UID: \"4d989bde-5508-4800-9910-76c04d308f3e\") " pod="openstack/nova-api-db-create-c48cv"
Jan 29 07:04:04 crc kubenswrapper[4826]: I0129 07:04:04.801299 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8695\" (UniqueName: \"kubernetes.io/projected/0c292348-77b1-4f3e-9a58-aaecfbaa43e9-kube-api-access-n8695\") pod \"nova-api-84d9-account-create-update-rrjq4\" (UID: \"0c292348-77b1-4f3e-9a58-aaecfbaa43e9\") " pod="openstack/nova-api-84d9-account-create-update-rrjq4"
Jan 29 07:04:04 crc kubenswrapper[4826]: I0129 07:04:04.807765 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-45cdw"]
Jan 29 07:04:04 crc kubenswrapper[4826]: I0129 07:04:04.852355 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-dp8h9"]
Jan 29 07:04:04 crc kubenswrapper[4826]: I0129 07:04:04.853837 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-dp8h9"
Jan 29 07:04:04 crc kubenswrapper[4826]: I0129 07:04:04.862294 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-dp8h9"]
Jan 29 07:04:04 crc kubenswrapper[4826]: I0129 07:04:04.872431 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-984c-account-create-update-dpdkf"]
Jan 29 07:04:04 crc kubenswrapper[4826]: I0129 07:04:04.873790 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-984c-account-create-update-dpdkf"
Jan 29 07:04:04 crc kubenswrapper[4826]: I0129 07:04:04.878596 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Jan 29 07:04:04 crc kubenswrapper[4826]: I0129 07:04:04.880305 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-984c-account-create-update-dpdkf"]
Jan 29 07:04:04 crc kubenswrapper[4826]: I0129 07:04:04.904566 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf9l9\" (UniqueName: \"kubernetes.io/projected/81a9564d-73ee-4a76-ab97-61caad992764-kube-api-access-pf9l9\") pod \"nova-cell0-db-create-45cdw\" (UID: \"81a9564d-73ee-4a76-ab97-61caad992764\") " pod="openstack/nova-cell0-db-create-45cdw"
Jan 29 07:04:04 crc kubenswrapper[4826]: I0129 07:04:04.904671 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvxdd\" (UniqueName: \"kubernetes.io/projected/4d989bde-5508-4800-9910-76c04d308f3e-kube-api-access-hvxdd\") pod \"nova-api-db-create-c48cv\" (UID: \"4d989bde-5508-4800-9910-76c04d308f3e\") " pod="openstack/nova-api-db-create-c48cv"
Jan 29 07:04:04 crc kubenswrapper[4826]: I0129 07:04:04.904696 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d989bde-5508-4800-9910-76c04d308f3e-operator-scripts\") pod \"nova-api-db-create-c48cv\" (UID: \"4d989bde-5508-4800-9910-76c04d308f3e\") " pod="openstack/nova-api-db-create-c48cv"
Jan 29 07:04:04 crc kubenswrapper[4826]: I0129 07:04:04.904757 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8695\" (UniqueName: \"kubernetes.io/projected/0c292348-77b1-4f3e-9a58-aaecfbaa43e9-kube-api-access-n8695\") pod \"nova-api-84d9-account-create-update-rrjq4\" (UID: \"0c292348-77b1-4f3e-9a58-aaecfbaa43e9\") " pod="openstack/nova-api-84d9-account-create-update-rrjq4"
Jan 29 07:04:04 crc kubenswrapper[4826]: I0129 07:04:04.904803 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81a9564d-73ee-4a76-ab97-61caad992764-operator-scripts\") pod \"nova-cell0-db-create-45cdw\" (UID: \"81a9564d-73ee-4a76-ab97-61caad992764\") " pod="openstack/nova-cell0-db-create-45cdw"
Jan 29 07:04:04 crc kubenswrapper[4826]: I0129 07:04:04.904851 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c292348-77b1-4f3e-9a58-aaecfbaa43e9-operator-scripts\") pod \"nova-api-84d9-account-create-update-rrjq4\" (UID: \"0c292348-77b1-4f3e-9a58-aaecfbaa43e9\") " pod="openstack/nova-api-84d9-account-create-update-rrjq4"
Jan 29 07:04:04 crc kubenswrapper[4826]: I0129 07:04:04.905693 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c292348-77b1-4f3e-9a58-aaecfbaa43e9-operator-scripts\") pod \"nova-api-84d9-account-create-update-rrjq4\" (UID: \"0c292348-77b1-4f3e-9a58-aaecfbaa43e9\") " pod="openstack/nova-api-84d9-account-create-update-rrjq4"
Jan 29 07:04:04 crc kubenswrapper[4826]: I0129 07:04:04.905724 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d989bde-5508-4800-9910-76c04d308f3e-operator-scripts\") pod \"nova-api-db-create-c48cv\" (UID: \"4d989bde-5508-4800-9910-76c04d308f3e\") " pod="openstack/nova-api-db-create-c48cv"
Jan 29 07:04:04 crc kubenswrapper[4826]: I0129 07:04:04.927390 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvxdd\" (UniqueName: \"kubernetes.io/projected/4d989bde-5508-4800-9910-76c04d308f3e-kube-api-access-hvxdd\") pod \"nova-api-db-create-c48cv\" (UID: \"4d989bde-5508-4800-9910-76c04d308f3e\") " pod="openstack/nova-api-db-create-c48cv"
Jan 29 07:04:04 crc kubenswrapper[4826]: I0129 07:04:04.944828 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8695\" (UniqueName: \"kubernetes.io/projected/0c292348-77b1-4f3e-9a58-aaecfbaa43e9-kube-api-access-n8695\") pod \"nova-api-84d9-account-create-update-rrjq4\" (UID: \"0c292348-77b1-4f3e-9a58-aaecfbaa43e9\") " pod="openstack/nova-api-84d9-account-create-update-rrjq4"
Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.006693 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vxj7\" (UniqueName: \"kubernetes.io/projected/48445e15-3a7e-4476-a253-0adab28f920e-kube-api-access-5vxj7\") pod \"nova-cell0-984c-account-create-update-dpdkf\" (UID: \"48445e15-3a7e-4476-a253-0adab28f920e\") " pod="openstack/nova-cell0-984c-account-create-update-dpdkf"
Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.006751 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81a9564d-73ee-4a76-ab97-61caad992764-operator-scripts\") pod \"nova-cell0-db-create-45cdw\" (UID: \"81a9564d-73ee-4a76-ab97-61caad992764\") " pod="openstack/nova-cell0-db-create-45cdw"
Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.006781 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/635336bb-af74-473f-88b2-77547b4a9ca1-operator-scripts\") pod \"nova-cell1-db-create-dp8h9\" (UID: \"635336bb-af74-473f-88b2-77547b4a9ca1\") " pod="openstack/nova-cell1-db-create-dp8h9"
Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.007053 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf9l9\" (UniqueName: \"kubernetes.io/projected/81a9564d-73ee-4a76-ab97-61caad992764-kube-api-access-pf9l9\") pod \"nova-cell0-db-create-45cdw\" (UID: \"81a9564d-73ee-4a76-ab97-61caad992764\") " pod="openstack/nova-cell0-db-create-45cdw"
Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.007107 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh7xv\" (UniqueName: \"kubernetes.io/projected/635336bb-af74-473f-88b2-77547b4a9ca1-kube-api-access-xh7xv\") pod \"nova-cell1-db-create-dp8h9\" (UID: \"635336bb-af74-473f-88b2-77547b4a9ca1\") " pod="openstack/nova-cell1-db-create-dp8h9"
Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.007153 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48445e15-3a7e-4476-a253-0adab28f920e-operator-scripts\") pod \"nova-cell0-984c-account-create-update-dpdkf\" (UID: \"48445e15-3a7e-4476-a253-0adab28f920e\") " pod="openstack/nova-cell0-984c-account-create-update-dpdkf"
Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.007488 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81a9564d-73ee-4a76-ab97-61caad992764-operator-scripts\") pod \"nova-cell0-db-create-45cdw\" (UID: \"81a9564d-73ee-4a76-ab97-61caad992764\") " pod="openstack/nova-cell0-db-create-45cdw"
Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.029637 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf9l9\" (UniqueName: \"kubernetes.io/projected/81a9564d-73ee-4a76-ab97-61caad992764-kube-api-access-pf9l9\") pod \"nova-cell0-db-create-45cdw\" (UID: \"81a9564d-73ee-4a76-ab97-61caad992764\") " pod="openstack/nova-cell0-db-create-45cdw"
Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.059645 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-431a-account-create-update-q9xq6"]
Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.060954 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-431a-account-create-update-q9xq6"
Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.063110 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.070471 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-431a-account-create-update-q9xq6"]
Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.070936 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-c48cv"
Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.083726 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-84d9-account-create-update-rrjq4"
Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.103027 4826 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell0-db-create-45cdw" Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.110855 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh7xv\" (UniqueName: \"kubernetes.io/projected/635336bb-af74-473f-88b2-77547b4a9ca1-kube-api-access-xh7xv\") pod \"nova-cell1-db-create-dp8h9\" (UID: \"635336bb-af74-473f-88b2-77547b4a9ca1\") " pod="openstack/nova-cell1-db-create-dp8h9" Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.110905 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48445e15-3a7e-4476-a253-0adab28f920e-operator-scripts\") pod \"nova-cell0-984c-account-create-update-dpdkf\" (UID: \"48445e15-3a7e-4476-a253-0adab28f920e\") " pod="openstack/nova-cell0-984c-account-create-update-dpdkf" Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.110996 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vxj7\" (UniqueName: \"kubernetes.io/projected/48445e15-3a7e-4476-a253-0adab28f920e-kube-api-access-5vxj7\") pod \"nova-cell0-984c-account-create-update-dpdkf\" (UID: \"48445e15-3a7e-4476-a253-0adab28f920e\") " pod="openstack/nova-cell0-984c-account-create-update-dpdkf" Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.111059 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/635336bb-af74-473f-88b2-77547b4a9ca1-operator-scripts\") pod \"nova-cell1-db-create-dp8h9\" (UID: \"635336bb-af74-473f-88b2-77547b4a9ca1\") " pod="openstack/nova-cell1-db-create-dp8h9" Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.111913 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48445e15-3a7e-4476-a253-0adab28f920e-operator-scripts\") pod 
\"nova-cell0-984c-account-create-update-dpdkf\" (UID: \"48445e15-3a7e-4476-a253-0adab28f920e\") " pod="openstack/nova-cell0-984c-account-create-update-dpdkf" Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.116614 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/635336bb-af74-473f-88b2-77547b4a9ca1-operator-scripts\") pod \"nova-cell1-db-create-dp8h9\" (UID: \"635336bb-af74-473f-88b2-77547b4a9ca1\") " pod="openstack/nova-cell1-db-create-dp8h9" Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.146028 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vxj7\" (UniqueName: \"kubernetes.io/projected/48445e15-3a7e-4476-a253-0adab28f920e-kube-api-access-5vxj7\") pod \"nova-cell0-984c-account-create-update-dpdkf\" (UID: \"48445e15-3a7e-4476-a253-0adab28f920e\") " pod="openstack/nova-cell0-984c-account-create-update-dpdkf" Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.149686 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh7xv\" (UniqueName: \"kubernetes.io/projected/635336bb-af74-473f-88b2-77547b4a9ca1-kube-api-access-xh7xv\") pod \"nova-cell1-db-create-dp8h9\" (UID: \"635336bb-af74-473f-88b2-77547b4a9ca1\") " pod="openstack/nova-cell1-db-create-dp8h9" Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.166936 4826 generic.go:334] "Generic (PLEG): container finished" podID="949145dc-8a4e-4715-bf58-22395853ed14" containerID="ea2155d850be5d9399ee7421588707f9ed0c5dc2dff45d86a761598a2074604c" exitCode=0 Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.167003 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"949145dc-8a4e-4715-bf58-22395853ed14","Type":"ContainerDied","Data":"ea2155d850be5d9399ee7421588707f9ed0c5dc2dff45d86a761598a2074604c"} Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.167212 4826 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.167273 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.172368 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-dp8h9" Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.195737 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-984c-account-create-update-dpdkf" Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.214421 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7h6j\" (UniqueName: \"kubernetes.io/projected/882b3c41-2672-47df-a717-d2ccd9f7e2dc-kube-api-access-s7h6j\") pod \"nova-cell1-431a-account-create-update-q9xq6\" (UID: \"882b3c41-2672-47df-a717-d2ccd9f7e2dc\") " pod="openstack/nova-cell1-431a-account-create-update-q9xq6" Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.214512 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/882b3c41-2672-47df-a717-d2ccd9f7e2dc-operator-scripts\") pod \"nova-cell1-431a-account-create-update-q9xq6\" (UID: \"882b3c41-2672-47df-a717-d2ccd9f7e2dc\") " pod="openstack/nova-cell1-431a-account-create-update-q9xq6" Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.231351 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.315555 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/949145dc-8a4e-4715-bf58-22395853ed14-log-httpd\") pod \"949145dc-8a4e-4715-bf58-22395853ed14\" (UID: \"949145dc-8a4e-4715-bf58-22395853ed14\") " Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.315633 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4426z\" (UniqueName: \"kubernetes.io/projected/949145dc-8a4e-4715-bf58-22395853ed14-kube-api-access-4426z\") pod \"949145dc-8a4e-4715-bf58-22395853ed14\" (UID: \"949145dc-8a4e-4715-bf58-22395853ed14\") " Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.315676 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/949145dc-8a4e-4715-bf58-22395853ed14-config-data\") pod \"949145dc-8a4e-4715-bf58-22395853ed14\" (UID: \"949145dc-8a4e-4715-bf58-22395853ed14\") " Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.315776 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/949145dc-8a4e-4715-bf58-22395853ed14-combined-ca-bundle\") pod \"949145dc-8a4e-4715-bf58-22395853ed14\" (UID: \"949145dc-8a4e-4715-bf58-22395853ed14\") " Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.315793 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/949145dc-8a4e-4715-bf58-22395853ed14-scripts\") pod \"949145dc-8a4e-4715-bf58-22395853ed14\" (UID: \"949145dc-8a4e-4715-bf58-22395853ed14\") " Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.315856 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/949145dc-8a4e-4715-bf58-22395853ed14-sg-core-conf-yaml\") pod \"949145dc-8a4e-4715-bf58-22395853ed14\" (UID: \"949145dc-8a4e-4715-bf58-22395853ed14\") " Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.315946 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/949145dc-8a4e-4715-bf58-22395853ed14-run-httpd\") pod \"949145dc-8a4e-4715-bf58-22395853ed14\" (UID: \"949145dc-8a4e-4715-bf58-22395853ed14\") " Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.316227 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/882b3c41-2672-47df-a717-d2ccd9f7e2dc-operator-scripts\") pod \"nova-cell1-431a-account-create-update-q9xq6\" (UID: \"882b3c41-2672-47df-a717-d2ccd9f7e2dc\") " pod="openstack/nova-cell1-431a-account-create-update-q9xq6" Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.316405 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7h6j\" (UniqueName: \"kubernetes.io/projected/882b3c41-2672-47df-a717-d2ccd9f7e2dc-kube-api-access-s7h6j\") pod \"nova-cell1-431a-account-create-update-q9xq6\" (UID: \"882b3c41-2672-47df-a717-d2ccd9f7e2dc\") " pod="openstack/nova-cell1-431a-account-create-update-q9xq6" Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.316956 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/949145dc-8a4e-4715-bf58-22395853ed14-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "949145dc-8a4e-4715-bf58-22395853ed14" (UID: "949145dc-8a4e-4715-bf58-22395853ed14"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.317261 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/949145dc-8a4e-4715-bf58-22395853ed14-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "949145dc-8a4e-4715-bf58-22395853ed14" (UID: "949145dc-8a4e-4715-bf58-22395853ed14"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.317459 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/882b3c41-2672-47df-a717-d2ccd9f7e2dc-operator-scripts\") pod \"nova-cell1-431a-account-create-update-q9xq6\" (UID: \"882b3c41-2672-47df-a717-d2ccd9f7e2dc\") " pod="openstack/nova-cell1-431a-account-create-update-q9xq6" Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.325433 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/949145dc-8a4e-4715-bf58-22395853ed14-scripts" (OuterVolumeSpecName: "scripts") pod "949145dc-8a4e-4715-bf58-22395853ed14" (UID: "949145dc-8a4e-4715-bf58-22395853ed14"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.325809 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/949145dc-8a4e-4715-bf58-22395853ed14-kube-api-access-4426z" (OuterVolumeSpecName: "kube-api-access-4426z") pod "949145dc-8a4e-4715-bf58-22395853ed14" (UID: "949145dc-8a4e-4715-bf58-22395853ed14"). InnerVolumeSpecName "kube-api-access-4426z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.337805 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7h6j\" (UniqueName: \"kubernetes.io/projected/882b3c41-2672-47df-a717-d2ccd9f7e2dc-kube-api-access-s7h6j\") pod \"nova-cell1-431a-account-create-update-q9xq6\" (UID: \"882b3c41-2672-47df-a717-d2ccd9f7e2dc\") " pod="openstack/nova-cell1-431a-account-create-update-q9xq6" Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.352353 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/949145dc-8a4e-4715-bf58-22395853ed14-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "949145dc-8a4e-4715-bf58-22395853ed14" (UID: "949145dc-8a4e-4715-bf58-22395853ed14"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.383017 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-431a-account-create-update-q9xq6" Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.394395 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.394437 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.416361 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/949145dc-8a4e-4715-bf58-22395853ed14-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "949145dc-8a4e-4715-bf58-22395853ed14" (UID: "949145dc-8a4e-4715-bf58-22395853ed14"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.423468 4826 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/949145dc-8a4e-4715-bf58-22395853ed14-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.423495 4826 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/949145dc-8a4e-4715-bf58-22395853ed14-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.423507 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4426z\" (UniqueName: \"kubernetes.io/projected/949145dc-8a4e-4715-bf58-22395853ed14-kube-api-access-4426z\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.423517 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/949145dc-8a4e-4715-bf58-22395853ed14-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.423525 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/949145dc-8a4e-4715-bf58-22395853ed14-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.423533 4826 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/949145dc-8a4e-4715-bf58-22395853ed14-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.451097 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.455686 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-default-internal-api-0" Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.458700 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/949145dc-8a4e-4715-bf58-22395853ed14-config-data" (OuterVolumeSpecName: "config-data") pod "949145dc-8a4e-4715-bf58-22395853ed14" (UID: "949145dc-8a4e-4715-bf58-22395853ed14"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.524796 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/949145dc-8a4e-4715-bf58-22395853ed14-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.584939 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-c48cv"] Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.722333 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-45cdw"] Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.747483 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-84d9-account-create-update-rrjq4"] Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.842630 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-984c-account-create-update-dpdkf"] Jan 29 07:04:05 crc kubenswrapper[4826]: W0129 07:04:05.850329 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48445e15_3a7e_4476_a253_0adab28f920e.slice/crio-f39de3fa17a4a4203977e43dbff5daba3cc6bd160e21a7dcab3696f8cbe75c18 WatchSource:0}: Error finding container f39de3fa17a4a4203977e43dbff5daba3cc6bd160e21a7dcab3696f8cbe75c18: Status 404 returned error can't find the container with id f39de3fa17a4a4203977e43dbff5daba3cc6bd160e21a7dcab3696f8cbe75c18 Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 
07:04:05.904586 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-431a-account-create-update-q9xq6"] Jan 29 07:04:05 crc kubenswrapper[4826]: W0129 07:04:05.911860 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod882b3c41_2672_47df_a717_d2ccd9f7e2dc.slice/crio-e0861c4686aad4d1274841c7b9e11f8e8df3464c9cd1000aa01f4932163ea871 WatchSource:0}: Error finding container e0861c4686aad4d1274841c7b9e11f8e8df3464c9cd1000aa01f4932163ea871: Status 404 returned error can't find the container with id e0861c4686aad4d1274841c7b9e11f8e8df3464c9cd1000aa01f4932163ea871 Jan 29 07:04:05 crc kubenswrapper[4826]: I0129 07:04:05.916945 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-dp8h9"] Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.176993 4826 generic.go:334] "Generic (PLEG): container finished" podID="4d989bde-5508-4800-9910-76c04d308f3e" containerID="cc5c9fe906766fdcc27e8bc446b6ae193bd58eed0a3689382a15788e0c5569f2" exitCode=0 Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.177084 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-c48cv" event={"ID":"4d989bde-5508-4800-9910-76c04d308f3e","Type":"ContainerDied","Data":"cc5c9fe906766fdcc27e8bc446b6ae193bd58eed0a3689382a15788e0c5569f2"} Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.177138 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-c48cv" event={"ID":"4d989bde-5508-4800-9910-76c04d308f3e","Type":"ContainerStarted","Data":"226b0a6f91ca50a8898eb14978c9090edaaa8c5f59d179466d4100c3262fdce3"} Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.178266 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-431a-account-create-update-q9xq6" 
event={"ID":"882b3c41-2672-47df-a717-d2ccd9f7e2dc","Type":"ContainerStarted","Data":"e0861c4686aad4d1274841c7b9e11f8e8df3464c9cd1000aa01f4932163ea871"} Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.182062 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"949145dc-8a4e-4715-bf58-22395853ed14","Type":"ContainerDied","Data":"48e3076674c7b53b49d6f5bebb5c68073dc206b9402681292b43a0d3c5c015f7"} Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.182151 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.182170 4826 scope.go:117] "RemoveContainer" containerID="df1308edb9ac92d762c5745e435d14d3a812a4a537d758c981482ddfd629b39d" Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.184005 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-45cdw" event={"ID":"81a9564d-73ee-4a76-ab97-61caad992764","Type":"ContainerStarted","Data":"018eed4364454815bb4af9ce7df76c240d53b948d5b108dd42db992f25fc0a95"} Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.185241 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-984c-account-create-update-dpdkf" event={"ID":"48445e15-3a7e-4476-a253-0adab28f920e","Type":"ContainerStarted","Data":"f39de3fa17a4a4203977e43dbff5daba3cc6bd160e21a7dcab3696f8cbe75c18"} Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.186256 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-dp8h9" event={"ID":"635336bb-af74-473f-88b2-77547b4a9ca1","Type":"ContainerStarted","Data":"508bd0aa51a7b28b815853ba34ace2759a6f5b0b53b68cc82d5e146b77b5e65d"} Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.187540 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-84d9-account-create-update-rrjq4" 
event={"ID":"0c292348-77b1-4f3e-9a58-aaecfbaa43e9","Type":"ContainerStarted","Data":"195849c48e336f4896c4849e4848a20d0054c5e15213821bd6d9324f9eaeacd5"} Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.188381 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.188425 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.205866 4826 scope.go:117] "RemoveContainer" containerID="eada752438475dee942cc8aca12fbf0fee177f0e457bd66d055cec2310616f09" Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.231100 4826 scope.go:117] "RemoveContainer" containerID="fe568e1bafcbdf0935eb5bb9a9fda5df2d252e12c2d9ce98b008ed2271d62778" Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.244459 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.261364 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.265273 4826 scope.go:117] "RemoveContainer" containerID="ea2155d850be5d9399ee7421588707f9ed0c5dc2dff45d86a761598a2074604c" Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.277060 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:04:06 crc kubenswrapper[4826]: E0129 07:04:06.277480 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="949145dc-8a4e-4715-bf58-22395853ed14" containerName="ceilometer-central-agent" Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.277498 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="949145dc-8a4e-4715-bf58-22395853ed14" containerName="ceilometer-central-agent" Jan 29 07:04:06 crc kubenswrapper[4826]: E0129 07:04:06.277514 4826 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="949145dc-8a4e-4715-bf58-22395853ed14" containerName="proxy-httpd" Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.277520 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="949145dc-8a4e-4715-bf58-22395853ed14" containerName="proxy-httpd" Jan 29 07:04:06 crc kubenswrapper[4826]: E0129 07:04:06.277530 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="949145dc-8a4e-4715-bf58-22395853ed14" containerName="ceilometer-notification-agent" Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.277537 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="949145dc-8a4e-4715-bf58-22395853ed14" containerName="ceilometer-notification-agent" Jan 29 07:04:06 crc kubenswrapper[4826]: E0129 07:04:06.277547 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="949145dc-8a4e-4715-bf58-22395853ed14" containerName="sg-core" Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.277562 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="949145dc-8a4e-4715-bf58-22395853ed14" containerName="sg-core" Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.277727 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="949145dc-8a4e-4715-bf58-22395853ed14" containerName="sg-core" Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.277746 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="949145dc-8a4e-4715-bf58-22395853ed14" containerName="ceilometer-notification-agent" Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.277756 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="949145dc-8a4e-4715-bf58-22395853ed14" containerName="ceilometer-central-agent" Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.277769 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="949145dc-8a4e-4715-bf58-22395853ed14" containerName="proxy-httpd" Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.279333 4826 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.281437 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.281699 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.295386 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.338291 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b944598-3739-4dfd-bbbb-a8155b5be333-run-httpd\") pod \"ceilometer-0\" (UID: \"2b944598-3739-4dfd-bbbb-a8155b5be333\") " pod="openstack/ceilometer-0" Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.338363 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b944598-3739-4dfd-bbbb-a8155b5be333-log-httpd\") pod \"ceilometer-0\" (UID: \"2b944598-3739-4dfd-bbbb-a8155b5be333\") " pod="openstack/ceilometer-0" Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.338430 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b944598-3739-4dfd-bbbb-a8155b5be333-scripts\") pod \"ceilometer-0\" (UID: \"2b944598-3739-4dfd-bbbb-a8155b5be333\") " pod="openstack/ceilometer-0" Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.338470 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b944598-3739-4dfd-bbbb-a8155b5be333-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"2b944598-3739-4dfd-bbbb-a8155b5be333\") " pod="openstack/ceilometer-0" Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.338489 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9jkt\" (UniqueName: \"kubernetes.io/projected/2b944598-3739-4dfd-bbbb-a8155b5be333-kube-api-access-w9jkt\") pod \"ceilometer-0\" (UID: \"2b944598-3739-4dfd-bbbb-a8155b5be333\") " pod="openstack/ceilometer-0" Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.338508 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b944598-3739-4dfd-bbbb-a8155b5be333-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2b944598-3739-4dfd-bbbb-a8155b5be333\") " pod="openstack/ceilometer-0" Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.338554 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b944598-3739-4dfd-bbbb-a8155b5be333-config-data\") pod \"ceilometer-0\" (UID: \"2b944598-3739-4dfd-bbbb-a8155b5be333\") " pod="openstack/ceilometer-0" Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.440411 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b944598-3739-4dfd-bbbb-a8155b5be333-scripts\") pod \"ceilometer-0\" (UID: \"2b944598-3739-4dfd-bbbb-a8155b5be333\") " pod="openstack/ceilometer-0" Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.440478 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b944598-3739-4dfd-bbbb-a8155b5be333-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2b944598-3739-4dfd-bbbb-a8155b5be333\") " pod="openstack/ceilometer-0" Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.440501 4826 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9jkt\" (UniqueName: \"kubernetes.io/projected/2b944598-3739-4dfd-bbbb-a8155b5be333-kube-api-access-w9jkt\") pod \"ceilometer-0\" (UID: \"2b944598-3739-4dfd-bbbb-a8155b5be333\") " pod="openstack/ceilometer-0" Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.440523 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b944598-3739-4dfd-bbbb-a8155b5be333-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2b944598-3739-4dfd-bbbb-a8155b5be333\") " pod="openstack/ceilometer-0" Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.440569 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b944598-3739-4dfd-bbbb-a8155b5be333-config-data\") pod \"ceilometer-0\" (UID: \"2b944598-3739-4dfd-bbbb-a8155b5be333\") " pod="openstack/ceilometer-0" Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.440598 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b944598-3739-4dfd-bbbb-a8155b5be333-run-httpd\") pod \"ceilometer-0\" (UID: \"2b944598-3739-4dfd-bbbb-a8155b5be333\") " pod="openstack/ceilometer-0" Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.440623 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b944598-3739-4dfd-bbbb-a8155b5be333-log-httpd\") pod \"ceilometer-0\" (UID: \"2b944598-3739-4dfd-bbbb-a8155b5be333\") " pod="openstack/ceilometer-0" Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.441030 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b944598-3739-4dfd-bbbb-a8155b5be333-log-httpd\") pod \"ceilometer-0\" (UID: 
\"2b944598-3739-4dfd-bbbb-a8155b5be333\") " pod="openstack/ceilometer-0" Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.443720 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b944598-3739-4dfd-bbbb-a8155b5be333-run-httpd\") pod \"ceilometer-0\" (UID: \"2b944598-3739-4dfd-bbbb-a8155b5be333\") " pod="openstack/ceilometer-0" Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.446453 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b944598-3739-4dfd-bbbb-a8155b5be333-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2b944598-3739-4dfd-bbbb-a8155b5be333\") " pod="openstack/ceilometer-0" Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.447126 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b944598-3739-4dfd-bbbb-a8155b5be333-config-data\") pod \"ceilometer-0\" (UID: \"2b944598-3739-4dfd-bbbb-a8155b5be333\") " pod="openstack/ceilometer-0" Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.447753 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b944598-3739-4dfd-bbbb-a8155b5be333-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2b944598-3739-4dfd-bbbb-a8155b5be333\") " pod="openstack/ceilometer-0" Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.449427 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b944598-3739-4dfd-bbbb-a8155b5be333-scripts\") pod \"ceilometer-0\" (UID: \"2b944598-3739-4dfd-bbbb-a8155b5be333\") " pod="openstack/ceilometer-0" Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.464115 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9jkt\" (UniqueName: 
\"kubernetes.io/projected/2b944598-3739-4dfd-bbbb-a8155b5be333-kube-api-access-w9jkt\") pod \"ceilometer-0\" (UID: \"2b944598-3739-4dfd-bbbb-a8155b5be333\") " pod="openstack/ceilometer-0" Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.595663 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 07:04:06 crc kubenswrapper[4826]: I0129 07:04:06.825984 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="949145dc-8a4e-4715-bf58-22395853ed14" path="/var/lib/kubelet/pods/949145dc-8a4e-4715-bf58-22395853ed14/volumes" Jan 29 07:04:07 crc kubenswrapper[4826]: I0129 07:04:07.049617 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:04:07 crc kubenswrapper[4826]: I0129 07:04:07.199915 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-431a-account-create-update-q9xq6" event={"ID":"882b3c41-2672-47df-a717-d2ccd9f7e2dc","Type":"ContainerStarted","Data":"bd6aa32cd0f15e492ee0631ae9d7045ced48634f793b26fba31fc94692e39ffc"} Jan 29 07:04:07 crc kubenswrapper[4826]: I0129 07:04:07.204669 4826 generic.go:334] "Generic (PLEG): container finished" podID="81a9564d-73ee-4a76-ab97-61caad992764" containerID="f6aa23acc47e692957ab07731c46d27ddbb133feea525282ac5021374f9f89a9" exitCode=0 Jan 29 07:04:07 crc kubenswrapper[4826]: I0129 07:04:07.204830 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-45cdw" event={"ID":"81a9564d-73ee-4a76-ab97-61caad992764","Type":"ContainerDied","Data":"f6aa23acc47e692957ab07731c46d27ddbb133feea525282ac5021374f9f89a9"} Jan 29 07:04:07 crc kubenswrapper[4826]: I0129 07:04:07.207900 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-984c-account-create-update-dpdkf" event={"ID":"48445e15-3a7e-4476-a253-0adab28f920e","Type":"ContainerStarted","Data":"4677b9febca43835d07a5261efafaae8aef936a63e7457f76d3eb9bfc3d8d34e"} Jan 29 07:04:07 crc 
kubenswrapper[4826]: I0129 07:04:07.210426 4826 generic.go:334] "Generic (PLEG): container finished" podID="635336bb-af74-473f-88b2-77547b4a9ca1" containerID="df2585b24365121c60a8b8502943ca22fe5a1a6fc73f6f2496cfd131bb0cca5c" exitCode=0 Jan 29 07:04:07 crc kubenswrapper[4826]: I0129 07:04:07.210491 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-dp8h9" event={"ID":"635336bb-af74-473f-88b2-77547b4a9ca1","Type":"ContainerDied","Data":"df2585b24365121c60a8b8502943ca22fe5a1a6fc73f6f2496cfd131bb0cca5c"} Jan 29 07:04:07 crc kubenswrapper[4826]: I0129 07:04:07.217390 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b944598-3739-4dfd-bbbb-a8155b5be333","Type":"ContainerStarted","Data":"1f9ac00942ca1d2afa750a6518db2c0f79d03c6a59fee217803059d60edb89e0"} Jan 29 07:04:07 crc kubenswrapper[4826]: I0129 07:04:07.226244 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-84d9-account-create-update-rrjq4" event={"ID":"0c292348-77b1-4f3e-9a58-aaecfbaa43e9","Type":"ContainerStarted","Data":"fc743e44b2c9d827e60ea724bbe7d974a44f3ee74e7756065624a98438ca7451"} Jan 29 07:04:07 crc kubenswrapper[4826]: I0129 07:04:07.267425 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-431a-account-create-update-q9xq6" podStartSLOduration=2.267395837 podStartE2EDuration="2.267395837s" podCreationTimestamp="2026-01-29 07:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:04:07.231594834 +0000 UTC m=+1231.093387903" watchObservedRunningTime="2026-01-29 07:04:07.267395837 +0000 UTC m=+1231.129188906" Jan 29 07:04:07 crc kubenswrapper[4826]: I0129 07:04:07.268787 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-84d9-account-create-update-rrjq4" podStartSLOduration=3.268778783 
podStartE2EDuration="3.268778783s" podCreationTimestamp="2026-01-29 07:04:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:04:07.246271751 +0000 UTC m=+1231.108064810" watchObservedRunningTime="2026-01-29 07:04:07.268778783 +0000 UTC m=+1231.130571852" Jan 29 07:04:07 crc kubenswrapper[4826]: I0129 07:04:07.322664 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-984c-account-create-update-dpdkf" podStartSLOduration=3.322637061 podStartE2EDuration="3.322637061s" podCreationTimestamp="2026-01-29 07:04:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:04:07.305383797 +0000 UTC m=+1231.167176866" watchObservedRunningTime="2026-01-29 07:04:07.322637061 +0000 UTC m=+1231.184430130" Jan 29 07:04:07 crc kubenswrapper[4826]: I0129 07:04:07.396201 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 07:04:07 crc kubenswrapper[4826]: I0129 07:04:07.397027 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 07:04:07 crc kubenswrapper[4826]: I0129 07:04:07.638540 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-c48cv" Jan 29 07:04:07 crc kubenswrapper[4826]: I0129 07:04:07.675080 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d989bde-5508-4800-9910-76c04d308f3e-operator-scripts\") pod \"4d989bde-5508-4800-9910-76c04d308f3e\" (UID: \"4d989bde-5508-4800-9910-76c04d308f3e\") " Jan 29 07:04:07 crc kubenswrapper[4826]: I0129 07:04:07.675183 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvxdd\" (UniqueName: \"kubernetes.io/projected/4d989bde-5508-4800-9910-76c04d308f3e-kube-api-access-hvxdd\") pod \"4d989bde-5508-4800-9910-76c04d308f3e\" (UID: \"4d989bde-5508-4800-9910-76c04d308f3e\") " Jan 29 07:04:07 crc kubenswrapper[4826]: I0129 07:04:07.676003 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d989bde-5508-4800-9910-76c04d308f3e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4d989bde-5508-4800-9910-76c04d308f3e" (UID: "4d989bde-5508-4800-9910-76c04d308f3e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:04:07 crc kubenswrapper[4826]: I0129 07:04:07.701458 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d989bde-5508-4800-9910-76c04d308f3e-kube-api-access-hvxdd" (OuterVolumeSpecName: "kube-api-access-hvxdd") pod "4d989bde-5508-4800-9910-76c04d308f3e" (UID: "4d989bde-5508-4800-9910-76c04d308f3e"). InnerVolumeSpecName "kube-api-access-hvxdd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:04:07 crc kubenswrapper[4826]: I0129 07:04:07.776885 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d989bde-5508-4800-9910-76c04d308f3e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:07 crc kubenswrapper[4826]: I0129 07:04:07.778076 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvxdd\" (UniqueName: \"kubernetes.io/projected/4d989bde-5508-4800-9910-76c04d308f3e-kube-api-access-hvxdd\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:08 crc kubenswrapper[4826]: I0129 07:04:08.234930 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-c48cv" event={"ID":"4d989bde-5508-4800-9910-76c04d308f3e","Type":"ContainerDied","Data":"226b0a6f91ca50a8898eb14978c9090edaaa8c5f59d179466d4100c3262fdce3"} Jan 29 07:04:08 crc kubenswrapper[4826]: I0129 07:04:08.235335 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="226b0a6f91ca50a8898eb14978c9090edaaa8c5f59d179466d4100c3262fdce3" Jan 29 07:04:08 crc kubenswrapper[4826]: I0129 07:04:08.235001 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-c48cv" Jan 29 07:04:08 crc kubenswrapper[4826]: I0129 07:04:08.237526 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 07:04:08 crc kubenswrapper[4826]: I0129 07:04:08.237987 4826 generic.go:334] "Generic (PLEG): container finished" podID="882b3c41-2672-47df-a717-d2ccd9f7e2dc" containerID="bd6aa32cd0f15e492ee0631ae9d7045ced48634f793b26fba31fc94692e39ffc" exitCode=0 Jan 29 07:04:08 crc kubenswrapper[4826]: I0129 07:04:08.238162 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-431a-account-create-update-q9xq6" event={"ID":"882b3c41-2672-47df-a717-d2ccd9f7e2dc","Type":"ContainerDied","Data":"bd6aa32cd0f15e492ee0631ae9d7045ced48634f793b26fba31fc94692e39ffc"} Jan 29 07:04:08 crc kubenswrapper[4826]: I0129 07:04:08.240332 4826 generic.go:334] "Generic (PLEG): container finished" podID="48445e15-3a7e-4476-a253-0adab28f920e" containerID="4677b9febca43835d07a5261efafaae8aef936a63e7457f76d3eb9bfc3d8d34e" exitCode=0 Jan 29 07:04:08 crc kubenswrapper[4826]: I0129 07:04:08.240391 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-984c-account-create-update-dpdkf" event={"ID":"48445e15-3a7e-4476-a253-0adab28f920e","Type":"ContainerDied","Data":"4677b9febca43835d07a5261efafaae8aef936a63e7457f76d3eb9bfc3d8d34e"} Jan 29 07:04:08 crc kubenswrapper[4826]: I0129 07:04:08.242291 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b944598-3739-4dfd-bbbb-a8155b5be333","Type":"ContainerStarted","Data":"63052738398e84954be9279b95df1f9ed8911464f4d8a722134201ecbd948294"} Jan 29 07:04:08 crc kubenswrapper[4826]: I0129 07:04:08.247126 4826 generic.go:334] "Generic (PLEG): container finished" podID="0c292348-77b1-4f3e-9a58-aaecfbaa43e9" containerID="fc743e44b2c9d827e60ea724bbe7d974a44f3ee74e7756065624a98438ca7451" exitCode=0 Jan 29 07:04:08 crc 
kubenswrapper[4826]: I0129 07:04:08.247163 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-84d9-account-create-update-rrjq4" event={"ID":"0c292348-77b1-4f3e-9a58-aaecfbaa43e9","Type":"ContainerDied","Data":"fc743e44b2c9d827e60ea724bbe7d974a44f3ee74e7756065624a98438ca7451"} Jan 29 07:04:08 crc kubenswrapper[4826]: I0129 07:04:08.247426 4826 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 07:04:08 crc kubenswrapper[4826]: I0129 07:04:08.459056 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 07:04:08 crc kubenswrapper[4826]: I0129 07:04:08.740400 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-45cdw" Jan 29 07:04:08 crc kubenswrapper[4826]: I0129 07:04:08.778883 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-dp8h9" Jan 29 07:04:08 crc kubenswrapper[4826]: I0129 07:04:08.903044 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/635336bb-af74-473f-88b2-77547b4a9ca1-operator-scripts\") pod \"635336bb-af74-473f-88b2-77547b4a9ca1\" (UID: \"635336bb-af74-473f-88b2-77547b4a9ca1\") " Jan 29 07:04:08 crc kubenswrapper[4826]: I0129 07:04:08.903169 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pf9l9\" (UniqueName: \"kubernetes.io/projected/81a9564d-73ee-4a76-ab97-61caad992764-kube-api-access-pf9l9\") pod \"81a9564d-73ee-4a76-ab97-61caad992764\" (UID: \"81a9564d-73ee-4a76-ab97-61caad992764\") " Jan 29 07:04:08 crc kubenswrapper[4826]: I0129 07:04:08.903273 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81a9564d-73ee-4a76-ab97-61caad992764-operator-scripts\") pod 
\"81a9564d-73ee-4a76-ab97-61caad992764\" (UID: \"81a9564d-73ee-4a76-ab97-61caad992764\") " Jan 29 07:04:08 crc kubenswrapper[4826]: I0129 07:04:08.903351 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh7xv\" (UniqueName: \"kubernetes.io/projected/635336bb-af74-473f-88b2-77547b4a9ca1-kube-api-access-xh7xv\") pod \"635336bb-af74-473f-88b2-77547b4a9ca1\" (UID: \"635336bb-af74-473f-88b2-77547b4a9ca1\") " Jan 29 07:04:08 crc kubenswrapper[4826]: I0129 07:04:08.903901 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/635336bb-af74-473f-88b2-77547b4a9ca1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "635336bb-af74-473f-88b2-77547b4a9ca1" (UID: "635336bb-af74-473f-88b2-77547b4a9ca1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:04:08 crc kubenswrapper[4826]: I0129 07:04:08.903901 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81a9564d-73ee-4a76-ab97-61caad992764-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "81a9564d-73ee-4a76-ab97-61caad992764" (UID: "81a9564d-73ee-4a76-ab97-61caad992764"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:04:08 crc kubenswrapper[4826]: I0129 07:04:08.911566 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/635336bb-af74-473f-88b2-77547b4a9ca1-kube-api-access-xh7xv" (OuterVolumeSpecName: "kube-api-access-xh7xv") pod "635336bb-af74-473f-88b2-77547b4a9ca1" (UID: "635336bb-af74-473f-88b2-77547b4a9ca1"). InnerVolumeSpecName "kube-api-access-xh7xv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:04:08 crc kubenswrapper[4826]: I0129 07:04:08.915445 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81a9564d-73ee-4a76-ab97-61caad992764-kube-api-access-pf9l9" (OuterVolumeSpecName: "kube-api-access-pf9l9") pod "81a9564d-73ee-4a76-ab97-61caad992764" (UID: "81a9564d-73ee-4a76-ab97-61caad992764"). InnerVolumeSpecName "kube-api-access-pf9l9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:04:09 crc kubenswrapper[4826]: I0129 07:04:09.006268 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/635336bb-af74-473f-88b2-77547b4a9ca1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:09 crc kubenswrapper[4826]: I0129 07:04:09.006383 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pf9l9\" (UniqueName: \"kubernetes.io/projected/81a9564d-73ee-4a76-ab97-61caad992764-kube-api-access-pf9l9\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:09 crc kubenswrapper[4826]: I0129 07:04:09.006404 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81a9564d-73ee-4a76-ab97-61caad992764-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:09 crc kubenswrapper[4826]: I0129 07:04:09.006432 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xh7xv\" (UniqueName: \"kubernetes.io/projected/635336bb-af74-473f-88b2-77547b4a9ca1-kube-api-access-xh7xv\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:09 crc kubenswrapper[4826]: I0129 07:04:09.258404 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-45cdw" event={"ID":"81a9564d-73ee-4a76-ab97-61caad992764","Type":"ContainerDied","Data":"018eed4364454815bb4af9ce7df76c240d53b948d5b108dd42db992f25fc0a95"} Jan 29 07:04:09 crc kubenswrapper[4826]: I0129 07:04:09.258718 
4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="018eed4364454815bb4af9ce7df76c240d53b948d5b108dd42db992f25fc0a95" Jan 29 07:04:09 crc kubenswrapper[4826]: I0129 07:04:09.258438 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-45cdw" Jan 29 07:04:09 crc kubenswrapper[4826]: I0129 07:04:09.260539 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-dp8h9" Jan 29 07:04:09 crc kubenswrapper[4826]: I0129 07:04:09.263799 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-dp8h9" event={"ID":"635336bb-af74-473f-88b2-77547b4a9ca1","Type":"ContainerDied","Data":"508bd0aa51a7b28b815853ba34ace2759a6f5b0b53b68cc82d5e146b77b5e65d"} Jan 29 07:04:09 crc kubenswrapper[4826]: I0129 07:04:09.263846 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="508bd0aa51a7b28b815853ba34ace2759a6f5b0b53b68cc82d5e146b77b5e65d" Jan 29 07:04:09 crc kubenswrapper[4826]: I0129 07:04:09.755192 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-984c-account-create-update-dpdkf" Jan 29 07:04:09 crc kubenswrapper[4826]: I0129 07:04:09.937958 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48445e15-3a7e-4476-a253-0adab28f920e-operator-scripts\") pod \"48445e15-3a7e-4476-a253-0adab28f920e\" (UID: \"48445e15-3a7e-4476-a253-0adab28f920e\") " Jan 29 07:04:09 crc kubenswrapper[4826]: I0129 07:04:09.938332 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vxj7\" (UniqueName: \"kubernetes.io/projected/48445e15-3a7e-4476-a253-0adab28f920e-kube-api-access-5vxj7\") pod \"48445e15-3a7e-4476-a253-0adab28f920e\" (UID: \"48445e15-3a7e-4476-a253-0adab28f920e\") " Jan 29 07:04:09 crc kubenswrapper[4826]: I0129 07:04:09.940856 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48445e15-3a7e-4476-a253-0adab28f920e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "48445e15-3a7e-4476-a253-0adab28f920e" (UID: "48445e15-3a7e-4476-a253-0adab28f920e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:04:09 crc kubenswrapper[4826]: I0129 07:04:09.943678 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48445e15-3a7e-4476-a253-0adab28f920e-kube-api-access-5vxj7" (OuterVolumeSpecName: "kube-api-access-5vxj7") pod "48445e15-3a7e-4476-a253-0adab28f920e" (UID: "48445e15-3a7e-4476-a253-0adab28f920e"). InnerVolumeSpecName "kube-api-access-5vxj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.012197 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-431a-account-create-update-q9xq6" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.023769 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-84d9-account-create-update-rrjq4" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.042337 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48445e15-3a7e-4476-a253-0adab28f920e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.042373 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vxj7\" (UniqueName: \"kubernetes.io/projected/48445e15-3a7e-4476-a253-0adab28f920e-kube-api-access-5vxj7\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.143998 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c292348-77b1-4f3e-9a58-aaecfbaa43e9-operator-scripts\") pod \"0c292348-77b1-4f3e-9a58-aaecfbaa43e9\" (UID: \"0c292348-77b1-4f3e-9a58-aaecfbaa43e9\") " Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.144069 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/882b3c41-2672-47df-a717-d2ccd9f7e2dc-operator-scripts\") pod \"882b3c41-2672-47df-a717-d2ccd9f7e2dc\" (UID: \"882b3c41-2672-47df-a717-d2ccd9f7e2dc\") " Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.144101 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7h6j\" (UniqueName: \"kubernetes.io/projected/882b3c41-2672-47df-a717-d2ccd9f7e2dc-kube-api-access-s7h6j\") pod \"882b3c41-2672-47df-a717-d2ccd9f7e2dc\" (UID: \"882b3c41-2672-47df-a717-d2ccd9f7e2dc\") " Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.144228 4826 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8695\" (UniqueName: \"kubernetes.io/projected/0c292348-77b1-4f3e-9a58-aaecfbaa43e9-kube-api-access-n8695\") pod \"0c292348-77b1-4f3e-9a58-aaecfbaa43e9\" (UID: \"0c292348-77b1-4f3e-9a58-aaecfbaa43e9\") " Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.145547 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c292348-77b1-4f3e-9a58-aaecfbaa43e9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0c292348-77b1-4f3e-9a58-aaecfbaa43e9" (UID: "0c292348-77b1-4f3e-9a58-aaecfbaa43e9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.145801 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/882b3c41-2672-47df-a717-d2ccd9f7e2dc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "882b3c41-2672-47df-a717-d2ccd9f7e2dc" (UID: "882b3c41-2672-47df-a717-d2ccd9f7e2dc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.148063 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c292348-77b1-4f3e-9a58-aaecfbaa43e9-kube-api-access-n8695" (OuterVolumeSpecName: "kube-api-access-n8695") pod "0c292348-77b1-4f3e-9a58-aaecfbaa43e9" (UID: "0c292348-77b1-4f3e-9a58-aaecfbaa43e9"). InnerVolumeSpecName "kube-api-access-n8695". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.149953 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/882b3c41-2672-47df-a717-d2ccd9f7e2dc-kube-api-access-s7h6j" (OuterVolumeSpecName: "kube-api-access-s7h6j") pod "882b3c41-2672-47df-a717-d2ccd9f7e2dc" (UID: "882b3c41-2672-47df-a717-d2ccd9f7e2dc"). InnerVolumeSpecName "kube-api-access-s7h6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.246719 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c292348-77b1-4f3e-9a58-aaecfbaa43e9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.246973 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/882b3c41-2672-47df-a717-d2ccd9f7e2dc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.246984 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7h6j\" (UniqueName: \"kubernetes.io/projected/882b3c41-2672-47df-a717-d2ccd9f7e2dc-kube-api-access-s7h6j\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.246996 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8695\" (UniqueName: \"kubernetes.io/projected/0c292348-77b1-4f3e-9a58-aaecfbaa43e9-kube-api-access-n8695\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.272422 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-984c-account-create-update-dpdkf" event={"ID":"48445e15-3a7e-4476-a253-0adab28f920e","Type":"ContainerDied","Data":"f39de3fa17a4a4203977e43dbff5daba3cc6bd160e21a7dcab3696f8cbe75c18"} Jan 29 07:04:10 crc kubenswrapper[4826]: 
I0129 07:04:10.272495 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f39de3fa17a4a4203977e43dbff5daba3cc6bd160e21a7dcab3696f8cbe75c18" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.272463 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-984c-account-create-update-dpdkf" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.275133 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-84d9-account-create-update-rrjq4" event={"ID":"0c292348-77b1-4f3e-9a58-aaecfbaa43e9","Type":"ContainerDied","Data":"195849c48e336f4896c4849e4848a20d0054c5e15213821bd6d9324f9eaeacd5"} Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.275166 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-84d9-account-create-update-rrjq4" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.275162 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="195849c48e336f4896c4849e4848a20d0054c5e15213821bd6d9324f9eaeacd5" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.277022 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-431a-account-create-update-q9xq6" event={"ID":"882b3c41-2672-47df-a717-d2ccd9f7e2dc","Type":"ContainerDied","Data":"e0861c4686aad4d1274841c7b9e11f8e8df3464c9cd1000aa01f4932163ea871"} Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.277063 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0861c4686aad4d1274841c7b9e11f8e8df3464c9cd1000aa01f4932163ea871" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.277060 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-431a-account-create-update-q9xq6" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.521380 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-glp6k"] Jan 29 07:04:10 crc kubenswrapper[4826]: E0129 07:04:10.522049 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c292348-77b1-4f3e-9a58-aaecfbaa43e9" containerName="mariadb-account-create-update" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.522071 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c292348-77b1-4f3e-9a58-aaecfbaa43e9" containerName="mariadb-account-create-update" Jan 29 07:04:10 crc kubenswrapper[4826]: E0129 07:04:10.522086 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81a9564d-73ee-4a76-ab97-61caad992764" containerName="mariadb-database-create" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.522094 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="81a9564d-73ee-4a76-ab97-61caad992764" containerName="mariadb-database-create" Jan 29 07:04:10 crc kubenswrapper[4826]: E0129 07:04:10.522106 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d989bde-5508-4800-9910-76c04d308f3e" containerName="mariadb-database-create" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.522113 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d989bde-5508-4800-9910-76c04d308f3e" containerName="mariadb-database-create" Jan 29 07:04:10 crc kubenswrapper[4826]: E0129 07:04:10.522134 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="635336bb-af74-473f-88b2-77547b4a9ca1" containerName="mariadb-database-create" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.522140 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="635336bb-af74-473f-88b2-77547b4a9ca1" containerName="mariadb-database-create" Jan 29 07:04:10 crc kubenswrapper[4826]: E0129 07:04:10.522154 4826 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48445e15-3a7e-4476-a253-0adab28f920e" containerName="mariadb-account-create-update" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.522163 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="48445e15-3a7e-4476-a253-0adab28f920e" containerName="mariadb-account-create-update" Jan 29 07:04:10 crc kubenswrapper[4826]: E0129 07:04:10.522179 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="882b3c41-2672-47df-a717-d2ccd9f7e2dc" containerName="mariadb-account-create-update" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.522187 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="882b3c41-2672-47df-a717-d2ccd9f7e2dc" containerName="mariadb-account-create-update" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.522410 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="882b3c41-2672-47df-a717-d2ccd9f7e2dc" containerName="mariadb-account-create-update" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.522437 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="81a9564d-73ee-4a76-ab97-61caad992764" containerName="mariadb-database-create" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.522449 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="48445e15-3a7e-4476-a253-0adab28f920e" containerName="mariadb-account-create-update" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.522472 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d989bde-5508-4800-9910-76c04d308f3e" containerName="mariadb-database-create" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.522488 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c292348-77b1-4f3e-9a58-aaecfbaa43e9" containerName="mariadb-account-create-update" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.522502 4826 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="635336bb-af74-473f-88b2-77547b4a9ca1" containerName="mariadb-database-create" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.524813 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-glp6k" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.527998 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.528241 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.528411 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-dc5s7" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.528802 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-glp6k"] Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.654014 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec91ae50-7020-40a3-bc50-16d3360b0d10-scripts\") pod \"nova-cell0-conductor-db-sync-glp6k\" (UID: \"ec91ae50-7020-40a3-bc50-16d3360b0d10\") " pod="openstack/nova-cell0-conductor-db-sync-glp6k" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.654118 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec91ae50-7020-40a3-bc50-16d3360b0d10-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-glp6k\" (UID: \"ec91ae50-7020-40a3-bc50-16d3360b0d10\") " pod="openstack/nova-cell0-conductor-db-sync-glp6k" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.654594 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ec91ae50-7020-40a3-bc50-16d3360b0d10-config-data\") pod \"nova-cell0-conductor-db-sync-glp6k\" (UID: \"ec91ae50-7020-40a3-bc50-16d3360b0d10\") " pod="openstack/nova-cell0-conductor-db-sync-glp6k" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.654783 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh2sl\" (UniqueName: \"kubernetes.io/projected/ec91ae50-7020-40a3-bc50-16d3360b0d10-kube-api-access-rh2sl\") pod \"nova-cell0-conductor-db-sync-glp6k\" (UID: \"ec91ae50-7020-40a3-bc50-16d3360b0d10\") " pod="openstack/nova-cell0-conductor-db-sync-glp6k" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.756206 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec91ae50-7020-40a3-bc50-16d3360b0d10-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-glp6k\" (UID: \"ec91ae50-7020-40a3-bc50-16d3360b0d10\") " pod="openstack/nova-cell0-conductor-db-sync-glp6k" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.756377 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec91ae50-7020-40a3-bc50-16d3360b0d10-config-data\") pod \"nova-cell0-conductor-db-sync-glp6k\" (UID: \"ec91ae50-7020-40a3-bc50-16d3360b0d10\") " pod="openstack/nova-cell0-conductor-db-sync-glp6k" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.756430 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh2sl\" (UniqueName: \"kubernetes.io/projected/ec91ae50-7020-40a3-bc50-16d3360b0d10-kube-api-access-rh2sl\") pod \"nova-cell0-conductor-db-sync-glp6k\" (UID: \"ec91ae50-7020-40a3-bc50-16d3360b0d10\") " pod="openstack/nova-cell0-conductor-db-sync-glp6k" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.756523 4826 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec91ae50-7020-40a3-bc50-16d3360b0d10-scripts\") pod \"nova-cell0-conductor-db-sync-glp6k\" (UID: \"ec91ae50-7020-40a3-bc50-16d3360b0d10\") " pod="openstack/nova-cell0-conductor-db-sync-glp6k" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.762553 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec91ae50-7020-40a3-bc50-16d3360b0d10-scripts\") pod \"nova-cell0-conductor-db-sync-glp6k\" (UID: \"ec91ae50-7020-40a3-bc50-16d3360b0d10\") " pod="openstack/nova-cell0-conductor-db-sync-glp6k" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.762611 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec91ae50-7020-40a3-bc50-16d3360b0d10-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-glp6k\" (UID: \"ec91ae50-7020-40a3-bc50-16d3360b0d10\") " pod="openstack/nova-cell0-conductor-db-sync-glp6k" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.763357 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec91ae50-7020-40a3-bc50-16d3360b0d10-config-data\") pod \"nova-cell0-conductor-db-sync-glp6k\" (UID: \"ec91ae50-7020-40a3-bc50-16d3360b0d10\") " pod="openstack/nova-cell0-conductor-db-sync-glp6k" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.779836 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh2sl\" (UniqueName: \"kubernetes.io/projected/ec91ae50-7020-40a3-bc50-16d3360b0d10-kube-api-access-rh2sl\") pod \"nova-cell0-conductor-db-sync-glp6k\" (UID: \"ec91ae50-7020-40a3-bc50-16d3360b0d10\") " pod="openstack/nova-cell0-conductor-db-sync-glp6k" Jan 29 07:04:10 crc kubenswrapper[4826]: I0129 07:04:10.871170 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-glp6k" Jan 29 07:04:11 crc kubenswrapper[4826]: I0129 07:04:11.290089 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b944598-3739-4dfd-bbbb-a8155b5be333","Type":"ContainerStarted","Data":"dda22345193f3efa59fe420778353c99c634977df85688616a4c520ba59db42f"} Jan 29 07:04:11 crc kubenswrapper[4826]: I0129 07:04:11.517748 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-glp6k"] Jan 29 07:04:12 crc kubenswrapper[4826]: I0129 07:04:12.300622 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-glp6k" event={"ID":"ec91ae50-7020-40a3-bc50-16d3360b0d10","Type":"ContainerStarted","Data":"9d158cac1f2dc00b2a60ea4d354b25ecefb0edbdf774ac4fe8ba077207fdedab"} Jan 29 07:04:12 crc kubenswrapper[4826]: I0129 07:04:12.750905 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:04:13 crc kubenswrapper[4826]: I0129 07:04:13.324999 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b944598-3739-4dfd-bbbb-a8155b5be333","Type":"ContainerStarted","Data":"191e40e872573258f2a0551d032e4f1395ced10fe9b16a57036a4eba25a4925f"} Jan 29 07:04:20 crc kubenswrapper[4826]: I0129 07:04:20.416038 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-glp6k" event={"ID":"ec91ae50-7020-40a3-bc50-16d3360b0d10","Type":"ContainerStarted","Data":"33a75c8196166d38a593255be21141b79abb974c28717d5282e9fca7bf70f313"} Jan 29 07:04:20 crc kubenswrapper[4826]: I0129 07:04:20.420737 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b944598-3739-4dfd-bbbb-a8155b5be333","Type":"ContainerStarted","Data":"a79a2b51e9072617acf015809bd0e0041cd28d831aae105ff145d129c0de79cf"} Jan 29 07:04:20 crc kubenswrapper[4826]: I0129 07:04:20.421097 
4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2b944598-3739-4dfd-bbbb-a8155b5be333" containerName="ceilometer-central-agent" containerID="cri-o://63052738398e84954be9279b95df1f9ed8911464f4d8a722134201ecbd948294" gracePeriod=30 Jan 29 07:04:20 crc kubenswrapper[4826]: I0129 07:04:20.422397 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2b944598-3739-4dfd-bbbb-a8155b5be333" containerName="proxy-httpd" containerID="cri-o://a79a2b51e9072617acf015809bd0e0041cd28d831aae105ff145d129c0de79cf" gracePeriod=30 Jan 29 07:04:20 crc kubenswrapper[4826]: I0129 07:04:20.422547 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2b944598-3739-4dfd-bbbb-a8155b5be333" containerName="sg-core" containerID="cri-o://191e40e872573258f2a0551d032e4f1395ced10fe9b16a57036a4eba25a4925f" gracePeriod=30 Jan 29 07:04:20 crc kubenswrapper[4826]: I0129 07:04:20.422619 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2b944598-3739-4dfd-bbbb-a8155b5be333" containerName="ceilometer-notification-agent" containerID="cri-o://dda22345193f3efa59fe420778353c99c634977df85688616a4c520ba59db42f" gracePeriod=30 Jan 29 07:04:20 crc kubenswrapper[4826]: I0129 07:04:20.421153 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 07:04:20 crc kubenswrapper[4826]: I0129 07:04:20.454509 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-glp6k" podStartSLOduration=2.537389929 podStartE2EDuration="10.454481915s" podCreationTimestamp="2026-01-29 07:04:10 +0000 UTC" firstStartedPulling="2026-01-29 07:04:11.563733851 +0000 UTC m=+1235.425526920" lastFinishedPulling="2026-01-29 07:04:19.480825807 +0000 UTC m=+1243.342618906" observedRunningTime="2026-01-29 
07:04:20.449576156 +0000 UTC m=+1244.311369265" watchObservedRunningTime="2026-01-29 07:04:20.454481915 +0000 UTC m=+1244.316275024" Jan 29 07:04:20 crc kubenswrapper[4826]: I0129 07:04:20.493975 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.071062252 podStartE2EDuration="14.493943073s" podCreationTimestamp="2026-01-29 07:04:06 +0000 UTC" firstStartedPulling="2026-01-29 07:04:07.057027312 +0000 UTC m=+1230.918820381" lastFinishedPulling="2026-01-29 07:04:19.479908133 +0000 UTC m=+1243.341701202" observedRunningTime="2026-01-29 07:04:20.485254875 +0000 UTC m=+1244.347048014" watchObservedRunningTime="2026-01-29 07:04:20.493943073 +0000 UTC m=+1244.355736162" Jan 29 07:04:21 crc kubenswrapper[4826]: I0129 07:04:21.431591 4826 generic.go:334] "Generic (PLEG): container finished" podID="2b944598-3739-4dfd-bbbb-a8155b5be333" containerID="a79a2b51e9072617acf015809bd0e0041cd28d831aae105ff145d129c0de79cf" exitCode=0 Jan 29 07:04:21 crc kubenswrapper[4826]: I0129 07:04:21.431966 4826 generic.go:334] "Generic (PLEG): container finished" podID="2b944598-3739-4dfd-bbbb-a8155b5be333" containerID="191e40e872573258f2a0551d032e4f1395ced10fe9b16a57036a4eba25a4925f" exitCode=2 Jan 29 07:04:21 crc kubenswrapper[4826]: I0129 07:04:21.431979 4826 generic.go:334] "Generic (PLEG): container finished" podID="2b944598-3739-4dfd-bbbb-a8155b5be333" containerID="dda22345193f3efa59fe420778353c99c634977df85688616a4c520ba59db42f" exitCode=0 Jan 29 07:04:21 crc kubenswrapper[4826]: I0129 07:04:21.431989 4826 generic.go:334] "Generic (PLEG): container finished" podID="2b944598-3739-4dfd-bbbb-a8155b5be333" containerID="63052738398e84954be9279b95df1f9ed8911464f4d8a722134201ecbd948294" exitCode=0 Jan 29 07:04:21 crc kubenswrapper[4826]: I0129 07:04:21.431637 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"2b944598-3739-4dfd-bbbb-a8155b5be333","Type":"ContainerDied","Data":"a79a2b51e9072617acf015809bd0e0041cd28d831aae105ff145d129c0de79cf"} Jan 29 07:04:21 crc kubenswrapper[4826]: I0129 07:04:21.432122 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b944598-3739-4dfd-bbbb-a8155b5be333","Type":"ContainerDied","Data":"191e40e872573258f2a0551d032e4f1395ced10fe9b16a57036a4eba25a4925f"} Jan 29 07:04:21 crc kubenswrapper[4826]: I0129 07:04:21.432149 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b944598-3739-4dfd-bbbb-a8155b5be333","Type":"ContainerDied","Data":"dda22345193f3efa59fe420778353c99c634977df85688616a4c520ba59db42f"} Jan 29 07:04:21 crc kubenswrapper[4826]: I0129 07:04:21.432161 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b944598-3739-4dfd-bbbb-a8155b5be333","Type":"ContainerDied","Data":"63052738398e84954be9279b95df1f9ed8911464f4d8a722134201ecbd948294"} Jan 29 07:04:21 crc kubenswrapper[4826]: I0129 07:04:21.432172 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b944598-3739-4dfd-bbbb-a8155b5be333","Type":"ContainerDied","Data":"1f9ac00942ca1d2afa750a6518db2c0f79d03c6a59fee217803059d60edb89e0"} Jan 29 07:04:21 crc kubenswrapper[4826]: I0129 07:04:21.432182 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f9ac00942ca1d2afa750a6518db2c0f79d03c6a59fee217803059d60edb89e0" Jan 29 07:04:21 crc kubenswrapper[4826]: I0129 07:04:21.435325 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 07:04:21 crc kubenswrapper[4826]: I0129 07:04:21.464985 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b944598-3739-4dfd-bbbb-a8155b5be333-combined-ca-bundle\") pod \"2b944598-3739-4dfd-bbbb-a8155b5be333\" (UID: \"2b944598-3739-4dfd-bbbb-a8155b5be333\") " Jan 29 07:04:21 crc kubenswrapper[4826]: I0129 07:04:21.465067 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b944598-3739-4dfd-bbbb-a8155b5be333-scripts\") pod \"2b944598-3739-4dfd-bbbb-a8155b5be333\" (UID: \"2b944598-3739-4dfd-bbbb-a8155b5be333\") " Jan 29 07:04:21 crc kubenswrapper[4826]: I0129 07:04:21.465142 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9jkt\" (UniqueName: \"kubernetes.io/projected/2b944598-3739-4dfd-bbbb-a8155b5be333-kube-api-access-w9jkt\") pod \"2b944598-3739-4dfd-bbbb-a8155b5be333\" (UID: \"2b944598-3739-4dfd-bbbb-a8155b5be333\") " Jan 29 07:04:21 crc kubenswrapper[4826]: I0129 07:04:21.465227 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b944598-3739-4dfd-bbbb-a8155b5be333-run-httpd\") pod \"2b944598-3739-4dfd-bbbb-a8155b5be333\" (UID: \"2b944598-3739-4dfd-bbbb-a8155b5be333\") " Jan 29 07:04:21 crc kubenswrapper[4826]: I0129 07:04:21.465350 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b944598-3739-4dfd-bbbb-a8155b5be333-sg-core-conf-yaml\") pod \"2b944598-3739-4dfd-bbbb-a8155b5be333\" (UID: \"2b944598-3739-4dfd-bbbb-a8155b5be333\") " Jan 29 07:04:21 crc kubenswrapper[4826]: I0129 07:04:21.468969 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/2b944598-3739-4dfd-bbbb-a8155b5be333-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2b944598-3739-4dfd-bbbb-a8155b5be333" (UID: "2b944598-3739-4dfd-bbbb-a8155b5be333"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:04:21 crc kubenswrapper[4826]: I0129 07:04:21.472565 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b944598-3739-4dfd-bbbb-a8155b5be333-kube-api-access-w9jkt" (OuterVolumeSpecName: "kube-api-access-w9jkt") pod "2b944598-3739-4dfd-bbbb-a8155b5be333" (UID: "2b944598-3739-4dfd-bbbb-a8155b5be333"). InnerVolumeSpecName "kube-api-access-w9jkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:04:21 crc kubenswrapper[4826]: I0129 07:04:21.493545 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b944598-3739-4dfd-bbbb-a8155b5be333-scripts" (OuterVolumeSpecName: "scripts") pod "2b944598-3739-4dfd-bbbb-a8155b5be333" (UID: "2b944598-3739-4dfd-bbbb-a8155b5be333"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:04:21 crc kubenswrapper[4826]: I0129 07:04:21.496511 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b944598-3739-4dfd-bbbb-a8155b5be333-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2b944598-3739-4dfd-bbbb-a8155b5be333" (UID: "2b944598-3739-4dfd-bbbb-a8155b5be333"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:04:21 crc kubenswrapper[4826]: I0129 07:04:21.565601 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b944598-3739-4dfd-bbbb-a8155b5be333-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b944598-3739-4dfd-bbbb-a8155b5be333" (UID: "2b944598-3739-4dfd-bbbb-a8155b5be333"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:04:21 crc kubenswrapper[4826]: I0129 07:04:21.567227 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b944598-3739-4dfd-bbbb-a8155b5be333-config-data\") pod \"2b944598-3739-4dfd-bbbb-a8155b5be333\" (UID: \"2b944598-3739-4dfd-bbbb-a8155b5be333\") " Jan 29 07:04:21 crc kubenswrapper[4826]: I0129 07:04:21.567343 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b944598-3739-4dfd-bbbb-a8155b5be333-log-httpd\") pod \"2b944598-3739-4dfd-bbbb-a8155b5be333\" (UID: \"2b944598-3739-4dfd-bbbb-a8155b5be333\") " Jan 29 07:04:21 crc kubenswrapper[4826]: I0129 07:04:21.567837 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b944598-3739-4dfd-bbbb-a8155b5be333-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:21 crc kubenswrapper[4826]: I0129 07:04:21.567859 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b944598-3739-4dfd-bbbb-a8155b5be333-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:21 crc kubenswrapper[4826]: I0129 07:04:21.567873 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9jkt\" (UniqueName: \"kubernetes.io/projected/2b944598-3739-4dfd-bbbb-a8155b5be333-kube-api-access-w9jkt\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:21 crc kubenswrapper[4826]: I0129 07:04:21.567887 4826 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b944598-3739-4dfd-bbbb-a8155b5be333-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:21 crc kubenswrapper[4826]: I0129 07:04:21.567898 4826 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/2b944598-3739-4dfd-bbbb-a8155b5be333-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:21 crc kubenswrapper[4826]: I0129 07:04:21.568082 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b944598-3739-4dfd-bbbb-a8155b5be333-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2b944598-3739-4dfd-bbbb-a8155b5be333" (UID: "2b944598-3739-4dfd-bbbb-a8155b5be333"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:04:21 crc kubenswrapper[4826]: I0129 07:04:21.668610 4826 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b944598-3739-4dfd-bbbb-a8155b5be333-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:21 crc kubenswrapper[4826]: I0129 07:04:21.679022 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b944598-3739-4dfd-bbbb-a8155b5be333-config-data" (OuterVolumeSpecName: "config-data") pod "2b944598-3739-4dfd-bbbb-a8155b5be333" (UID: "2b944598-3739-4dfd-bbbb-a8155b5be333"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:04:21 crc kubenswrapper[4826]: I0129 07:04:21.770202 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b944598-3739-4dfd-bbbb-a8155b5be333-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:22 crc kubenswrapper[4826]: I0129 07:04:22.439696 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 07:04:22 crc kubenswrapper[4826]: I0129 07:04:22.480378 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:04:22 crc kubenswrapper[4826]: I0129 07:04:22.494312 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:04:22 crc kubenswrapper[4826]: I0129 07:04:22.523402 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:04:22 crc kubenswrapper[4826]: E0129 07:04:22.523848 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b944598-3739-4dfd-bbbb-a8155b5be333" containerName="proxy-httpd" Jan 29 07:04:22 crc kubenswrapper[4826]: I0129 07:04:22.523864 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b944598-3739-4dfd-bbbb-a8155b5be333" containerName="proxy-httpd" Jan 29 07:04:22 crc kubenswrapper[4826]: E0129 07:04:22.523883 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b944598-3739-4dfd-bbbb-a8155b5be333" containerName="ceilometer-central-agent" Jan 29 07:04:22 crc kubenswrapper[4826]: I0129 07:04:22.523892 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b944598-3739-4dfd-bbbb-a8155b5be333" containerName="ceilometer-central-agent" Jan 29 07:04:22 crc kubenswrapper[4826]: E0129 07:04:22.523915 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b944598-3739-4dfd-bbbb-a8155b5be333" containerName="ceilometer-notification-agent" Jan 29 07:04:22 crc kubenswrapper[4826]: I0129 07:04:22.523924 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b944598-3739-4dfd-bbbb-a8155b5be333" containerName="ceilometer-notification-agent" Jan 29 07:04:22 crc kubenswrapper[4826]: E0129 07:04:22.523937 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b944598-3739-4dfd-bbbb-a8155b5be333" containerName="sg-core" Jan 29 07:04:22 crc kubenswrapper[4826]: I0129 07:04:22.523944 4826 
state_mem.go:107] "Deleted CPUSet assignment" podUID="2b944598-3739-4dfd-bbbb-a8155b5be333" containerName="sg-core" Jan 29 07:04:22 crc kubenswrapper[4826]: I0129 07:04:22.524161 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b944598-3739-4dfd-bbbb-a8155b5be333" containerName="sg-core" Jan 29 07:04:22 crc kubenswrapper[4826]: I0129 07:04:22.524179 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b944598-3739-4dfd-bbbb-a8155b5be333" containerName="ceilometer-notification-agent" Jan 29 07:04:22 crc kubenswrapper[4826]: I0129 07:04:22.524204 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b944598-3739-4dfd-bbbb-a8155b5be333" containerName="ceilometer-central-agent" Jan 29 07:04:22 crc kubenswrapper[4826]: I0129 07:04:22.524226 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b944598-3739-4dfd-bbbb-a8155b5be333" containerName="proxy-httpd" Jan 29 07:04:22 crc kubenswrapper[4826]: I0129 07:04:22.526186 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 07:04:22 crc kubenswrapper[4826]: I0129 07:04:22.528752 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 07:04:22 crc kubenswrapper[4826]: I0129 07:04:22.528931 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 07:04:22 crc kubenswrapper[4826]: I0129 07:04:22.557759 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:04:22 crc kubenswrapper[4826]: I0129 07:04:22.687266 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgzg8\" (UniqueName: \"kubernetes.io/projected/25e8c873-524a-4274-a56c-5f7d49451acf-kube-api-access-lgzg8\") pod \"ceilometer-0\" (UID: \"25e8c873-524a-4274-a56c-5f7d49451acf\") " pod="openstack/ceilometer-0" Jan 29 07:04:22 crc kubenswrapper[4826]: I0129 07:04:22.687419 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25e8c873-524a-4274-a56c-5f7d49451acf-run-httpd\") pod \"ceilometer-0\" (UID: \"25e8c873-524a-4274-a56c-5f7d49451acf\") " pod="openstack/ceilometer-0" Jan 29 07:04:22 crc kubenswrapper[4826]: I0129 07:04:22.688540 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25e8c873-524a-4274-a56c-5f7d49451acf-log-httpd\") pod \"ceilometer-0\" (UID: \"25e8c873-524a-4274-a56c-5f7d49451acf\") " pod="openstack/ceilometer-0" Jan 29 07:04:22 crc kubenswrapper[4826]: I0129 07:04:22.688755 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25e8c873-524a-4274-a56c-5f7d49451acf-scripts\") pod \"ceilometer-0\" (UID: \"25e8c873-524a-4274-a56c-5f7d49451acf\") " 
pod="openstack/ceilometer-0" Jan 29 07:04:22 crc kubenswrapper[4826]: I0129 07:04:22.688860 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25e8c873-524a-4274-a56c-5f7d49451acf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"25e8c873-524a-4274-a56c-5f7d49451acf\") " pod="openstack/ceilometer-0" Jan 29 07:04:22 crc kubenswrapper[4826]: I0129 07:04:22.688947 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25e8c873-524a-4274-a56c-5f7d49451acf-config-data\") pod \"ceilometer-0\" (UID: \"25e8c873-524a-4274-a56c-5f7d49451acf\") " pod="openstack/ceilometer-0" Jan 29 07:04:22 crc kubenswrapper[4826]: I0129 07:04:22.689177 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/25e8c873-524a-4274-a56c-5f7d49451acf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"25e8c873-524a-4274-a56c-5f7d49451acf\") " pod="openstack/ceilometer-0" Jan 29 07:04:22 crc kubenswrapper[4826]: I0129 07:04:22.791458 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgzg8\" (UniqueName: \"kubernetes.io/projected/25e8c873-524a-4274-a56c-5f7d49451acf-kube-api-access-lgzg8\") pod \"ceilometer-0\" (UID: \"25e8c873-524a-4274-a56c-5f7d49451acf\") " pod="openstack/ceilometer-0" Jan 29 07:04:22 crc kubenswrapper[4826]: I0129 07:04:22.791540 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25e8c873-524a-4274-a56c-5f7d49451acf-run-httpd\") pod \"ceilometer-0\" (UID: \"25e8c873-524a-4274-a56c-5f7d49451acf\") " pod="openstack/ceilometer-0" Jan 29 07:04:22 crc kubenswrapper[4826]: I0129 07:04:22.791588 4826 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25e8c873-524a-4274-a56c-5f7d49451acf-log-httpd\") pod \"ceilometer-0\" (UID: \"25e8c873-524a-4274-a56c-5f7d49451acf\") " pod="openstack/ceilometer-0" Jan 29 07:04:22 crc kubenswrapper[4826]: I0129 07:04:22.791620 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25e8c873-524a-4274-a56c-5f7d49451acf-scripts\") pod \"ceilometer-0\" (UID: \"25e8c873-524a-4274-a56c-5f7d49451acf\") " pod="openstack/ceilometer-0" Jan 29 07:04:22 crc kubenswrapper[4826]: I0129 07:04:22.791643 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25e8c873-524a-4274-a56c-5f7d49451acf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"25e8c873-524a-4274-a56c-5f7d49451acf\") " pod="openstack/ceilometer-0" Jan 29 07:04:22 crc kubenswrapper[4826]: I0129 07:04:22.791669 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25e8c873-524a-4274-a56c-5f7d49451acf-config-data\") pod \"ceilometer-0\" (UID: \"25e8c873-524a-4274-a56c-5f7d49451acf\") " pod="openstack/ceilometer-0" Jan 29 07:04:22 crc kubenswrapper[4826]: I0129 07:04:22.791692 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/25e8c873-524a-4274-a56c-5f7d49451acf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"25e8c873-524a-4274-a56c-5f7d49451acf\") " pod="openstack/ceilometer-0" Jan 29 07:04:22 crc kubenswrapper[4826]: I0129 07:04:22.792839 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25e8c873-524a-4274-a56c-5f7d49451acf-run-httpd\") pod \"ceilometer-0\" (UID: \"25e8c873-524a-4274-a56c-5f7d49451acf\") " pod="openstack/ceilometer-0" Jan 29 07:04:22 crc 
kubenswrapper[4826]: I0129 07:04:22.793328 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25e8c873-524a-4274-a56c-5f7d49451acf-log-httpd\") pod \"ceilometer-0\" (UID: \"25e8c873-524a-4274-a56c-5f7d49451acf\") " pod="openstack/ceilometer-0" Jan 29 07:04:22 crc kubenswrapper[4826]: I0129 07:04:22.796805 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/25e8c873-524a-4274-a56c-5f7d49451acf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"25e8c873-524a-4274-a56c-5f7d49451acf\") " pod="openstack/ceilometer-0" Jan 29 07:04:22 crc kubenswrapper[4826]: I0129 07:04:22.797120 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25e8c873-524a-4274-a56c-5f7d49451acf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"25e8c873-524a-4274-a56c-5f7d49451acf\") " pod="openstack/ceilometer-0" Jan 29 07:04:22 crc kubenswrapper[4826]: I0129 07:04:22.801982 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25e8c873-524a-4274-a56c-5f7d49451acf-scripts\") pod \"ceilometer-0\" (UID: \"25e8c873-524a-4274-a56c-5f7d49451acf\") " pod="openstack/ceilometer-0" Jan 29 07:04:22 crc kubenswrapper[4826]: I0129 07:04:22.804922 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25e8c873-524a-4274-a56c-5f7d49451acf-config-data\") pod \"ceilometer-0\" (UID: \"25e8c873-524a-4274-a56c-5f7d49451acf\") " pod="openstack/ceilometer-0" Jan 29 07:04:22 crc kubenswrapper[4826]: I0129 07:04:22.809268 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgzg8\" (UniqueName: \"kubernetes.io/projected/25e8c873-524a-4274-a56c-5f7d49451acf-kube-api-access-lgzg8\") pod \"ceilometer-0\" (UID: 
\"25e8c873-524a-4274-a56c-5f7d49451acf\") " pod="openstack/ceilometer-0" Jan 29 07:04:22 crc kubenswrapper[4826]: I0129 07:04:22.819687 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b944598-3739-4dfd-bbbb-a8155b5be333" path="/var/lib/kubelet/pods/2b944598-3739-4dfd-bbbb-a8155b5be333/volumes" Jan 29 07:04:22 crc kubenswrapper[4826]: I0129 07:04:22.850227 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 07:04:23 crc kubenswrapper[4826]: I0129 07:04:23.174601 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:04:23 crc kubenswrapper[4826]: W0129 07:04:23.180823 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25e8c873_524a_4274_a56c_5f7d49451acf.slice/crio-c14e2af2382f2394a670f75153c0c566a92e9204e74330a0cfcf26150e4244ac WatchSource:0}: Error finding container c14e2af2382f2394a670f75153c0c566a92e9204e74330a0cfcf26150e4244ac: Status 404 returned error can't find the container with id c14e2af2382f2394a670f75153c0c566a92e9204e74330a0cfcf26150e4244ac Jan 29 07:04:23 crc kubenswrapper[4826]: I0129 07:04:23.458958 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25e8c873-524a-4274-a56c-5f7d49451acf","Type":"ContainerStarted","Data":"c14e2af2382f2394a670f75153c0c566a92e9204e74330a0cfcf26150e4244ac"} Jan 29 07:04:24 crc kubenswrapper[4826]: I0129 07:04:24.473659 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25e8c873-524a-4274-a56c-5f7d49451acf","Type":"ContainerStarted","Data":"40ec7e8ca4dc542c396cd8053739a81758a07a9b844e4a0eab64c8253680c20d"} Jan 29 07:04:24 crc kubenswrapper[4826]: I0129 07:04:24.675287 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:04:25 crc kubenswrapper[4826]: I0129 07:04:25.486275 4826 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25e8c873-524a-4274-a56c-5f7d49451acf","Type":"ContainerStarted","Data":"a18d13319697e0844cd05089358e00c78ba2cddf49a997c788a640748e8d91ab"} Jan 29 07:04:25 crc kubenswrapper[4826]: I0129 07:04:25.486626 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25e8c873-524a-4274-a56c-5f7d49451acf","Type":"ContainerStarted","Data":"98340f408004b90dfc6f93d8617320f8236b17b0777caeac078f60f0cb3cc092"} Jan 29 07:04:27 crc kubenswrapper[4826]: I0129 07:04:27.525995 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25e8c873-524a-4274-a56c-5f7d49451acf","Type":"ContainerStarted","Data":"da54d5844c461d1b657ef98b779ff018ee213d81a0a20bbc68cb72d9a782ada2"} Jan 29 07:04:27 crc kubenswrapper[4826]: I0129 07:04:27.529063 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 07:04:27 crc kubenswrapper[4826]: I0129 07:04:27.526902 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="25e8c873-524a-4274-a56c-5f7d49451acf" containerName="sg-core" containerID="cri-o://a18d13319697e0844cd05089358e00c78ba2cddf49a997c788a640748e8d91ab" gracePeriod=30 Jan 29 07:04:27 crc kubenswrapper[4826]: I0129 07:04:27.526921 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="25e8c873-524a-4274-a56c-5f7d49451acf" containerName="proxy-httpd" containerID="cri-o://da54d5844c461d1b657ef98b779ff018ee213d81a0a20bbc68cb72d9a782ada2" gracePeriod=30 Jan 29 07:04:27 crc kubenswrapper[4826]: I0129 07:04:27.526933 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="25e8c873-524a-4274-a56c-5f7d49451acf" containerName="ceilometer-notification-agent" 
containerID="cri-o://98340f408004b90dfc6f93d8617320f8236b17b0777caeac078f60f0cb3cc092" gracePeriod=30 Jan 29 07:04:27 crc kubenswrapper[4826]: I0129 07:04:27.526684 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="25e8c873-524a-4274-a56c-5f7d49451acf" containerName="ceilometer-central-agent" containerID="cri-o://40ec7e8ca4dc542c396cd8053739a81758a07a9b844e4a0eab64c8253680c20d" gracePeriod=30 Jan 29 07:04:27 crc kubenswrapper[4826]: I0129 07:04:27.572597 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.7833179239999999 podStartE2EDuration="5.572579381s" podCreationTimestamp="2026-01-29 07:04:22 +0000 UTC" firstStartedPulling="2026-01-29 07:04:23.18558195 +0000 UTC m=+1247.047375019" lastFinishedPulling="2026-01-29 07:04:26.974843397 +0000 UTC m=+1250.836636476" observedRunningTime="2026-01-29 07:04:27.546915285 +0000 UTC m=+1251.408708374" watchObservedRunningTime="2026-01-29 07:04:27.572579381 +0000 UTC m=+1251.434372450" Jan 29 07:04:27 crc kubenswrapper[4826]: E0129 07:04:27.645131 4826 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25e8c873_524a_4274_a56c_5f7d49451acf.slice/crio-conmon-a18d13319697e0844cd05089358e00c78ba2cddf49a997c788a640748e8d91ab.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25e8c873_524a_4274_a56c_5f7d49451acf.slice/crio-a18d13319697e0844cd05089358e00c78ba2cddf49a997c788a640748e8d91ab.scope\": RecentStats: unable to find data in memory cache]" Jan 29 07:04:28 crc kubenswrapper[4826]: I0129 07:04:28.537865 4826 generic.go:334] "Generic (PLEG): container finished" podID="25e8c873-524a-4274-a56c-5f7d49451acf" containerID="da54d5844c461d1b657ef98b779ff018ee213d81a0a20bbc68cb72d9a782ada2" exitCode=0 Jan 29 
07:04:28 crc kubenswrapper[4826]: I0129 07:04:28.538183 4826 generic.go:334] "Generic (PLEG): container finished" podID="25e8c873-524a-4274-a56c-5f7d49451acf" containerID="a18d13319697e0844cd05089358e00c78ba2cddf49a997c788a640748e8d91ab" exitCode=2 Jan 29 07:04:28 crc kubenswrapper[4826]: I0129 07:04:28.538191 4826 generic.go:334] "Generic (PLEG): container finished" podID="25e8c873-524a-4274-a56c-5f7d49451acf" containerID="98340f408004b90dfc6f93d8617320f8236b17b0777caeac078f60f0cb3cc092" exitCode=0 Jan 29 07:04:28 crc kubenswrapper[4826]: I0129 07:04:28.537935 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25e8c873-524a-4274-a56c-5f7d49451acf","Type":"ContainerDied","Data":"da54d5844c461d1b657ef98b779ff018ee213d81a0a20bbc68cb72d9a782ada2"} Jan 29 07:04:28 crc kubenswrapper[4826]: I0129 07:04:28.538233 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25e8c873-524a-4274-a56c-5f7d49451acf","Type":"ContainerDied","Data":"a18d13319697e0844cd05089358e00c78ba2cddf49a997c788a640748e8d91ab"} Jan 29 07:04:28 crc kubenswrapper[4826]: I0129 07:04:28.538249 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25e8c873-524a-4274-a56c-5f7d49451acf","Type":"ContainerDied","Data":"98340f408004b90dfc6f93d8617320f8236b17b0777caeac078f60f0cb3cc092"} Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.054376 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.076915 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgzg8\" (UniqueName: \"kubernetes.io/projected/25e8c873-524a-4274-a56c-5f7d49451acf-kube-api-access-lgzg8\") pod \"25e8c873-524a-4274-a56c-5f7d49451acf\" (UID: \"25e8c873-524a-4274-a56c-5f7d49451acf\") " Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.076977 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25e8c873-524a-4274-a56c-5f7d49451acf-run-httpd\") pod \"25e8c873-524a-4274-a56c-5f7d49451acf\" (UID: \"25e8c873-524a-4274-a56c-5f7d49451acf\") " Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.077002 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25e8c873-524a-4274-a56c-5f7d49451acf-scripts\") pod \"25e8c873-524a-4274-a56c-5f7d49451acf\" (UID: \"25e8c873-524a-4274-a56c-5f7d49451acf\") " Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.077039 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/25e8c873-524a-4274-a56c-5f7d49451acf-sg-core-conf-yaml\") pod \"25e8c873-524a-4274-a56c-5f7d49451acf\" (UID: \"25e8c873-524a-4274-a56c-5f7d49451acf\") " Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.077086 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25e8c873-524a-4274-a56c-5f7d49451acf-combined-ca-bundle\") pod \"25e8c873-524a-4274-a56c-5f7d49451acf\" (UID: \"25e8c873-524a-4274-a56c-5f7d49451acf\") " Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.077230 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/25e8c873-524a-4274-a56c-5f7d49451acf-config-data\") pod \"25e8c873-524a-4274-a56c-5f7d49451acf\" (UID: \"25e8c873-524a-4274-a56c-5f7d49451acf\") " Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.077358 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25e8c873-524a-4274-a56c-5f7d49451acf-log-httpd\") pod \"25e8c873-524a-4274-a56c-5f7d49451acf\" (UID: \"25e8c873-524a-4274-a56c-5f7d49451acf\") " Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.077563 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25e8c873-524a-4274-a56c-5f7d49451acf-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "25e8c873-524a-4274-a56c-5f7d49451acf" (UID: "25e8c873-524a-4274-a56c-5f7d49451acf"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.077879 4826 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25e8c873-524a-4274-a56c-5f7d49451acf-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.078210 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25e8c873-524a-4274-a56c-5f7d49451acf-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "25e8c873-524a-4274-a56c-5f7d49451acf" (UID: "25e8c873-524a-4274-a56c-5f7d49451acf"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.087984 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e8c873-524a-4274-a56c-5f7d49451acf-scripts" (OuterVolumeSpecName: "scripts") pod "25e8c873-524a-4274-a56c-5f7d49451acf" (UID: "25e8c873-524a-4274-a56c-5f7d49451acf"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.091034 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e8c873-524a-4274-a56c-5f7d49451acf-kube-api-access-lgzg8" (OuterVolumeSpecName: "kube-api-access-lgzg8") pod "25e8c873-524a-4274-a56c-5f7d49451acf" (UID: "25e8c873-524a-4274-a56c-5f7d49451acf"). InnerVolumeSpecName "kube-api-access-lgzg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.144617 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e8c873-524a-4274-a56c-5f7d49451acf-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "25e8c873-524a-4274-a56c-5f7d49451acf" (UID: "25e8c873-524a-4274-a56c-5f7d49451acf"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.179449 4826 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25e8c873-524a-4274-a56c-5f7d49451acf-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.179517 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgzg8\" (UniqueName: \"kubernetes.io/projected/25e8c873-524a-4274-a56c-5f7d49451acf-kube-api-access-lgzg8\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.179533 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25e8c873-524a-4274-a56c-5f7d49451acf-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.179546 4826 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/25e8c873-524a-4274-a56c-5f7d49451acf-sg-core-conf-yaml\") on node 
\"crc\" DevicePath \"\"" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.195639 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e8c873-524a-4274-a56c-5f7d49451acf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25e8c873-524a-4274-a56c-5f7d49451acf" (UID: "25e8c873-524a-4274-a56c-5f7d49451acf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.196098 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e8c873-524a-4274-a56c-5f7d49451acf-config-data" (OuterVolumeSpecName: "config-data") pod "25e8c873-524a-4274-a56c-5f7d49451acf" (UID: "25e8c873-524a-4274-a56c-5f7d49451acf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.282415 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25e8c873-524a-4274-a56c-5f7d49451acf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.282509 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25e8c873-524a-4274-a56c-5f7d49451acf-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.589265 4826 generic.go:334] "Generic (PLEG): container finished" podID="25e8c873-524a-4274-a56c-5f7d49451acf" containerID="40ec7e8ca4dc542c396cd8053739a81758a07a9b844e4a0eab64c8253680c20d" exitCode=0 Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.589344 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25e8c873-524a-4274-a56c-5f7d49451acf","Type":"ContainerDied","Data":"40ec7e8ca4dc542c396cd8053739a81758a07a9b844e4a0eab64c8253680c20d"} Jan 29 07:04:31 
crc kubenswrapper[4826]: I0129 07:04:31.589721 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25e8c873-524a-4274-a56c-5f7d49451acf","Type":"ContainerDied","Data":"c14e2af2382f2394a670f75153c0c566a92e9204e74330a0cfcf26150e4244ac"} Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.589741 4826 scope.go:117] "RemoveContainer" containerID="da54d5844c461d1b657ef98b779ff018ee213d81a0a20bbc68cb72d9a782ada2" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.589435 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.593627 4826 generic.go:334] "Generic (PLEG): container finished" podID="ec91ae50-7020-40a3-bc50-16d3360b0d10" containerID="33a75c8196166d38a593255be21141b79abb974c28717d5282e9fca7bf70f313" exitCode=0 Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.593683 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-glp6k" event={"ID":"ec91ae50-7020-40a3-bc50-16d3360b0d10","Type":"ContainerDied","Data":"33a75c8196166d38a593255be21141b79abb974c28717d5282e9fca7bf70f313"} Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.633474 4826 scope.go:117] "RemoveContainer" containerID="a18d13319697e0844cd05089358e00c78ba2cddf49a997c788a640748e8d91ab" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.666646 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.686557 4826 scope.go:117] "RemoveContainer" containerID="98340f408004b90dfc6f93d8617320f8236b17b0777caeac078f60f0cb3cc092" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.688834 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.704425 4826 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ceilometer-0"] Jan 29 07:04:31 crc kubenswrapper[4826]: E0129 07:04:31.705039 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25e8c873-524a-4274-a56c-5f7d49451acf" containerName="proxy-httpd" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.705071 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="25e8c873-524a-4274-a56c-5f7d49451acf" containerName="proxy-httpd" Jan 29 07:04:31 crc kubenswrapper[4826]: E0129 07:04:31.705093 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25e8c873-524a-4274-a56c-5f7d49451acf" containerName="ceilometer-notification-agent" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.705106 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="25e8c873-524a-4274-a56c-5f7d49451acf" containerName="ceilometer-notification-agent" Jan 29 07:04:31 crc kubenswrapper[4826]: E0129 07:04:31.705145 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25e8c873-524a-4274-a56c-5f7d49451acf" containerName="sg-core" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.705156 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="25e8c873-524a-4274-a56c-5f7d49451acf" containerName="sg-core" Jan 29 07:04:31 crc kubenswrapper[4826]: E0129 07:04:31.705190 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25e8c873-524a-4274-a56c-5f7d49451acf" containerName="ceilometer-central-agent" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.705201 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="25e8c873-524a-4274-a56c-5f7d49451acf" containerName="ceilometer-central-agent" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.705539 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="25e8c873-524a-4274-a56c-5f7d49451acf" containerName="proxy-httpd" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.705595 4826 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="25e8c873-524a-4274-a56c-5f7d49451acf" containerName="sg-core" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.705613 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="25e8c873-524a-4274-a56c-5f7d49451acf" containerName="ceilometer-central-agent" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.705633 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="25e8c873-524a-4274-a56c-5f7d49451acf" containerName="ceilometer-notification-agent" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.708735 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.717077 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.717559 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.721116 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.758341 4826 scope.go:117] "RemoveContainer" containerID="40ec7e8ca4dc542c396cd8053739a81758a07a9b844e4a0eab64c8253680c20d" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.773360 4826 scope.go:117] "RemoveContainer" containerID="da54d5844c461d1b657ef98b779ff018ee213d81a0a20bbc68cb72d9a782ada2" Jan 29 07:04:31 crc kubenswrapper[4826]: E0129 07:04:31.773833 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da54d5844c461d1b657ef98b779ff018ee213d81a0a20bbc68cb72d9a782ada2\": container with ID starting with da54d5844c461d1b657ef98b779ff018ee213d81a0a20bbc68cb72d9a782ada2 not found: ID does not exist" containerID="da54d5844c461d1b657ef98b779ff018ee213d81a0a20bbc68cb72d9a782ada2" Jan 29 07:04:31 crc 
kubenswrapper[4826]: I0129 07:04:31.773900 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da54d5844c461d1b657ef98b779ff018ee213d81a0a20bbc68cb72d9a782ada2"} err="failed to get container status \"da54d5844c461d1b657ef98b779ff018ee213d81a0a20bbc68cb72d9a782ada2\": rpc error: code = NotFound desc = could not find container \"da54d5844c461d1b657ef98b779ff018ee213d81a0a20bbc68cb72d9a782ada2\": container with ID starting with da54d5844c461d1b657ef98b779ff018ee213d81a0a20bbc68cb72d9a782ada2 not found: ID does not exist" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.773930 4826 scope.go:117] "RemoveContainer" containerID="a18d13319697e0844cd05089358e00c78ba2cddf49a997c788a640748e8d91ab" Jan 29 07:04:31 crc kubenswrapper[4826]: E0129 07:04:31.774308 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a18d13319697e0844cd05089358e00c78ba2cddf49a997c788a640748e8d91ab\": container with ID starting with a18d13319697e0844cd05089358e00c78ba2cddf49a997c788a640748e8d91ab not found: ID does not exist" containerID="a18d13319697e0844cd05089358e00c78ba2cddf49a997c788a640748e8d91ab" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.774343 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a18d13319697e0844cd05089358e00c78ba2cddf49a997c788a640748e8d91ab"} err="failed to get container status \"a18d13319697e0844cd05089358e00c78ba2cddf49a997c788a640748e8d91ab\": rpc error: code = NotFound desc = could not find container \"a18d13319697e0844cd05089358e00c78ba2cddf49a997c788a640748e8d91ab\": container with ID starting with a18d13319697e0844cd05089358e00c78ba2cddf49a997c788a640748e8d91ab not found: ID does not exist" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.774366 4826 scope.go:117] "RemoveContainer" containerID="98340f408004b90dfc6f93d8617320f8236b17b0777caeac078f60f0cb3cc092" Jan 29 
07:04:31 crc kubenswrapper[4826]: E0129 07:04:31.774618 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98340f408004b90dfc6f93d8617320f8236b17b0777caeac078f60f0cb3cc092\": container with ID starting with 98340f408004b90dfc6f93d8617320f8236b17b0777caeac078f60f0cb3cc092 not found: ID does not exist" containerID="98340f408004b90dfc6f93d8617320f8236b17b0777caeac078f60f0cb3cc092" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.774638 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98340f408004b90dfc6f93d8617320f8236b17b0777caeac078f60f0cb3cc092"} err="failed to get container status \"98340f408004b90dfc6f93d8617320f8236b17b0777caeac078f60f0cb3cc092\": rpc error: code = NotFound desc = could not find container \"98340f408004b90dfc6f93d8617320f8236b17b0777caeac078f60f0cb3cc092\": container with ID starting with 98340f408004b90dfc6f93d8617320f8236b17b0777caeac078f60f0cb3cc092 not found: ID does not exist" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.774652 4826 scope.go:117] "RemoveContainer" containerID="40ec7e8ca4dc542c396cd8053739a81758a07a9b844e4a0eab64c8253680c20d" Jan 29 07:04:31 crc kubenswrapper[4826]: E0129 07:04:31.774841 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40ec7e8ca4dc542c396cd8053739a81758a07a9b844e4a0eab64c8253680c20d\": container with ID starting with 40ec7e8ca4dc542c396cd8053739a81758a07a9b844e4a0eab64c8253680c20d not found: ID does not exist" containerID="40ec7e8ca4dc542c396cd8053739a81758a07a9b844e4a0eab64c8253680c20d" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.774861 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40ec7e8ca4dc542c396cd8053739a81758a07a9b844e4a0eab64c8253680c20d"} err="failed to get container status 
\"40ec7e8ca4dc542c396cd8053739a81758a07a9b844e4a0eab64c8253680c20d\": rpc error: code = NotFound desc = could not find container \"40ec7e8ca4dc542c396cd8053739a81758a07a9b844e4a0eab64c8253680c20d\": container with ID starting with 40ec7e8ca4dc542c396cd8053739a81758a07a9b844e4a0eab64c8253680c20d not found: ID does not exist" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.789633 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71636ef8-0a41-494e-bd82-703d109ce02d-run-httpd\") pod \"ceilometer-0\" (UID: \"71636ef8-0a41-494e-bd82-703d109ce02d\") " pod="openstack/ceilometer-0" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.789676 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71636ef8-0a41-494e-bd82-703d109ce02d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"71636ef8-0a41-494e-bd82-703d109ce02d\") " pod="openstack/ceilometer-0" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.789712 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71636ef8-0a41-494e-bd82-703d109ce02d-scripts\") pod \"ceilometer-0\" (UID: \"71636ef8-0a41-494e-bd82-703d109ce02d\") " pod="openstack/ceilometer-0" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.789758 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71636ef8-0a41-494e-bd82-703d109ce02d-config-data\") pod \"ceilometer-0\" (UID: \"71636ef8-0a41-494e-bd82-703d109ce02d\") " pod="openstack/ceilometer-0" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.789783 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7lp9\" 
(UniqueName: \"kubernetes.io/projected/71636ef8-0a41-494e-bd82-703d109ce02d-kube-api-access-z7lp9\") pod \"ceilometer-0\" (UID: \"71636ef8-0a41-494e-bd82-703d109ce02d\") " pod="openstack/ceilometer-0" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.789814 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71636ef8-0a41-494e-bd82-703d109ce02d-log-httpd\") pod \"ceilometer-0\" (UID: \"71636ef8-0a41-494e-bd82-703d109ce02d\") " pod="openstack/ceilometer-0" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.789833 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/71636ef8-0a41-494e-bd82-703d109ce02d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"71636ef8-0a41-494e-bd82-703d109ce02d\") " pod="openstack/ceilometer-0" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.892538 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71636ef8-0a41-494e-bd82-703d109ce02d-scripts\") pod \"ceilometer-0\" (UID: \"71636ef8-0a41-494e-bd82-703d109ce02d\") " pod="openstack/ceilometer-0" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.892713 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71636ef8-0a41-494e-bd82-703d109ce02d-config-data\") pod \"ceilometer-0\" (UID: \"71636ef8-0a41-494e-bd82-703d109ce02d\") " pod="openstack/ceilometer-0" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.892801 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7lp9\" (UniqueName: \"kubernetes.io/projected/71636ef8-0a41-494e-bd82-703d109ce02d-kube-api-access-z7lp9\") pod \"ceilometer-0\" (UID: \"71636ef8-0a41-494e-bd82-703d109ce02d\") " 
pod="openstack/ceilometer-0" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.892844 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71636ef8-0a41-494e-bd82-703d109ce02d-log-httpd\") pod \"ceilometer-0\" (UID: \"71636ef8-0a41-494e-bd82-703d109ce02d\") " pod="openstack/ceilometer-0" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.892895 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/71636ef8-0a41-494e-bd82-703d109ce02d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"71636ef8-0a41-494e-bd82-703d109ce02d\") " pod="openstack/ceilometer-0" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.893066 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71636ef8-0a41-494e-bd82-703d109ce02d-run-httpd\") pod \"ceilometer-0\" (UID: \"71636ef8-0a41-494e-bd82-703d109ce02d\") " pod="openstack/ceilometer-0" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.893998 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71636ef8-0a41-494e-bd82-703d109ce02d-run-httpd\") pod \"ceilometer-0\" (UID: \"71636ef8-0a41-494e-bd82-703d109ce02d\") " pod="openstack/ceilometer-0" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.894381 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71636ef8-0a41-494e-bd82-703d109ce02d-log-httpd\") pod \"ceilometer-0\" (UID: \"71636ef8-0a41-494e-bd82-703d109ce02d\") " pod="openstack/ceilometer-0" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.894553 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/71636ef8-0a41-494e-bd82-703d109ce02d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"71636ef8-0a41-494e-bd82-703d109ce02d\") " pod="openstack/ceilometer-0" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.896557 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71636ef8-0a41-494e-bd82-703d109ce02d-scripts\") pod \"ceilometer-0\" (UID: \"71636ef8-0a41-494e-bd82-703d109ce02d\") " pod="openstack/ceilometer-0" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.898036 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/71636ef8-0a41-494e-bd82-703d109ce02d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"71636ef8-0a41-494e-bd82-703d109ce02d\") " pod="openstack/ceilometer-0" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.898234 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71636ef8-0a41-494e-bd82-703d109ce02d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"71636ef8-0a41-494e-bd82-703d109ce02d\") " pod="openstack/ceilometer-0" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.911415 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71636ef8-0a41-494e-bd82-703d109ce02d-config-data\") pod \"ceilometer-0\" (UID: \"71636ef8-0a41-494e-bd82-703d109ce02d\") " pod="openstack/ceilometer-0" Jan 29 07:04:31 crc kubenswrapper[4826]: I0129 07:04:31.913061 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7lp9\" (UniqueName: \"kubernetes.io/projected/71636ef8-0a41-494e-bd82-703d109ce02d-kube-api-access-z7lp9\") pod \"ceilometer-0\" (UID: \"71636ef8-0a41-494e-bd82-703d109ce02d\") " pod="openstack/ceilometer-0" Jan 29 07:04:32 crc kubenswrapper[4826]: I0129 07:04:32.058677 4826 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 07:04:32 crc kubenswrapper[4826]: I0129 07:04:32.520697 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:04:32 crc kubenswrapper[4826]: W0129 07:04:32.527600 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71636ef8_0a41_494e_bd82_703d109ce02d.slice/crio-218ebbd79527b104dc3ab226a096d1d242c9974de791d984b67c808d8c9f630f WatchSource:0}: Error finding container 218ebbd79527b104dc3ab226a096d1d242c9974de791d984b67c808d8c9f630f: Status 404 returned error can't find the container with id 218ebbd79527b104dc3ab226a096d1d242c9974de791d984b67c808d8c9f630f Jan 29 07:04:32 crc kubenswrapper[4826]: I0129 07:04:32.608507 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71636ef8-0a41-494e-bd82-703d109ce02d","Type":"ContainerStarted","Data":"218ebbd79527b104dc3ab226a096d1d242c9974de791d984b67c808d8c9f630f"} Jan 29 07:04:32 crc kubenswrapper[4826]: I0129 07:04:32.821609 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e8c873-524a-4274-a56c-5f7d49451acf" path="/var/lib/kubelet/pods/25e8c873-524a-4274-a56c-5f7d49451acf/volumes" Jan 29 07:04:33 crc kubenswrapper[4826]: I0129 07:04:33.071225 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-glp6k" Jan 29 07:04:33 crc kubenswrapper[4826]: I0129 07:04:33.121933 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rh2sl\" (UniqueName: \"kubernetes.io/projected/ec91ae50-7020-40a3-bc50-16d3360b0d10-kube-api-access-rh2sl\") pod \"ec91ae50-7020-40a3-bc50-16d3360b0d10\" (UID: \"ec91ae50-7020-40a3-bc50-16d3360b0d10\") " Jan 29 07:04:33 crc kubenswrapper[4826]: I0129 07:04:33.123276 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec91ae50-7020-40a3-bc50-16d3360b0d10-combined-ca-bundle\") pod \"ec91ae50-7020-40a3-bc50-16d3360b0d10\" (UID: \"ec91ae50-7020-40a3-bc50-16d3360b0d10\") " Jan 29 07:04:33 crc kubenswrapper[4826]: I0129 07:04:33.123821 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec91ae50-7020-40a3-bc50-16d3360b0d10-config-data\") pod \"ec91ae50-7020-40a3-bc50-16d3360b0d10\" (UID: \"ec91ae50-7020-40a3-bc50-16d3360b0d10\") " Jan 29 07:04:33 crc kubenswrapper[4826]: I0129 07:04:33.123874 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec91ae50-7020-40a3-bc50-16d3360b0d10-scripts\") pod \"ec91ae50-7020-40a3-bc50-16d3360b0d10\" (UID: \"ec91ae50-7020-40a3-bc50-16d3360b0d10\") " Jan 29 07:04:33 crc kubenswrapper[4826]: I0129 07:04:33.131958 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec91ae50-7020-40a3-bc50-16d3360b0d10-scripts" (OuterVolumeSpecName: "scripts") pod "ec91ae50-7020-40a3-bc50-16d3360b0d10" (UID: "ec91ae50-7020-40a3-bc50-16d3360b0d10"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:04:33 crc kubenswrapper[4826]: I0129 07:04:33.136341 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec91ae50-7020-40a3-bc50-16d3360b0d10-kube-api-access-rh2sl" (OuterVolumeSpecName: "kube-api-access-rh2sl") pod "ec91ae50-7020-40a3-bc50-16d3360b0d10" (UID: "ec91ae50-7020-40a3-bc50-16d3360b0d10"). InnerVolumeSpecName "kube-api-access-rh2sl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:04:33 crc kubenswrapper[4826]: I0129 07:04:33.163757 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec91ae50-7020-40a3-bc50-16d3360b0d10-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec91ae50-7020-40a3-bc50-16d3360b0d10" (UID: "ec91ae50-7020-40a3-bc50-16d3360b0d10"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:04:33 crc kubenswrapper[4826]: I0129 07:04:33.169160 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec91ae50-7020-40a3-bc50-16d3360b0d10-config-data" (OuterVolumeSpecName: "config-data") pod "ec91ae50-7020-40a3-bc50-16d3360b0d10" (UID: "ec91ae50-7020-40a3-bc50-16d3360b0d10"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:04:33 crc kubenswrapper[4826]: I0129 07:04:33.225776 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec91ae50-7020-40a3-bc50-16d3360b0d10-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:33 crc kubenswrapper[4826]: I0129 07:04:33.225817 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec91ae50-7020-40a3-bc50-16d3360b0d10-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:33 crc kubenswrapper[4826]: I0129 07:04:33.225832 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec91ae50-7020-40a3-bc50-16d3360b0d10-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:33 crc kubenswrapper[4826]: I0129 07:04:33.225849 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rh2sl\" (UniqueName: \"kubernetes.io/projected/ec91ae50-7020-40a3-bc50-16d3360b0d10-kube-api-access-rh2sl\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:33 crc kubenswrapper[4826]: I0129 07:04:33.620452 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-glp6k" event={"ID":"ec91ae50-7020-40a3-bc50-16d3360b0d10","Type":"ContainerDied","Data":"9d158cac1f2dc00b2a60ea4d354b25ecefb0edbdf774ac4fe8ba077207fdedab"} Jan 29 07:04:33 crc kubenswrapper[4826]: I0129 07:04:33.620504 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d158cac1f2dc00b2a60ea4d354b25ecefb0edbdf774ac4fe8ba077207fdedab" Jan 29 07:04:33 crc kubenswrapper[4826]: I0129 07:04:33.620521 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-glp6k" Jan 29 07:04:33 crc kubenswrapper[4826]: I0129 07:04:33.796398 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 07:04:33 crc kubenswrapper[4826]: E0129 07:04:33.796944 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec91ae50-7020-40a3-bc50-16d3360b0d10" containerName="nova-cell0-conductor-db-sync" Jan 29 07:04:33 crc kubenswrapper[4826]: I0129 07:04:33.796973 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec91ae50-7020-40a3-bc50-16d3360b0d10" containerName="nova-cell0-conductor-db-sync" Jan 29 07:04:33 crc kubenswrapper[4826]: I0129 07:04:33.797214 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec91ae50-7020-40a3-bc50-16d3360b0d10" containerName="nova-cell0-conductor-db-sync" Jan 29 07:04:33 crc kubenswrapper[4826]: I0129 07:04:33.798090 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 07:04:33 crc kubenswrapper[4826]: I0129 07:04:33.800965 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 29 07:04:33 crc kubenswrapper[4826]: I0129 07:04:33.807611 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 07:04:33 crc kubenswrapper[4826]: I0129 07:04:33.808198 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-dc5s7" Jan 29 07:04:33 crc kubenswrapper[4826]: I0129 07:04:33.838505 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/229ff3bd-fac5-4bb5-ba1e-9e829c30f45b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"229ff3bd-fac5-4bb5-ba1e-9e829c30f45b\") " pod="openstack/nova-cell0-conductor-0" Jan 29 07:04:33 crc kubenswrapper[4826]: I0129 
07:04:33.838784 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz7sh\" (UniqueName: \"kubernetes.io/projected/229ff3bd-fac5-4bb5-ba1e-9e829c30f45b-kube-api-access-zz7sh\") pod \"nova-cell0-conductor-0\" (UID: \"229ff3bd-fac5-4bb5-ba1e-9e829c30f45b\") " pod="openstack/nova-cell0-conductor-0" Jan 29 07:04:33 crc kubenswrapper[4826]: I0129 07:04:33.838837 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/229ff3bd-fac5-4bb5-ba1e-9e829c30f45b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"229ff3bd-fac5-4bb5-ba1e-9e829c30f45b\") " pod="openstack/nova-cell0-conductor-0" Jan 29 07:04:33 crc kubenswrapper[4826]: I0129 07:04:33.940629 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/229ff3bd-fac5-4bb5-ba1e-9e829c30f45b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"229ff3bd-fac5-4bb5-ba1e-9e829c30f45b\") " pod="openstack/nova-cell0-conductor-0" Jan 29 07:04:33 crc kubenswrapper[4826]: I0129 07:04:33.940754 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz7sh\" (UniqueName: \"kubernetes.io/projected/229ff3bd-fac5-4bb5-ba1e-9e829c30f45b-kube-api-access-zz7sh\") pod \"nova-cell0-conductor-0\" (UID: \"229ff3bd-fac5-4bb5-ba1e-9e829c30f45b\") " pod="openstack/nova-cell0-conductor-0" Jan 29 07:04:33 crc kubenswrapper[4826]: I0129 07:04:33.940782 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/229ff3bd-fac5-4bb5-ba1e-9e829c30f45b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"229ff3bd-fac5-4bb5-ba1e-9e829c30f45b\") " pod="openstack/nova-cell0-conductor-0" Jan 29 07:04:33 crc kubenswrapper[4826]: I0129 07:04:33.947406 4826 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/229ff3bd-fac5-4bb5-ba1e-9e829c30f45b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"229ff3bd-fac5-4bb5-ba1e-9e829c30f45b\") " pod="openstack/nova-cell0-conductor-0" Jan 29 07:04:33 crc kubenswrapper[4826]: I0129 07:04:33.947783 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/229ff3bd-fac5-4bb5-ba1e-9e829c30f45b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"229ff3bd-fac5-4bb5-ba1e-9e829c30f45b\") " pod="openstack/nova-cell0-conductor-0" Jan 29 07:04:33 crc kubenswrapper[4826]: I0129 07:04:33.958791 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz7sh\" (UniqueName: \"kubernetes.io/projected/229ff3bd-fac5-4bb5-ba1e-9e829c30f45b-kube-api-access-zz7sh\") pod \"nova-cell0-conductor-0\" (UID: \"229ff3bd-fac5-4bb5-ba1e-9e829c30f45b\") " pod="openstack/nova-cell0-conductor-0" Jan 29 07:04:34 crc kubenswrapper[4826]: I0129 07:04:34.137739 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 07:04:34 crc kubenswrapper[4826]: I0129 07:04:34.636092 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71636ef8-0a41-494e-bd82-703d109ce02d","Type":"ContainerStarted","Data":"aa6690f5903dc1fde0e2caa25b55d22e8ba87e4a164e835836e30ac4b77f074d"} Jan 29 07:04:34 crc kubenswrapper[4826]: I0129 07:04:34.702634 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 07:04:34 crc kubenswrapper[4826]: W0129 07:04:34.706211 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod229ff3bd_fac5_4bb5_ba1e_9e829c30f45b.slice/crio-7b42d0f18b68219a5e0a2872c3674527f3e5e36f99156a6a740edeb9484f5f8e WatchSource:0}: Error finding container 7b42d0f18b68219a5e0a2872c3674527f3e5e36f99156a6a740edeb9484f5f8e: Status 404 returned error can't find the container with id 7b42d0f18b68219a5e0a2872c3674527f3e5e36f99156a6a740edeb9484f5f8e Jan 29 07:04:35 crc kubenswrapper[4826]: I0129 07:04:35.649498 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"229ff3bd-fac5-4bb5-ba1e-9e829c30f45b","Type":"ContainerStarted","Data":"50af99804abe9cf6e6f83b558ffc855d23611fff8a7850a26d5eccd9f4c9d2b2"} Jan 29 07:04:35 crc kubenswrapper[4826]: I0129 07:04:35.649989 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"229ff3bd-fac5-4bb5-ba1e-9e829c30f45b","Type":"ContainerStarted","Data":"7b42d0f18b68219a5e0a2872c3674527f3e5e36f99156a6a740edeb9484f5f8e"} Jan 29 07:04:35 crc kubenswrapper[4826]: I0129 07:04:35.651479 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 29 07:04:35 crc kubenswrapper[4826]: I0129 07:04:35.673842 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"71636ef8-0a41-494e-bd82-703d109ce02d","Type":"ContainerStarted","Data":"d401a9ac6019a08c84060d8d8758b0be5997d04fd6ff50f44e226e51449b24bc"} Jan 29 07:04:35 crc kubenswrapper[4826]: I0129 07:04:35.673884 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71636ef8-0a41-494e-bd82-703d109ce02d","Type":"ContainerStarted","Data":"646954d7e0b3243e82c3bd8eec6ddb3ac4613db0296061f328a2301ec4bb2830"} Jan 29 07:04:35 crc kubenswrapper[4826]: I0129 07:04:35.689987 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.6899665390000003 podStartE2EDuration="2.689966539s" podCreationTimestamp="2026-01-29 07:04:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:04:35.678731604 +0000 UTC m=+1259.540524683" watchObservedRunningTime="2026-01-29 07:04:35.689966539 +0000 UTC m=+1259.551759618" Jan 29 07:04:37 crc kubenswrapper[4826]: I0129 07:04:37.704205 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71636ef8-0a41-494e-bd82-703d109ce02d","Type":"ContainerStarted","Data":"55cb22a65b41a39b6f7f68c1e1e2c47e3bd3ae66d73b5b7be134773beef7bd46"} Jan 29 07:04:37 crc kubenswrapper[4826]: I0129 07:04:37.705094 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 07:04:37 crc kubenswrapper[4826]: I0129 07:04:37.747699 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.354389064 podStartE2EDuration="6.747669s" podCreationTimestamp="2026-01-29 07:04:31 +0000 UTC" firstStartedPulling="2026-01-29 07:04:32.529480532 +0000 UTC m=+1256.391273641" lastFinishedPulling="2026-01-29 07:04:36.922760498 +0000 UTC m=+1260.784553577" observedRunningTime="2026-01-29 07:04:37.739995858 +0000 UTC 
m=+1261.601789007" watchObservedRunningTime="2026-01-29 07:04:37.747669 +0000 UTC m=+1261.609462079" Jan 29 07:04:39 crc kubenswrapper[4826]: I0129 07:04:39.179506 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 29 07:04:39 crc kubenswrapper[4826]: I0129 07:04:39.754956 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-8rj98"] Jan 29 07:04:39 crc kubenswrapper[4826]: I0129 07:04:39.757220 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8rj98" Jan 29 07:04:39 crc kubenswrapper[4826]: I0129 07:04:39.777276 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 29 07:04:39 crc kubenswrapper[4826]: I0129 07:04:39.778289 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 29 07:04:39 crc kubenswrapper[4826]: I0129 07:04:39.813664 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-8rj98"] Jan 29 07:04:39 crc kubenswrapper[4826]: I0129 07:04:39.816946 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c-scripts\") pod \"nova-cell0-cell-mapping-8rj98\" (UID: \"1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c\") " pod="openstack/nova-cell0-cell-mapping-8rj98" Jan 29 07:04:39 crc kubenswrapper[4826]: I0129 07:04:39.817090 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rf7m\" (UniqueName: \"kubernetes.io/projected/1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c-kube-api-access-6rf7m\") pod \"nova-cell0-cell-mapping-8rj98\" (UID: \"1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c\") " pod="openstack/nova-cell0-cell-mapping-8rj98" Jan 29 07:04:39 crc kubenswrapper[4826]: I0129 
07:04:39.817280 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c-config-data\") pod \"nova-cell0-cell-mapping-8rj98\" (UID: \"1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c\") " pod="openstack/nova-cell0-cell-mapping-8rj98" Jan 29 07:04:39 crc kubenswrapper[4826]: I0129 07:04:39.817543 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8rj98\" (UID: \"1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c\") " pod="openstack/nova-cell0-cell-mapping-8rj98" Jan 29 07:04:39 crc kubenswrapper[4826]: I0129 07:04:39.939502 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 07:04:39 crc kubenswrapper[4826]: I0129 07:04:39.941152 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 07:04:39 crc kubenswrapper[4826]: I0129 07:04:39.945372 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rf7m\" (UniqueName: \"kubernetes.io/projected/1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c-kube-api-access-6rf7m\") pod \"nova-cell0-cell-mapping-8rj98\" (UID: \"1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c\") " pod="openstack/nova-cell0-cell-mapping-8rj98" Jan 29 07:04:39 crc kubenswrapper[4826]: I0129 07:04:39.945596 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c-config-data\") pod \"nova-cell0-cell-mapping-8rj98\" (UID: \"1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c\") " pod="openstack/nova-cell0-cell-mapping-8rj98" Jan 29 07:04:39 crc kubenswrapper[4826]: I0129 07:04:39.945838 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8rj98\" (UID: \"1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c\") " pod="openstack/nova-cell0-cell-mapping-8rj98" Jan 29 07:04:39 crc kubenswrapper[4826]: I0129 07:04:39.946009 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c-scripts\") pod \"nova-cell0-cell-mapping-8rj98\" (UID: \"1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c\") " pod="openstack/nova-cell0-cell-mapping-8rj98" Jan 29 07:04:39 crc kubenswrapper[4826]: I0129 07:04:39.952639 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 07:04:39 crc kubenswrapper[4826]: I0129 07:04:39.961187 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c-config-data\") pod \"nova-cell0-cell-mapping-8rj98\" (UID: \"1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c\") " pod="openstack/nova-cell0-cell-mapping-8rj98" Jan 29 07:04:39 crc kubenswrapper[4826]: I0129 07:04:39.962369 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8rj98\" (UID: \"1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c\") " pod="openstack/nova-cell0-cell-mapping-8rj98" Jan 29 07:04:39 crc kubenswrapper[4826]: I0129 07:04:39.964958 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 07:04:39 crc kubenswrapper[4826]: I0129 07:04:39.969876 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c-scripts\") pod \"nova-cell0-cell-mapping-8rj98\" (UID: \"1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c\") " pod="openstack/nova-cell0-cell-mapping-8rj98" Jan 29 07:04:39 crc kubenswrapper[4826]: I0129 07:04:39.974608 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 07:04:39 crc kubenswrapper[4826]: I0129 07:04:39.975895 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 07:04:39 crc kubenswrapper[4826]: I0129 07:04:39.982100 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 07:04:39 crc kubenswrapper[4826]: I0129 07:04:39.983124 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 07:04:39 crc kubenswrapper[4826]: I0129 07:04:39.999516 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rf7m\" (UniqueName: \"kubernetes.io/projected/1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c-kube-api-access-6rf7m\") pod \"nova-cell0-cell-mapping-8rj98\" (UID: \"1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c\") " pod="openstack/nova-cell0-cell-mapping-8rj98" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.047221 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a-logs\") pod \"nova-api-0\" (UID: \"71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a\") " pod="openstack/nova-api-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.047267 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a\") " pod="openstack/nova-api-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.047309 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl5vd\" (UniqueName: \"kubernetes.io/projected/c83e466a-631c-4461-b605-858b6b774743-kube-api-access-fl5vd\") pod \"nova-scheduler-0\" (UID: \"c83e466a-631c-4461-b605-858b6b774743\") " pod="openstack/nova-scheduler-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.047348 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c83e466a-631c-4461-b605-858b6b774743-config-data\") pod \"nova-scheduler-0\" (UID: \"c83e466a-631c-4461-b605-858b6b774743\") " pod="openstack/nova-scheduler-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.047401 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a-config-data\") pod \"nova-api-0\" (UID: \"71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a\") " pod="openstack/nova-api-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.047430 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c83e466a-631c-4461-b605-858b6b774743-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c83e466a-631c-4461-b605-858b6b774743\") " pod="openstack/nova-scheduler-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.047483 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86l6v\" (UniqueName: \"kubernetes.io/projected/71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a-kube-api-access-86l6v\") pod \"nova-api-0\" (UID: \"71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a\") " pod="openstack/nova-api-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.118447 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8rj98" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.145833 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.147636 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.149280 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c83e466a-631c-4461-b605-858b6b774743-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c83e466a-631c-4461-b605-858b6b774743\") " pod="openstack/nova-scheduler-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.149436 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86l6v\" (UniqueName: \"kubernetes.io/projected/71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a-kube-api-access-86l6v\") pod \"nova-api-0\" (UID: \"71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a\") " pod="openstack/nova-api-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.149545 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a-logs\") pod \"nova-api-0\" (UID: \"71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a\") " pod="openstack/nova-api-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.149623 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a\") " pod="openstack/nova-api-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.149693 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl5vd\" (UniqueName: \"kubernetes.io/projected/c83e466a-631c-4461-b605-858b6b774743-kube-api-access-fl5vd\") pod \"nova-scheduler-0\" (UID: \"c83e466a-631c-4461-b605-858b6b774743\") " pod="openstack/nova-scheduler-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.149785 4826 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c83e466a-631c-4461-b605-858b6b774743-config-data\") pod \"nova-scheduler-0\" (UID: \"c83e466a-631c-4461-b605-858b6b774743\") " pod="openstack/nova-scheduler-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.149891 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a-config-data\") pod \"nova-api-0\" (UID: \"71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a\") " pod="openstack/nova-api-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.154684 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a-logs\") pod \"nova-api-0\" (UID: \"71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a\") " pod="openstack/nova-api-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.160997 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.168563 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a\") " pod="openstack/nova-api-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.170783 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.171457 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c83e466a-631c-4461-b605-858b6b774743-config-data\") pod \"nova-scheduler-0\" (UID: \"c83e466a-631c-4461-b605-858b6b774743\") " pod="openstack/nova-scheduler-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.188145 
4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a-config-data\") pod \"nova-api-0\" (UID: \"71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a\") " pod="openstack/nova-api-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.188637 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c83e466a-631c-4461-b605-858b6b774743-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c83e466a-631c-4461-b605-858b6b774743\") " pod="openstack/nova-scheduler-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.192645 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl5vd\" (UniqueName: \"kubernetes.io/projected/c83e466a-631c-4461-b605-858b6b774743-kube-api-access-fl5vd\") pod \"nova-scheduler-0\" (UID: \"c83e466a-631c-4461-b605-858b6b774743\") " pod="openstack/nova-scheduler-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.196097 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86l6v\" (UniqueName: \"kubernetes.io/projected/71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a-kube-api-access-86l6v\") pod \"nova-api-0\" (UID: \"71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a\") " pod="openstack/nova-api-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.209412 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.211233 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.219726 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.252477 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.253690 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a6da74e-889a-4df9-a68f-8626f0780e17-config-data\") pod \"nova-metadata-0\" (UID: \"6a6da74e-889a-4df9-a68f-8626f0780e17\") " pod="openstack/nova-metadata-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.254233 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0405bde-d981-4583-80b3-b26b643dc78f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a0405bde-d981-4583-80b3-b26b643dc78f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.254367 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a6da74e-889a-4df9-a68f-8626f0780e17-logs\") pod \"nova-metadata-0\" (UID: \"6a6da74e-889a-4df9-a68f-8626f0780e17\") " pod="openstack/nova-metadata-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.254470 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0405bde-d981-4583-80b3-b26b643dc78f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a0405bde-d981-4583-80b3-b26b643dc78f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.254706 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a6da74e-889a-4df9-a68f-8626f0780e17-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6a6da74e-889a-4df9-a68f-8626f0780e17\") " pod="openstack/nova-metadata-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.254783 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p6tc\" (UniqueName: \"kubernetes.io/projected/6a6da74e-889a-4df9-a68f-8626f0780e17-kube-api-access-2p6tc\") pod \"nova-metadata-0\" (UID: \"6a6da74e-889a-4df9-a68f-8626f0780e17\") " pod="openstack/nova-metadata-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.254894 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hc52\" (UniqueName: \"kubernetes.io/projected/a0405bde-d981-4583-80b3-b26b643dc78f-kube-api-access-2hc52\") pod \"nova-cell1-novncproxy-0\" (UID: \"a0405bde-d981-4583-80b3-b26b643dc78f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.279370 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-ptc9r"] Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.281060 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-ptc9r" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.300778 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-ptc9r"] Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.359435 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64dc8206-670d-44be-887b-48378dbf5a30-ovsdbserver-sb\") pod \"dnsmasq-dns-557bbc7df7-ptc9r\" (UID: \"64dc8206-670d-44be-887b-48378dbf5a30\") " pod="openstack/dnsmasq-dns-557bbc7df7-ptc9r" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.359529 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a6da74e-889a-4df9-a68f-8626f0780e17-logs\") pod \"nova-metadata-0\" (UID: \"6a6da74e-889a-4df9-a68f-8626f0780e17\") " pod="openstack/nova-metadata-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.359604 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0405bde-d981-4583-80b3-b26b643dc78f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a0405bde-d981-4583-80b3-b26b643dc78f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.359710 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a6da74e-889a-4df9-a68f-8626f0780e17-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6a6da74e-889a-4df9-a68f-8626f0780e17\") " pod="openstack/nova-metadata-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.359738 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb4c9\" (UniqueName: 
\"kubernetes.io/projected/64dc8206-670d-44be-887b-48378dbf5a30-kube-api-access-fb4c9\") pod \"dnsmasq-dns-557bbc7df7-ptc9r\" (UID: \"64dc8206-670d-44be-887b-48378dbf5a30\") " pod="openstack/dnsmasq-dns-557bbc7df7-ptc9r" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.359965 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64dc8206-670d-44be-887b-48378dbf5a30-dns-svc\") pod \"dnsmasq-dns-557bbc7df7-ptc9r\" (UID: \"64dc8206-670d-44be-887b-48378dbf5a30\") " pod="openstack/dnsmasq-dns-557bbc7df7-ptc9r" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.360016 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p6tc\" (UniqueName: \"kubernetes.io/projected/6a6da74e-889a-4df9-a68f-8626f0780e17-kube-api-access-2p6tc\") pod \"nova-metadata-0\" (UID: \"6a6da74e-889a-4df9-a68f-8626f0780e17\") " pod="openstack/nova-metadata-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.360075 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64dc8206-670d-44be-887b-48378dbf5a30-ovsdbserver-nb\") pod \"dnsmasq-dns-557bbc7df7-ptc9r\" (UID: \"64dc8206-670d-44be-887b-48378dbf5a30\") " pod="openstack/dnsmasq-dns-557bbc7df7-ptc9r" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.360125 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hc52\" (UniqueName: \"kubernetes.io/projected/a0405bde-d981-4583-80b3-b26b643dc78f-kube-api-access-2hc52\") pod \"nova-cell1-novncproxy-0\" (UID: \"a0405bde-d981-4583-80b3-b26b643dc78f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.360156 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/64dc8206-670d-44be-887b-48378dbf5a30-config\") pod \"dnsmasq-dns-557bbc7df7-ptc9r\" (UID: \"64dc8206-670d-44be-887b-48378dbf5a30\") " pod="openstack/dnsmasq-dns-557bbc7df7-ptc9r" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.360208 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64dc8206-670d-44be-887b-48378dbf5a30-dns-swift-storage-0\") pod \"dnsmasq-dns-557bbc7df7-ptc9r\" (UID: \"64dc8206-670d-44be-887b-48378dbf5a30\") " pod="openstack/dnsmasq-dns-557bbc7df7-ptc9r" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.360232 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a6da74e-889a-4df9-a68f-8626f0780e17-config-data\") pod \"nova-metadata-0\" (UID: \"6a6da74e-889a-4df9-a68f-8626f0780e17\") " pod="openstack/nova-metadata-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.360278 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0405bde-d981-4583-80b3-b26b643dc78f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a0405bde-d981-4583-80b3-b26b643dc78f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.360797 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a6da74e-889a-4df9-a68f-8626f0780e17-logs\") pod \"nova-metadata-0\" (UID: \"6a6da74e-889a-4df9-a68f-8626f0780e17\") " pod="openstack/nova-metadata-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.364978 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a6da74e-889a-4df9-a68f-8626f0780e17-config-data\") pod \"nova-metadata-0\" (UID: 
\"6a6da74e-889a-4df9-a68f-8626f0780e17\") " pod="openstack/nova-metadata-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.381232 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a6da74e-889a-4df9-a68f-8626f0780e17-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6a6da74e-889a-4df9-a68f-8626f0780e17\") " pod="openstack/nova-metadata-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.381793 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0405bde-d981-4583-80b3-b26b643dc78f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a0405bde-d981-4583-80b3-b26b643dc78f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.384198 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.387694 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0405bde-d981-4583-80b3-b26b643dc78f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a0405bde-d981-4583-80b3-b26b643dc78f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.396908 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p6tc\" (UniqueName: \"kubernetes.io/projected/6a6da74e-889a-4df9-a68f-8626f0780e17-kube-api-access-2p6tc\") pod \"nova-metadata-0\" (UID: \"6a6da74e-889a-4df9-a68f-8626f0780e17\") " pod="openstack/nova-metadata-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.400380 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hc52\" (UniqueName: \"kubernetes.io/projected/a0405bde-d981-4583-80b3-b26b643dc78f-kube-api-access-2hc52\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"a0405bde-d981-4583-80b3-b26b643dc78f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.401090 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.466378 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb4c9\" (UniqueName: \"kubernetes.io/projected/64dc8206-670d-44be-887b-48378dbf5a30-kube-api-access-fb4c9\") pod \"dnsmasq-dns-557bbc7df7-ptc9r\" (UID: \"64dc8206-670d-44be-887b-48378dbf5a30\") " pod="openstack/dnsmasq-dns-557bbc7df7-ptc9r" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.466495 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64dc8206-670d-44be-887b-48378dbf5a30-dns-svc\") pod \"dnsmasq-dns-557bbc7df7-ptc9r\" (UID: \"64dc8206-670d-44be-887b-48378dbf5a30\") " pod="openstack/dnsmasq-dns-557bbc7df7-ptc9r" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.466603 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64dc8206-670d-44be-887b-48378dbf5a30-ovsdbserver-nb\") pod \"dnsmasq-dns-557bbc7df7-ptc9r\" (UID: \"64dc8206-670d-44be-887b-48378dbf5a30\") " pod="openstack/dnsmasq-dns-557bbc7df7-ptc9r" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.466676 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64dc8206-670d-44be-887b-48378dbf5a30-config\") pod \"dnsmasq-dns-557bbc7df7-ptc9r\" (UID: \"64dc8206-670d-44be-887b-48378dbf5a30\") " pod="openstack/dnsmasq-dns-557bbc7df7-ptc9r" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.466754 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/64dc8206-670d-44be-887b-48378dbf5a30-dns-swift-storage-0\") pod \"dnsmasq-dns-557bbc7df7-ptc9r\" (UID: \"64dc8206-670d-44be-887b-48378dbf5a30\") " pod="openstack/dnsmasq-dns-557bbc7df7-ptc9r" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.466932 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64dc8206-670d-44be-887b-48378dbf5a30-ovsdbserver-sb\") pod \"dnsmasq-dns-557bbc7df7-ptc9r\" (UID: \"64dc8206-670d-44be-887b-48378dbf5a30\") " pod="openstack/dnsmasq-dns-557bbc7df7-ptc9r" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.468593 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64dc8206-670d-44be-887b-48378dbf5a30-ovsdbserver-sb\") pod \"dnsmasq-dns-557bbc7df7-ptc9r\" (UID: \"64dc8206-670d-44be-887b-48378dbf5a30\") " pod="openstack/dnsmasq-dns-557bbc7df7-ptc9r" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.468925 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64dc8206-670d-44be-887b-48378dbf5a30-config\") pod \"dnsmasq-dns-557bbc7df7-ptc9r\" (UID: \"64dc8206-670d-44be-887b-48378dbf5a30\") " pod="openstack/dnsmasq-dns-557bbc7df7-ptc9r" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.469011 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64dc8206-670d-44be-887b-48378dbf5a30-ovsdbserver-nb\") pod \"dnsmasq-dns-557bbc7df7-ptc9r\" (UID: \"64dc8206-670d-44be-887b-48378dbf5a30\") " pod="openstack/dnsmasq-dns-557bbc7df7-ptc9r" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.469590 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64dc8206-670d-44be-887b-48378dbf5a30-dns-svc\") pod 
\"dnsmasq-dns-557bbc7df7-ptc9r\" (UID: \"64dc8206-670d-44be-887b-48378dbf5a30\") " pod="openstack/dnsmasq-dns-557bbc7df7-ptc9r" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.474149 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64dc8206-670d-44be-887b-48378dbf5a30-dns-swift-storage-0\") pod \"dnsmasq-dns-557bbc7df7-ptc9r\" (UID: \"64dc8206-670d-44be-887b-48378dbf5a30\") " pod="openstack/dnsmasq-dns-557bbc7df7-ptc9r" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.493987 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb4c9\" (UniqueName: \"kubernetes.io/projected/64dc8206-670d-44be-887b-48378dbf5a30-kube-api-access-fb4c9\") pod \"dnsmasq-dns-557bbc7df7-ptc9r\" (UID: \"64dc8206-670d-44be-887b-48378dbf5a30\") " pod="openstack/dnsmasq-dns-557bbc7df7-ptc9r" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.614618 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.628320 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.640689 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-ptc9r" Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.798659 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-8rj98"] Jan 29 07:04:40 crc kubenswrapper[4826]: I0129 07:04:40.972103 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 07:04:41 crc kubenswrapper[4826]: I0129 07:04:41.060562 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-x8x4z"] Jan 29 07:04:41 crc kubenswrapper[4826]: I0129 07:04:41.062291 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-x8x4z" Jan 29 07:04:41 crc kubenswrapper[4826]: I0129 07:04:41.068873 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 29 07:04:41 crc kubenswrapper[4826]: I0129 07:04:41.069103 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 29 07:04:41 crc kubenswrapper[4826]: I0129 07:04:41.100347 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-x8x4z"] Jan 29 07:04:41 crc kubenswrapper[4826]: I0129 07:04:41.136448 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 07:04:41 crc kubenswrapper[4826]: I0129 07:04:41.195647 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acc386ea-3778-41d9-9c92-3ebc8f96f700-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-x8x4z\" (UID: \"acc386ea-3778-41d9-9c92-3ebc8f96f700\") " pod="openstack/nova-cell1-conductor-db-sync-x8x4z" Jan 29 07:04:41 crc kubenswrapper[4826]: I0129 07:04:41.196285 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-9xppf\" (UniqueName: \"kubernetes.io/projected/acc386ea-3778-41d9-9c92-3ebc8f96f700-kube-api-access-9xppf\") pod \"nova-cell1-conductor-db-sync-x8x4z\" (UID: \"acc386ea-3778-41d9-9c92-3ebc8f96f700\") " pod="openstack/nova-cell1-conductor-db-sync-x8x4z" Jan 29 07:04:41 crc kubenswrapper[4826]: I0129 07:04:41.196345 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acc386ea-3778-41d9-9c92-3ebc8f96f700-scripts\") pod \"nova-cell1-conductor-db-sync-x8x4z\" (UID: \"acc386ea-3778-41d9-9c92-3ebc8f96f700\") " pod="openstack/nova-cell1-conductor-db-sync-x8x4z" Jan 29 07:04:41 crc kubenswrapper[4826]: I0129 07:04:41.196643 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acc386ea-3778-41d9-9c92-3ebc8f96f700-config-data\") pod \"nova-cell1-conductor-db-sync-x8x4z\" (UID: \"acc386ea-3778-41d9-9c92-3ebc8f96f700\") " pod="openstack/nova-cell1-conductor-db-sync-x8x4z" Jan 29 07:04:41 crc kubenswrapper[4826]: I0129 07:04:41.202956 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 07:04:41 crc kubenswrapper[4826]: I0129 07:04:41.227747 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 07:04:41 crc kubenswrapper[4826]: W0129 07:04:41.240902 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a6da74e_889a_4df9_a68f_8626f0780e17.slice/crio-4503010d43119672832b2819cd65956b643a68e9955d1dcc1f1ee9ce3be54a44 WatchSource:0}: Error finding container 4503010d43119672832b2819cd65956b643a68e9955d1dcc1f1ee9ce3be54a44: Status 404 returned error can't find the container with id 4503010d43119672832b2819cd65956b643a68e9955d1dcc1f1ee9ce3be54a44 Jan 29 07:04:41 crc kubenswrapper[4826]: I0129 07:04:41.298883 
4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acc386ea-3778-41d9-9c92-3ebc8f96f700-config-data\") pod \"nova-cell1-conductor-db-sync-x8x4z\" (UID: \"acc386ea-3778-41d9-9c92-3ebc8f96f700\") " pod="openstack/nova-cell1-conductor-db-sync-x8x4z" Jan 29 07:04:41 crc kubenswrapper[4826]: I0129 07:04:41.299009 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acc386ea-3778-41d9-9c92-3ebc8f96f700-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-x8x4z\" (UID: \"acc386ea-3778-41d9-9c92-3ebc8f96f700\") " pod="openstack/nova-cell1-conductor-db-sync-x8x4z" Jan 29 07:04:41 crc kubenswrapper[4826]: I0129 07:04:41.299053 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xppf\" (UniqueName: \"kubernetes.io/projected/acc386ea-3778-41d9-9c92-3ebc8f96f700-kube-api-access-9xppf\") pod \"nova-cell1-conductor-db-sync-x8x4z\" (UID: \"acc386ea-3778-41d9-9c92-3ebc8f96f700\") " pod="openstack/nova-cell1-conductor-db-sync-x8x4z" Jan 29 07:04:41 crc kubenswrapper[4826]: I0129 07:04:41.299075 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acc386ea-3778-41d9-9c92-3ebc8f96f700-scripts\") pod \"nova-cell1-conductor-db-sync-x8x4z\" (UID: \"acc386ea-3778-41d9-9c92-3ebc8f96f700\") " pod="openstack/nova-cell1-conductor-db-sync-x8x4z" Jan 29 07:04:41 crc kubenswrapper[4826]: I0129 07:04:41.306022 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acc386ea-3778-41d9-9c92-3ebc8f96f700-scripts\") pod \"nova-cell1-conductor-db-sync-x8x4z\" (UID: \"acc386ea-3778-41d9-9c92-3ebc8f96f700\") " pod="openstack/nova-cell1-conductor-db-sync-x8x4z" Jan 29 07:04:41 crc kubenswrapper[4826]: I0129 07:04:41.307108 4826 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acc386ea-3778-41d9-9c92-3ebc8f96f700-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-x8x4z\" (UID: \"acc386ea-3778-41d9-9c92-3ebc8f96f700\") " pod="openstack/nova-cell1-conductor-db-sync-x8x4z" Jan 29 07:04:41 crc kubenswrapper[4826]: I0129 07:04:41.307625 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acc386ea-3778-41d9-9c92-3ebc8f96f700-config-data\") pod \"nova-cell1-conductor-db-sync-x8x4z\" (UID: \"acc386ea-3778-41d9-9c92-3ebc8f96f700\") " pod="openstack/nova-cell1-conductor-db-sync-x8x4z" Jan 29 07:04:41 crc kubenswrapper[4826]: I0129 07:04:41.321770 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xppf\" (UniqueName: \"kubernetes.io/projected/acc386ea-3778-41d9-9c92-3ebc8f96f700-kube-api-access-9xppf\") pod \"nova-cell1-conductor-db-sync-x8x4z\" (UID: \"acc386ea-3778-41d9-9c92-3ebc8f96f700\") " pod="openstack/nova-cell1-conductor-db-sync-x8x4z" Jan 29 07:04:41 crc kubenswrapper[4826]: W0129 07:04:41.375139 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64dc8206_670d_44be_887b_48378dbf5a30.slice/crio-ea587ee6d4fc68a79aa6cbdc996c8c1ba1acb034c159a6a3ae91107437608c11 WatchSource:0}: Error finding container ea587ee6d4fc68a79aa6cbdc996c8c1ba1acb034c159a6a3ae91107437608c11: Status 404 returned error can't find the container with id ea587ee6d4fc68a79aa6cbdc996c8c1ba1acb034c159a6a3ae91107437608c11 Jan 29 07:04:41 crc kubenswrapper[4826]: I0129 07:04:41.375894 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-ptc9r"] Jan 29 07:04:41 crc kubenswrapper[4826]: I0129 07:04:41.405708 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-x8x4z" Jan 29 07:04:41 crc kubenswrapper[4826]: I0129 07:04:41.755568 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a","Type":"ContainerStarted","Data":"f37f03c8ccb495a689291b312e1c3709d869440a024a62c8ca7ca45f03cbd455"} Jan 29 07:04:41 crc kubenswrapper[4826]: I0129 07:04:41.758961 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c83e466a-631c-4461-b605-858b6b774743","Type":"ContainerStarted","Data":"2effe84c0d20580c11ee8ff6a05172f698634a46d418ef01577f9f9cc53c60f4"} Jan 29 07:04:41 crc kubenswrapper[4826]: I0129 07:04:41.760659 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a6da74e-889a-4df9-a68f-8626f0780e17","Type":"ContainerStarted","Data":"4503010d43119672832b2819cd65956b643a68e9955d1dcc1f1ee9ce3be54a44"} Jan 29 07:04:41 crc kubenswrapper[4826]: I0129 07:04:41.762875 4826 generic.go:334] "Generic (PLEG): container finished" podID="64dc8206-670d-44be-887b-48378dbf5a30" containerID="7f7c3928cc87fd53ead82bd27079afad9b5240c322abf1aee250b3ba09da9edb" exitCode=0 Jan 29 07:04:41 crc kubenswrapper[4826]: I0129 07:04:41.762925 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-ptc9r" event={"ID":"64dc8206-670d-44be-887b-48378dbf5a30","Type":"ContainerDied","Data":"7f7c3928cc87fd53ead82bd27079afad9b5240c322abf1aee250b3ba09da9edb"} Jan 29 07:04:41 crc kubenswrapper[4826]: I0129 07:04:41.762946 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-ptc9r" event={"ID":"64dc8206-670d-44be-887b-48378dbf5a30","Type":"ContainerStarted","Data":"ea587ee6d4fc68a79aa6cbdc996c8c1ba1acb034c159a6a3ae91107437608c11"} Jan 29 07:04:41 crc kubenswrapper[4826]: I0129 07:04:41.773573 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-cell-mapping-8rj98" event={"ID":"1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c","Type":"ContainerStarted","Data":"c6920cb0da3a23ded2fd24cbb3779d77c24038298111549583e3ace69ab21a37"} Jan 29 07:04:41 crc kubenswrapper[4826]: I0129 07:04:41.773621 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8rj98" event={"ID":"1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c","Type":"ContainerStarted","Data":"9b2f0358aefc12a4367ba2684a775d200b3fb7fec43cf3216735650754e5f483"} Jan 29 07:04:41 crc kubenswrapper[4826]: I0129 07:04:41.777517 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a0405bde-d981-4583-80b3-b26b643dc78f","Type":"ContainerStarted","Data":"813494d2eebd6afd46c02c84501ea4e2f7c5d5fb352f5585dd943edb5493557d"} Jan 29 07:04:41 crc kubenswrapper[4826]: I0129 07:04:41.810164 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-8rj98" podStartSLOduration=2.8101472579999998 podStartE2EDuration="2.810147258s" podCreationTimestamp="2026-01-29 07:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:04:41.800194037 +0000 UTC m=+1265.661987096" watchObservedRunningTime="2026-01-29 07:04:41.810147258 +0000 UTC m=+1265.671940327" Jan 29 07:04:41 crc kubenswrapper[4826]: I0129 07:04:41.931696 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-x8x4z"] Jan 29 07:04:42 crc kubenswrapper[4826]: I0129 07:04:42.796086 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-x8x4z" event={"ID":"acc386ea-3778-41d9-9c92-3ebc8f96f700","Type":"ContainerStarted","Data":"6873be83fc4a55210d906801985bb6bb4c3941be3e78f955610aac735d9256bd"} Jan 29 07:04:42 crc kubenswrapper[4826]: I0129 07:04:42.796517 4826 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-cell1-conductor-db-sync-x8x4z" event={"ID":"acc386ea-3778-41d9-9c92-3ebc8f96f700","Type":"ContainerStarted","Data":"10aae8a3af3800d3b38a3cce7edbf40e9b2eecbce02c1693ed43b903aae53b47"} Jan 29 07:04:42 crc kubenswrapper[4826]: I0129 07:04:42.798308 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-ptc9r" event={"ID":"64dc8206-670d-44be-887b-48378dbf5a30","Type":"ContainerStarted","Data":"4d01dedb9003a15f270c28f7b19b8fe2be70b4ca3372d8950cfac8e113b150a4"} Jan 29 07:04:42 crc kubenswrapper[4826]: I0129 07:04:42.798367 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-557bbc7df7-ptc9r" Jan 29 07:04:42 crc kubenswrapper[4826]: I0129 07:04:42.817260 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-x8x4z" podStartSLOduration=1.8172405569999999 podStartE2EDuration="1.817240557s" podCreationTimestamp="2026-01-29 07:04:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:04:42.810329075 +0000 UTC m=+1266.672122144" watchObservedRunningTime="2026-01-29 07:04:42.817240557 +0000 UTC m=+1266.679033616" Jan 29 07:04:42 crc kubenswrapper[4826]: I0129 07:04:42.833185 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-557bbc7df7-ptc9r" podStartSLOduration=2.833164626 podStartE2EDuration="2.833164626s" podCreationTimestamp="2026-01-29 07:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:04:42.82612408 +0000 UTC m=+1266.687917149" watchObservedRunningTime="2026-01-29 07:04:42.833164626 +0000 UTC m=+1266.694957695" Jan 29 07:04:43 crc kubenswrapper[4826]: I0129 07:04:43.896942 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 07:04:43 crc kubenswrapper[4826]: I0129 07:04:43.925576 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 07:04:44 crc kubenswrapper[4826]: I0129 07:04:44.868346 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a","Type":"ContainerStarted","Data":"9bd866a3dd235a21752e0c3262aef5fa22bc67f7059dfe445c6b4738796eecfa"} Jan 29 07:04:44 crc kubenswrapper[4826]: I0129 07:04:44.872505 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c83e466a-631c-4461-b605-858b6b774743","Type":"ContainerStarted","Data":"4d3ad9ca60e6238554cf32581fbdd213d22311ea88430734aaa51348d2be7d8b"} Jan 29 07:04:44 crc kubenswrapper[4826]: I0129 07:04:44.874809 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a6da74e-889a-4df9-a68f-8626f0780e17","Type":"ContainerStarted","Data":"2ec3f576bc83cb49cf11473121c916417c0344700dcb29b07012fe45da331fd0"} Jan 29 07:04:44 crc kubenswrapper[4826]: I0129 07:04:44.886918 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a0405bde-d981-4583-80b3-b26b643dc78f","Type":"ContainerStarted","Data":"1c3ff3cbd16ba6d294469d9e6ea35aa7aff48c5f39e6d8bb4a89a2715f91009c"} Jan 29 07:04:44 crc kubenswrapper[4826]: I0129 07:04:44.887030 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="a0405bde-d981-4583-80b3-b26b643dc78f" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://1c3ff3cbd16ba6d294469d9e6ea35aa7aff48c5f39e6d8bb4a89a2715f91009c" gracePeriod=30 Jan 29 07:04:44 crc kubenswrapper[4826]: I0129 07:04:44.907451 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.695148051 
podStartE2EDuration="5.907422122s" podCreationTimestamp="2026-01-29 07:04:39 +0000 UTC" firstStartedPulling="2026-01-29 07:04:41.152799196 +0000 UTC m=+1265.014592265" lastFinishedPulling="2026-01-29 07:04:44.365073277 +0000 UTC m=+1268.226866336" observedRunningTime="2026-01-29 07:04:44.894945574 +0000 UTC m=+1268.756738643" watchObservedRunningTime="2026-01-29 07:04:44.907422122 +0000 UTC m=+1268.769215201" Jan 29 07:04:44 crc kubenswrapper[4826]: I0129 07:04:44.919389 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.777483478 podStartE2EDuration="4.919365446s" podCreationTimestamp="2026-01-29 07:04:40 +0000 UTC" firstStartedPulling="2026-01-29 07:04:41.224828192 +0000 UTC m=+1265.086621261" lastFinishedPulling="2026-01-29 07:04:44.36671016 +0000 UTC m=+1268.228503229" observedRunningTime="2026-01-29 07:04:44.915099184 +0000 UTC m=+1268.776892253" watchObservedRunningTime="2026-01-29 07:04:44.919365446 +0000 UTC m=+1268.781158515" Jan 29 07:04:45 crc kubenswrapper[4826]: I0129 07:04:45.402942 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 07:04:45 crc kubenswrapper[4826]: I0129 07:04:45.615167 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 29 07:04:45 crc kubenswrapper[4826]: I0129 07:04:45.931249 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a","Type":"ContainerStarted","Data":"8e18ff4b605bbbdc0e151943f63abc7f6ba212c2efcee70e8c8b8bf32db86d8a"} Jan 29 07:04:45 crc kubenswrapper[4826]: I0129 07:04:45.937356 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a6da74e-889a-4df9-a68f-8626f0780e17","Type":"ContainerStarted","Data":"e67a6175a6619f4c3a7cf46ec6afbd75e4d22509aa0d5982a1c302e7812b5fe4"} Jan 29 07:04:45 crc 
kubenswrapper[4826]: I0129 07:04:45.937540 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6a6da74e-889a-4df9-a68f-8626f0780e17" containerName="nova-metadata-log" containerID="cri-o://2ec3f576bc83cb49cf11473121c916417c0344700dcb29b07012fe45da331fd0" gracePeriod=30 Jan 29 07:04:45 crc kubenswrapper[4826]: I0129 07:04:45.937573 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6a6da74e-889a-4df9-a68f-8626f0780e17" containerName="nova-metadata-metadata" containerID="cri-o://e67a6175a6619f4c3a7cf46ec6afbd75e4d22509aa0d5982a1c302e7812b5fe4" gracePeriod=30 Jan 29 07:04:45 crc kubenswrapper[4826]: I0129 07:04:45.967876 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.5937899460000002 podStartE2EDuration="6.967850664s" podCreationTimestamp="2026-01-29 07:04:39 +0000 UTC" firstStartedPulling="2026-01-29 07:04:41.002024648 +0000 UTC m=+1264.863817717" lastFinishedPulling="2026-01-29 07:04:44.376085366 +0000 UTC m=+1268.237878435" observedRunningTime="2026-01-29 07:04:45.959597197 +0000 UTC m=+1269.821390306" watchObservedRunningTime="2026-01-29 07:04:45.967850664 +0000 UTC m=+1269.829643753" Jan 29 07:04:45 crc kubenswrapper[4826]: I0129 07:04:45.991867 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.86865864 podStartE2EDuration="5.991812805s" podCreationTimestamp="2026-01-29 07:04:40 +0000 UTC" firstStartedPulling="2026-01-29 07:04:41.243628247 +0000 UTC m=+1265.105421316" lastFinishedPulling="2026-01-29 07:04:44.366782402 +0000 UTC m=+1268.228575481" observedRunningTime="2026-01-29 07:04:45.985456367 +0000 UTC m=+1269.847249436" watchObservedRunningTime="2026-01-29 07:04:45.991812805 +0000 UTC m=+1269.853605884" Jan 29 07:04:46 crc kubenswrapper[4826]: I0129 07:04:46.562567 4826 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 07:04:46 crc kubenswrapper[4826]: I0129 07:04:46.740968 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a6da74e-889a-4df9-a68f-8626f0780e17-combined-ca-bundle\") pod \"6a6da74e-889a-4df9-a68f-8626f0780e17\" (UID: \"6a6da74e-889a-4df9-a68f-8626f0780e17\") " Jan 29 07:04:46 crc kubenswrapper[4826]: I0129 07:04:46.741379 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a6da74e-889a-4df9-a68f-8626f0780e17-logs\") pod \"6a6da74e-889a-4df9-a68f-8626f0780e17\" (UID: \"6a6da74e-889a-4df9-a68f-8626f0780e17\") " Jan 29 07:04:46 crc kubenswrapper[4826]: I0129 07:04:46.741525 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a6da74e-889a-4df9-a68f-8626f0780e17-config-data\") pod \"6a6da74e-889a-4df9-a68f-8626f0780e17\" (UID: \"6a6da74e-889a-4df9-a68f-8626f0780e17\") " Jan 29 07:04:46 crc kubenswrapper[4826]: I0129 07:04:46.741579 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2p6tc\" (UniqueName: \"kubernetes.io/projected/6a6da74e-889a-4df9-a68f-8626f0780e17-kube-api-access-2p6tc\") pod \"6a6da74e-889a-4df9-a68f-8626f0780e17\" (UID: \"6a6da74e-889a-4df9-a68f-8626f0780e17\") " Jan 29 07:04:46 crc kubenswrapper[4826]: I0129 07:04:46.743134 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a6da74e-889a-4df9-a68f-8626f0780e17-logs" (OuterVolumeSpecName: "logs") pod "6a6da74e-889a-4df9-a68f-8626f0780e17" (UID: "6a6da74e-889a-4df9-a68f-8626f0780e17"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:04:46 crc kubenswrapper[4826]: I0129 07:04:46.755733 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a6da74e-889a-4df9-a68f-8626f0780e17-kube-api-access-2p6tc" (OuterVolumeSpecName: "kube-api-access-2p6tc") pod "6a6da74e-889a-4df9-a68f-8626f0780e17" (UID: "6a6da74e-889a-4df9-a68f-8626f0780e17"). InnerVolumeSpecName "kube-api-access-2p6tc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:04:46 crc kubenswrapper[4826]: I0129 07:04:46.777821 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a6da74e-889a-4df9-a68f-8626f0780e17-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a6da74e-889a-4df9-a68f-8626f0780e17" (UID: "6a6da74e-889a-4df9-a68f-8626f0780e17"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:04:46 crc kubenswrapper[4826]: I0129 07:04:46.788922 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a6da74e-889a-4df9-a68f-8626f0780e17-config-data" (OuterVolumeSpecName: "config-data") pod "6a6da74e-889a-4df9-a68f-8626f0780e17" (UID: "6a6da74e-889a-4df9-a68f-8626f0780e17"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:04:46 crc kubenswrapper[4826]: I0129 07:04:46.843962 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a6da74e-889a-4df9-a68f-8626f0780e17-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:46 crc kubenswrapper[4826]: I0129 07:04:46.844015 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2p6tc\" (UniqueName: \"kubernetes.io/projected/6a6da74e-889a-4df9-a68f-8626f0780e17-kube-api-access-2p6tc\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:46 crc kubenswrapper[4826]: I0129 07:04:46.844034 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a6da74e-889a-4df9-a68f-8626f0780e17-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:46 crc kubenswrapper[4826]: I0129 07:04:46.844049 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a6da74e-889a-4df9-a68f-8626f0780e17-logs\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:46 crc kubenswrapper[4826]: I0129 07:04:46.949168 4826 generic.go:334] "Generic (PLEG): container finished" podID="6a6da74e-889a-4df9-a68f-8626f0780e17" containerID="e67a6175a6619f4c3a7cf46ec6afbd75e4d22509aa0d5982a1c302e7812b5fe4" exitCode=0 Jan 29 07:04:46 crc kubenswrapper[4826]: I0129 07:04:46.950618 4826 generic.go:334] "Generic (PLEG): container finished" podID="6a6da74e-889a-4df9-a68f-8626f0780e17" containerID="2ec3f576bc83cb49cf11473121c916417c0344700dcb29b07012fe45da331fd0" exitCode=143 Jan 29 07:04:46 crc kubenswrapper[4826]: I0129 07:04:46.949250 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a6da74e-889a-4df9-a68f-8626f0780e17","Type":"ContainerDied","Data":"e67a6175a6619f4c3a7cf46ec6afbd75e4d22509aa0d5982a1c302e7812b5fe4"} Jan 29 07:04:46 crc kubenswrapper[4826]: I0129 07:04:46.950773 4826 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a6da74e-889a-4df9-a68f-8626f0780e17","Type":"ContainerDied","Data":"2ec3f576bc83cb49cf11473121c916417c0344700dcb29b07012fe45da331fd0"} Jan 29 07:04:46 crc kubenswrapper[4826]: I0129 07:04:46.950799 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a6da74e-889a-4df9-a68f-8626f0780e17","Type":"ContainerDied","Data":"4503010d43119672832b2819cd65956b643a68e9955d1dcc1f1ee9ce3be54a44"} Jan 29 07:04:46 crc kubenswrapper[4826]: I0129 07:04:46.950822 4826 scope.go:117] "RemoveContainer" containerID="e67a6175a6619f4c3a7cf46ec6afbd75e4d22509aa0d5982a1c302e7812b5fe4" Jan 29 07:04:46 crc kubenswrapper[4826]: I0129 07:04:46.949268 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 07:04:46 crc kubenswrapper[4826]: I0129 07:04:46.980921 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 07:04:46 crc kubenswrapper[4826]: I0129 07:04:46.981964 4826 scope.go:117] "RemoveContainer" containerID="2ec3f576bc83cb49cf11473121c916417c0344700dcb29b07012fe45da331fd0" Jan 29 07:04:47 crc kubenswrapper[4826]: I0129 07:04:47.024973 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 07:04:47 crc kubenswrapper[4826]: I0129 07:04:47.025349 4826 scope.go:117] "RemoveContainer" containerID="e67a6175a6619f4c3a7cf46ec6afbd75e4d22509aa0d5982a1c302e7812b5fe4" Jan 29 07:04:47 crc kubenswrapper[4826]: E0129 07:04:47.026004 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e67a6175a6619f4c3a7cf46ec6afbd75e4d22509aa0d5982a1c302e7812b5fe4\": container with ID starting with e67a6175a6619f4c3a7cf46ec6afbd75e4d22509aa0d5982a1c302e7812b5fe4 not found: ID does not exist" 
containerID="e67a6175a6619f4c3a7cf46ec6afbd75e4d22509aa0d5982a1c302e7812b5fe4" Jan 29 07:04:47 crc kubenswrapper[4826]: I0129 07:04:47.026078 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e67a6175a6619f4c3a7cf46ec6afbd75e4d22509aa0d5982a1c302e7812b5fe4"} err="failed to get container status \"e67a6175a6619f4c3a7cf46ec6afbd75e4d22509aa0d5982a1c302e7812b5fe4\": rpc error: code = NotFound desc = could not find container \"e67a6175a6619f4c3a7cf46ec6afbd75e4d22509aa0d5982a1c302e7812b5fe4\": container with ID starting with e67a6175a6619f4c3a7cf46ec6afbd75e4d22509aa0d5982a1c302e7812b5fe4 not found: ID does not exist" Jan 29 07:04:47 crc kubenswrapper[4826]: I0129 07:04:47.026117 4826 scope.go:117] "RemoveContainer" containerID="2ec3f576bc83cb49cf11473121c916417c0344700dcb29b07012fe45da331fd0" Jan 29 07:04:47 crc kubenswrapper[4826]: E0129 07:04:47.026576 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ec3f576bc83cb49cf11473121c916417c0344700dcb29b07012fe45da331fd0\": container with ID starting with 2ec3f576bc83cb49cf11473121c916417c0344700dcb29b07012fe45da331fd0 not found: ID does not exist" containerID="2ec3f576bc83cb49cf11473121c916417c0344700dcb29b07012fe45da331fd0" Jan 29 07:04:47 crc kubenswrapper[4826]: I0129 07:04:47.026633 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ec3f576bc83cb49cf11473121c916417c0344700dcb29b07012fe45da331fd0"} err="failed to get container status \"2ec3f576bc83cb49cf11473121c916417c0344700dcb29b07012fe45da331fd0\": rpc error: code = NotFound desc = could not find container \"2ec3f576bc83cb49cf11473121c916417c0344700dcb29b07012fe45da331fd0\": container with ID starting with 2ec3f576bc83cb49cf11473121c916417c0344700dcb29b07012fe45da331fd0 not found: ID does not exist" Jan 29 07:04:47 crc kubenswrapper[4826]: I0129 07:04:47.026657 4826 scope.go:117] 
"RemoveContainer" containerID="e67a6175a6619f4c3a7cf46ec6afbd75e4d22509aa0d5982a1c302e7812b5fe4" Jan 29 07:04:47 crc kubenswrapper[4826]: I0129 07:04:47.028151 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e67a6175a6619f4c3a7cf46ec6afbd75e4d22509aa0d5982a1c302e7812b5fe4"} err="failed to get container status \"e67a6175a6619f4c3a7cf46ec6afbd75e4d22509aa0d5982a1c302e7812b5fe4\": rpc error: code = NotFound desc = could not find container \"e67a6175a6619f4c3a7cf46ec6afbd75e4d22509aa0d5982a1c302e7812b5fe4\": container with ID starting with e67a6175a6619f4c3a7cf46ec6afbd75e4d22509aa0d5982a1c302e7812b5fe4 not found: ID does not exist" Jan 29 07:04:47 crc kubenswrapper[4826]: I0129 07:04:47.028210 4826 scope.go:117] "RemoveContainer" containerID="2ec3f576bc83cb49cf11473121c916417c0344700dcb29b07012fe45da331fd0" Jan 29 07:04:47 crc kubenswrapper[4826]: I0129 07:04:47.028611 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ec3f576bc83cb49cf11473121c916417c0344700dcb29b07012fe45da331fd0"} err="failed to get container status \"2ec3f576bc83cb49cf11473121c916417c0344700dcb29b07012fe45da331fd0\": rpc error: code = NotFound desc = could not find container \"2ec3f576bc83cb49cf11473121c916417c0344700dcb29b07012fe45da331fd0\": container with ID starting with 2ec3f576bc83cb49cf11473121c916417c0344700dcb29b07012fe45da331fd0 not found: ID does not exist" Jan 29 07:04:47 crc kubenswrapper[4826]: I0129 07:04:47.051434 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 07:04:47 crc kubenswrapper[4826]: E0129 07:04:47.052144 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a6da74e-889a-4df9-a68f-8626f0780e17" containerName="nova-metadata-log" Jan 29 07:04:47 crc kubenswrapper[4826]: I0129 07:04:47.052166 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a6da74e-889a-4df9-a68f-8626f0780e17" 
containerName="nova-metadata-log" Jan 29 07:04:47 crc kubenswrapper[4826]: E0129 07:04:47.052205 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a6da74e-889a-4df9-a68f-8626f0780e17" containerName="nova-metadata-metadata" Jan 29 07:04:47 crc kubenswrapper[4826]: I0129 07:04:47.052216 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a6da74e-889a-4df9-a68f-8626f0780e17" containerName="nova-metadata-metadata" Jan 29 07:04:47 crc kubenswrapper[4826]: I0129 07:04:47.052571 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a6da74e-889a-4df9-a68f-8626f0780e17" containerName="nova-metadata-log" Jan 29 07:04:47 crc kubenswrapper[4826]: I0129 07:04:47.052591 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a6da74e-889a-4df9-a68f-8626f0780e17" containerName="nova-metadata-metadata" Jan 29 07:04:47 crc kubenswrapper[4826]: I0129 07:04:47.054255 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 07:04:47 crc kubenswrapper[4826]: I0129 07:04:47.057012 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 29 07:04:47 crc kubenswrapper[4826]: I0129 07:04:47.060005 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 07:04:47 crc kubenswrapper[4826]: I0129 07:04:47.064572 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 07:04:47 crc kubenswrapper[4826]: I0129 07:04:47.072497 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cd5b7da-b930-4016-beeb-1d3e78def0eb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7cd5b7da-b930-4016-beeb-1d3e78def0eb\") " pod="openstack/nova-metadata-0" Jan 29 07:04:47 crc kubenswrapper[4826]: I0129 07:04:47.073019 4826 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cd5b7da-b930-4016-beeb-1d3e78def0eb-config-data\") pod \"nova-metadata-0\" (UID: \"7cd5b7da-b930-4016-beeb-1d3e78def0eb\") " pod="openstack/nova-metadata-0" Jan 29 07:04:47 crc kubenswrapper[4826]: I0129 07:04:47.073270 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cd5b7da-b930-4016-beeb-1d3e78def0eb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7cd5b7da-b930-4016-beeb-1d3e78def0eb\") " pod="openstack/nova-metadata-0" Jan 29 07:04:47 crc kubenswrapper[4826]: I0129 07:04:47.073954 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8zsj\" (UniqueName: \"kubernetes.io/projected/7cd5b7da-b930-4016-beeb-1d3e78def0eb-kube-api-access-j8zsj\") pod \"nova-metadata-0\" (UID: \"7cd5b7da-b930-4016-beeb-1d3e78def0eb\") " pod="openstack/nova-metadata-0" Jan 29 07:04:47 crc kubenswrapper[4826]: I0129 07:04:47.074073 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cd5b7da-b930-4016-beeb-1d3e78def0eb-logs\") pod \"nova-metadata-0\" (UID: \"7cd5b7da-b930-4016-beeb-1d3e78def0eb\") " pod="openstack/nova-metadata-0" Jan 29 07:04:47 crc kubenswrapper[4826]: I0129 07:04:47.175798 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8zsj\" (UniqueName: \"kubernetes.io/projected/7cd5b7da-b930-4016-beeb-1d3e78def0eb-kube-api-access-j8zsj\") pod \"nova-metadata-0\" (UID: \"7cd5b7da-b930-4016-beeb-1d3e78def0eb\") " pod="openstack/nova-metadata-0" Jan 29 07:04:47 crc kubenswrapper[4826]: I0129 07:04:47.175872 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/7cd5b7da-b930-4016-beeb-1d3e78def0eb-logs\") pod \"nova-metadata-0\" (UID: \"7cd5b7da-b930-4016-beeb-1d3e78def0eb\") " pod="openstack/nova-metadata-0" Jan 29 07:04:47 crc kubenswrapper[4826]: I0129 07:04:47.175911 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cd5b7da-b930-4016-beeb-1d3e78def0eb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7cd5b7da-b930-4016-beeb-1d3e78def0eb\") " pod="openstack/nova-metadata-0" Jan 29 07:04:47 crc kubenswrapper[4826]: I0129 07:04:47.176015 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cd5b7da-b930-4016-beeb-1d3e78def0eb-config-data\") pod \"nova-metadata-0\" (UID: \"7cd5b7da-b930-4016-beeb-1d3e78def0eb\") " pod="openstack/nova-metadata-0" Jan 29 07:04:47 crc kubenswrapper[4826]: I0129 07:04:47.176100 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cd5b7da-b930-4016-beeb-1d3e78def0eb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7cd5b7da-b930-4016-beeb-1d3e78def0eb\") " pod="openstack/nova-metadata-0" Jan 29 07:04:47 crc kubenswrapper[4826]: I0129 07:04:47.176438 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cd5b7da-b930-4016-beeb-1d3e78def0eb-logs\") pod \"nova-metadata-0\" (UID: \"7cd5b7da-b930-4016-beeb-1d3e78def0eb\") " pod="openstack/nova-metadata-0" Jan 29 07:04:47 crc kubenswrapper[4826]: I0129 07:04:47.180664 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cd5b7da-b930-4016-beeb-1d3e78def0eb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7cd5b7da-b930-4016-beeb-1d3e78def0eb\") " pod="openstack/nova-metadata-0" Jan 
29 07:04:47 crc kubenswrapper[4826]: I0129 07:04:47.180747 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cd5b7da-b930-4016-beeb-1d3e78def0eb-config-data\") pod \"nova-metadata-0\" (UID: \"7cd5b7da-b930-4016-beeb-1d3e78def0eb\") " pod="openstack/nova-metadata-0" Jan 29 07:04:47 crc kubenswrapper[4826]: I0129 07:04:47.181762 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cd5b7da-b930-4016-beeb-1d3e78def0eb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7cd5b7da-b930-4016-beeb-1d3e78def0eb\") " pod="openstack/nova-metadata-0" Jan 29 07:04:47 crc kubenswrapper[4826]: I0129 07:04:47.191347 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8zsj\" (UniqueName: \"kubernetes.io/projected/7cd5b7da-b930-4016-beeb-1d3e78def0eb-kube-api-access-j8zsj\") pod \"nova-metadata-0\" (UID: \"7cd5b7da-b930-4016-beeb-1d3e78def0eb\") " pod="openstack/nova-metadata-0" Jan 29 07:04:47 crc kubenswrapper[4826]: I0129 07:04:47.374925 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 07:04:47 crc kubenswrapper[4826]: I0129 07:04:47.897544 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 07:04:47 crc kubenswrapper[4826]: W0129 07:04:47.917235 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cd5b7da_b930_4016_beeb_1d3e78def0eb.slice/crio-e655d3d1120da50b2a38e4f3a026ff17b499802e41cc60536face1d88e6e3d0b WatchSource:0}: Error finding container e655d3d1120da50b2a38e4f3a026ff17b499802e41cc60536face1d88e6e3d0b: Status 404 returned error can't find the container with id e655d3d1120da50b2a38e4f3a026ff17b499802e41cc60536face1d88e6e3d0b Jan 29 07:04:47 crc kubenswrapper[4826]: I0129 07:04:47.963660 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7cd5b7da-b930-4016-beeb-1d3e78def0eb","Type":"ContainerStarted","Data":"e655d3d1120da50b2a38e4f3a026ff17b499802e41cc60536face1d88e6e3d0b"} Jan 29 07:04:48 crc kubenswrapper[4826]: I0129 07:04:48.824470 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a6da74e-889a-4df9-a68f-8626f0780e17" path="/var/lib/kubelet/pods/6a6da74e-889a-4df9-a68f-8626f0780e17/volumes" Jan 29 07:04:48 crc kubenswrapper[4826]: I0129 07:04:48.985320 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7cd5b7da-b930-4016-beeb-1d3e78def0eb","Type":"ContainerStarted","Data":"e3a0fb3e331d0868d94d164f84e8ce2f908db1c06620ca159bdaff7528ba4f07"} Jan 29 07:04:48 crc kubenswrapper[4826]: I0129 07:04:48.985932 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7cd5b7da-b930-4016-beeb-1d3e78def0eb","Type":"ContainerStarted","Data":"7b158ef13e692f5ba8961c401b821177ff87263890e89893cb676119a2c27c8f"} Jan 29 07:04:49 crc kubenswrapper[4826]: I0129 07:04:49.022173 4826 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.022139856 podStartE2EDuration="3.022139856s" podCreationTimestamp="2026-01-29 07:04:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:04:49.013999802 +0000 UTC m=+1272.875792901" watchObservedRunningTime="2026-01-29 07:04:49.022139856 +0000 UTC m=+1272.883932935" Jan 29 07:04:50 crc kubenswrapper[4826]: I0129 07:04:50.000786 4826 generic.go:334] "Generic (PLEG): container finished" podID="1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c" containerID="c6920cb0da3a23ded2fd24cbb3779d77c24038298111549583e3ace69ab21a37" exitCode=0 Jan 29 07:04:50 crc kubenswrapper[4826]: I0129 07:04:50.001330 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8rj98" event={"ID":"1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c","Type":"ContainerDied","Data":"c6920cb0da3a23ded2fd24cbb3779d77c24038298111549583e3ace69ab21a37"} Jan 29 07:04:50 crc kubenswrapper[4826]: I0129 07:04:50.383672 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 07:04:50 crc kubenswrapper[4826]: I0129 07:04:50.383797 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 07:04:50 crc kubenswrapper[4826]: I0129 07:04:50.402442 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 07:04:50 crc kubenswrapper[4826]: I0129 07:04:50.441395 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 07:04:50 crc kubenswrapper[4826]: I0129 07:04:50.644265 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-557bbc7df7-ptc9r" Jan 29 07:04:50 crc kubenswrapper[4826]: I0129 07:04:50.736523 4826 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-6jddr"] Jan 29 07:04:50 crc kubenswrapper[4826]: I0129 07:04:50.736885 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75bfc9b94f-6jddr" podUID="3313021d-864e-430f-acda-e4641355dd75" containerName="dnsmasq-dns" containerID="cri-o://02d4a65c2248f6909e6e137019714be7d0e23232d76da80adf01c2b971465ddc" gracePeriod=10 Jan 29 07:04:51 crc kubenswrapper[4826]: I0129 07:04:51.019221 4826 generic.go:334] "Generic (PLEG): container finished" podID="3313021d-864e-430f-acda-e4641355dd75" containerID="02d4a65c2248f6909e6e137019714be7d0e23232d76da80adf01c2b971465ddc" exitCode=0 Jan 29 07:04:51 crc kubenswrapper[4826]: I0129 07:04:51.019320 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-6jddr" event={"ID":"3313021d-864e-430f-acda-e4641355dd75","Type":"ContainerDied","Data":"02d4a65c2248f6909e6e137019714be7d0e23232d76da80adf01c2b971465ddc"} Jan 29 07:04:51 crc kubenswrapper[4826]: I0129 07:04:51.058025 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 07:04:51 crc kubenswrapper[4826]: I0129 07:04:51.313577 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-6jddr" Jan 29 07:04:51 crc kubenswrapper[4826]: I0129 07:04:51.466940 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.183:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 07:04:51 crc kubenswrapper[4826]: I0129 07:04:51.467235 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.183:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 07:04:51 crc kubenswrapper[4826]: I0129 07:04:51.491925 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3313021d-864e-430f-acda-e4641355dd75-dns-swift-storage-0\") pod \"3313021d-864e-430f-acda-e4641355dd75\" (UID: \"3313021d-864e-430f-acda-e4641355dd75\") " Jan 29 07:04:51 crc kubenswrapper[4826]: I0129 07:04:51.491995 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb2b2\" (UniqueName: \"kubernetes.io/projected/3313021d-864e-430f-acda-e4641355dd75-kube-api-access-sb2b2\") pod \"3313021d-864e-430f-acda-e4641355dd75\" (UID: \"3313021d-864e-430f-acda-e4641355dd75\") " Jan 29 07:04:51 crc kubenswrapper[4826]: I0129 07:04:51.492051 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3313021d-864e-430f-acda-e4641355dd75-dns-svc\") pod \"3313021d-864e-430f-acda-e4641355dd75\" (UID: \"3313021d-864e-430f-acda-e4641355dd75\") " Jan 29 07:04:51 crc kubenswrapper[4826]: I0129 07:04:51.492122 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3313021d-864e-430f-acda-e4641355dd75-ovsdbserver-sb\") pod \"3313021d-864e-430f-acda-e4641355dd75\" (UID: \"3313021d-864e-430f-acda-e4641355dd75\") " Jan 29 07:04:51 crc kubenswrapper[4826]: I0129 07:04:51.492187 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3313021d-864e-430f-acda-e4641355dd75-ovsdbserver-nb\") pod \"3313021d-864e-430f-acda-e4641355dd75\" (UID: \"3313021d-864e-430f-acda-e4641355dd75\") " Jan 29 07:04:51 crc kubenswrapper[4826]: I0129 07:04:51.492249 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3313021d-864e-430f-acda-e4641355dd75-config\") pod \"3313021d-864e-430f-acda-e4641355dd75\" (UID: \"3313021d-864e-430f-acda-e4641355dd75\") " Jan 29 07:04:51 crc kubenswrapper[4826]: I0129 07:04:51.519559 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3313021d-864e-430f-acda-e4641355dd75-kube-api-access-sb2b2" (OuterVolumeSpecName: "kube-api-access-sb2b2") pod "3313021d-864e-430f-acda-e4641355dd75" (UID: "3313021d-864e-430f-acda-e4641355dd75"). InnerVolumeSpecName "kube-api-access-sb2b2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:04:51 crc kubenswrapper[4826]: I0129 07:04:51.545883 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8rj98" Jan 29 07:04:51 crc kubenswrapper[4826]: I0129 07:04:51.557264 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3313021d-864e-430f-acda-e4641355dd75-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3313021d-864e-430f-acda-e4641355dd75" (UID: "3313021d-864e-430f-acda-e4641355dd75"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:04:51 crc kubenswrapper[4826]: I0129 07:04:51.590832 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3313021d-864e-430f-acda-e4641355dd75-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3313021d-864e-430f-acda-e4641355dd75" (UID: "3313021d-864e-430f-acda-e4641355dd75"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:04:51 crc kubenswrapper[4826]: I0129 07:04:51.591195 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3313021d-864e-430f-acda-e4641355dd75-config" (OuterVolumeSpecName: "config") pod "3313021d-864e-430f-acda-e4641355dd75" (UID: "3313021d-864e-430f-acda-e4641355dd75"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:04:51 crc kubenswrapper[4826]: I0129 07:04:51.596322 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3313021d-864e-430f-acda-e4641355dd75-config\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:51 crc kubenswrapper[4826]: I0129 07:04:51.596341 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb2b2\" (UniqueName: \"kubernetes.io/projected/3313021d-864e-430f-acda-e4641355dd75-kube-api-access-sb2b2\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:51 crc kubenswrapper[4826]: I0129 07:04:51.596350 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3313021d-864e-430f-acda-e4641355dd75-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:51 crc kubenswrapper[4826]: I0129 07:04:51.596359 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3313021d-864e-430f-acda-e4641355dd75-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:51 crc kubenswrapper[4826]: I0129 
07:04:51.606767 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3313021d-864e-430f-acda-e4641355dd75-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3313021d-864e-430f-acda-e4641355dd75" (UID: "3313021d-864e-430f-acda-e4641355dd75"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:04:51 crc kubenswrapper[4826]: I0129 07:04:51.608051 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3313021d-864e-430f-acda-e4641355dd75-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3313021d-864e-430f-acda-e4641355dd75" (UID: "3313021d-864e-430f-acda-e4641355dd75"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:04:51 crc kubenswrapper[4826]: I0129 07:04:51.698010 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c-scripts\") pod \"1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c\" (UID: \"1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c\") " Jan 29 07:04:51 crc kubenswrapper[4826]: I0129 07:04:51.698067 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rf7m\" (UniqueName: \"kubernetes.io/projected/1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c-kube-api-access-6rf7m\") pod \"1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c\" (UID: \"1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c\") " Jan 29 07:04:51 crc kubenswrapper[4826]: I0129 07:04:51.698227 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c-combined-ca-bundle\") pod \"1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c\" (UID: \"1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c\") " Jan 29 07:04:51 crc kubenswrapper[4826]: I0129 07:04:51.698642 4826 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c-config-data\") pod \"1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c\" (UID: \"1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c\") " Jan 29 07:04:51 crc kubenswrapper[4826]: I0129 07:04:51.699128 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3313021d-864e-430f-acda-e4641355dd75-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:51 crc kubenswrapper[4826]: I0129 07:04:51.699182 4826 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3313021d-864e-430f-acda-e4641355dd75-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:51 crc kubenswrapper[4826]: I0129 07:04:51.701508 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c-scripts" (OuterVolumeSpecName: "scripts") pod "1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c" (UID: "1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:04:51 crc kubenswrapper[4826]: I0129 07:04:51.709565 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c-kube-api-access-6rf7m" (OuterVolumeSpecName: "kube-api-access-6rf7m") pod "1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c" (UID: "1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c"). InnerVolumeSpecName "kube-api-access-6rf7m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:04:51 crc kubenswrapper[4826]: I0129 07:04:51.735842 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c-config-data" (OuterVolumeSpecName: "config-data") pod "1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c" (UID: "1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:04:51 crc kubenswrapper[4826]: I0129 07:04:51.753483 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c" (UID: "1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:04:51 crc kubenswrapper[4826]: I0129 07:04:51.800547 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:51 crc kubenswrapper[4826]: I0129 07:04:51.800588 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rf7m\" (UniqueName: \"kubernetes.io/projected/1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c-kube-api-access-6rf7m\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:51 crc kubenswrapper[4826]: I0129 07:04:51.800604 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:51 crc kubenswrapper[4826]: I0129 07:04:51.800617 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c-config-data\") on node \"crc\" DevicePath \"\"" Jan 
29 07:04:52 crc kubenswrapper[4826]: I0129 07:04:52.031647 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8rj98" event={"ID":"1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c","Type":"ContainerDied","Data":"9b2f0358aefc12a4367ba2684a775d200b3fb7fec43cf3216735650754e5f483"} Jan 29 07:04:52 crc kubenswrapper[4826]: I0129 07:04:52.031714 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b2f0358aefc12a4367ba2684a775d200b3fb7fec43cf3216735650754e5f483" Jan 29 07:04:52 crc kubenswrapper[4826]: I0129 07:04:52.031844 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8rj98" Jan 29 07:04:52 crc kubenswrapper[4826]: I0129 07:04:52.042289 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-6jddr" event={"ID":"3313021d-864e-430f-acda-e4641355dd75","Type":"ContainerDied","Data":"a8c034e222578c6a1716797a7b81422350d30a6d28b6fc673d56ce6b7985fa71"} Jan 29 07:04:52 crc kubenswrapper[4826]: I0129 07:04:52.042363 4826 scope.go:117] "RemoveContainer" containerID="02d4a65c2248f6909e6e137019714be7d0e23232d76da80adf01c2b971465ddc" Jan 29 07:04:52 crc kubenswrapper[4826]: I0129 07:04:52.042398 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-6jddr" Jan 29 07:04:52 crc kubenswrapper[4826]: I0129 07:04:52.084681 4826 scope.go:117] "RemoveContainer" containerID="5f878ff6a21842a4ece617c1339449f3b4ff0d535f8c0ab4b4081e7c617e27f1" Jan 29 07:04:52 crc kubenswrapper[4826]: I0129 07:04:52.101432 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-6jddr"] Jan 29 07:04:52 crc kubenswrapper[4826]: I0129 07:04:52.110793 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-6jddr"] Jan 29 07:04:52 crc kubenswrapper[4826]: I0129 07:04:52.235580 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 07:04:52 crc kubenswrapper[4826]: I0129 07:04:52.236442 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a" containerName="nova-api-api" containerID="cri-o://8e18ff4b605bbbdc0e151943f63abc7f6ba212c2efcee70e8c8b8bf32db86d8a" gracePeriod=30 Jan 29 07:04:52 crc kubenswrapper[4826]: I0129 07:04:52.235934 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a" containerName="nova-api-log" containerID="cri-o://9bd866a3dd235a21752e0c3262aef5fa22bc67f7059dfe445c6b4738796eecfa" gracePeriod=30 Jan 29 07:04:52 crc kubenswrapper[4826]: I0129 07:04:52.261342 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 07:04:52 crc kubenswrapper[4826]: I0129 07:04:52.276046 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 07:04:52 crc kubenswrapper[4826]: I0129 07:04:52.276566 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7cd5b7da-b930-4016-beeb-1d3e78def0eb" containerName="nova-metadata-log" 
containerID="cri-o://7b158ef13e692f5ba8961c401b821177ff87263890e89893cb676119a2c27c8f" gracePeriod=30 Jan 29 07:04:52 crc kubenswrapper[4826]: I0129 07:04:52.277162 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7cd5b7da-b930-4016-beeb-1d3e78def0eb" containerName="nova-metadata-metadata" containerID="cri-o://e3a0fb3e331d0868d94d164f84e8ce2f908db1c06620ca159bdaff7528ba4f07" gracePeriod=30 Jan 29 07:04:52 crc kubenswrapper[4826]: I0129 07:04:52.404521 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 07:04:52 crc kubenswrapper[4826]: I0129 07:04:52.404612 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 07:04:52 crc kubenswrapper[4826]: I0129 07:04:52.787921 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 07:04:52 crc kubenswrapper[4826]: I0129 07:04:52.822346 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3313021d-864e-430f-acda-e4641355dd75" path="/var/lib/kubelet/pods/3313021d-864e-430f-acda-e4641355dd75/volumes" Jan 29 07:04:52 crc kubenswrapper[4826]: I0129 07:04:52.929691 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cd5b7da-b930-4016-beeb-1d3e78def0eb-logs\") pod \"7cd5b7da-b930-4016-beeb-1d3e78def0eb\" (UID: \"7cd5b7da-b930-4016-beeb-1d3e78def0eb\") " Jan 29 07:04:52 crc kubenswrapper[4826]: I0129 07:04:52.930172 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cd5b7da-b930-4016-beeb-1d3e78def0eb-nova-metadata-tls-certs\") pod \"7cd5b7da-b930-4016-beeb-1d3e78def0eb\" (UID: \"7cd5b7da-b930-4016-beeb-1d3e78def0eb\") " Jan 29 07:04:52 crc kubenswrapper[4826]: I0129 07:04:52.930244 4826 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cd5b7da-b930-4016-beeb-1d3e78def0eb-config-data\") pod \"7cd5b7da-b930-4016-beeb-1d3e78def0eb\" (UID: \"7cd5b7da-b930-4016-beeb-1d3e78def0eb\") " Jan 29 07:04:52 crc kubenswrapper[4826]: I0129 07:04:52.930249 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cd5b7da-b930-4016-beeb-1d3e78def0eb-logs" (OuterVolumeSpecName: "logs") pod "7cd5b7da-b930-4016-beeb-1d3e78def0eb" (UID: "7cd5b7da-b930-4016-beeb-1d3e78def0eb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:04:52 crc kubenswrapper[4826]: I0129 07:04:52.930348 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cd5b7da-b930-4016-beeb-1d3e78def0eb-combined-ca-bundle\") pod \"7cd5b7da-b930-4016-beeb-1d3e78def0eb\" (UID: \"7cd5b7da-b930-4016-beeb-1d3e78def0eb\") " Jan 29 07:04:52 crc kubenswrapper[4826]: I0129 07:04:52.930403 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8zsj\" (UniqueName: \"kubernetes.io/projected/7cd5b7da-b930-4016-beeb-1d3e78def0eb-kube-api-access-j8zsj\") pod \"7cd5b7da-b930-4016-beeb-1d3e78def0eb\" (UID: \"7cd5b7da-b930-4016-beeb-1d3e78def0eb\") " Jan 29 07:04:52 crc kubenswrapper[4826]: I0129 07:04:52.932196 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cd5b7da-b930-4016-beeb-1d3e78def0eb-logs\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:52 crc kubenswrapper[4826]: I0129 07:04:52.936626 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cd5b7da-b930-4016-beeb-1d3e78def0eb-kube-api-access-j8zsj" (OuterVolumeSpecName: "kube-api-access-j8zsj") pod "7cd5b7da-b930-4016-beeb-1d3e78def0eb" (UID: 
"7cd5b7da-b930-4016-beeb-1d3e78def0eb"). InnerVolumeSpecName "kube-api-access-j8zsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:04:52 crc kubenswrapper[4826]: I0129 07:04:52.973110 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cd5b7da-b930-4016-beeb-1d3e78def0eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7cd5b7da-b930-4016-beeb-1d3e78def0eb" (UID: "7cd5b7da-b930-4016-beeb-1d3e78def0eb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:04:52 crc kubenswrapper[4826]: I0129 07:04:52.995839 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cd5b7da-b930-4016-beeb-1d3e78def0eb-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "7cd5b7da-b930-4016-beeb-1d3e78def0eb" (UID: "7cd5b7da-b930-4016-beeb-1d3e78def0eb"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.000749 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cd5b7da-b930-4016-beeb-1d3e78def0eb-config-data" (OuterVolumeSpecName: "config-data") pod "7cd5b7da-b930-4016-beeb-1d3e78def0eb" (UID: "7cd5b7da-b930-4016-beeb-1d3e78def0eb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.033973 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cd5b7da-b930-4016-beeb-1d3e78def0eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.034034 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8zsj\" (UniqueName: \"kubernetes.io/projected/7cd5b7da-b930-4016-beeb-1d3e78def0eb-kube-api-access-j8zsj\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.034048 4826 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cd5b7da-b930-4016-beeb-1d3e78def0eb-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.034062 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cd5b7da-b930-4016-beeb-1d3e78def0eb-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.058718 4826 generic.go:334] "Generic (PLEG): container finished" podID="71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a" containerID="9bd866a3dd235a21752e0c3262aef5fa22bc67f7059dfe445c6b4738796eecfa" exitCode=143 Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.058787 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a","Type":"ContainerDied","Data":"9bd866a3dd235a21752e0c3262aef5fa22bc67f7059dfe445c6b4738796eecfa"} Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.062109 4826 generic.go:334] "Generic (PLEG): container finished" podID="7cd5b7da-b930-4016-beeb-1d3e78def0eb" containerID="e3a0fb3e331d0868d94d164f84e8ce2f908db1c06620ca159bdaff7528ba4f07" exitCode=0 Jan 29 07:04:53 crc kubenswrapper[4826]: 
I0129 07:04:53.062141 4826 generic.go:334] "Generic (PLEG): container finished" podID="7cd5b7da-b930-4016-beeb-1d3e78def0eb" containerID="7b158ef13e692f5ba8961c401b821177ff87263890e89893cb676119a2c27c8f" exitCode=143 Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.062169 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.062207 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7cd5b7da-b930-4016-beeb-1d3e78def0eb","Type":"ContainerDied","Data":"e3a0fb3e331d0868d94d164f84e8ce2f908db1c06620ca159bdaff7528ba4f07"} Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.062252 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7cd5b7da-b930-4016-beeb-1d3e78def0eb","Type":"ContainerDied","Data":"7b158ef13e692f5ba8961c401b821177ff87263890e89893cb676119a2c27c8f"} Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.062267 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7cd5b7da-b930-4016-beeb-1d3e78def0eb","Type":"ContainerDied","Data":"e655d3d1120da50b2a38e4f3a026ff17b499802e41cc60536face1d88e6e3d0b"} Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.062287 4826 scope.go:117] "RemoveContainer" containerID="e3a0fb3e331d0868d94d164f84e8ce2f908db1c06620ca159bdaff7528ba4f07" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.062726 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="c83e466a-631c-4461-b605-858b6b774743" containerName="nova-scheduler-scheduler" containerID="cri-o://4d3ad9ca60e6238554cf32581fbdd213d22311ea88430734aaa51348d2be7d8b" gracePeriod=30 Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.099189 4826 scope.go:117] "RemoveContainer" 
containerID="7b158ef13e692f5ba8961c401b821177ff87263890e89893cb676119a2c27c8f" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.105637 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.118421 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.132004 4826 scope.go:117] "RemoveContainer" containerID="e3a0fb3e331d0868d94d164f84e8ce2f908db1c06620ca159bdaff7528ba4f07" Jan 29 07:04:53 crc kubenswrapper[4826]: E0129 07:04:53.132939 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3a0fb3e331d0868d94d164f84e8ce2f908db1c06620ca159bdaff7528ba4f07\": container with ID starting with e3a0fb3e331d0868d94d164f84e8ce2f908db1c06620ca159bdaff7528ba4f07 not found: ID does not exist" containerID="e3a0fb3e331d0868d94d164f84e8ce2f908db1c06620ca159bdaff7528ba4f07" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.132980 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3a0fb3e331d0868d94d164f84e8ce2f908db1c06620ca159bdaff7528ba4f07"} err="failed to get container status \"e3a0fb3e331d0868d94d164f84e8ce2f908db1c06620ca159bdaff7528ba4f07\": rpc error: code = NotFound desc = could not find container \"e3a0fb3e331d0868d94d164f84e8ce2f908db1c06620ca159bdaff7528ba4f07\": container with ID starting with e3a0fb3e331d0868d94d164f84e8ce2f908db1c06620ca159bdaff7528ba4f07 not found: ID does not exist" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.133007 4826 scope.go:117] "RemoveContainer" containerID="7b158ef13e692f5ba8961c401b821177ff87263890e89893cb676119a2c27c8f" Jan 29 07:04:53 crc kubenswrapper[4826]: E0129 07:04:53.133694 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7b158ef13e692f5ba8961c401b821177ff87263890e89893cb676119a2c27c8f\": container with ID starting with 7b158ef13e692f5ba8961c401b821177ff87263890e89893cb676119a2c27c8f not found: ID does not exist" containerID="7b158ef13e692f5ba8961c401b821177ff87263890e89893cb676119a2c27c8f" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.133718 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b158ef13e692f5ba8961c401b821177ff87263890e89893cb676119a2c27c8f"} err="failed to get container status \"7b158ef13e692f5ba8961c401b821177ff87263890e89893cb676119a2c27c8f\": rpc error: code = NotFound desc = could not find container \"7b158ef13e692f5ba8961c401b821177ff87263890e89893cb676119a2c27c8f\": container with ID starting with 7b158ef13e692f5ba8961c401b821177ff87263890e89893cb676119a2c27c8f not found: ID does not exist" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.133733 4826 scope.go:117] "RemoveContainer" containerID="e3a0fb3e331d0868d94d164f84e8ce2f908db1c06620ca159bdaff7528ba4f07" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.134039 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3a0fb3e331d0868d94d164f84e8ce2f908db1c06620ca159bdaff7528ba4f07"} err="failed to get container status \"e3a0fb3e331d0868d94d164f84e8ce2f908db1c06620ca159bdaff7528ba4f07\": rpc error: code = NotFound desc = could not find container \"e3a0fb3e331d0868d94d164f84e8ce2f908db1c06620ca159bdaff7528ba4f07\": container with ID starting with e3a0fb3e331d0868d94d164f84e8ce2f908db1c06620ca159bdaff7528ba4f07 not found: ID does not exist" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.134062 4826 scope.go:117] "RemoveContainer" containerID="7b158ef13e692f5ba8961c401b821177ff87263890e89893cb676119a2c27c8f" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.137959 4826 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7b158ef13e692f5ba8961c401b821177ff87263890e89893cb676119a2c27c8f"} err="failed to get container status \"7b158ef13e692f5ba8961c401b821177ff87263890e89893cb676119a2c27c8f\": rpc error: code = NotFound desc = could not find container \"7b158ef13e692f5ba8961c401b821177ff87263890e89893cb676119a2c27c8f\": container with ID starting with 7b158ef13e692f5ba8961c401b821177ff87263890e89893cb676119a2c27c8f not found: ID does not exist" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.143667 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 07:04:53 crc kubenswrapper[4826]: E0129 07:04:53.149015 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cd5b7da-b930-4016-beeb-1d3e78def0eb" containerName="nova-metadata-log" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.149141 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cd5b7da-b930-4016-beeb-1d3e78def0eb" containerName="nova-metadata-log" Jan 29 07:04:53 crc kubenswrapper[4826]: E0129 07:04:53.149160 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3313021d-864e-430f-acda-e4641355dd75" containerName="dnsmasq-dns" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.149170 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="3313021d-864e-430f-acda-e4641355dd75" containerName="dnsmasq-dns" Jan 29 07:04:53 crc kubenswrapper[4826]: E0129 07:04:53.149208 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3313021d-864e-430f-acda-e4641355dd75" containerName="init" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.149218 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="3313021d-864e-430f-acda-e4641355dd75" containerName="init" Jan 29 07:04:53 crc kubenswrapper[4826]: E0129 07:04:53.149238 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c" containerName="nova-manage" Jan 29 07:04:53 crc 
kubenswrapper[4826]: I0129 07:04:53.149247 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c" containerName="nova-manage" Jan 29 07:04:53 crc kubenswrapper[4826]: E0129 07:04:53.149282 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cd5b7da-b930-4016-beeb-1d3e78def0eb" containerName="nova-metadata-metadata" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.149350 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cd5b7da-b930-4016-beeb-1d3e78def0eb" containerName="nova-metadata-metadata" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.149589 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cd5b7da-b930-4016-beeb-1d3e78def0eb" containerName="nova-metadata-log" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.149618 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c" containerName="nova-manage" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.149640 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cd5b7da-b930-4016-beeb-1d3e78def0eb" containerName="nova-metadata-metadata" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.149657 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="3313021d-864e-430f-acda-e4641355dd75" containerName="dnsmasq-dns" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.151193 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.154403 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.157667 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.157775 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.237156 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpc7v\" (UniqueName: \"kubernetes.io/projected/f0df13a7-7612-4388-8ad5-b399b2305d4c-kube-api-access-qpc7v\") pod \"nova-metadata-0\" (UID: \"f0df13a7-7612-4388-8ad5-b399b2305d4c\") " pod="openstack/nova-metadata-0" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.237225 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0df13a7-7612-4388-8ad5-b399b2305d4c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f0df13a7-7612-4388-8ad5-b399b2305d4c\") " pod="openstack/nova-metadata-0" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.237262 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0df13a7-7612-4388-8ad5-b399b2305d4c-logs\") pod \"nova-metadata-0\" (UID: \"f0df13a7-7612-4388-8ad5-b399b2305d4c\") " pod="openstack/nova-metadata-0" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.237318 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0df13a7-7612-4388-8ad5-b399b2305d4c-config-data\") pod \"nova-metadata-0\" (UID: 
\"f0df13a7-7612-4388-8ad5-b399b2305d4c\") " pod="openstack/nova-metadata-0" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.237378 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0df13a7-7612-4388-8ad5-b399b2305d4c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f0df13a7-7612-4388-8ad5-b399b2305d4c\") " pod="openstack/nova-metadata-0" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.338532 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0df13a7-7612-4388-8ad5-b399b2305d4c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f0df13a7-7612-4388-8ad5-b399b2305d4c\") " pod="openstack/nova-metadata-0" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.338613 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpc7v\" (UniqueName: \"kubernetes.io/projected/f0df13a7-7612-4388-8ad5-b399b2305d4c-kube-api-access-qpc7v\") pod \"nova-metadata-0\" (UID: \"f0df13a7-7612-4388-8ad5-b399b2305d4c\") " pod="openstack/nova-metadata-0" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.338673 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0df13a7-7612-4388-8ad5-b399b2305d4c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f0df13a7-7612-4388-8ad5-b399b2305d4c\") " pod="openstack/nova-metadata-0" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.338718 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0df13a7-7612-4388-8ad5-b399b2305d4c-logs\") pod \"nova-metadata-0\" (UID: \"f0df13a7-7612-4388-8ad5-b399b2305d4c\") " pod="openstack/nova-metadata-0" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 
07:04:53.338769 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0df13a7-7612-4388-8ad5-b399b2305d4c-config-data\") pod \"nova-metadata-0\" (UID: \"f0df13a7-7612-4388-8ad5-b399b2305d4c\") " pod="openstack/nova-metadata-0" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.339565 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0df13a7-7612-4388-8ad5-b399b2305d4c-logs\") pod \"nova-metadata-0\" (UID: \"f0df13a7-7612-4388-8ad5-b399b2305d4c\") " pod="openstack/nova-metadata-0" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.343065 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0df13a7-7612-4388-8ad5-b399b2305d4c-config-data\") pod \"nova-metadata-0\" (UID: \"f0df13a7-7612-4388-8ad5-b399b2305d4c\") " pod="openstack/nova-metadata-0" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.343105 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0df13a7-7612-4388-8ad5-b399b2305d4c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f0df13a7-7612-4388-8ad5-b399b2305d4c\") " pod="openstack/nova-metadata-0" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.343632 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0df13a7-7612-4388-8ad5-b399b2305d4c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f0df13a7-7612-4388-8ad5-b399b2305d4c\") " pod="openstack/nova-metadata-0" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.373519 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpc7v\" (UniqueName: \"kubernetes.io/projected/f0df13a7-7612-4388-8ad5-b399b2305d4c-kube-api-access-qpc7v\") pod 
\"nova-metadata-0\" (UID: \"f0df13a7-7612-4388-8ad5-b399b2305d4c\") " pod="openstack/nova-metadata-0" Jan 29 07:04:53 crc kubenswrapper[4826]: I0129 07:04:53.473627 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 07:04:54 crc kubenswrapper[4826]: I0129 07:04:54.003041 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 07:04:54 crc kubenswrapper[4826]: W0129 07:04:54.005884 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0df13a7_7612_4388_8ad5_b399b2305d4c.slice/crio-895f752d91a7ad74c1b5578b94f3bfd42e7ce4f6cc071ff87724622c870a8742 WatchSource:0}: Error finding container 895f752d91a7ad74c1b5578b94f3bfd42e7ce4f6cc071ff87724622c870a8742: Status 404 returned error can't find the container with id 895f752d91a7ad74c1b5578b94f3bfd42e7ce4f6cc071ff87724622c870a8742 Jan 29 07:04:54 crc kubenswrapper[4826]: I0129 07:04:54.071711 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f0df13a7-7612-4388-8ad5-b399b2305d4c","Type":"ContainerStarted","Data":"895f752d91a7ad74c1b5578b94f3bfd42e7ce4f6cc071ff87724622c870a8742"} Jan 29 07:04:54 crc kubenswrapper[4826]: I0129 07:04:54.072719 4826 generic.go:334] "Generic (PLEG): container finished" podID="acc386ea-3778-41d9-9c92-3ebc8f96f700" containerID="6873be83fc4a55210d906801985bb6bb4c3941be3e78f955610aac735d9256bd" exitCode=0 Jan 29 07:04:54 crc kubenswrapper[4826]: I0129 07:04:54.072746 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-x8x4z" event={"ID":"acc386ea-3778-41d9-9c92-3ebc8f96f700","Type":"ContainerDied","Data":"6873be83fc4a55210d906801985bb6bb4c3941be3e78f955610aac735d9256bd"} Jan 29 07:04:54 crc kubenswrapper[4826]: I0129 07:04:54.842196 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="7cd5b7da-b930-4016-beeb-1d3e78def0eb" path="/var/lib/kubelet/pods/7cd5b7da-b930-4016-beeb-1d3e78def0eb/volumes" Jan 29 07:04:55 crc kubenswrapper[4826]: I0129 07:04:55.089738 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f0df13a7-7612-4388-8ad5-b399b2305d4c","Type":"ContainerStarted","Data":"b529d4058dbd8a282ad5ad44f5d2a4b74a2edc7b98e5444a205fa82df456d9b1"} Jan 29 07:04:55 crc kubenswrapper[4826]: I0129 07:04:55.089808 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f0df13a7-7612-4388-8ad5-b399b2305d4c","Type":"ContainerStarted","Data":"e66df0cba5d164702489ab88b03e84afe38292edadb59080fb4903202cc2879b"} Jan 29 07:04:55 crc kubenswrapper[4826]: I0129 07:04:55.127184 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.127161656 podStartE2EDuration="2.127161656s" podCreationTimestamp="2026-01-29 07:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:04:55.11894017 +0000 UTC m=+1278.980733279" watchObservedRunningTime="2026-01-29 07:04:55.127161656 +0000 UTC m=+1278.988954735" Jan 29 07:04:55 crc kubenswrapper[4826]: E0129 07:04:55.404766 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4d3ad9ca60e6238554cf32581fbdd213d22311ea88430734aaa51348d2be7d8b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 07:04:55 crc kubenswrapper[4826]: E0129 07:04:55.408963 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="4d3ad9ca60e6238554cf32581fbdd213d22311ea88430734aaa51348d2be7d8b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 07:04:55 crc kubenswrapper[4826]: E0129 07:04:55.410639 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4d3ad9ca60e6238554cf32581fbdd213d22311ea88430734aaa51348d2be7d8b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 07:04:55 crc kubenswrapper[4826]: E0129 07:04:55.410686 4826 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="c83e466a-631c-4461-b605-858b6b774743" containerName="nova-scheduler-scheduler" Jan 29 07:04:55 crc kubenswrapper[4826]: I0129 07:04:55.502775 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-x8x4z" Jan 29 07:04:55 crc kubenswrapper[4826]: I0129 07:04:55.687087 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acc386ea-3778-41d9-9c92-3ebc8f96f700-scripts\") pod \"acc386ea-3778-41d9-9c92-3ebc8f96f700\" (UID: \"acc386ea-3778-41d9-9c92-3ebc8f96f700\") " Jan 29 07:04:55 crc kubenswrapper[4826]: I0129 07:04:55.687181 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acc386ea-3778-41d9-9c92-3ebc8f96f700-config-data\") pod \"acc386ea-3778-41d9-9c92-3ebc8f96f700\" (UID: \"acc386ea-3778-41d9-9c92-3ebc8f96f700\") " Jan 29 07:04:55 crc kubenswrapper[4826]: I0129 07:04:55.687432 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acc386ea-3778-41d9-9c92-3ebc8f96f700-combined-ca-bundle\") pod \"acc386ea-3778-41d9-9c92-3ebc8f96f700\" (UID: \"acc386ea-3778-41d9-9c92-3ebc8f96f700\") " Jan 29 07:04:55 crc kubenswrapper[4826]: I0129 07:04:55.687527 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xppf\" (UniqueName: \"kubernetes.io/projected/acc386ea-3778-41d9-9c92-3ebc8f96f700-kube-api-access-9xppf\") pod \"acc386ea-3778-41d9-9c92-3ebc8f96f700\" (UID: \"acc386ea-3778-41d9-9c92-3ebc8f96f700\") " Jan 29 07:04:55 crc kubenswrapper[4826]: I0129 07:04:55.692601 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acc386ea-3778-41d9-9c92-3ebc8f96f700-kube-api-access-9xppf" (OuterVolumeSpecName: "kube-api-access-9xppf") pod "acc386ea-3778-41d9-9c92-3ebc8f96f700" (UID: "acc386ea-3778-41d9-9c92-3ebc8f96f700"). InnerVolumeSpecName "kube-api-access-9xppf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:04:55 crc kubenswrapper[4826]: I0129 07:04:55.704734 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acc386ea-3778-41d9-9c92-3ebc8f96f700-scripts" (OuterVolumeSpecName: "scripts") pod "acc386ea-3778-41d9-9c92-3ebc8f96f700" (UID: "acc386ea-3778-41d9-9c92-3ebc8f96f700"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:04:55 crc kubenswrapper[4826]: I0129 07:04:55.726537 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acc386ea-3778-41d9-9c92-3ebc8f96f700-config-data" (OuterVolumeSpecName: "config-data") pod "acc386ea-3778-41d9-9c92-3ebc8f96f700" (UID: "acc386ea-3778-41d9-9c92-3ebc8f96f700"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:04:55 crc kubenswrapper[4826]: I0129 07:04:55.732172 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acc386ea-3778-41d9-9c92-3ebc8f96f700-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "acc386ea-3778-41d9-9c92-3ebc8f96f700" (UID: "acc386ea-3778-41d9-9c92-3ebc8f96f700"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:04:55 crc kubenswrapper[4826]: I0129 07:04:55.789732 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acc386ea-3778-41d9-9c92-3ebc8f96f700-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:55 crc kubenswrapper[4826]: I0129 07:04:55.789767 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acc386ea-3778-41d9-9c92-3ebc8f96f700-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:55 crc kubenswrapper[4826]: I0129 07:04:55.789778 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xppf\" (UniqueName: \"kubernetes.io/projected/acc386ea-3778-41d9-9c92-3ebc8f96f700-kube-api-access-9xppf\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:55 crc kubenswrapper[4826]: I0129 07:04:55.789786 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acc386ea-3778-41d9-9c92-3ebc8f96f700-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:56 crc kubenswrapper[4826]: I0129 07:04:56.104259 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-x8x4z" Jan 29 07:04:56 crc kubenswrapper[4826]: I0129 07:04:56.111535 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-x8x4z" event={"ID":"acc386ea-3778-41d9-9c92-3ebc8f96f700","Type":"ContainerDied","Data":"10aae8a3af3800d3b38a3cce7edbf40e9b2eecbce02c1693ed43b903aae53b47"} Jan 29 07:04:56 crc kubenswrapper[4826]: I0129 07:04:56.111596 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10aae8a3af3800d3b38a3cce7edbf40e9b2eecbce02c1693ed43b903aae53b47" Jan 29 07:04:56 crc kubenswrapper[4826]: I0129 07:04:56.242065 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 07:04:56 crc kubenswrapper[4826]: E0129 07:04:56.242792 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acc386ea-3778-41d9-9c92-3ebc8f96f700" containerName="nova-cell1-conductor-db-sync" Jan 29 07:04:56 crc kubenswrapper[4826]: I0129 07:04:56.242816 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="acc386ea-3778-41d9-9c92-3ebc8f96f700" containerName="nova-cell1-conductor-db-sync" Jan 29 07:04:56 crc kubenswrapper[4826]: I0129 07:04:56.243029 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="acc386ea-3778-41d9-9c92-3ebc8f96f700" containerName="nova-cell1-conductor-db-sync" Jan 29 07:04:56 crc kubenswrapper[4826]: I0129 07:04:56.244156 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 07:04:56 crc kubenswrapper[4826]: I0129 07:04:56.253008 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 07:04:56 crc kubenswrapper[4826]: I0129 07:04:56.254045 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 29 07:04:56 crc kubenswrapper[4826]: I0129 07:04:56.298401 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e27383b1-aba6-4c25-9d4b-3b9cceb2b739-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e27383b1-aba6-4c25-9d4b-3b9cceb2b739\") " pod="openstack/nova-cell1-conductor-0" Jan 29 07:04:56 crc kubenswrapper[4826]: I0129 07:04:56.298456 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l6fw\" (UniqueName: \"kubernetes.io/projected/e27383b1-aba6-4c25-9d4b-3b9cceb2b739-kube-api-access-9l6fw\") pod \"nova-cell1-conductor-0\" (UID: \"e27383b1-aba6-4c25-9d4b-3b9cceb2b739\") " pod="openstack/nova-cell1-conductor-0" Jan 29 07:04:56 crc kubenswrapper[4826]: I0129 07:04:56.298562 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e27383b1-aba6-4c25-9d4b-3b9cceb2b739-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e27383b1-aba6-4c25-9d4b-3b9cceb2b739\") " pod="openstack/nova-cell1-conductor-0" Jan 29 07:04:56 crc kubenswrapper[4826]: I0129 07:04:56.400014 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e27383b1-aba6-4c25-9d4b-3b9cceb2b739-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e27383b1-aba6-4c25-9d4b-3b9cceb2b739\") " pod="openstack/nova-cell1-conductor-0" Jan 29 07:04:56 crc 
kubenswrapper[4826]: I0129 07:04:56.400073 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l6fw\" (UniqueName: \"kubernetes.io/projected/e27383b1-aba6-4c25-9d4b-3b9cceb2b739-kube-api-access-9l6fw\") pod \"nova-cell1-conductor-0\" (UID: \"e27383b1-aba6-4c25-9d4b-3b9cceb2b739\") " pod="openstack/nova-cell1-conductor-0" Jan 29 07:04:56 crc kubenswrapper[4826]: I0129 07:04:56.400182 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e27383b1-aba6-4c25-9d4b-3b9cceb2b739-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e27383b1-aba6-4c25-9d4b-3b9cceb2b739\") " pod="openstack/nova-cell1-conductor-0" Jan 29 07:04:56 crc kubenswrapper[4826]: I0129 07:04:56.406099 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e27383b1-aba6-4c25-9d4b-3b9cceb2b739-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e27383b1-aba6-4c25-9d4b-3b9cceb2b739\") " pod="openstack/nova-cell1-conductor-0" Jan 29 07:04:56 crc kubenswrapper[4826]: I0129 07:04:56.421568 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e27383b1-aba6-4c25-9d4b-3b9cceb2b739-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e27383b1-aba6-4c25-9d4b-3b9cceb2b739\") " pod="openstack/nova-cell1-conductor-0" Jan 29 07:04:56 crc kubenswrapper[4826]: I0129 07:04:56.427929 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l6fw\" (UniqueName: \"kubernetes.io/projected/e27383b1-aba6-4c25-9d4b-3b9cceb2b739-kube-api-access-9l6fw\") pod \"nova-cell1-conductor-0\" (UID: \"e27383b1-aba6-4c25-9d4b-3b9cceb2b739\") " pod="openstack/nova-cell1-conductor-0" Jan 29 07:04:56 crc kubenswrapper[4826]: I0129 07:04:56.560008 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 07:04:56 crc kubenswrapper[4826]: I0129 07:04:56.882515 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 07:04:56 crc kubenswrapper[4826]: I0129 07:04:56.908548 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c83e466a-631c-4461-b605-858b6b774743-config-data\") pod \"c83e466a-631c-4461-b605-858b6b774743\" (UID: \"c83e466a-631c-4461-b605-858b6b774743\") " Jan 29 07:04:56 crc kubenswrapper[4826]: I0129 07:04:56.908695 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c83e466a-631c-4461-b605-858b6b774743-combined-ca-bundle\") pod \"c83e466a-631c-4461-b605-858b6b774743\" (UID: \"c83e466a-631c-4461-b605-858b6b774743\") " Jan 29 07:04:56 crc kubenswrapper[4826]: I0129 07:04:56.908850 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fl5vd\" (UniqueName: \"kubernetes.io/projected/c83e466a-631c-4461-b605-858b6b774743-kube-api-access-fl5vd\") pod \"c83e466a-631c-4461-b605-858b6b774743\" (UID: \"c83e466a-631c-4461-b605-858b6b774743\") " Jan 29 07:04:56 crc kubenswrapper[4826]: I0129 07:04:56.913985 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c83e466a-631c-4461-b605-858b6b774743-kube-api-access-fl5vd" (OuterVolumeSpecName: "kube-api-access-fl5vd") pod "c83e466a-631c-4461-b605-858b6b774743" (UID: "c83e466a-631c-4461-b605-858b6b774743"). InnerVolumeSpecName "kube-api-access-fl5vd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:04:56 crc kubenswrapper[4826]: I0129 07:04:56.937598 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c83e466a-631c-4461-b605-858b6b774743-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c83e466a-631c-4461-b605-858b6b774743" (UID: "c83e466a-631c-4461-b605-858b6b774743"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:04:56 crc kubenswrapper[4826]: I0129 07:04:56.957587 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c83e466a-631c-4461-b605-858b6b774743-config-data" (OuterVolumeSpecName: "config-data") pod "c83e466a-631c-4461-b605-858b6b774743" (UID: "c83e466a-631c-4461-b605-858b6b774743"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.013828 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fl5vd\" (UniqueName: \"kubernetes.io/projected/c83e466a-631c-4461-b605-858b6b774743-kube-api-access-fl5vd\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.013869 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c83e466a-631c-4461-b605-858b6b774743-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.013880 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c83e466a-631c-4461-b605-858b6b774743-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.099008 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 07:04:57 crc kubenswrapper[4826]: W0129 07:04:57.102598 4826 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode27383b1_aba6_4c25_9d4b_3b9cceb2b739.slice/crio-86ff871e0662a98006b7e62240f7736f72820eb368c483d39b6ce7198c2584ed WatchSource:0}: Error finding container 86ff871e0662a98006b7e62240f7736f72820eb368c483d39b6ce7198c2584ed: Status 404 returned error can't find the container with id 86ff871e0662a98006b7e62240f7736f72820eb368c483d39b6ce7198c2584ed Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.120607 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e27383b1-aba6-4c25-9d4b-3b9cceb2b739","Type":"ContainerStarted","Data":"86ff871e0662a98006b7e62240f7736f72820eb368c483d39b6ce7198c2584ed"} Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.122244 4826 generic.go:334] "Generic (PLEG): container finished" podID="71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a" containerID="8e18ff4b605bbbdc0e151943f63abc7f6ba212c2efcee70e8c8b8bf32db86d8a" exitCode=0 Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.122287 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a","Type":"ContainerDied","Data":"8e18ff4b605bbbdc0e151943f63abc7f6ba212c2efcee70e8c8b8bf32db86d8a"} Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.124011 4826 generic.go:334] "Generic (PLEG): container finished" podID="c83e466a-631c-4461-b605-858b6b774743" containerID="4d3ad9ca60e6238554cf32581fbdd213d22311ea88430734aaa51348d2be7d8b" exitCode=0 Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.124036 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c83e466a-631c-4461-b605-858b6b774743","Type":"ContainerDied","Data":"4d3ad9ca60e6238554cf32581fbdd213d22311ea88430734aaa51348d2be7d8b"} Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.124053 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-scheduler-0" event={"ID":"c83e466a-631c-4461-b605-858b6b774743","Type":"ContainerDied","Data":"2effe84c0d20580c11ee8ff6a05172f698634a46d418ef01577f9f9cc53c60f4"} Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.124069 4826 scope.go:117] "RemoveContainer" containerID="4d3ad9ca60e6238554cf32581fbdd213d22311ea88430734aaa51348d2be7d8b" Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.124173 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.132286 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.209119 4826 scope.go:117] "RemoveContainer" containerID="4d3ad9ca60e6238554cf32581fbdd213d22311ea88430734aaa51348d2be7d8b" Jan 29 07:04:57 crc kubenswrapper[4826]: E0129 07:04:57.209742 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d3ad9ca60e6238554cf32581fbdd213d22311ea88430734aaa51348d2be7d8b\": container with ID starting with 4d3ad9ca60e6238554cf32581fbdd213d22311ea88430734aaa51348d2be7d8b not found: ID does not exist" containerID="4d3ad9ca60e6238554cf32581fbdd213d22311ea88430734aaa51348d2be7d8b" Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.209806 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d3ad9ca60e6238554cf32581fbdd213d22311ea88430734aaa51348d2be7d8b"} err="failed to get container status \"4d3ad9ca60e6238554cf32581fbdd213d22311ea88430734aaa51348d2be7d8b\": rpc error: code = NotFound desc = could not find container \"4d3ad9ca60e6238554cf32581fbdd213d22311ea88430734aaa51348d2be7d8b\": container with ID starting with 4d3ad9ca60e6238554cf32581fbdd213d22311ea88430734aaa51348d2be7d8b not found: ID does not exist" Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 
07:04:57.223330 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86l6v\" (UniqueName: \"kubernetes.io/projected/71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a-kube-api-access-86l6v\") pod \"71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a\" (UID: \"71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a\") " Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.223394 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a-logs\") pod \"71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a\" (UID: \"71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a\") " Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.223470 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a-combined-ca-bundle\") pod \"71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a\" (UID: \"71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a\") " Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.223559 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a-config-data\") pod \"71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a\" (UID: \"71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a\") " Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.224279 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a-logs" (OuterVolumeSpecName: "logs") pod "71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a" (UID: "71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.225024 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a-logs\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.227223 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.231623 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a-kube-api-access-86l6v" (OuterVolumeSpecName: "kube-api-access-86l6v") pod "71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a" (UID: "71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a"). InnerVolumeSpecName "kube-api-access-86l6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.255121 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a-config-data" (OuterVolumeSpecName: "config-data") pod "71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a" (UID: "71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.258941 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.262666 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a" (UID: "71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.270904 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 07:04:57 crc kubenswrapper[4826]: E0129 07:04:57.271412 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83e466a-631c-4461-b605-858b6b774743" containerName="nova-scheduler-scheduler" Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.271437 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83e466a-631c-4461-b605-858b6b774743" containerName="nova-scheduler-scheduler" Jan 29 07:04:57 crc kubenswrapper[4826]: E0129 07:04:57.271481 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a" containerName="nova-api-log" Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.271492 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a" containerName="nova-api-log" Jan 29 07:04:57 crc kubenswrapper[4826]: E0129 07:04:57.271504 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a" containerName="nova-api-api" Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.271514 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a" containerName="nova-api-api" Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.271788 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="c83e466a-631c-4461-b605-858b6b774743" containerName="nova-scheduler-scheduler" Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.271805 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a" containerName="nova-api-api" Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.271820 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a" 
containerName="nova-api-log" Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.272596 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.279205 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.283809 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.326839 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b5d8ef1-2ca8-497b-8245-c37998f31fdb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5b5d8ef1-2ca8-497b-8245-c37998f31fdb\") " pod="openstack/nova-scheduler-0" Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.326912 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b5d8ef1-2ca8-497b-8245-c37998f31fdb-config-data\") pod \"nova-scheduler-0\" (UID: \"5b5d8ef1-2ca8-497b-8245-c37998f31fdb\") " pod="openstack/nova-scheduler-0" Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.326950 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtn6r\" (UniqueName: \"kubernetes.io/projected/5b5d8ef1-2ca8-497b-8245-c37998f31fdb-kube-api-access-qtn6r\") pod \"nova-scheduler-0\" (UID: \"5b5d8ef1-2ca8-497b-8245-c37998f31fdb\") " pod="openstack/nova-scheduler-0" Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.327229 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 
07:04:57.327271 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86l6v\" (UniqueName: \"kubernetes.io/projected/71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a-kube-api-access-86l6v\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.327285 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.428943 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b5d8ef1-2ca8-497b-8245-c37998f31fdb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5b5d8ef1-2ca8-497b-8245-c37998f31fdb\") " pod="openstack/nova-scheduler-0" Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.429006 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b5d8ef1-2ca8-497b-8245-c37998f31fdb-config-data\") pod \"nova-scheduler-0\" (UID: \"5b5d8ef1-2ca8-497b-8245-c37998f31fdb\") " pod="openstack/nova-scheduler-0" Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.429044 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtn6r\" (UniqueName: \"kubernetes.io/projected/5b5d8ef1-2ca8-497b-8245-c37998f31fdb-kube-api-access-qtn6r\") pod \"nova-scheduler-0\" (UID: \"5b5d8ef1-2ca8-497b-8245-c37998f31fdb\") " pod="openstack/nova-scheduler-0" Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.433907 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b5d8ef1-2ca8-497b-8245-c37998f31fdb-config-data\") pod \"nova-scheduler-0\" (UID: \"5b5d8ef1-2ca8-497b-8245-c37998f31fdb\") " pod="openstack/nova-scheduler-0" Jan 29 07:04:57 crc 
kubenswrapper[4826]: I0129 07:04:57.434004 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b5d8ef1-2ca8-497b-8245-c37998f31fdb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5b5d8ef1-2ca8-497b-8245-c37998f31fdb\") " pod="openstack/nova-scheduler-0" Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.454185 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtn6r\" (UniqueName: \"kubernetes.io/projected/5b5d8ef1-2ca8-497b-8245-c37998f31fdb-kube-api-access-qtn6r\") pod \"nova-scheduler-0\" (UID: \"5b5d8ef1-2ca8-497b-8245-c37998f31fdb\") " pod="openstack/nova-scheduler-0" Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.596903 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 07:04:57 crc kubenswrapper[4826]: I0129 07:04:57.895564 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 07:04:58 crc kubenswrapper[4826]: I0129 07:04:58.142354 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5b5d8ef1-2ca8-497b-8245-c37998f31fdb","Type":"ContainerStarted","Data":"c3c0598735eafaa68dfcee92f51585880b80e38ec7eb3bd5aed485d119ab2b34"} Jan 29 07:04:58 crc kubenswrapper[4826]: I0129 07:04:58.146032 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e27383b1-aba6-4c25-9d4b-3b9cceb2b739","Type":"ContainerStarted","Data":"cafb58d89e7c6ef227f4fff8321a876edaab7b5f0a9dcbdb25a9840e49c6af78"} Jan 29 07:04:58 crc kubenswrapper[4826]: I0129 07:04:58.146688 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 29 07:04:58 crc kubenswrapper[4826]: I0129 07:04:58.151717 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a","Type":"ContainerDied","Data":"f37f03c8ccb495a689291b312e1c3709d869440a024a62c8ca7ca45f03cbd455"} Jan 29 07:04:58 crc kubenswrapper[4826]: I0129 07:04:58.151811 4826 scope.go:117] "RemoveContainer" containerID="8e18ff4b605bbbdc0e151943f63abc7f6ba212c2efcee70e8c8b8bf32db86d8a" Jan 29 07:04:58 crc kubenswrapper[4826]: I0129 07:04:58.151736 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 07:04:58 crc kubenswrapper[4826]: I0129 07:04:58.179370 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.179345714 podStartE2EDuration="2.179345714s" podCreationTimestamp="2026-01-29 07:04:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:04:58.168277313 +0000 UTC m=+1282.030070372" watchObservedRunningTime="2026-01-29 07:04:58.179345714 +0000 UTC m=+1282.041138783" Jan 29 07:04:58 crc kubenswrapper[4826]: I0129 07:04:58.203701 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 07:04:58 crc kubenswrapper[4826]: I0129 07:04:58.216497 4826 scope.go:117] "RemoveContainer" containerID="9bd866a3dd235a21752e0c3262aef5fa22bc67f7059dfe445c6b4738796eecfa" Jan 29 07:04:58 crc kubenswrapper[4826]: I0129 07:04:58.232388 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 07:04:58 crc kubenswrapper[4826]: I0129 07:04:58.244390 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 07:04:58 crc kubenswrapper[4826]: I0129 07:04:58.246025 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 07:04:58 crc kubenswrapper[4826]: I0129 07:04:58.248454 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdd7a6a7-486e-4736-af7e-70f16394252f-logs\") pod \"nova-api-0\" (UID: \"bdd7a6a7-486e-4736-af7e-70f16394252f\") " pod="openstack/nova-api-0" Jan 29 07:04:58 crc kubenswrapper[4826]: I0129 07:04:58.248587 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdd7a6a7-486e-4736-af7e-70f16394252f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bdd7a6a7-486e-4736-af7e-70f16394252f\") " pod="openstack/nova-api-0" Jan 29 07:04:58 crc kubenswrapper[4826]: I0129 07:04:58.248764 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hxkh\" (UniqueName: \"kubernetes.io/projected/bdd7a6a7-486e-4736-af7e-70f16394252f-kube-api-access-7hxkh\") pod \"nova-api-0\" (UID: \"bdd7a6a7-486e-4736-af7e-70f16394252f\") " pod="openstack/nova-api-0" Jan 29 07:04:58 crc kubenswrapper[4826]: I0129 07:04:58.248814 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdd7a6a7-486e-4736-af7e-70f16394252f-config-data\") pod \"nova-api-0\" (UID: \"bdd7a6a7-486e-4736-af7e-70f16394252f\") " pod="openstack/nova-api-0" Jan 29 07:04:58 crc kubenswrapper[4826]: I0129 07:04:58.249986 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 07:04:58 crc kubenswrapper[4826]: I0129 07:04:58.256002 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 07:04:58 crc kubenswrapper[4826]: I0129 07:04:58.349482 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/bdd7a6a7-486e-4736-af7e-70f16394252f-logs\") pod \"nova-api-0\" (UID: \"bdd7a6a7-486e-4736-af7e-70f16394252f\") " pod="openstack/nova-api-0" Jan 29 07:04:58 crc kubenswrapper[4826]: I0129 07:04:58.349585 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdd7a6a7-486e-4736-af7e-70f16394252f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bdd7a6a7-486e-4736-af7e-70f16394252f\") " pod="openstack/nova-api-0" Jan 29 07:04:58 crc kubenswrapper[4826]: I0129 07:04:58.349754 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hxkh\" (UniqueName: \"kubernetes.io/projected/bdd7a6a7-486e-4736-af7e-70f16394252f-kube-api-access-7hxkh\") pod \"nova-api-0\" (UID: \"bdd7a6a7-486e-4736-af7e-70f16394252f\") " pod="openstack/nova-api-0" Jan 29 07:04:58 crc kubenswrapper[4826]: I0129 07:04:58.349801 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdd7a6a7-486e-4736-af7e-70f16394252f-config-data\") pod \"nova-api-0\" (UID: \"bdd7a6a7-486e-4736-af7e-70f16394252f\") " pod="openstack/nova-api-0" Jan 29 07:04:58 crc kubenswrapper[4826]: I0129 07:04:58.349861 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdd7a6a7-486e-4736-af7e-70f16394252f-logs\") pod \"nova-api-0\" (UID: \"bdd7a6a7-486e-4736-af7e-70f16394252f\") " pod="openstack/nova-api-0" Jan 29 07:04:58 crc kubenswrapper[4826]: I0129 07:04:58.357659 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdd7a6a7-486e-4736-af7e-70f16394252f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bdd7a6a7-486e-4736-af7e-70f16394252f\") " pod="openstack/nova-api-0" Jan 29 07:04:58 crc kubenswrapper[4826]: I0129 07:04:58.360145 4826 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdd7a6a7-486e-4736-af7e-70f16394252f-config-data\") pod \"nova-api-0\" (UID: \"bdd7a6a7-486e-4736-af7e-70f16394252f\") " pod="openstack/nova-api-0" Jan 29 07:04:58 crc kubenswrapper[4826]: I0129 07:04:58.371077 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hxkh\" (UniqueName: \"kubernetes.io/projected/bdd7a6a7-486e-4736-af7e-70f16394252f-kube-api-access-7hxkh\") pod \"nova-api-0\" (UID: \"bdd7a6a7-486e-4736-af7e-70f16394252f\") " pod="openstack/nova-api-0" Jan 29 07:04:58 crc kubenswrapper[4826]: I0129 07:04:58.473759 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 07:04:58 crc kubenswrapper[4826]: I0129 07:04:58.474755 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 07:04:58 crc kubenswrapper[4826]: I0129 07:04:58.573725 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 07:04:58 crc kubenswrapper[4826]: I0129 07:04:58.824087 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a" path="/var/lib/kubelet/pods/71ca3a72-da7d-4fbc-8d5a-9cb2b56d799a/volumes" Jan 29 07:04:58 crc kubenswrapper[4826]: I0129 07:04:58.826969 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c83e466a-631c-4461-b605-858b6b774743" path="/var/lib/kubelet/pods/c83e466a-631c-4461-b605-858b6b774743/volumes" Jan 29 07:04:58 crc kubenswrapper[4826]: I0129 07:04:58.881771 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 07:04:58 crc kubenswrapper[4826]: W0129 07:04:58.883025 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdd7a6a7_486e_4736_af7e_70f16394252f.slice/crio-fd2a72655e58bbcd22c774adc3bedf4c2a85d09062cdf5a39c87e4dadc1fc5ad WatchSource:0}: Error finding container fd2a72655e58bbcd22c774adc3bedf4c2a85d09062cdf5a39c87e4dadc1fc5ad: Status 404 returned error can't find the container with id fd2a72655e58bbcd22c774adc3bedf4c2a85d09062cdf5a39c87e4dadc1fc5ad Jan 29 07:04:59 crc kubenswrapper[4826]: I0129 07:04:59.181107 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5b5d8ef1-2ca8-497b-8245-c37998f31fdb","Type":"ContainerStarted","Data":"0418191ed0b042868c16dbc2385f651ded763d791ad329ba66d9fa1fc9bf6d89"} Jan 29 07:04:59 crc kubenswrapper[4826]: I0129 07:04:59.186019 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bdd7a6a7-486e-4736-af7e-70f16394252f","Type":"ContainerStarted","Data":"a70ebef7793fc16d23eb3cf7086d5a32ae98490e208816e9aff22022cb690fb9"} Jan 29 07:04:59 crc kubenswrapper[4826]: I0129 07:04:59.186203 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"bdd7a6a7-486e-4736-af7e-70f16394252f","Type":"ContainerStarted","Data":"fd2a72655e58bbcd22c774adc3bedf4c2a85d09062cdf5a39c87e4dadc1fc5ad"} Jan 29 07:04:59 crc kubenswrapper[4826]: I0129 07:04:59.227547 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.227520822 podStartE2EDuration="2.227520822s" podCreationTimestamp="2026-01-29 07:04:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:04:59.216051731 +0000 UTC m=+1283.077844810" watchObservedRunningTime="2026-01-29 07:04:59.227520822 +0000 UTC m=+1283.089314031" Jan 29 07:05:00 crc kubenswrapper[4826]: I0129 07:05:00.197859 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bdd7a6a7-486e-4736-af7e-70f16394252f","Type":"ContainerStarted","Data":"8d5b3fff199854dae6cde1a71f974779070350c89f28ec58fbdde4598e08fca8"} Jan 29 07:05:00 crc kubenswrapper[4826]: I0129 07:05:00.223649 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.223624171 podStartE2EDuration="2.223624171s" podCreationTimestamp="2026-01-29 07:04:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:05:00.219169054 +0000 UTC m=+1284.080962123" watchObservedRunningTime="2026-01-29 07:05:00.223624171 +0000 UTC m=+1284.085417270" Jan 29 07:05:02 crc kubenswrapper[4826]: I0129 07:05:02.072093 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 29 07:05:02 crc kubenswrapper[4826]: I0129 07:05:02.598122 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 07:05:03 crc kubenswrapper[4826]: I0129 07:05:03.474498 4826 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 07:05:03 crc kubenswrapper[4826]: I0129 07:05:03.474968 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 07:05:04 crc kubenswrapper[4826]: I0129 07:05:04.486468 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f0df13a7-7612-4388-8ad5-b399b2305d4c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.190:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 07:05:04 crc kubenswrapper[4826]: I0129 07:05:04.486480 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f0df13a7-7612-4388-8ad5-b399b2305d4c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.190:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 07:05:06 crc kubenswrapper[4826]: I0129 07:05:06.145494 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 07:05:06 crc kubenswrapper[4826]: I0129 07:05:06.145732 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="c2c7d16c-55e0-4a1e-81f4-a751dcef2cf1" containerName="kube-state-metrics" containerID="cri-o://3be1f88eab62bf3521f70d5f30c9c6d8f758049c6f738bf9d863789d95170f4f" gracePeriod=30 Jan 29 07:05:06 crc kubenswrapper[4826]: I0129 07:05:06.596336 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 29 07:05:06 crc kubenswrapper[4826]: I0129 07:05:06.712152 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 07:05:06 crc kubenswrapper[4826]: I0129 07:05:06.849325 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxprp\" (UniqueName: \"kubernetes.io/projected/c2c7d16c-55e0-4a1e-81f4-a751dcef2cf1-kube-api-access-nxprp\") pod \"c2c7d16c-55e0-4a1e-81f4-a751dcef2cf1\" (UID: \"c2c7d16c-55e0-4a1e-81f4-a751dcef2cf1\") " Jan 29 07:05:06 crc kubenswrapper[4826]: I0129 07:05:06.855145 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2c7d16c-55e0-4a1e-81f4-a751dcef2cf1-kube-api-access-nxprp" (OuterVolumeSpecName: "kube-api-access-nxprp") pod "c2c7d16c-55e0-4a1e-81f4-a751dcef2cf1" (UID: "c2c7d16c-55e0-4a1e-81f4-a751dcef2cf1"). InnerVolumeSpecName "kube-api-access-nxprp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:05:06 crc kubenswrapper[4826]: I0129 07:05:06.955091 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxprp\" (UniqueName: \"kubernetes.io/projected/c2c7d16c-55e0-4a1e-81f4-a751dcef2cf1-kube-api-access-nxprp\") on node \"crc\" DevicePath \"\"" Jan 29 07:05:07 crc kubenswrapper[4826]: I0129 07:05:07.273716 4826 generic.go:334] "Generic (PLEG): container finished" podID="c2c7d16c-55e0-4a1e-81f4-a751dcef2cf1" containerID="3be1f88eab62bf3521f70d5f30c9c6d8f758049c6f738bf9d863789d95170f4f" exitCode=2 Jan 29 07:05:07 crc kubenswrapper[4826]: I0129 07:05:07.273765 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c2c7d16c-55e0-4a1e-81f4-a751dcef2cf1","Type":"ContainerDied","Data":"3be1f88eab62bf3521f70d5f30c9c6d8f758049c6f738bf9d863789d95170f4f"} Jan 29 07:05:07 crc kubenswrapper[4826]: I0129 07:05:07.273799 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"c2c7d16c-55e0-4a1e-81f4-a751dcef2cf1","Type":"ContainerDied","Data":"b311d228575a7bff201b4df9489cc4ea8f4da60a8100a5aae86d49ad212a434e"} Jan 29 07:05:07 crc kubenswrapper[4826]: I0129 07:05:07.273822 4826 scope.go:117] "RemoveContainer" containerID="3be1f88eab62bf3521f70d5f30c9c6d8f758049c6f738bf9d863789d95170f4f" Jan 29 07:05:07 crc kubenswrapper[4826]: I0129 07:05:07.273969 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 07:05:07 crc kubenswrapper[4826]: I0129 07:05:07.302692 4826 scope.go:117] "RemoveContainer" containerID="3be1f88eab62bf3521f70d5f30c9c6d8f758049c6f738bf9d863789d95170f4f" Jan 29 07:05:07 crc kubenswrapper[4826]: E0129 07:05:07.303197 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3be1f88eab62bf3521f70d5f30c9c6d8f758049c6f738bf9d863789d95170f4f\": container with ID starting with 3be1f88eab62bf3521f70d5f30c9c6d8f758049c6f738bf9d863789d95170f4f not found: ID does not exist" containerID="3be1f88eab62bf3521f70d5f30c9c6d8f758049c6f738bf9d863789d95170f4f" Jan 29 07:05:07 crc kubenswrapper[4826]: I0129 07:05:07.303231 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3be1f88eab62bf3521f70d5f30c9c6d8f758049c6f738bf9d863789d95170f4f"} err="failed to get container status \"3be1f88eab62bf3521f70d5f30c9c6d8f758049c6f738bf9d863789d95170f4f\": rpc error: code = NotFound desc = could not find container \"3be1f88eab62bf3521f70d5f30c9c6d8f758049c6f738bf9d863789d95170f4f\": container with ID starting with 3be1f88eab62bf3521f70d5f30c9c6d8f758049c6f738bf9d863789d95170f4f not found: ID does not exist" Jan 29 07:05:07 crc kubenswrapper[4826]: I0129 07:05:07.315916 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 07:05:07 crc kubenswrapper[4826]: I0129 07:05:07.348751 4826 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 07:05:07 crc kubenswrapper[4826]: I0129 07:05:07.362595 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 07:05:07 crc kubenswrapper[4826]: E0129 07:05:07.363040 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2c7d16c-55e0-4a1e-81f4-a751dcef2cf1" containerName="kube-state-metrics" Jan 29 07:05:07 crc kubenswrapper[4826]: I0129 07:05:07.363060 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2c7d16c-55e0-4a1e-81f4-a751dcef2cf1" containerName="kube-state-metrics" Jan 29 07:05:07 crc kubenswrapper[4826]: I0129 07:05:07.363267 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2c7d16c-55e0-4a1e-81f4-a751dcef2cf1" containerName="kube-state-metrics" Jan 29 07:05:07 crc kubenswrapper[4826]: I0129 07:05:07.363952 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 07:05:07 crc kubenswrapper[4826]: I0129 07:05:07.366011 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 29 07:05:07 crc kubenswrapper[4826]: I0129 07:05:07.367242 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 29 07:05:07 crc kubenswrapper[4826]: I0129 07:05:07.407050 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 07:05:07 crc kubenswrapper[4826]: I0129 07:05:07.465732 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpwkq\" (UniqueName: \"kubernetes.io/projected/42903a4e-8bdc-4c7b-bd44-b87199a848e6-kube-api-access-qpwkq\") pod \"kube-state-metrics-0\" (UID: \"42903a4e-8bdc-4c7b-bd44-b87199a848e6\") " pod="openstack/kube-state-metrics-0" Jan 29 07:05:07 crc kubenswrapper[4826]: I0129 07:05:07.465785 4826 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/42903a4e-8bdc-4c7b-bd44-b87199a848e6-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"42903a4e-8bdc-4c7b-bd44-b87199a848e6\") " pod="openstack/kube-state-metrics-0" Jan 29 07:05:07 crc kubenswrapper[4826]: I0129 07:05:07.465859 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/42903a4e-8bdc-4c7b-bd44-b87199a848e6-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"42903a4e-8bdc-4c7b-bd44-b87199a848e6\") " pod="openstack/kube-state-metrics-0" Jan 29 07:05:07 crc kubenswrapper[4826]: I0129 07:05:07.465965 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42903a4e-8bdc-4c7b-bd44-b87199a848e6-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"42903a4e-8bdc-4c7b-bd44-b87199a848e6\") " pod="openstack/kube-state-metrics-0" Jan 29 07:05:07 crc kubenswrapper[4826]: I0129 07:05:07.567357 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpwkq\" (UniqueName: \"kubernetes.io/projected/42903a4e-8bdc-4c7b-bd44-b87199a848e6-kube-api-access-qpwkq\") pod \"kube-state-metrics-0\" (UID: \"42903a4e-8bdc-4c7b-bd44-b87199a848e6\") " pod="openstack/kube-state-metrics-0" Jan 29 07:05:07 crc kubenswrapper[4826]: I0129 07:05:07.567398 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/42903a4e-8bdc-4c7b-bd44-b87199a848e6-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"42903a4e-8bdc-4c7b-bd44-b87199a848e6\") " pod="openstack/kube-state-metrics-0" Jan 29 07:05:07 crc kubenswrapper[4826]: I0129 
07:05:07.567432 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/42903a4e-8bdc-4c7b-bd44-b87199a848e6-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"42903a4e-8bdc-4c7b-bd44-b87199a848e6\") " pod="openstack/kube-state-metrics-0" Jan 29 07:05:07 crc kubenswrapper[4826]: I0129 07:05:07.567460 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42903a4e-8bdc-4c7b-bd44-b87199a848e6-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"42903a4e-8bdc-4c7b-bd44-b87199a848e6\") " pod="openstack/kube-state-metrics-0" Jan 29 07:05:07 crc kubenswrapper[4826]: I0129 07:05:07.573806 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/42903a4e-8bdc-4c7b-bd44-b87199a848e6-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"42903a4e-8bdc-4c7b-bd44-b87199a848e6\") " pod="openstack/kube-state-metrics-0" Jan 29 07:05:07 crc kubenswrapper[4826]: I0129 07:05:07.574532 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42903a4e-8bdc-4c7b-bd44-b87199a848e6-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"42903a4e-8bdc-4c7b-bd44-b87199a848e6\") " pod="openstack/kube-state-metrics-0" Jan 29 07:05:07 crc kubenswrapper[4826]: I0129 07:05:07.576141 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/42903a4e-8bdc-4c7b-bd44-b87199a848e6-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"42903a4e-8bdc-4c7b-bd44-b87199a848e6\") " pod="openstack/kube-state-metrics-0" Jan 29 07:05:07 crc kubenswrapper[4826]: I0129 07:05:07.590771 4826 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-qpwkq\" (UniqueName: \"kubernetes.io/projected/42903a4e-8bdc-4c7b-bd44-b87199a848e6-kube-api-access-qpwkq\") pod \"kube-state-metrics-0\" (UID: \"42903a4e-8bdc-4c7b-bd44-b87199a848e6\") " pod="openstack/kube-state-metrics-0" Jan 29 07:05:07 crc kubenswrapper[4826]: I0129 07:05:07.597468 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 07:05:07 crc kubenswrapper[4826]: I0129 07:05:07.646903 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 07:05:07 crc kubenswrapper[4826]: I0129 07:05:07.723011 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 07:05:08 crc kubenswrapper[4826]: I0129 07:05:08.215143 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 07:05:08 crc kubenswrapper[4826]: W0129 07:05:08.222491 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42903a4e_8bdc_4c7b_bd44_b87199a848e6.slice/crio-488a48e797ef76f33b6069e6c9b9a59c9f8e91084b26a9a80593c02d656a4d8f WatchSource:0}: Error finding container 488a48e797ef76f33b6069e6c9b9a59c9f8e91084b26a9a80593c02d656a4d8f: Status 404 returned error can't find the container with id 488a48e797ef76f33b6069e6c9b9a59c9f8e91084b26a9a80593c02d656a4d8f Jan 29 07:05:08 crc kubenswrapper[4826]: I0129 07:05:08.284398 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"42903a4e-8bdc-4c7b-bd44-b87199a848e6","Type":"ContainerStarted","Data":"488a48e797ef76f33b6069e6c9b9a59c9f8e91084b26a9a80593c02d656a4d8f"} Jan 29 07:05:08 crc kubenswrapper[4826]: I0129 07:05:08.316009 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 07:05:08 crc kubenswrapper[4826]: 
I0129 07:05:08.408494 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:05:08 crc kubenswrapper[4826]: I0129 07:05:08.408981 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="71636ef8-0a41-494e-bd82-703d109ce02d" containerName="ceilometer-central-agent" containerID="cri-o://aa6690f5903dc1fde0e2caa25b55d22e8ba87e4a164e835836e30ac4b77f074d" gracePeriod=30 Jan 29 07:05:08 crc kubenswrapper[4826]: I0129 07:05:08.409173 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="71636ef8-0a41-494e-bd82-703d109ce02d" containerName="proxy-httpd" containerID="cri-o://55cb22a65b41a39b6f7f68c1e1e2c47e3bd3ae66d73b5b7be134773beef7bd46" gracePeriod=30 Jan 29 07:05:08 crc kubenswrapper[4826]: I0129 07:05:08.409330 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="71636ef8-0a41-494e-bd82-703d109ce02d" containerName="sg-core" containerID="cri-o://d401a9ac6019a08c84060d8d8758b0be5997d04fd6ff50f44e226e51449b24bc" gracePeriod=30 Jan 29 07:05:08 crc kubenswrapper[4826]: I0129 07:05:08.409407 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="71636ef8-0a41-494e-bd82-703d109ce02d" containerName="ceilometer-notification-agent" containerID="cri-o://646954d7e0b3243e82c3bd8eec6ddb3ac4613db0296061f328a2301ec4bb2830" gracePeriod=30 Jan 29 07:05:08 crc kubenswrapper[4826]: I0129 07:05:08.574360 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 07:05:08 crc kubenswrapper[4826]: I0129 07:05:08.574427 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 07:05:08 crc kubenswrapper[4826]: I0129 07:05:08.820810 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c2c7d16c-55e0-4a1e-81f4-a751dcef2cf1" path="/var/lib/kubelet/pods/c2c7d16c-55e0-4a1e-81f4-a751dcef2cf1/volumes" Jan 29 07:05:09 crc kubenswrapper[4826]: I0129 07:05:09.311145 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"42903a4e-8bdc-4c7b-bd44-b87199a848e6","Type":"ContainerStarted","Data":"bddf8e03297919dd378b31ed57d97c8e00f8aa6eb7eb5c177dd2b26d1146eb32"} Jan 29 07:05:09 crc kubenswrapper[4826]: I0129 07:05:09.311642 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 29 07:05:09 crc kubenswrapper[4826]: I0129 07:05:09.318153 4826 generic.go:334] "Generic (PLEG): container finished" podID="71636ef8-0a41-494e-bd82-703d109ce02d" containerID="55cb22a65b41a39b6f7f68c1e1e2c47e3bd3ae66d73b5b7be134773beef7bd46" exitCode=0 Jan 29 07:05:09 crc kubenswrapper[4826]: I0129 07:05:09.318175 4826 generic.go:334] "Generic (PLEG): container finished" podID="71636ef8-0a41-494e-bd82-703d109ce02d" containerID="d401a9ac6019a08c84060d8d8758b0be5997d04fd6ff50f44e226e51449b24bc" exitCode=2 Jan 29 07:05:09 crc kubenswrapper[4826]: I0129 07:05:09.318183 4826 generic.go:334] "Generic (PLEG): container finished" podID="71636ef8-0a41-494e-bd82-703d109ce02d" containerID="aa6690f5903dc1fde0e2caa25b55d22e8ba87e4a164e835836e30ac4b77f074d" exitCode=0 Jan 29 07:05:09 crc kubenswrapper[4826]: I0129 07:05:09.318210 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71636ef8-0a41-494e-bd82-703d109ce02d","Type":"ContainerDied","Data":"55cb22a65b41a39b6f7f68c1e1e2c47e3bd3ae66d73b5b7be134773beef7bd46"} Jan 29 07:05:09 crc kubenswrapper[4826]: I0129 07:05:09.318284 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71636ef8-0a41-494e-bd82-703d109ce02d","Type":"ContainerDied","Data":"d401a9ac6019a08c84060d8d8758b0be5997d04fd6ff50f44e226e51449b24bc"} Jan 29 07:05:09 crc kubenswrapper[4826]: 
I0129 07:05:09.318335 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71636ef8-0a41-494e-bd82-703d109ce02d","Type":"ContainerDied","Data":"aa6690f5903dc1fde0e2caa25b55d22e8ba87e4a164e835836e30ac4b77f074d"} Jan 29 07:05:09 crc kubenswrapper[4826]: I0129 07:05:09.344850 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.947111403 podStartE2EDuration="2.344832001s" podCreationTimestamp="2026-01-29 07:05:07 +0000 UTC" firstStartedPulling="2026-01-29 07:05:08.225362226 +0000 UTC m=+1292.087155305" lastFinishedPulling="2026-01-29 07:05:08.623082814 +0000 UTC m=+1292.484875903" observedRunningTime="2026-01-29 07:05:09.335863755 +0000 UTC m=+1293.197656864" watchObservedRunningTime="2026-01-29 07:05:09.344832001 +0000 UTC m=+1293.206625070" Jan 29 07:05:09 crc kubenswrapper[4826]: I0129 07:05:09.656468 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bdd7a6a7-486e-4736-af7e-70f16394252f" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 07:05:09 crc kubenswrapper[4826]: I0129 07:05:09.656527 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bdd7a6a7-486e-4736-af7e-70f16394252f" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 07:05:12 crc kubenswrapper[4826]: I0129 07:05:12.350662 4826 generic.go:334] "Generic (PLEG): container finished" podID="71636ef8-0a41-494e-bd82-703d109ce02d" containerID="646954d7e0b3243e82c3bd8eec6ddb3ac4613db0296061f328a2301ec4bb2830" exitCode=0 Jan 29 07:05:12 crc kubenswrapper[4826]: I0129 07:05:12.350995 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"71636ef8-0a41-494e-bd82-703d109ce02d","Type":"ContainerDied","Data":"646954d7e0b3243e82c3bd8eec6ddb3ac4613db0296061f328a2301ec4bb2830"} Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.002694 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.083477 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71636ef8-0a41-494e-bd82-703d109ce02d-combined-ca-bundle\") pod \"71636ef8-0a41-494e-bd82-703d109ce02d\" (UID: \"71636ef8-0a41-494e-bd82-703d109ce02d\") " Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.083532 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7lp9\" (UniqueName: \"kubernetes.io/projected/71636ef8-0a41-494e-bd82-703d109ce02d-kube-api-access-z7lp9\") pod \"71636ef8-0a41-494e-bd82-703d109ce02d\" (UID: \"71636ef8-0a41-494e-bd82-703d109ce02d\") " Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.083607 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71636ef8-0a41-494e-bd82-703d109ce02d-run-httpd\") pod \"71636ef8-0a41-494e-bd82-703d109ce02d\" (UID: \"71636ef8-0a41-494e-bd82-703d109ce02d\") " Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.083667 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71636ef8-0a41-494e-bd82-703d109ce02d-config-data\") pod \"71636ef8-0a41-494e-bd82-703d109ce02d\" (UID: \"71636ef8-0a41-494e-bd82-703d109ce02d\") " Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.083691 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/71636ef8-0a41-494e-bd82-703d109ce02d-sg-core-conf-yaml\") pod \"71636ef8-0a41-494e-bd82-703d109ce02d\" (UID: \"71636ef8-0a41-494e-bd82-703d109ce02d\") " Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.083709 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71636ef8-0a41-494e-bd82-703d109ce02d-scripts\") pod \"71636ef8-0a41-494e-bd82-703d109ce02d\" (UID: \"71636ef8-0a41-494e-bd82-703d109ce02d\") " Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.083827 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71636ef8-0a41-494e-bd82-703d109ce02d-log-httpd\") pod \"71636ef8-0a41-494e-bd82-703d109ce02d\" (UID: \"71636ef8-0a41-494e-bd82-703d109ce02d\") " Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.085041 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71636ef8-0a41-494e-bd82-703d109ce02d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "71636ef8-0a41-494e-bd82-703d109ce02d" (UID: "71636ef8-0a41-494e-bd82-703d109ce02d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.085948 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71636ef8-0a41-494e-bd82-703d109ce02d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "71636ef8-0a41-494e-bd82-703d109ce02d" (UID: "71636ef8-0a41-494e-bd82-703d109ce02d"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.103809 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71636ef8-0a41-494e-bd82-703d109ce02d-scripts" (OuterVolumeSpecName: "scripts") pod "71636ef8-0a41-494e-bd82-703d109ce02d" (UID: "71636ef8-0a41-494e-bd82-703d109ce02d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.103917 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71636ef8-0a41-494e-bd82-703d109ce02d-kube-api-access-z7lp9" (OuterVolumeSpecName: "kube-api-access-z7lp9") pod "71636ef8-0a41-494e-bd82-703d109ce02d" (UID: "71636ef8-0a41-494e-bd82-703d109ce02d"). InnerVolumeSpecName "kube-api-access-z7lp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.128323 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71636ef8-0a41-494e-bd82-703d109ce02d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "71636ef8-0a41-494e-bd82-703d109ce02d" (UID: "71636ef8-0a41-494e-bd82-703d109ce02d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.177505 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71636ef8-0a41-494e-bd82-703d109ce02d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71636ef8-0a41-494e-bd82-703d109ce02d" (UID: "71636ef8-0a41-494e-bd82-703d109ce02d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.185532 4826 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71636ef8-0a41-494e-bd82-703d109ce02d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.185561 4826 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/71636ef8-0a41-494e-bd82-703d109ce02d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.185574 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71636ef8-0a41-494e-bd82-703d109ce02d-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.185582 4826 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71636ef8-0a41-494e-bd82-703d109ce02d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.185590 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71636ef8-0a41-494e-bd82-703d109ce02d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.185598 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7lp9\" (UniqueName: \"kubernetes.io/projected/71636ef8-0a41-494e-bd82-703d109ce02d-kube-api-access-z7lp9\") on node \"crc\" DevicePath \"\"" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.210845 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71636ef8-0a41-494e-bd82-703d109ce02d-config-data" (OuterVolumeSpecName: "config-data") pod "71636ef8-0a41-494e-bd82-703d109ce02d" (UID: "71636ef8-0a41-494e-bd82-703d109ce02d"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.288391 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71636ef8-0a41-494e-bd82-703d109ce02d-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.366673 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71636ef8-0a41-494e-bd82-703d109ce02d","Type":"ContainerDied","Data":"218ebbd79527b104dc3ab226a096d1d242c9974de791d984b67c808d8c9f630f"} Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.366737 4826 scope.go:117] "RemoveContainer" containerID="55cb22a65b41a39b6f7f68c1e1e2c47e3bd3ae66d73b5b7be134773beef7bd46" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.366862 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.402185 4826 scope.go:117] "RemoveContainer" containerID="d401a9ac6019a08c84060d8d8758b0be5997d04fd6ff50f44e226e51449b24bc" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.435073 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.450538 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.460923 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:05:13 crc kubenswrapper[4826]: E0129 07:05:13.461554 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71636ef8-0a41-494e-bd82-703d109ce02d" containerName="ceilometer-notification-agent" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.461628 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="71636ef8-0a41-494e-bd82-703d109ce02d" 
containerName="ceilometer-notification-agent" Jan 29 07:05:13 crc kubenswrapper[4826]: E0129 07:05:13.461707 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71636ef8-0a41-494e-bd82-703d109ce02d" containerName="sg-core" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.461758 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="71636ef8-0a41-494e-bd82-703d109ce02d" containerName="sg-core" Jan 29 07:05:13 crc kubenswrapper[4826]: E0129 07:05:13.461813 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71636ef8-0a41-494e-bd82-703d109ce02d" containerName="proxy-httpd" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.461873 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="71636ef8-0a41-494e-bd82-703d109ce02d" containerName="proxy-httpd" Jan 29 07:05:13 crc kubenswrapper[4826]: E0129 07:05:13.461949 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71636ef8-0a41-494e-bd82-703d109ce02d" containerName="ceilometer-central-agent" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.462001 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="71636ef8-0a41-494e-bd82-703d109ce02d" containerName="ceilometer-central-agent" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.462216 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="71636ef8-0a41-494e-bd82-703d109ce02d" containerName="proxy-httpd" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.462284 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="71636ef8-0a41-494e-bd82-703d109ce02d" containerName="ceilometer-notification-agent" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.462391 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="71636ef8-0a41-494e-bd82-703d109ce02d" containerName="sg-core" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.462453 4826 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="71636ef8-0a41-494e-bd82-703d109ce02d" containerName="ceilometer-central-agent" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.464282 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.467579 4826 scope.go:117] "RemoveContainer" containerID="646954d7e0b3243e82c3bd8eec6ddb3ac4613db0296061f328a2301ec4bb2830" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.467796 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.468059 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.468237 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.473849 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.484632 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.487864 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.514868 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.526590 4826 scope.go:117] "RemoveContainer" containerID="aa6690f5903dc1fde0e2caa25b55d22e8ba87e4a164e835836e30ac4b77f074d" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.592887 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/03ce2480-4daa-4360-841c-2d604bd232d1-scripts\") pod \"ceilometer-0\" (UID: \"03ce2480-4daa-4360-841c-2d604bd232d1\") " pod="openstack/ceilometer-0" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.593117 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ce2480-4daa-4360-841c-2d604bd232d1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"03ce2480-4daa-4360-841c-2d604bd232d1\") " pod="openstack/ceilometer-0" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.593187 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/03ce2480-4daa-4360-841c-2d604bd232d1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"03ce2480-4daa-4360-841c-2d604bd232d1\") " pod="openstack/ceilometer-0" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.593232 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03ce2480-4daa-4360-841c-2d604bd232d1-log-httpd\") pod \"ceilometer-0\" (UID: \"03ce2480-4daa-4360-841c-2d604bd232d1\") " pod="openstack/ceilometer-0" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.593252 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03ce2480-4daa-4360-841c-2d604bd232d1-config-data\") pod \"ceilometer-0\" (UID: \"03ce2480-4daa-4360-841c-2d604bd232d1\") " pod="openstack/ceilometer-0" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.593276 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/03ce2480-4daa-4360-841c-2d604bd232d1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"03ce2480-4daa-4360-841c-2d604bd232d1\") " pod="openstack/ceilometer-0" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.593339 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03ce2480-4daa-4360-841c-2d604bd232d1-run-httpd\") pod \"ceilometer-0\" (UID: \"03ce2480-4daa-4360-841c-2d604bd232d1\") " pod="openstack/ceilometer-0" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.593367 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztdtw\" (UniqueName: \"kubernetes.io/projected/03ce2480-4daa-4360-841c-2d604bd232d1-kube-api-access-ztdtw\") pod \"ceilometer-0\" (UID: \"03ce2480-4daa-4360-841c-2d604bd232d1\") " pod="openstack/ceilometer-0" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.695494 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03ce2480-4daa-4360-841c-2d604bd232d1-log-httpd\") pod \"ceilometer-0\" (UID: \"03ce2480-4daa-4360-841c-2d604bd232d1\") " pod="openstack/ceilometer-0" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.695538 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03ce2480-4daa-4360-841c-2d604bd232d1-config-data\") pod \"ceilometer-0\" (UID: \"03ce2480-4daa-4360-841c-2d604bd232d1\") " pod="openstack/ceilometer-0" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.695572 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/03ce2480-4daa-4360-841c-2d604bd232d1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"03ce2480-4daa-4360-841c-2d604bd232d1\") " pod="openstack/ceilometer-0" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.695614 4826 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03ce2480-4daa-4360-841c-2d604bd232d1-run-httpd\") pod \"ceilometer-0\" (UID: \"03ce2480-4daa-4360-841c-2d604bd232d1\") " pod="openstack/ceilometer-0" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.695644 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztdtw\" (UniqueName: \"kubernetes.io/projected/03ce2480-4daa-4360-841c-2d604bd232d1-kube-api-access-ztdtw\") pod \"ceilometer-0\" (UID: \"03ce2480-4daa-4360-841c-2d604bd232d1\") " pod="openstack/ceilometer-0" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.695683 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03ce2480-4daa-4360-841c-2d604bd232d1-scripts\") pod \"ceilometer-0\" (UID: \"03ce2480-4daa-4360-841c-2d604bd232d1\") " pod="openstack/ceilometer-0" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.695700 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ce2480-4daa-4360-841c-2d604bd232d1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"03ce2480-4daa-4360-841c-2d604bd232d1\") " pod="openstack/ceilometer-0" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.695751 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/03ce2480-4daa-4360-841c-2d604bd232d1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"03ce2480-4daa-4360-841c-2d604bd232d1\") " pod="openstack/ceilometer-0" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.696010 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03ce2480-4daa-4360-841c-2d604bd232d1-log-httpd\") pod \"ceilometer-0\" (UID: \"03ce2480-4daa-4360-841c-2d604bd232d1\") 
" pod="openstack/ceilometer-0" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.696370 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03ce2480-4daa-4360-841c-2d604bd232d1-run-httpd\") pod \"ceilometer-0\" (UID: \"03ce2480-4daa-4360-841c-2d604bd232d1\") " pod="openstack/ceilometer-0" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.701662 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/03ce2480-4daa-4360-841c-2d604bd232d1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"03ce2480-4daa-4360-841c-2d604bd232d1\") " pod="openstack/ceilometer-0" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.701733 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03ce2480-4daa-4360-841c-2d604bd232d1-scripts\") pod \"ceilometer-0\" (UID: \"03ce2480-4daa-4360-841c-2d604bd232d1\") " pod="openstack/ceilometer-0" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.704519 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ce2480-4daa-4360-841c-2d604bd232d1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"03ce2480-4daa-4360-841c-2d604bd232d1\") " pod="openstack/ceilometer-0" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.707707 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/03ce2480-4daa-4360-841c-2d604bd232d1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"03ce2480-4daa-4360-841c-2d604bd232d1\") " pod="openstack/ceilometer-0" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.715698 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/03ce2480-4daa-4360-841c-2d604bd232d1-config-data\") pod \"ceilometer-0\" (UID: \"03ce2480-4daa-4360-841c-2d604bd232d1\") " pod="openstack/ceilometer-0" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.728887 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztdtw\" (UniqueName: \"kubernetes.io/projected/03ce2480-4daa-4360-841c-2d604bd232d1-kube-api-access-ztdtw\") pod \"ceilometer-0\" (UID: \"03ce2480-4daa-4360-841c-2d604bd232d1\") " pod="openstack/ceilometer-0" Jan 29 07:05:13 crc kubenswrapper[4826]: I0129 07:05:13.809727 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 07:05:14 crc kubenswrapper[4826]: I0129 07:05:14.223281 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:05:14 crc kubenswrapper[4826]: W0129 07:05:14.223817 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03ce2480_4daa_4360_841c_2d604bd232d1.slice/crio-db0e68cc3a40e8eb51e2d7f1b05856cc37646bdcadbd3157069f7489070a06ad WatchSource:0}: Error finding container db0e68cc3a40e8eb51e2d7f1b05856cc37646bdcadbd3157069f7489070a06ad: Status 404 returned error can't find the container with id db0e68cc3a40e8eb51e2d7f1b05856cc37646bdcadbd3157069f7489070a06ad Jan 29 07:05:14 crc kubenswrapper[4826]: I0129 07:05:14.376542 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03ce2480-4daa-4360-841c-2d604bd232d1","Type":"ContainerStarted","Data":"db0e68cc3a40e8eb51e2d7f1b05856cc37646bdcadbd3157069f7489070a06ad"} Jan 29 07:05:14 crc kubenswrapper[4826]: I0129 07:05:14.382969 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 07:05:14 crc kubenswrapper[4826]: I0129 07:05:14.822221 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="71636ef8-0a41-494e-bd82-703d109ce02d" path="/var/lib/kubelet/pods/71636ef8-0a41-494e-bd82-703d109ce02d/volumes" Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.278574 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.336854 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0405bde-d981-4583-80b3-b26b643dc78f-combined-ca-bundle\") pod \"a0405bde-d981-4583-80b3-b26b643dc78f\" (UID: \"a0405bde-d981-4583-80b3-b26b643dc78f\") " Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.336966 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0405bde-d981-4583-80b3-b26b643dc78f-config-data\") pod \"a0405bde-d981-4583-80b3-b26b643dc78f\" (UID: \"a0405bde-d981-4583-80b3-b26b643dc78f\") " Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.337231 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hc52\" (UniqueName: \"kubernetes.io/projected/a0405bde-d981-4583-80b3-b26b643dc78f-kube-api-access-2hc52\") pod \"a0405bde-d981-4583-80b3-b26b643dc78f\" (UID: \"a0405bde-d981-4583-80b3-b26b643dc78f\") " Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.346431 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0405bde-d981-4583-80b3-b26b643dc78f-kube-api-access-2hc52" (OuterVolumeSpecName: "kube-api-access-2hc52") pod "a0405bde-d981-4583-80b3-b26b643dc78f" (UID: "a0405bde-d981-4583-80b3-b26b643dc78f"). InnerVolumeSpecName "kube-api-access-2hc52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.364605 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0405bde-d981-4583-80b3-b26b643dc78f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0405bde-d981-4583-80b3-b26b643dc78f" (UID: "a0405bde-d981-4583-80b3-b26b643dc78f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.376482 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0405bde-d981-4583-80b3-b26b643dc78f-config-data" (OuterVolumeSpecName: "config-data") pod "a0405bde-d981-4583-80b3-b26b643dc78f" (UID: "a0405bde-d981-4583-80b3-b26b643dc78f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.388243 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03ce2480-4daa-4360-841c-2d604bd232d1","Type":"ContainerStarted","Data":"668dfaf93109d117dfae03f4bb969b43cef2e41f1638a1ca4f5fca0b8f467c65"} Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.391038 4826 generic.go:334] "Generic (PLEG): container finished" podID="a0405bde-d981-4583-80b3-b26b643dc78f" containerID="1c3ff3cbd16ba6d294469d9e6ea35aa7aff48c5f39e6d8bb4a89a2715f91009c" exitCode=137 Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.391066 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.391100 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a0405bde-d981-4583-80b3-b26b643dc78f","Type":"ContainerDied","Data":"1c3ff3cbd16ba6d294469d9e6ea35aa7aff48c5f39e6d8bb4a89a2715f91009c"} Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.391138 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a0405bde-d981-4583-80b3-b26b643dc78f","Type":"ContainerDied","Data":"813494d2eebd6afd46c02c84501ea4e2f7c5d5fb352f5585dd943edb5493557d"} Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.391159 4826 scope.go:117] "RemoveContainer" containerID="1c3ff3cbd16ba6d294469d9e6ea35aa7aff48c5f39e6d8bb4a89a2715f91009c" Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.427496 4826 scope.go:117] "RemoveContainer" containerID="1c3ff3cbd16ba6d294469d9e6ea35aa7aff48c5f39e6d8bb4a89a2715f91009c" Jan 29 07:05:15 crc kubenswrapper[4826]: E0129 07:05:15.428265 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c3ff3cbd16ba6d294469d9e6ea35aa7aff48c5f39e6d8bb4a89a2715f91009c\": container with ID starting with 1c3ff3cbd16ba6d294469d9e6ea35aa7aff48c5f39e6d8bb4a89a2715f91009c not found: ID does not exist" containerID="1c3ff3cbd16ba6d294469d9e6ea35aa7aff48c5f39e6d8bb4a89a2715f91009c" Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.428325 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c3ff3cbd16ba6d294469d9e6ea35aa7aff48c5f39e6d8bb4a89a2715f91009c"} err="failed to get container status \"1c3ff3cbd16ba6d294469d9e6ea35aa7aff48c5f39e6d8bb4a89a2715f91009c\": rpc error: code = NotFound desc = could not find container \"1c3ff3cbd16ba6d294469d9e6ea35aa7aff48c5f39e6d8bb4a89a2715f91009c\": container with ID starting with 
1c3ff3cbd16ba6d294469d9e6ea35aa7aff48c5f39e6d8bb4a89a2715f91009c not found: ID does not exist" Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.434756 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.439123 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hc52\" (UniqueName: \"kubernetes.io/projected/a0405bde-d981-4583-80b3-b26b643dc78f-kube-api-access-2hc52\") on node \"crc\" DevicePath \"\"" Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.439149 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0405bde-d981-4583-80b3-b26b643dc78f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.439159 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0405bde-d981-4583-80b3-b26b643dc78f-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.447412 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.458740 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 07:05:15 crc kubenswrapper[4826]: E0129 07:05:15.459094 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0405bde-d981-4583-80b3-b26b643dc78f" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.459110 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0405bde-d981-4583-80b3-b26b643dc78f" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.459337 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0405bde-d981-4583-80b3-b26b643dc78f" 
containerName="nova-cell1-novncproxy-novncproxy" Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.459881 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.469713 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.471402 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.471633 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.471812 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.540655 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.540729 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.540856 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkvx5\" (UniqueName: 
\"kubernetes.io/projected/bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3-kube-api-access-tkvx5\") pod \"nova-cell1-novncproxy-0\" (UID: \"bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.540911 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.541088 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.643044 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.643225 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.643293 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.643437 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkvx5\" (UniqueName: \"kubernetes.io/projected/bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3-kube-api-access-tkvx5\") pod \"nova-cell1-novncproxy-0\" (UID: \"bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.643515 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.650096 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.651787 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.656001 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.656216 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.673420 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkvx5\" (UniqueName: \"kubernetes.io/projected/bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3-kube-api-access-tkvx5\") pod \"nova-cell1-novncproxy-0\" (UID: \"bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 07:05:15 crc kubenswrapper[4826]: I0129 07:05:15.781777 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 07:05:16 crc kubenswrapper[4826]: I0129 07:05:16.303645 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 07:05:16 crc kubenswrapper[4826]: W0129 07:05:16.309089 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfbd8f8f_3f8e_4fe2_a2e9_e3085d7bded3.slice/crio-743d704c4f345bd59582de57fcf653935af47cac09f32902456426b73fda301a WatchSource:0}: Error finding container 743d704c4f345bd59582de57fcf653935af47cac09f32902456426b73fda301a: Status 404 returned error can't find the container with id 743d704c4f345bd59582de57fcf653935af47cac09f32902456426b73fda301a Jan 29 07:05:16 crc kubenswrapper[4826]: I0129 07:05:16.403335 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03ce2480-4daa-4360-841c-2d604bd232d1","Type":"ContainerStarted","Data":"cb5094b306eb438eb067301ac4b0f25e865f8f4e59d555df4b429adf5b2de241"} Jan 29 07:05:16 crc kubenswrapper[4826]: I0129 07:05:16.403397 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03ce2480-4daa-4360-841c-2d604bd232d1","Type":"ContainerStarted","Data":"14a142905ae9018ad2445716f20413fc0e458a0d9c4b1abb1773b8258d839d8e"} Jan 29 07:05:16 crc kubenswrapper[4826]: I0129 07:05:16.408796 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3","Type":"ContainerStarted","Data":"743d704c4f345bd59582de57fcf653935af47cac09f32902456426b73fda301a"} Jan 29 07:05:16 crc kubenswrapper[4826]: I0129 07:05:16.817898 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0405bde-d981-4583-80b3-b26b643dc78f" path="/var/lib/kubelet/pods/a0405bde-d981-4583-80b3-b26b643dc78f/volumes" Jan 29 07:05:17 crc kubenswrapper[4826]: I0129 07:05:17.422668 4826 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3","Type":"ContainerStarted","Data":"ba8beccf426f4dc7182586b8d6e4748afa9e641836ccaafd3e63a134a7f45556"} Jan 29 07:05:17 crc kubenswrapper[4826]: I0129 07:05:17.448321 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.448287132 podStartE2EDuration="2.448287132s" podCreationTimestamp="2026-01-29 07:05:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:05:17.434791207 +0000 UTC m=+1301.296584286" watchObservedRunningTime="2026-01-29 07:05:17.448287132 +0000 UTC m=+1301.310080201" Jan 29 07:05:17 crc kubenswrapper[4826]: I0129 07:05:17.735831 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 29 07:05:18 crc kubenswrapper[4826]: I0129 07:05:18.579500 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 07:05:18 crc kubenswrapper[4826]: I0129 07:05:18.579923 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 07:05:18 crc kubenswrapper[4826]: I0129 07:05:18.582925 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 07:05:18 crc kubenswrapper[4826]: I0129 07:05:18.583464 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 07:05:19 crc kubenswrapper[4826]: I0129 07:05:19.447383 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03ce2480-4daa-4360-841c-2d604bd232d1","Type":"ContainerStarted","Data":"ed322a473564d0a3988a85f5ed5cd138c6708c0e7eb9f7fc6cb88318e06d041f"} Jan 29 07:05:19 crc kubenswrapper[4826]: I0129 07:05:19.447772 4826 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 07:05:19 crc kubenswrapper[4826]: I0129 07:05:19.447808 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 07:05:19 crc kubenswrapper[4826]: I0129 07:05:19.487138 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.530743691 podStartE2EDuration="6.487112837s" podCreationTimestamp="2026-01-29 07:05:13 +0000 UTC" firstStartedPulling="2026-01-29 07:05:14.227005875 +0000 UTC m=+1298.088798954" lastFinishedPulling="2026-01-29 07:05:18.183375031 +0000 UTC m=+1302.045168100" observedRunningTime="2026-01-29 07:05:19.48646395 +0000 UTC m=+1303.348257039" watchObservedRunningTime="2026-01-29 07:05:19.487112837 +0000 UTC m=+1303.348905906" Jan 29 07:05:19 crc kubenswrapper[4826]: I0129 07:05:19.612795 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 07:05:19 crc kubenswrapper[4826]: I0129 07:05:19.787760 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-9wnpk"] Jan 29 07:05:19 crc kubenswrapper[4826]: I0129 07:05:19.790045 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-9wnpk" Jan 29 07:05:19 crc kubenswrapper[4826]: I0129 07:05:19.816411 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-9wnpk"] Jan 29 07:05:19 crc kubenswrapper[4826]: I0129 07:05:19.960976 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b099774a-b73b-4803-9259-98b9e1b5933b-config\") pod \"dnsmasq-dns-5ddd577785-9wnpk\" (UID: \"b099774a-b73b-4803-9259-98b9e1b5933b\") " pod="openstack/dnsmasq-dns-5ddd577785-9wnpk" Jan 29 07:05:19 crc kubenswrapper[4826]: I0129 07:05:19.961068 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b099774a-b73b-4803-9259-98b9e1b5933b-dns-svc\") pod \"dnsmasq-dns-5ddd577785-9wnpk\" (UID: \"b099774a-b73b-4803-9259-98b9e1b5933b\") " pod="openstack/dnsmasq-dns-5ddd577785-9wnpk" Jan 29 07:05:19 crc kubenswrapper[4826]: I0129 07:05:19.961090 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b099774a-b73b-4803-9259-98b9e1b5933b-ovsdbserver-nb\") pod \"dnsmasq-dns-5ddd577785-9wnpk\" (UID: \"b099774a-b73b-4803-9259-98b9e1b5933b\") " pod="openstack/dnsmasq-dns-5ddd577785-9wnpk" Jan 29 07:05:19 crc kubenswrapper[4826]: I0129 07:05:19.961201 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b099774a-b73b-4803-9259-98b9e1b5933b-dns-swift-storage-0\") pod \"dnsmasq-dns-5ddd577785-9wnpk\" (UID: \"b099774a-b73b-4803-9259-98b9e1b5933b\") " pod="openstack/dnsmasq-dns-5ddd577785-9wnpk" Jan 29 07:05:19 crc kubenswrapper[4826]: I0129 07:05:19.961317 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-n25zz\" (UniqueName: \"kubernetes.io/projected/b099774a-b73b-4803-9259-98b9e1b5933b-kube-api-access-n25zz\") pod \"dnsmasq-dns-5ddd577785-9wnpk\" (UID: \"b099774a-b73b-4803-9259-98b9e1b5933b\") " pod="openstack/dnsmasq-dns-5ddd577785-9wnpk" Jan 29 07:05:19 crc kubenswrapper[4826]: I0129 07:05:19.961380 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b099774a-b73b-4803-9259-98b9e1b5933b-ovsdbserver-sb\") pod \"dnsmasq-dns-5ddd577785-9wnpk\" (UID: \"b099774a-b73b-4803-9259-98b9e1b5933b\") " pod="openstack/dnsmasq-dns-5ddd577785-9wnpk" Jan 29 07:05:20 crc kubenswrapper[4826]: I0129 07:05:20.063747 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b099774a-b73b-4803-9259-98b9e1b5933b-config\") pod \"dnsmasq-dns-5ddd577785-9wnpk\" (UID: \"b099774a-b73b-4803-9259-98b9e1b5933b\") " pod="openstack/dnsmasq-dns-5ddd577785-9wnpk" Jan 29 07:05:20 crc kubenswrapper[4826]: I0129 07:05:20.063868 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b099774a-b73b-4803-9259-98b9e1b5933b-dns-svc\") pod \"dnsmasq-dns-5ddd577785-9wnpk\" (UID: \"b099774a-b73b-4803-9259-98b9e1b5933b\") " pod="openstack/dnsmasq-dns-5ddd577785-9wnpk" Jan 29 07:05:20 crc kubenswrapper[4826]: I0129 07:05:20.063898 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b099774a-b73b-4803-9259-98b9e1b5933b-ovsdbserver-nb\") pod \"dnsmasq-dns-5ddd577785-9wnpk\" (UID: \"b099774a-b73b-4803-9259-98b9e1b5933b\") " pod="openstack/dnsmasq-dns-5ddd577785-9wnpk" Jan 29 07:05:20 crc kubenswrapper[4826]: I0129 07:05:20.063932 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/b099774a-b73b-4803-9259-98b9e1b5933b-dns-swift-storage-0\") pod \"dnsmasq-dns-5ddd577785-9wnpk\" (UID: \"b099774a-b73b-4803-9259-98b9e1b5933b\") " pod="openstack/dnsmasq-dns-5ddd577785-9wnpk" Jan 29 07:05:20 crc kubenswrapper[4826]: I0129 07:05:20.063972 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n25zz\" (UniqueName: \"kubernetes.io/projected/b099774a-b73b-4803-9259-98b9e1b5933b-kube-api-access-n25zz\") pod \"dnsmasq-dns-5ddd577785-9wnpk\" (UID: \"b099774a-b73b-4803-9259-98b9e1b5933b\") " pod="openstack/dnsmasq-dns-5ddd577785-9wnpk" Jan 29 07:05:20 crc kubenswrapper[4826]: I0129 07:05:20.064015 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b099774a-b73b-4803-9259-98b9e1b5933b-ovsdbserver-sb\") pod \"dnsmasq-dns-5ddd577785-9wnpk\" (UID: \"b099774a-b73b-4803-9259-98b9e1b5933b\") " pod="openstack/dnsmasq-dns-5ddd577785-9wnpk" Jan 29 07:05:20 crc kubenswrapper[4826]: I0129 07:05:20.064792 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b099774a-b73b-4803-9259-98b9e1b5933b-dns-svc\") pod \"dnsmasq-dns-5ddd577785-9wnpk\" (UID: \"b099774a-b73b-4803-9259-98b9e1b5933b\") " pod="openstack/dnsmasq-dns-5ddd577785-9wnpk" Jan 29 07:05:20 crc kubenswrapper[4826]: I0129 07:05:20.064809 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b099774a-b73b-4803-9259-98b9e1b5933b-config\") pod \"dnsmasq-dns-5ddd577785-9wnpk\" (UID: \"b099774a-b73b-4803-9259-98b9e1b5933b\") " pod="openstack/dnsmasq-dns-5ddd577785-9wnpk" Jan 29 07:05:20 crc kubenswrapper[4826]: I0129 07:05:20.064977 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b099774a-b73b-4803-9259-98b9e1b5933b-ovsdbserver-sb\") pod 
\"dnsmasq-dns-5ddd577785-9wnpk\" (UID: \"b099774a-b73b-4803-9259-98b9e1b5933b\") " pod="openstack/dnsmasq-dns-5ddd577785-9wnpk" Jan 29 07:05:20 crc kubenswrapper[4826]: I0129 07:05:20.065629 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b099774a-b73b-4803-9259-98b9e1b5933b-ovsdbserver-nb\") pod \"dnsmasq-dns-5ddd577785-9wnpk\" (UID: \"b099774a-b73b-4803-9259-98b9e1b5933b\") " pod="openstack/dnsmasq-dns-5ddd577785-9wnpk" Jan 29 07:05:20 crc kubenswrapper[4826]: I0129 07:05:20.065856 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b099774a-b73b-4803-9259-98b9e1b5933b-dns-swift-storage-0\") pod \"dnsmasq-dns-5ddd577785-9wnpk\" (UID: \"b099774a-b73b-4803-9259-98b9e1b5933b\") " pod="openstack/dnsmasq-dns-5ddd577785-9wnpk" Jan 29 07:05:20 crc kubenswrapper[4826]: I0129 07:05:20.084840 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n25zz\" (UniqueName: \"kubernetes.io/projected/b099774a-b73b-4803-9259-98b9e1b5933b-kube-api-access-n25zz\") pod \"dnsmasq-dns-5ddd577785-9wnpk\" (UID: \"b099774a-b73b-4803-9259-98b9e1b5933b\") " pod="openstack/dnsmasq-dns-5ddd577785-9wnpk" Jan 29 07:05:20 crc kubenswrapper[4826]: I0129 07:05:20.129678 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-9wnpk" Jan 29 07:05:20 crc kubenswrapper[4826]: I0129 07:05:20.581927 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-9wnpk"] Jan 29 07:05:20 crc kubenswrapper[4826]: I0129 07:05:20.782775 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 29 07:05:21 crc kubenswrapper[4826]: I0129 07:05:21.462980 4826 generic.go:334] "Generic (PLEG): container finished" podID="b099774a-b73b-4803-9259-98b9e1b5933b" containerID="3ce2a3dac061c88d9174949819ebe06f54e7dd354603e5038aedc5ba03f8bdd8" exitCode=0 Jan 29 07:05:21 crc kubenswrapper[4826]: I0129 07:05:21.463025 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-9wnpk" event={"ID":"b099774a-b73b-4803-9259-98b9e1b5933b","Type":"ContainerDied","Data":"3ce2a3dac061c88d9174949819ebe06f54e7dd354603e5038aedc5ba03f8bdd8"} Jan 29 07:05:21 crc kubenswrapper[4826]: I0129 07:05:21.463269 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-9wnpk" event={"ID":"b099774a-b73b-4803-9259-98b9e1b5933b","Type":"ContainerStarted","Data":"89c52ecfc033b538d52eaf8a4b988c05f22e16c202532a2218a058c064a11eea"} Jan 29 07:05:22 crc kubenswrapper[4826]: I0129 07:05:22.473465 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-9wnpk" event={"ID":"b099774a-b73b-4803-9259-98b9e1b5933b","Type":"ContainerStarted","Data":"85d5f42395b3063c15f60fc73e13a264e9f6ec4017758aed23359d39c4b8b103"} Jan 29 07:05:22 crc kubenswrapper[4826]: I0129 07:05:22.473696 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ddd577785-9wnpk" Jan 29 07:05:22 crc kubenswrapper[4826]: I0129 07:05:22.499515 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ddd577785-9wnpk" podStartSLOduration=3.499495156 
podStartE2EDuration="3.499495156s" podCreationTimestamp="2026-01-29 07:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:05:22.492248635 +0000 UTC m=+1306.354041704" watchObservedRunningTime="2026-01-29 07:05:22.499495156 +0000 UTC m=+1306.361288225" Jan 29 07:05:22 crc kubenswrapper[4826]: I0129 07:05:22.875148 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 07:05:22 crc kubenswrapper[4826]: I0129 07:05:22.875541 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bdd7a6a7-486e-4736-af7e-70f16394252f" containerName="nova-api-log" containerID="cri-o://a70ebef7793fc16d23eb3cf7086d5a32ae98490e208816e9aff22022cb690fb9" gracePeriod=30 Jan 29 07:05:22 crc kubenswrapper[4826]: I0129 07:05:22.875618 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bdd7a6a7-486e-4736-af7e-70f16394252f" containerName="nova-api-api" containerID="cri-o://8d5b3fff199854dae6cde1a71f974779070350c89f28ec58fbdde4598e08fca8" gracePeriod=30 Jan 29 07:05:23 crc kubenswrapper[4826]: I0129 07:05:23.211231 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:05:23 crc kubenswrapper[4826]: I0129 07:05:23.211673 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="03ce2480-4daa-4360-841c-2d604bd232d1" containerName="ceilometer-central-agent" containerID="cri-o://668dfaf93109d117dfae03f4bb969b43cef2e41f1638a1ca4f5fca0b8f467c65" gracePeriod=30 Jan 29 07:05:23 crc kubenswrapper[4826]: I0129 07:05:23.211885 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="03ce2480-4daa-4360-841c-2d604bd232d1" containerName="proxy-httpd" 
containerID="cri-o://ed322a473564d0a3988a85f5ed5cd138c6708c0e7eb9f7fc6cb88318e06d041f" gracePeriod=30 Jan 29 07:05:23 crc kubenswrapper[4826]: I0129 07:05:23.212038 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="03ce2480-4daa-4360-841c-2d604bd232d1" containerName="ceilometer-notification-agent" containerID="cri-o://14a142905ae9018ad2445716f20413fc0e458a0d9c4b1abb1773b8258d839d8e" gracePeriod=30 Jan 29 07:05:23 crc kubenswrapper[4826]: I0129 07:05:23.212125 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="03ce2480-4daa-4360-841c-2d604bd232d1" containerName="sg-core" containerID="cri-o://cb5094b306eb438eb067301ac4b0f25e865f8f4e59d555df4b429adf5b2de241" gracePeriod=30 Jan 29 07:05:23 crc kubenswrapper[4826]: I0129 07:05:23.487063 4826 generic.go:334] "Generic (PLEG): container finished" podID="bdd7a6a7-486e-4736-af7e-70f16394252f" containerID="a70ebef7793fc16d23eb3cf7086d5a32ae98490e208816e9aff22022cb690fb9" exitCode=143 Jan 29 07:05:23 crc kubenswrapper[4826]: I0129 07:05:23.487143 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bdd7a6a7-486e-4736-af7e-70f16394252f","Type":"ContainerDied","Data":"a70ebef7793fc16d23eb3cf7086d5a32ae98490e208816e9aff22022cb690fb9"} Jan 29 07:05:23 crc kubenswrapper[4826]: I0129 07:05:23.490322 4826 generic.go:334] "Generic (PLEG): container finished" podID="03ce2480-4daa-4360-841c-2d604bd232d1" containerID="ed322a473564d0a3988a85f5ed5cd138c6708c0e7eb9f7fc6cb88318e06d041f" exitCode=0 Jan 29 07:05:23 crc kubenswrapper[4826]: I0129 07:05:23.490351 4826 generic.go:334] "Generic (PLEG): container finished" podID="03ce2480-4daa-4360-841c-2d604bd232d1" containerID="cb5094b306eb438eb067301ac4b0f25e865f8f4e59d555df4b429adf5b2de241" exitCode=2 Jan 29 07:05:23 crc kubenswrapper[4826]: I0129 07:05:23.490558 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"03ce2480-4daa-4360-841c-2d604bd232d1","Type":"ContainerDied","Data":"ed322a473564d0a3988a85f5ed5cd138c6708c0e7eb9f7fc6cb88318e06d041f"} Jan 29 07:05:23 crc kubenswrapper[4826]: I0129 07:05:23.490609 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03ce2480-4daa-4360-841c-2d604bd232d1","Type":"ContainerDied","Data":"cb5094b306eb438eb067301ac4b0f25e865f8f4e59d555df4b429adf5b2de241"} Jan 29 07:05:23 crc kubenswrapper[4826]: I0129 07:05:23.931846 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.046968 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03ce2480-4daa-4360-841c-2d604bd232d1-config-data\") pod \"03ce2480-4daa-4360-841c-2d604bd232d1\" (UID: \"03ce2480-4daa-4360-841c-2d604bd232d1\") " Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.047004 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03ce2480-4daa-4360-841c-2d604bd232d1-run-httpd\") pod \"03ce2480-4daa-4360-841c-2d604bd232d1\" (UID: \"03ce2480-4daa-4360-841c-2d604bd232d1\") " Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.047029 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ce2480-4daa-4360-841c-2d604bd232d1-combined-ca-bundle\") pod \"03ce2480-4daa-4360-841c-2d604bd232d1\" (UID: \"03ce2480-4daa-4360-841c-2d604bd232d1\") " Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.047049 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/03ce2480-4daa-4360-841c-2d604bd232d1-sg-core-conf-yaml\") pod 
\"03ce2480-4daa-4360-841c-2d604bd232d1\" (UID: \"03ce2480-4daa-4360-841c-2d604bd232d1\") " Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.047080 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03ce2480-4daa-4360-841c-2d604bd232d1-log-httpd\") pod \"03ce2480-4daa-4360-841c-2d604bd232d1\" (UID: \"03ce2480-4daa-4360-841c-2d604bd232d1\") " Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.047120 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztdtw\" (UniqueName: \"kubernetes.io/projected/03ce2480-4daa-4360-841c-2d604bd232d1-kube-api-access-ztdtw\") pod \"03ce2480-4daa-4360-841c-2d604bd232d1\" (UID: \"03ce2480-4daa-4360-841c-2d604bd232d1\") " Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.047191 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/03ce2480-4daa-4360-841c-2d604bd232d1-ceilometer-tls-certs\") pod \"03ce2480-4daa-4360-841c-2d604bd232d1\" (UID: \"03ce2480-4daa-4360-841c-2d604bd232d1\") " Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.047207 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03ce2480-4daa-4360-841c-2d604bd232d1-scripts\") pod \"03ce2480-4daa-4360-841c-2d604bd232d1\" (UID: \"03ce2480-4daa-4360-841c-2d604bd232d1\") " Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.048679 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03ce2480-4daa-4360-841c-2d604bd232d1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "03ce2480-4daa-4360-841c-2d604bd232d1" (UID: "03ce2480-4daa-4360-841c-2d604bd232d1"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.048902 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03ce2480-4daa-4360-841c-2d604bd232d1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "03ce2480-4daa-4360-841c-2d604bd232d1" (UID: "03ce2480-4daa-4360-841c-2d604bd232d1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.055905 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03ce2480-4daa-4360-841c-2d604bd232d1-kube-api-access-ztdtw" (OuterVolumeSpecName: "kube-api-access-ztdtw") pod "03ce2480-4daa-4360-841c-2d604bd232d1" (UID: "03ce2480-4daa-4360-841c-2d604bd232d1"). InnerVolumeSpecName "kube-api-access-ztdtw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.057074 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03ce2480-4daa-4360-841c-2d604bd232d1-scripts" (OuterVolumeSpecName: "scripts") pod "03ce2480-4daa-4360-841c-2d604bd232d1" (UID: "03ce2480-4daa-4360-841c-2d604bd232d1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.085084 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03ce2480-4daa-4360-841c-2d604bd232d1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "03ce2480-4daa-4360-841c-2d604bd232d1" (UID: "03ce2480-4daa-4360-841c-2d604bd232d1"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.149690 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03ce2480-4daa-4360-841c-2d604bd232d1-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "03ce2480-4daa-4360-841c-2d604bd232d1" (UID: "03ce2480-4daa-4360-841c-2d604bd232d1"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.151128 4826 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03ce2480-4daa-4360-841c-2d604bd232d1-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.151157 4826 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/03ce2480-4daa-4360-841c-2d604bd232d1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.151173 4826 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/03ce2480-4daa-4360-841c-2d604bd232d1-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.151189 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztdtw\" (UniqueName: \"kubernetes.io/projected/03ce2480-4daa-4360-841c-2d604bd232d1-kube-api-access-ztdtw\") on node \"crc\" DevicePath \"\"" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.151209 4826 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/03ce2480-4daa-4360-841c-2d604bd232d1-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.151224 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/03ce2480-4daa-4360-841c-2d604bd232d1-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.164485 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03ce2480-4daa-4360-841c-2d604bd232d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03ce2480-4daa-4360-841c-2d604bd232d1" (UID: "03ce2480-4daa-4360-841c-2d604bd232d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.200934 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03ce2480-4daa-4360-841c-2d604bd232d1-config-data" (OuterVolumeSpecName: "config-data") pod "03ce2480-4daa-4360-841c-2d604bd232d1" (UID: "03ce2480-4daa-4360-841c-2d604bd232d1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.252923 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03ce2480-4daa-4360-841c-2d604bd232d1-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.252968 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ce2480-4daa-4360-841c-2d604bd232d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.506220 4826 generic.go:334] "Generic (PLEG): container finished" podID="03ce2480-4daa-4360-841c-2d604bd232d1" containerID="14a142905ae9018ad2445716f20413fc0e458a0d9c4b1abb1773b8258d839d8e" exitCode=0 Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.506270 4826 generic.go:334] "Generic (PLEG): container finished" podID="03ce2480-4daa-4360-841c-2d604bd232d1" 
containerID="668dfaf93109d117dfae03f4bb969b43cef2e41f1638a1ca4f5fca0b8f467c65" exitCode=0 Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.506339 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03ce2480-4daa-4360-841c-2d604bd232d1","Type":"ContainerDied","Data":"14a142905ae9018ad2445716f20413fc0e458a0d9c4b1abb1773b8258d839d8e"} Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.506393 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03ce2480-4daa-4360-841c-2d604bd232d1","Type":"ContainerDied","Data":"668dfaf93109d117dfae03f4bb969b43cef2e41f1638a1ca4f5fca0b8f467c65"} Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.506424 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"03ce2480-4daa-4360-841c-2d604bd232d1","Type":"ContainerDied","Data":"db0e68cc3a40e8eb51e2d7f1b05856cc37646bdcadbd3157069f7489070a06ad"} Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.506462 4826 scope.go:117] "RemoveContainer" containerID="ed322a473564d0a3988a85f5ed5cd138c6708c0e7eb9f7fc6cb88318e06d041f" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.506470 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.538576 4826 scope.go:117] "RemoveContainer" containerID="cb5094b306eb438eb067301ac4b0f25e865f8f4e59d555df4b429adf5b2de241" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.559252 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.566565 4826 scope.go:117] "RemoveContainer" containerID="14a142905ae9018ad2445716f20413fc0e458a0d9c4b1abb1773b8258d839d8e" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.570777 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.594280 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:05:24 crc kubenswrapper[4826]: E0129 07:05:24.594950 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03ce2480-4daa-4360-841c-2d604bd232d1" containerName="ceilometer-central-agent" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.594981 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="03ce2480-4daa-4360-841c-2d604bd232d1" containerName="ceilometer-central-agent" Jan 29 07:05:24 crc kubenswrapper[4826]: E0129 07:05:24.595009 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03ce2480-4daa-4360-841c-2d604bd232d1" containerName="sg-core" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.595023 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="03ce2480-4daa-4360-841c-2d604bd232d1" containerName="sg-core" Jan 29 07:05:24 crc kubenswrapper[4826]: E0129 07:05:24.595066 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03ce2480-4daa-4360-841c-2d604bd232d1" containerName="ceilometer-notification-agent" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.595080 4826 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="03ce2480-4daa-4360-841c-2d604bd232d1" containerName="ceilometer-notification-agent" Jan 29 07:05:24 crc kubenswrapper[4826]: E0129 07:05:24.595116 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03ce2480-4daa-4360-841c-2d604bd232d1" containerName="proxy-httpd" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.595129 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="03ce2480-4daa-4360-841c-2d604bd232d1" containerName="proxy-httpd" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.595490 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="03ce2480-4daa-4360-841c-2d604bd232d1" containerName="ceilometer-notification-agent" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.595514 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="03ce2480-4daa-4360-841c-2d604bd232d1" containerName="sg-core" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.595545 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="03ce2480-4daa-4360-841c-2d604bd232d1" containerName="ceilometer-central-agent" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.595564 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="03ce2480-4daa-4360-841c-2d604bd232d1" containerName="proxy-httpd" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.599692 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.603167 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.603259 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.603418 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.604563 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.613204 4826 scope.go:117] "RemoveContainer" containerID="668dfaf93109d117dfae03f4bb969b43cef2e41f1638a1ca4f5fca0b8f467c65" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.650340 4826 scope.go:117] "RemoveContainer" containerID="ed322a473564d0a3988a85f5ed5cd138c6708c0e7eb9f7fc6cb88318e06d041f" Jan 29 07:05:24 crc kubenswrapper[4826]: E0129 07:05:24.651281 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed322a473564d0a3988a85f5ed5cd138c6708c0e7eb9f7fc6cb88318e06d041f\": container with ID starting with ed322a473564d0a3988a85f5ed5cd138c6708c0e7eb9f7fc6cb88318e06d041f not found: ID does not exist" containerID="ed322a473564d0a3988a85f5ed5cd138c6708c0e7eb9f7fc6cb88318e06d041f" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.651346 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed322a473564d0a3988a85f5ed5cd138c6708c0e7eb9f7fc6cb88318e06d041f"} err="failed to get container status \"ed322a473564d0a3988a85f5ed5cd138c6708c0e7eb9f7fc6cb88318e06d041f\": rpc error: code = NotFound desc = could not find container \"ed322a473564d0a3988a85f5ed5cd138c6708c0e7eb9f7fc6cb88318e06d041f\": 
container with ID starting with ed322a473564d0a3988a85f5ed5cd138c6708c0e7eb9f7fc6cb88318e06d041f not found: ID does not exist" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.651373 4826 scope.go:117] "RemoveContainer" containerID="cb5094b306eb438eb067301ac4b0f25e865f8f4e59d555df4b429adf5b2de241" Jan 29 07:05:24 crc kubenswrapper[4826]: E0129 07:05:24.660852 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb5094b306eb438eb067301ac4b0f25e865f8f4e59d555df4b429adf5b2de241\": container with ID starting with cb5094b306eb438eb067301ac4b0f25e865f8f4e59d555df4b429adf5b2de241 not found: ID does not exist" containerID="cb5094b306eb438eb067301ac4b0f25e865f8f4e59d555df4b429adf5b2de241" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.660895 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb5094b306eb438eb067301ac4b0f25e865f8f4e59d555df4b429adf5b2de241"} err="failed to get container status \"cb5094b306eb438eb067301ac4b0f25e865f8f4e59d555df4b429adf5b2de241\": rpc error: code = NotFound desc = could not find container \"cb5094b306eb438eb067301ac4b0f25e865f8f4e59d555df4b429adf5b2de241\": container with ID starting with cb5094b306eb438eb067301ac4b0f25e865f8f4e59d555df4b429adf5b2de241 not found: ID does not exist" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.660925 4826 scope.go:117] "RemoveContainer" containerID="14a142905ae9018ad2445716f20413fc0e458a0d9c4b1abb1773b8258d839d8e" Jan 29 07:05:24 crc kubenswrapper[4826]: E0129 07:05:24.662657 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14a142905ae9018ad2445716f20413fc0e458a0d9c4b1abb1773b8258d839d8e\": container with ID starting with 14a142905ae9018ad2445716f20413fc0e458a0d9c4b1abb1773b8258d839d8e not found: ID does not exist" 
containerID="14a142905ae9018ad2445716f20413fc0e458a0d9c4b1abb1773b8258d839d8e" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.662720 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14a142905ae9018ad2445716f20413fc0e458a0d9c4b1abb1773b8258d839d8e"} err="failed to get container status \"14a142905ae9018ad2445716f20413fc0e458a0d9c4b1abb1773b8258d839d8e\": rpc error: code = NotFound desc = could not find container \"14a142905ae9018ad2445716f20413fc0e458a0d9c4b1abb1773b8258d839d8e\": container with ID starting with 14a142905ae9018ad2445716f20413fc0e458a0d9c4b1abb1773b8258d839d8e not found: ID does not exist" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.662756 4826 scope.go:117] "RemoveContainer" containerID="668dfaf93109d117dfae03f4bb969b43cef2e41f1638a1ca4f5fca0b8f467c65" Jan 29 07:05:24 crc kubenswrapper[4826]: E0129 07:05:24.667720 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"668dfaf93109d117dfae03f4bb969b43cef2e41f1638a1ca4f5fca0b8f467c65\": container with ID starting with 668dfaf93109d117dfae03f4bb969b43cef2e41f1638a1ca4f5fca0b8f467c65 not found: ID does not exist" containerID="668dfaf93109d117dfae03f4bb969b43cef2e41f1638a1ca4f5fca0b8f467c65" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.667762 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"668dfaf93109d117dfae03f4bb969b43cef2e41f1638a1ca4f5fca0b8f467c65"} err="failed to get container status \"668dfaf93109d117dfae03f4bb969b43cef2e41f1638a1ca4f5fca0b8f467c65\": rpc error: code = NotFound desc = could not find container \"668dfaf93109d117dfae03f4bb969b43cef2e41f1638a1ca4f5fca0b8f467c65\": container with ID starting with 668dfaf93109d117dfae03f4bb969b43cef2e41f1638a1ca4f5fca0b8f467c65 not found: ID does not exist" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.667785 4826 scope.go:117] 
"RemoveContainer" containerID="ed322a473564d0a3988a85f5ed5cd138c6708c0e7eb9f7fc6cb88318e06d041f" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.668166 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed322a473564d0a3988a85f5ed5cd138c6708c0e7eb9f7fc6cb88318e06d041f"} err="failed to get container status \"ed322a473564d0a3988a85f5ed5cd138c6708c0e7eb9f7fc6cb88318e06d041f\": rpc error: code = NotFound desc = could not find container \"ed322a473564d0a3988a85f5ed5cd138c6708c0e7eb9f7fc6cb88318e06d041f\": container with ID starting with ed322a473564d0a3988a85f5ed5cd138c6708c0e7eb9f7fc6cb88318e06d041f not found: ID does not exist" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.668200 4826 scope.go:117] "RemoveContainer" containerID="cb5094b306eb438eb067301ac4b0f25e865f8f4e59d555df4b429adf5b2de241" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.668663 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb5094b306eb438eb067301ac4b0f25e865f8f4e59d555df4b429adf5b2de241"} err="failed to get container status \"cb5094b306eb438eb067301ac4b0f25e865f8f4e59d555df4b429adf5b2de241\": rpc error: code = NotFound desc = could not find container \"cb5094b306eb438eb067301ac4b0f25e865f8f4e59d555df4b429adf5b2de241\": container with ID starting with cb5094b306eb438eb067301ac4b0f25e865f8f4e59d555df4b429adf5b2de241 not found: ID does not exist" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.668737 4826 scope.go:117] "RemoveContainer" containerID="14a142905ae9018ad2445716f20413fc0e458a0d9c4b1abb1773b8258d839d8e" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.669370 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14a142905ae9018ad2445716f20413fc0e458a0d9c4b1abb1773b8258d839d8e"} err="failed to get container status \"14a142905ae9018ad2445716f20413fc0e458a0d9c4b1abb1773b8258d839d8e\": rpc error: code = 
NotFound desc = could not find container \"14a142905ae9018ad2445716f20413fc0e458a0d9c4b1abb1773b8258d839d8e\": container with ID starting with 14a142905ae9018ad2445716f20413fc0e458a0d9c4b1abb1773b8258d839d8e not found: ID does not exist" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.669397 4826 scope.go:117] "RemoveContainer" containerID="668dfaf93109d117dfae03f4bb969b43cef2e41f1638a1ca4f5fca0b8f467c65" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.670751 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"668dfaf93109d117dfae03f4bb969b43cef2e41f1638a1ca4f5fca0b8f467c65"} err="failed to get container status \"668dfaf93109d117dfae03f4bb969b43cef2e41f1638a1ca4f5fca0b8f467c65\": rpc error: code = NotFound desc = could not find container \"668dfaf93109d117dfae03f4bb969b43cef2e41f1638a1ca4f5fca0b8f467c65\": container with ID starting with 668dfaf93109d117dfae03f4bb969b43cef2e41f1638a1ca4f5fca0b8f467c65 not found: ID does not exist" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.761255 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8a4c7089-bd03-4d18-a40e-860c17aa25f3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8a4c7089-bd03-4d18-a40e-860c17aa25f3\") " pod="openstack/ceilometer-0" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.761342 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a4c7089-bd03-4d18-a40e-860c17aa25f3-run-httpd\") pod \"ceilometer-0\" (UID: \"8a4c7089-bd03-4d18-a40e-860c17aa25f3\") " pod="openstack/ceilometer-0" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.761364 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/8a4c7089-bd03-4d18-a40e-860c17aa25f3-log-httpd\") pod \"ceilometer-0\" (UID: \"8a4c7089-bd03-4d18-a40e-860c17aa25f3\") " pod="openstack/ceilometer-0" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.761402 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a4c7089-bd03-4d18-a40e-860c17aa25f3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8a4c7089-bd03-4d18-a40e-860c17aa25f3\") " pod="openstack/ceilometer-0" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.761451 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69n89\" (UniqueName: \"kubernetes.io/projected/8a4c7089-bd03-4d18-a40e-860c17aa25f3-kube-api-access-69n89\") pod \"ceilometer-0\" (UID: \"8a4c7089-bd03-4d18-a40e-860c17aa25f3\") " pod="openstack/ceilometer-0" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.761483 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a4c7089-bd03-4d18-a40e-860c17aa25f3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8a4c7089-bd03-4d18-a40e-860c17aa25f3\") " pod="openstack/ceilometer-0" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.761510 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a4c7089-bd03-4d18-a40e-860c17aa25f3-config-data\") pod \"ceilometer-0\" (UID: \"8a4c7089-bd03-4d18-a40e-860c17aa25f3\") " pod="openstack/ceilometer-0" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.761544 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a4c7089-bd03-4d18-a40e-860c17aa25f3-scripts\") pod \"ceilometer-0\" (UID: 
\"8a4c7089-bd03-4d18-a40e-860c17aa25f3\") " pod="openstack/ceilometer-0" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.852901 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03ce2480-4daa-4360-841c-2d604bd232d1" path="/var/lib/kubelet/pods/03ce2480-4daa-4360-841c-2d604bd232d1/volumes" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.862963 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a4c7089-bd03-4d18-a40e-860c17aa25f3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8a4c7089-bd03-4d18-a40e-860c17aa25f3\") " pod="openstack/ceilometer-0" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.863029 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a4c7089-bd03-4d18-a40e-860c17aa25f3-config-data\") pod \"ceilometer-0\" (UID: \"8a4c7089-bd03-4d18-a40e-860c17aa25f3\") " pod="openstack/ceilometer-0" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.863086 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a4c7089-bd03-4d18-a40e-860c17aa25f3-scripts\") pod \"ceilometer-0\" (UID: \"8a4c7089-bd03-4d18-a40e-860c17aa25f3\") " pod="openstack/ceilometer-0" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.863121 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8a4c7089-bd03-4d18-a40e-860c17aa25f3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8a4c7089-bd03-4d18-a40e-860c17aa25f3\") " pod="openstack/ceilometer-0" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.863195 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a4c7089-bd03-4d18-a40e-860c17aa25f3-run-httpd\") pod 
\"ceilometer-0\" (UID: \"8a4c7089-bd03-4d18-a40e-860c17aa25f3\") " pod="openstack/ceilometer-0" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.863221 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a4c7089-bd03-4d18-a40e-860c17aa25f3-log-httpd\") pod \"ceilometer-0\" (UID: \"8a4c7089-bd03-4d18-a40e-860c17aa25f3\") " pod="openstack/ceilometer-0" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.863283 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a4c7089-bd03-4d18-a40e-860c17aa25f3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8a4c7089-bd03-4d18-a40e-860c17aa25f3\") " pod="openstack/ceilometer-0" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.863393 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69n89\" (UniqueName: \"kubernetes.io/projected/8a4c7089-bd03-4d18-a40e-860c17aa25f3-kube-api-access-69n89\") pod \"ceilometer-0\" (UID: \"8a4c7089-bd03-4d18-a40e-860c17aa25f3\") " pod="openstack/ceilometer-0" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.863683 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a4c7089-bd03-4d18-a40e-860c17aa25f3-run-httpd\") pod \"ceilometer-0\" (UID: \"8a4c7089-bd03-4d18-a40e-860c17aa25f3\") " pod="openstack/ceilometer-0" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.863755 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a4c7089-bd03-4d18-a40e-860c17aa25f3-log-httpd\") pod \"ceilometer-0\" (UID: \"8a4c7089-bd03-4d18-a40e-860c17aa25f3\") " pod="openstack/ceilometer-0" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.868166 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/8a4c7089-bd03-4d18-a40e-860c17aa25f3-config-data\") pod \"ceilometer-0\" (UID: \"8a4c7089-bd03-4d18-a40e-860c17aa25f3\") " pod="openstack/ceilometer-0" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.868209 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8a4c7089-bd03-4d18-a40e-860c17aa25f3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8a4c7089-bd03-4d18-a40e-860c17aa25f3\") " pod="openstack/ceilometer-0" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.872713 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a4c7089-bd03-4d18-a40e-860c17aa25f3-scripts\") pod \"ceilometer-0\" (UID: \"8a4c7089-bd03-4d18-a40e-860c17aa25f3\") " pod="openstack/ceilometer-0" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.874248 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a4c7089-bd03-4d18-a40e-860c17aa25f3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8a4c7089-bd03-4d18-a40e-860c17aa25f3\") " pod="openstack/ceilometer-0" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.880451 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a4c7089-bd03-4d18-a40e-860c17aa25f3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8a4c7089-bd03-4d18-a40e-860c17aa25f3\") " pod="openstack/ceilometer-0" Jan 29 07:05:24 crc kubenswrapper[4826]: I0129 07:05:24.893904 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69n89\" (UniqueName: \"kubernetes.io/projected/8a4c7089-bd03-4d18-a40e-860c17aa25f3-kube-api-access-69n89\") pod \"ceilometer-0\" (UID: \"8a4c7089-bd03-4d18-a40e-860c17aa25f3\") " pod="openstack/ceilometer-0" Jan 29 07:05:24 crc 
kubenswrapper[4826]: I0129 07:05:24.922546 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 07:05:25 crc kubenswrapper[4826]: I0129 07:05:25.290857 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:05:25 crc kubenswrapper[4826]: I0129 07:05:25.478800 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:05:25 crc kubenswrapper[4826]: I0129 07:05:25.523663 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a4c7089-bd03-4d18-a40e-860c17aa25f3","Type":"ContainerStarted","Data":"6b1d31a0b2daa801a19cdc022eb5e0f5deb862f9e0e68da0cd86cdb02d421fb8"} Jan 29 07:05:25 crc kubenswrapper[4826]: I0129 07:05:25.782658 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 29 07:05:25 crc kubenswrapper[4826]: I0129 07:05:25.810670 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 29 07:05:26 crc kubenswrapper[4826]: I0129 07:05:26.538261 4826 generic.go:334] "Generic (PLEG): container finished" podID="bdd7a6a7-486e-4736-af7e-70f16394252f" containerID="8d5b3fff199854dae6cde1a71f974779070350c89f28ec58fbdde4598e08fca8" exitCode=0 Jan 29 07:05:26 crc kubenswrapper[4826]: I0129 07:05:26.538343 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bdd7a6a7-486e-4736-af7e-70f16394252f","Type":"ContainerDied","Data":"8d5b3fff199854dae6cde1a71f974779070350c89f28ec58fbdde4598e08fca8"} Jan 29 07:05:26 crc kubenswrapper[4826]: I0129 07:05:26.538690 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bdd7a6a7-486e-4736-af7e-70f16394252f","Type":"ContainerDied","Data":"fd2a72655e58bbcd22c774adc3bedf4c2a85d09062cdf5a39c87e4dadc1fc5ad"} Jan 29 07:05:26 crc kubenswrapper[4826]: I0129 
07:05:26.538705 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd2a72655e58bbcd22c774adc3bedf4c2a85d09062cdf5a39c87e4dadc1fc5ad" Jan 29 07:05:26 crc kubenswrapper[4826]: I0129 07:05:26.542102 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a4c7089-bd03-4d18-a40e-860c17aa25f3","Type":"ContainerStarted","Data":"55728658f30579f410bc355a3e3d03547c9ff3e7d15db0b68215cf4f777001fc"} Jan 29 07:05:26 crc kubenswrapper[4826]: I0129 07:05:26.562015 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 29 07:05:26 crc kubenswrapper[4826]: I0129 07:05:26.621318 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 07:05:26 crc kubenswrapper[4826]: I0129 07:05:26.712634 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdd7a6a7-486e-4736-af7e-70f16394252f-config-data\") pod \"bdd7a6a7-486e-4736-af7e-70f16394252f\" (UID: \"bdd7a6a7-486e-4736-af7e-70f16394252f\") " Jan 29 07:05:26 crc kubenswrapper[4826]: I0129 07:05:26.712733 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdd7a6a7-486e-4736-af7e-70f16394252f-logs\") pod \"bdd7a6a7-486e-4736-af7e-70f16394252f\" (UID: \"bdd7a6a7-486e-4736-af7e-70f16394252f\") " Jan 29 07:05:26 crc kubenswrapper[4826]: I0129 07:05:26.712786 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hxkh\" (UniqueName: \"kubernetes.io/projected/bdd7a6a7-486e-4736-af7e-70f16394252f-kube-api-access-7hxkh\") pod \"bdd7a6a7-486e-4736-af7e-70f16394252f\" (UID: \"bdd7a6a7-486e-4736-af7e-70f16394252f\") " Jan 29 07:05:26 crc kubenswrapper[4826]: I0129 07:05:26.712804 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdd7a6a7-486e-4736-af7e-70f16394252f-combined-ca-bundle\") pod \"bdd7a6a7-486e-4736-af7e-70f16394252f\" (UID: \"bdd7a6a7-486e-4736-af7e-70f16394252f\") " Jan 29 07:05:26 crc kubenswrapper[4826]: I0129 07:05:26.716094 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdd7a6a7-486e-4736-af7e-70f16394252f-logs" (OuterVolumeSpecName: "logs") pod "bdd7a6a7-486e-4736-af7e-70f16394252f" (UID: "bdd7a6a7-486e-4736-af7e-70f16394252f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:05:26 crc kubenswrapper[4826]: I0129 07:05:26.720102 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdd7a6a7-486e-4736-af7e-70f16394252f-kube-api-access-7hxkh" (OuterVolumeSpecName: "kube-api-access-7hxkh") pod "bdd7a6a7-486e-4736-af7e-70f16394252f" (UID: "bdd7a6a7-486e-4736-af7e-70f16394252f"). InnerVolumeSpecName "kube-api-access-7hxkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:05:26 crc kubenswrapper[4826]: I0129 07:05:26.738725 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-sb2qx"] Jan 29 07:05:26 crc kubenswrapper[4826]: E0129 07:05:26.739233 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdd7a6a7-486e-4736-af7e-70f16394252f" containerName="nova-api-log" Jan 29 07:05:26 crc kubenswrapper[4826]: I0129 07:05:26.739257 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdd7a6a7-486e-4736-af7e-70f16394252f" containerName="nova-api-log" Jan 29 07:05:26 crc kubenswrapper[4826]: E0129 07:05:26.739274 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdd7a6a7-486e-4736-af7e-70f16394252f" containerName="nova-api-api" Jan 29 07:05:26 crc kubenswrapper[4826]: I0129 07:05:26.739281 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdd7a6a7-486e-4736-af7e-70f16394252f" containerName="nova-api-api" Jan 29 07:05:26 crc kubenswrapper[4826]: I0129 07:05:26.739498 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdd7a6a7-486e-4736-af7e-70f16394252f" containerName="nova-api-log" Jan 29 07:05:26 crc kubenswrapper[4826]: I0129 07:05:26.739512 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdd7a6a7-486e-4736-af7e-70f16394252f" containerName="nova-api-api" Jan 29 07:05:26 crc kubenswrapper[4826]: I0129 07:05:26.740234 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sb2qx" Jan 29 07:05:26 crc kubenswrapper[4826]: I0129 07:05:26.749573 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-sb2qx"] Jan 29 07:05:26 crc kubenswrapper[4826]: I0129 07:05:26.753649 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 29 07:05:26 crc kubenswrapper[4826]: I0129 07:05:26.753942 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 29 07:05:26 crc kubenswrapper[4826]: I0129 07:05:26.757857 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdd7a6a7-486e-4736-af7e-70f16394252f-config-data" (OuterVolumeSpecName: "config-data") pod "bdd7a6a7-486e-4736-af7e-70f16394252f" (UID: "bdd7a6a7-486e-4736-af7e-70f16394252f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:05:26 crc kubenswrapper[4826]: I0129 07:05:26.770367 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdd7a6a7-486e-4736-af7e-70f16394252f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bdd7a6a7-486e-4736-af7e-70f16394252f" (UID: "bdd7a6a7-486e-4736-af7e-70f16394252f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:05:26 crc kubenswrapper[4826]: I0129 07:05:26.816040 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2950980d-48bd-4b72-964c-14e8d11d5b77-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-sb2qx\" (UID: \"2950980d-48bd-4b72-964c-14e8d11d5b77\") " pod="openstack/nova-cell1-cell-mapping-sb2qx" Jan 29 07:05:26 crc kubenswrapper[4826]: I0129 07:05:26.816949 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2950980d-48bd-4b72-964c-14e8d11d5b77-scripts\") pod \"nova-cell1-cell-mapping-sb2qx\" (UID: \"2950980d-48bd-4b72-964c-14e8d11d5b77\") " pod="openstack/nova-cell1-cell-mapping-sb2qx" Jan 29 07:05:26 crc kubenswrapper[4826]: I0129 07:05:26.817022 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99xq6\" (UniqueName: \"kubernetes.io/projected/2950980d-48bd-4b72-964c-14e8d11d5b77-kube-api-access-99xq6\") pod \"nova-cell1-cell-mapping-sb2qx\" (UID: \"2950980d-48bd-4b72-964c-14e8d11d5b77\") " pod="openstack/nova-cell1-cell-mapping-sb2qx" Jan 29 07:05:26 crc kubenswrapper[4826]: I0129 07:05:26.817104 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2950980d-48bd-4b72-964c-14e8d11d5b77-config-data\") pod \"nova-cell1-cell-mapping-sb2qx\" (UID: \"2950980d-48bd-4b72-964c-14e8d11d5b77\") " pod="openstack/nova-cell1-cell-mapping-sb2qx" Jan 29 07:05:26 crc kubenswrapper[4826]: I0129 07:05:26.817182 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdd7a6a7-486e-4736-af7e-70f16394252f-logs\") on node \"crc\" DevicePath \"\"" Jan 29 07:05:26 crc kubenswrapper[4826]: I0129 07:05:26.817194 
4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hxkh\" (UniqueName: \"kubernetes.io/projected/bdd7a6a7-486e-4736-af7e-70f16394252f-kube-api-access-7hxkh\") on node \"crc\" DevicePath \"\"" Jan 29 07:05:26 crc kubenswrapper[4826]: I0129 07:05:26.817203 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdd7a6a7-486e-4736-af7e-70f16394252f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:05:26 crc kubenswrapper[4826]: I0129 07:05:26.817212 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdd7a6a7-486e-4736-af7e-70f16394252f-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:05:26 crc kubenswrapper[4826]: I0129 07:05:26.918722 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2950980d-48bd-4b72-964c-14e8d11d5b77-scripts\") pod \"nova-cell1-cell-mapping-sb2qx\" (UID: \"2950980d-48bd-4b72-964c-14e8d11d5b77\") " pod="openstack/nova-cell1-cell-mapping-sb2qx" Jan 29 07:05:26 crc kubenswrapper[4826]: I0129 07:05:26.918819 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99xq6\" (UniqueName: \"kubernetes.io/projected/2950980d-48bd-4b72-964c-14e8d11d5b77-kube-api-access-99xq6\") pod \"nova-cell1-cell-mapping-sb2qx\" (UID: \"2950980d-48bd-4b72-964c-14e8d11d5b77\") " pod="openstack/nova-cell1-cell-mapping-sb2qx" Jan 29 07:05:26 crc kubenswrapper[4826]: I0129 07:05:26.918920 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2950980d-48bd-4b72-964c-14e8d11d5b77-config-data\") pod \"nova-cell1-cell-mapping-sb2qx\" (UID: \"2950980d-48bd-4b72-964c-14e8d11d5b77\") " pod="openstack/nova-cell1-cell-mapping-sb2qx" Jan 29 07:05:26 crc kubenswrapper[4826]: I0129 07:05:26.919002 4826 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2950980d-48bd-4b72-964c-14e8d11d5b77-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-sb2qx\" (UID: \"2950980d-48bd-4b72-964c-14e8d11d5b77\") " pod="openstack/nova-cell1-cell-mapping-sb2qx" Jan 29 07:05:26 crc kubenswrapper[4826]: I0129 07:05:26.922917 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2950980d-48bd-4b72-964c-14e8d11d5b77-scripts\") pod \"nova-cell1-cell-mapping-sb2qx\" (UID: \"2950980d-48bd-4b72-964c-14e8d11d5b77\") " pod="openstack/nova-cell1-cell-mapping-sb2qx" Jan 29 07:05:26 crc kubenswrapper[4826]: I0129 07:05:26.923248 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2950980d-48bd-4b72-964c-14e8d11d5b77-config-data\") pod \"nova-cell1-cell-mapping-sb2qx\" (UID: \"2950980d-48bd-4b72-964c-14e8d11d5b77\") " pod="openstack/nova-cell1-cell-mapping-sb2qx" Jan 29 07:05:26 crc kubenswrapper[4826]: I0129 07:05:26.925285 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2950980d-48bd-4b72-964c-14e8d11d5b77-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-sb2qx\" (UID: \"2950980d-48bd-4b72-964c-14e8d11d5b77\") " pod="openstack/nova-cell1-cell-mapping-sb2qx" Jan 29 07:05:26 crc kubenswrapper[4826]: I0129 07:05:26.933774 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99xq6\" (UniqueName: \"kubernetes.io/projected/2950980d-48bd-4b72-964c-14e8d11d5b77-kube-api-access-99xq6\") pod \"nova-cell1-cell-mapping-sb2qx\" (UID: \"2950980d-48bd-4b72-964c-14e8d11d5b77\") " pod="openstack/nova-cell1-cell-mapping-sb2qx" Jan 29 07:05:27 crc kubenswrapper[4826]: I0129 07:05:27.080742 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sb2qx" Jan 29 07:05:27 crc kubenswrapper[4826]: I0129 07:05:27.553391 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 07:05:27 crc kubenswrapper[4826]: I0129 07:05:27.554689 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a4c7089-bd03-4d18-a40e-860c17aa25f3","Type":"ContainerStarted","Data":"36b94163bc4e28fe7f9dde6f49999ac2cff4676d3ee8f35bc3f0a3109fa9bf5e"} Jan 29 07:05:27 crc kubenswrapper[4826]: I0129 07:05:27.582105 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 07:05:27 crc kubenswrapper[4826]: I0129 07:05:27.604271 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 07:05:27 crc kubenswrapper[4826]: I0129 07:05:27.616635 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-sb2qx"] Jan 29 07:05:27 crc kubenswrapper[4826]: W0129 07:05:27.619925 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2950980d_48bd_4b72_964c_14e8d11d5b77.slice/crio-1b8323b8240a2d63e98999b13f571e872d51b526cdb159f3e63d51016b43e485 WatchSource:0}: Error finding container 1b8323b8240a2d63e98999b13f571e872d51b526cdb159f3e63d51016b43e485: Status 404 returned error can't find the container with id 1b8323b8240a2d63e98999b13f571e872d51b526cdb159f3e63d51016b43e485 Jan 29 07:05:27 crc kubenswrapper[4826]: I0129 07:05:27.626204 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 07:05:27 crc kubenswrapper[4826]: I0129 07:05:27.627763 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 07:05:27 crc kubenswrapper[4826]: I0129 07:05:27.633602 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 29 07:05:27 crc kubenswrapper[4826]: I0129 07:05:27.633802 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 29 07:05:27 crc kubenswrapper[4826]: I0129 07:05:27.633959 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 07:05:27 crc kubenswrapper[4826]: I0129 07:05:27.643257 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 07:05:27 crc kubenswrapper[4826]: I0129 07:05:27.736075 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nv46\" (UniqueName: \"kubernetes.io/projected/02d44a2c-0465-48d4-96a0-e248904d3213-kube-api-access-4nv46\") pod \"nova-api-0\" (UID: \"02d44a2c-0465-48d4-96a0-e248904d3213\") " pod="openstack/nova-api-0" Jan 29 07:05:27 crc kubenswrapper[4826]: I0129 07:05:27.736172 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02d44a2c-0465-48d4-96a0-e248904d3213-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"02d44a2c-0465-48d4-96a0-e248904d3213\") " pod="openstack/nova-api-0" Jan 29 07:05:27 crc kubenswrapper[4826]: I0129 07:05:27.736642 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02d44a2c-0465-48d4-96a0-e248904d3213-logs\") pod \"nova-api-0\" (UID: \"02d44a2c-0465-48d4-96a0-e248904d3213\") " pod="openstack/nova-api-0" Jan 29 07:05:27 crc kubenswrapper[4826]: I0129 07:05:27.736692 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/02d44a2c-0465-48d4-96a0-e248904d3213-internal-tls-certs\") pod \"nova-api-0\" (UID: \"02d44a2c-0465-48d4-96a0-e248904d3213\") " pod="openstack/nova-api-0" Jan 29 07:05:27 crc kubenswrapper[4826]: I0129 07:05:27.736748 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/02d44a2c-0465-48d4-96a0-e248904d3213-public-tls-certs\") pod \"nova-api-0\" (UID: \"02d44a2c-0465-48d4-96a0-e248904d3213\") " pod="openstack/nova-api-0" Jan 29 07:05:27 crc kubenswrapper[4826]: I0129 07:05:27.736802 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02d44a2c-0465-48d4-96a0-e248904d3213-config-data\") pod \"nova-api-0\" (UID: \"02d44a2c-0465-48d4-96a0-e248904d3213\") " pod="openstack/nova-api-0" Jan 29 07:05:27 crc kubenswrapper[4826]: I0129 07:05:27.839755 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nv46\" (UniqueName: \"kubernetes.io/projected/02d44a2c-0465-48d4-96a0-e248904d3213-kube-api-access-4nv46\") pod \"nova-api-0\" (UID: \"02d44a2c-0465-48d4-96a0-e248904d3213\") " pod="openstack/nova-api-0" Jan 29 07:05:27 crc kubenswrapper[4826]: I0129 07:05:27.839841 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02d44a2c-0465-48d4-96a0-e248904d3213-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"02d44a2c-0465-48d4-96a0-e248904d3213\") " pod="openstack/nova-api-0" Jan 29 07:05:27 crc kubenswrapper[4826]: I0129 07:05:27.839902 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02d44a2c-0465-48d4-96a0-e248904d3213-logs\") pod \"nova-api-0\" (UID: \"02d44a2c-0465-48d4-96a0-e248904d3213\") " pod="openstack/nova-api-0" Jan 29 
07:05:27 crc kubenswrapper[4826]: I0129 07:05:27.839933 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/02d44a2c-0465-48d4-96a0-e248904d3213-internal-tls-certs\") pod \"nova-api-0\" (UID: \"02d44a2c-0465-48d4-96a0-e248904d3213\") " pod="openstack/nova-api-0" Jan 29 07:05:27 crc kubenswrapper[4826]: I0129 07:05:27.839958 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/02d44a2c-0465-48d4-96a0-e248904d3213-public-tls-certs\") pod \"nova-api-0\" (UID: \"02d44a2c-0465-48d4-96a0-e248904d3213\") " pod="openstack/nova-api-0" Jan 29 07:05:27 crc kubenswrapper[4826]: I0129 07:05:27.839995 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02d44a2c-0465-48d4-96a0-e248904d3213-config-data\") pod \"nova-api-0\" (UID: \"02d44a2c-0465-48d4-96a0-e248904d3213\") " pod="openstack/nova-api-0" Jan 29 07:05:27 crc kubenswrapper[4826]: I0129 07:05:27.840502 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02d44a2c-0465-48d4-96a0-e248904d3213-logs\") pod \"nova-api-0\" (UID: \"02d44a2c-0465-48d4-96a0-e248904d3213\") " pod="openstack/nova-api-0" Jan 29 07:05:27 crc kubenswrapper[4826]: I0129 07:05:27.845837 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/02d44a2c-0465-48d4-96a0-e248904d3213-public-tls-certs\") pod \"nova-api-0\" (UID: \"02d44a2c-0465-48d4-96a0-e248904d3213\") " pod="openstack/nova-api-0" Jan 29 07:05:27 crc kubenswrapper[4826]: I0129 07:05:27.845910 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02d44a2c-0465-48d4-96a0-e248904d3213-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"02d44a2c-0465-48d4-96a0-e248904d3213\") " pod="openstack/nova-api-0" Jan 29 07:05:27 crc kubenswrapper[4826]: I0129 07:05:27.846359 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02d44a2c-0465-48d4-96a0-e248904d3213-config-data\") pod \"nova-api-0\" (UID: \"02d44a2c-0465-48d4-96a0-e248904d3213\") " pod="openstack/nova-api-0" Jan 29 07:05:27 crc kubenswrapper[4826]: I0129 07:05:27.849975 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/02d44a2c-0465-48d4-96a0-e248904d3213-internal-tls-certs\") pod \"nova-api-0\" (UID: \"02d44a2c-0465-48d4-96a0-e248904d3213\") " pod="openstack/nova-api-0" Jan 29 07:05:27 crc kubenswrapper[4826]: I0129 07:05:27.858234 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nv46\" (UniqueName: \"kubernetes.io/projected/02d44a2c-0465-48d4-96a0-e248904d3213-kube-api-access-4nv46\") pod \"nova-api-0\" (UID: \"02d44a2c-0465-48d4-96a0-e248904d3213\") " pod="openstack/nova-api-0" Jan 29 07:05:27 crc kubenswrapper[4826]: I0129 07:05:27.987817 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 07:05:28 crc kubenswrapper[4826]: W0129 07:05:28.541866 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02d44a2c_0465_48d4_96a0_e248904d3213.slice/crio-5df61b41361edf81ff23b5a55a94a9be818d468a21b4dd7e9da2a738010ded17 WatchSource:0}: Error finding container 5df61b41361edf81ff23b5a55a94a9be818d468a21b4dd7e9da2a738010ded17: Status 404 returned error can't find the container with id 5df61b41361edf81ff23b5a55a94a9be818d468a21b4dd7e9da2a738010ded17 Jan 29 07:05:28 crc kubenswrapper[4826]: I0129 07:05:28.545493 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 07:05:28 crc kubenswrapper[4826]: I0129 07:05:28.580551 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a4c7089-bd03-4d18-a40e-860c17aa25f3","Type":"ContainerStarted","Data":"9bdf5a89569d2863d0660f30de13222c5965f6915e71cd43dadb6eb558e0ca87"} Jan 29 07:05:28 crc kubenswrapper[4826]: I0129 07:05:28.581608 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"02d44a2c-0465-48d4-96a0-e248904d3213","Type":"ContainerStarted","Data":"5df61b41361edf81ff23b5a55a94a9be818d468a21b4dd7e9da2a738010ded17"} Jan 29 07:05:28 crc kubenswrapper[4826]: I0129 07:05:28.583273 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sb2qx" event={"ID":"2950980d-48bd-4b72-964c-14e8d11d5b77","Type":"ContainerStarted","Data":"c2c81f5f8792fc508f613a497ea4d651ee5e12ec7d124a7e01884a02455c9ffc"} Jan 29 07:05:28 crc kubenswrapper[4826]: I0129 07:05:28.583315 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sb2qx" event={"ID":"2950980d-48bd-4b72-964c-14e8d11d5b77","Type":"ContainerStarted","Data":"1b8323b8240a2d63e98999b13f571e872d51b526cdb159f3e63d51016b43e485"} Jan 29 07:05:28 crc 
kubenswrapper[4826]: I0129 07:05:28.614374 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-sb2qx" podStartSLOduration=2.614353315 podStartE2EDuration="2.614353315s" podCreationTimestamp="2026-01-29 07:05:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:05:28.602273617 +0000 UTC m=+1312.464066686" watchObservedRunningTime="2026-01-29 07:05:28.614353315 +0000 UTC m=+1312.476146384" Jan 29 07:05:28 crc kubenswrapper[4826]: I0129 07:05:28.820130 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdd7a6a7-486e-4736-af7e-70f16394252f" path="/var/lib/kubelet/pods/bdd7a6a7-486e-4736-af7e-70f16394252f/volumes" Jan 29 07:05:29 crc kubenswrapper[4826]: I0129 07:05:29.593883 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"02d44a2c-0465-48d4-96a0-e248904d3213","Type":"ContainerStarted","Data":"d238b8c0e32fea300b228a709635986145f0215dc574d2f201145f87abc62869"} Jan 29 07:05:29 crc kubenswrapper[4826]: I0129 07:05:29.594200 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"02d44a2c-0465-48d4-96a0-e248904d3213","Type":"ContainerStarted","Data":"9f2a5c75ecac54e98c282cadd3b308623c6203378cd4010e5c37c28ffec8c902"} Jan 29 07:05:29 crc kubenswrapper[4826]: I0129 07:05:29.615125 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.615107006 podStartE2EDuration="2.615107006s" podCreationTimestamp="2026-01-29 07:05:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:05:29.610406232 +0000 UTC m=+1313.472199311" watchObservedRunningTime="2026-01-29 07:05:29.615107006 +0000 UTC m=+1313.476900075" Jan 29 07:05:30 crc kubenswrapper[4826]: I0129 
07:05:30.131495 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5ddd577785-9wnpk" Jan 29 07:05:30 crc kubenswrapper[4826]: I0129 07:05:30.206315 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-ptc9r"] Jan 29 07:05:30 crc kubenswrapper[4826]: I0129 07:05:30.206559 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-557bbc7df7-ptc9r" podUID="64dc8206-670d-44be-887b-48378dbf5a30" containerName="dnsmasq-dns" containerID="cri-o://4d01dedb9003a15f270c28f7b19b8fe2be70b4ca3372d8950cfac8e113b150a4" gracePeriod=10 Jan 29 07:05:30 crc kubenswrapper[4826]: I0129 07:05:30.625488 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a4c7089-bd03-4d18-a40e-860c17aa25f3","Type":"ContainerStarted","Data":"102be978d41fbf60cf28c89c51b4cbd5e796e3f4a34163101d28010a02fe7cb1"} Jan 29 07:05:30 crc kubenswrapper[4826]: I0129 07:05:30.625920 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8a4c7089-bd03-4d18-a40e-860c17aa25f3" containerName="ceilometer-central-agent" containerID="cri-o://55728658f30579f410bc355a3e3d03547c9ff3e7d15db0b68215cf4f777001fc" gracePeriod=30 Jan 29 07:05:30 crc kubenswrapper[4826]: I0129 07:05:30.626175 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 07:05:30 crc kubenswrapper[4826]: I0129 07:05:30.626444 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8a4c7089-bd03-4d18-a40e-860c17aa25f3" containerName="proxy-httpd" containerID="cri-o://102be978d41fbf60cf28c89c51b4cbd5e796e3f4a34163101d28010a02fe7cb1" gracePeriod=30 Jan 29 07:05:30 crc kubenswrapper[4826]: I0129 07:05:30.626488 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="8a4c7089-bd03-4d18-a40e-860c17aa25f3" containerName="sg-core" containerID="cri-o://9bdf5a89569d2863d0660f30de13222c5965f6915e71cd43dadb6eb558e0ca87" gracePeriod=30 Jan 29 07:05:30 crc kubenswrapper[4826]: I0129 07:05:30.626524 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8a4c7089-bd03-4d18-a40e-860c17aa25f3" containerName="ceilometer-notification-agent" containerID="cri-o://36b94163bc4e28fe7f9dde6f49999ac2cff4676d3ee8f35bc3f0a3109fa9bf5e" gracePeriod=30 Jan 29 07:05:30 crc kubenswrapper[4826]: I0129 07:05:30.638486 4826 generic.go:334] "Generic (PLEG): container finished" podID="64dc8206-670d-44be-887b-48378dbf5a30" containerID="4d01dedb9003a15f270c28f7b19b8fe2be70b4ca3372d8950cfac8e113b150a4" exitCode=0 Jan 29 07:05:30 crc kubenswrapper[4826]: I0129 07:05:30.639406 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-ptc9r" event={"ID":"64dc8206-670d-44be-887b-48378dbf5a30","Type":"ContainerDied","Data":"4d01dedb9003a15f270c28f7b19b8fe2be70b4ca3372d8950cfac8e113b150a4"} Jan 29 07:05:30 crc kubenswrapper[4826]: I0129 07:05:30.651444 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.023815649 podStartE2EDuration="6.651425973s" podCreationTimestamp="2026-01-29 07:05:24 +0000 UTC" firstStartedPulling="2026-01-29 07:05:25.479570474 +0000 UTC m=+1309.341363573" lastFinishedPulling="2026-01-29 07:05:30.107180818 +0000 UTC m=+1313.968973897" observedRunningTime="2026-01-29 07:05:30.64370273 +0000 UTC m=+1314.505495799" watchObservedRunningTime="2026-01-29 07:05:30.651425973 +0000 UTC m=+1314.513219042" Jan 29 07:05:30 crc kubenswrapper[4826]: I0129 07:05:30.730167 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-ptc9r" Jan 29 07:05:30 crc kubenswrapper[4826]: I0129 07:05:30.796839 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64dc8206-670d-44be-887b-48378dbf5a30-dns-svc\") pod \"64dc8206-670d-44be-887b-48378dbf5a30\" (UID: \"64dc8206-670d-44be-887b-48378dbf5a30\") " Jan 29 07:05:30 crc kubenswrapper[4826]: I0129 07:05:30.796919 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fb4c9\" (UniqueName: \"kubernetes.io/projected/64dc8206-670d-44be-887b-48378dbf5a30-kube-api-access-fb4c9\") pod \"64dc8206-670d-44be-887b-48378dbf5a30\" (UID: \"64dc8206-670d-44be-887b-48378dbf5a30\") " Jan 29 07:05:30 crc kubenswrapper[4826]: I0129 07:05:30.796942 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64dc8206-670d-44be-887b-48378dbf5a30-ovsdbserver-nb\") pod \"64dc8206-670d-44be-887b-48378dbf5a30\" (UID: \"64dc8206-670d-44be-887b-48378dbf5a30\") " Jan 29 07:05:30 crc kubenswrapper[4826]: I0129 07:05:30.797050 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64dc8206-670d-44be-887b-48378dbf5a30-dns-swift-storage-0\") pod \"64dc8206-670d-44be-887b-48378dbf5a30\" (UID: \"64dc8206-670d-44be-887b-48378dbf5a30\") " Jan 29 07:05:30 crc kubenswrapper[4826]: I0129 07:05:30.797120 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64dc8206-670d-44be-887b-48378dbf5a30-ovsdbserver-sb\") pod \"64dc8206-670d-44be-887b-48378dbf5a30\" (UID: \"64dc8206-670d-44be-887b-48378dbf5a30\") " Jan 29 07:05:30 crc kubenswrapper[4826]: I0129 07:05:30.797157 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/64dc8206-670d-44be-887b-48378dbf5a30-config\") pod \"64dc8206-670d-44be-887b-48378dbf5a30\" (UID: \"64dc8206-670d-44be-887b-48378dbf5a30\") " Jan 29 07:05:30 crc kubenswrapper[4826]: I0129 07:05:30.802769 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64dc8206-670d-44be-887b-48378dbf5a30-kube-api-access-fb4c9" (OuterVolumeSpecName: "kube-api-access-fb4c9") pod "64dc8206-670d-44be-887b-48378dbf5a30" (UID: "64dc8206-670d-44be-887b-48378dbf5a30"). InnerVolumeSpecName "kube-api-access-fb4c9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:05:30 crc kubenswrapper[4826]: I0129 07:05:30.863223 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64dc8206-670d-44be-887b-48378dbf5a30-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "64dc8206-670d-44be-887b-48378dbf5a30" (UID: "64dc8206-670d-44be-887b-48378dbf5a30"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:05:30 crc kubenswrapper[4826]: I0129 07:05:30.864589 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64dc8206-670d-44be-887b-48378dbf5a30-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "64dc8206-670d-44be-887b-48378dbf5a30" (UID: "64dc8206-670d-44be-887b-48378dbf5a30"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:05:30 crc kubenswrapper[4826]: I0129 07:05:30.866089 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64dc8206-670d-44be-887b-48378dbf5a30-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "64dc8206-670d-44be-887b-48378dbf5a30" (UID: "64dc8206-670d-44be-887b-48378dbf5a30"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:05:30 crc kubenswrapper[4826]: I0129 07:05:30.875512 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64dc8206-670d-44be-887b-48378dbf5a30-config" (OuterVolumeSpecName: "config") pod "64dc8206-670d-44be-887b-48378dbf5a30" (UID: "64dc8206-670d-44be-887b-48378dbf5a30"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:05:30 crc kubenswrapper[4826]: I0129 07:05:30.880735 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64dc8206-670d-44be-887b-48378dbf5a30-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "64dc8206-670d-44be-887b-48378dbf5a30" (UID: "64dc8206-670d-44be-887b-48378dbf5a30"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:05:30 crc kubenswrapper[4826]: I0129 07:05:30.900664 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64dc8206-670d-44be-887b-48378dbf5a30-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 07:05:30 crc kubenswrapper[4826]: I0129 07:05:30.900705 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64dc8206-670d-44be-887b-48378dbf5a30-config\") on node \"crc\" DevicePath \"\"" Jan 29 07:05:30 crc kubenswrapper[4826]: I0129 07:05:30.900715 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64dc8206-670d-44be-887b-48378dbf5a30-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 07:05:30 crc kubenswrapper[4826]: I0129 07:05:30.900723 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fb4c9\" (UniqueName: \"kubernetes.io/projected/64dc8206-670d-44be-887b-48378dbf5a30-kube-api-access-fb4c9\") on node \"crc\" DevicePath \"\"" Jan 29 07:05:30 crc 
kubenswrapper[4826]: I0129 07:05:30.900734 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64dc8206-670d-44be-887b-48378dbf5a30-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 07:05:30 crc kubenswrapper[4826]: I0129 07:05:30.900742 4826 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64dc8206-670d-44be-887b-48378dbf5a30-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 07:05:31 crc kubenswrapper[4826]: I0129 07:05:31.649328 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-ptc9r" event={"ID":"64dc8206-670d-44be-887b-48378dbf5a30","Type":"ContainerDied","Data":"ea587ee6d4fc68a79aa6cbdc996c8c1ba1acb034c159a6a3ae91107437608c11"} Jan 29 07:05:31 crc kubenswrapper[4826]: I0129 07:05:31.649367 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-ptc9r" Jan 29 07:05:31 crc kubenswrapper[4826]: I0129 07:05:31.649373 4826 scope.go:117] "RemoveContainer" containerID="4d01dedb9003a15f270c28f7b19b8fe2be70b4ca3372d8950cfac8e113b150a4" Jan 29 07:05:31 crc kubenswrapper[4826]: I0129 07:05:31.659950 4826 generic.go:334] "Generic (PLEG): container finished" podID="8a4c7089-bd03-4d18-a40e-860c17aa25f3" containerID="102be978d41fbf60cf28c89c51b4cbd5e796e3f4a34163101d28010a02fe7cb1" exitCode=0 Jan 29 07:05:31 crc kubenswrapper[4826]: I0129 07:05:31.659977 4826 generic.go:334] "Generic (PLEG): container finished" podID="8a4c7089-bd03-4d18-a40e-860c17aa25f3" containerID="9bdf5a89569d2863d0660f30de13222c5965f6915e71cd43dadb6eb558e0ca87" exitCode=2 Jan 29 07:05:31 crc kubenswrapper[4826]: I0129 07:05:31.659984 4826 generic.go:334] "Generic (PLEG): container finished" podID="8a4c7089-bd03-4d18-a40e-860c17aa25f3" containerID="36b94163bc4e28fe7f9dde6f49999ac2cff4676d3ee8f35bc3f0a3109fa9bf5e" exitCode=0 Jan 29 07:05:31 crc 
kubenswrapper[4826]: I0129 07:05:31.660004 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a4c7089-bd03-4d18-a40e-860c17aa25f3","Type":"ContainerDied","Data":"102be978d41fbf60cf28c89c51b4cbd5e796e3f4a34163101d28010a02fe7cb1"} Jan 29 07:05:31 crc kubenswrapper[4826]: I0129 07:05:31.660028 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a4c7089-bd03-4d18-a40e-860c17aa25f3","Type":"ContainerDied","Data":"9bdf5a89569d2863d0660f30de13222c5965f6915e71cd43dadb6eb558e0ca87"} Jan 29 07:05:31 crc kubenswrapper[4826]: I0129 07:05:31.660039 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a4c7089-bd03-4d18-a40e-860c17aa25f3","Type":"ContainerDied","Data":"36b94163bc4e28fe7f9dde6f49999ac2cff4676d3ee8f35bc3f0a3109fa9bf5e"} Jan 29 07:05:31 crc kubenswrapper[4826]: I0129 07:05:31.717223 4826 scope.go:117] "RemoveContainer" containerID="7f7c3928cc87fd53ead82bd27079afad9b5240c322abf1aee250b3ba09da9edb" Jan 29 07:05:31 crc kubenswrapper[4826]: I0129 07:05:31.719083 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-ptc9r"] Jan 29 07:05:31 crc kubenswrapper[4826]: I0129 07:05:31.732343 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-ptc9r"] Jan 29 07:05:32 crc kubenswrapper[4826]: I0129 07:05:32.820456 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64dc8206-670d-44be-887b-48378dbf5a30" path="/var/lib/kubelet/pods/64dc8206-670d-44be-887b-48378dbf5a30/volumes" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.507568 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.547267 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69n89\" (UniqueName: \"kubernetes.io/projected/8a4c7089-bd03-4d18-a40e-860c17aa25f3-kube-api-access-69n89\") pod \"8a4c7089-bd03-4d18-a40e-860c17aa25f3\" (UID: \"8a4c7089-bd03-4d18-a40e-860c17aa25f3\") " Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.547375 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a4c7089-bd03-4d18-a40e-860c17aa25f3-run-httpd\") pod \"8a4c7089-bd03-4d18-a40e-860c17aa25f3\" (UID: \"8a4c7089-bd03-4d18-a40e-860c17aa25f3\") " Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.547466 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a4c7089-bd03-4d18-a40e-860c17aa25f3-combined-ca-bundle\") pod \"8a4c7089-bd03-4d18-a40e-860c17aa25f3\" (UID: \"8a4c7089-bd03-4d18-a40e-860c17aa25f3\") " Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.547535 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a4c7089-bd03-4d18-a40e-860c17aa25f3-log-httpd\") pod \"8a4c7089-bd03-4d18-a40e-860c17aa25f3\" (UID: \"8a4c7089-bd03-4d18-a40e-860c17aa25f3\") " Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.547565 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a4c7089-bd03-4d18-a40e-860c17aa25f3-scripts\") pod \"8a4c7089-bd03-4d18-a40e-860c17aa25f3\" (UID: \"8a4c7089-bd03-4d18-a40e-860c17aa25f3\") " Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.547590 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8a4c7089-bd03-4d18-a40e-860c17aa25f3-config-data\") pod \"8a4c7089-bd03-4d18-a40e-860c17aa25f3\" (UID: \"8a4c7089-bd03-4d18-a40e-860c17aa25f3\") " Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.547622 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8a4c7089-bd03-4d18-a40e-860c17aa25f3-sg-core-conf-yaml\") pod \"8a4c7089-bd03-4d18-a40e-860c17aa25f3\" (UID: \"8a4c7089-bd03-4d18-a40e-860c17aa25f3\") " Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.547659 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a4c7089-bd03-4d18-a40e-860c17aa25f3-ceilometer-tls-certs\") pod \"8a4c7089-bd03-4d18-a40e-860c17aa25f3\" (UID: \"8a4c7089-bd03-4d18-a40e-860c17aa25f3\") " Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.548033 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a4c7089-bd03-4d18-a40e-860c17aa25f3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8a4c7089-bd03-4d18-a40e-860c17aa25f3" (UID: "8a4c7089-bd03-4d18-a40e-860c17aa25f3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.548210 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a4c7089-bd03-4d18-a40e-860c17aa25f3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8a4c7089-bd03-4d18-a40e-860c17aa25f3" (UID: "8a4c7089-bd03-4d18-a40e-860c17aa25f3"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.548694 4826 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a4c7089-bd03-4d18-a40e-860c17aa25f3-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.548725 4826 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a4c7089-bd03-4d18-a40e-860c17aa25f3-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.558770 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a4c7089-bd03-4d18-a40e-860c17aa25f3-kube-api-access-69n89" (OuterVolumeSpecName: "kube-api-access-69n89") pod "8a4c7089-bd03-4d18-a40e-860c17aa25f3" (UID: "8a4c7089-bd03-4d18-a40e-860c17aa25f3"). InnerVolumeSpecName "kube-api-access-69n89". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.570392 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a4c7089-bd03-4d18-a40e-860c17aa25f3-scripts" (OuterVolumeSpecName: "scripts") pod "8a4c7089-bd03-4d18-a40e-860c17aa25f3" (UID: "8a4c7089-bd03-4d18-a40e-860c17aa25f3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.589663 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a4c7089-bd03-4d18-a40e-860c17aa25f3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8a4c7089-bd03-4d18-a40e-860c17aa25f3" (UID: "8a4c7089-bd03-4d18-a40e-860c17aa25f3"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.621637 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a4c7089-bd03-4d18-a40e-860c17aa25f3-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "8a4c7089-bd03-4d18-a40e-860c17aa25f3" (UID: "8a4c7089-bd03-4d18-a40e-860c17aa25f3"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.644263 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a4c7089-bd03-4d18-a40e-860c17aa25f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a4c7089-bd03-4d18-a40e-860c17aa25f3" (UID: "8a4c7089-bd03-4d18-a40e-860c17aa25f3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.650695 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69n89\" (UniqueName: \"kubernetes.io/projected/8a4c7089-bd03-4d18-a40e-860c17aa25f3-kube-api-access-69n89\") on node \"crc\" DevicePath \"\"" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.650740 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a4c7089-bd03-4d18-a40e-860c17aa25f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.650754 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a4c7089-bd03-4d18-a40e-860c17aa25f3-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.650771 4826 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8a4c7089-bd03-4d18-a40e-860c17aa25f3-sg-core-conf-yaml\") 
on node \"crc\" DevicePath \"\"" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.650782 4826 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a4c7089-bd03-4d18-a40e-860c17aa25f3-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.657845 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a4c7089-bd03-4d18-a40e-860c17aa25f3-config-data" (OuterVolumeSpecName: "config-data") pod "8a4c7089-bd03-4d18-a40e-860c17aa25f3" (UID: "8a4c7089-bd03-4d18-a40e-860c17aa25f3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.679248 4826 generic.go:334] "Generic (PLEG): container finished" podID="8a4c7089-bd03-4d18-a40e-860c17aa25f3" containerID="55728658f30579f410bc355a3e3d03547c9ff3e7d15db0b68215cf4f777001fc" exitCode=0 Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.679407 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a4c7089-bd03-4d18-a40e-860c17aa25f3","Type":"ContainerDied","Data":"55728658f30579f410bc355a3e3d03547c9ff3e7d15db0b68215cf4f777001fc"} Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.679455 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a4c7089-bd03-4d18-a40e-860c17aa25f3","Type":"ContainerDied","Data":"6b1d31a0b2daa801a19cdc022eb5e0f5deb862f9e0e68da0cd86cdb02d421fb8"} Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.679485 4826 scope.go:117] "RemoveContainer" containerID="102be978d41fbf60cf28c89c51b4cbd5e796e3f4a34163101d28010a02fe7cb1" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.679464 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.680903 4826 generic.go:334] "Generic (PLEG): container finished" podID="2950980d-48bd-4b72-964c-14e8d11d5b77" containerID="c2c81f5f8792fc508f613a497ea4d651ee5e12ec7d124a7e01884a02455c9ffc" exitCode=0 Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.680932 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sb2qx" event={"ID":"2950980d-48bd-4b72-964c-14e8d11d5b77","Type":"ContainerDied","Data":"c2c81f5f8792fc508f613a497ea4d651ee5e12ec7d124a7e01884a02455c9ffc"} Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.704140 4826 scope.go:117] "RemoveContainer" containerID="9bdf5a89569d2863d0660f30de13222c5965f6915e71cd43dadb6eb558e0ca87" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.729937 4826 scope.go:117] "RemoveContainer" containerID="36b94163bc4e28fe7f9dde6f49999ac2cff4676d3ee8f35bc3f0a3109fa9bf5e" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.749802 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.759613 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a4c7089-bd03-4d18-a40e-860c17aa25f3-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.761402 4826 scope.go:117] "RemoveContainer" containerID="55728658f30579f410bc355a3e3d03547c9ff3e7d15db0b68215cf4f777001fc" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.773580 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.782127 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:05:33 crc kubenswrapper[4826]: E0129 07:05:33.782572 4826 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8a4c7089-bd03-4d18-a40e-860c17aa25f3" containerName="proxy-httpd" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.782593 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a4c7089-bd03-4d18-a40e-860c17aa25f3" containerName="proxy-httpd" Jan 29 07:05:33 crc kubenswrapper[4826]: E0129 07:05:33.782606 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64dc8206-670d-44be-887b-48378dbf5a30" containerName="init" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.782613 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="64dc8206-670d-44be-887b-48378dbf5a30" containerName="init" Jan 29 07:05:33 crc kubenswrapper[4826]: E0129 07:05:33.782627 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a4c7089-bd03-4d18-a40e-860c17aa25f3" containerName="ceilometer-notification-agent" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.782634 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a4c7089-bd03-4d18-a40e-860c17aa25f3" containerName="ceilometer-notification-agent" Jan 29 07:05:33 crc kubenswrapper[4826]: E0129 07:05:33.782651 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a4c7089-bd03-4d18-a40e-860c17aa25f3" containerName="ceilometer-central-agent" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.782657 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a4c7089-bd03-4d18-a40e-860c17aa25f3" containerName="ceilometer-central-agent" Jan 29 07:05:33 crc kubenswrapper[4826]: E0129 07:05:33.782669 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64dc8206-670d-44be-887b-48378dbf5a30" containerName="dnsmasq-dns" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.782674 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="64dc8206-670d-44be-887b-48378dbf5a30" containerName="dnsmasq-dns" Jan 29 07:05:33 crc kubenswrapper[4826]: E0129 07:05:33.782691 4826 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8a4c7089-bd03-4d18-a40e-860c17aa25f3" containerName="sg-core" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.782696 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a4c7089-bd03-4d18-a40e-860c17aa25f3" containerName="sg-core" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.782870 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a4c7089-bd03-4d18-a40e-860c17aa25f3" containerName="ceilometer-notification-agent" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.782893 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a4c7089-bd03-4d18-a40e-860c17aa25f3" containerName="proxy-httpd" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.782918 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a4c7089-bd03-4d18-a40e-860c17aa25f3" containerName="sg-core" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.782928 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a4c7089-bd03-4d18-a40e-860c17aa25f3" containerName="ceilometer-central-agent" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.782937 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="64dc8206-670d-44be-887b-48378dbf5a30" containerName="dnsmasq-dns" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.784516 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.787880 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.788208 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.788393 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.794013 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.796598 4826 scope.go:117] "RemoveContainer" containerID="102be978d41fbf60cf28c89c51b4cbd5e796e3f4a34163101d28010a02fe7cb1" Jan 29 07:05:33 crc kubenswrapper[4826]: E0129 07:05:33.797060 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"102be978d41fbf60cf28c89c51b4cbd5e796e3f4a34163101d28010a02fe7cb1\": container with ID starting with 102be978d41fbf60cf28c89c51b4cbd5e796e3f4a34163101d28010a02fe7cb1 not found: ID does not exist" containerID="102be978d41fbf60cf28c89c51b4cbd5e796e3f4a34163101d28010a02fe7cb1" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.797105 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"102be978d41fbf60cf28c89c51b4cbd5e796e3f4a34163101d28010a02fe7cb1"} err="failed to get container status \"102be978d41fbf60cf28c89c51b4cbd5e796e3f4a34163101d28010a02fe7cb1\": rpc error: code = NotFound desc = could not find container \"102be978d41fbf60cf28c89c51b4cbd5e796e3f4a34163101d28010a02fe7cb1\": container with ID starting with 102be978d41fbf60cf28c89c51b4cbd5e796e3f4a34163101d28010a02fe7cb1 not found: ID does not exist" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 
07:05:33.797124 4826 scope.go:117] "RemoveContainer" containerID="9bdf5a89569d2863d0660f30de13222c5965f6915e71cd43dadb6eb558e0ca87" Jan 29 07:05:33 crc kubenswrapper[4826]: E0129 07:05:33.797549 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bdf5a89569d2863d0660f30de13222c5965f6915e71cd43dadb6eb558e0ca87\": container with ID starting with 9bdf5a89569d2863d0660f30de13222c5965f6915e71cd43dadb6eb558e0ca87 not found: ID does not exist" containerID="9bdf5a89569d2863d0660f30de13222c5965f6915e71cd43dadb6eb558e0ca87" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.798002 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bdf5a89569d2863d0660f30de13222c5965f6915e71cd43dadb6eb558e0ca87"} err="failed to get container status \"9bdf5a89569d2863d0660f30de13222c5965f6915e71cd43dadb6eb558e0ca87\": rpc error: code = NotFound desc = could not find container \"9bdf5a89569d2863d0660f30de13222c5965f6915e71cd43dadb6eb558e0ca87\": container with ID starting with 9bdf5a89569d2863d0660f30de13222c5965f6915e71cd43dadb6eb558e0ca87 not found: ID does not exist" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.798043 4826 scope.go:117] "RemoveContainer" containerID="36b94163bc4e28fe7f9dde6f49999ac2cff4676d3ee8f35bc3f0a3109fa9bf5e" Jan 29 07:05:33 crc kubenswrapper[4826]: E0129 07:05:33.798383 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36b94163bc4e28fe7f9dde6f49999ac2cff4676d3ee8f35bc3f0a3109fa9bf5e\": container with ID starting with 36b94163bc4e28fe7f9dde6f49999ac2cff4676d3ee8f35bc3f0a3109fa9bf5e not found: ID does not exist" containerID="36b94163bc4e28fe7f9dde6f49999ac2cff4676d3ee8f35bc3f0a3109fa9bf5e" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.798413 4826 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"36b94163bc4e28fe7f9dde6f49999ac2cff4676d3ee8f35bc3f0a3109fa9bf5e"} err="failed to get container status \"36b94163bc4e28fe7f9dde6f49999ac2cff4676d3ee8f35bc3f0a3109fa9bf5e\": rpc error: code = NotFound desc = could not find container \"36b94163bc4e28fe7f9dde6f49999ac2cff4676d3ee8f35bc3f0a3109fa9bf5e\": container with ID starting with 36b94163bc4e28fe7f9dde6f49999ac2cff4676d3ee8f35bc3f0a3109fa9bf5e not found: ID does not exist" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.798429 4826 scope.go:117] "RemoveContainer" containerID="55728658f30579f410bc355a3e3d03547c9ff3e7d15db0b68215cf4f777001fc" Jan 29 07:05:33 crc kubenswrapper[4826]: E0129 07:05:33.799089 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55728658f30579f410bc355a3e3d03547c9ff3e7d15db0b68215cf4f777001fc\": container with ID starting with 55728658f30579f410bc355a3e3d03547c9ff3e7d15db0b68215cf4f777001fc not found: ID does not exist" containerID="55728658f30579f410bc355a3e3d03547c9ff3e7d15db0b68215cf4f777001fc" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.799115 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55728658f30579f410bc355a3e3d03547c9ff3e7d15db0b68215cf4f777001fc"} err="failed to get container status \"55728658f30579f410bc355a3e3d03547c9ff3e7d15db0b68215cf4f777001fc\": rpc error: code = NotFound desc = could not find container \"55728658f30579f410bc355a3e3d03547c9ff3e7d15db0b68215cf4f777001fc\": container with ID starting with 55728658f30579f410bc355a3e3d03547c9ff3e7d15db0b68215cf4f777001fc not found: ID does not exist" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.861261 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34f59971-b32b-4b19-950c-77af3de22fd6-run-httpd\") pod \"ceilometer-0\" (UID: 
\"34f59971-b32b-4b19-950c-77af3de22fd6\") " pod="openstack/ceilometer-0" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.861335 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34f59971-b32b-4b19-950c-77af3de22fd6-config-data\") pod \"ceilometer-0\" (UID: \"34f59971-b32b-4b19-950c-77af3de22fd6\") " pod="openstack/ceilometer-0" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.861377 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk5kl\" (UniqueName: \"kubernetes.io/projected/34f59971-b32b-4b19-950c-77af3de22fd6-kube-api-access-qk5kl\") pod \"ceilometer-0\" (UID: \"34f59971-b32b-4b19-950c-77af3de22fd6\") " pod="openstack/ceilometer-0" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.861403 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34f59971-b32b-4b19-950c-77af3de22fd6-log-httpd\") pod \"ceilometer-0\" (UID: \"34f59971-b32b-4b19-950c-77af3de22fd6\") " pod="openstack/ceilometer-0" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.861434 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34f59971-b32b-4b19-950c-77af3de22fd6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"34f59971-b32b-4b19-950c-77af3de22fd6\") " pod="openstack/ceilometer-0" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.861454 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/34f59971-b32b-4b19-950c-77af3de22fd6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"34f59971-b32b-4b19-950c-77af3de22fd6\") " pod="openstack/ceilometer-0" Jan 29 07:05:33 crc kubenswrapper[4826]: 
I0129 07:05:33.861492 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34f59971-b32b-4b19-950c-77af3de22fd6-scripts\") pod \"ceilometer-0\" (UID: \"34f59971-b32b-4b19-950c-77af3de22fd6\") " pod="openstack/ceilometer-0" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.861508 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/34f59971-b32b-4b19-950c-77af3de22fd6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"34f59971-b32b-4b19-950c-77af3de22fd6\") " pod="openstack/ceilometer-0" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.963271 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34f59971-b32b-4b19-950c-77af3de22fd6-run-httpd\") pod \"ceilometer-0\" (UID: \"34f59971-b32b-4b19-950c-77af3de22fd6\") " pod="openstack/ceilometer-0" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.963346 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34f59971-b32b-4b19-950c-77af3de22fd6-config-data\") pod \"ceilometer-0\" (UID: \"34f59971-b32b-4b19-950c-77af3de22fd6\") " pod="openstack/ceilometer-0" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.963372 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk5kl\" (UniqueName: \"kubernetes.io/projected/34f59971-b32b-4b19-950c-77af3de22fd6-kube-api-access-qk5kl\") pod \"ceilometer-0\" (UID: \"34f59971-b32b-4b19-950c-77af3de22fd6\") " pod="openstack/ceilometer-0" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.963403 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/34f59971-b32b-4b19-950c-77af3de22fd6-log-httpd\") pod \"ceilometer-0\" (UID: \"34f59971-b32b-4b19-950c-77af3de22fd6\") " pod="openstack/ceilometer-0" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.963433 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34f59971-b32b-4b19-950c-77af3de22fd6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"34f59971-b32b-4b19-950c-77af3de22fd6\") " pod="openstack/ceilometer-0" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.963479 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/34f59971-b32b-4b19-950c-77af3de22fd6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"34f59971-b32b-4b19-950c-77af3de22fd6\") " pod="openstack/ceilometer-0" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.963526 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34f59971-b32b-4b19-950c-77af3de22fd6-scripts\") pod \"ceilometer-0\" (UID: \"34f59971-b32b-4b19-950c-77af3de22fd6\") " pod="openstack/ceilometer-0" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.963545 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/34f59971-b32b-4b19-950c-77af3de22fd6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"34f59971-b32b-4b19-950c-77af3de22fd6\") " pod="openstack/ceilometer-0" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.964316 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34f59971-b32b-4b19-950c-77af3de22fd6-log-httpd\") pod \"ceilometer-0\" (UID: \"34f59971-b32b-4b19-950c-77af3de22fd6\") " pod="openstack/ceilometer-0" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 
07:05:33.965431 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34f59971-b32b-4b19-950c-77af3de22fd6-run-httpd\") pod \"ceilometer-0\" (UID: \"34f59971-b32b-4b19-950c-77af3de22fd6\") " pod="openstack/ceilometer-0" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.969370 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/34f59971-b32b-4b19-950c-77af3de22fd6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"34f59971-b32b-4b19-950c-77af3de22fd6\") " pod="openstack/ceilometer-0" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.969803 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34f59971-b32b-4b19-950c-77af3de22fd6-config-data\") pod \"ceilometer-0\" (UID: \"34f59971-b32b-4b19-950c-77af3de22fd6\") " pod="openstack/ceilometer-0" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.969831 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34f59971-b32b-4b19-950c-77af3de22fd6-scripts\") pod \"ceilometer-0\" (UID: \"34f59971-b32b-4b19-950c-77af3de22fd6\") " pod="openstack/ceilometer-0" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.970023 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34f59971-b32b-4b19-950c-77af3de22fd6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"34f59971-b32b-4b19-950c-77af3de22fd6\") " pod="openstack/ceilometer-0" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.972491 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/34f59971-b32b-4b19-950c-77af3de22fd6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"34f59971-b32b-4b19-950c-77af3de22fd6\") " 
pod="openstack/ceilometer-0" Jan 29 07:05:33 crc kubenswrapper[4826]: I0129 07:05:33.991434 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk5kl\" (UniqueName: \"kubernetes.io/projected/34f59971-b32b-4b19-950c-77af3de22fd6-kube-api-access-qk5kl\") pod \"ceilometer-0\" (UID: \"34f59971-b32b-4b19-950c-77af3de22fd6\") " pod="openstack/ceilometer-0" Jan 29 07:05:34 crc kubenswrapper[4826]: I0129 07:05:34.110747 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 07:05:34 crc kubenswrapper[4826]: I0129 07:05:34.573625 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:05:34 crc kubenswrapper[4826]: I0129 07:05:34.694983 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34f59971-b32b-4b19-950c-77af3de22fd6","Type":"ContainerStarted","Data":"074537ed7501f1b8046e37ea9a5ed2b8b7b61370a032b224f0056899cf352002"} Jan 29 07:05:34 crc kubenswrapper[4826]: I0129 07:05:34.828476 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a4c7089-bd03-4d18-a40e-860c17aa25f3" path="/var/lib/kubelet/pods/8a4c7089-bd03-4d18-a40e-860c17aa25f3/volumes" Jan 29 07:05:35 crc kubenswrapper[4826]: I0129 07:05:35.048104 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sb2qx" Jan 29 07:05:35 crc kubenswrapper[4826]: I0129 07:05:35.202660 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2950980d-48bd-4b72-964c-14e8d11d5b77-scripts\") pod \"2950980d-48bd-4b72-964c-14e8d11d5b77\" (UID: \"2950980d-48bd-4b72-964c-14e8d11d5b77\") " Jan 29 07:05:35 crc kubenswrapper[4826]: I0129 07:05:35.202728 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99xq6\" (UniqueName: \"kubernetes.io/projected/2950980d-48bd-4b72-964c-14e8d11d5b77-kube-api-access-99xq6\") pod \"2950980d-48bd-4b72-964c-14e8d11d5b77\" (UID: \"2950980d-48bd-4b72-964c-14e8d11d5b77\") " Jan 29 07:05:35 crc kubenswrapper[4826]: I0129 07:05:35.202785 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2950980d-48bd-4b72-964c-14e8d11d5b77-combined-ca-bundle\") pod \"2950980d-48bd-4b72-964c-14e8d11d5b77\" (UID: \"2950980d-48bd-4b72-964c-14e8d11d5b77\") " Jan 29 07:05:35 crc kubenswrapper[4826]: I0129 07:05:35.202886 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2950980d-48bd-4b72-964c-14e8d11d5b77-config-data\") pod \"2950980d-48bd-4b72-964c-14e8d11d5b77\" (UID: \"2950980d-48bd-4b72-964c-14e8d11d5b77\") " Jan 29 07:05:35 crc kubenswrapper[4826]: I0129 07:05:35.208459 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2950980d-48bd-4b72-964c-14e8d11d5b77-scripts" (OuterVolumeSpecName: "scripts") pod "2950980d-48bd-4b72-964c-14e8d11d5b77" (UID: "2950980d-48bd-4b72-964c-14e8d11d5b77"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:05:35 crc kubenswrapper[4826]: I0129 07:05:35.226048 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2950980d-48bd-4b72-964c-14e8d11d5b77-kube-api-access-99xq6" (OuterVolumeSpecName: "kube-api-access-99xq6") pod "2950980d-48bd-4b72-964c-14e8d11d5b77" (UID: "2950980d-48bd-4b72-964c-14e8d11d5b77"). InnerVolumeSpecName "kube-api-access-99xq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:05:35 crc kubenswrapper[4826]: I0129 07:05:35.232867 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2950980d-48bd-4b72-964c-14e8d11d5b77-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2950980d-48bd-4b72-964c-14e8d11d5b77" (UID: "2950980d-48bd-4b72-964c-14e8d11d5b77"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:05:35 crc kubenswrapper[4826]: I0129 07:05:35.236638 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2950980d-48bd-4b72-964c-14e8d11d5b77-config-data" (OuterVolumeSpecName: "config-data") pod "2950980d-48bd-4b72-964c-14e8d11d5b77" (UID: "2950980d-48bd-4b72-964c-14e8d11d5b77"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:05:35 crc kubenswrapper[4826]: I0129 07:05:35.305287 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2950980d-48bd-4b72-964c-14e8d11d5b77-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 07:05:35 crc kubenswrapper[4826]: I0129 07:05:35.305345 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2950980d-48bd-4b72-964c-14e8d11d5b77-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 07:05:35 crc kubenswrapper[4826]: I0129 07:05:35.305358 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99xq6\" (UniqueName: \"kubernetes.io/projected/2950980d-48bd-4b72-964c-14e8d11d5b77-kube-api-access-99xq6\") on node \"crc\" DevicePath \"\""
Jan 29 07:05:35 crc kubenswrapper[4826]: I0129 07:05:35.305369 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2950980d-48bd-4b72-964c-14e8d11d5b77-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 07:05:35 crc kubenswrapper[4826]: I0129 07:05:35.645513 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-557bbc7df7-ptc9r" podUID="64dc8206-670d-44be-887b-48378dbf5a30" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.187:5353: i/o timeout"
Jan 29 07:05:35 crc kubenswrapper[4826]: I0129 07:05:35.706279 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sb2qx" event={"ID":"2950980d-48bd-4b72-964c-14e8d11d5b77","Type":"ContainerDied","Data":"1b8323b8240a2d63e98999b13f571e872d51b526cdb159f3e63d51016b43e485"}
Jan 29 07:05:35 crc kubenswrapper[4826]: I0129 07:05:35.706373 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b8323b8240a2d63e98999b13f571e872d51b526cdb159f3e63d51016b43e485"
Jan 29 07:05:35 crc kubenswrapper[4826]: I0129 07:05:35.706287 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sb2qx"
Jan 29 07:05:35 crc kubenswrapper[4826]: I0129 07:05:35.743683 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34f59971-b32b-4b19-950c-77af3de22fd6","Type":"ContainerStarted","Data":"7c7dba5d83accbaa998425c9ed45b4f96e2e00553764c182bf92c212452b3aee"}
Jan 29 07:05:35 crc kubenswrapper[4826]: I0129 07:05:35.958015 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 29 07:05:35 crc kubenswrapper[4826]: I0129 07:05:35.958375 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="02d44a2c-0465-48d4-96a0-e248904d3213" containerName="nova-api-api" containerID="cri-o://d238b8c0e32fea300b228a709635986145f0215dc574d2f201145f87abc62869" gracePeriod=30
Jan 29 07:05:35 crc kubenswrapper[4826]: I0129 07:05:35.958284 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="02d44a2c-0465-48d4-96a0-e248904d3213" containerName="nova-api-log" containerID="cri-o://9f2a5c75ecac54e98c282cadd3b308623c6203378cd4010e5c37c28ffec8c902" gracePeriod=30
Jan 29 07:05:35 crc kubenswrapper[4826]: I0129 07:05:35.992116 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 29 07:05:35 crc kubenswrapper[4826]: I0129 07:05:35.992471 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="5b5d8ef1-2ca8-497b-8245-c37998f31fdb" containerName="nova-scheduler-scheduler" containerID="cri-o://0418191ed0b042868c16dbc2385f651ded763d791ad329ba66d9fa1fc9bf6d89" gracePeriod=30
Jan 29 07:05:36 crc kubenswrapper[4826]: I0129 07:05:36.031805 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 29 07:05:36 crc kubenswrapper[4826]: I0129 07:05:36.032682 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f0df13a7-7612-4388-8ad5-b399b2305d4c" containerName="nova-metadata-log" containerID="cri-o://e66df0cba5d164702489ab88b03e84afe38292edadb59080fb4903202cc2879b" gracePeriod=30
Jan 29 07:05:36 crc kubenswrapper[4826]: I0129 07:05:36.040393 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f0df13a7-7612-4388-8ad5-b399b2305d4c" containerName="nova-metadata-metadata" containerID="cri-o://b529d4058dbd8a282ad5ad44f5d2a4b74a2edc7b98e5444a205fa82df456d9b1" gracePeriod=30
Jan 29 07:05:36 crc kubenswrapper[4826]: I0129 07:05:36.783658 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34f59971-b32b-4b19-950c-77af3de22fd6","Type":"ContainerStarted","Data":"212d41639781efa740d28e2f69b9a84d9805cc97ad8560cd6ff518592748ede0"}
Jan 29 07:05:36 crc kubenswrapper[4826]: I0129 07:05:36.786719 4826 generic.go:334] "Generic (PLEG): container finished" podID="f0df13a7-7612-4388-8ad5-b399b2305d4c" containerID="e66df0cba5d164702489ab88b03e84afe38292edadb59080fb4903202cc2879b" exitCode=143
Jan 29 07:05:36 crc kubenswrapper[4826]: I0129 07:05:36.786785 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f0df13a7-7612-4388-8ad5-b399b2305d4c","Type":"ContainerDied","Data":"e66df0cba5d164702489ab88b03e84afe38292edadb59080fb4903202cc2879b"}
Jan 29 07:05:36 crc kubenswrapper[4826]: I0129 07:05:36.788047 4826 generic.go:334] "Generic (PLEG): container finished" podID="02d44a2c-0465-48d4-96a0-e248904d3213" containerID="d238b8c0e32fea300b228a709635986145f0215dc574d2f201145f87abc62869" exitCode=0
Jan 29 07:05:36 crc kubenswrapper[4826]: I0129 07:05:36.788067 4826 generic.go:334] "Generic (PLEG): container finished" podID="02d44a2c-0465-48d4-96a0-e248904d3213" containerID="9f2a5c75ecac54e98c282cadd3b308623c6203378cd4010e5c37c28ffec8c902" exitCode=143
Jan 29 07:05:36 crc kubenswrapper[4826]: I0129 07:05:36.788084 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"02d44a2c-0465-48d4-96a0-e248904d3213","Type":"ContainerDied","Data":"d238b8c0e32fea300b228a709635986145f0215dc574d2f201145f87abc62869"}
Jan 29 07:05:36 crc kubenswrapper[4826]: I0129 07:05:36.788100 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"02d44a2c-0465-48d4-96a0-e248904d3213","Type":"ContainerDied","Data":"9f2a5c75ecac54e98c282cadd3b308623c6203378cd4010e5c37c28ffec8c902"}
Jan 29 07:05:36 crc kubenswrapper[4826]: I0129 07:05:36.985209 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.052135 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02d44a2c-0465-48d4-96a0-e248904d3213-combined-ca-bundle\") pod \"02d44a2c-0465-48d4-96a0-e248904d3213\" (UID: \"02d44a2c-0465-48d4-96a0-e248904d3213\") "
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.052201 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/02d44a2c-0465-48d4-96a0-e248904d3213-public-tls-certs\") pod \"02d44a2c-0465-48d4-96a0-e248904d3213\" (UID: \"02d44a2c-0465-48d4-96a0-e248904d3213\") "
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.052252 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02d44a2c-0465-48d4-96a0-e248904d3213-logs\") pod \"02d44a2c-0465-48d4-96a0-e248904d3213\" (UID: \"02d44a2c-0465-48d4-96a0-e248904d3213\") "
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.052448 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nv46\" (UniqueName: \"kubernetes.io/projected/02d44a2c-0465-48d4-96a0-e248904d3213-kube-api-access-4nv46\") pod \"02d44a2c-0465-48d4-96a0-e248904d3213\" (UID: \"02d44a2c-0465-48d4-96a0-e248904d3213\") "
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.052672 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/02d44a2c-0465-48d4-96a0-e248904d3213-internal-tls-certs\") pod \"02d44a2c-0465-48d4-96a0-e248904d3213\" (UID: \"02d44a2c-0465-48d4-96a0-e248904d3213\") "
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.052717 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02d44a2c-0465-48d4-96a0-e248904d3213-config-data\") pod \"02d44a2c-0465-48d4-96a0-e248904d3213\" (UID: \"02d44a2c-0465-48d4-96a0-e248904d3213\") "
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.054239 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02d44a2c-0465-48d4-96a0-e248904d3213-logs" (OuterVolumeSpecName: "logs") pod "02d44a2c-0465-48d4-96a0-e248904d3213" (UID: "02d44a2c-0465-48d4-96a0-e248904d3213"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.057698 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02d44a2c-0465-48d4-96a0-e248904d3213-kube-api-access-4nv46" (OuterVolumeSpecName: "kube-api-access-4nv46") pod "02d44a2c-0465-48d4-96a0-e248904d3213" (UID: "02d44a2c-0465-48d4-96a0-e248904d3213"). InnerVolumeSpecName "kube-api-access-4nv46". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.084196 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02d44a2c-0465-48d4-96a0-e248904d3213-config-data" (OuterVolumeSpecName: "config-data") pod "02d44a2c-0465-48d4-96a0-e248904d3213" (UID: "02d44a2c-0465-48d4-96a0-e248904d3213"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.087547 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02d44a2c-0465-48d4-96a0-e248904d3213-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02d44a2c-0465-48d4-96a0-e248904d3213" (UID: "02d44a2c-0465-48d4-96a0-e248904d3213"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.110711 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02d44a2c-0465-48d4-96a0-e248904d3213-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "02d44a2c-0465-48d4-96a0-e248904d3213" (UID: "02d44a2c-0465-48d4-96a0-e248904d3213"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.126209 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02d44a2c-0465-48d4-96a0-e248904d3213-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "02d44a2c-0465-48d4-96a0-e248904d3213" (UID: "02d44a2c-0465-48d4-96a0-e248904d3213"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.155404 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nv46\" (UniqueName: \"kubernetes.io/projected/02d44a2c-0465-48d4-96a0-e248904d3213-kube-api-access-4nv46\") on node \"crc\" DevicePath \"\""
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.155442 4826 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/02d44a2c-0465-48d4-96a0-e248904d3213-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.155458 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02d44a2c-0465-48d4-96a0-e248904d3213-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.155473 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02d44a2c-0465-48d4-96a0-e248904d3213-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.155485 4826 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/02d44a2c-0465-48d4-96a0-e248904d3213-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.155496 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02d44a2c-0465-48d4-96a0-e248904d3213-logs\") on node \"crc\" DevicePath \"\""
Jan 29 07:05:37 crc kubenswrapper[4826]: E0129 07:05:37.599136 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0418191ed0b042868c16dbc2385f651ded763d791ad329ba66d9fa1fc9bf6d89" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 29 07:05:37 crc kubenswrapper[4826]: E0129 07:05:37.601728 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0418191ed0b042868c16dbc2385f651ded763d791ad329ba66d9fa1fc9bf6d89" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 29 07:05:37 crc kubenswrapper[4826]: E0129 07:05:37.603169 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0418191ed0b042868c16dbc2385f651ded763d791ad329ba66d9fa1fc9bf6d89" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 29 07:05:37 crc kubenswrapper[4826]: E0129 07:05:37.603222 4826 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="5b5d8ef1-2ca8-497b-8245-c37998f31fdb" containerName="nova-scheduler-scheduler"
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.798482 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34f59971-b32b-4b19-950c-77af3de22fd6","Type":"ContainerStarted","Data":"196b723afa334558daf445ce5209a0b55ad9e7a17b9e293e6f9442cc86628664"}
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.801010 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"02d44a2c-0465-48d4-96a0-e248904d3213","Type":"ContainerDied","Data":"5df61b41361edf81ff23b5a55a94a9be818d468a21b4dd7e9da2a738010ded17"}
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.801085 4826 scope.go:117] "RemoveContainer" containerID="d238b8c0e32fea300b228a709635986145f0215dc574d2f201145f87abc62869"
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.801098 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.834616 4826 scope.go:117] "RemoveContainer" containerID="9f2a5c75ecac54e98c282cadd3b308623c6203378cd4010e5c37c28ffec8c902"
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.839974 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.848446 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.872609 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 29 07:05:37 crc kubenswrapper[4826]: E0129 07:05:37.873105 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02d44a2c-0465-48d4-96a0-e248904d3213" containerName="nova-api-log"
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.873124 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="02d44a2c-0465-48d4-96a0-e248904d3213" containerName="nova-api-log"
Jan 29 07:05:37 crc kubenswrapper[4826]: E0129 07:05:37.873141 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2950980d-48bd-4b72-964c-14e8d11d5b77" containerName="nova-manage"
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.873147 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="2950980d-48bd-4b72-964c-14e8d11d5b77" containerName="nova-manage"
Jan 29 07:05:37 crc kubenswrapper[4826]: E0129 07:05:37.873174 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02d44a2c-0465-48d4-96a0-e248904d3213" containerName="nova-api-api"
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.873181 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="02d44a2c-0465-48d4-96a0-e248904d3213" containerName="nova-api-api"
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.873387 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="2950980d-48bd-4b72-964c-14e8d11d5b77" containerName="nova-manage"
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.873411 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="02d44a2c-0465-48d4-96a0-e248904d3213" containerName="nova-api-log"
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.873430 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="02d44a2c-0465-48d4-96a0-e248904d3213" containerName="nova-api-api"
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.874408 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.877766 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.877968 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.878568 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.882686 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.968158 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3de4a3bc-a01f-424a-8f17-60deaba1f189-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3de4a3bc-a01f-424a-8f17-60deaba1f189\") " pod="openstack/nova-api-0"
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.968522 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3de4a3bc-a01f-424a-8f17-60deaba1f189-logs\") pod \"nova-api-0\" (UID: \"3de4a3bc-a01f-424a-8f17-60deaba1f189\") " pod="openstack/nova-api-0"
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.968557 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3de4a3bc-a01f-424a-8f17-60deaba1f189-public-tls-certs\") pod \"nova-api-0\" (UID: \"3de4a3bc-a01f-424a-8f17-60deaba1f189\") " pod="openstack/nova-api-0"
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.968589 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3de4a3bc-a01f-424a-8f17-60deaba1f189-config-data\") pod \"nova-api-0\" (UID: \"3de4a3bc-a01f-424a-8f17-60deaba1f189\") " pod="openstack/nova-api-0"
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.968664 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q2fs\" (UniqueName: \"kubernetes.io/projected/3de4a3bc-a01f-424a-8f17-60deaba1f189-kube-api-access-4q2fs\") pod \"nova-api-0\" (UID: \"3de4a3bc-a01f-424a-8f17-60deaba1f189\") " pod="openstack/nova-api-0"
Jan 29 07:05:37 crc kubenswrapper[4826]: I0129 07:05:37.968721 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de4a3bc-a01f-424a-8f17-60deaba1f189-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3de4a3bc-a01f-424a-8f17-60deaba1f189\") " pod="openstack/nova-api-0"
Jan 29 07:05:38 crc kubenswrapper[4826]: I0129 07:05:38.070279 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q2fs\" (UniqueName: \"kubernetes.io/projected/3de4a3bc-a01f-424a-8f17-60deaba1f189-kube-api-access-4q2fs\") pod \"nova-api-0\" (UID: \"3de4a3bc-a01f-424a-8f17-60deaba1f189\") " pod="openstack/nova-api-0"
Jan 29 07:05:38 crc kubenswrapper[4826]: I0129 07:05:38.070367 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de4a3bc-a01f-424a-8f17-60deaba1f189-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3de4a3bc-a01f-424a-8f17-60deaba1f189\") " pod="openstack/nova-api-0"
Jan 29 07:05:38 crc kubenswrapper[4826]: I0129 07:05:38.070442 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3de4a3bc-a01f-424a-8f17-60deaba1f189-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3de4a3bc-a01f-424a-8f17-60deaba1f189\") " pod="openstack/nova-api-0"
Jan 29 07:05:38 crc kubenswrapper[4826]: I0129 07:05:38.070481 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3de4a3bc-a01f-424a-8f17-60deaba1f189-logs\") pod \"nova-api-0\" (UID: \"3de4a3bc-a01f-424a-8f17-60deaba1f189\") " pod="openstack/nova-api-0"
Jan 29 07:05:38 crc kubenswrapper[4826]: I0129 07:05:38.070511 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3de4a3bc-a01f-424a-8f17-60deaba1f189-public-tls-certs\") pod \"nova-api-0\" (UID: \"3de4a3bc-a01f-424a-8f17-60deaba1f189\") " pod="openstack/nova-api-0"
Jan 29 07:05:38 crc kubenswrapper[4826]: I0129 07:05:38.070541 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3de4a3bc-a01f-424a-8f17-60deaba1f189-config-data\") pod \"nova-api-0\" (UID: \"3de4a3bc-a01f-424a-8f17-60deaba1f189\") " pod="openstack/nova-api-0"
Jan 29 07:05:38 crc kubenswrapper[4826]: I0129 07:05:38.071544 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3de4a3bc-a01f-424a-8f17-60deaba1f189-logs\") pod \"nova-api-0\" (UID: \"3de4a3bc-a01f-424a-8f17-60deaba1f189\") " pod="openstack/nova-api-0"
Jan 29 07:05:38 crc kubenswrapper[4826]: I0129 07:05:38.077778 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3de4a3bc-a01f-424a-8f17-60deaba1f189-public-tls-certs\") pod \"nova-api-0\" (UID: \"3de4a3bc-a01f-424a-8f17-60deaba1f189\") " pod="openstack/nova-api-0"
Jan 29 07:05:38 crc kubenswrapper[4826]: I0129 07:05:38.077951 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3de4a3bc-a01f-424a-8f17-60deaba1f189-config-data\") pod \"nova-api-0\" (UID: \"3de4a3bc-a01f-424a-8f17-60deaba1f189\") " pod="openstack/nova-api-0"
Jan 29 07:05:38 crc kubenswrapper[4826]: I0129 07:05:38.078174 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de4a3bc-a01f-424a-8f17-60deaba1f189-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3de4a3bc-a01f-424a-8f17-60deaba1f189\") " pod="openstack/nova-api-0"
Jan 29 07:05:38 crc kubenswrapper[4826]: I0129 07:05:38.078437 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3de4a3bc-a01f-424a-8f17-60deaba1f189-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3de4a3bc-a01f-424a-8f17-60deaba1f189\") " pod="openstack/nova-api-0"
Jan 29 07:05:38 crc kubenswrapper[4826]: I0129 07:05:38.093149 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q2fs\" (UniqueName: \"kubernetes.io/projected/3de4a3bc-a01f-424a-8f17-60deaba1f189-kube-api-access-4q2fs\") pod \"nova-api-0\" (UID: \"3de4a3bc-a01f-424a-8f17-60deaba1f189\") " pod="openstack/nova-api-0"
Jan 29 07:05:38 crc kubenswrapper[4826]: I0129 07:05:38.199393 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 29 07:05:38 crc kubenswrapper[4826]: I0129 07:05:38.680826 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 29 07:05:38 crc kubenswrapper[4826]: I0129 07:05:38.838010 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02d44a2c-0465-48d4-96a0-e248904d3213" path="/var/lib/kubelet/pods/02d44a2c-0465-48d4-96a0-e248904d3213/volumes"
Jan 29 07:05:38 crc kubenswrapper[4826]: I0129 07:05:38.839024 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 29 07:05:38 crc kubenswrapper[4826]: I0129 07:05:38.839055 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3de4a3bc-a01f-424a-8f17-60deaba1f189","Type":"ContainerStarted","Data":"1da69d9dcf7373733a52d1259b040c5b6ebf38233b267a25e8066704b0c9e8c2"}
Jan 29 07:05:38 crc kubenswrapper[4826]: I0129 07:05:38.839076 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34f59971-b32b-4b19-950c-77af3de22fd6","Type":"ContainerStarted","Data":"b9493f3c4087383b633335c756de30375c0411684ed837b040dded2644f790f2"}
Jan 29 07:05:38 crc kubenswrapper[4826]: I0129 07:05:38.862339 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.891412113 podStartE2EDuration="5.862323002s" podCreationTimestamp="2026-01-29 07:05:33 +0000 UTC" firstStartedPulling="2026-01-29 07:05:34.573872355 +0000 UTC m=+1318.435665424" lastFinishedPulling="2026-01-29 07:05:38.544783244 +0000 UTC m=+1322.406576313" observedRunningTime="2026-01-29 07:05:38.851725173 +0000 UTC m=+1322.713518242" watchObservedRunningTime="2026-01-29 07:05:38.862323002 +0000 UTC m=+1322.724116071"
Jan 29 07:05:39 crc kubenswrapper[4826]: I0129 07:05:39.480630 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="f0df13a7-7612-4388-8ad5-b399b2305d4c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.190:8775/\": read tcp 10.217.0.2:43038->10.217.0.190:8775: read: connection reset by peer"
Jan 29 07:05:39 crc kubenswrapper[4826]: I0129 07:05:39.480630 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="f0df13a7-7612-4388-8ad5-b399b2305d4c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.190:8775/\": read tcp 10.217.0.2:43044->10.217.0.190:8775: read: connection reset by peer"
Jan 29 07:05:39 crc kubenswrapper[4826]: E0129 07:05:39.788464 4826 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02d44a2c_0465_48d4_96a0_e248904d3213.slice/crio-5df61b41361edf81ff23b5a55a94a9be818d468a21b4dd7e9da2a738010ded17\": RecentStats: unable to find data in memory cache]"
Jan 29 07:05:39 crc kubenswrapper[4826]: I0129 07:05:39.837894 4826 generic.go:334] "Generic (PLEG): container finished" podID="f0df13a7-7612-4388-8ad5-b399b2305d4c" containerID="b529d4058dbd8a282ad5ad44f5d2a4b74a2edc7b98e5444a205fa82df456d9b1" exitCode=0
Jan 29 07:05:39 crc kubenswrapper[4826]: I0129 07:05:39.837970 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f0df13a7-7612-4388-8ad5-b399b2305d4c","Type":"ContainerDied","Data":"b529d4058dbd8a282ad5ad44f5d2a4b74a2edc7b98e5444a205fa82df456d9b1"}
Jan 29 07:05:39 crc kubenswrapper[4826]: I0129 07:05:39.839824 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3de4a3bc-a01f-424a-8f17-60deaba1f189","Type":"ContainerStarted","Data":"d9368ce3c4b22eb7ed796c96a8b8b0a80f4ad3b81b2110fb2465fa6b3f09ec54"}
Jan 29 07:05:39 crc kubenswrapper[4826]: I0129 07:05:39.839865 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3de4a3bc-a01f-424a-8f17-60deaba1f189","Type":"ContainerStarted","Data":"38411dbe679ef8c89b951c34afc01e2b04f84066915dd175b2c3a4d60a5cccb1"}
Jan 29 07:05:39 crc kubenswrapper[4826]: I0129 07:05:39.860794 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.860772523 podStartE2EDuration="2.860772523s" podCreationTimestamp="2026-01-29 07:05:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:05:39.855424832 +0000 UTC m=+1323.717217901" watchObservedRunningTime="2026-01-29 07:05:39.860772523 +0000 UTC m=+1323.722565592"
Jan 29 07:05:40 crc kubenswrapper[4826]: I0129 07:05:40.208950 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 29 07:05:40 crc kubenswrapper[4826]: I0129 07:05:40.322485 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpc7v\" (UniqueName: \"kubernetes.io/projected/f0df13a7-7612-4388-8ad5-b399b2305d4c-kube-api-access-qpc7v\") pod \"f0df13a7-7612-4388-8ad5-b399b2305d4c\" (UID: \"f0df13a7-7612-4388-8ad5-b399b2305d4c\") "
Jan 29 07:05:40 crc kubenswrapper[4826]: I0129 07:05:40.322830 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0df13a7-7612-4388-8ad5-b399b2305d4c-config-data\") pod \"f0df13a7-7612-4388-8ad5-b399b2305d4c\" (UID: \"f0df13a7-7612-4388-8ad5-b399b2305d4c\") "
Jan 29 07:05:40 crc kubenswrapper[4826]: I0129 07:05:40.322987 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0df13a7-7612-4388-8ad5-b399b2305d4c-logs\") pod \"f0df13a7-7612-4388-8ad5-b399b2305d4c\" (UID: \"f0df13a7-7612-4388-8ad5-b399b2305d4c\") "
Jan 29 07:05:40 crc kubenswrapper[4826]: I0129 07:05:40.323100 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0df13a7-7612-4388-8ad5-b399b2305d4c-combined-ca-bundle\") pod \"f0df13a7-7612-4388-8ad5-b399b2305d4c\" (UID: \"f0df13a7-7612-4388-8ad5-b399b2305d4c\") "
Jan 29 07:05:40 crc kubenswrapper[4826]: I0129 07:05:40.323546 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0df13a7-7612-4388-8ad5-b399b2305d4c-logs" (OuterVolumeSpecName: "logs") pod "f0df13a7-7612-4388-8ad5-b399b2305d4c" (UID: "f0df13a7-7612-4388-8ad5-b399b2305d4c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 07:05:40 crc kubenswrapper[4826]: I0129 07:05:40.323839 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0df13a7-7612-4388-8ad5-b399b2305d4c-nova-metadata-tls-certs\") pod \"f0df13a7-7612-4388-8ad5-b399b2305d4c\" (UID: \"f0df13a7-7612-4388-8ad5-b399b2305d4c\") "
Jan 29 07:05:40 crc kubenswrapper[4826]: I0129 07:05:40.324439 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0df13a7-7612-4388-8ad5-b399b2305d4c-logs\") on node \"crc\" DevicePath \"\""
Jan 29 07:05:40 crc kubenswrapper[4826]: I0129 07:05:40.331444 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0df13a7-7612-4388-8ad5-b399b2305d4c-kube-api-access-qpc7v" (OuterVolumeSpecName: "kube-api-access-qpc7v") pod "f0df13a7-7612-4388-8ad5-b399b2305d4c" (UID: "f0df13a7-7612-4388-8ad5-b399b2305d4c"). InnerVolumeSpecName "kube-api-access-qpc7v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 07:05:40 crc kubenswrapper[4826]: I0129 07:05:40.359692 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0df13a7-7612-4388-8ad5-b399b2305d4c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0df13a7-7612-4388-8ad5-b399b2305d4c" (UID: "f0df13a7-7612-4388-8ad5-b399b2305d4c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:05:40 crc kubenswrapper[4826]: I0129 07:05:40.359708 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0df13a7-7612-4388-8ad5-b399b2305d4c-config-data" (OuterVolumeSpecName: "config-data") pod "f0df13a7-7612-4388-8ad5-b399b2305d4c" (UID: "f0df13a7-7612-4388-8ad5-b399b2305d4c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:05:40 crc kubenswrapper[4826]: I0129 07:05:40.396491 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0df13a7-7612-4388-8ad5-b399b2305d4c-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "f0df13a7-7612-4388-8ad5-b399b2305d4c" (UID: "f0df13a7-7612-4388-8ad5-b399b2305d4c"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:05:40 crc kubenswrapper[4826]: I0129 07:05:40.425817 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpc7v\" (UniqueName: \"kubernetes.io/projected/f0df13a7-7612-4388-8ad5-b399b2305d4c-kube-api-access-qpc7v\") on node \"crc\" DevicePath \"\""
Jan 29 07:05:40 crc kubenswrapper[4826]: I0129 07:05:40.425848 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0df13a7-7612-4388-8ad5-b399b2305d4c-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 07:05:40 crc kubenswrapper[4826]: I0129 07:05:40.425863 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0df13a7-7612-4388-8ad5-b399b2305d4c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 07:05:40 crc kubenswrapper[4826]: I0129 07:05:40.425871 4826 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0df13a7-7612-4388-8ad5-b399b2305d4c-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 07:05:40 crc kubenswrapper[4826]: I0129 07:05:40.851186 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 29 07:05:40 crc kubenswrapper[4826]: I0129 07:05:40.851344 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f0df13a7-7612-4388-8ad5-b399b2305d4c","Type":"ContainerDied","Data":"895f752d91a7ad74c1b5578b94f3bfd42e7ce4f6cc071ff87724622c870a8742"}
Jan 29 07:05:40 crc kubenswrapper[4826]: I0129 07:05:40.852047 4826 scope.go:117] "RemoveContainer" containerID="b529d4058dbd8a282ad5ad44f5d2a4b74a2edc7b98e5444a205fa82df456d9b1"
Jan 29 07:05:40 crc kubenswrapper[4826]: I0129 07:05:40.901635 4826 scope.go:117] "RemoveContainer" containerID="e66df0cba5d164702489ab88b03e84afe38292edadb59080fb4903202cc2879b"
Jan 29 07:05:40 crc kubenswrapper[4826]: I0129 07:05:40.901779 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 29 07:05:40 crc kubenswrapper[4826]: I0129 07:05:40.927522 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 29 07:05:40 crc kubenswrapper[4826]: I0129 07:05:40.939011 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 29 07:05:40 crc kubenswrapper[4826]: E0129 07:05:40.939534 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0df13a7-7612-4388-8ad5-b399b2305d4c" containerName="nova-metadata-metadata"
Jan 29 07:05:40 crc kubenswrapper[4826]: I0129 07:05:40.939555 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0df13a7-7612-4388-8ad5-b399b2305d4c" containerName="nova-metadata-metadata"
Jan 29 07:05:40 crc kubenswrapper[4826]: E0129 07:05:40.939574 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0df13a7-7612-4388-8ad5-b399b2305d4c" containerName="nova-metadata-log"
Jan 29 07:05:40 crc kubenswrapper[4826]: I0129 07:05:40.939582 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0df13a7-7612-4388-8ad5-b399b2305d4c" containerName="nova-metadata-log"
Jan 29 07:05:40 crc kubenswrapper[4826]: I0129 07:05:40.939811 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0df13a7-7612-4388-8ad5-b399b2305d4c" containerName="nova-metadata-log"
Jan 29 07:05:40 crc kubenswrapper[4826]: I0129 07:05:40.939832 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0df13a7-7612-4388-8ad5-b399b2305d4c" containerName="nova-metadata-metadata"
Jan 29 07:05:40 crc kubenswrapper[4826]: I0129 07:05:40.941015 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 29 07:05:40 crc kubenswrapper[4826]: I0129 07:05:40.945858 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 29 07:05:40 crc kubenswrapper[4826]: I0129 07:05:40.946073 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 29 07:05:40 crc kubenswrapper[4826]: I0129 07:05:40.947075 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 29 07:05:41 crc kubenswrapper[4826]: I0129 07:05:41.035388 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9349a8ff-2652-4dcf-89d9-6d440269be8c-config-data\") pod \"nova-metadata-0\" (UID: \"9349a8ff-2652-4dcf-89d9-6d440269be8c\") " pod="openstack/nova-metadata-0"
Jan 29 07:05:41 crc kubenswrapper[4826]: I0129 07:05:41.035447 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txvt7\" (UniqueName: \"kubernetes.io/projected/9349a8ff-2652-4dcf-89d9-6d440269be8c-kube-api-access-txvt7\") pod \"nova-metadata-0\" (UID: \"9349a8ff-2652-4dcf-89d9-6d440269be8c\") " pod="openstack/nova-metadata-0"
Jan 29 07:05:41 crc kubenswrapper[4826]: I0129 07:05:41.035612 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9349a8ff-2652-4dcf-89d9-6d440269be8c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9349a8ff-2652-4dcf-89d9-6d440269be8c\") " pod="openstack/nova-metadata-0"
Jan 29 07:05:41 crc kubenswrapper[4826]: I0129 07:05:41.035656 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9349a8ff-2652-4dcf-89d9-6d440269be8c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9349a8ff-2652-4dcf-89d9-6d440269be8c\") " pod="openstack/nova-metadata-0"
Jan 29 07:05:41 crc kubenswrapper[4826]: I0129 07:05:41.035767 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9349a8ff-2652-4dcf-89d9-6d440269be8c-logs\") pod \"nova-metadata-0\" (UID: \"9349a8ff-2652-4dcf-89d9-6d440269be8c\") " pod="openstack/nova-metadata-0"
Jan 29 07:05:41 crc kubenswrapper[4826]: I0129 07:05:41.137706 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9349a8ff-2652-4dcf-89d9-6d440269be8c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9349a8ff-2652-4dcf-89d9-6d440269be8c\") " pod="openstack/nova-metadata-0"
Jan 29 07:05:41 crc kubenswrapper[4826]: I0129 07:05:41.138005 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9349a8ff-2652-4dcf-89d9-6d440269be8c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9349a8ff-2652-4dcf-89d9-6d440269be8c\") " pod="openstack/nova-metadata-0"
Jan 29 07:05:41 crc kubenswrapper[4826]: I0129 07:05:41.138163 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9349a8ff-2652-4dcf-89d9-6d440269be8c-logs\") pod \"nova-metadata-0\" (UID:
\"9349a8ff-2652-4dcf-89d9-6d440269be8c\") " pod="openstack/nova-metadata-0" Jan 29 07:05:41 crc kubenswrapper[4826]: I0129 07:05:41.138345 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9349a8ff-2652-4dcf-89d9-6d440269be8c-config-data\") pod \"nova-metadata-0\" (UID: \"9349a8ff-2652-4dcf-89d9-6d440269be8c\") " pod="openstack/nova-metadata-0" Jan 29 07:05:41 crc kubenswrapper[4826]: I0129 07:05:41.138460 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txvt7\" (UniqueName: \"kubernetes.io/projected/9349a8ff-2652-4dcf-89d9-6d440269be8c-kube-api-access-txvt7\") pod \"nova-metadata-0\" (UID: \"9349a8ff-2652-4dcf-89d9-6d440269be8c\") " pod="openstack/nova-metadata-0" Jan 29 07:05:41 crc kubenswrapper[4826]: I0129 07:05:41.138792 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9349a8ff-2652-4dcf-89d9-6d440269be8c-logs\") pod \"nova-metadata-0\" (UID: \"9349a8ff-2652-4dcf-89d9-6d440269be8c\") " pod="openstack/nova-metadata-0" Jan 29 07:05:41 crc kubenswrapper[4826]: I0129 07:05:41.142698 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9349a8ff-2652-4dcf-89d9-6d440269be8c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9349a8ff-2652-4dcf-89d9-6d440269be8c\") " pod="openstack/nova-metadata-0" Jan 29 07:05:41 crc kubenswrapper[4826]: I0129 07:05:41.153996 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9349a8ff-2652-4dcf-89d9-6d440269be8c-config-data\") pod \"nova-metadata-0\" (UID: \"9349a8ff-2652-4dcf-89d9-6d440269be8c\") " pod="openstack/nova-metadata-0" Jan 29 07:05:41 crc kubenswrapper[4826]: I0129 07:05:41.156926 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9349a8ff-2652-4dcf-89d9-6d440269be8c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9349a8ff-2652-4dcf-89d9-6d440269be8c\") " pod="openstack/nova-metadata-0" Jan 29 07:05:41 crc kubenswrapper[4826]: I0129 07:05:41.158185 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txvt7\" (UniqueName: \"kubernetes.io/projected/9349a8ff-2652-4dcf-89d9-6d440269be8c-kube-api-access-txvt7\") pod \"nova-metadata-0\" (UID: \"9349a8ff-2652-4dcf-89d9-6d440269be8c\") " pod="openstack/nova-metadata-0" Jan 29 07:05:41 crc kubenswrapper[4826]: I0129 07:05:41.272966 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 07:05:41 crc kubenswrapper[4826]: I0129 07:05:41.780950 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 07:05:41 crc kubenswrapper[4826]: I0129 07:05:41.861087 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9349a8ff-2652-4dcf-89d9-6d440269be8c","Type":"ContainerStarted","Data":"bdc684e76974439132ff0108524caf801319a4bee28decfd217a36973e91d2a1"} Jan 29 07:05:41 crc kubenswrapper[4826]: I0129 07:05:41.862424 4826 generic.go:334] "Generic (PLEG): container finished" podID="5b5d8ef1-2ca8-497b-8245-c37998f31fdb" containerID="0418191ed0b042868c16dbc2385f651ded763d791ad329ba66d9fa1fc9bf6d89" exitCode=0 Jan 29 07:05:41 crc kubenswrapper[4826]: I0129 07:05:41.862548 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5b5d8ef1-2ca8-497b-8245-c37998f31fdb","Type":"ContainerDied","Data":"0418191ed0b042868c16dbc2385f651ded763d791ad329ba66d9fa1fc9bf6d89"} Jan 29 07:05:41 crc kubenswrapper[4826]: I0129 07:05:41.862585 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"5b5d8ef1-2ca8-497b-8245-c37998f31fdb","Type":"ContainerDied","Data":"c3c0598735eafaa68dfcee92f51585880b80e38ec7eb3bd5aed485d119ab2b34"} Jan 29 07:05:41 crc kubenswrapper[4826]: I0129 07:05:41.862600 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3c0598735eafaa68dfcee92f51585880b80e38ec7eb3bd5aed485d119ab2b34" Jan 29 07:05:42 crc kubenswrapper[4826]: I0129 07:05:42.027519 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 07:05:42 crc kubenswrapper[4826]: I0129 07:05:42.056060 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtn6r\" (UniqueName: \"kubernetes.io/projected/5b5d8ef1-2ca8-497b-8245-c37998f31fdb-kube-api-access-qtn6r\") pod \"5b5d8ef1-2ca8-497b-8245-c37998f31fdb\" (UID: \"5b5d8ef1-2ca8-497b-8245-c37998f31fdb\") " Jan 29 07:05:42 crc kubenswrapper[4826]: I0129 07:05:42.056171 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b5d8ef1-2ca8-497b-8245-c37998f31fdb-config-data\") pod \"5b5d8ef1-2ca8-497b-8245-c37998f31fdb\" (UID: \"5b5d8ef1-2ca8-497b-8245-c37998f31fdb\") " Jan 29 07:05:42 crc kubenswrapper[4826]: I0129 07:05:42.056253 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b5d8ef1-2ca8-497b-8245-c37998f31fdb-combined-ca-bundle\") pod \"5b5d8ef1-2ca8-497b-8245-c37998f31fdb\" (UID: \"5b5d8ef1-2ca8-497b-8245-c37998f31fdb\") " Jan 29 07:05:42 crc kubenswrapper[4826]: E0129 07:05:42.101611 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b5d8ef1-2ca8-497b-8245-c37998f31fdb-combined-ca-bundle podName:5b5d8ef1-2ca8-497b-8245-c37998f31fdb nodeName:}" failed. No retries permitted until 2026-01-29 07:05:42.601580253 +0000 UTC m=+1326.463373322 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/5b5d8ef1-2ca8-497b-8245-c37998f31fdb-combined-ca-bundle") pod "5b5d8ef1-2ca8-497b-8245-c37998f31fdb" (UID: "5b5d8ef1-2ca8-497b-8245-c37998f31fdb") : error deleting /var/lib/kubelet/pods/5b5d8ef1-2ca8-497b-8245-c37998f31fdb/volume-subpaths: remove /var/lib/kubelet/pods/5b5d8ef1-2ca8-497b-8245-c37998f31fdb/volume-subpaths: no such file or directory Jan 29 07:05:42 crc kubenswrapper[4826]: I0129 07:05:42.101718 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b5d8ef1-2ca8-497b-8245-c37998f31fdb-kube-api-access-qtn6r" (OuterVolumeSpecName: "kube-api-access-qtn6r") pod "5b5d8ef1-2ca8-497b-8245-c37998f31fdb" (UID: "5b5d8ef1-2ca8-497b-8245-c37998f31fdb"). InnerVolumeSpecName "kube-api-access-qtn6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:05:42 crc kubenswrapper[4826]: I0129 07:05:42.104561 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b5d8ef1-2ca8-497b-8245-c37998f31fdb-config-data" (OuterVolumeSpecName: "config-data") pod "5b5d8ef1-2ca8-497b-8245-c37998f31fdb" (UID: "5b5d8ef1-2ca8-497b-8245-c37998f31fdb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:05:42 crc kubenswrapper[4826]: I0129 07:05:42.158655 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtn6r\" (UniqueName: \"kubernetes.io/projected/5b5d8ef1-2ca8-497b-8245-c37998f31fdb-kube-api-access-qtn6r\") on node \"crc\" DevicePath \"\"" Jan 29 07:05:42 crc kubenswrapper[4826]: I0129 07:05:42.158689 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b5d8ef1-2ca8-497b-8245-c37998f31fdb-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:05:42 crc kubenswrapper[4826]: I0129 07:05:42.668644 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b5d8ef1-2ca8-497b-8245-c37998f31fdb-combined-ca-bundle\") pod \"5b5d8ef1-2ca8-497b-8245-c37998f31fdb\" (UID: \"5b5d8ef1-2ca8-497b-8245-c37998f31fdb\") " Jan 29 07:05:42 crc kubenswrapper[4826]: I0129 07:05:42.672534 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b5d8ef1-2ca8-497b-8245-c37998f31fdb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b5d8ef1-2ca8-497b-8245-c37998f31fdb" (UID: "5b5d8ef1-2ca8-497b-8245-c37998f31fdb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:05:42 crc kubenswrapper[4826]: I0129 07:05:42.770868 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b5d8ef1-2ca8-497b-8245-c37998f31fdb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:05:42 crc kubenswrapper[4826]: I0129 07:05:42.820633 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0df13a7-7612-4388-8ad5-b399b2305d4c" path="/var/lib/kubelet/pods/f0df13a7-7612-4388-8ad5-b399b2305d4c/volumes" Jan 29 07:05:42 crc kubenswrapper[4826]: I0129 07:05:42.872801 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 07:05:42 crc kubenswrapper[4826]: I0129 07:05:42.873923 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9349a8ff-2652-4dcf-89d9-6d440269be8c","Type":"ContainerStarted","Data":"b50ac83dbd4ed6b1a94d8b1a7e79a0a2f0cbe1ecca7f07d779adbd94338a2040"} Jan 29 07:05:42 crc kubenswrapper[4826]: I0129 07:05:42.873994 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9349a8ff-2652-4dcf-89d9-6d440269be8c","Type":"ContainerStarted","Data":"768c302445504a9f1d3eff35fdd9007e37101a52925f9f686a83584515eeb5c2"} Jan 29 07:05:42 crc kubenswrapper[4826]: I0129 07:05:42.902928 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.902150065 podStartE2EDuration="2.902150065s" podCreationTimestamp="2026-01-29 07:05:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 07:05:42.896115016 +0000 UTC m=+1326.757908125" watchObservedRunningTime="2026-01-29 07:05:42.902150065 +0000 UTC m=+1326.763943124" Jan 29 07:05:42 crc kubenswrapper[4826]: I0129 07:05:42.965150 4826 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 07:05:42 crc kubenswrapper[4826]: I0129 07:05:42.974778 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 07:05:42 crc kubenswrapper[4826]: I0129 07:05:42.985158 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 07:05:42 crc kubenswrapper[4826]: E0129 07:05:42.985712 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b5d8ef1-2ca8-497b-8245-c37998f31fdb" containerName="nova-scheduler-scheduler" Jan 29 07:05:42 crc kubenswrapper[4826]: I0129 07:05:42.985735 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b5d8ef1-2ca8-497b-8245-c37998f31fdb" containerName="nova-scheduler-scheduler" Jan 29 07:05:42 crc kubenswrapper[4826]: I0129 07:05:42.985944 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b5d8ef1-2ca8-497b-8245-c37998f31fdb" containerName="nova-scheduler-scheduler" Jan 29 07:05:42 crc kubenswrapper[4826]: I0129 07:05:42.986744 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 07:05:42 crc kubenswrapper[4826]: I0129 07:05:42.990343 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 07:05:42 crc kubenswrapper[4826]: I0129 07:05:42.992197 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 07:05:43 crc kubenswrapper[4826]: I0129 07:05:43.075470 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/960b6ae0-2577-444e-bc2a-bea4ec2917f9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"960b6ae0-2577-444e-bc2a-bea4ec2917f9\") " pod="openstack/nova-scheduler-0" Jan 29 07:05:43 crc kubenswrapper[4826]: I0129 07:05:43.075554 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/960b6ae0-2577-444e-bc2a-bea4ec2917f9-config-data\") pod \"nova-scheduler-0\" (UID: \"960b6ae0-2577-444e-bc2a-bea4ec2917f9\") " pod="openstack/nova-scheduler-0" Jan 29 07:05:43 crc kubenswrapper[4826]: I0129 07:05:43.075698 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbs7p\" (UniqueName: \"kubernetes.io/projected/960b6ae0-2577-444e-bc2a-bea4ec2917f9-kube-api-access-qbs7p\") pod \"nova-scheduler-0\" (UID: \"960b6ae0-2577-444e-bc2a-bea4ec2917f9\") " pod="openstack/nova-scheduler-0" Jan 29 07:05:43 crc kubenswrapper[4826]: I0129 07:05:43.178170 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/960b6ae0-2577-444e-bc2a-bea4ec2917f9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"960b6ae0-2577-444e-bc2a-bea4ec2917f9\") " pod="openstack/nova-scheduler-0" Jan 29 07:05:43 crc kubenswrapper[4826]: I0129 07:05:43.178390 4826 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/960b6ae0-2577-444e-bc2a-bea4ec2917f9-config-data\") pod \"nova-scheduler-0\" (UID: \"960b6ae0-2577-444e-bc2a-bea4ec2917f9\") " pod="openstack/nova-scheduler-0" Jan 29 07:05:43 crc kubenswrapper[4826]: I0129 07:05:43.178503 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbs7p\" (UniqueName: \"kubernetes.io/projected/960b6ae0-2577-444e-bc2a-bea4ec2917f9-kube-api-access-qbs7p\") pod \"nova-scheduler-0\" (UID: \"960b6ae0-2577-444e-bc2a-bea4ec2917f9\") " pod="openstack/nova-scheduler-0" Jan 29 07:05:43 crc kubenswrapper[4826]: I0129 07:05:43.196916 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/960b6ae0-2577-444e-bc2a-bea4ec2917f9-config-data\") pod \"nova-scheduler-0\" (UID: \"960b6ae0-2577-444e-bc2a-bea4ec2917f9\") " pod="openstack/nova-scheduler-0" Jan 29 07:05:43 crc kubenswrapper[4826]: I0129 07:05:43.197056 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/960b6ae0-2577-444e-bc2a-bea4ec2917f9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"960b6ae0-2577-444e-bc2a-bea4ec2917f9\") " pod="openstack/nova-scheduler-0" Jan 29 07:05:43 crc kubenswrapper[4826]: I0129 07:05:43.200035 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbs7p\" (UniqueName: \"kubernetes.io/projected/960b6ae0-2577-444e-bc2a-bea4ec2917f9-kube-api-access-qbs7p\") pod \"nova-scheduler-0\" (UID: \"960b6ae0-2577-444e-bc2a-bea4ec2917f9\") " pod="openstack/nova-scheduler-0" Jan 29 07:05:43 crc kubenswrapper[4826]: I0129 07:05:43.311780 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 07:05:43 crc kubenswrapper[4826]: I0129 07:05:43.828534 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 07:05:43 crc kubenswrapper[4826]: W0129 07:05:43.838169 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod960b6ae0_2577_444e_bc2a_bea4ec2917f9.slice/crio-71ea196572757137a9598994006b2dff02dcfbf2bea44bcbdea3081e3e3ab34e WatchSource:0}: Error finding container 71ea196572757137a9598994006b2dff02dcfbf2bea44bcbdea3081e3e3ab34e: Status 404 returned error can't find the container with id 71ea196572757137a9598994006b2dff02dcfbf2bea44bcbdea3081e3e3ab34e Jan 29 07:05:43 crc kubenswrapper[4826]: I0129 07:05:43.882441 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"960b6ae0-2577-444e-bc2a-bea4ec2917f9","Type":"ContainerStarted","Data":"71ea196572757137a9598994006b2dff02dcfbf2bea44bcbdea3081e3e3ab34e"} Jan 29 07:05:44 crc kubenswrapper[4826]: I0129 07:05:44.822171 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b5d8ef1-2ca8-497b-8245-c37998f31fdb" path="/var/lib/kubelet/pods/5b5d8ef1-2ca8-497b-8245-c37998f31fdb/volumes" Jan 29 07:05:44 crc kubenswrapper[4826]: I0129 07:05:44.894677 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"960b6ae0-2577-444e-bc2a-bea4ec2917f9","Type":"ContainerStarted","Data":"aa923caa75a4ef623de542ee9505460d14e9077582229b30f18ddbd944849073"} Jan 29 07:05:44 crc kubenswrapper[4826]: I0129 07:05:44.913661 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.91364742 podStartE2EDuration="2.91364742s" podCreationTimestamp="2026-01-29 07:05:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-29 07:05:44.91213052 +0000 UTC m=+1328.773923589" watchObservedRunningTime="2026-01-29 07:05:44.91364742 +0000 UTC m=+1328.775440489" Jan 29 07:05:46 crc kubenswrapper[4826]: I0129 07:05:46.273916 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 07:05:46 crc kubenswrapper[4826]: I0129 07:05:46.274703 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 07:05:48 crc kubenswrapper[4826]: I0129 07:05:48.200022 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 07:05:48 crc kubenswrapper[4826]: I0129 07:05:48.200383 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 07:05:48 crc kubenswrapper[4826]: I0129 07:05:48.317612 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 07:05:49 crc kubenswrapper[4826]: I0129 07:05:49.216578 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3de4a3bc-a01f-424a-8f17-60deaba1f189" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.202:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 07:05:49 crc kubenswrapper[4826]: I0129 07:05:49.216623 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3de4a3bc-a01f-424a-8f17-60deaba1f189" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.202:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 07:05:50 crc kubenswrapper[4826]: E0129 07:05:50.024586 4826 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02d44a2c_0465_48d4_96a0_e248904d3213.slice/crio-5df61b41361edf81ff23b5a55a94a9be818d468a21b4dd7e9da2a738010ded17\": RecentStats: unable to find data in memory cache]" Jan 29 07:05:51 crc kubenswrapper[4826]: I0129 07:05:51.273905 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 07:05:51 crc kubenswrapper[4826]: I0129 07:05:51.273953 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 07:05:52 crc kubenswrapper[4826]: I0129 07:05:52.289752 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9349a8ff-2652-4dcf-89d9-6d440269be8c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 07:05:52 crc kubenswrapper[4826]: I0129 07:05:52.289758 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9349a8ff-2652-4dcf-89d9-6d440269be8c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 07:05:53 crc kubenswrapper[4826]: I0129 07:05:53.312870 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 07:05:53 crc kubenswrapper[4826]: I0129 07:05:53.356483 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 07:05:54 crc kubenswrapper[4826]: I0129 07:05:54.053290 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 07:05:58 crc kubenswrapper[4826]: I0129 07:05:58.206647 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-api-0" Jan 29 07:05:58 crc kubenswrapper[4826]: I0129 07:05:58.207270 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 07:05:58 crc kubenswrapper[4826]: I0129 07:05:58.207853 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 07:05:58 crc kubenswrapper[4826]: I0129 07:05:58.208222 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 07:05:58 crc kubenswrapper[4826]: I0129 07:05:58.214898 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 07:05:58 crc kubenswrapper[4826]: I0129 07:05:58.216561 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 07:06:00 crc kubenswrapper[4826]: E0129 07:06:00.330059 4826 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02d44a2c_0465_48d4_96a0_e248904d3213.slice/crio-5df61b41361edf81ff23b5a55a94a9be818d468a21b4dd7e9da2a738010ded17\": RecentStats: unable to find data in memory cache]" Jan 29 07:06:01 crc kubenswrapper[4826]: I0129 07:06:01.279542 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 07:06:01 crc kubenswrapper[4826]: I0129 07:06:01.283159 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 07:06:01 crc kubenswrapper[4826]: I0129 07:06:01.289800 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 07:06:02 crc kubenswrapper[4826]: I0129 07:06:02.116845 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 07:06:04 crc kubenswrapper[4826]: I0129 07:06:04.126440 4826 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 29 07:06:05 crc kubenswrapper[4826]: I0129 07:06:05.656458 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:06:05 crc kubenswrapper[4826]: I0129 07:06:05.656903 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:06:10 crc kubenswrapper[4826]: E0129 07:06:10.577945 4826 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02d44a2c_0465_48d4_96a0_e248904d3213.slice/crio-5df61b41361edf81ff23b5a55a94a9be818d468a21b4dd7e9da2a738010ded17\": RecentStats: unable to find data in memory cache]" Jan 29 07:06:20 crc kubenswrapper[4826]: I0129 07:06:20.648512 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jrjtp"] Jan 29 07:06:20 crc kubenswrapper[4826]: I0129 07:06:20.654952 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jrjtp" Jan 29 07:06:20 crc kubenswrapper[4826]: I0129 07:06:20.670425 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jrjtp"] Jan 29 07:06:20 crc kubenswrapper[4826]: I0129 07:06:20.721963 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf226f75-106f-4f53-b33b-59f9ebbbefc3-catalog-content\") pod \"redhat-operators-jrjtp\" (UID: \"bf226f75-106f-4f53-b33b-59f9ebbbefc3\") " pod="openshift-marketplace/redhat-operators-jrjtp" Jan 29 07:06:20 crc kubenswrapper[4826]: I0129 07:06:20.722077 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf226f75-106f-4f53-b33b-59f9ebbbefc3-utilities\") pod \"redhat-operators-jrjtp\" (UID: \"bf226f75-106f-4f53-b33b-59f9ebbbefc3\") " pod="openshift-marketplace/redhat-operators-jrjtp" Jan 29 07:06:20 crc kubenswrapper[4826]: I0129 07:06:20.722371 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb8zm\" (UniqueName: \"kubernetes.io/projected/bf226f75-106f-4f53-b33b-59f9ebbbefc3-kube-api-access-rb8zm\") pod \"redhat-operators-jrjtp\" (UID: \"bf226f75-106f-4f53-b33b-59f9ebbbefc3\") " pod="openshift-marketplace/redhat-operators-jrjtp" Jan 29 07:06:20 crc kubenswrapper[4826]: I0129 07:06:20.824627 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf226f75-106f-4f53-b33b-59f9ebbbefc3-utilities\") pod \"redhat-operators-jrjtp\" (UID: \"bf226f75-106f-4f53-b33b-59f9ebbbefc3\") " pod="openshift-marketplace/redhat-operators-jrjtp" Jan 29 07:06:20 crc kubenswrapper[4826]: I0129 07:06:20.824700 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rb8zm\" (UniqueName: \"kubernetes.io/projected/bf226f75-106f-4f53-b33b-59f9ebbbefc3-kube-api-access-rb8zm\") pod \"redhat-operators-jrjtp\" (UID: \"bf226f75-106f-4f53-b33b-59f9ebbbefc3\") " pod="openshift-marketplace/redhat-operators-jrjtp" Jan 29 07:06:20 crc kubenswrapper[4826]: I0129 07:06:20.824851 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf226f75-106f-4f53-b33b-59f9ebbbefc3-catalog-content\") pod \"redhat-operators-jrjtp\" (UID: \"bf226f75-106f-4f53-b33b-59f9ebbbefc3\") " pod="openshift-marketplace/redhat-operators-jrjtp" Jan 29 07:06:20 crc kubenswrapper[4826]: I0129 07:06:20.825480 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf226f75-106f-4f53-b33b-59f9ebbbefc3-catalog-content\") pod \"redhat-operators-jrjtp\" (UID: \"bf226f75-106f-4f53-b33b-59f9ebbbefc3\") " pod="openshift-marketplace/redhat-operators-jrjtp" Jan 29 07:06:20 crc kubenswrapper[4826]: I0129 07:06:20.825559 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf226f75-106f-4f53-b33b-59f9ebbbefc3-utilities\") pod \"redhat-operators-jrjtp\" (UID: \"bf226f75-106f-4f53-b33b-59f9ebbbefc3\") " pod="openshift-marketplace/redhat-operators-jrjtp" Jan 29 07:06:20 crc kubenswrapper[4826]: I0129 07:06:20.848209 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb8zm\" (UniqueName: \"kubernetes.io/projected/bf226f75-106f-4f53-b33b-59f9ebbbefc3-kube-api-access-rb8zm\") pod \"redhat-operators-jrjtp\" (UID: \"bf226f75-106f-4f53-b33b-59f9ebbbefc3\") " pod="openshift-marketplace/redhat-operators-jrjtp" Jan 29 07:06:20 crc kubenswrapper[4826]: E0129 07:06:20.864828 4826 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02d44a2c_0465_48d4_96a0_e248904d3213.slice/crio-5df61b41361edf81ff23b5a55a94a9be818d468a21b4dd7e9da2a738010ded17\": RecentStats: unable to find data in memory cache]" Jan 29 07:06:20 crc kubenswrapper[4826]: I0129 07:06:20.994086 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jrjtp" Jan 29 07:06:21 crc kubenswrapper[4826]: I0129 07:06:21.493616 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jrjtp"] Jan 29 07:06:22 crc kubenswrapper[4826]: I0129 07:06:22.497879 4826 generic.go:334] "Generic (PLEG): container finished" podID="bf226f75-106f-4f53-b33b-59f9ebbbefc3" containerID="600e4e259dc70cf7553b00525bc8575413856859a15dc568c7f670972c1cf37a" exitCode=0 Jan 29 07:06:22 crc kubenswrapper[4826]: I0129 07:06:22.498019 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrjtp" event={"ID":"bf226f75-106f-4f53-b33b-59f9ebbbefc3","Type":"ContainerDied","Data":"600e4e259dc70cf7553b00525bc8575413856859a15dc568c7f670972c1cf37a"} Jan 29 07:06:22 crc kubenswrapper[4826]: I0129 07:06:22.498129 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrjtp" event={"ID":"bf226f75-106f-4f53-b33b-59f9ebbbefc3","Type":"ContainerStarted","Data":"cfaadf49ccb5cee55cd87bffa659423f7d3f0d5b098726a989003d913660c54c"} Jan 29 07:06:22 crc kubenswrapper[4826]: I0129 07:06:22.499795 4826 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 07:06:23 crc kubenswrapper[4826]: I0129 07:06:23.510644 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrjtp" event={"ID":"bf226f75-106f-4f53-b33b-59f9ebbbefc3","Type":"ContainerStarted","Data":"7d37d3a61c1d81543cb62525459e1b9aa5156ff41900e162e2c6a48e863800d4"} Jan 29 07:06:25 crc 
kubenswrapper[4826]: I0129 07:06:25.125185 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-2f64-account-create-update-mndz2"] Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.126358 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-2f64-account-create-update-mndz2" Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.128348 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.150212 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-2f64-account-create-update-mndz2"] Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.200430 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-2f64-account-create-update-2n4t7"] Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.214809 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-2f64-account-create-update-2n4t7"] Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.220529 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c9fk\" (UniqueName: \"kubernetes.io/projected/68d92d9a-3db4-4400-ac87-6334d2be6184-kube-api-access-7c9fk\") pod \"barbican-2f64-account-create-update-mndz2\" (UID: \"68d92d9a-3db4-4400-ac87-6334d2be6184\") " pod="openstack/barbican-2f64-account-create-update-mndz2" Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.220608 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68d92d9a-3db4-4400-ac87-6334d2be6184-operator-scripts\") pod \"barbican-2f64-account-create-update-mndz2\" (UID: \"68d92d9a-3db4-4400-ac87-6334d2be6184\") " pod="openstack/barbican-2f64-account-create-update-mndz2" Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.312349 4826 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-4cd8-account-create-update-ngljz"] Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.323526 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-4cd8-account-create-update-ngljz" Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.330374 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68d92d9a-3db4-4400-ac87-6334d2be6184-operator-scripts\") pod \"barbican-2f64-account-create-update-mndz2\" (UID: \"68d92d9a-3db4-4400-ac87-6334d2be6184\") " pod="openstack/barbican-2f64-account-create-update-mndz2" Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.330519 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c9fk\" (UniqueName: \"kubernetes.io/projected/68d92d9a-3db4-4400-ac87-6334d2be6184-kube-api-access-7c9fk\") pod \"barbican-2f64-account-create-update-mndz2\" (UID: \"68d92d9a-3db4-4400-ac87-6334d2be6184\") " pod="openstack/barbican-2f64-account-create-update-mndz2" Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.336162 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.346963 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-4cd8-account-create-update-ngljz"] Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.365518 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68d92d9a-3db4-4400-ac87-6334d2be6184-operator-scripts\") pod \"barbican-2f64-account-create-update-mndz2\" (UID: \"68d92d9a-3db4-4400-ac87-6334d2be6184\") " pod="openstack/barbican-2f64-account-create-update-mndz2" Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.407086 4826 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7c9fk\" (UniqueName: \"kubernetes.io/projected/68d92d9a-3db4-4400-ac87-6334d2be6184-kube-api-access-7c9fk\") pod \"barbican-2f64-account-create-update-mndz2\" (UID: \"68d92d9a-3db4-4400-ac87-6334d2be6184\") " pod="openstack/barbican-2f64-account-create-update-mndz2" Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.432386 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44fdt\" (UniqueName: \"kubernetes.io/projected/5edf8927-1593-48b4-b330-9413bcfc733d-kube-api-access-44fdt\") pod \"cinder-4cd8-account-create-update-ngljz\" (UID: \"5edf8927-1593-48b4-b330-9413bcfc733d\") " pod="openstack/cinder-4cd8-account-create-update-ngljz" Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.432456 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5edf8927-1593-48b4-b330-9413bcfc733d-operator-scripts\") pod \"cinder-4cd8-account-create-update-ngljz\" (UID: \"5edf8927-1593-48b4-b330-9413bcfc733d\") " pod="openstack/cinder-4cd8-account-create-update-ngljz" Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.445482 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-2cgbk"] Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.447034 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2cgbk" Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.447879 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-2f64-account-create-update-mndz2" Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.463105 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.463323 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-4cd8-account-create-update-5swv2"] Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.547405 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44fdt\" (UniqueName: \"kubernetes.io/projected/5edf8927-1593-48b4-b330-9413bcfc733d-kube-api-access-44fdt\") pod \"cinder-4cd8-account-create-update-ngljz\" (UID: \"5edf8927-1593-48b4-b330-9413bcfc733d\") " pod="openstack/cinder-4cd8-account-create-update-ngljz" Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.547466 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5edf8927-1593-48b4-b330-9413bcfc733d-operator-scripts\") pod \"cinder-4cd8-account-create-update-ngljz\" (UID: \"5edf8927-1593-48b4-b330-9413bcfc733d\") " pod="openstack/cinder-4cd8-account-create-update-ngljz" Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.547545 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1323800-5ab2-43e7-baa9-9a6d4922224f-operator-scripts\") pod \"root-account-create-update-2cgbk\" (UID: \"b1323800-5ab2-43e7-baa9-9a6d4922224f\") " pod="openstack/root-account-create-update-2cgbk" Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.547595 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k957d\" (UniqueName: \"kubernetes.io/projected/b1323800-5ab2-43e7-baa9-9a6d4922224f-kube-api-access-k957d\") pod 
\"root-account-create-update-2cgbk\" (UID: \"b1323800-5ab2-43e7-baa9-9a6d4922224f\") " pod="openstack/root-account-create-update-2cgbk" Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.548702 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5edf8927-1593-48b4-b330-9413bcfc733d-operator-scripts\") pod \"cinder-4cd8-account-create-update-ngljz\" (UID: \"5edf8927-1593-48b4-b330-9413bcfc733d\") " pod="openstack/cinder-4cd8-account-create-update-ngljz" Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.549646 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-4cd8-account-create-update-5swv2"] Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.568055 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-2cgbk"] Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.637886 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-3ce2-account-create-update-ls5ln"] Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.639143 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-3ce2-account-create-update-ls5ln" Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.640615 4826 generic.go:334] "Generic (PLEG): container finished" podID="bf226f75-106f-4f53-b33b-59f9ebbbefc3" containerID="7d37d3a61c1d81543cb62525459e1b9aa5156ff41900e162e2c6a48e863800d4" exitCode=0 Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.640657 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrjtp" event={"ID":"bf226f75-106f-4f53-b33b-59f9ebbbefc3","Type":"ContainerDied","Data":"7d37d3a61c1d81543cb62525459e1b9aa5156ff41900e162e2c6a48e863800d4"} Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.651573 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44fdt\" (UniqueName: \"kubernetes.io/projected/5edf8927-1593-48b4-b330-9413bcfc733d-kube-api-access-44fdt\") pod \"cinder-4cd8-account-create-update-ngljz\" (UID: \"5edf8927-1593-48b4-b330-9413bcfc733d\") " pod="openstack/cinder-4cd8-account-create-update-ngljz" Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.656364 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.678922 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1323800-5ab2-43e7-baa9-9a6d4922224f-operator-scripts\") pod \"root-account-create-update-2cgbk\" (UID: \"b1323800-5ab2-43e7-baa9-9a6d4922224f\") " pod="openstack/root-account-create-update-2cgbk" Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.679188 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k957d\" (UniqueName: \"kubernetes.io/projected/b1323800-5ab2-43e7-baa9-9a6d4922224f-kube-api-access-k957d\") pod \"root-account-create-update-2cgbk\" (UID: \"b1323800-5ab2-43e7-baa9-9a6d4922224f\") 
" pod="openstack/root-account-create-update-2cgbk" Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.681139 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1323800-5ab2-43e7-baa9-9a6d4922224f-operator-scripts\") pod \"root-account-create-update-2cgbk\" (UID: \"b1323800-5ab2-43e7-baa9-9a6d4922224f\") " pod="openstack/root-account-create-update-2cgbk" Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.698113 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.698350 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="071ae6a1-9762-465d-8499-69ec962f08b7" containerName="openstackclient" containerID="cri-o://65213368cc65f805bf38badc08c83a4c18280b76d4a7b5a339c6c9e7d8454ef3" gracePeriod=2 Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.754368 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.765897 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k957d\" (UniqueName: \"kubernetes.io/projected/b1323800-5ab2-43e7-baa9-9a6d4922224f-kube-api-access-k957d\") pod \"root-account-create-update-2cgbk\" (UID: \"b1323800-5ab2-43e7-baa9-9a6d4922224f\") " pod="openstack/root-account-create-update-2cgbk" Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.784003 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dxzz\" (UniqueName: \"kubernetes.io/projected/489ffa8d-9021-4291-b68b-df3d3a146fe1-kube-api-access-6dxzz\") pod \"placement-3ce2-account-create-update-ls5ln\" (UID: \"489ffa8d-9021-4291-b68b-df3d3a146fe1\") " pod="openstack/placement-3ce2-account-create-update-ls5ln" Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 
07:06:25.784194 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/489ffa8d-9021-4291-b68b-df3d3a146fe1-operator-scripts\") pod \"placement-3ce2-account-create-update-ls5ln\" (UID: \"489ffa8d-9021-4291-b68b-df3d3a146fe1\") " pod="openstack/placement-3ce2-account-create-update-ls5ln" Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.783669 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-54vbh"] Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.837898 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-97ad-account-create-update-qvhw8"] Jan 29 07:06:25 crc kubenswrapper[4826]: E0129 07:06:25.838253 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="071ae6a1-9762-465d-8499-69ec962f08b7" containerName="openstackclient" Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.838269 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="071ae6a1-9762-465d-8499-69ec962f08b7" containerName="openstackclient" Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.838499 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="071ae6a1-9762-465d-8499-69ec962f08b7" containerName="openstackclient" Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.839092 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-97ad-account-create-update-qvhw8" Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.861570 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.878254 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3ce2-account-create-update-ls5ln"] Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.894867 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/489ffa8d-9021-4291-b68b-df3d3a146fe1-operator-scripts\") pod \"placement-3ce2-account-create-update-ls5ln\" (UID: \"489ffa8d-9021-4291-b68b-df3d3a146fe1\") " pod="openstack/placement-3ce2-account-create-update-ls5ln" Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.895010 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dxzz\" (UniqueName: \"kubernetes.io/projected/489ffa8d-9021-4291-b68b-df3d3a146fe1-kube-api-access-6dxzz\") pod \"placement-3ce2-account-create-update-ls5ln\" (UID: \"489ffa8d-9021-4291-b68b-df3d3a146fe1\") " pod="openstack/placement-3ce2-account-create-update-ls5ln" Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.904075 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/489ffa8d-9021-4291-b68b-df3d3a146fe1-operator-scripts\") pod \"placement-3ce2-account-create-update-ls5ln\" (UID: \"489ffa8d-9021-4291-b68b-df3d3a146fe1\") " pod="openstack/placement-3ce2-account-create-update-ls5ln" Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.904174 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-54vbh"] Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.941938 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-4cd8-account-create-update-ngljz" Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.944546 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2cgbk" Jan 29 07:06:25 crc kubenswrapper[4826]: I0129 07:06:25.988829 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dxzz\" (UniqueName: \"kubernetes.io/projected/489ffa8d-9021-4291-b68b-df3d3a146fe1-kube-api-access-6dxzz\") pod \"placement-3ce2-account-create-update-ls5ln\" (UID: \"489ffa8d-9021-4291-b68b-df3d3a146fe1\") " pod="openstack/placement-3ce2-account-create-update-ls5ln" Jan 29 07:06:26 crc kubenswrapper[4826]: I0129 07:06:26.000326 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d7a8a86-77c4-4e06-bd13-3c2ba67c1d61-operator-scripts\") pod \"glance-97ad-account-create-update-qvhw8\" (UID: \"3d7a8a86-77c4-4e06-bd13-3c2ba67c1d61\") " pod="openstack/glance-97ad-account-create-update-qvhw8" Jan 29 07:06:26 crc kubenswrapper[4826]: I0129 07:06:26.000423 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2kbx\" (UniqueName: \"kubernetes.io/projected/3d7a8a86-77c4-4e06-bd13-3c2ba67c1d61-kube-api-access-h2kbx\") pod \"glance-97ad-account-create-update-qvhw8\" (UID: \"3d7a8a86-77c4-4e06-bd13-3c2ba67c1d61\") " pod="openstack/glance-97ad-account-create-update-qvhw8" Jan 29 07:06:26 crc kubenswrapper[4826]: I0129 07:06:26.000620 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-97ad-account-create-update-qvhw8"] Jan 29 07:06:26 crc kubenswrapper[4826]: I0129 07:06:26.025343 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-3ce2-account-create-update-ls5ln" Jan 29 07:06:26 crc kubenswrapper[4826]: I0129 07:06:26.101729 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-84d9-account-create-update-w85wz"] Jan 29 07:06:26 crc kubenswrapper[4826]: I0129 07:06:26.102716 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d7a8a86-77c4-4e06-bd13-3c2ba67c1d61-operator-scripts\") pod \"glance-97ad-account-create-update-qvhw8\" (UID: \"3d7a8a86-77c4-4e06-bd13-3c2ba67c1d61\") " pod="openstack/glance-97ad-account-create-update-qvhw8" Jan 29 07:06:26 crc kubenswrapper[4826]: I0129 07:06:26.102779 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2kbx\" (UniqueName: \"kubernetes.io/projected/3d7a8a86-77c4-4e06-bd13-3c2ba67c1d61-kube-api-access-h2kbx\") pod \"glance-97ad-account-create-update-qvhw8\" (UID: \"3d7a8a86-77c4-4e06-bd13-3c2ba67c1d61\") " pod="openstack/glance-97ad-account-create-update-qvhw8" Jan 29 07:06:26 crc kubenswrapper[4826]: I0129 07:06:26.103085 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-84d9-account-create-update-w85wz" Jan 29 07:06:26 crc kubenswrapper[4826]: I0129 07:06:26.104678 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d7a8a86-77c4-4e06-bd13-3c2ba67c1d61-operator-scripts\") pod \"glance-97ad-account-create-update-qvhw8\" (UID: \"3d7a8a86-77c4-4e06-bd13-3c2ba67c1d61\") " pod="openstack/glance-97ad-account-create-update-qvhw8" Jan 29 07:06:26 crc kubenswrapper[4826]: I0129 07:06:26.118280 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 29 07:06:26 crc kubenswrapper[4826]: I0129 07:06:26.136048 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2kbx\" (UniqueName: \"kubernetes.io/projected/3d7a8a86-77c4-4e06-bd13-3c2ba67c1d61-kube-api-access-h2kbx\") pod \"glance-97ad-account-create-update-qvhw8\" (UID: \"3d7a8a86-77c4-4e06-bd13-3c2ba67c1d61\") " pod="openstack/glance-97ad-account-create-update-qvhw8" Jan 29 07:06:26 crc kubenswrapper[4826]: I0129 07:06:26.156618 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-2845-account-create-update-r9ssg"] Jan 29 07:06:26 crc kubenswrapper[4826]: I0129 07:06:26.159125 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2845-account-create-update-r9ssg" Jan 29 07:06:26 crc kubenswrapper[4826]: I0129 07:06:26.183944 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.197258 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-97ad-account-create-update-qvhw8" Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.205002 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szbdc\" (UniqueName: \"kubernetes.io/projected/83c32a1d-d03e-4b49-8ac3-0c447212ce2a-kube-api-access-szbdc\") pod \"nova-api-84d9-account-create-update-w85wz\" (UID: \"83c32a1d-d03e-4b49-8ac3-0c447212ce2a\") " pod="openstack/nova-api-84d9-account-create-update-w85wz" Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.205120 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83c32a1d-d03e-4b49-8ac3-0c447212ce2a-operator-scripts\") pod \"nova-api-84d9-account-create-update-w85wz\" (UID: \"83c32a1d-d03e-4b49-8ac3-0c447212ce2a\") " pod="openstack/nova-api-84d9-account-create-update-w85wz" Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.212409 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-984c-account-create-update-cwjvb"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.213798 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-984c-account-create-update-cwjvb" Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.219169 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.259224 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-431a-account-create-update-wgcpd"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.271752 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-431a-account-create-update-wgcpd" Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.281838 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.306372 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szbdc\" (UniqueName: \"kubernetes.io/projected/83c32a1d-d03e-4b49-8ac3-0c447212ce2a-kube-api-access-szbdc\") pod \"nova-api-84d9-account-create-update-w85wz\" (UID: \"83c32a1d-d03e-4b49-8ac3-0c447212ce2a\") " pod="openstack/nova-api-84d9-account-create-update-w85wz" Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.306407 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skhqw\" (UniqueName: \"kubernetes.io/projected/4f239c3c-6b60-459b-a4a2-5deb5288161a-kube-api-access-skhqw\") pod \"neutron-2845-account-create-update-r9ssg\" (UID: \"4f239c3c-6b60-459b-a4a2-5deb5288161a\") " pod="openstack/neutron-2845-account-create-update-r9ssg" Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.306445 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f239c3c-6b60-459b-a4a2-5deb5288161a-operator-scripts\") pod \"neutron-2845-account-create-update-r9ssg\" (UID: \"4f239c3c-6b60-459b-a4a2-5deb5288161a\") " pod="openstack/neutron-2845-account-create-update-r9ssg" Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.306501 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9024c38-a986-4d40-be2a-9432e2101586-operator-scripts\") pod \"nova-cell0-984c-account-create-update-cwjvb\" (UID: \"b9024c38-a986-4d40-be2a-9432e2101586\") " 
pod="openstack/nova-cell0-984c-account-create-update-cwjvb" Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.306519 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83c32a1d-d03e-4b49-8ac3-0c447212ce2a-operator-scripts\") pod \"nova-api-84d9-account-create-update-w85wz\" (UID: \"83c32a1d-d03e-4b49-8ac3-0c447212ce2a\") " pod="openstack/nova-api-84d9-account-create-update-w85wz" Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.306540 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ch8c\" (UniqueName: \"kubernetes.io/projected/b9024c38-a986-4d40-be2a-9432e2101586-kube-api-access-9ch8c\") pod \"nova-cell0-984c-account-create-update-cwjvb\" (UID: \"b9024c38-a986-4d40-be2a-9432e2101586\") " pod="openstack/nova-cell0-984c-account-create-update-cwjvb" Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.308813 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83c32a1d-d03e-4b49-8ac3-0c447212ce2a-operator-scripts\") pod \"nova-api-84d9-account-create-update-w85wz\" (UID: \"83c32a1d-d03e-4b49-8ac3-0c447212ce2a\") " pod="openstack/nova-api-84d9-account-create-update-w85wz" Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.312669 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.313117 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="3a20458e-fa0a-4aa2-a59a-70ebb523a3d9" containerName="openstack-network-exporter" containerID="cri-o://566e075f705cbbe67baf7d3552d44286bc465cf556a9a4ef3b74748293dbc37a" gracePeriod=300 Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.369596 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-api-84d9-account-create-update-w85wz"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.376866 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szbdc\" (UniqueName: \"kubernetes.io/projected/83c32a1d-d03e-4b49-8ac3-0c447212ce2a-kube-api-access-szbdc\") pod \"nova-api-84d9-account-create-update-w85wz\" (UID: \"83c32a1d-d03e-4b49-8ac3-0c447212ce2a\") " pod="openstack/nova-api-84d9-account-create-update-w85wz" Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.424822 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skhqw\" (UniqueName: \"kubernetes.io/projected/4f239c3c-6b60-459b-a4a2-5deb5288161a-kube-api-access-skhqw\") pod \"neutron-2845-account-create-update-r9ssg\" (UID: \"4f239c3c-6b60-459b-a4a2-5deb5288161a\") " pod="openstack/neutron-2845-account-create-update-r9ssg" Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.424893 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f239c3c-6b60-459b-a4a2-5deb5288161a-operator-scripts\") pod \"neutron-2845-account-create-update-r9ssg\" (UID: \"4f239c3c-6b60-459b-a4a2-5deb5288161a\") " pod="openstack/neutron-2845-account-create-update-r9ssg" Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.424974 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/598e7579-2935-4fad-85de-031f0238611e-operator-scripts\") pod \"nova-cell1-431a-account-create-update-wgcpd\" (UID: \"598e7579-2935-4fad-85de-031f0238611e\") " pod="openstack/nova-cell1-431a-account-create-update-wgcpd" Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.425002 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/b9024c38-a986-4d40-be2a-9432e2101586-operator-scripts\") pod \"nova-cell0-984c-account-create-update-cwjvb\" (UID: \"b9024c38-a986-4d40-be2a-9432e2101586\") " pod="openstack/nova-cell0-984c-account-create-update-cwjvb" Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.425023 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ch8c\" (UniqueName: \"kubernetes.io/projected/b9024c38-a986-4d40-be2a-9432e2101586-kube-api-access-9ch8c\") pod \"nova-cell0-984c-account-create-update-cwjvb\" (UID: \"b9024c38-a986-4d40-be2a-9432e2101586\") " pod="openstack/nova-cell0-984c-account-create-update-cwjvb" Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.425159 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kpc7\" (UniqueName: \"kubernetes.io/projected/598e7579-2935-4fad-85de-031f0238611e-kube-api-access-7kpc7\") pod \"nova-cell1-431a-account-create-update-wgcpd\" (UID: \"598e7579-2935-4fad-85de-031f0238611e\") " pod="openstack/nova-cell1-431a-account-create-update-wgcpd" Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.426460 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f239c3c-6b60-459b-a4a2-5deb5288161a-operator-scripts\") pod \"neutron-2845-account-create-update-r9ssg\" (UID: \"4f239c3c-6b60-459b-a4a2-5deb5288161a\") " pod="openstack/neutron-2845-account-create-update-r9ssg" Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.427131 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9024c38-a986-4d40-be2a-9432e2101586-operator-scripts\") pod \"nova-cell0-984c-account-create-update-cwjvb\" (UID: \"b9024c38-a986-4d40-be2a-9432e2101586\") " pod="openstack/nova-cell0-984c-account-create-update-cwjvb" Jan 29 07:06:27 crc kubenswrapper[4826]: 
I0129 07:06:26.446587 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-84d9-account-create-update-w85wz" Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.466542 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-2845-account-create-update-r9ssg"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.466927 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ch8c\" (UniqueName: \"kubernetes.io/projected/b9024c38-a986-4d40-be2a-9432e2101586-kube-api-access-9ch8c\") pod \"nova-cell0-984c-account-create-update-cwjvb\" (UID: \"b9024c38-a986-4d40-be2a-9432e2101586\") " pod="openstack/nova-cell0-984c-account-create-update-cwjvb" Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.509188 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skhqw\" (UniqueName: \"kubernetes.io/projected/4f239c3c-6b60-459b-a4a2-5deb5288161a-kube-api-access-skhqw\") pod \"neutron-2845-account-create-update-r9ssg\" (UID: \"4f239c3c-6b60-459b-a4a2-5deb5288161a\") " pod="openstack/neutron-2845-account-create-update-r9ssg" Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.515760 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-984c-account-create-update-cwjvb"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.526362 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/598e7579-2935-4fad-85de-031f0238611e-operator-scripts\") pod \"nova-cell1-431a-account-create-update-wgcpd\" (UID: \"598e7579-2935-4fad-85de-031f0238611e\") " pod="openstack/nova-cell1-431a-account-create-update-wgcpd" Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.526922 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kpc7\" (UniqueName: 
\"kubernetes.io/projected/598e7579-2935-4fad-85de-031f0238611e-kube-api-access-7kpc7\") pod \"nova-cell1-431a-account-create-update-wgcpd\" (UID: \"598e7579-2935-4fad-85de-031f0238611e\") " pod="openstack/nova-cell1-431a-account-create-update-wgcpd" Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.528017 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/598e7579-2935-4fad-85de-031f0238611e-operator-scripts\") pod \"nova-cell1-431a-account-create-update-wgcpd\" (UID: \"598e7579-2935-4fad-85de-031f0238611e\") " pod="openstack/nova-cell1-431a-account-create-update-wgcpd" Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.570964 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-984c-account-create-update-cwjvb" Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.616233 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-431a-account-create-update-wgcpd"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.687163 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-3ce2-account-create-update-zgdts"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.718915 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kpc7\" (UniqueName: \"kubernetes.io/projected/598e7579-2935-4fad-85de-031f0238611e-kube-api-access-7kpc7\") pod \"nova-cell1-431a-account-create-update-wgcpd\" (UID: \"598e7579-2935-4fad-85de-031f0238611e\") " pod="openstack/nova-cell1-431a-account-create-update-wgcpd" Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.778571 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="3a20458e-fa0a-4aa2-a59a-70ebb523a3d9" containerName="ovsdbserver-sb" containerID="cri-o://949d52a8a709e90860b9b0ef1491f8a9e2676ac4584b2115505be7c970fcced1" gracePeriod=300 
Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.798009 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-3ce2-account-create-update-zgdts"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.798466 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2845-account-create-update-r9ssg" Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.848607 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18c89e1f-a128-4289-ad90-49c11f2c89cd" path="/var/lib/kubelet/pods/18c89e1f-a128-4289-ad90-49c11f2c89cd/volumes" Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.849780 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="812c49e0-db30-4cfe-9722-6ddb5931a2c0" path="/var/lib/kubelet/pods/812c49e0-db30-4cfe-9722-6ddb5931a2c0/volumes" Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.852620 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e98edf07-b03d-47bd-bb1a-b942a1e66fc3" path="/var/lib/kubelet/pods/e98edf07-b03d-47bd-bb1a-b942a1e66fc3/volumes" Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.853291 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6c6f92b-fa9b-42bc-b43a-f4ee97d74c97" path="/var/lib/kubelet/pods/f6c6f92b-fa9b-42bc-b43a-f4ee97d74c97/volumes" Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.854024 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.886356 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-84d9-account-create-update-rrjq4"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.908739 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-431a-account-create-update-wgcpd" Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.934339 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-8qxjd"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.957907 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-2845-account-create-update-ll4xp"] Jan 29 07:06:27 crc kubenswrapper[4826]: E0129 07:06:26.957971 4826 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 29 07:06:27 crc kubenswrapper[4826]: E0129 07:06:26.958011 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1794f620-102a-4b9c-9097-713579ec55ad-config-data podName:1794f620-102a-4b9c-9097-713579ec55ad nodeName:}" failed. No retries permitted until 2026-01-29 07:06:27.457995691 +0000 UTC m=+1371.319788760 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/1794f620-102a-4b9c-9097-713579ec55ad-config-data") pod "rabbitmq-server-0" (UID: "1794f620-102a-4b9c-9097-713579ec55ad") : configmap "rabbitmq-config-data" not found Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.976360 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.977112 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="af857248-0a50-4850-93dd-c8c4e5d8e5ea" containerName="openstack-network-exporter" containerID="cri-o://fbe7c7ae52eeb239791fe0ed601efbb5177d39b8ea9d0057b13b2f3c5c72b647" gracePeriod=300 Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:26.998171 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-97ad-account-create-update-xx9t5"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.029182 4826 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-8qxjd"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.095820 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-84d9-account-create-update-rrjq4"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.131734 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="af857248-0a50-4850-93dd-c8c4e5d8e5ea" containerName="ovsdbserver-nb" containerID="cri-o://358deb67c7029377050eca333cfd2c34f12e0e975144bb97ed20f885a41cdc54" gracePeriod=300 Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.150222 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-2845-account-create-update-ll4xp"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.174627 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-97ad-account-create-update-xx9t5"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.258026 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-984c-account-create-update-dpdkf"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.292202 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-984c-account-create-update-dpdkf"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.304091 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.304372 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="17dd6ec1-84fb-4bb3-8700-c8691f059937" containerName="ovn-northd" containerID="cri-o://7ba9ac04e0850886e890e608a55f11c50e9ca3d5994419279b8b8fc19be3fbd4" gracePeriod=30 Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.304771 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" 
podUID="17dd6ec1-84fb-4bb3-8700-c8691f059937" containerName="openstack-network-exporter" containerID="cri-o://2231f0fb6e7d9114a6e1ce628c3f775fddfa8bbe31c0d18c4ebf95440d9a1023" gracePeriod=30 Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.318123 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-jmdtd"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.318370 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-jmdtd" podUID="d6b8cca1-3968-49aa-b4ed-88d9d4075223" containerName="openstack-network-exporter" containerID="cri-o://e468868d139ea7d7683d5f9d96d634b091bed19d0c79b17142be195283c11a88" gracePeriod=30 Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.329636 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-431a-account-create-update-q9xq6"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.340369 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-m2d2v"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.350612 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-4zwtm"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.392389 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-vgqpr"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.423221 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-431a-account-create-update-q9xq6"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.437387 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-vgqpr"] Jan 29 07:06:27 crc kubenswrapper[4826]: E0129 07:06:27.475997 4826 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 29 07:06:27 crc kubenswrapper[4826]: E0129 07:06:27.476050 4826 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1794f620-102a-4b9c-9097-713579ec55ad-config-data podName:1794f620-102a-4b9c-9097-713579ec55ad nodeName:}" failed. No retries permitted until 2026-01-29 07:06:28.476036356 +0000 UTC m=+1372.337829425 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/1794f620-102a-4b9c-9097-713579ec55ad-config-data") pod "rabbitmq-server-0" (UID: "1794f620-102a-4b9c-9097-713579ec55ad") : configmap "rabbitmq-config-data" not found Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.482864 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-lbrr5"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.505570 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-lbrr5"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.514942 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-gnxn7"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.524364 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-gnxn7"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.541347 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-sqz8v"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.548740 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-sqz8v"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.555347 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.555584 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ea529cf3-184e-446a-9c6a-759cf1bab14c" containerName="cinder-scheduler" 
containerID="cri-o://da6172474a5804740243a88717e98452c0421876963c421e7935fba689bdc058" gracePeriod=30 Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.555706 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ea529cf3-184e-446a-9c6a-759cf1bab14c" containerName="probe" containerID="cri-o://16b794547221f9aaaeba5d721b9f980ce5d5698a660a295e46a0036d996a08a9" gracePeriod=30 Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.563192 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.563567 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="082eb821-de0c-462e-9653-b1c80c8e1d2c" containerName="cinder-api-log" containerID="cri-o://a0acd49fff6ea3dad01d0e7ca858e9d1043795014ae1ed463aec93ec682db76b" gracePeriod=30 Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.563992 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="082eb821-de0c-462e-9653-b1c80c8e1d2c" containerName="cinder-api" containerID="cri-o://3a8934b6ad3dfc62e2d7a18edca3b16ec92c490c12b90aeaef855441c2f63a2c" gracePeriod=30 Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.592616 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-8rj98"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.612278 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-sb2qx"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.672092 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-8rj98"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.696359 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.703716 4826 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="account-server" containerID="cri-o://6f8be9e7ae68c9cdb46477974e5f9323232d52e97765253a593be96d962264d1" gracePeriod=30 Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.704119 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="swift-recon-cron" containerID="cri-o://c55f19f9268bb7c5e036640126bfced1629e2a36cd93c83c41856348935b8605" gracePeriod=30 Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.704201 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="rsync" containerID="cri-o://68c20d8c7fd999c8a7abcb16322015c670f6cd5b3f9b312e5fcdd9e5080bab19" gracePeriod=30 Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.704233 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="object-expirer" containerID="cri-o://b8b353bcfe9ffab3fe12e51fc1add01cbe0b68e3df21ff9ba62958988fa40c6a" gracePeriod=30 Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.704264 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="object-updater" containerID="cri-o://efc661e13d0a3ae031a931433c46d983a09a4a66496a4085e3c7846558e05913" gracePeriod=30 Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.704312 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="object-auditor" 
containerID="cri-o://18c3f34b023f38c953270c50875c33014ac0c890dbc279a1a0f7ee0521285e95" gracePeriod=30 Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.704345 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="object-replicator" containerID="cri-o://38125c7287003d38ece9a28cd533530a5bf461facb796759edb517b616c413a5" gracePeriod=30 Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.704375 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="object-server" containerID="cri-o://b530291b6bb170ceb4af6e542a3feb436b2acdf8fb99e834d63a533292236ca3" gracePeriod=30 Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.704624 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="container-server" containerID="cri-o://d88ad576d400eb2984c89463da7988fbb988847d02187cc4b64734122dc40271" gracePeriod=30 Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.704776 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="container-updater" containerID="cri-o://4b4aa223c22b7eac65d63841c37b12133011befa1024ace90050e7ee1a72c510" gracePeriod=30 Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.704821 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="container-auditor" containerID="cri-o://af48e0d4ae8e00f830fb238c9c15333c7f43281d0531755b7ffe26f6fbf4c8c6" gracePeriod=30 Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.704894 4826 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/swift-storage-0" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="container-replicator" containerID="cri-o://1aa4a9af19d077a22d1e3b460ff6f966acbc829b62c52982dbfc8cc5b918e542" gracePeriod=30 Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.704930 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="account-auditor" containerID="cri-o://a26eddb89adfadac24b26e3ff2294f83b0c4d77a9551462e60d51f7c34fff67e" gracePeriod=30 Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.704974 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="account-replicator" containerID="cri-o://3db554d1986fc0fb902da044bdce99172798bdf604c6dcbc79f7a7a8d3cf339a" gracePeriod=30 Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.704982 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="account-reaper" containerID="cri-o://44307ca66666a832b9005e4a358a9e508afcfb3595db492ed5623df991aeca7a" gracePeriod=30 Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.709005 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-sb2qx"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.718982 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-nlxt2"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.738740 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-jmdtd_d6b8cca1-3968-49aa-b4ed-88d9d4075223/openstack-network-exporter/0.log" Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.738782 4826 generic.go:334] "Generic (PLEG): container finished" podID="d6b8cca1-3968-49aa-b4ed-88d9d4075223" 
containerID="e468868d139ea7d7683d5f9d96d634b091bed19d0c79b17142be195283c11a88" exitCode=2 Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.738889 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-jmdtd" event={"ID":"d6b8cca1-3968-49aa-b4ed-88d9d4075223","Type":"ContainerDied","Data":"e468868d139ea7d7683d5f9d96d634b091bed19d0c79b17142be195283c11a88"} Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.742340 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-nlxt2"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.752361 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-27drr"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.758511 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_af857248-0a50-4850-93dd-c8c4e5d8e5ea/ovsdbserver-nb/0.log" Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.758551 4826 generic.go:334] "Generic (PLEG): container finished" podID="af857248-0a50-4850-93dd-c8c4e5d8e5ea" containerID="fbe7c7ae52eeb239791fe0ed601efbb5177d39b8ea9d0057b13b2f3c5c72b647" exitCode=2 Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.758566 4826 generic.go:334] "Generic (PLEG): container finished" podID="af857248-0a50-4850-93dd-c8c4e5d8e5ea" containerID="358deb67c7029377050eca333cfd2c34f12e0e975144bb97ed20f885a41cdc54" exitCode=143 Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.758642 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"af857248-0a50-4850-93dd-c8c4e5d8e5ea","Type":"ContainerDied","Data":"fbe7c7ae52eeb239791fe0ed601efbb5177d39b8ea9d0057b13b2f3c5c72b647"} Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.760962 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"af857248-0a50-4850-93dd-c8c4e5d8e5ea","Type":"ContainerDied","Data":"358deb67c7029377050eca333cfd2c34f12e0e975144bb97ed20f885a41cdc54"} Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.760978 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-27drr"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.786829 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_3a20458e-fa0a-4aa2-a59a-70ebb523a3d9/ovsdbserver-sb/0.log" Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.786869 4826 generic.go:334] "Generic (PLEG): container finished" podID="3a20458e-fa0a-4aa2-a59a-70ebb523a3d9" containerID="566e075f705cbbe67baf7d3552d44286bc465cf556a9a4ef3b74748293dbc37a" exitCode=2 Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.786884 4826 generic.go:334] "Generic (PLEG): container finished" podID="3a20458e-fa0a-4aa2-a59a-70ebb523a3d9" containerID="949d52a8a709e90860b9b0ef1491f8a9e2676ac4584b2115505be7c970fcced1" exitCode=143 Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.786925 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9","Type":"ContainerDied","Data":"566e075f705cbbe67baf7d3552d44286bc465cf556a9a4ef3b74748293dbc37a"} Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.786949 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9","Type":"ContainerDied","Data":"949d52a8a709e90860b9b0ef1491f8a9e2676ac4584b2115505be7c970fcced1"} Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.806664 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-4cd8-account-create-update-ngljz"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.871791 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7c8d5fc944-9m8wp"] Jan 29 07:06:27 crc 
kubenswrapper[4826]: I0129 07:06:27.872021 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7c8d5fc944-9m8wp" podUID="1beb9e09-4039-4ce6-a33f-0d34e10b1cfe" containerName="placement-log" containerID="cri-o://4d9bbe77efa1079486e055eb220a04a3be411395b46dcaaf31c558f3d4ccb6f8" gracePeriod=30 Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.872354 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7c8d5fc944-9m8wp" podUID="1beb9e09-4039-4ce6-a33f-0d34e10b1cfe" containerName="placement-api" containerID="cri-o://f1db27671e7941a3fe5f409f368278faebc2c9c102ed39040c62e814c55b33f6" gracePeriod=30 Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.887643 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.887943 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5378dab4-ad0c-4259-a7d2-d3f7e784a142" containerName="glance-log" containerID="cri-o://3817b6b8e4ea595c7ced258dd6bfd2af338287753b307239e9914dc8d293a791" gracePeriod=30 Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.888106 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5378dab4-ad0c-4259-a7d2-d3f7e784a142" containerName="glance-httpd" containerID="cri-o://31b30318fc91eafcd3b97afe85a5e0965b844ee6e31ce721180f3fef71409d0b" gracePeriod=30 Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.893400 4826 generic.go:334] "Generic (PLEG): container finished" podID="17dd6ec1-84fb-4bb3-8700-c8691f059937" containerID="2231f0fb6e7d9114a6e1ce628c3f775fddfa8bbe31c0d18c4ebf95440d9a1023" exitCode=2 Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.893441 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"17dd6ec1-84fb-4bb3-8700-c8691f059937","Type":"ContainerDied","Data":"2231f0fb6e7d9114a6e1ce628c3f775fddfa8bbe31c0d18c4ebf95440d9a1023"} Jan 29 07:06:27 crc kubenswrapper[4826]: E0129 07:06:27.925071 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 358deb67c7029377050eca333cfd2c34f12e0e975144bb97ed20f885a41cdc54 is running failed: container process not found" containerID="358deb67c7029377050eca333cfd2c34f12e0e975144bb97ed20f885a41cdc54" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 29 07:06:27 crc kubenswrapper[4826]: E0129 07:06:27.952663 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 358deb67c7029377050eca333cfd2c34f12e0e975144bb97ed20f885a41cdc54 is running failed: container process not found" containerID="358deb67c7029377050eca333cfd2c34f12e0e975144bb97ed20f885a41cdc54" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 29 07:06:27 crc kubenswrapper[4826]: E0129 07:06:27.966616 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 358deb67c7029377050eca333cfd2c34f12e0e975144bb97ed20f885a41cdc54 is running failed: container process not found" containerID="358deb67c7029377050eca333cfd2c34f12e0e975144bb97ed20f885a41cdc54" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 29 07:06:27 crc kubenswrapper[4826]: E0129 07:06:27.966683 4826 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 358deb67c7029377050eca333cfd2c34f12e0e975144bb97ed20f885a41cdc54 is running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-nb-0" podUID="af857248-0a50-4850-93dd-c8c4e5d8e5ea" containerName="ovsdbserver-nb" Jan 29 07:06:27 crc kubenswrapper[4826]: I0129 07:06:27.983019 4826 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-3ce2-account-create-update-ls5ln"] Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.007428 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.020693 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-9wnpk"] Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.020985 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ddd577785-9wnpk" podUID="b099774a-b73b-4803-9259-98b9e1b5933b" containerName="dnsmasq-dns" containerID="cri-o://85d5f42395b3063c15f60fc73e13a264e9f6ec4017758aed23359d39c4b8b103" gracePeriod=10 Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.039806 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.040091 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c080b978-6895-4067-9dd5-2c23d4d68518" containerName="glance-log" containerID="cri-o://b086320849aa987d26d49d964fbefd0fcd5dd9b1184d3344ea085e5c42fc14d1" gracePeriod=30 Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.040560 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c080b978-6895-4067-9dd5-2c23d4d68518" containerName="glance-httpd" containerID="cri-o://18ae489dd61bb2195354b5d8f5f9b6e0f384329f3a4ac858dde0d3feffe4b202" gracePeriod=30 Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.051346 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-669bb5748f-zjsxt"] Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.051634 4826 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/neutron-669bb5748f-zjsxt" podUID="258e4d75-ecca-4001-9f56-aeb39557b326" containerName="neutron-api" containerID="cri-o://5ef7b36d03b1ef462470c22e1ad57d72fd7e0094c00a4af390d706239a6a169c" gracePeriod=30 Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.051764 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-669bb5748f-zjsxt" podUID="258e4d75-ecca-4001-9f56-aeb39557b326" containerName="neutron-httpd" containerID="cri-o://9dfce637d6a3b6a7342be469a0635e6d8e5afd174f4e993540d7e2a69349016b" gracePeriod=30 Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.078383 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-d82nq"] Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.102736 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-tjfkk"] Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.114997 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-d82nq"] Jan 29 07:06:28 crc kubenswrapper[4826]: E0129 07:06:28.141273 4826 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 29 07:06:28 crc kubenswrapper[4826]: E0129 07:06:28.141344 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-config-data podName:0da3bc6b-99a0-4de9-9479-5aaef8bfd81c nodeName:}" failed. No retries permitted until 2026-01-29 07:06:28.641330437 +0000 UTC m=+1372.503123506 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-config-data") pod "rabbitmq-cell1-server-0" (UID: "0da3bc6b-99a0-4de9-9479-5aaef8bfd81c") : configmap "rabbitmq-cell1-config-data" not found Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.148643 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-tjfkk"] Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.155377 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-8bhcx"] Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.167141 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-97ad-account-create-update-qvhw8"] Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.176710 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-8bhcx"] Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.192853 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.193268 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9349a8ff-2652-4dcf-89d9-6d440269be8c" containerName="nova-metadata-log" containerID="cri-o://768c302445504a9f1d3eff35fdd9007e37101a52925f9f686a83584515eeb5c2" gracePeriod=30 Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.194275 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9349a8ff-2652-4dcf-89d9-6d440269be8c" containerName="nova-metadata-metadata" containerID="cri-o://b50ac83dbd4ed6b1a94d8b1a7e79a0a2f0cbe1ecca7f07d779adbd94338a2040" gracePeriod=30 Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.229399 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-npv6r"] Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 
07:06:28.251204 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-npv6r"] Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.259825 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-2f64-account-create-update-mndz2"] Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.272962 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-2845-account-create-update-r9ssg"] Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.311928 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.331515 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-55957b69d9-prlpm"] Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.331730 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-55957b69d9-prlpm" podUID="b0f5ad8c-072d-4994-acb3-e898c0981eef" containerName="proxy-httpd" containerID="cri-o://bbf08c9f4b99f6403fa0c9818497bc87b482aca717f1d209262d98a1ff4df06d" gracePeriod=30 Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.331806 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-55957b69d9-prlpm" podUID="b0f5ad8c-072d-4994-acb3-e898c0981eef" containerName="proxy-server" containerID="cri-o://b6663b83c1263d117ea6eb33e490b5c99d06f2282c774e74f6560f8ad6d5bc74" gracePeriod=30 Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.356988 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-6797f89db9-wjtvh"] Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.357336 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-6797f89db9-wjtvh" podUID="426fe450-4a4d-4048-8ea4-422d39482ceb" containerName="barbican-worker-log" 
containerID="cri-o://c3be21ae398eff67e22c85ef224b354ffbbd7d7ab54b85ce0a04e1584c9e8486" gracePeriod=30 Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.357749 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-6797f89db9-wjtvh" podUID="426fe450-4a4d-4048-8ea4-422d39482ceb" containerName="barbican-worker" containerID="cri-o://b3e85ce7ed4e1d2758772a2b894824794e2f18c81bade5dce37e78c8548d5969" gracePeriod=30 Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.380932 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-c497f4886-n5gtr"] Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.381136 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-c497f4886-n5gtr" podUID="216e612b-abc2-4d7c-8b10-28a595de5302" containerName="barbican-keystone-listener-log" containerID="cri-o://0bbf6d62e960b5a682af4aa5e41584da96c6f57e7c9e6126855743ce70f35375" gracePeriod=30 Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.381474 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-c497f4886-n5gtr" podUID="216e612b-abc2-4d7c-8b10-28a595de5302" containerName="barbican-keystone-listener" containerID="cri-o://3bfc6875edaf5adbf81402c08f2ba55583e837eda4f1c32161d763c93ada8c24" gracePeriod=30 Jan 29 07:06:28 crc kubenswrapper[4826]: E0129 07:06:28.390265 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7ba9ac04e0850886e890e608a55f11c50e9ca3d5994419279b8b8fc19be3fbd4" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.405985 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-596b9c7d4-2m8gc"] Jan 29 
07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.406403 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-596b9c7d4-2m8gc" podUID="ca39ae08-94df-4778-8203-bcff5806eff0" containerName="barbican-api-log" containerID="cri-o://639fc19de1ca53b22e5ad7ab867d42fad4024f7c5011feb4c18fae207a13d7e7" gracePeriod=30 Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.407114 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-596b9c7d4-2m8gc" podUID="ca39ae08-94df-4778-8203-bcff5806eff0" containerName="barbican-api" containerID="cri-o://f60ca07e9e25fa742e12d26cd7318095e6e3dc16fac1e585a5581d0cf9693fdd" gracePeriod=30 Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.424699 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-c48cv"] Jan 29 07:06:28 crc kubenswrapper[4826]: E0129 07:06:28.428311 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7ba9ac04e0850886e890e608a55f11c50e9ca3d5994419279b8b8fc19be3fbd4" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.448867 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-c48cv"] Jan 29 07:06:28 crc kubenswrapper[4826]: E0129 07:06:28.453128 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7ba9ac04e0850886e890e608a55f11c50e9ca3d5994419279b8b8fc19be3fbd4" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 29 07:06:28 crc kubenswrapper[4826]: E0129 07:06:28.453208 4826 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register 
an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="17dd6ec1-84fb-4bb3-8700-c8691f059937" containerName="ovn-northd" Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.471078 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-596b9c7d4-2m8gc" podUID="ca39ae08-94df-4778-8203-bcff5806eff0" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.159:9311/healthcheck\": EOF" Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.471184 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-596b9c7d4-2m8gc" podUID="ca39ae08-94df-4778-8203-bcff5806eff0" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.159:9311/healthcheck\": EOF" Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.514667 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.515400 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3de4a3bc-a01f-424a-8f17-60deaba1f189" containerName="nova-api-log" containerID="cri-o://d9368ce3c4b22eb7ed796c96a8b8b0a80f4ad3b81b2110fb2465fa6b3f09ec54" gracePeriod=30 Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.515929 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3de4a3bc-a01f-424a-8f17-60deaba1f189" containerName="nova-api-api" containerID="cri-o://38411dbe679ef8c89b951c34afc01e2b04f84066915dd175b2c3a4d60a5cccb1" gracePeriod=30 Jan 29 07:06:28 crc kubenswrapper[4826]: E0129 07:06:28.561807 4826 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 29 07:06:28 crc kubenswrapper[4826]: E0129 07:06:28.561886 4826 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/1794f620-102a-4b9c-9097-713579ec55ad-config-data podName:1794f620-102a-4b9c-9097-713579ec55ad nodeName:}" failed. No retries permitted until 2026-01-29 07:06:30.561871126 +0000 UTC m=+1374.423664195 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/1794f620-102a-4b9c-9097-713579ec55ad-config-data") pod "rabbitmq-server-0" (UID: "1794f620-102a-4b9c-9097-713579ec55ad") : configmap "rabbitmq-config-data" not found Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.615440 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-84d9-account-create-update-w85wz"] Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.632433 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-dp8h9"] Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.646309 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-984c-account-create-update-cwjvb"] Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.654352 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-dp8h9"] Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.661174 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.661552 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://ba8beccf426f4dc7182586b8d6e4748afa9e641836ccaafd3e63a134a7f45556" gracePeriod=30 Jan 29 07:06:28 crc kubenswrapper[4826]: E0129 07:06:28.664566 4826 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 29 07:06:28 crc kubenswrapper[4826]: E0129 07:06:28.664655 
4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-config-data podName:0da3bc6b-99a0-4de9-9479-5aaef8bfd81c nodeName:}" failed. No retries permitted until 2026-01-29 07:06:29.664632981 +0000 UTC m=+1373.526426050 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-config-data") pod "rabbitmq-cell1-server-0" (UID: "0da3bc6b-99a0-4de9-9479-5aaef8bfd81c") : configmap "rabbitmq-cell1-config-data" not found Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.667601 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-431a-account-create-update-wgcpd"] Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.676884 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-2cgbk"] Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.685889 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-45cdw"] Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.697527 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-45cdw"] Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.739386 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.766141 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-m2d2v" podUID="2d13cc8c-363d-4dcb-af5f-92318cf72a81" containerName="ovs-vswitchd" containerID="cri-o://7279ce2f370fc8321c9e51d6b7f683eb1d9bd5181f88ceec0fa850371165ae08" gracePeriod=29 Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.772897 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.773221 4826 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="960b6ae0-2577-444e-bc2a-bea4ec2917f9" containerName="nova-scheduler-scheduler" containerID="cri-o://aa923caa75a4ef623de542ee9505460d14e9077582229b30f18ddbd944849073" gracePeriod=30 Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.791505 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.791874 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="229ff3bd-fac5-4bb5-ba1e-9e829c30f45b" containerName="nova-cell0-conductor-conductor" containerID="cri-o://50af99804abe9cf6e6f83b558ffc855d23611fff8a7850a26d5eccd9f4c9d2b2" gracePeriod=30 Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.804359 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-glp6k"] Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.815506 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="1794f620-102a-4b9c-9097-713579ec55ad" containerName="rabbitmq" containerID="cri-o://f0ac6dfd0c3c53f1c0e6b9f3709b00a0bca023456e5256221df78b7693b4c9bf" gracePeriod=604800 Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.830058 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ae2b787-9c3f-40ca-91e2-10a4257a1df1" path="/var/lib/kubelet/pods/0ae2b787-9c3f-40ca-91e2-10a4257a1df1/volumes" Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.844175 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c292348-77b1-4f3e-9a58-aaecfbaa43e9" path="/var/lib/kubelet/pods/0c292348-77b1-4f3e-9a58-aaecfbaa43e9/volumes" Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.845408 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c" path="/var/lib/kubelet/pods/1f6c9837-5cc8-41e1-9ffa-cddd6c3e9b2c/volumes" Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.845896 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2950980d-48bd-4b72-964c-14e8d11d5b77" path="/var/lib/kubelet/pods/2950980d-48bd-4b72-964c-14e8d11d5b77/volumes" Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.849571 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30cb6345-09e5-421e-8cac-7223ff25731a" path="/var/lib/kubelet/pods/30cb6345-09e5-421e-8cac-7223ff25731a/volumes" Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.851771 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34dddfc9-db4b-48c0-9ec0-6eceb641aa26" path="/var/lib/kubelet/pods/34dddfc9-db4b-48c0-9ec0-6eceb641aa26/volumes" Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.855469 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34ffc130-bd9e-4f9c-9d02-7f3cf93df9d2" path="/var/lib/kubelet/pods/34ffc130-bd9e-4f9c-9d02-7f3cf93df9d2/volumes" Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.856546 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="454b9218-d564-4664-b1dd-4435fa9c60b7" containerName="galera" containerID="cri-o://1937919f8dd64b752c871037ec07858c20e0540d1d4d7464eab4f0a0259be556" gracePeriod=30 Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.856653 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48445e15-3a7e-4476-a253-0adab28f920e" path="/var/lib/kubelet/pods/48445e15-3a7e-4476-a253-0adab28f920e/volumes" Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.857334 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a1157a3-fc93-4d73-8200-b55bfa626a09" path="/var/lib/kubelet/pods/4a1157a3-fc93-4d73-8200-b55bfa626a09/volumes" Jan 29 07:06:28 crc 
kubenswrapper[4826]: I0129 07:06:28.873242 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d989bde-5508-4800-9910-76c04d308f3e" path="/var/lib/kubelet/pods/4d989bde-5508-4800-9910-76c04d308f3e/volumes" Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.873994 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ddbe0a6-485b-4bde-b08a-8745e551bbf6" path="/var/lib/kubelet/pods/4ddbe0a6-485b-4bde-b08a-8745e551bbf6/volumes" Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.874707 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06" path="/var/lib/kubelet/pods/5ca1f8a1-440d-40ba-9c5e-dd5c02dc3c06/volumes" Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.875357 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="635336bb-af74-473f-88b2-77547b4a9ca1" path="/var/lib/kubelet/pods/635336bb-af74-473f-88b2-77547b4a9ca1/volumes" Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.879204 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="644e2cae-40d8-4a56-a2d8-941d1b61efa4" path="/var/lib/kubelet/pods/644e2cae-40d8-4a56-a2d8-941d1b61efa4/volumes" Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.890102 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-m2d2v" podUID="2d13cc8c-363d-4dcb-af5f-92318cf72a81" containerName="ovsdb-server" containerID="cri-o://55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6" gracePeriod=29 Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.892152 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79bda2ef-7c8b-4970-b194-4f17c6c2c4cd" path="/var/lib/kubelet/pods/79bda2ef-7c8b-4970-b194-4f17c6c2c4cd/volumes" Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.893107 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="81a9564d-73ee-4a76-ab97-61caad992764" path="/var/lib/kubelet/pods/81a9564d-73ee-4a76-ab97-61caad992764/volumes" Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.894720 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="882b3c41-2672-47df-a717-d2ccd9f7e2dc" path="/var/lib/kubelet/pods/882b3c41-2672-47df-a717-d2ccd9f7e2dc/volumes" Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.895345 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c" path="/var/lib/kubelet/pods/b4a11db9-4057-4f8b-bd9e-6b92e1c9fd8c/volumes" Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.897012 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcdb469f-8122-4cba-a7e1-9e2e9e829263" path="/var/lib/kubelet/pods/bcdb469f-8122-4cba-a7e1-9e2e9e829263/volumes" Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.899042 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3ec8717-30a1-46a8-9224-4802c2b1c3e6" path="/var/lib/kubelet/pods/e3ec8717-30a1-46a8-9224-4802c2b1c3e6/volumes" Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.899597 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff69218e-ec33-4818-86be-a9ff92d3f40d" path="/var/lib/kubelet/pods/ff69218e-ec33-4818-86be-a9ff92d3f40d/volumes" Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.900125 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-glp6k"] Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.900149 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-x8x4z"] Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.900161 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.900175 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell1-conductor-db-sync-x8x4z"] Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.900185 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.900512 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="e27383b1-aba6-4c25-9d4b-3b9cceb2b739" containerName="nova-cell1-conductor-conductor" containerID="cri-o://cafb58d89e7c6ef227f4fff8321a876edaab7b5f0a9dcbdb25a9840e49c6af78" gracePeriod=30 Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.929857 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_af857248-0a50-4850-93dd-c8c4e5d8e5ea/ovsdbserver-nb/0.log" Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.930075 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.998171 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdvdl\" (UniqueName: \"kubernetes.io/projected/af857248-0a50-4850-93dd-c8c4e5d8e5ea-kube-api-access-tdvdl\") pod \"af857248-0a50-4850-93dd-c8c4e5d8e5ea\" (UID: \"af857248-0a50-4850-93dd-c8c4e5d8e5ea\") " Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.998275 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/af857248-0a50-4850-93dd-c8c4e5d8e5ea-ovsdb-rundir\") pod \"af857248-0a50-4850-93dd-c8c4e5d8e5ea\" (UID: \"af857248-0a50-4850-93dd-c8c4e5d8e5ea\") " Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.998320 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af857248-0a50-4850-93dd-c8c4e5d8e5ea-config\") pod \"af857248-0a50-4850-93dd-c8c4e5d8e5ea\" (UID: 
\"af857248-0a50-4850-93dd-c8c4e5d8e5ea\") " Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.998444 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/af857248-0a50-4850-93dd-c8c4e5d8e5ea-metrics-certs-tls-certs\") pod \"af857248-0a50-4850-93dd-c8c4e5d8e5ea\" (UID: \"af857248-0a50-4850-93dd-c8c4e5d8e5ea\") " Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.998631 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/af857248-0a50-4850-93dd-c8c4e5d8e5ea-scripts\") pod \"af857248-0a50-4850-93dd-c8c4e5d8e5ea\" (UID: \"af857248-0a50-4850-93dd-c8c4e5d8e5ea\") " Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.998697 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"af857248-0a50-4850-93dd-c8c4e5d8e5ea\" (UID: \"af857248-0a50-4850-93dd-c8c4e5d8e5ea\") " Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.998829 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af857248-0a50-4850-93dd-c8c4e5d8e5ea-combined-ca-bundle\") pod \"af857248-0a50-4850-93dd-c8c4e5d8e5ea\" (UID: \"af857248-0a50-4850-93dd-c8c4e5d8e5ea\") " Jan 29 07:06:28 crc kubenswrapper[4826]: I0129 07:06:28.998873 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/af857248-0a50-4850-93dd-c8c4e5d8e5ea-ovsdbserver-nb-tls-certs\") pod \"af857248-0a50-4850-93dd-c8c4e5d8e5ea\" (UID: \"af857248-0a50-4850-93dd-c8c4e5d8e5ea\") " Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.000141 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/af857248-0a50-4850-93dd-c8c4e5d8e5ea-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "af857248-0a50-4850-93dd-c8c4e5d8e5ea" (UID: "af857248-0a50-4850-93dd-c8c4e5d8e5ea"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.000640 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-jmdtd_d6b8cca1-3968-49aa-b4ed-88d9d4075223/openstack-network-exporter/0.log" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.000703 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-jmdtd" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.002506 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af857248-0a50-4850-93dd-c8c4e5d8e5ea-config" (OuterVolumeSpecName: "config") pod "af857248-0a50-4850-93dd-c8c4e5d8e5ea" (UID: "af857248-0a50-4850-93dd-c8c4e5d8e5ea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.004165 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_3a20458e-fa0a-4aa2-a59a-70ebb523a3d9/ovsdbserver-sb/0.log" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.004376 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.006728 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "af857248-0a50-4850-93dd-c8c4e5d8e5ea" (UID: "af857248-0a50-4850-93dd-c8c4e5d8e5ea"). InnerVolumeSpecName "local-storage12-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.007077 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af857248-0a50-4850-93dd-c8c4e5d8e5ea-scripts" (OuterVolumeSpecName: "scripts") pod "af857248-0a50-4850-93dd-c8c4e5d8e5ea" (UID: "af857248-0a50-4850-93dd-c8c4e5d8e5ea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.012496 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af857248-0a50-4850-93dd-c8c4e5d8e5ea-kube-api-access-tdvdl" (OuterVolumeSpecName: "kube-api-access-tdvdl") pod "af857248-0a50-4850-93dd-c8c4e5d8e5ea" (UID: "af857248-0a50-4850-93dd-c8c4e5d8e5ea"). InnerVolumeSpecName "kube-api-access-tdvdl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.012670 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.025023 4826 generic.go:334] "Generic (PLEG): container finished" podID="ea529cf3-184e-446a-9c6a-759cf1bab14c" containerID="16b794547221f9aaaeba5d721b9f980ce5d5698a660a295e46a0036d996a08a9" exitCode=0 Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.025173 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ea529cf3-184e-446a-9c6a-759cf1bab14c","Type":"ContainerDied","Data":"16b794547221f9aaaeba5d721b9f980ce5d5698a660a295e46a0036d996a08a9"} Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.063070 4826 generic.go:334] "Generic (PLEG): container finished" podID="426fe450-4a4d-4048-8ea4-422d39482ceb" containerID="c3be21ae398eff67e22c85ef224b354ffbbd7d7ab54b85ce0a04e1584c9e8486" exitCode=143 Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.063137 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6797f89db9-wjtvh" event={"ID":"426fe450-4a4d-4048-8ea4-422d39482ceb","Type":"ContainerDied","Data":"c3be21ae398eff67e22c85ef224b354ffbbd7d7ab54b85ce0a04e1584c9e8486"} Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.079122 4826 generic.go:334] "Generic (PLEG): container finished" podID="9349a8ff-2652-4dcf-89d9-6d440269be8c" containerID="768c302445504a9f1d3eff35fdd9007e37101a52925f9f686a83584515eeb5c2" exitCode=143 Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.079232 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9349a8ff-2652-4dcf-89d9-6d440269be8c","Type":"ContainerDied","Data":"768c302445504a9f1d3eff35fdd9007e37101a52925f9f686a83584515eeb5c2"} Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.085489 4826 generic.go:334] "Generic (PLEG): container finished" podID="ca39ae08-94df-4778-8203-bcff5806eff0" containerID="639fc19de1ca53b22e5ad7ab867d42fad4024f7c5011feb4c18fae207a13d7e7" 
exitCode=143 Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.085549 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-596b9c7d4-2m8gc" event={"ID":"ca39ae08-94df-4778-8203-bcff5806eff0","Type":"ContainerDied","Data":"639fc19de1ca53b22e5ad7ab867d42fad4024f7c5011feb4c18fae207a13d7e7"} Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.087845 4826 generic.go:334] "Generic (PLEG): container finished" podID="b099774a-b73b-4803-9259-98b9e1b5933b" containerID="85d5f42395b3063c15f60fc73e13a264e9f6ec4017758aed23359d39c4b8b103" exitCode=0 Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.087914 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-9wnpk" event={"ID":"b099774a-b73b-4803-9259-98b9e1b5933b","Type":"ContainerDied","Data":"85d5f42395b3063c15f60fc73e13a264e9f6ec4017758aed23359d39c4b8b103"} Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.094280 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af857248-0a50-4850-93dd-c8c4e5d8e5ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af857248-0a50-4850-93dd-c8c4e5d8e5ea" (UID: "af857248-0a50-4850-93dd-c8c4e5d8e5ea"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.095111 4826 generic.go:334] "Generic (PLEG): container finished" podID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerID="68c20d8c7fd999c8a7abcb16322015c670f6cd5b3f9b312e5fcdd9e5080bab19" exitCode=0 Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.095136 4826 generic.go:334] "Generic (PLEG): container finished" podID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerID="b8b353bcfe9ffab3fe12e51fc1add01cbe0b68e3df21ff9ba62958988fa40c6a" exitCode=0 Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.095144 4826 generic.go:334] "Generic (PLEG): container finished" podID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerID="efc661e13d0a3ae031a931433c46d983a09a4a66496a4085e3c7846558e05913" exitCode=0 Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.095151 4826 generic.go:334] "Generic (PLEG): container finished" podID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerID="18c3f34b023f38c953270c50875c33014ac0c890dbc279a1a0f7ee0521285e95" exitCode=0 Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.095159 4826 generic.go:334] "Generic (PLEG): container finished" podID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerID="38125c7287003d38ece9a28cd533530a5bf461facb796759edb517b616c413a5" exitCode=0 Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.095165 4826 generic.go:334] "Generic (PLEG): container finished" podID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerID="b530291b6bb170ceb4af6e542a3feb436b2acdf8fb99e834d63a533292236ca3" exitCode=0 Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.095172 4826 generic.go:334] "Generic (PLEG): container finished" podID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerID="4b4aa223c22b7eac65d63841c37b12133011befa1024ace90050e7ee1a72c510" exitCode=0 Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.095178 4826 generic.go:334] "Generic (PLEG): container finished" 
podID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerID="af48e0d4ae8e00f830fb238c9c15333c7f43281d0531755b7ffe26f6fbf4c8c6" exitCode=0 Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.095186 4826 generic.go:334] "Generic (PLEG): container finished" podID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerID="1aa4a9af19d077a22d1e3b460ff6f966acbc829b62c52982dbfc8cc5b918e542" exitCode=0 Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.095192 4826 generic.go:334] "Generic (PLEG): container finished" podID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerID="d88ad576d400eb2984c89463da7988fbb988847d02187cc4b64734122dc40271" exitCode=0 Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.095198 4826 generic.go:334] "Generic (PLEG): container finished" podID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerID="44307ca66666a832b9005e4a358a9e508afcfb3595db492ed5623df991aeca7a" exitCode=0 Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.095204 4826 generic.go:334] "Generic (PLEG): container finished" podID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerID="a26eddb89adfadac24b26e3ff2294f83b0c4d77a9551462e60d51f7c34fff67e" exitCode=0 Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.095210 4826 generic.go:334] "Generic (PLEG): container finished" podID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerID="3db554d1986fc0fb902da044bdce99172798bdf604c6dcbc79f7a7a8d3cf339a" exitCode=0 Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.095216 4826 generic.go:334] "Generic (PLEG): container finished" podID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerID="6f8be9e7ae68c9cdb46477974e5f9323232d52e97765253a593be96d962264d1" exitCode=0 Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.095258 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"85b51a36-8aa5-46e7-b8ab-a7e672c491d7","Type":"ContainerDied","Data":"68c20d8c7fd999c8a7abcb16322015c670f6cd5b3f9b312e5fcdd9e5080bab19"} Jan 29 07:06:29 crc 
kubenswrapper[4826]: I0129 07:06:29.095287 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"85b51a36-8aa5-46e7-b8ab-a7e672c491d7","Type":"ContainerDied","Data":"b8b353bcfe9ffab3fe12e51fc1add01cbe0b68e3df21ff9ba62958988fa40c6a"} Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.095310 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"85b51a36-8aa5-46e7-b8ab-a7e672c491d7","Type":"ContainerDied","Data":"efc661e13d0a3ae031a931433c46d983a09a4a66496a4085e3c7846558e05913"} Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.095320 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"85b51a36-8aa5-46e7-b8ab-a7e672c491d7","Type":"ContainerDied","Data":"18c3f34b023f38c953270c50875c33014ac0c890dbc279a1a0f7ee0521285e95"} Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.095348 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"85b51a36-8aa5-46e7-b8ab-a7e672c491d7","Type":"ContainerDied","Data":"38125c7287003d38ece9a28cd533530a5bf461facb796759edb517b616c413a5"} Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.095358 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"85b51a36-8aa5-46e7-b8ab-a7e672c491d7","Type":"ContainerDied","Data":"b530291b6bb170ceb4af6e542a3feb436b2acdf8fb99e834d63a533292236ca3"} Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.095366 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"85b51a36-8aa5-46e7-b8ab-a7e672c491d7","Type":"ContainerDied","Data":"4b4aa223c22b7eac65d63841c37b12133011befa1024ace90050e7ee1a72c510"} Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.095374 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"85b51a36-8aa5-46e7-b8ab-a7e672c491d7","Type":"ContainerDied","Data":"af48e0d4ae8e00f830fb238c9c15333c7f43281d0531755b7ffe26f6fbf4c8c6"} Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.095384 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"85b51a36-8aa5-46e7-b8ab-a7e672c491d7","Type":"ContainerDied","Data":"1aa4a9af19d077a22d1e3b460ff6f966acbc829b62c52982dbfc8cc5b918e542"} Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.095393 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"85b51a36-8aa5-46e7-b8ab-a7e672c491d7","Type":"ContainerDied","Data":"d88ad576d400eb2984c89463da7988fbb988847d02187cc4b64734122dc40271"} Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.095402 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"85b51a36-8aa5-46e7-b8ab-a7e672c491d7","Type":"ContainerDied","Data":"44307ca66666a832b9005e4a358a9e508afcfb3595db492ed5623df991aeca7a"} Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.095412 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"85b51a36-8aa5-46e7-b8ab-a7e672c491d7","Type":"ContainerDied","Data":"a26eddb89adfadac24b26e3ff2294f83b0c4d77a9551462e60d51f7c34fff67e"} Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.095420 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"85b51a36-8aa5-46e7-b8ab-a7e672c491d7","Type":"ContainerDied","Data":"3db554d1986fc0fb902da044bdce99172798bdf604c6dcbc79f7a7a8d3cf339a"} Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.095457 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"85b51a36-8aa5-46e7-b8ab-a7e672c491d7","Type":"ContainerDied","Data":"6f8be9e7ae68c9cdb46477974e5f9323232d52e97765253a593be96d962264d1"} Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 
07:06:29.097149 4826 generic.go:334] "Generic (PLEG): container finished" podID="216e612b-abc2-4d7c-8b10-28a595de5302" containerID="0bbf6d62e960b5a682af4aa5e41584da96c6f57e7c9e6126855743ce70f35375" exitCode=143 Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.097186 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-c497f4886-n5gtr" event={"ID":"216e612b-abc2-4d7c-8b10-28a595de5302","Type":"ContainerDied","Data":"0bbf6d62e960b5a682af4aa5e41584da96c6f57e7c9e6126855743ce70f35375"} Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.098503 4826 generic.go:334] "Generic (PLEG): container finished" podID="258e4d75-ecca-4001-9f56-aeb39557b326" containerID="9dfce637d6a3b6a7342be469a0635e6d8e5afd174f4e993540d7e2a69349016b" exitCode=0 Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.098537 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-669bb5748f-zjsxt" event={"ID":"258e4d75-ecca-4001-9f56-aeb39557b326","Type":"ContainerDied","Data":"9dfce637d6a3b6a7342be469a0635e6d8e5afd174f4e993540d7e2a69349016b"} Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.100213 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6b8cca1-3968-49aa-b4ed-88d9d4075223-combined-ca-bundle\") pod \"d6b8cca1-3968-49aa-b4ed-88d9d4075223\" (UID: \"d6b8cca1-3968-49aa-b4ed-88d9d4075223\") " Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.100252 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6b8cca1-3968-49aa-b4ed-88d9d4075223-metrics-certs-tls-certs\") pod \"d6b8cca1-3968-49aa-b4ed-88d9d4075223\" (UID: \"d6b8cca1-3968-49aa-b4ed-88d9d4075223\") " Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.100284 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-config\") pod \"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9\" (UID: \"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9\") " Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.100381 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-ovsdbserver-sb-tls-certs\") pod \"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9\" (UID: \"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9\") " Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.100403 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-combined-ca-bundle\") pod \"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9\" (UID: \"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9\") " Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.100441 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/071ae6a1-9762-465d-8499-69ec962f08b7-combined-ca-bundle\") pod \"071ae6a1-9762-465d-8499-69ec962f08b7\" (UID: \"071ae6a1-9762-465d-8499-69ec962f08b7\") " Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.100487 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8jss\" (UniqueName: \"kubernetes.io/projected/d6b8cca1-3968-49aa-b4ed-88d9d4075223-kube-api-access-n8jss\") pod \"d6b8cca1-3968-49aa-b4ed-88d9d4075223\" (UID: \"d6b8cca1-3968-49aa-b4ed-88d9d4075223\") " Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.100505 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d6b8cca1-3968-49aa-b4ed-88d9d4075223-ovn-rundir\") pod \"d6b8cca1-3968-49aa-b4ed-88d9d4075223\" (UID: \"d6b8cca1-3968-49aa-b4ed-88d9d4075223\") " Jan 29 07:06:29 crc 
kubenswrapper[4826]: I0129 07:06:29.100532 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/071ae6a1-9762-465d-8499-69ec962f08b7-openstack-config-secret\") pod \"071ae6a1-9762-465d-8499-69ec962f08b7\" (UID: \"071ae6a1-9762-465d-8499-69ec962f08b7\") " Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.100555 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-scripts\") pod \"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9\" (UID: \"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9\") " Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.100581 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6b8cca1-3968-49aa-b4ed-88d9d4075223-config\") pod \"d6b8cca1-3968-49aa-b4ed-88d9d4075223\" (UID: \"d6b8cca1-3968-49aa-b4ed-88d9d4075223\") " Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.100598 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-ovsdb-rundir\") pod \"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9\" (UID: \"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9\") " Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.100611 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d6b8cca1-3968-49aa-b4ed-88d9d4075223-ovs-rundir\") pod \"d6b8cca1-3968-49aa-b4ed-88d9d4075223\" (UID: \"d6b8cca1-3968-49aa-b4ed-88d9d4075223\") " Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.100643 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f25lp\" (UniqueName: \"kubernetes.io/projected/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-kube-api-access-f25lp\") pod 
\"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9\" (UID: \"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9\") " Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.100660 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsq28\" (UniqueName: \"kubernetes.io/projected/071ae6a1-9762-465d-8499-69ec962f08b7-kube-api-access-rsq28\") pod \"071ae6a1-9762-465d-8499-69ec962f08b7\" (UID: \"071ae6a1-9762-465d-8499-69ec962f08b7\") " Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.100681 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9\" (UID: \"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9\") " Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.100707 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/071ae6a1-9762-465d-8499-69ec962f08b7-openstack-config\") pod \"071ae6a1-9762-465d-8499-69ec962f08b7\" (UID: \"071ae6a1-9762-465d-8499-69ec962f08b7\") " Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.100794 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-metrics-certs-tls-certs\") pod \"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9\" (UID: \"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9\") " Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.101112 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/af857248-0a50-4850-93dd-c8c4e5d8e5ea-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.101134 4826 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.101143 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af857248-0a50-4850-93dd-c8c4e5d8e5ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.101154 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdvdl\" (UniqueName: \"kubernetes.io/projected/af857248-0a50-4850-93dd-c8c4e5d8e5ea-kube-api-access-tdvdl\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.101164 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/af857248-0a50-4850-93dd-c8c4e5d8e5ea-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.101173 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af857248-0a50-4850-93dd-c8c4e5d8e5ea-config\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.111877 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-config" (OuterVolumeSpecName: "config") pod "3a20458e-fa0a-4aa2-a59a-70ebb523a3d9" (UID: "3a20458e-fa0a-4aa2-a59a-70ebb523a3d9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.114701 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6b8cca1-3968-49aa-b4ed-88d9d4075223-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "d6b8cca1-3968-49aa-b4ed-88d9d4075223" (UID: "d6b8cca1-3968-49aa-b4ed-88d9d4075223"). InnerVolumeSpecName "ovn-rundir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.115849 4826 generic.go:334] "Generic (PLEG): container finished" podID="1beb9e09-4039-4ce6-a33f-0d34e10b1cfe" containerID="4d9bbe77efa1079486e055eb220a04a3be411395b46dcaaf31c558f3d4ccb6f8" exitCode=143 Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.115944 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7c8d5fc944-9m8wp" event={"ID":"1beb9e09-4039-4ce6-a33f-0d34e10b1cfe","Type":"ContainerDied","Data":"4d9bbe77efa1079486e055eb220a04a3be411395b46dcaaf31c558f3d4ccb6f8"} Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.115947 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "3a20458e-fa0a-4aa2-a59a-70ebb523a3d9" (UID: "3a20458e-fa0a-4aa2-a59a-70ebb523a3d9"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.116075 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6b8cca1-3968-49aa-b4ed-88d9d4075223-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "d6b8cca1-3968-49aa-b4ed-88d9d4075223" (UID: "d6b8cca1-3968-49aa-b4ed-88d9d4075223"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.116643 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-scripts" (OuterVolumeSpecName: "scripts") pod "3a20458e-fa0a-4aa2-a59a-70ebb523a3d9" (UID: "3a20458e-fa0a-4aa2-a59a-70ebb523a3d9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.116740 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6b8cca1-3968-49aa-b4ed-88d9d4075223-config" (OuterVolumeSpecName: "config") pod "d6b8cca1-3968-49aa-b4ed-88d9d4075223" (UID: "d6b8cca1-3968-49aa-b4ed-88d9d4075223"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.117831 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af857248-0a50-4850-93dd-c8c4e5d8e5ea-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "af857248-0a50-4850-93dd-c8c4e5d8e5ea" (UID: "af857248-0a50-4850-93dd-c8c4e5d8e5ea"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.123881 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c080b978-6895-4067-9dd5-2c23d4d68518","Type":"ContainerDied","Data":"b086320849aa987d26d49d964fbefd0fcd5dd9b1184d3344ea085e5c42fc14d1"} Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.121862 4826 generic.go:334] "Generic (PLEG): container finished" podID="c080b978-6895-4067-9dd5-2c23d4d68518" containerID="b086320849aa987d26d49d964fbefd0fcd5dd9b1184d3344ea085e5c42fc14d1" exitCode=143 Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.130747 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-kube-api-access-f25lp" (OuterVolumeSpecName: "kube-api-access-f25lp") pod "3a20458e-fa0a-4aa2-a59a-70ebb523a3d9" (UID: "3a20458e-fa0a-4aa2-a59a-70ebb523a3d9"). InnerVolumeSpecName "kube-api-access-f25lp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.130882 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/071ae6a1-9762-465d-8499-69ec962f08b7-kube-api-access-rsq28" (OuterVolumeSpecName: "kube-api-access-rsq28") pod "071ae6a1-9762-465d-8499-69ec962f08b7" (UID: "071ae6a1-9762-465d-8499-69ec962f08b7"). InnerVolumeSpecName "kube-api-access-rsq28". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.139472 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "3a20458e-fa0a-4aa2-a59a-70ebb523a3d9" (UID: "3a20458e-fa0a-4aa2-a59a-70ebb523a3d9"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.142817 4826 generic.go:334] "Generic (PLEG): container finished" podID="2d13cc8c-363d-4dcb-af5f-92318cf72a81" containerID="55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6" exitCode=0 Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.142879 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-m2d2v" event={"ID":"2d13cc8c-363d-4dcb-af5f-92318cf72a81","Type":"ContainerDied","Data":"55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6"} Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.143022 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6b8cca1-3968-49aa-b4ed-88d9d4075223-kube-api-access-n8jss" (OuterVolumeSpecName: "kube-api-access-n8jss") pod "d6b8cca1-3968-49aa-b4ed-88d9d4075223" (UID: "d6b8cca1-3968-49aa-b4ed-88d9d4075223"). InnerVolumeSpecName "kube-api-access-n8jss". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.144926 4826 generic.go:334] "Generic (PLEG): container finished" podID="5378dab4-ad0c-4259-a7d2-d3f7e784a142" containerID="3817b6b8e4ea595c7ced258dd6bfd2af338287753b307239e9914dc8d293a791" exitCode=143 Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.144973 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5378dab4-ad0c-4259-a7d2-d3f7e784a142","Type":"ContainerDied","Data":"3817b6b8e4ea595c7ced258dd6bfd2af338287753b307239e9914dc8d293a791"} Jan 29 07:06:29 crc kubenswrapper[4826]: E0129 07:06:29.145354 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="50af99804abe9cf6e6f83b558ffc855d23611fff8a7850a26d5eccd9f4c9d2b2" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 07:06:29 crc kubenswrapper[4826]: E0129 07:06:29.156454 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="50af99804abe9cf6e6f83b558ffc855d23611fff8a7850a26d5eccd9f4c9d2b2" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 07:06:29 crc kubenswrapper[4826]: E0129 07:06:29.160434 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="50af99804abe9cf6e6f83b558ffc855d23611fff8a7850a26d5eccd9f4c9d2b2" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 07:06:29 crc kubenswrapper[4826]: E0129 07:06:29.160495 4826 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is 
stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="229ff3bd-fac5-4bb5-ba1e-9e829c30f45b" containerName="nova-cell0-conductor-conductor" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.162228 4826 generic.go:334] "Generic (PLEG): container finished" podID="b0f5ad8c-072d-4994-acb3-e898c0981eef" containerID="bbf08c9f4b99f6403fa0c9818497bc87b482aca717f1d209262d98a1ff4df06d" exitCode=0 Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.162325 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-55957b69d9-prlpm" event={"ID":"b0f5ad8c-072d-4994-acb3-e898c0981eef","Type":"ContainerDied","Data":"bbf08c9f4b99f6403fa0c9818497bc87b482aca717f1d209262d98a1ff4df06d"} Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.165833 4826 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.172278 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_af857248-0a50-4850-93dd-c8c4e5d8e5ea/ovsdbserver-nb/0.log" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.172391 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"af857248-0a50-4850-93dd-c8c4e5d8e5ea","Type":"ContainerDied","Data":"1c15ca98f505d964a31fc8ea42371e21d244875489de574c16f53d5e1d645c32"} Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.172434 4826 scope.go:117] "RemoveContainer" containerID="fbe7c7ae52eeb239791fe0ed601efbb5177d39b8ea9d0057b13b2f3c5c72b647" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.172599 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.204491 4826 generic.go:334] "Generic (PLEG): container finished" podID="082eb821-de0c-462e-9653-b1c80c8e1d2c" containerID="a0acd49fff6ea3dad01d0e7ca858e9d1043795014ae1ed463aec93ec682db76b" exitCode=143 Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.204590 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"082eb821-de0c-462e-9653-b1c80c8e1d2c","Type":"ContainerDied","Data":"a0acd49fff6ea3dad01d0e7ca858e9d1043795014ae1ed463aec93ec682db76b"} Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.205807 4826 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.205829 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-config\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.205838 4826 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/af857248-0a50-4850-93dd-c8c4e5d8e5ea-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.205848 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8jss\" (UniqueName: \"kubernetes.io/projected/d6b8cca1-3968-49aa-b4ed-88d9d4075223-kube-api-access-n8jss\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.205857 4826 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d6b8cca1-3968-49aa-b4ed-88d9d4075223-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 
07:06:29.205865 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.205875 4826 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.205884 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6b8cca1-3968-49aa-b4ed-88d9d4075223-config\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.205893 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.205901 4826 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d6b8cca1-3968-49aa-b4ed-88d9d4075223-ovs-rundir\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.205909 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f25lp\" (UniqueName: \"kubernetes.io/projected/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-kube-api-access-f25lp\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.205917 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsq28\" (UniqueName: \"kubernetes.io/projected/071ae6a1-9762-465d-8499-69ec962f08b7-kube-api-access-rsq28\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.208920 4826 generic.go:334] "Generic (PLEG): container finished" podID="071ae6a1-9762-465d-8499-69ec962f08b7" 
containerID="65213368cc65f805bf38badc08c83a4c18280b76d4a7b5a339c6c9e7d8454ef3" exitCode=137 Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.209029 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.211189 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_3a20458e-fa0a-4aa2-a59a-70ebb523a3d9/ovsdbserver-sb/0.log" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.211237 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3a20458e-fa0a-4aa2-a59a-70ebb523a3d9","Type":"ContainerDied","Data":"a4ccaf8801cbd74ccfc269f74f1a84318ac464689082d95d4b7d1bf4743bde1a"} Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.211287 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.217962 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a20458e-fa0a-4aa2-a59a-70ebb523a3d9" (UID: "3a20458e-fa0a-4aa2-a59a-70ebb523a3d9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.225577 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrjtp" event={"ID":"bf226f75-106f-4f53-b33b-59f9ebbbefc3","Type":"ContainerStarted","Data":"add0379d6310866f8debdb534faa7bf14685002e4ca0c63cd2f7b82d237d1c35"} Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.245740 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/071ae6a1-9762-465d-8499-69ec962f08b7-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "071ae6a1-9762-465d-8499-69ec962f08b7" (UID: "071ae6a1-9762-465d-8499-69ec962f08b7"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.246716 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-jmdtd_d6b8cca1-3968-49aa-b4ed-88d9d4075223/openstack-network-exporter/0.log" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.246785 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-jmdtd" event={"ID":"d6b8cca1-3968-49aa-b4ed-88d9d4075223","Type":"ContainerDied","Data":"98844d33f7081c3da1ac1cb223a390cca10ebb3f27f597cf8d6235f58cbd6d92"} Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.246855 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-jmdtd" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.256211 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af857248-0a50-4850-93dd-c8c4e5d8e5ea-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "af857248-0a50-4850-93dd-c8c4e5d8e5ea" (UID: "af857248-0a50-4850-93dd-c8c4e5d8e5ea"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.256752 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jrjtp" podStartSLOduration=4.61196602 podStartE2EDuration="9.256710035s" podCreationTimestamp="2026-01-29 07:06:20 +0000 UTC" firstStartedPulling="2026-01-29 07:06:22.499601631 +0000 UTC m=+1366.361394700" lastFinishedPulling="2026-01-29 07:06:27.144345656 +0000 UTC m=+1371.006138715" observedRunningTime="2026-01-29 07:06:29.246311571 +0000 UTC m=+1373.108104640" watchObservedRunningTime="2026-01-29 07:06:29.256710035 +0000 UTC m=+1373.118503094" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.257651 4826 generic.go:334] "Generic (PLEG): container finished" podID="3de4a3bc-a01f-424a-8f17-60deaba1f189" containerID="d9368ce3c4b22eb7ed796c96a8b8b0a80f4ad3b81b2110fb2465fa6b3f09ec54" exitCode=143 Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.257892 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3de4a3bc-a01f-424a-8f17-60deaba1f189","Type":"ContainerDied","Data":"d9368ce3c4b22eb7ed796c96a8b8b0a80f4ad3b81b2110fb2465fa6b3f09ec54"} Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.270032 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "3a20458e-fa0a-4aa2-a59a-70ebb523a3d9" (UID: "3a20458e-fa0a-4aa2-a59a-70ebb523a3d9"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.289263 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/071ae6a1-9762-465d-8499-69ec962f08b7-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "071ae6a1-9762-465d-8499-69ec962f08b7" (UID: "071ae6a1-9762-465d-8499-69ec962f08b7"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.291423 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/071ae6a1-9762-465d-8499-69ec962f08b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "071ae6a1-9762-465d-8499-69ec962f08b7" (UID: "071ae6a1-9762-465d-8499-69ec962f08b7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.293543 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6b8cca1-3968-49aa-b4ed-88d9d4075223-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d6b8cca1-3968-49aa-b4ed-88d9d4075223" (UID: "d6b8cca1-3968-49aa-b4ed-88d9d4075223"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.296712 4826 scope.go:117] "RemoveContainer" containerID="358deb67c7029377050eca333cfd2c34f12e0e975144bb97ed20f885a41cdc54" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.300391 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "3a20458e-fa0a-4aa2-a59a-70ebb523a3d9" (UID: "3a20458e-fa0a-4aa2-a59a-70ebb523a3d9"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.309006 4826 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/071ae6a1-9762-465d-8499-69ec962f08b7-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.309049 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/af857248-0a50-4850-93dd-c8c4e5d8e5ea-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.309063 4826 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.309074 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6b8cca1-3968-49aa-b4ed-88d9d4075223-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.309084 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.309093 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.309105 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/071ae6a1-9762-465d-8499-69ec962f08b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" 
Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.309116 4826 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/071ae6a1-9762-465d-8499-69ec962f08b7-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.312057 4826 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.316970 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-9wnpk" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.321657 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="0da3bc6b-99a0-4de9-9479-5aaef8bfd81c" containerName="rabbitmq" containerID="cri-o://561f44049eef8bcf9743aabf5fda4a13b2156ef6047f470fc4f0c9a570583cb1" gracePeriod=604800 Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.364574 4826 scope.go:117] "RemoveContainer" containerID="65213368cc65f805bf38badc08c83a4c18280b76d4a7b5a339c6c9e7d8454ef3" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.384845 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-2f64-account-create-update-mndz2"] Jan 29 07:06:29 crc kubenswrapper[4826]: E0129 07:06:29.389731 4826 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 29 07:06:29 crc kubenswrapper[4826]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 
07:06:29 crc kubenswrapper[4826]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: if [ -n "barbican" ]; then Jan 29 07:06:29 crc kubenswrapper[4826]: GRANT_DATABASE="barbican" Jan 29 07:06:29 crc kubenswrapper[4826]: else Jan 29 07:06:29 crc kubenswrapper[4826]: GRANT_DATABASE="*" Jan 29 07:06:29 crc kubenswrapper[4826]: fi Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: # going for maximum compatibility here: Jan 29 07:06:29 crc kubenswrapper[4826]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 29 07:06:29 crc kubenswrapper[4826]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 29 07:06:29 crc kubenswrapper[4826]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 29 07:06:29 crc kubenswrapper[4826]: # support updates Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: $MYSQL_CMD < logger="UnhandledError" Jan 29 07:06:29 crc kubenswrapper[4826]: E0129 07:06:29.391157 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"barbican-db-secret\\\" not found\"" pod="openstack/barbican-2f64-account-create-update-mndz2" podUID="68d92d9a-3db4-4400-ac87-6334d2be6184" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.397133 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6b8cca1-3968-49aa-b4ed-88d9d4075223-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "d6b8cca1-3968-49aa-b4ed-88d9d4075223" (UID: "d6b8cca1-3968-49aa-b4ed-88d9d4075223"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.412456 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b099774a-b73b-4803-9259-98b9e1b5933b-config\") pod \"b099774a-b73b-4803-9259-98b9e1b5933b\" (UID: \"b099774a-b73b-4803-9259-98b9e1b5933b\") " Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.413214 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b099774a-b73b-4803-9259-98b9e1b5933b-ovsdbserver-nb\") pod \"b099774a-b73b-4803-9259-98b9e1b5933b\" (UID: \"b099774a-b73b-4803-9259-98b9e1b5933b\") " Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.413797 4826 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6b8cca1-3968-49aa-b4ed-88d9d4075223-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.413813 4826 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.433563 4826 scope.go:117] "RemoveContainer" containerID="65213368cc65f805bf38badc08c83a4c18280b76d4a7b5a339c6c9e7d8454ef3" Jan 29 07:06:29 crc kubenswrapper[4826]: E0129 07:06:29.436538 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65213368cc65f805bf38badc08c83a4c18280b76d4a7b5a339c6c9e7d8454ef3\": container with ID starting with 65213368cc65f805bf38badc08c83a4c18280b76d4a7b5a339c6c9e7d8454ef3 not found: ID does not exist" containerID="65213368cc65f805bf38badc08c83a4c18280b76d4a7b5a339c6c9e7d8454ef3" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.436588 4826 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65213368cc65f805bf38badc08c83a4c18280b76d4a7b5a339c6c9e7d8454ef3"} err="failed to get container status \"65213368cc65f805bf38badc08c83a4c18280b76d4a7b5a339c6c9e7d8454ef3\": rpc error: code = NotFound desc = could not find container \"65213368cc65f805bf38badc08c83a4c18280b76d4a7b5a339c6c9e7d8454ef3\": container with ID starting with 65213368cc65f805bf38badc08c83a4c18280b76d4a7b5a339c6c9e7d8454ef3 not found: ID does not exist" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.436614 4826 scope.go:117] "RemoveContainer" containerID="566e075f705cbbe67baf7d3552d44286bc465cf556a9a4ef3b74748293dbc37a" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.483335 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b099774a-b73b-4803-9259-98b9e1b5933b-config" (OuterVolumeSpecName: "config") pod "b099774a-b73b-4803-9259-98b9e1b5933b" (UID: "b099774a-b73b-4803-9259-98b9e1b5933b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.517342 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b099774a-b73b-4803-9259-98b9e1b5933b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b099774a-b73b-4803-9259-98b9e1b5933b" (UID: "b099774a-b73b-4803-9259-98b9e1b5933b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.518275 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b099774a-b73b-4803-9259-98b9e1b5933b-ovsdbserver-sb\") pod \"b099774a-b73b-4803-9259-98b9e1b5933b\" (UID: \"b099774a-b73b-4803-9259-98b9e1b5933b\") " Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.518363 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b099774a-b73b-4803-9259-98b9e1b5933b-dns-swift-storage-0\") pod \"b099774a-b73b-4803-9259-98b9e1b5933b\" (UID: \"b099774a-b73b-4803-9259-98b9e1b5933b\") " Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.518447 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n25zz\" (UniqueName: \"kubernetes.io/projected/b099774a-b73b-4803-9259-98b9e1b5933b-kube-api-access-n25zz\") pod \"b099774a-b73b-4803-9259-98b9e1b5933b\" (UID: \"b099774a-b73b-4803-9259-98b9e1b5933b\") " Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.518492 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b099774a-b73b-4803-9259-98b9e1b5933b-dns-svc\") pod \"b099774a-b73b-4803-9259-98b9e1b5933b\" (UID: \"b099774a-b73b-4803-9259-98b9e1b5933b\") " Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.519171 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b099774a-b73b-4803-9259-98b9e1b5933b-config\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.519199 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b099774a-b73b-4803-9259-98b9e1b5933b-ovsdbserver-nb\") on node \"crc\" DevicePath 
\"\"" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.526080 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b099774a-b73b-4803-9259-98b9e1b5933b-kube-api-access-n25zz" (OuterVolumeSpecName: "kube-api-access-n25zz") pod "b099774a-b73b-4803-9259-98b9e1b5933b" (UID: "b099774a-b73b-4803-9259-98b9e1b5933b"). InnerVolumeSpecName "kube-api-access-n25zz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.590621 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b099774a-b73b-4803-9259-98b9e1b5933b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b099774a-b73b-4803-9259-98b9e1b5933b" (UID: "b099774a-b73b-4803-9259-98b9e1b5933b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.597413 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b099774a-b73b-4803-9259-98b9e1b5933b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b099774a-b73b-4803-9259-98b9e1b5933b" (UID: "b099774a-b73b-4803-9259-98b9e1b5933b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.622624 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b099774a-b73b-4803-9259-98b9e1b5933b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.622650 4826 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b099774a-b73b-4803-9259-98b9e1b5933b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.622674 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n25zz\" (UniqueName: \"kubernetes.io/projected/b099774a-b73b-4803-9259-98b9e1b5933b-kube-api-access-n25zz\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.673940 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b099774a-b73b-4803-9259-98b9e1b5933b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b099774a-b73b-4803-9259-98b9e1b5933b" (UID: "b099774a-b73b-4803-9259-98b9e1b5933b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.678944 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-3ce2-account-create-update-ls5ln"] Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.691209 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-2cgbk"] Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.710619 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-4cd8-account-create-update-ngljz"] Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.719963 4826 scope.go:117] "RemoveContainer" containerID="949d52a8a709e90860b9b0ef1491f8a9e2676ac4584b2115505be7c970fcced1" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.726007 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b099774a-b73b-4803-9259-98b9e1b5933b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:29 crc kubenswrapper[4826]: E0129 07:06:29.726081 4826 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 29 07:06:29 crc kubenswrapper[4826]: E0129 07:06:29.726123 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-config-data podName:0da3bc6b-99a0-4de9-9479-5aaef8bfd81c nodeName:}" failed. No retries permitted until 2026-01-29 07:06:31.72611089 +0000 UTC m=+1375.587903949 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-config-data") pod "rabbitmq-cell1-server-0" (UID: "0da3bc6b-99a0-4de9-9479-5aaef8bfd81c") : configmap "rabbitmq-cell1-config-data" not found Jan 29 07:06:29 crc kubenswrapper[4826]: E0129 07:06:29.732008 4826 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 29 07:06:29 crc kubenswrapper[4826]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: if [ -n "placement" ]; then Jan 29 07:06:29 crc kubenswrapper[4826]: GRANT_DATABASE="placement" Jan 29 07:06:29 crc kubenswrapper[4826]: else Jan 29 07:06:29 crc kubenswrapper[4826]: GRANT_DATABASE="*" Jan 29 07:06:29 crc kubenswrapper[4826]: fi Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: # going for maximum compatibility here: Jan 29 07:06:29 crc kubenswrapper[4826]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 29 07:06:29 crc kubenswrapper[4826]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 29 07:06:29 crc kubenswrapper[4826]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 29 07:06:29 crc kubenswrapper[4826]: # support updates Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: $MYSQL_CMD < logger="UnhandledError" Jan 29 07:06:29 crc kubenswrapper[4826]: E0129 07:06:29.735442 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"placement-db-secret\\\" not found\"" pod="openstack/placement-3ce2-account-create-update-ls5ln" podUID="489ffa8d-9021-4291-b68b-df3d3a146fe1" Jan 29 07:06:29 crc kubenswrapper[4826]: E0129 07:06:29.759165 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6 is running failed: container process not found" containerID="55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 07:06:29 crc kubenswrapper[4826]: E0129 07:06:29.759850 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6 is running failed: container process not found" containerID="55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 07:06:29 crc kubenswrapper[4826]: E0129 07:06:29.761678 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6 is running failed: container process not found" containerID="55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6" 
cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 07:06:29 crc kubenswrapper[4826]: E0129 07:06:29.761716 4826 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-m2d2v" podUID="2d13cc8c-363d-4dcb-af5f-92318cf72a81" containerName="ovsdb-server" Jan 29 07:06:29 crc kubenswrapper[4826]: E0129 07:06:29.773757 4826 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 29 07:06:29 crc kubenswrapper[4826]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: if [ -n "" ]; then Jan 29 07:06:29 crc kubenswrapper[4826]: GRANT_DATABASE="" Jan 29 07:06:29 crc kubenswrapper[4826]: else Jan 29 07:06:29 crc kubenswrapper[4826]: GRANT_DATABASE="*" Jan 29 07:06:29 crc kubenswrapper[4826]: fi Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: # going for maximum compatibility here: Jan 29 07:06:29 crc kubenswrapper[4826]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 29 07:06:29 crc kubenswrapper[4826]: # 2. 
MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 29 07:06:29 crc kubenswrapper[4826]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 29 07:06:29 crc kubenswrapper[4826]: # support updates Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: $MYSQL_CMD < logger="UnhandledError" Jan 29 07:06:29 crc kubenswrapper[4826]: E0129 07:06:29.775453 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-2cgbk" podUID="b1323800-5ab2-43e7-baa9-9a6d4922224f" Jan 29 07:06:29 crc kubenswrapper[4826]: E0129 07:06:29.828876 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7279ce2f370fc8321c9e51d6b7f683eb1d9bd5181f88ceec0fa850371165ae08" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.843666 4826 scope.go:117] "RemoveContainer" containerID="e468868d139ea7d7683d5f9d96d634b091bed19d0c79b17142be195283c11a88" Jan 29 07:06:29 crc kubenswrapper[4826]: E0129 07:06:29.845082 4826 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 29 07:06:29 crc kubenswrapper[4826]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: export DatabasePassword=${DatabasePassword:?"Please specify a 
DatabasePassword variable."} Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: if [ -n "cinder" ]; then Jan 29 07:06:29 crc kubenswrapper[4826]: GRANT_DATABASE="cinder" Jan 29 07:06:29 crc kubenswrapper[4826]: else Jan 29 07:06:29 crc kubenswrapper[4826]: GRANT_DATABASE="*" Jan 29 07:06:29 crc kubenswrapper[4826]: fi Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: # going for maximum compatibility here: Jan 29 07:06:29 crc kubenswrapper[4826]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 29 07:06:29 crc kubenswrapper[4826]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 29 07:06:29 crc kubenswrapper[4826]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 29 07:06:29 crc kubenswrapper[4826]: # support updates Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: $MYSQL_CMD < logger="UnhandledError" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.847907 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 07:06:29 crc kubenswrapper[4826]: E0129 07:06:29.869838 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7279ce2f370fc8321c9e51d6b7f683eb1d9bd5181f88ceec0fa850371165ae08" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 07:06:29 crc kubenswrapper[4826]: E0129 07:06:29.873655 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"cinder-db-secret\\\" not found\"" pod="openstack/cinder-4cd8-account-create-update-ngljz" 
podUID="5edf8927-1593-48b4-b330-9413bcfc733d" Jan 29 07:06:29 crc kubenswrapper[4826]: E0129 07:06:29.881570 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7279ce2f370fc8321c9e51d6b7f683eb1d9bd5181f88ceec0fa850371165ae08" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 07:06:29 crc kubenswrapper[4826]: E0129 07:06:29.881901 4826 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-m2d2v" podUID="2d13cc8c-363d-4dcb-af5f-92318cf72a81" containerName="ovs-vswitchd" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.891516 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="0da3bc6b-99a0-4de9-9479-5aaef8bfd81c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.898348 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.918955 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-97ad-account-create-update-qvhw8"] Jan 29 07:06:29 crc kubenswrapper[4826]: E0129 07:06:29.922060 4826 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 29 07:06:29 crc kubenswrapper[4826]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: MYSQL_REMOTE_HOST="" source 
/var/lib/operator-scripts/mysql_root_auth.sh Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: if [ -n "nova_cell1" ]; then Jan 29 07:06:29 crc kubenswrapper[4826]: GRANT_DATABASE="nova_cell1" Jan 29 07:06:29 crc kubenswrapper[4826]: else Jan 29 07:06:29 crc kubenswrapper[4826]: GRANT_DATABASE="*" Jan 29 07:06:29 crc kubenswrapper[4826]: fi Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: # going for maximum compatibility here: Jan 29 07:06:29 crc kubenswrapper[4826]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 29 07:06:29 crc kubenswrapper[4826]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 29 07:06:29 crc kubenswrapper[4826]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 29 07:06:29 crc kubenswrapper[4826]: # support updates Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: $MYSQL_CMD < logger="UnhandledError" Jan 29 07:06:29 crc kubenswrapper[4826]: E0129 07:06:29.922547 4826 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 29 07:06:29 crc kubenswrapper[4826]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: if [ -n "glance" ]; then Jan 29 07:06:29 crc kubenswrapper[4826]: GRANT_DATABASE="glance" Jan 29 07:06:29 crc kubenswrapper[4826]: else Jan 29 07:06:29 crc kubenswrapper[4826]: GRANT_DATABASE="*" Jan 29 07:06:29 crc kubenswrapper[4826]: fi Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: # going for maximum compatibility here: Jan 29 07:06:29 crc kubenswrapper[4826]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 29 07:06:29 crc kubenswrapper[4826]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 29 07:06:29 crc kubenswrapper[4826]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 29 07:06:29 crc kubenswrapper[4826]: # support updates Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: $MYSQL_CMD < logger="UnhandledError" Jan 29 07:06:29 crc kubenswrapper[4826]: E0129 07:06:29.924058 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell1-db-secret\\\" not found\"" pod="openstack/nova-cell1-431a-account-create-update-wgcpd" podUID="598e7579-2935-4fad-85de-031f0238611e" Jan 29 07:06:29 crc kubenswrapper[4826]: E0129 07:06:29.924102 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"glance-db-secret\\\" not found\"" pod="openstack/glance-97ad-account-create-update-qvhw8" podUID="3d7a8a86-77c4-4e06-bd13-3c2ba67c1d61" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.925914 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-431a-account-create-update-wgcpd"] Jan 29 07:06:29 crc kubenswrapper[4826]: E0129 07:06:29.946926 4826 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 29 07:06:29 crc kubenswrapper[4826]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: MYSQL_CMD="mysql -h -u root 
-P 3306" Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: if [ -n "nova_cell0" ]; then Jan 29 07:06:29 crc kubenswrapper[4826]: GRANT_DATABASE="nova_cell0" Jan 29 07:06:29 crc kubenswrapper[4826]: else Jan 29 07:06:29 crc kubenswrapper[4826]: GRANT_DATABASE="*" Jan 29 07:06:29 crc kubenswrapper[4826]: fi Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: # going for maximum compatibility here: Jan 29 07:06:29 crc kubenswrapper[4826]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 29 07:06:29 crc kubenswrapper[4826]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 29 07:06:29 crc kubenswrapper[4826]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 29 07:06:29 crc kubenswrapper[4826]: # support updates Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: $MYSQL_CMD < logger="UnhandledError" Jan 29 07:06:29 crc kubenswrapper[4826]: E0129 07:06:29.949014 4826 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 29 07:06:29 crc kubenswrapper[4826]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: if [ -n "nova_api" ]; then Jan 29 07:06:29 crc kubenswrapper[4826]: GRANT_DATABASE="nova_api" Jan 29 07:06:29 crc 
kubenswrapper[4826]: else Jan 29 07:06:29 crc kubenswrapper[4826]: GRANT_DATABASE="*" Jan 29 07:06:29 crc kubenswrapper[4826]: fi Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: # going for maximum compatibility here: Jan 29 07:06:29 crc kubenswrapper[4826]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 29 07:06:29 crc kubenswrapper[4826]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 29 07:06:29 crc kubenswrapper[4826]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 29 07:06:29 crc kubenswrapper[4826]: # support updates Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: $MYSQL_CMD < logger="UnhandledError" Jan 29 07:06:29 crc kubenswrapper[4826]: E0129 07:06:29.949086 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell0-db-secret\\\" not found\"" pod="openstack/nova-cell0-984c-account-create-update-cwjvb" podUID="b9024c38-a986-4d40-be2a-9432e2101586" Jan 29 07:06:29 crc kubenswrapper[4826]: E0129 07:06:29.949889 4826 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 29 07:06:29 crc kubenswrapper[4826]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 29 07:06:29 crc 
kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: if [ -n "neutron" ]; then Jan 29 07:06:29 crc kubenswrapper[4826]: GRANT_DATABASE="neutron" Jan 29 07:06:29 crc kubenswrapper[4826]: else Jan 29 07:06:29 crc kubenswrapper[4826]: GRANT_DATABASE="*" Jan 29 07:06:29 crc kubenswrapper[4826]: fi Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: # going for maximum compatibility here: Jan 29 07:06:29 crc kubenswrapper[4826]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 29 07:06:29 crc kubenswrapper[4826]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 29 07:06:29 crc kubenswrapper[4826]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 29 07:06:29 crc kubenswrapper[4826]: # support updates Jan 29 07:06:29 crc kubenswrapper[4826]: Jan 29 07:06:29 crc kubenswrapper[4826]: $MYSQL_CMD < logger="UnhandledError" Jan 29 07:06:29 crc kubenswrapper[4826]: E0129 07:06:29.950397 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-84d9-account-create-update-w85wz" podUID="83c32a1d-d03e-4b49-8ac3-0c447212ce2a" Jan 29 07:06:29 crc kubenswrapper[4826]: E0129 07:06:29.951342 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"neutron-db-secret\\\" not found\"" pod="openstack/neutron-2845-account-create-update-r9ssg" podUID="4f239c3c-6b60-459b-a4a2-5deb5288161a" Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.960985 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.965048 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 
07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.985797 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-2845-account-create-update-r9ssg"] Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.985857 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-984c-account-create-update-cwjvb"] Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.989418 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-84d9-account-create-update-w85wz"] Jan 29 07:06:29 crc kubenswrapper[4826]: I0129 07:06:29.999329 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-jmdtd"] Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.004142 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-jmdtd"] Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.066467 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-zrzbq"] Jan 29 07:06:30 crc kubenswrapper[4826]: E0129 07:06:30.066886 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af857248-0a50-4850-93dd-c8c4e5d8e5ea" containerName="ovsdbserver-nb" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.066901 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="af857248-0a50-4850-93dd-c8c4e5d8e5ea" containerName="ovsdbserver-nb" Jan 29 07:06:30 crc kubenswrapper[4826]: E0129 07:06:30.066917 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6b8cca1-3968-49aa-b4ed-88d9d4075223" containerName="openstack-network-exporter" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.066923 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6b8cca1-3968-49aa-b4ed-88d9d4075223" containerName="openstack-network-exporter" Jan 29 07:06:30 crc kubenswrapper[4826]: E0129 07:06:30.066934 4826 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3a20458e-fa0a-4aa2-a59a-70ebb523a3d9" containerName="openstack-network-exporter" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.066940 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a20458e-fa0a-4aa2-a59a-70ebb523a3d9" containerName="openstack-network-exporter" Jan 29 07:06:30 crc kubenswrapper[4826]: E0129 07:06:30.066958 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af857248-0a50-4850-93dd-c8c4e5d8e5ea" containerName="openstack-network-exporter" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.066963 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="af857248-0a50-4850-93dd-c8c4e5d8e5ea" containerName="openstack-network-exporter" Jan 29 07:06:30 crc kubenswrapper[4826]: E0129 07:06:30.066974 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b099774a-b73b-4803-9259-98b9e1b5933b" containerName="dnsmasq-dns" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.066979 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="b099774a-b73b-4803-9259-98b9e1b5933b" containerName="dnsmasq-dns" Jan 29 07:06:30 crc kubenswrapper[4826]: E0129 07:06:30.066995 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b099774a-b73b-4803-9259-98b9e1b5933b" containerName="init" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.067001 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="b099774a-b73b-4803-9259-98b9e1b5933b" containerName="init" Jan 29 07:06:30 crc kubenswrapper[4826]: E0129 07:06:30.067018 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a20458e-fa0a-4aa2-a59a-70ebb523a3d9" containerName="ovsdbserver-sb" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.067025 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a20458e-fa0a-4aa2-a59a-70ebb523a3d9" containerName="ovsdbserver-sb" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.067217 4826 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d6b8cca1-3968-49aa-b4ed-88d9d4075223" containerName="openstack-network-exporter" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.067229 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a20458e-fa0a-4aa2-a59a-70ebb523a3d9" containerName="openstack-network-exporter" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.067242 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="af857248-0a50-4850-93dd-c8c4e5d8e5ea" containerName="openstack-network-exporter" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.067252 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="b099774a-b73b-4803-9259-98b9e1b5933b" containerName="dnsmasq-dns" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.067266 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="af857248-0a50-4850-93dd-c8c4e5d8e5ea" containerName="ovsdbserver-nb" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.067274 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a20458e-fa0a-4aa2-a59a-70ebb523a3d9" containerName="ovsdbserver-sb" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.067888 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zrzbq" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.070074 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.091250 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zrzbq"] Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.104272 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7aa6497d-3379-49b2-887e-6ed46928266e-operator-scripts\") pod \"root-account-create-update-zrzbq\" (UID: \"7aa6497d-3379-49b2-887e-6ed46928266e\") " pod="openstack/root-account-create-update-zrzbq" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.104319 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lspb9\" (UniqueName: \"kubernetes.io/projected/7aa6497d-3379-49b2-887e-6ed46928266e-kube-api-access-lspb9\") pod \"root-account-create-update-zrzbq\" (UID: \"7aa6497d-3379-49b2-887e-6ed46928266e\") " pod="openstack/root-account-create-update-zrzbq" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.117160 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-55957b69d9-prlpm" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.205534 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0f5ad8c-072d-4994-acb3-e898c0981eef-public-tls-certs\") pod \"b0f5ad8c-072d-4994-acb3-e898c0981eef\" (UID: \"b0f5ad8c-072d-4994-acb3-e898c0981eef\") " Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.205652 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b0f5ad8c-072d-4994-acb3-e898c0981eef-etc-swift\") pod \"b0f5ad8c-072d-4994-acb3-e898c0981eef\" (UID: \"b0f5ad8c-072d-4994-acb3-e898c0981eef\") " Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.205705 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm8sf\" (UniqueName: \"kubernetes.io/projected/b0f5ad8c-072d-4994-acb3-e898c0981eef-kube-api-access-hm8sf\") pod \"b0f5ad8c-072d-4994-acb3-e898c0981eef\" (UID: \"b0f5ad8c-072d-4994-acb3-e898c0981eef\") " Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.205789 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0f5ad8c-072d-4994-acb3-e898c0981eef-run-httpd\") pod \"b0f5ad8c-072d-4994-acb3-e898c0981eef\" (UID: \"b0f5ad8c-072d-4994-acb3-e898c0981eef\") " Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.205813 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0f5ad8c-072d-4994-acb3-e898c0981eef-combined-ca-bundle\") pod \"b0f5ad8c-072d-4994-acb3-e898c0981eef\" (UID: \"b0f5ad8c-072d-4994-acb3-e898c0981eef\") " Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.205838 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/b0f5ad8c-072d-4994-acb3-e898c0981eef-log-httpd\") pod \"b0f5ad8c-072d-4994-acb3-e898c0981eef\" (UID: \"b0f5ad8c-072d-4994-acb3-e898c0981eef\") " Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.205861 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0f5ad8c-072d-4994-acb3-e898c0981eef-config-data\") pod \"b0f5ad8c-072d-4994-acb3-e898c0981eef\" (UID: \"b0f5ad8c-072d-4994-acb3-e898c0981eef\") " Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.205908 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0f5ad8c-072d-4994-acb3-e898c0981eef-internal-tls-certs\") pod \"b0f5ad8c-072d-4994-acb3-e898c0981eef\" (UID: \"b0f5ad8c-072d-4994-acb3-e898c0981eef\") " Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.206118 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7aa6497d-3379-49b2-887e-6ed46928266e-operator-scripts\") pod \"root-account-create-update-zrzbq\" (UID: \"7aa6497d-3379-49b2-887e-6ed46928266e\") " pod="openstack/root-account-create-update-zrzbq" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.206152 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lspb9\" (UniqueName: \"kubernetes.io/projected/7aa6497d-3379-49b2-887e-6ed46928266e-kube-api-access-lspb9\") pod \"root-account-create-update-zrzbq\" (UID: \"7aa6497d-3379-49b2-887e-6ed46928266e\") " pod="openstack/root-account-create-update-zrzbq" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.208857 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0f5ad8c-072d-4994-acb3-e898c0981eef-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b0f5ad8c-072d-4994-acb3-e898c0981eef" (UID: 
"b0f5ad8c-072d-4994-acb3-e898c0981eef"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.210366 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0f5ad8c-072d-4994-acb3-e898c0981eef-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b0f5ad8c-072d-4994-acb3-e898c0981eef" (UID: "b0f5ad8c-072d-4994-acb3-e898c0981eef"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.210485 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7aa6497d-3379-49b2-887e-6ed46928266e-operator-scripts\") pod \"root-account-create-update-zrzbq\" (UID: \"7aa6497d-3379-49b2-887e-6ed46928266e\") " pod="openstack/root-account-create-update-zrzbq" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.219096 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0f5ad8c-072d-4994-acb3-e898c0981eef-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "b0f5ad8c-072d-4994-acb3-e898c0981eef" (UID: "b0f5ad8c-072d-4994-acb3-e898c0981eef"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.246467 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0f5ad8c-072d-4994-acb3-e898c0981eef-kube-api-access-hm8sf" (OuterVolumeSpecName: "kube-api-access-hm8sf") pod "b0f5ad8c-072d-4994-acb3-e898c0981eef" (UID: "b0f5ad8c-072d-4994-acb3-e898c0981eef"). InnerVolumeSpecName "kube-api-access-hm8sf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.265119 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lspb9\" (UniqueName: \"kubernetes.io/projected/7aa6497d-3379-49b2-887e-6ed46928266e-kube-api-access-lspb9\") pod \"root-account-create-update-zrzbq\" (UID: \"7aa6497d-3379-49b2-887e-6ed46928266e\") " pod="openstack/root-account-create-update-zrzbq" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.285897 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0f5ad8c-072d-4994-acb3-e898c0981eef-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b0f5ad8c-072d-4994-acb3-e898c0981eef" (UID: "b0f5ad8c-072d-4994-acb3-e898c0981eef"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.293431 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0f5ad8c-072d-4994-acb3-e898c0981eef-config-data" (OuterVolumeSpecName: "config-data") pod "b0f5ad8c-072d-4994-acb3-e898c0981eef" (UID: "b0f5ad8c-072d-4994-acb3-e898c0981eef"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.307987 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hm8sf\" (UniqueName: \"kubernetes.io/projected/b0f5ad8c-072d-4994-acb3-e898c0981eef-kube-api-access-hm8sf\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.308024 4826 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0f5ad8c-072d-4994-acb3-e898c0981eef-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.308036 4826 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0f5ad8c-072d-4994-acb3-e898c0981eef-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.308044 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0f5ad8c-072d-4994-acb3-e898c0981eef-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.308052 4826 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0f5ad8c-072d-4994-acb3-e898c0981eef-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.308063 4826 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b0f5ad8c-072d-4994-acb3-e898c0981eef-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.311618 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0f5ad8c-072d-4994-acb3-e898c0981eef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0f5ad8c-072d-4994-acb3-e898c0981eef" (UID: "b0f5ad8c-072d-4994-acb3-e898c0981eef"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.324258 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2cgbk" event={"ID":"b1323800-5ab2-43e7-baa9-9a6d4922224f","Type":"ContainerStarted","Data":"ea2bbee80a3f82010b9155ffafdd58c48ebba73a124786d05f69a9a05b942d3f"} Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.338946 4826 generic.go:334] "Generic (PLEG): container finished" podID="b0f5ad8c-072d-4994-acb3-e898c0981eef" containerID="b6663b83c1263d117ea6eb33e490b5c99d06f2282c774e74f6560f8ad6d5bc74" exitCode=0 Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.339039 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-55957b69d9-prlpm" event={"ID":"b0f5ad8c-072d-4994-acb3-e898c0981eef","Type":"ContainerDied","Data":"b6663b83c1263d117ea6eb33e490b5c99d06f2282c774e74f6560f8ad6d5bc74"} Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.339053 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-55957b69d9-prlpm" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.339071 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-55957b69d9-prlpm" event={"ID":"b0f5ad8c-072d-4994-acb3-e898c0981eef","Type":"ContainerDied","Data":"f9b7bf9412422f2b31d3bfeb5e4e73dc504a7b77c6e6f1d010e49868c7b50853"} Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.339091 4826 scope.go:117] "RemoveContainer" containerID="b6663b83c1263d117ea6eb33e490b5c99d06f2282c774e74f6560f8ad6d5bc74" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.363165 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-984c-account-create-update-cwjvb" event={"ID":"b9024c38-a986-4d40-be2a-9432e2101586","Type":"ContainerStarted","Data":"b217fcfefd3776d96c4fbef5d2ab8ed3d80892baca23fd4f605e775eecac6abb"} Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.373492 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2f64-account-create-update-mndz2" event={"ID":"68d92d9a-3db4-4400-ac87-6334d2be6184","Type":"ContainerStarted","Data":"c2ca14b620e334ac5faacd79336f8d8422419117bb8224997af27123f8bf06e2"} Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.385846 4826 scope.go:117] "RemoveContainer" containerID="bbf08c9f4b99f6403fa0c9818497bc87b482aca717f1d209262d98a1ff4df06d" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.399802 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-9wnpk" event={"ID":"b099774a-b73b-4803-9259-98b9e1b5933b","Type":"ContainerDied","Data":"89c52ecfc033b538d52eaf8a4b988c05f22e16c202532a2218a058c064a11eea"} Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.399923 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-9wnpk" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.416479 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="1794f620-102a-4b9c-9097-713579ec55ad" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.417255 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0f5ad8c-072d-4994-acb3-e898c0981eef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.425021 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zrzbq" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.438454 4826 generic.go:334] "Generic (PLEG): container finished" podID="bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3" containerID="ba8beccf426f4dc7182586b8d6e4748afa9e641836ccaafd3e63a134a7f45556" exitCode=0 Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.438535 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3","Type":"ContainerDied","Data":"ba8beccf426f4dc7182586b8d6e4748afa9e641836ccaafd3e63a134a7f45556"} Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.446052 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3ce2-account-create-update-ls5ln" event={"ID":"489ffa8d-9021-4291-b68b-df3d3a146fe1","Type":"ContainerStarted","Data":"e1831f16961b415cea983912bc4ddcbb010a7e5f2d6e771f2355edc4bd2eadd2"} Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.458008 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0f5ad8c-072d-4994-acb3-e898c0981eef-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") 
pod "b0f5ad8c-072d-4994-acb3-e898c0981eef" (UID: "b0f5ad8c-072d-4994-acb3-e898c0981eef"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.471368 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-4cd8-account-create-update-ngljz" event={"ID":"5edf8927-1593-48b4-b330-9413bcfc733d","Type":"ContainerStarted","Data":"492cbd01653d97dd5c6f244ea9545cbab23c89755a18406dc95a15232e5fecd5"} Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.486232 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-84d9-account-create-update-w85wz" event={"ID":"83c32a1d-d03e-4b49-8ac3-0c447212ce2a","Type":"ContainerStarted","Data":"41cd3ed66aee014f35c2d8a7d1a8c0340ed5b3ff392979cdf54789856e0e3a65"} Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.502720 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-97ad-account-create-update-qvhw8" event={"ID":"3d7a8a86-77c4-4e06-bd13-3c2ba67c1d61","Type":"ContainerStarted","Data":"79080789e92364db65af9938a20f9af5497b777a31501eb2fa1f8636deda84bb"} Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.525456 4826 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0f5ad8c-072d-4994-acb3-e898c0981eef-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.544150 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-431a-account-create-update-wgcpd" event={"ID":"598e7579-2935-4fad-85de-031f0238611e","Type":"ContainerStarted","Data":"60b7beaa553c3fd9678868b6bae1fa86974ef55eb5589b3e9a8fe60f3e9d9fb3"} Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.610794 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2845-account-create-update-r9ssg" 
event={"ID":"4f239c3c-6b60-459b-a4a2-5deb5288161a","Type":"ContainerStarted","Data":"beeda0c59e8334b851ceac5d1c3d99f660feb88cc5276b3adcb764d435a1093e"} Jan 29 07:06:30 crc kubenswrapper[4826]: E0129 07:06:30.633789 4826 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 29 07:06:30 crc kubenswrapper[4826]: E0129 07:06:30.647914 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1794f620-102a-4b9c-9097-713579ec55ad-config-data podName:1794f620-102a-4b9c-9097-713579ec55ad nodeName:}" failed. No retries permitted until 2026-01-29 07:06:34.647880603 +0000 UTC m=+1378.509673672 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/1794f620-102a-4b9c-9097-713579ec55ad-config-data") pod "rabbitmq-server-0" (UID: "1794f620-102a-4b9c-9097-713579ec55ad") : configmap "rabbitmq-config-data" not found Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.737959 4826 scope.go:117] "RemoveContainer" containerID="b6663b83c1263d117ea6eb33e490b5c99d06f2282c774e74f6560f8ad6d5bc74" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.740725 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 07:06:30 crc kubenswrapper[4826]: E0129 07:06:30.740814 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6663b83c1263d117ea6eb33e490b5c99d06f2282c774e74f6560f8ad6d5bc74\": container with ID starting with b6663b83c1263d117ea6eb33e490b5c99d06f2282c774e74f6560f8ad6d5bc74 not found: ID does not exist" containerID="b6663b83c1263d117ea6eb33e490b5c99d06f2282c774e74f6560f8ad6d5bc74" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.740839 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6663b83c1263d117ea6eb33e490b5c99d06f2282c774e74f6560f8ad6d5bc74"} err="failed to get container status \"b6663b83c1263d117ea6eb33e490b5c99d06f2282c774e74f6560f8ad6d5bc74\": rpc error: code = NotFound desc = could not find container \"b6663b83c1263d117ea6eb33e490b5c99d06f2282c774e74f6560f8ad6d5bc74\": container with ID starting with b6663b83c1263d117ea6eb33e490b5c99d06f2282c774e74f6560f8ad6d5bc74 not found: ID does not exist" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.740858 4826 scope.go:117] "RemoveContainer" containerID="bbf08c9f4b99f6403fa0c9818497bc87b482aca717f1d209262d98a1ff4df06d" Jan 29 07:06:30 crc kubenswrapper[4826]: E0129 07:06:30.746017 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbf08c9f4b99f6403fa0c9818497bc87b482aca717f1d209262d98a1ff4df06d\": container with ID starting with bbf08c9f4b99f6403fa0c9818497bc87b482aca717f1d209262d98a1ff4df06d not found: ID does not exist" containerID="bbf08c9f4b99f6403fa0c9818497bc87b482aca717f1d209262d98a1ff4df06d" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.746068 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbf08c9f4b99f6403fa0c9818497bc87b482aca717f1d209262d98a1ff4df06d"} 
err="failed to get container status \"bbf08c9f4b99f6403fa0c9818497bc87b482aca717f1d209262d98a1ff4df06d\": rpc error: code = NotFound desc = could not find container \"bbf08c9f4b99f6403fa0c9818497bc87b482aca717f1d209262d98a1ff4df06d\": container with ID starting with bbf08c9f4b99f6403fa0c9818497bc87b482aca717f1d209262d98a1ff4df06d not found: ID does not exist" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.746098 4826 scope.go:117] "RemoveContainer" containerID="85d5f42395b3063c15f60fc73e13a264e9f6ec4017758aed23359d39c4b8b103" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.759596 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-55957b69d9-prlpm"] Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.775548 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-55957b69d9-prlpm"] Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.823632 4826 scope.go:117] "RemoveContainer" containerID="3ce2a3dac061c88d9174949819ebe06f54e7dd354603e5038aedc5ba03f8bdd8" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.827414 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="071ae6a1-9762-465d-8499-69ec962f08b7" path="/var/lib/kubelet/pods/071ae6a1-9762-465d-8499-69ec962f08b7/volumes" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.829670 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a20458e-fa0a-4aa2-a59a-70ebb523a3d9" path="/var/lib/kubelet/pods/3a20458e-fa0a-4aa2-a59a-70ebb523a3d9/volumes" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.830369 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acc386ea-3778-41d9-9c92-3ebc8f96f700" path="/var/lib/kubelet/pods/acc386ea-3778-41d9-9c92-3ebc8f96f700/volumes" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.831028 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af857248-0a50-4850-93dd-c8c4e5d8e5ea" 
path="/var/lib/kubelet/pods/af857248-0a50-4850-93dd-c8c4e5d8e5ea/volumes" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.846461 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0f5ad8c-072d-4994-acb3-e898c0981eef" path="/var/lib/kubelet/pods/b0f5ad8c-072d-4994-acb3-e898c0981eef/volumes" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.847572 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6b8cca1-3968-49aa-b4ed-88d9d4075223" path="/var/lib/kubelet/pods/d6b8cca1-3968-49aa-b4ed-88d9d4075223/volumes" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.848334 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec91ae50-7020-40a3-bc50-16d3360b0d10" path="/var/lib/kubelet/pods/ec91ae50-7020-40a3-bc50-16d3360b0d10/volumes" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.853724 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3-config-data\") pod \"bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3\" (UID: \"bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3\") " Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.853852 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkvx5\" (UniqueName: \"kubernetes.io/projected/bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3-kube-api-access-tkvx5\") pod \"bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3\" (UID: \"bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3\") " Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.853907 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3-combined-ca-bundle\") pod \"bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3\" (UID: \"bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3\") " Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.853944 4826 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3-nova-novncproxy-tls-certs\") pod \"bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3\" (UID: \"bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3\") " Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.854038 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3-vencrypt-tls-certs\") pod \"bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3\" (UID: \"bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3\") " Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.859567 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-9wnpk"] Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.859624 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-9wnpk"] Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.882540 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3-kube-api-access-tkvx5" (OuterVolumeSpecName: "kube-api-access-tkvx5") pod "bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3" (UID: "bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3"). InnerVolumeSpecName "kube-api-access-tkvx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.910986 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3" (UID: "bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.925732 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3-config-data" (OuterVolumeSpecName: "config-data") pod "bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3" (UID: "bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.956169 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkvx5\" (UniqueName: \"kubernetes.io/projected/bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3-kube-api-access-tkvx5\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.956419 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.956523 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.965484 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3" (UID: "bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3"). InnerVolumeSpecName "nova-novncproxy-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.977538 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="082eb821-de0c-462e-9653-b1c80c8e1d2c" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.164:8776/healthcheck\": read tcp 10.217.0.2:58798->10.217.0.164:8776: read: connection reset by peer" Jan 29 07:06:30 crc kubenswrapper[4826]: I0129 07:06:30.982945 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3" (UID: "bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:30.999916 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jrjtp" Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:30.999969 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jrjtp" Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:31.033026 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-2cgbk" Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:31.067492 4826 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:31.067535 4826 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:31.168785 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1323800-5ab2-43e7-baa9-9a6d4922224f-operator-scripts\") pod \"b1323800-5ab2-43e7-baa9-9a6d4922224f\" (UID: \"b1323800-5ab2-43e7-baa9-9a6d4922224f\") " Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:31.169372 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k957d\" (UniqueName: \"kubernetes.io/projected/b1323800-5ab2-43e7-baa9-9a6d4922224f-kube-api-access-k957d\") pod \"b1323800-5ab2-43e7-baa9-9a6d4922224f\" (UID: \"b1323800-5ab2-43e7-baa9-9a6d4922224f\") " Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:31.169055 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1323800-5ab2-43e7-baa9-9a6d4922224f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b1323800-5ab2-43e7-baa9-9a6d4922224f" (UID: "b1323800-5ab2-43e7-baa9-9a6d4922224f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:31.170704 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1323800-5ab2-43e7-baa9-9a6d4922224f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:31.182471 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1323800-5ab2-43e7-baa9-9a6d4922224f-kube-api-access-k957d" (OuterVolumeSpecName: "kube-api-access-k957d") pod "b1323800-5ab2-43e7-baa9-9a6d4922224f" (UID: "b1323800-5ab2-43e7-baa9-9a6d4922224f"). InnerVolumeSpecName "kube-api-access-k957d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:06:31 crc kubenswrapper[4826]: E0129 07:06:31.199487 4826 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02d44a2c_0465_48d4_96a0_e248904d3213.slice/crio-5df61b41361edf81ff23b5a55a94a9be818d468a21b4dd7e9da2a738010ded17\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod082eb821_de0c_462e_9653_b1c80c8e1d2c.slice/crio-3a8934b6ad3dfc62e2d7a18edca3b16ec92c490c12b90aeaef855441c2f63a2c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod082eb821_de0c_462e_9653_b1c80c8e1d2c.slice/crio-conmon-3a8934b6ad3dfc62e2d7a18edca3b16ec92c490c12b90aeaef855441c2f63a2c.scope\": RecentStats: unable to find data in memory cache]" Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:31.282448 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k957d\" (UniqueName: \"kubernetes.io/projected/b1323800-5ab2-43e7-baa9-9a6d4922224f-kube-api-access-k957d\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:31 crc 
kubenswrapper[4826]: I0129 07:06:31.434927 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-4cd8-account-create-update-ngljz" Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:31.485704 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44fdt\" (UniqueName: \"kubernetes.io/projected/5edf8927-1593-48b4-b330-9413bcfc733d-kube-api-access-44fdt\") pod \"5edf8927-1593-48b4-b330-9413bcfc733d\" (UID: \"5edf8927-1593-48b4-b330-9413bcfc733d\") " Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:31.485801 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5edf8927-1593-48b4-b330-9413bcfc733d-operator-scripts\") pod \"5edf8927-1593-48b4-b330-9413bcfc733d\" (UID: \"5edf8927-1593-48b4-b330-9413bcfc733d\") " Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:31.486861 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5edf8927-1593-48b4-b330-9413bcfc733d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5edf8927-1593-48b4-b330-9413bcfc733d" (UID: "5edf8927-1593-48b4-b330-9413bcfc733d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:31.490732 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5edf8927-1593-48b4-b330-9413bcfc733d-kube-api-access-44fdt" (OuterVolumeSpecName: "kube-api-access-44fdt") pod "5edf8927-1593-48b4-b330-9413bcfc733d" (UID: "5edf8927-1593-48b4-b330-9413bcfc733d"). InnerVolumeSpecName "kube-api-access-44fdt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:06:31 crc kubenswrapper[4826]: E0129 07:06:31.571135 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cafb58d89e7c6ef227f4fff8321a876edaab7b5f0a9dcbdb25a9840e49c6af78 is running failed: container process not found" containerID="cafb58d89e7c6ef227f4fff8321a876edaab7b5f0a9dcbdb25a9840e49c6af78" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 07:06:31 crc kubenswrapper[4826]: E0129 07:06:31.571727 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cafb58d89e7c6ef227f4fff8321a876edaab7b5f0a9dcbdb25a9840e49c6af78 is running failed: container process not found" containerID="cafb58d89e7c6ef227f4fff8321a876edaab7b5f0a9dcbdb25a9840e49c6af78" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 07:06:31 crc kubenswrapper[4826]: E0129 07:06:31.572011 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cafb58d89e7c6ef227f4fff8321a876edaab7b5f0a9dcbdb25a9840e49c6af78 is running failed: container process not found" containerID="cafb58d89e7c6ef227f4fff8321a876edaab7b5f0a9dcbdb25a9840e49c6af78" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 07:06:31 crc kubenswrapper[4826]: E0129 07:06:31.572097 4826 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cafb58d89e7c6ef227f4fff8321a876edaab7b5f0a9dcbdb25a9840e49c6af78 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="e27383b1-aba6-4c25-9d4b-3b9cceb2b739" containerName="nova-cell1-conductor-conductor" Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:31.588762 4826 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-44fdt\" (UniqueName: \"kubernetes.io/projected/5edf8927-1593-48b4-b330-9413bcfc733d-kube-api-access-44fdt\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:31.588931 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5edf8927-1593-48b4-b330-9413bcfc733d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:31.671406 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:31.671768 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3","Type":"ContainerDied","Data":"743d704c4f345bd59582de57fcf653935af47cac09f32902456426b73fda301a"} Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:31.671823 4826 scope.go:117] "RemoveContainer" containerID="ba8beccf426f4dc7182586b8d6e4748afa9e641836ccaafd3e63a134a7f45556" Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:31.688961 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3ce2-account-create-update-ls5ln" event={"ID":"489ffa8d-9021-4291-b68b-df3d3a146fe1","Type":"ContainerDied","Data":"e1831f16961b415cea983912bc4ddcbb010a7e5f2d6e771f2355edc4bd2eadd2"} Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:31.689019 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1831f16961b415cea983912bc4ddcbb010a7e5f2d6e771f2355edc4bd2eadd2" Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:31.691934 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2845-account-create-update-r9ssg" event={"ID":"4f239c3c-6b60-459b-a4a2-5deb5288161a","Type":"ContainerDied","Data":"beeda0c59e8334b851ceac5d1c3d99f660feb88cc5276b3adcb764d435a1093e"} 
Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:31.691959 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="beeda0c59e8334b851ceac5d1c3d99f660feb88cc5276b3adcb764d435a1093e" Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:31.694311 4826 generic.go:334] "Generic (PLEG): container finished" podID="082eb821-de0c-462e-9653-b1c80c8e1d2c" containerID="3a8934b6ad3dfc62e2d7a18edca3b16ec92c490c12b90aeaef855441c2f63a2c" exitCode=0 Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:31.694351 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"082eb821-de0c-462e-9653-b1c80c8e1d2c","Type":"ContainerDied","Data":"3a8934b6ad3dfc62e2d7a18edca3b16ec92c490c12b90aeaef855441c2f63a2c"} Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:31.699500 4826 generic.go:334] "Generic (PLEG): container finished" podID="e27383b1-aba6-4c25-9d4b-3b9cceb2b739" containerID="cafb58d89e7c6ef227f4fff8321a876edaab7b5f0a9dcbdb25a9840e49c6af78" exitCode=0 Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:31.699545 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e27383b1-aba6-4c25-9d4b-3b9cceb2b739","Type":"ContainerDied","Data":"cafb58d89e7c6ef227f4fff8321a876edaab7b5f0a9dcbdb25a9840e49c6af78"} Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:31.702262 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2cgbk" event={"ID":"b1323800-5ab2-43e7-baa9-9a6d4922224f","Type":"ContainerDied","Data":"ea2bbee80a3f82010b9155ffafdd58c48ebba73a124786d05f69a9a05b942d3f"} Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:31.702382 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-2cgbk" Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:31.706638 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-4cd8-account-create-update-ngljz" event={"ID":"5edf8927-1593-48b4-b330-9413bcfc733d","Type":"ContainerDied","Data":"492cbd01653d97dd5c6f244ea9545cbab23c89755a18406dc95a15232e5fecd5"} Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:31.706747 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-4cd8-account-create-update-ngljz" Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:31.715818 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="9349a8ff-2652-4dcf-89d9-6d440269be8c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": read tcp 10.217.0.2:37990->10.217.0.203:8775: read: connection reset by peer" Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:31.715864 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="9349a8ff-2652-4dcf-89d9-6d440269be8c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": read tcp 10.217.0.2:37986->10.217.0.203:8775: read: connection reset by peer" Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:31.775169 4826 generic.go:334] "Generic (PLEG): container finished" podID="454b9218-d564-4664-b1dd-4435fa9c60b7" containerID="1937919f8dd64b752c871037ec07858c20e0540d1d4d7464eab4f0a0259be556" exitCode=0 Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:31.775286 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"454b9218-d564-4664-b1dd-4435fa9c60b7","Type":"ContainerDied","Data":"1937919f8dd64b752c871037ec07858c20e0540d1d4d7464eab4f0a0259be556"} Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:31.775378 4826 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"454b9218-d564-4664-b1dd-4435fa9c60b7","Type":"ContainerDied","Data":"b9150e6fb524e0cfb0c782223a739c11df1184d94285f30bcb6c9b659080949c"} Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:31.775391 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9150e6fb524e0cfb0c782223a739c11df1184d94285f30bcb6c9b659080949c" Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:31.777010 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2f64-account-create-update-mndz2" event={"ID":"68d92d9a-3db4-4400-ac87-6334d2be6184","Type":"ContainerDied","Data":"c2ca14b620e334ac5faacd79336f8d8422419117bb8224997af27123f8bf06e2"} Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:31.777080 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2ca14b620e334ac5faacd79336f8d8422419117bb8224997af27123f8bf06e2" Jan 29 07:06:31 crc kubenswrapper[4826]: E0129 07:06:31.798970 4826 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 29 07:06:31 crc kubenswrapper[4826]: E0129 07:06:31.799024 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-config-data podName:0da3bc6b-99a0-4de9-9479-5aaef8bfd81c nodeName:}" failed. No retries permitted until 2026-01-29 07:06:35.799011991 +0000 UTC m=+1379.660805060 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-config-data") pod "rabbitmq-cell1-server-0" (UID: "0da3bc6b-99a0-4de9-9479-5aaef8bfd81c") : configmap "rabbitmq-cell1-config-data" not found Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:31.934541 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-596b9c7d4-2m8gc" podUID="ca39ae08-94df-4778-8203-bcff5806eff0" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.159:9311/healthcheck\": read tcp 10.217.0.2:52608->10.217.0.159:9311: read: connection reset by peer" Jan 29 07:06:31 crc kubenswrapper[4826]: I0129 07:06:31.934565 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-596b9c7d4-2m8gc" podUID="ca39ae08-94df-4778-8203-bcff5806eff0" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.159:9311/healthcheck\": read tcp 10.217.0.2:52622->10.217.0.159:9311: read: connection reset by peer" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.017195 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2845-account-create-update-r9ssg" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.053281 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-3ce2-account-create-update-ls5ln" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.066334 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.093251 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.106417 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skhqw\" (UniqueName: \"kubernetes.io/projected/4f239c3c-6b60-459b-a4a2-5deb5288161a-kube-api-access-skhqw\") pod \"4f239c3c-6b60-459b-a4a2-5deb5288161a\" (UID: \"4f239c3c-6b60-459b-a4a2-5deb5288161a\") " Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.106521 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/489ffa8d-9021-4291-b68b-df3d3a146fe1-operator-scripts\") pod \"489ffa8d-9021-4291-b68b-df3d3a146fe1\" (UID: \"489ffa8d-9021-4291-b68b-df3d3a146fe1\") " Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.106561 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f239c3c-6b60-459b-a4a2-5deb5288161a-operator-scripts\") pod \"4f239c3c-6b60-459b-a4a2-5deb5288161a\" (UID: \"4f239c3c-6b60-459b-a4a2-5deb5288161a\") " Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.106608 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dxzz\" (UniqueName: \"kubernetes.io/projected/489ffa8d-9021-4291-b68b-df3d3a146fe1-kube-api-access-6dxzz\") pod \"489ffa8d-9021-4291-b68b-df3d3a146fe1\" (UID: \"489ffa8d-9021-4291-b68b-df3d3a146fe1\") " Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.107385 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/root-account-create-update-2cgbk"] Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.108084 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/489ffa8d-9021-4291-b68b-df3d3a146fe1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "489ffa8d-9021-4291-b68b-df3d3a146fe1" (UID: "489ffa8d-9021-4291-b68b-df3d3a146fe1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.108635 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/489ffa8d-9021-4291-b68b-df3d3a146fe1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.108975 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f239c3c-6b60-459b-a4a2-5deb5288161a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4f239c3c-6b60-459b-a4a2-5deb5288161a" (UID: "4f239c3c-6b60-459b-a4a2-5deb5288161a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.116326 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-2cgbk"] Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.119648 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jrjtp" podUID="bf226f75-106f-4f53-b33b-59f9ebbbefc3" containerName="registry-server" probeResult="failure" output=< Jan 29 07:06:32 crc kubenswrapper[4826]: timeout: failed to connect service ":50051" within 1s Jan 29 07:06:32 crc kubenswrapper[4826]: > Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.131179 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.145534 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/489ffa8d-9021-4291-b68b-df3d3a146fe1-kube-api-access-6dxzz" (OuterVolumeSpecName: "kube-api-access-6dxzz") pod "489ffa8d-9021-4291-b68b-df3d3a146fe1" (UID: "489ffa8d-9021-4291-b68b-df3d3a146fe1"). InnerVolumeSpecName "kube-api-access-6dxzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.145651 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f239c3c-6b60-459b-a4a2-5deb5288161a-kube-api-access-skhqw" (OuterVolumeSpecName: "kube-api-access-skhqw") pod "4f239c3c-6b60-459b-a4a2-5deb5288161a" (UID: "4f239c3c-6b60-459b-a4a2-5deb5288161a"). InnerVolumeSpecName "kube-api-access-skhqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.150109 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-4cd8-account-create-update-ngljz"] Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.165510 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-4cd8-account-create-update-ngljz"] Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.171815 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-2f64-account-create-update-mndz2" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.217156 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68d92d9a-3db4-4400-ac87-6334d2be6184-operator-scripts\") pod \"68d92d9a-3db4-4400-ac87-6334d2be6184\" (UID: \"68d92d9a-3db4-4400-ac87-6334d2be6184\") " Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.217212 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/454b9218-d564-4664-b1dd-4435fa9c60b7-config-data-generated\") pod \"454b9218-d564-4664-b1dd-4435fa9c60b7\" (UID: \"454b9218-d564-4664-b1dd-4435fa9c60b7\") " Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.217272 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/454b9218-d564-4664-b1dd-4435fa9c60b7-operator-scripts\") pod \"454b9218-d564-4664-b1dd-4435fa9c60b7\" (UID: \"454b9218-d564-4664-b1dd-4435fa9c60b7\") " Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.217336 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/454b9218-d564-4664-b1dd-4435fa9c60b7-config-data-default\") pod \"454b9218-d564-4664-b1dd-4435fa9c60b7\" (UID: \"454b9218-d564-4664-b1dd-4435fa9c60b7\") " Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.217372 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/454b9218-d564-4664-b1dd-4435fa9c60b7-kolla-config\") pod \"454b9218-d564-4664-b1dd-4435fa9c60b7\" (UID: \"454b9218-d564-4664-b1dd-4435fa9c60b7\") " Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.217401 4826 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-25s4r\" (UniqueName: \"kubernetes.io/projected/454b9218-d564-4664-b1dd-4435fa9c60b7-kube-api-access-25s4r\") pod \"454b9218-d564-4664-b1dd-4435fa9c60b7\" (UID: \"454b9218-d564-4664-b1dd-4435fa9c60b7\") " Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.217430 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/454b9218-d564-4664-b1dd-4435fa9c60b7-galera-tls-certs\") pod \"454b9218-d564-4664-b1dd-4435fa9c60b7\" (UID: \"454b9218-d564-4664-b1dd-4435fa9c60b7\") " Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.217493 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/454b9218-d564-4664-b1dd-4435fa9c60b7-combined-ca-bundle\") pod \"454b9218-d564-4664-b1dd-4435fa9c60b7\" (UID: \"454b9218-d564-4664-b1dd-4435fa9c60b7\") " Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.217525 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"454b9218-d564-4664-b1dd-4435fa9c60b7\" (UID: \"454b9218-d564-4664-b1dd-4435fa9c60b7\") " Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.217547 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c9fk\" (UniqueName: \"kubernetes.io/projected/68d92d9a-3db4-4400-ac87-6334d2be6184-kube-api-access-7c9fk\") pod \"68d92d9a-3db4-4400-ac87-6334d2be6184\" (UID: \"68d92d9a-3db4-4400-ac87-6334d2be6184\") " Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.217945 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skhqw\" (UniqueName: \"kubernetes.io/projected/4f239c3c-6b60-459b-a4a2-5deb5288161a-kube-api-access-skhqw\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:32 crc kubenswrapper[4826]: 
I0129 07:06:32.217956 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f239c3c-6b60-459b-a4a2-5deb5288161a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.217967 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dxzz\" (UniqueName: \"kubernetes.io/projected/489ffa8d-9021-4291-b68b-df3d3a146fe1-kube-api-access-6dxzz\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.219093 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/454b9218-d564-4664-b1dd-4435fa9c60b7-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "454b9218-d564-4664-b1dd-4435fa9c60b7" (UID: "454b9218-d564-4664-b1dd-4435fa9c60b7"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.219455 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68d92d9a-3db4-4400-ac87-6334d2be6184-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "68d92d9a-3db4-4400-ac87-6334d2be6184" (UID: "68d92d9a-3db4-4400-ac87-6334d2be6184"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.219749 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/454b9218-d564-4664-b1dd-4435fa9c60b7-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "454b9218-d564-4664-b1dd-4435fa9c60b7" (UID: "454b9218-d564-4664-b1dd-4435fa9c60b7"). InnerVolumeSpecName "config-data-generated". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.220229 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/454b9218-d564-4664-b1dd-4435fa9c60b7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "454b9218-d564-4664-b1dd-4435fa9c60b7" (UID: "454b9218-d564-4664-b1dd-4435fa9c60b7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.220800 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/454b9218-d564-4664-b1dd-4435fa9c60b7-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "454b9218-d564-4664-b1dd-4435fa9c60b7" (UID: "454b9218-d564-4664-b1dd-4435fa9c60b7"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.222704 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-984c-account-create-update-cwjvb" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.241504 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/454b9218-d564-4664-b1dd-4435fa9c60b7-kube-api-access-25s4r" (OuterVolumeSpecName: "kube-api-access-25s4r") pod "454b9218-d564-4664-b1dd-4435fa9c60b7" (UID: "454b9218-d564-4664-b1dd-4435fa9c60b7"). InnerVolumeSpecName "kube-api-access-25s4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.241794 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68d92d9a-3db4-4400-ac87-6334d2be6184-kube-api-access-7c9fk" (OuterVolumeSpecName: "kube-api-access-7c9fk") pod "68d92d9a-3db4-4400-ac87-6334d2be6184" (UID: "68d92d9a-3db4-4400-ac87-6334d2be6184"). 
InnerVolumeSpecName "kube-api-access-7c9fk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.256042 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "mysql-db") pod "454b9218-d564-4664-b1dd-4435fa9c60b7" (UID: "454b9218-d564-4664-b1dd-4435fa9c60b7"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.256435 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-97ad-account-create-update-qvhw8" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.274011 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-84d9-account-create-update-w85wz" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.285511 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-431a-account-create-update-wgcpd" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.320741 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d7a8a86-77c4-4e06-bd13-3c2ba67c1d61-operator-scripts\") pod \"3d7a8a86-77c4-4e06-bd13-3c2ba67c1d61\" (UID: \"3d7a8a86-77c4-4e06-bd13-3c2ba67c1d61\") " Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.320808 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kpc7\" (UniqueName: \"kubernetes.io/projected/598e7579-2935-4fad-85de-031f0238611e-kube-api-access-7kpc7\") pod \"598e7579-2935-4fad-85de-031f0238611e\" (UID: \"598e7579-2935-4fad-85de-031f0238611e\") " Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.320849 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83c32a1d-d03e-4b49-8ac3-0c447212ce2a-operator-scripts\") pod \"83c32a1d-d03e-4b49-8ac3-0c447212ce2a\" (UID: \"83c32a1d-d03e-4b49-8ac3-0c447212ce2a\") " Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.320866 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/598e7579-2935-4fad-85de-031f0238611e-operator-scripts\") pod \"598e7579-2935-4fad-85de-031f0238611e\" (UID: \"598e7579-2935-4fad-85de-031f0238611e\") " Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.320933 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ch8c\" (UniqueName: \"kubernetes.io/projected/b9024c38-a986-4d40-be2a-9432e2101586-kube-api-access-9ch8c\") pod \"b9024c38-a986-4d40-be2a-9432e2101586\" (UID: \"b9024c38-a986-4d40-be2a-9432e2101586\") " Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.320962 4826 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9024c38-a986-4d40-be2a-9432e2101586-operator-scripts\") pod \"b9024c38-a986-4d40-be2a-9432e2101586\" (UID: \"b9024c38-a986-4d40-be2a-9432e2101586\") " Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.320987 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szbdc\" (UniqueName: \"kubernetes.io/projected/83c32a1d-d03e-4b49-8ac3-0c447212ce2a-kube-api-access-szbdc\") pod \"83c32a1d-d03e-4b49-8ac3-0c447212ce2a\" (UID: \"83c32a1d-d03e-4b49-8ac3-0c447212ce2a\") " Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.321023 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2kbx\" (UniqueName: \"kubernetes.io/projected/3d7a8a86-77c4-4e06-bd13-3c2ba67c1d61-kube-api-access-h2kbx\") pod \"3d7a8a86-77c4-4e06-bd13-3c2ba67c1d61\" (UID: \"3d7a8a86-77c4-4e06-bd13-3c2ba67c1d61\") " Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.321384 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25s4r\" (UniqueName: \"kubernetes.io/projected/454b9218-d564-4664-b1dd-4435fa9c60b7-kube-api-access-25s4r\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.321403 4826 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.321414 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c9fk\" (UniqueName: \"kubernetes.io/projected/68d92d9a-3db4-4400-ac87-6334d2be6184-kube-api-access-7c9fk\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.321424 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/68d92d9a-3db4-4400-ac87-6334d2be6184-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.321432 4826 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/454b9218-d564-4664-b1dd-4435fa9c60b7-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.321442 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/454b9218-d564-4664-b1dd-4435fa9c60b7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.321453 4826 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/454b9218-d564-4664-b1dd-4435fa9c60b7-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.321464 4826 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/454b9218-d564-4664-b1dd-4435fa9c60b7-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.323809 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/598e7579-2935-4fad-85de-031f0238611e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "598e7579-2935-4fad-85de-031f0238611e" (UID: "598e7579-2935-4fad-85de-031f0238611e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.324183 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d7a8a86-77c4-4e06-bd13-3c2ba67c1d61-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3d7a8a86-77c4-4e06-bd13-3c2ba67c1d61" (UID: "3d7a8a86-77c4-4e06-bd13-3c2ba67c1d61"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.327108 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83c32a1d-d03e-4b49-8ac3-0c447212ce2a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "83c32a1d-d03e-4b49-8ac3-0c447212ce2a" (UID: "83c32a1d-d03e-4b49-8ac3-0c447212ce2a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.327179 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9024c38-a986-4d40-be2a-9432e2101586-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b9024c38-a986-4d40-be2a-9432e2101586" (UID: "b9024c38-a986-4d40-be2a-9432e2101586"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.340189 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/454b9218-d564-4664-b1dd-4435fa9c60b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "454b9218-d564-4664-b1dd-4435fa9c60b7" (UID: "454b9218-d564-4664-b1dd-4435fa9c60b7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.340799 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9024c38-a986-4d40-be2a-9432e2101586-kube-api-access-9ch8c" (OuterVolumeSpecName: "kube-api-access-9ch8c") pod "b9024c38-a986-4d40-be2a-9432e2101586" (UID: "b9024c38-a986-4d40-be2a-9432e2101586"). InnerVolumeSpecName "kube-api-access-9ch8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.342625 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83c32a1d-d03e-4b49-8ac3-0c447212ce2a-kube-api-access-szbdc" (OuterVolumeSpecName: "kube-api-access-szbdc") pod "83c32a1d-d03e-4b49-8ac3-0c447212ce2a" (UID: "83c32a1d-d03e-4b49-8ac3-0c447212ce2a"). InnerVolumeSpecName "kube-api-access-szbdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.345453 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d7a8a86-77c4-4e06-bd13-3c2ba67c1d61-kube-api-access-h2kbx" (OuterVolumeSpecName: "kube-api-access-h2kbx") pod "3d7a8a86-77c4-4e06-bd13-3c2ba67c1d61" (UID: "3d7a8a86-77c4-4e06-bd13-3c2ba67c1d61"). InnerVolumeSpecName "kube-api-access-h2kbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.353375 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.356403 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/598e7579-2935-4fad-85de-031f0238611e-kube-api-access-7kpc7" (OuterVolumeSpecName: "kube-api-access-7kpc7") pod "598e7579-2935-4fad-85de-031f0238611e" (UID: "598e7579-2935-4fad-85de-031f0238611e"). 
InnerVolumeSpecName "kube-api-access-7kpc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.367605 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.373505 4826 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.420461 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/454b9218-d564-4664-b1dd-4435fa9c60b7-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "454b9218-d564-4664-b1dd-4435fa9c60b7" (UID: "454b9218-d564-4664-b1dd-4435fa9c60b7"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.423809 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/082eb821-de0c-462e-9653-b1c80c8e1d2c-config-data\") pod \"082eb821-de0c-462e-9653-b1c80c8e1d2c\" (UID: \"082eb821-de0c-462e-9653-b1c80c8e1d2c\") " Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.423850 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/082eb821-de0c-462e-9653-b1c80c8e1d2c-public-tls-certs\") pod \"082eb821-de0c-462e-9653-b1c80c8e1d2c\" (UID: \"082eb821-de0c-462e-9653-b1c80c8e1d2c\") " Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.423883 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4lpp\" (UniqueName: \"kubernetes.io/projected/082eb821-de0c-462e-9653-b1c80c8e1d2c-kube-api-access-g4lpp\") pod \"082eb821-de0c-462e-9653-b1c80c8e1d2c\" (UID: 
\"082eb821-de0c-462e-9653-b1c80c8e1d2c\") " Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.423995 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/082eb821-de0c-462e-9653-b1c80c8e1d2c-scripts\") pod \"082eb821-de0c-462e-9653-b1c80c8e1d2c\" (UID: \"082eb821-de0c-462e-9653-b1c80c8e1d2c\") " Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.424017 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e27383b1-aba6-4c25-9d4b-3b9cceb2b739-combined-ca-bundle\") pod \"e27383b1-aba6-4c25-9d4b-3b9cceb2b739\" (UID: \"e27383b1-aba6-4c25-9d4b-3b9cceb2b739\") " Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.424046 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/082eb821-de0c-462e-9653-b1c80c8e1d2c-combined-ca-bundle\") pod \"082eb821-de0c-462e-9653-b1c80c8e1d2c\" (UID: \"082eb821-de0c-462e-9653-b1c80c8e1d2c\") " Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.424075 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/082eb821-de0c-462e-9653-b1c80c8e1d2c-internal-tls-certs\") pod \"082eb821-de0c-462e-9653-b1c80c8e1d2c\" (UID: \"082eb821-de0c-462e-9653-b1c80c8e1d2c\") " Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.424102 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/082eb821-de0c-462e-9653-b1c80c8e1d2c-config-data-custom\") pod \"082eb821-de0c-462e-9653-b1c80c8e1d2c\" (UID: \"082eb821-de0c-462e-9653-b1c80c8e1d2c\") " Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.424122 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/082eb821-de0c-462e-9653-b1c80c8e1d2c-logs\") pod \"082eb821-de0c-462e-9653-b1c80c8e1d2c\" (UID: \"082eb821-de0c-462e-9653-b1c80c8e1d2c\") " Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.424142 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/082eb821-de0c-462e-9653-b1c80c8e1d2c-etc-machine-id\") pod \"082eb821-de0c-462e-9653-b1c80c8e1d2c\" (UID: \"082eb821-de0c-462e-9653-b1c80c8e1d2c\") " Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.424182 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9l6fw\" (UniqueName: \"kubernetes.io/projected/e27383b1-aba6-4c25-9d4b-3b9cceb2b739-kube-api-access-9l6fw\") pod \"e27383b1-aba6-4c25-9d4b-3b9cceb2b739\" (UID: \"e27383b1-aba6-4c25-9d4b-3b9cceb2b739\") " Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.424217 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e27383b1-aba6-4c25-9d4b-3b9cceb2b739-config-data\") pod \"e27383b1-aba6-4c25-9d4b-3b9cceb2b739\" (UID: \"e27383b1-aba6-4c25-9d4b-3b9cceb2b739\") " Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.424564 4826 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/454b9218-d564-4664-b1dd-4435fa9c60b7-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.424579 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/454b9218-d564-4664-b1dd-4435fa9c60b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.424588 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/3d7a8a86-77c4-4e06-bd13-3c2ba67c1d61-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.424598 4826 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.424608 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kpc7\" (UniqueName: \"kubernetes.io/projected/598e7579-2935-4fad-85de-031f0238611e-kube-api-access-7kpc7\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.424617 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83c32a1d-d03e-4b49-8ac3-0c447212ce2a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.424626 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/598e7579-2935-4fad-85de-031f0238611e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.424635 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ch8c\" (UniqueName: \"kubernetes.io/projected/b9024c38-a986-4d40-be2a-9432e2101586-kube-api-access-9ch8c\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.424643 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9024c38-a986-4d40-be2a-9432e2101586-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.424651 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szbdc\" (UniqueName: \"kubernetes.io/projected/83c32a1d-d03e-4b49-8ac3-0c447212ce2a-kube-api-access-szbdc\") on node 
\"crc\" DevicePath \"\"" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.424660 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2kbx\" (UniqueName: \"kubernetes.io/projected/3d7a8a86-77c4-4e06-bd13-3c2ba67c1d61-kube-api-access-h2kbx\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.431774 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/082eb821-de0c-462e-9653-b1c80c8e1d2c-logs" (OuterVolumeSpecName: "logs") pod "082eb821-de0c-462e-9653-b1c80c8e1d2c" (UID: "082eb821-de0c-462e-9653-b1c80c8e1d2c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.437516 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/082eb821-de0c-462e-9653-b1c80c8e1d2c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "082eb821-de0c-462e-9653-b1c80c8e1d2c" (UID: "082eb821-de0c-462e-9653-b1c80c8e1d2c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.449937 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/082eb821-de0c-462e-9653-b1c80c8e1d2c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "082eb821-de0c-462e-9653-b1c80c8e1d2c" (UID: "082eb821-de0c-462e-9653-b1c80c8e1d2c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.454601 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/082eb821-de0c-462e-9653-b1c80c8e1d2c-scripts" (OuterVolumeSpecName: "scripts") pod "082eb821-de0c-462e-9653-b1c80c8e1d2c" (UID: "082eb821-de0c-462e-9653-b1c80c8e1d2c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.455571 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/082eb821-de0c-462e-9653-b1c80c8e1d2c-kube-api-access-g4lpp" (OuterVolumeSpecName: "kube-api-access-g4lpp") pod "082eb821-de0c-462e-9653-b1c80c8e1d2c" (UID: "082eb821-de0c-462e-9653-b1c80c8e1d2c"). InnerVolumeSpecName "kube-api-access-g4lpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.497691 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e27383b1-aba6-4c25-9d4b-3b9cceb2b739-kube-api-access-9l6fw" (OuterVolumeSpecName: "kube-api-access-9l6fw") pod "e27383b1-aba6-4c25-9d4b-3b9cceb2b739" (UID: "e27383b1-aba6-4c25-9d4b-3b9cceb2b739"). InnerVolumeSpecName "kube-api-access-9l6fw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.530534 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/082eb821-de0c-462e-9653-b1c80c8e1d2c-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.530566 4826 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/082eb821-de0c-462e-9653-b1c80c8e1d2c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.530576 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/082eb821-de0c-462e-9653-b1c80c8e1d2c-logs\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.530584 4826 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/082eb821-de0c-462e-9653-b1c80c8e1d2c-etc-machine-id\") on node 
\"crc\" DevicePath \"\"" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.530593 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9l6fw\" (UniqueName: \"kubernetes.io/projected/e27383b1-aba6-4c25-9d4b-3b9cceb2b739-kube-api-access-9l6fw\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.530601 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4lpp\" (UniqueName: \"kubernetes.io/projected/082eb821-de0c-462e-9653-b1c80c8e1d2c-kube-api-access-g4lpp\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.645782 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e27383b1-aba6-4c25-9d4b-3b9cceb2b739-config-data" (OuterVolumeSpecName: "config-data") pod "e27383b1-aba6-4c25-9d4b-3b9cceb2b739" (UID: "e27383b1-aba6-4c25-9d4b-3b9cceb2b739"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.673482 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.674008 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="34f59971-b32b-4b19-950c-77af3de22fd6" containerName="ceilometer-central-agent" containerID="cri-o://7c7dba5d83accbaa998425c9ed45b4f96e2e00553764c182bf92c212452b3aee" gracePeriod=30 Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.674384 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="34f59971-b32b-4b19-950c-77af3de22fd6" containerName="proxy-httpd" containerID="cri-o://b9493f3c4087383b633335c756de30375c0411684ed837b040dded2644f790f2" gracePeriod=30 Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.674426 4826 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/ceilometer-0" podUID="34f59971-b32b-4b19-950c-77af3de22fd6" containerName="sg-core" containerID="cri-o://196b723afa334558daf445ce5209a0b55ad9e7a17b9e293e6f9442cc86628664" gracePeriod=30 Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.674457 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="34f59971-b32b-4b19-950c-77af3de22fd6" containerName="ceilometer-notification-agent" containerID="cri-o://212d41639781efa740d28e2f69b9a84d9805cc97ad8560cd6ff518592748ede0" gracePeriod=30 Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.712351 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.712543 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="42903a4e-8bdc-4c7b-bd44-b87199a848e6" containerName="kube-state-metrics" containerID="cri-o://bddf8e03297919dd378b31ed57d97c8e00f8aa6eb7eb5c177dd2b26d1146eb32" gracePeriod=30 Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.749594 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e27383b1-aba6-4c25-9d4b-3b9cceb2b739-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.760643 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/082eb821-de0c-462e-9653-b1c80c8e1d2c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "082eb821-de0c-462e-9653-b1c80c8e1d2c" (UID: "082eb821-de0c-462e-9653-b1c80c8e1d2c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.794657 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e27383b1-aba6-4c25-9d4b-3b9cceb2b739-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e27383b1-aba6-4c25-9d4b-3b9cceb2b739" (UID: "e27383b1-aba6-4c25-9d4b-3b9cceb2b739"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.848019 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5edf8927-1593-48b4-b330-9413bcfc733d" path="/var/lib/kubelet/pods/5edf8927-1593-48b4-b330-9413bcfc733d/volumes" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.848405 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/082eb821-de0c-462e-9653-b1c80c8e1d2c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "082eb821-de0c-462e-9653-b1c80c8e1d2c" (UID: "082eb821-de0c-462e-9653-b1c80c8e1d2c"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.850443 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b099774a-b73b-4803-9259-98b9e1b5933b" path="/var/lib/kubelet/pods/b099774a-b73b-4803-9259-98b9e1b5933b/volumes" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.851221 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e27383b1-aba6-4c25-9d4b-3b9cceb2b739-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.851258 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/082eb821-de0c-462e-9653-b1c80c8e1d2c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.851266 4826 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/082eb821-de0c-462e-9653-b1c80c8e1d2c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.857840 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/082eb821-de0c-462e-9653-b1c80c8e1d2c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "082eb821-de0c-462e-9653-b1c80c8e1d2c" (UID: "082eb821-de0c-462e-9653-b1c80c8e1d2c"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.858119 4826 generic.go:334] "Generic (PLEG): container finished" podID="960b6ae0-2577-444e-bc2a-bea4ec2917f9" containerID="aa923caa75a4ef623de542ee9505460d14e9077582229b30f18ddbd944849073" exitCode=0 Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.903014 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/082eb821-de0c-462e-9653-b1c80c8e1d2c-config-data" (OuterVolumeSpecName: "config-data") pod "082eb821-de0c-462e-9653-b1c80c8e1d2c" (UID: "082eb821-de0c-462e-9653-b1c80c8e1d2c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.903219 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1323800-5ab2-43e7-baa9-9a6d4922224f" path="/var/lib/kubelet/pods/b1323800-5ab2-43e7-baa9-9a6d4922224f/volumes" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.903781 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3" path="/var/lib/kubelet/pods/bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3/volumes" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.965246 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"960b6ae0-2577-444e-bc2a-bea4ec2917f9","Type":"ContainerDied","Data":"aa923caa75a4ef623de542ee9505460d14e9077582229b30f18ddbd944849073"} Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.965362 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.965888 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="af36d2e1-464b-4ada-9b91-2c18c52502d1" containerName="memcached" 
containerID="cri-o://92577bcdcf244adebede7b28b6bb8f3affcb3adeed6e90572060e941115a1be5" gracePeriod=30 Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.983776 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/082eb821-de0c-462e-9653-b1c80c8e1d2c-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.983832 4826 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/082eb821-de0c-462e-9653-b1c80c8e1d2c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.993042 4826 generic.go:334] "Generic (PLEG): container finished" podID="3de4a3bc-a01f-424a-8f17-60deaba1f189" containerID="38411dbe679ef8c89b951c34afc01e2b04f84066915dd175b2c3a4d60a5cccb1" exitCode=0 Jan 29 07:06:32 crc kubenswrapper[4826]: I0129 07:06:32.993159 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3de4a3bc-a01f-424a-8f17-60deaba1f189","Type":"ContainerDied","Data":"38411dbe679ef8c89b951c34afc01e2b04f84066915dd175b2c3a4d60a5cccb1"} Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.032180 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6fcd-account-create-update-fgtgt"] Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.039337 4826 generic.go:334] "Generic (PLEG): container finished" podID="1beb9e09-4039-4ce6-a33f-0d34e10b1cfe" containerID="f1db27671e7941a3fe5f409f368278faebc2c9c102ed39040c62e814c55b33f6" exitCode=0 Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.039426 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7c8d5fc944-9m8wp" event={"ID":"1beb9e09-4039-4ce6-a33f-0d34e10b1cfe","Type":"ContainerDied","Data":"f1db27671e7941a3fe5f409f368278faebc2c9c102ed39040c62e814c55b33f6"} Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.054560 4826 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-6fcd-account-create-update-fgtgt"] Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.062805 4826 generic.go:334] "Generic (PLEG): container finished" podID="ca39ae08-94df-4778-8203-bcff5806eff0" containerID="f60ca07e9e25fa742e12d26cd7318095e6e3dc16fac1e585a5581d0cf9693fdd" exitCode=0 Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.062923 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-596b9c7d4-2m8gc" event={"ID":"ca39ae08-94df-4778-8203-bcff5806eff0","Type":"ContainerDied","Data":"f60ca07e9e25fa742e12d26cd7318095e6e3dc16fac1e585a5581d0cf9693fdd"} Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.065239 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6fcd-account-create-update-cht6q"] Jan 29 07:06:33 crc kubenswrapper[4826]: E0129 07:06:33.066017 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0f5ad8c-072d-4994-acb3-e898c0981eef" containerName="proxy-httpd" Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.066037 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0f5ad8c-072d-4994-acb3-e898c0981eef" containerName="proxy-httpd" Jan 29 07:06:33 crc kubenswrapper[4826]: E0129 07:06:33.066070 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="082eb821-de0c-462e-9653-b1c80c8e1d2c" containerName="cinder-api" Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.066076 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="082eb821-de0c-462e-9653-b1c80c8e1d2c" containerName="cinder-api" Jan 29 07:06:33 crc kubenswrapper[4826]: E0129 07:06:33.066087 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0f5ad8c-072d-4994-acb3-e898c0981eef" containerName="proxy-server" Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.066097 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0f5ad8c-072d-4994-acb3-e898c0981eef" 
containerName="proxy-server" Jan 29 07:06:33 crc kubenswrapper[4826]: E0129 07:06:33.066147 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="082eb821-de0c-462e-9653-b1c80c8e1d2c" containerName="cinder-api-log" Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.066153 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="082eb821-de0c-462e-9653-b1c80c8e1d2c" containerName="cinder-api-log" Jan 29 07:06:33 crc kubenswrapper[4826]: E0129 07:06:33.066181 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="454b9218-d564-4664-b1dd-4435fa9c60b7" containerName="mysql-bootstrap" Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.066187 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="454b9218-d564-4664-b1dd-4435fa9c60b7" containerName="mysql-bootstrap" Jan 29 07:06:33 crc kubenswrapper[4826]: E0129 07:06:33.066207 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="454b9218-d564-4664-b1dd-4435fa9c60b7" containerName="galera" Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.066213 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="454b9218-d564-4664-b1dd-4435fa9c60b7" containerName="galera" Jan 29 07:06:33 crc kubenswrapper[4826]: E0129 07:06:33.066227 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.066233 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 07:06:33 crc kubenswrapper[4826]: E0129 07:06:33.066240 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e27383b1-aba6-4c25-9d4b-3b9cceb2b739" containerName="nova-cell1-conductor-conductor" Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.066247 4826 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e27383b1-aba6-4c25-9d4b-3b9cceb2b739" containerName="nova-cell1-conductor-conductor" Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.091770 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfbd8f8f-3f8e-4fe2-a2e9-e3085d7bded3" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.091810 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="e27383b1-aba6-4c25-9d4b-3b9cceb2b739" containerName="nova-cell1-conductor-conductor" Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.091833 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="082eb821-de0c-462e-9653-b1c80c8e1d2c" containerName="cinder-api-log" Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.091848 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0f5ad8c-072d-4994-acb3-e898c0981eef" containerName="proxy-httpd" Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.091864 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="454b9218-d564-4664-b1dd-4435fa9c60b7" containerName="galera" Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.091876 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="082eb821-de0c-462e-9653-b1c80c8e1d2c" containerName="cinder-api" Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.091884 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0f5ad8c-072d-4994-acb3-e898c0981eef" containerName="proxy-server" Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.092554 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6fcd-account-create-update-cht6q" Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.094645 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.095751 4826 generic.go:334] "Generic (PLEG): container finished" podID="c080b978-6895-4067-9dd5-2c23d4d68518" containerID="18ae489dd61bb2195354b5d8f5f9b6e0f384329f3a4ac858dde0d3feffe4b202" exitCode=0 Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.095840 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c080b978-6895-4067-9dd5-2c23d4d68518","Type":"ContainerDied","Data":"18ae489dd61bb2195354b5d8f5f9b6e0f384329f3a4ac858dde0d3feffe4b202"} Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.098200 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-4tptt"] Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.104023 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-hbzx2"] Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.112781 4826 generic.go:334] "Generic (PLEG): container finished" podID="216e612b-abc2-4d7c-8b10-28a595de5302" containerID="3bfc6875edaf5adbf81402c08f2ba55583e837eda4f1c32161d763c93ada8c24" exitCode=0 Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.112859 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-c497f4886-n5gtr" event={"ID":"216e612b-abc2-4d7c-8b10-28a595de5302","Type":"ContainerDied","Data":"3bfc6875edaf5adbf81402c08f2ba55583e837eda4f1c32161d763c93ada8c24"} Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.117628 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-hbzx2"] Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.122031 4826 generic.go:334] "Generic (PLEG): container finished" 
podID="426fe450-4a4d-4048-8ea4-422d39482ceb" containerID="b3e85ce7ed4e1d2758772a2b894824794e2f18c81bade5dce37e78c8548d5969" exitCode=0 Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.122107 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6797f89db9-wjtvh" event={"ID":"426fe450-4a4d-4048-8ea4-422d39482ceb","Type":"ContainerDied","Data":"b3e85ce7ed4e1d2758772a2b894824794e2f18c81bade5dce37e78c8548d5969"} Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.124613 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-4tptt"] Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.125170 4826 generic.go:334] "Generic (PLEG): container finished" podID="9349a8ff-2652-4dcf-89d9-6d440269be8c" containerID="b50ac83dbd4ed6b1a94d8b1a7e79a0a2f0cbe1ecca7f07d779adbd94338a2040" exitCode=0 Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.125223 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9349a8ff-2652-4dcf-89d9-6d440269be8c","Type":"ContainerDied","Data":"b50ac83dbd4ed6b1a94d8b1a7e79a0a2f0cbe1ecca7f07d779adbd94338a2040"} Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.125240 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9349a8ff-2652-4dcf-89d9-6d440269be8c","Type":"ContainerDied","Data":"bdc684e76974439132ff0108524caf801319a4bee28decfd217a36973e91d2a1"} Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.125251 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdc684e76974439132ff0108524caf801319a4bee28decfd217a36973e91d2a1" Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.129228 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e27383b1-aba6-4c25-9d4b-3b9cceb2b739","Type":"ContainerDied","Data":"86ff871e0662a98006b7e62240f7736f72820eb368c483d39b6ce7198c2584ed"} Jan 29 
07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.129329 4826 scope.go:117] "RemoveContainer" containerID="cafb58d89e7c6ef227f4fff8321a876edaab7b5f0a9dcbdb25a9840e49c6af78" Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.129516 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.133241 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6fcd-account-create-update-cht6q"] Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.136600 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-984c-account-create-update-cwjvb" Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.138760 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-984c-account-create-update-cwjvb" event={"ID":"b9024c38-a986-4d40-be2a-9432e2101586","Type":"ContainerDied","Data":"b217fcfefd3776d96c4fbef5d2ab8ed3d80892baca23fd4f605e775eecac6abb"} Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.139419 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-86487d6456-mmjgq"] Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.139578 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-86487d6456-mmjgq" podUID="d9016472-5ff0-4849-bc8a-c1d815d27931" containerName="keystone-api" containerID="cri-o://8a6671e350fa8c53b6824335f80df2067d73193d492de3abd1a20ab459f41264" gracePeriod=30 Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.148524 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-97ad-account-create-update-qvhw8" event={"ID":"3d7a8a86-77c4-4e06-bd13-3c2ba67c1d61","Type":"ContainerDied","Data":"79080789e92364db65af9938a20f9af5497b777a31501eb2fa1f8636deda84bb"} Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.148679 4826 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/glance-97ad-account-create-update-qvhw8" Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.152965 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-431a-account-create-update-wgcpd" event={"ID":"598e7579-2935-4fad-85de-031f0238611e","Type":"ContainerDied","Data":"60b7beaa553c3fd9678868b6bae1fa86974ef55eb5589b3e9a8fe60f3e9d9fb3"} Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.153512 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-431a-account-create-update-wgcpd" Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.156870 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-84d9-account-create-update-w85wz" event={"ID":"83c32a1d-d03e-4b49-8ac3-0c447212ce2a","Type":"ContainerDied","Data":"41cd3ed66aee014f35c2d8a7d1a8c0340ed5b3ff392979cdf54789856e0e3a65"} Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.156984 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-84d9-account-create-update-w85wz" Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.164440 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.172881 4826 generic.go:334] "Generic (PLEG): container finished" podID="5378dab4-ad0c-4259-a7d2-d3f7e784a142" containerID="31b30318fc91eafcd3b97afe85a5e0965b844ee6e31ce721180f3fef71409d0b" exitCode=0 Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.172953 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5378dab4-ad0c-4259-a7d2-d3f7e784a142","Type":"ContainerDied","Data":"31b30318fc91eafcd3b97afe85a5e0965b844ee6e31ce721180f3fef71409d0b"} Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.172989 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5378dab4-ad0c-4259-a7d2-d3f7e784a142","Type":"ContainerDied","Data":"ab263a483e51b5e0215aae333c6a2ec2607b598a5c44587311b9c8ae95994fc9"} Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.173001 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab263a483e51b5e0215aae333c6a2ec2607b598a5c44587311b9c8ae95994fc9" Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.175984 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-r6wg8"] Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.187028 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6fcd-account-create-update-cht6q"] Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.187706 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-2845-account-create-update-r9ssg" Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.187801 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"082eb821-de0c-462e-9653-b1c80c8e1d2c","Type":"ContainerDied","Data":"8cf0d9aa18257f6410a4185a2aeaa709cc8c2c2e64eca6bc4a11b545c2d14b02"} Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.187982 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.190674 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3ce2-account-create-update-ls5ln" Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.190761 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.190677 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-2f64-account-create-update-mndz2" Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.194192 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99eab789-1139-4666-9fb2-dfddd270bbf2-operator-scripts\") pod \"keystone-6fcd-account-create-update-cht6q\" (UID: \"99eab789-1139-4666-9fb2-dfddd270bbf2\") " pod="openstack/keystone-6fcd-account-create-update-cht6q" Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.194240 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmhjd\" (UniqueName: \"kubernetes.io/projected/99eab789-1139-4666-9fb2-dfddd270bbf2-kube-api-access-gmhjd\") pod \"keystone-6fcd-account-create-update-cht6q\" (UID: \"99eab789-1139-4666-9fb2-dfddd270bbf2\") " pod="openstack/keystone-6fcd-account-create-update-cht6q" Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.226725 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-r6wg8"] Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.245272 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-zrzbq"] Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.297144 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99eab789-1139-4666-9fb2-dfddd270bbf2-operator-scripts\") pod \"keystone-6fcd-account-create-update-cht6q\" (UID: \"99eab789-1139-4666-9fb2-dfddd270bbf2\") " pod="openstack/keystone-6fcd-account-create-update-cht6q" Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.297220 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmhjd\" (UniqueName: \"kubernetes.io/projected/99eab789-1139-4666-9fb2-dfddd270bbf2-kube-api-access-gmhjd\") pod 
\"keystone-6fcd-account-create-update-cht6q\" (UID: \"99eab789-1139-4666-9fb2-dfddd270bbf2\") " pod="openstack/keystone-6fcd-account-create-update-cht6q" Jan 29 07:06:33 crc kubenswrapper[4826]: E0129 07:06:33.298102 4826 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 29 07:06:33 crc kubenswrapper[4826]: E0129 07:06:33.298587 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/99eab789-1139-4666-9fb2-dfddd270bbf2-operator-scripts podName:99eab789-1139-4666-9fb2-dfddd270bbf2 nodeName:}" failed. No retries permitted until 2026-01-29 07:06:33.798183041 +0000 UTC m=+1377.659976110 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/99eab789-1139-4666-9fb2-dfddd270bbf2-operator-scripts") pod "keystone-6fcd-account-create-update-cht6q" (UID: "99eab789-1139-4666-9fb2-dfddd270bbf2") : configmap "openstack-scripts" not found Jan 29 07:06:33 crc kubenswrapper[4826]: E0129 07:06:33.311138 4826 projected.go:194] Error preparing data for projected volume kube-api-access-gmhjd for pod openstack/keystone-6fcd-account-create-update-cht6q: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 29 07:06:33 crc kubenswrapper[4826]: E0129 07:06:33.311256 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99eab789-1139-4666-9fb2-dfddd270bbf2-kube-api-access-gmhjd podName:99eab789-1139-4666-9fb2-dfddd270bbf2 nodeName:}" failed. No retries permitted until 2026-01-29 07:06:33.811228895 +0000 UTC m=+1377.673021964 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gmhjd" (UniqueName: "kubernetes.io/projected/99eab789-1139-4666-9fb2-dfddd270bbf2-kube-api-access-gmhjd") pod "keystone-6fcd-account-create-update-cht6q" (UID: "99eab789-1139-4666-9fb2-dfddd270bbf2") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 29 07:06:33 crc kubenswrapper[4826]: E0129 07:06:33.320626 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aa923caa75a4ef623de542ee9505460d14e9077582229b30f18ddbd944849073 is running failed: container process not found" containerID="aa923caa75a4ef623de542ee9505460d14e9077582229b30f18ddbd944849073" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 07:06:33 crc kubenswrapper[4826]: E0129 07:06:33.321153 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aa923caa75a4ef623de542ee9505460d14e9077582229b30f18ddbd944849073 is running failed: container process not found" containerID="aa923caa75a4ef623de542ee9505460d14e9077582229b30f18ddbd944849073" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 07:06:33 crc kubenswrapper[4826]: E0129 07:06:33.322143 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aa923caa75a4ef623de542ee9505460d14e9077582229b30f18ddbd944849073 is running failed: container process not found" containerID="aa923caa75a4ef623de542ee9505460d14e9077582229b30f18ddbd944849073" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 07:06:33 crc kubenswrapper[4826]: E0129 07:06:33.322214 4826 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aa923caa75a4ef623de542ee9505460d14e9077582229b30f18ddbd944849073 is running failed: 
container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="960b6ae0-2577-444e-bc2a-bea4ec2917f9" containerName="nova-scheduler-scheduler" Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.647520 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/memcached-0" podUID="af36d2e1-464b-4ada-9b91-2c18c52502d1" containerName="memcached" probeResult="failure" output="dial tcp 10.217.0.105:11211: connect: connection refused" Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.647553 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="426997bd-6ba1-4ebb-b8d3-08be081add91" containerName="galera" containerID="cri-o://efa12df27e8f1e42dacfe2b602d83d6ae2370e8bb4cb8ce8ab4abb28c9e997af" gracePeriod=30 Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.771196 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-zrzbq"] Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.830570 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99eab789-1139-4666-9fb2-dfddd270bbf2-operator-scripts\") pod \"keystone-6fcd-account-create-update-cht6q\" (UID: \"99eab789-1139-4666-9fb2-dfddd270bbf2\") " pod="openstack/keystone-6fcd-account-create-update-cht6q" Jan 29 07:06:33 crc kubenswrapper[4826]: I0129 07:06:33.830640 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmhjd\" (UniqueName: \"kubernetes.io/projected/99eab789-1139-4666-9fb2-dfddd270bbf2-kube-api-access-gmhjd\") pod \"keystone-6fcd-account-create-update-cht6q\" (UID: \"99eab789-1139-4666-9fb2-dfddd270bbf2\") " pod="openstack/keystone-6fcd-account-create-update-cht6q" Jan 29 07:06:33 crc kubenswrapper[4826]: E0129 07:06:33.830721 4826 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap 
"openstack-scripts" not found Jan 29 07:06:33 crc kubenswrapper[4826]: E0129 07:06:33.830791 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/99eab789-1139-4666-9fb2-dfddd270bbf2-operator-scripts podName:99eab789-1139-4666-9fb2-dfddd270bbf2 nodeName:}" failed. No retries permitted until 2026-01-29 07:06:34.83077275 +0000 UTC m=+1378.692565819 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/99eab789-1139-4666-9fb2-dfddd270bbf2-operator-scripts") pod "keystone-6fcd-account-create-update-cht6q" (UID: "99eab789-1139-4666-9fb2-dfddd270bbf2") : configmap "openstack-scripts" not found Jan 29 07:06:33 crc kubenswrapper[4826]: E0129 07:06:33.838223 4826 projected.go:194] Error preparing data for projected volume kube-api-access-gmhjd for pod openstack/keystone-6fcd-account-create-update-cht6q: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 29 07:06:33 crc kubenswrapper[4826]: E0129 07:06:33.838290 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99eab789-1139-4666-9fb2-dfddd270bbf2-kube-api-access-gmhjd podName:99eab789-1139-4666-9fb2-dfddd270bbf2 nodeName:}" failed. No retries permitted until 2026-01-29 07:06:34.838271897 +0000 UTC m=+1378.700064966 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gmhjd" (UniqueName: "kubernetes.io/projected/99eab789-1139-4666-9fb2-dfddd270bbf2-kube-api-access-gmhjd") pod "keystone-6fcd-account-create-update-cht6q" (UID: "99eab789-1139-4666-9fb2-dfddd270bbf2") : failed to fetch token: serviceaccounts "galera-openstack" not found
Jan 29 07:06:33 crc kubenswrapper[4826]: E0129 07:06:33.954283 4826 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Jan 29 07:06:33 crc kubenswrapper[4826]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash
Jan 29 07:06:33 crc kubenswrapper[4826]: 
Jan 29 07:06:33 crc kubenswrapper[4826]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh
Jan 29 07:06:33 crc kubenswrapper[4826]: 
Jan 29 07:06:33 crc kubenswrapper[4826]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."}
Jan 29 07:06:33 crc kubenswrapper[4826]: 
Jan 29 07:06:33 crc kubenswrapper[4826]: MYSQL_CMD="mysql -h -u root -P 3306"
Jan 29 07:06:33 crc kubenswrapper[4826]: 
Jan 29 07:06:33 crc kubenswrapper[4826]: if [ -n "" ]; then
Jan 29 07:06:33 crc kubenswrapper[4826]: GRANT_DATABASE=""
Jan 29 07:06:33 crc kubenswrapper[4826]: else
Jan 29 07:06:33 crc kubenswrapper[4826]: GRANT_DATABASE="*"
Jan 29 07:06:33 crc kubenswrapper[4826]: fi
Jan 29 07:06:33 crc kubenswrapper[4826]: 
Jan 29 07:06:33 crc kubenswrapper[4826]: # going for maximum compatibility here:
Jan 29 07:06:33 crc kubenswrapper[4826]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used
Jan 29 07:06:33 crc kubenswrapper[4826]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not
Jan 29 07:06:33 crc kubenswrapper[4826]: # 3. create user with CREATE but then do all password and TLS with ALTER to
Jan 29 07:06:33 crc kubenswrapper[4826]: # support updates
Jan 29 07:06:33 crc kubenswrapper[4826]: 
Jan 29 07:06:33 crc kubenswrapper[4826]: $MYSQL_CMD < logger="UnhandledError"
Jan 29 07:06:33 crc kubenswrapper[4826]: E0129 07:06:33.956669 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-zrzbq" podUID="7aa6497d-3379-49b2-887e-6ed46928266e"
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.113422 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="34f59971-b32b-4b19-950c-77af3de22fd6" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.201:3000/\": dial tcp 10.217.0.201:3000: connect: connection refused"
Jan 29 07:06:34 crc kubenswrapper[4826]: E0129 07:06:34.141579 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="50af99804abe9cf6e6f83b558ffc855d23611fff8a7850a26d5eccd9f4c9d2b2" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 29 07:06:34 crc kubenswrapper[4826]: E0129 07:06:34.145572 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="50af99804abe9cf6e6f83b558ffc855d23611fff8a7850a26d5eccd9f4c9d2b2" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 29 07:06:34 crc kubenswrapper[4826]: E0129 07:06:34.146881 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="50af99804abe9cf6e6f83b558ffc855d23611fff8a7850a26d5eccd9f4c9d2b2" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 29 07:06:34 crc kubenswrapper[4826]: E0129 07:06:34.146925 4826 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="229ff3bd-fac5-4bb5-ba1e-9e829c30f45b" containerName="nova-cell0-conductor-conductor"
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.196934 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3de4a3bc-a01f-424a-8f17-60deaba1f189","Type":"ContainerDied","Data":"1da69d9dcf7373733a52d1259b040c5b6ebf38233b267a25e8066704b0c9e8c2"}
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.196975 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1da69d9dcf7373733a52d1259b040c5b6ebf38233b267a25e8066704b0c9e8c2"
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.199479 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"960b6ae0-2577-444e-bc2a-bea4ec2917f9","Type":"ContainerDied","Data":"71ea196572757137a9598994006b2dff02dcfbf2bea44bcbdea3081e3e3ab34e"}
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.199511 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71ea196572757137a9598994006b2dff02dcfbf2bea44bcbdea3081e3e3ab34e"
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.201648 4826 generic.go:334] "Generic (PLEG): container finished" podID="34f59971-b32b-4b19-950c-77af3de22fd6" containerID="b9493f3c4087383b633335c756de30375c0411684ed837b040dded2644f790f2" exitCode=0
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.201671 4826 generic.go:334] "Generic (PLEG): container finished" podID="34f59971-b32b-4b19-950c-77af3de22fd6" containerID="196b723afa334558daf445ce5209a0b55ad9e7a17b9e293e6f9442cc86628664" exitCode=2
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.201680 4826 generic.go:334] "Generic (PLEG): container finished" podID="34f59971-b32b-4b19-950c-77af3de22fd6" containerID="7c7dba5d83accbaa998425c9ed45b4f96e2e00553764c182bf92c212452b3aee" exitCode=0
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.201709 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34f59971-b32b-4b19-950c-77af3de22fd6","Type":"ContainerDied","Data":"b9493f3c4087383b633335c756de30375c0411684ed837b040dded2644f790f2"}
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.201725 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34f59971-b32b-4b19-950c-77af3de22fd6","Type":"ContainerDied","Data":"196b723afa334558daf445ce5209a0b55ad9e7a17b9e293e6f9442cc86628664"}
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.201735 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34f59971-b32b-4b19-950c-77af3de22fd6","Type":"ContainerDied","Data":"7c7dba5d83accbaa998425c9ed45b4f96e2e00553764c182bf92c212452b3aee"}
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.203220 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zrzbq" event={"ID":"7aa6497d-3379-49b2-887e-6ed46928266e","Type":"ContainerStarted","Data":"fc7b03ec4b883ea5fd67e06d394ab7f364ad215d199fd63c20ca7266b4660657"}
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.204903 4826 generic.go:334] "Generic (PLEG): container finished" podID="af36d2e1-464b-4ada-9b91-2c18c52502d1" containerID="92577bcdcf244adebede7b28b6bb8f3affcb3adeed6e90572060e941115a1be5" exitCode=0
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.204944 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"af36d2e1-464b-4ada-9b91-2c18c52502d1","Type":"ContainerDied","Data":"92577bcdcf244adebede7b28b6bb8f3affcb3adeed6e90572060e941115a1be5"}
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.204960 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"af36d2e1-464b-4ada-9b91-2c18c52502d1","Type":"ContainerDied","Data":"2e88ac09c178a6b2512ea0fdc58d5fe8516f78ba07cd97e7f3bfe552aece400f"}
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.204970 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e88ac09c178a6b2512ea0fdc58d5fe8516f78ba07cd97e7f3bfe552aece400f"
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.210641 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-c497f4886-n5gtr" event={"ID":"216e612b-abc2-4d7c-8b10-28a595de5302","Type":"ContainerDied","Data":"2aacfd948fe1cf28fbe85300411e8fa41adc5b1b90180e6edc493406d891f327"}
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.210670 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2aacfd948fe1cf28fbe85300411e8fa41adc5b1b90180e6edc493406d891f327"
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.214764 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6797f89db9-wjtvh" event={"ID":"426fe450-4a4d-4048-8ea4-422d39482ceb","Type":"ContainerDied","Data":"06232c89b34b48939244698fda2e55d6ecd53ac19a0d80fadf84228fe6965098"}
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.214797 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06232c89b34b48939244698fda2e55d6ecd53ac19a0d80fadf84228fe6965098"
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.218123 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-596b9c7d4-2m8gc" event={"ID":"ca39ae08-94df-4778-8203-bcff5806eff0","Type":"ContainerDied","Data":"147c8f4f8d013adf16e3865c378c30844d579a6c359678d682e8226a1f076502"}
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.218152 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="147c8f4f8d013adf16e3865c378c30844d579a6c359678d682e8226a1f076502"
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.231139 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.231537 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c080b978-6895-4067-9dd5-2c23d4d68518","Type":"ContainerDied","Data":"7bea36e3d6f592921ffc6a06e593f52844df17f3699dbcefb0c6ce1458942911"}
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.231567 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bea36e3d6f592921ffc6a06e593f52844df17f3699dbcefb0c6ce1458942911"
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.236224 4826 scope.go:117] "RemoveContainer" containerID="3a8934b6ad3dfc62e2d7a18edca3b16ec92c490c12b90aeaef855441c2f63a2c"
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.237108 4826 generic.go:334] "Generic (PLEG): container finished" podID="ea529cf3-184e-446a-9c6a-759cf1bab14c" containerID="da6172474a5804740243a88717e98452c0421876963c421e7935fba689bdc058" exitCode=0
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.237180 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ea529cf3-184e-446a-9c6a-759cf1bab14c","Type":"ContainerDied","Data":"da6172474a5804740243a88717e98452c0421876963c421e7935fba689bdc058"}
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.237206 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ea529cf3-184e-446a-9c6a-759cf1bab14c","Type":"ContainerDied","Data":"0ed54943a2f0d03803174d5aa0d9b5792403d7c5ac6c9867d99b149d33f7fb5b"}
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.237218 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ed54943a2f0d03803174d5aa0d9b5792403d7c5ac6c9867d99b149d33f7fb5b"
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.249094 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7c8d5fc944-9m8wp" event={"ID":"1beb9e09-4039-4ce6-a33f-0d34e10b1cfe","Type":"ContainerDied","Data":"31478edadab8f16a7bb95ef42169e72c4ddf2d292e43ee2ebbf72df8acd2d7c6"}
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.249130 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31478edadab8f16a7bb95ef42169e72c4ddf2d292e43ee2ebbf72df8acd2d7c6"
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.270267 4826 generic.go:334] "Generic (PLEG): container finished" podID="42903a4e-8bdc-4c7b-bd44-b87199a848e6" containerID="bddf8e03297919dd378b31ed57d97c8e00f8aa6eb7eb5c177dd2b26d1146eb32" exitCode=2
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.270411 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"42903a4e-8bdc-4c7b-bd44-b87199a848e6","Type":"ContainerDied","Data":"bddf8e03297919dd378b31ed57d97c8e00f8aa6eb7eb5c177dd2b26d1146eb32"}
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.270436 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"42903a4e-8bdc-4c7b-bd44-b87199a848e6","Type":"ContainerDied","Data":"488a48e797ef76f33b6069e6c9b9a59c9f8e91084b26a9a80593c02d656a4d8f"}
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.270447 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="488a48e797ef76f33b6069e6c9b9a59c9f8e91084b26a9a80593c02d656a4d8f"
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.270657 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.311046 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.326440 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7c8d5fc944-9m8wp"
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.347244 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-596b9c7d4-2m8gc"
Jan 29 07:06:34 crc kubenswrapper[4826]: E0129 07:06:34.348420 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-gmhjd operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone-6fcd-account-create-update-cht6q" podUID="99eab789-1139-4666-9fb2-dfddd270bbf2"
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.349695 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5378dab4-ad0c-4259-a7d2-d3f7e784a142-logs\") pod \"5378dab4-ad0c-4259-a7d2-d3f7e784a142\" (UID: \"5378dab4-ad0c-4259-a7d2-d3f7e784a142\") "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.349727 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9349a8ff-2652-4dcf-89d9-6d440269be8c-logs\") pod \"9349a8ff-2652-4dcf-89d9-6d440269be8c\" (UID: \"9349a8ff-2652-4dcf-89d9-6d440269be8c\") "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.349777 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txvt7\" (UniqueName: \"kubernetes.io/projected/9349a8ff-2652-4dcf-89d9-6d440269be8c-kube-api-access-txvt7\") pod \"9349a8ff-2652-4dcf-89d9-6d440269be8c\" (UID: \"9349a8ff-2652-4dcf-89d9-6d440269be8c\") "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.349798 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqslr\" (UniqueName: \"kubernetes.io/projected/5378dab4-ad0c-4259-a7d2-d3f7e784a142-kube-api-access-gqslr\") pod \"5378dab4-ad0c-4259-a7d2-d3f7e784a142\" (UID: \"5378dab4-ad0c-4259-a7d2-d3f7e784a142\") "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.349825 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5378dab4-ad0c-4259-a7d2-d3f7e784a142-combined-ca-bundle\") pod \"5378dab4-ad0c-4259-a7d2-d3f7e784a142\" (UID: \"5378dab4-ad0c-4259-a7d2-d3f7e784a142\") "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.349841 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9349a8ff-2652-4dcf-89d9-6d440269be8c-config-data\") pod \"9349a8ff-2652-4dcf-89d9-6d440269be8c\" (UID: \"9349a8ff-2652-4dcf-89d9-6d440269be8c\") "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.349869 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"5378dab4-ad0c-4259-a7d2-d3f7e784a142\" (UID: \"5378dab4-ad0c-4259-a7d2-d3f7e784a142\") "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.349897 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5378dab4-ad0c-4259-a7d2-d3f7e784a142-internal-tls-certs\") pod \"5378dab4-ad0c-4259-a7d2-d3f7e784a142\" (UID: \"5378dab4-ad0c-4259-a7d2-d3f7e784a142\") "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.349931 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9349a8ff-2652-4dcf-89d9-6d440269be8c-combined-ca-bundle\") pod \"9349a8ff-2652-4dcf-89d9-6d440269be8c\" (UID: \"9349a8ff-2652-4dcf-89d9-6d440269be8c\") "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.349950 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5378dab4-ad0c-4259-a7d2-d3f7e784a142-config-data\") pod \"5378dab4-ad0c-4259-a7d2-d3f7e784a142\" (UID: \"5378dab4-ad0c-4259-a7d2-d3f7e784a142\") "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.349965 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9349a8ff-2652-4dcf-89d9-6d440269be8c-nova-metadata-tls-certs\") pod \"9349a8ff-2652-4dcf-89d9-6d440269be8c\" (UID: \"9349a8ff-2652-4dcf-89d9-6d440269be8c\") "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.349981 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5378dab4-ad0c-4259-a7d2-d3f7e784a142-scripts\") pod \"5378dab4-ad0c-4259-a7d2-d3f7e784a142\" (UID: \"5378dab4-ad0c-4259-a7d2-d3f7e784a142\") "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.350009 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5378dab4-ad0c-4259-a7d2-d3f7e784a142-httpd-run\") pod \"5378dab4-ad0c-4259-a7d2-d3f7e784a142\" (UID: \"5378dab4-ad0c-4259-a7d2-d3f7e784a142\") "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.362178 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "5378dab4-ad0c-4259-a7d2-d3f7e784a142" (UID: "5378dab4-ad0c-4259-a7d2-d3f7e784a142"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.362282 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5378dab4-ad0c-4259-a7d2-d3f7e784a142-logs" (OuterVolumeSpecName: "logs") pod "5378dab4-ad0c-4259-a7d2-d3f7e784a142" (UID: "5378dab4-ad0c-4259-a7d2-d3f7e784a142"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.363079 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9349a8ff-2652-4dcf-89d9-6d440269be8c-logs" (OuterVolumeSpecName: "logs") pod "9349a8ff-2652-4dcf-89d9-6d440269be8c" (UID: "9349a8ff-2652-4dcf-89d9-6d440269be8c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.363187 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.363804 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5378dab4-ad0c-4259-a7d2-d3f7e784a142-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5378dab4-ad0c-4259-a7d2-d3f7e784a142" (UID: "5378dab4-ad0c-4259-a7d2-d3f7e784a142"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.366401 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9349a8ff-2652-4dcf-89d9-6d440269be8c-kube-api-access-txvt7" (OuterVolumeSpecName: "kube-api-access-txvt7") pod "9349a8ff-2652-4dcf-89d9-6d440269be8c" (UID: "9349a8ff-2652-4dcf-89d9-6d440269be8c"). InnerVolumeSpecName "kube-api-access-txvt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.393936 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9349a8ff-2652-4dcf-89d9-6d440269be8c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9349a8ff-2652-4dcf-89d9-6d440269be8c" (UID: "9349a8ff-2652-4dcf-89d9-6d440269be8c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.404700 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5378dab4-ad0c-4259-a7d2-d3f7e784a142-kube-api-access-gqslr" (OuterVolumeSpecName: "kube-api-access-gqslr") pod "5378dab4-ad0c-4259-a7d2-d3f7e784a142" (UID: "5378dab4-ad0c-4259-a7d2-d3f7e784a142"). InnerVolumeSpecName "kube-api-access-gqslr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.417138 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5378dab4-ad0c-4259-a7d2-d3f7e784a142-scripts" (OuterVolumeSpecName: "scripts") pod "5378dab4-ad0c-4259-a7d2-d3f7e784a142" (UID: "5378dab4-ad0c-4259-a7d2-d3f7e784a142"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.421539 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9349a8ff-2652-4dcf-89d9-6d440269be8c-config-data" (OuterVolumeSpecName: "config-data") pod "9349a8ff-2652-4dcf-89d9-6d440269be8c" (UID: "9349a8ff-2652-4dcf-89d9-6d440269be8c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.422170 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-c497f4886-n5gtr"
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.424617 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5378dab4-ad0c-4259-a7d2-d3f7e784a142-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5378dab4-ad0c-4259-a7d2-d3f7e784a142" (UID: "5378dab4-ad0c-4259-a7d2-d3f7e784a142"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.430803 4826 scope.go:117] "RemoveContainer" containerID="a0acd49fff6ea3dad01d0e7ca858e9d1043795014ae1ed463aec93ec682db76b"
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.438966 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5378dab4-ad0c-4259-a7d2-d3f7e784a142-config-data" (OuterVolumeSpecName: "config-data") pod "5378dab4-ad0c-4259-a7d2-d3f7e784a142" (UID: "5378dab4-ad0c-4259-a7d2-d3f7e784a142"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.445291 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.463156 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-3ce2-account-create-update-ls5ln"]
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.463875 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6797f89db9-wjtvh"
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.468350 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-scripts\") pod \"1beb9e09-4039-4ce6-a33f-0d34e10b1cfe\" (UID: \"1beb9e09-4039-4ce6-a33f-0d34e10b1cfe\") "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.468402 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-internal-tls-certs\") pod \"1beb9e09-4039-4ce6-a33f-0d34e10b1cfe\" (UID: \"1beb9e09-4039-4ce6-a33f-0d34e10b1cfe\") "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.468435 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3de4a3bc-a01f-424a-8f17-60deaba1f189-public-tls-certs\") pod \"3de4a3bc-a01f-424a-8f17-60deaba1f189\" (UID: \"3de4a3bc-a01f-424a-8f17-60deaba1f189\") "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.468455 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de4a3bc-a01f-424a-8f17-60deaba1f189-combined-ca-bundle\") pod \"3de4a3bc-a01f-424a-8f17-60deaba1f189\" (UID: \"3de4a3bc-a01f-424a-8f17-60deaba1f189\") "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.468479 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4q2fs\" (UniqueName: \"kubernetes.io/projected/3de4a3bc-a01f-424a-8f17-60deaba1f189-kube-api-access-4q2fs\") pod \"3de4a3bc-a01f-424a-8f17-60deaba1f189\" (UID: \"3de4a3bc-a01f-424a-8f17-60deaba1f189\") "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.468508 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3de4a3bc-a01f-424a-8f17-60deaba1f189-config-data\") pod \"3de4a3bc-a01f-424a-8f17-60deaba1f189\" (UID: \"3de4a3bc-a01f-424a-8f17-60deaba1f189\") "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.468548 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmg5b\" (UniqueName: \"kubernetes.io/projected/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-kube-api-access-kmg5b\") pod \"1beb9e09-4039-4ce6-a33f-0d34e10b1cfe\" (UID: \"1beb9e09-4039-4ce6-a33f-0d34e10b1cfe\") "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.468580 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3de4a3bc-a01f-424a-8f17-60deaba1f189-internal-tls-certs\") pod \"3de4a3bc-a01f-424a-8f17-60deaba1f189\" (UID: \"3de4a3bc-a01f-424a-8f17-60deaba1f189\") "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.468605 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-combined-ca-bundle\") pod \"1beb9e09-4039-4ce6-a33f-0d34e10b1cfe\" (UID: \"1beb9e09-4039-4ce6-a33f-0d34e10b1cfe\") "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.468629 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca39ae08-94df-4778-8203-bcff5806eff0-config-data\") pod \"ca39ae08-94df-4778-8203-bcff5806eff0\" (UID: \"ca39ae08-94df-4778-8203-bcff5806eff0\") "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.468652 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"c080b978-6895-4067-9dd5-2c23d4d68518\" (UID: \"c080b978-6895-4067-9dd5-2c23d4d68518\") "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.468675 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pqlw\" (UniqueName: \"kubernetes.io/projected/ca39ae08-94df-4778-8203-bcff5806eff0-kube-api-access-6pqlw\") pod \"ca39ae08-94df-4778-8203-bcff5806eff0\" (UID: \"ca39ae08-94df-4778-8203-bcff5806eff0\") "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.468699 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c080b978-6895-4067-9dd5-2c23d4d68518-config-data\") pod \"c080b978-6895-4067-9dd5-2c23d4d68518\" (UID: \"c080b978-6895-4067-9dd5-2c23d4d68518\") "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.468721 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca39ae08-94df-4778-8203-bcff5806eff0-combined-ca-bundle\") pod \"ca39ae08-94df-4778-8203-bcff5806eff0\" (UID: \"ca39ae08-94df-4778-8203-bcff5806eff0\") "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.468754 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca39ae08-94df-4778-8203-bcff5806eff0-config-data-custom\") pod \"ca39ae08-94df-4778-8203-bcff5806eff0\" (UID: \"ca39ae08-94df-4778-8203-bcff5806eff0\") "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.468777 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca39ae08-94df-4778-8203-bcff5806eff0-internal-tls-certs\") pod \"ca39ae08-94df-4778-8203-bcff5806eff0\" (UID: \"ca39ae08-94df-4778-8203-bcff5806eff0\") "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.468818 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c080b978-6895-4067-9dd5-2c23d4d68518-scripts\") pod \"c080b978-6895-4067-9dd5-2c23d4d68518\" (UID: \"c080b978-6895-4067-9dd5-2c23d4d68518\") "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.468847 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c080b978-6895-4067-9dd5-2c23d4d68518-combined-ca-bundle\") pod \"c080b978-6895-4067-9dd5-2c23d4d68518\" (UID: \"c080b978-6895-4067-9dd5-2c23d4d68518\") "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.468873 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-config-data\") pod \"1beb9e09-4039-4ce6-a33f-0d34e10b1cfe\" (UID: \"1beb9e09-4039-4ce6-a33f-0d34e10b1cfe\") "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.468897 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-logs\") pod \"1beb9e09-4039-4ce6-a33f-0d34e10b1cfe\" (UID: \"1beb9e09-4039-4ce6-a33f-0d34e10b1cfe\") "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.468920 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-public-tls-certs\") pod \"1beb9e09-4039-4ce6-a33f-0d34e10b1cfe\" (UID: \"1beb9e09-4039-4ce6-a33f-0d34e10b1cfe\") "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.468970 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c080b978-6895-4067-9dd5-2c23d4d68518-httpd-run\") pod \"c080b978-6895-4067-9dd5-2c23d4d68518\" (UID: \"c080b978-6895-4067-9dd5-2c23d4d68518\") "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.469000 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c080b978-6895-4067-9dd5-2c23d4d68518-public-tls-certs\") pod \"c080b978-6895-4067-9dd5-2c23d4d68518\" (UID: \"c080b978-6895-4067-9dd5-2c23d4d68518\") "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.469020 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c080b978-6895-4067-9dd5-2c23d4d68518-logs\") pod \"c080b978-6895-4067-9dd5-2c23d4d68518\" (UID: \"c080b978-6895-4067-9dd5-2c23d4d68518\") "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.469039 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3de4a3bc-a01f-424a-8f17-60deaba1f189-logs\") pod \"3de4a3bc-a01f-424a-8f17-60deaba1f189\" (UID: \"3de4a3bc-a01f-424a-8f17-60deaba1f189\") "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.469069 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swjn5\" (UniqueName: \"kubernetes.io/projected/c080b978-6895-4067-9dd5-2c23d4d68518-kube-api-access-swjn5\") pod \"c080b978-6895-4067-9dd5-2c23d4d68518\" (UID: \"c080b978-6895-4067-9dd5-2c23d4d68518\") "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.469100 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca39ae08-94df-4778-8203-bcff5806eff0-public-tls-certs\") pod \"ca39ae08-94df-4778-8203-bcff5806eff0\" (UID: \"ca39ae08-94df-4778-8203-bcff5806eff0\") "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.469127 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca39ae08-94df-4778-8203-bcff5806eff0-logs\") pod \"ca39ae08-94df-4778-8203-bcff5806eff0\" (UID: \"ca39ae08-94df-4778-8203-bcff5806eff0\") "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.472745 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-logs" (OuterVolumeSpecName: "logs") pod "1beb9e09-4039-4ce6-a33f-0d34e10b1cfe" (UID: "1beb9e09-4039-4ce6-a33f-0d34e10b1cfe"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.473807 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5378dab4-ad0c-4259-a7d2-d3f7e784a142-logs\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.473857 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9349a8ff-2652-4dcf-89d9-6d440269be8c-logs\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.473874 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txvt7\" (UniqueName: \"kubernetes.io/projected/9349a8ff-2652-4dcf-89d9-6d440269be8c-kube-api-access-txvt7\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.473889 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqslr\" (UniqueName: \"kubernetes.io/projected/5378dab4-ad0c-4259-a7d2-d3f7e784a142-kube-api-access-gqslr\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.473933 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5378dab4-ad0c-4259-a7d2-d3f7e784a142-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.473943 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9349a8ff-2652-4dcf-89d9-6d440269be8c-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.473965 4826 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" "
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.473997 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-logs\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.474010 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9349a8ff-2652-4dcf-89d9-6d440269be8c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.474018 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5378dab4-ad0c-4259-a7d2-d3f7e784a142-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.474027 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5378dab4-ad0c-4259-a7d2-d3f7e784a142-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.474040 4826 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5378dab4-ad0c-4259-a7d2-d3f7e784a142-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.486625 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3de4a3bc-a01f-424a-8f17-60deaba1f189-logs" (OuterVolumeSpecName: "logs") pod "3de4a3bc-a01f-424a-8f17-60deaba1f189" (UID: "3de4a3bc-a01f-424a-8f17-60deaba1f189"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.487558 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c080b978-6895-4067-9dd5-2c23d4d68518-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c080b978-6895-4067-9dd5-2c23d4d68518" (UID: "c080b978-6895-4067-9dd5-2c23d4d68518"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.490632 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-3ce2-account-create-update-ls5ln"]
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.499820 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c080b978-6895-4067-9dd5-2c23d4d68518-logs" (OuterVolumeSpecName: "logs") pod "c080b978-6895-4067-9dd5-2c23d4d68518" (UID: "c080b978-6895-4067-9dd5-2c23d4d68518"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.511400 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca39ae08-94df-4778-8203-bcff5806eff0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ca39ae08-94df-4778-8203-bcff5806eff0" (UID: "ca39ae08-94df-4778-8203-bcff5806eff0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.512389 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca39ae08-94df-4778-8203-bcff5806eff0-logs" (OuterVolumeSpecName: "logs") pod "ca39ae08-94df-4778-8203-bcff5806eff0" (UID: "ca39ae08-94df-4778-8203-bcff5806eff0"). InnerVolumeSpecName "logs".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.519467 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-scripts" (OuterVolumeSpecName: "scripts") pod "1beb9e09-4039-4ce6-a33f-0d34e10b1cfe" (UID: "1beb9e09-4039-4ce6-a33f-0d34e10b1cfe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.532774 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.533032 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-984c-account-create-update-cwjvb"] Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.534707 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.546045 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9349a8ff-2652-4dcf-89d9-6d440269be8c-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "9349a8ff-2652-4dcf-89d9-6d440269be8c" (UID: "9349a8ff-2652-4dcf-89d9-6d440269be8c"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.558034 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.576926 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/426fe450-4a4d-4048-8ea4-422d39482ceb-combined-ca-bundle\") pod \"426fe450-4a4d-4048-8ea4-422d39482ceb\" (UID: \"426fe450-4a4d-4048-8ea4-422d39482ceb\") " Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.577197 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/426fe450-4a4d-4048-8ea4-422d39482ceb-config-data\") pod \"426fe450-4a4d-4048-8ea4-422d39482ceb\" (UID: \"426fe450-4a4d-4048-8ea4-422d39482ceb\") " Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.577348 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/216e612b-abc2-4d7c-8b10-28a595de5302-combined-ca-bundle\") pod \"216e612b-abc2-4d7c-8b10-28a595de5302\" (UID: \"216e612b-abc2-4d7c-8b10-28a595de5302\") " Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.577775 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d92xq\" (UniqueName: \"kubernetes.io/projected/426fe450-4a4d-4048-8ea4-422d39482ceb-kube-api-access-d92xq\") pod \"426fe450-4a4d-4048-8ea4-422d39482ceb\" (UID: \"426fe450-4a4d-4048-8ea4-422d39482ceb\") " Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.578011 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwthd\" (UniqueName: \"kubernetes.io/projected/216e612b-abc2-4d7c-8b10-28a595de5302-kube-api-access-vwthd\") pod \"216e612b-abc2-4d7c-8b10-28a595de5302\" (UID: \"216e612b-abc2-4d7c-8b10-28a595de5302\") " Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.578123 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/216e612b-abc2-4d7c-8b10-28a595de5302-config-data\") pod \"216e612b-abc2-4d7c-8b10-28a595de5302\" (UID: \"216e612b-abc2-4d7c-8b10-28a595de5302\") " Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.578178 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/960b6ae0-2577-444e-bc2a-bea4ec2917f9-config-data\") pod \"960b6ae0-2577-444e-bc2a-bea4ec2917f9\" (UID: \"960b6ae0-2577-444e-bc2a-bea4ec2917f9\") " Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.579684 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/216e612b-abc2-4d7c-8b10-28a595de5302-config-data-custom\") pod \"216e612b-abc2-4d7c-8b10-28a595de5302\" (UID: \"216e612b-abc2-4d7c-8b10-28a595de5302\") " Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.586277 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/426fe450-4a4d-4048-8ea4-422d39482ceb-config-data-custom\") pod \"426fe450-4a4d-4048-8ea4-422d39482ceb\" (UID: \"426fe450-4a4d-4048-8ea4-422d39482ceb\") " Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.589069 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/426fe450-4a4d-4048-8ea4-422d39482ceb-logs\") pod \"426fe450-4a4d-4048-8ea4-422d39482ceb\" (UID: \"426fe450-4a4d-4048-8ea4-422d39482ceb\") " Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.589159 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/960b6ae0-2577-444e-bc2a-bea4ec2917f9-combined-ca-bundle\") pod \"960b6ae0-2577-444e-bc2a-bea4ec2917f9\" (UID: \"960b6ae0-2577-444e-bc2a-bea4ec2917f9\") " Jan 29 07:06:34 crc kubenswrapper[4826]: 
I0129 07:06:34.589211 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/216e612b-abc2-4d7c-8b10-28a595de5302-logs\") pod \"216e612b-abc2-4d7c-8b10-28a595de5302\" (UID: \"216e612b-abc2-4d7c-8b10-28a595de5302\") " Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.589231 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbs7p\" (UniqueName: \"kubernetes.io/projected/960b6ae0-2577-444e-bc2a-bea4ec2917f9-kube-api-access-qbs7p\") pod \"960b6ae0-2577-444e-bc2a-bea4ec2917f9\" (UID: \"960b6ae0-2577-444e-bc2a-bea4ec2917f9\") " Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.589449 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c080b978-6895-4067-9dd5-2c23d4d68518-scripts" (OuterVolumeSpecName: "scripts") pod "c080b978-6895-4067-9dd5-2c23d4d68518" (UID: "c080b978-6895-4067-9dd5-2c23d4d68518"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.589770 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-984c-account-create-update-cwjvb"] Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.590334 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3de4a3bc-a01f-424a-8f17-60deaba1f189-kube-api-access-4q2fs" (OuterVolumeSpecName: "kube-api-access-4q2fs") pod "3de4a3bc-a01f-424a-8f17-60deaba1f189" (UID: "3de4a3bc-a01f-424a-8f17-60deaba1f189"). InnerVolumeSpecName "kube-api-access-4q2fs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.591264 4826 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9349a8ff-2652-4dcf-89d9-6d440269be8c-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.591305 4826 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c080b978-6895-4067-9dd5-2c23d4d68518-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.591319 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c080b978-6895-4067-9dd5-2c23d4d68518-logs\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.591328 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3de4a3bc-a01f-424a-8f17-60deaba1f189-logs\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.591337 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca39ae08-94df-4778-8203-bcff5806eff0-logs\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.591347 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.591341 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/426fe450-4a4d-4048-8ea4-422d39482ceb-logs" (OuterVolumeSpecName: "logs") pod "426fe450-4a4d-4048-8ea4-422d39482ceb" (UID: "426fe450-4a4d-4048-8ea4-422d39482ceb"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.591357 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4q2fs\" (UniqueName: \"kubernetes.io/projected/3de4a3bc-a01f-424a-8f17-60deaba1f189-kube-api-access-4q2fs\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.591413 4826 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca39ae08-94df-4778-8203-bcff5806eff0-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.591429 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c080b978-6895-4067-9dd5-2c23d4d68518-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.591781 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/216e612b-abc2-4d7c-8b10-28a595de5302-logs" (OuterVolumeSpecName: "logs") pod "216e612b-abc2-4d7c-8b10-28a595de5302" (UID: "216e612b-abc2-4d7c-8b10-28a595de5302"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.591920 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c080b978-6895-4067-9dd5-2c23d4d68518-kube-api-access-swjn5" (OuterVolumeSpecName: "kube-api-access-swjn5") pod "c080b978-6895-4067-9dd5-2c23d4d68518" (UID: "c080b978-6895-4067-9dd5-2c23d4d68518"). InnerVolumeSpecName "kube-api-access-swjn5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.592477 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-kube-api-access-kmg5b" (OuterVolumeSpecName: "kube-api-access-kmg5b") pod "1beb9e09-4039-4ce6-a33f-0d34e10b1cfe" (UID: "1beb9e09-4039-4ce6-a33f-0d34e10b1cfe"). InnerVolumeSpecName "kube-api-access-kmg5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.595844 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/426fe450-4a4d-4048-8ea4-422d39482ceb-kube-api-access-d92xq" (OuterVolumeSpecName: "kube-api-access-d92xq") pod "426fe450-4a4d-4048-8ea4-422d39482ceb" (UID: "426fe450-4a4d-4048-8ea4-422d39482ceb"). InnerVolumeSpecName "kube-api-access-d92xq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.596481 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "c080b978-6895-4067-9dd5-2c23d4d68518" (UID: "c080b978-6895-4067-9dd5-2c23d4d68518"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.599768 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca39ae08-94df-4778-8203-bcff5806eff0-kube-api-access-6pqlw" (OuterVolumeSpecName: "kube-api-access-6pqlw") pod "ca39ae08-94df-4778-8203-bcff5806eff0" (UID: "ca39ae08-94df-4778-8203-bcff5806eff0"). InnerVolumeSpecName "kube-api-access-6pqlw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.611807 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-97ad-account-create-update-qvhw8"] Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.617089 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-97ad-account-create-update-qvhw8"] Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.627592 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-2845-account-create-update-r9ssg"] Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.650612 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-2845-account-create-update-r9ssg"] Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.662624 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/426fe450-4a4d-4048-8ea4-422d39482ceb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "426fe450-4a4d-4048-8ea4-422d39482ceb" (UID: "426fe450-4a4d-4048-8ea4-422d39482ceb"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.668716 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/216e612b-abc2-4d7c-8b10-28a595de5302-kube-api-access-vwthd" (OuterVolumeSpecName: "kube-api-access-vwthd") pod "216e612b-abc2-4d7c-8b10-28a595de5302" (UID: "216e612b-abc2-4d7c-8b10-28a595de5302"). InnerVolumeSpecName "kube-api-access-vwthd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.688478 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/216e612b-abc2-4d7c-8b10-28a595de5302-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "216e612b-abc2-4d7c-8b10-28a595de5302" (UID: "216e612b-abc2-4d7c-8b10-28a595de5302"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.711508 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-431a-account-create-update-wgcpd"] Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.712260 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpwkq\" (UniqueName: \"kubernetes.io/projected/42903a4e-8bdc-4c7b-bd44-b87199a848e6-kube-api-access-qpwkq\") pod \"42903a4e-8bdc-4c7b-bd44-b87199a848e6\" (UID: \"42903a4e-8bdc-4c7b-bd44-b87199a848e6\") " Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.719062 4826 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.723005 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-431a-account-create-update-wgcpd"] Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.733424 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/960b6ae0-2577-444e-bc2a-bea4ec2917f9-kube-api-access-qbs7p" (OuterVolumeSpecName: "kube-api-access-qbs7p") pod "960b6ae0-2577-444e-bc2a-bea4ec2917f9" (UID: "960b6ae0-2577-444e-bc2a-bea4ec2917f9"). InnerVolumeSpecName "kube-api-access-qbs7p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.733623 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/42903a4e-8bdc-4c7b-bd44-b87199a848e6-kube-state-metrics-tls-certs\") pod \"42903a4e-8bdc-4c7b-bd44-b87199a848e6\" (UID: \"42903a4e-8bdc-4c7b-bd44-b87199a848e6\") " Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.733788 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbs7p\" (UniqueName: \"kubernetes.io/projected/960b6ae0-2577-444e-bc2a-bea4ec2917f9-kube-api-access-qbs7p\") pod \"960b6ae0-2577-444e-bc2a-bea4ec2917f9\" (UID: \"960b6ae0-2577-444e-bc2a-bea4ec2917f9\") " Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.733860 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbq9q\" (UniqueName: \"kubernetes.io/projected/ea529cf3-184e-446a-9c6a-759cf1bab14c-kube-api-access-pbq9q\") pod \"ea529cf3-184e-446a-9c6a-759cf1bab14c\" (UID: \"ea529cf3-184e-446a-9c6a-759cf1bab14c\") " Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.734001 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea529cf3-184e-446a-9c6a-759cf1bab14c-config-data-custom\") pod \"ea529cf3-184e-446a-9c6a-759cf1bab14c\" (UID: \"ea529cf3-184e-446a-9c6a-759cf1bab14c\") " Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.734045 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/af36d2e1-464b-4ada-9b91-2c18c52502d1-memcached-tls-certs\") pod \"af36d2e1-464b-4ada-9b91-2c18c52502d1\" (UID: \"af36d2e1-464b-4ada-9b91-2c18c52502d1\") " Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.734075 4826 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/af36d2e1-464b-4ada-9b91-2c18c52502d1-kolla-config\") pod \"af36d2e1-464b-4ada-9b91-2c18c52502d1\" (UID: \"af36d2e1-464b-4ada-9b91-2c18c52502d1\") " Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.734111 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42903a4e-8bdc-4c7b-bd44-b87199a848e6-combined-ca-bundle\") pod \"42903a4e-8bdc-4c7b-bd44-b87199a848e6\" (UID: \"42903a4e-8bdc-4c7b-bd44-b87199a848e6\") " Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.734132 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea529cf3-184e-446a-9c6a-759cf1bab14c-combined-ca-bundle\") pod \"ea529cf3-184e-446a-9c6a-759cf1bab14c\" (UID: \"ea529cf3-184e-446a-9c6a-759cf1bab14c\") " Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.734157 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea529cf3-184e-446a-9c6a-759cf1bab14c-scripts\") pod \"ea529cf3-184e-446a-9c6a-759cf1bab14c\" (UID: \"ea529cf3-184e-446a-9c6a-759cf1bab14c\") " Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.734183 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxb7l\" (UniqueName: \"kubernetes.io/projected/af36d2e1-464b-4ada-9b91-2c18c52502d1-kube-api-access-pxb7l\") pod \"af36d2e1-464b-4ada-9b91-2c18c52502d1\" (UID: \"af36d2e1-464b-4ada-9b91-2c18c52502d1\") " Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.734224 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/af36d2e1-464b-4ada-9b91-2c18c52502d1-config-data\") pod \"af36d2e1-464b-4ada-9b91-2c18c52502d1\" (UID: 
\"af36d2e1-464b-4ada-9b91-2c18c52502d1\") " Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.734340 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af36d2e1-464b-4ada-9b91-2c18c52502d1-combined-ca-bundle\") pod \"af36d2e1-464b-4ada-9b91-2c18c52502d1\" (UID: \"af36d2e1-464b-4ada-9b91-2c18c52502d1\") " Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.734427 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea529cf3-184e-446a-9c6a-759cf1bab14c-config-data\") pod \"ea529cf3-184e-446a-9c6a-759cf1bab14c\" (UID: \"ea529cf3-184e-446a-9c6a-759cf1bab14c\") " Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.734460 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/42903a4e-8bdc-4c7b-bd44-b87199a848e6-kube-state-metrics-tls-config\") pod \"42903a4e-8bdc-4c7b-bd44-b87199a848e6\" (UID: \"42903a4e-8bdc-4c7b-bd44-b87199a848e6\") " Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.734532 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ea529cf3-184e-446a-9c6a-759cf1bab14c-etc-machine-id\") pod \"ea529cf3-184e-446a-9c6a-759cf1bab14c\" (UID: \"ea529cf3-184e-446a-9c6a-759cf1bab14c\") " Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.735376 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmg5b\" (UniqueName: \"kubernetes.io/projected/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-kube-api-access-kmg5b\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.735409 4826 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") 
on node \"crc\" " Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.735421 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pqlw\" (UniqueName: \"kubernetes.io/projected/ca39ae08-94df-4778-8203-bcff5806eff0-kube-api-access-6pqlw\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.735432 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d92xq\" (UniqueName: \"kubernetes.io/projected/426fe450-4a4d-4048-8ea4-422d39482ceb-kube-api-access-d92xq\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.735441 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwthd\" (UniqueName: \"kubernetes.io/projected/216e612b-abc2-4d7c-8b10-28a595de5302-kube-api-access-vwthd\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.735450 4826 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.735462 4826 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/216e612b-abc2-4d7c-8b10-28a595de5302-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.735472 4826 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/426fe450-4a4d-4048-8ea4-422d39482ceb-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.735483 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swjn5\" (UniqueName: \"kubernetes.io/projected/c080b978-6895-4067-9dd5-2c23d4d68518-kube-api-access-swjn5\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.735493 
4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/426fe450-4a4d-4048-8ea4-422d39482ceb-logs\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.735503 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/216e612b-abc2-4d7c-8b10-28a595de5302-logs\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:34 crc kubenswrapper[4826]: W0129 07:06:34.739417 4826 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/960b6ae0-2577-444e-bc2a-bea4ec2917f9/volumes/kubernetes.io~projected/kube-api-access-qbs7p Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.739472 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/960b6ae0-2577-444e-bc2a-bea4ec2917f9-kube-api-access-qbs7p" (OuterVolumeSpecName: "kube-api-access-qbs7p") pod "960b6ae0-2577-444e-bc2a-bea4ec2917f9" (UID: "960b6ae0-2577-444e-bc2a-bea4ec2917f9"). InnerVolumeSpecName "kube-api-access-qbs7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.751532 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea529cf3-184e-446a-9c6a-759cf1bab14c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ea529cf3-184e-446a-9c6a-759cf1bab14c" (UID: "ea529cf3-184e-446a-9c6a-759cf1bab14c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.752228 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af36d2e1-464b-4ada-9b91-2c18c52502d1-config-data" (OuterVolumeSpecName: "config-data") pod "af36d2e1-464b-4ada-9b91-2c18c52502d1" (UID: "af36d2e1-464b-4ada-9b91-2c18c52502d1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:06:34 crc kubenswrapper[4826]: E0129 07:06:34.752620 4826 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 29 07:06:34 crc kubenswrapper[4826]: E0129 07:06:34.752682 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1794f620-102a-4b9c-9097-713579ec55ad-config-data podName:1794f620-102a-4b9c-9097-713579ec55ad nodeName:}" failed. No retries permitted until 2026-01-29 07:06:42.752661855 +0000 UTC m=+1386.614454914 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/1794f620-102a-4b9c-9097-713579ec55ad-config-data") pod "rabbitmq-server-0" (UID: "1794f620-102a-4b9c-9097-713579ec55ad") : configmap "rabbitmq-config-data" not found Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.753194 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af36d2e1-464b-4ada-9b91-2c18c52502d1-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "af36d2e1-464b-4ada-9b91-2c18c52502d1" (UID: "af36d2e1-464b-4ada-9b91-2c18c52502d1"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:06:34 crc kubenswrapper[4826]: E0129 07:06:34.761308 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6 is running failed: container process not found" containerID="55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 07:06:34 crc kubenswrapper[4826]: E0129 07:06:34.772891 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6 is running failed: container process not found" containerID="55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 07:06:34 crc kubenswrapper[4826]: E0129 07:06:34.773084 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7279ce2f370fc8321c9e51d6b7f683eb1d9bd5181f88ceec0fa850371165ae08" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.773507 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42903a4e-8bdc-4c7b-bd44-b87199a848e6-kube-api-access-qpwkq" (OuterVolumeSpecName: "kube-api-access-qpwkq") pod "42903a4e-8bdc-4c7b-bd44-b87199a848e6" (UID: "42903a4e-8bdc-4c7b-bd44-b87199a848e6"). InnerVolumeSpecName "kube-api-access-qpwkq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:06:34 crc kubenswrapper[4826]: E0129 07:06:34.782623 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6 is running failed: container process not found" containerID="55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 07:06:34 crc kubenswrapper[4826]: E0129 07:06:34.782713 4826 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-m2d2v" podUID="2d13cc8c-363d-4dcb-af5f-92318cf72a81" containerName="ovsdb-server" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.783546 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af36d2e1-464b-4ada-9b91-2c18c52502d1-kube-api-access-pxb7l" (OuterVolumeSpecName: "kube-api-access-pxb7l") pod "af36d2e1-464b-4ada-9b91-2c18c52502d1" (UID: "af36d2e1-464b-4ada-9b91-2c18c52502d1"). InnerVolumeSpecName "kube-api-access-pxb7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.784343 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-84d9-account-create-update-w85wz"] Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.784901 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea529cf3-184e-446a-9c6a-759cf1bab14c-kube-api-access-pbq9q" (OuterVolumeSpecName: "kube-api-access-pbq9q") pod "ea529cf3-184e-446a-9c6a-759cf1bab14c" (UID: "ea529cf3-184e-446a-9c6a-759cf1bab14c"). 
InnerVolumeSpecName "kube-api-access-pbq9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:06:34 crc kubenswrapper[4826]: E0129 07:06:34.794816 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7279ce2f370fc8321c9e51d6b7f683eb1d9bd5181f88ceec0fa850371165ae08" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 07:06:34 crc kubenswrapper[4826]: E0129 07:06:34.836544 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7279ce2f370fc8321c9e51d6b7f683eb1d9bd5181f88ceec0fa850371165ae08" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 07:06:34 crc kubenswrapper[4826]: E0129 07:06:34.836647 4826 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-m2d2v" podUID="2d13cc8c-363d-4dcb-af5f-92318cf72a81" containerName="ovs-vswitchd" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.839931 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99eab789-1139-4666-9fb2-dfddd270bbf2-operator-scripts\") pod \"keystone-6fcd-account-create-update-cht6q\" (UID: \"99eab789-1139-4666-9fb2-dfddd270bbf2\") " pod="openstack/keystone-6fcd-account-create-update-cht6q" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.839993 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmhjd\" (UniqueName: \"kubernetes.io/projected/99eab789-1139-4666-9fb2-dfddd270bbf2-kube-api-access-gmhjd\") pod 
\"keystone-6fcd-account-create-update-cht6q\" (UID: \"99eab789-1139-4666-9fb2-dfddd270bbf2\") " pod="openstack/keystone-6fcd-account-create-update-cht6q" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.840151 4826 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ea529cf3-184e-446a-9c6a-759cf1bab14c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.840162 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpwkq\" (UniqueName: \"kubernetes.io/projected/42903a4e-8bdc-4c7b-bd44-b87199a848e6-kube-api-access-qpwkq\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.840172 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbs7p\" (UniqueName: \"kubernetes.io/projected/960b6ae0-2577-444e-bc2a-bea4ec2917f9-kube-api-access-qbs7p\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.840181 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbq9q\" (UniqueName: \"kubernetes.io/projected/ea529cf3-184e-446a-9c6a-759cf1bab14c-kube-api-access-pbq9q\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.840190 4826 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/af36d2e1-464b-4ada-9b91-2c18c52502d1-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.840198 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxb7l\" (UniqueName: \"kubernetes.io/projected/af36d2e1-464b-4ada-9b91-2c18c52502d1-kube-api-access-pxb7l\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.840206 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/af36d2e1-464b-4ada-9b91-2c18c52502d1-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:34 crc kubenswrapper[4826]: E0129 07:06:34.844769 4826 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 29 07:06:34 crc kubenswrapper[4826]: E0129 07:06:34.844891 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/99eab789-1139-4666-9fb2-dfddd270bbf2-operator-scripts podName:99eab789-1139-4666-9fb2-dfddd270bbf2 nodeName:}" failed. No retries permitted until 2026-01-29 07:06:36.844847452 +0000 UTC m=+1380.706640521 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/99eab789-1139-4666-9fb2-dfddd270bbf2-operator-scripts") pod "keystone-6fcd-account-create-update-cht6q" (UID: "99eab789-1139-4666-9fb2-dfddd270bbf2") : configmap "openstack-scripts" not found Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.845421 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea529cf3-184e-446a-9c6a-759cf1bab14c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ea529cf3-184e-446a-9c6a-759cf1bab14c" (UID: "ea529cf3-184e-446a-9c6a-759cf1bab14c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.847789 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea529cf3-184e-446a-9c6a-759cf1bab14c-scripts" (OuterVolumeSpecName: "scripts") pod "ea529cf3-184e-446a-9c6a-759cf1bab14c" (UID: "ea529cf3-184e-446a-9c6a-759cf1bab14c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.862704 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d7a8a86-77c4-4e06-bd13-3c2ba67c1d61" path="/var/lib/kubelet/pods/3d7a8a86-77c4-4e06-bd13-3c2ba67c1d61/volumes" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.863109 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="489ffa8d-9021-4291-b68b-df3d3a146fe1" path="/var/lib/kubelet/pods/489ffa8d-9021-4291-b68b-df3d3a146fe1/volumes" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.863470 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f239c3c-6b60-459b-a4a2-5deb5288161a" path="/var/lib/kubelet/pods/4f239c3c-6b60-459b-a4a2-5deb5288161a/volumes" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.863791 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="598e7579-2935-4fad-85de-031f0238611e" path="/var/lib/kubelet/pods/598e7579-2935-4fad-85de-031f0238611e/volumes" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.864131 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9024c38-a986-4d40-be2a-9432e2101586" path="/var/lib/kubelet/pods/b9024c38-a986-4d40-be2a-9432e2101586/volumes" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.864535 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c17bd300-9b3b-4b17-a668-ac2038915fc8" path="/var/lib/kubelet/pods/c17bd300-9b3b-4b17-a668-ac2038915fc8/volumes" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.865513 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4b5abc4-73e1-4908-a776-0162b26a6d30" path="/var/lib/kubelet/pods/d4b5abc4-73e1-4908-a776-0162b26a6d30/volumes" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.866020 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2621642-f600-4c3e-b641-9665ec72e213" 
path="/var/lib/kubelet/pods/f2621642-f600-4c3e-b641-9665ec72e213/volumes" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.866543 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fde1114c-0d7b-4f87-9203-2dff8fd98201" path="/var/lib/kubelet/pods/fde1114c-0d7b-4f87-9203-2dff8fd98201/volumes" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.867646 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-4zwtm" podUID="bbbbec70-be7f-4a31-9f97-76d5c78b1cd0" containerName="ovn-controller" probeResult="failure" output="command timed out" Jan 29 07:06:34 crc kubenswrapper[4826]: E0129 07:06:34.868803 4826 projected.go:194] Error preparing data for projected volume kube-api-access-gmhjd for pod openstack/keystone-6fcd-account-create-update-cht6q: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 29 07:06:34 crc kubenswrapper[4826]: E0129 07:06:34.868874 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99eab789-1139-4666-9fb2-dfddd270bbf2-kube-api-access-gmhjd podName:99eab789-1139-4666-9fb2-dfddd270bbf2 nodeName:}" failed. No retries permitted until 2026-01-29 07:06:36.868847623 +0000 UTC m=+1380.730640682 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gmhjd" (UniqueName: "kubernetes.io/projected/99eab789-1139-4666-9fb2-dfddd270bbf2-kube-api-access-gmhjd") pod "keystone-6fcd-account-create-update-cht6q" (UID: "99eab789-1139-4666-9fb2-dfddd270bbf2") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.938443 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-55957b69d9-prlpm" podUID="b0f5ad8c-072d-4994-acb3-e898c0981eef" containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.0.167:8080/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.938589 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-55957b69d9-prlpm" podUID="b0f5ad8c-072d-4994-acb3-e898c0981eef" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.167:8080/healthcheck\": context deadline exceeded" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.942980 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea529cf3-184e-446a-9c6a-759cf1bab14c-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:34 crc kubenswrapper[4826]: I0129 07:06:34.943006 4826 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea529cf3-184e-446a-9c6a-759cf1bab14c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.006511 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zrzbq" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.014799 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-4zwtm" podUID="bbbbec70-be7f-4a31-9f97-76d5c78b1cd0" containerName="ovn-controller" probeResult="failure" output=< Jan 29 07:06:35 crc kubenswrapper[4826]: ERROR - Failed to get connection status from ovn-controller, ovn-appctl exit status: 0 Jan 29 07:06:35 crc kubenswrapper[4826]: > Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.023960 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/426fe450-4a4d-4048-8ea4-422d39482ceb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "426fe450-4a4d-4048-8ea4-422d39482ceb" (UID: "426fe450-4a4d-4048-8ea4-422d39482ceb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.044596 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/426fe450-4a4d-4048-8ea4-422d39482ceb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.050629 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca39ae08-94df-4778-8203-bcff5806eff0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca39ae08-94df-4778-8203-bcff5806eff0" (UID: "ca39ae08-94df-4778-8203-bcff5806eff0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.088339 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c080b978-6895-4067-9dd5-2c23d4d68518-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c080b978-6895-4067-9dd5-2c23d4d68518" (UID: "c080b978-6895-4067-9dd5-2c23d4d68518"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.099197 4826 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.145749 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lspb9\" (UniqueName: \"kubernetes.io/projected/7aa6497d-3379-49b2-887e-6ed46928266e-kube-api-access-lspb9\") pod \"7aa6497d-3379-49b2-887e-6ed46928266e\" (UID: \"7aa6497d-3379-49b2-887e-6ed46928266e\") " Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.146005 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7aa6497d-3379-49b2-887e-6ed46928266e-operator-scripts\") pod \"7aa6497d-3379-49b2-887e-6ed46928266e\" (UID: \"7aa6497d-3379-49b2-887e-6ed46928266e\") " Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.146356 4826 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.146373 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca39ae08-94df-4778-8203-bcff5806eff0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:35 crc 
kubenswrapper[4826]: I0129 07:06:35.146384 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c080b978-6895-4067-9dd5-2c23d4d68518-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.146846 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7aa6497d-3379-49b2-887e-6ed46928266e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7aa6497d-3379-49b2-887e-6ed46928266e" (UID: "7aa6497d-3379-49b2-887e-6ed46928266e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.146959 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42903a4e-8bdc-4c7b-bd44-b87199a848e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42903a4e-8bdc-4c7b-bd44-b87199a848e6" (UID: "42903a4e-8bdc-4c7b-bd44-b87199a848e6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.171718 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aa6497d-3379-49b2-887e-6ed46928266e-kube-api-access-lspb9" (OuterVolumeSpecName: "kube-api-access-lspb9") pod "7aa6497d-3379-49b2-887e-6ed46928266e" (UID: "7aa6497d-3379-49b2-887e-6ed46928266e"). InnerVolumeSpecName "kube-api-access-lspb9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.184396 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-84d9-account-create-update-w85wz"] Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.184704 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.184720 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.184734 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-2f64-account-create-update-mndz2"] Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.184743 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-2f64-account-create-update-mndz2"] Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.184754 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.184764 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.184774 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.184784 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.203715 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42903a4e-8bdc-4c7b-bd44-b87199a848e6-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "42903a4e-8bdc-4c7b-bd44-b87199a848e6" (UID: "42903a4e-8bdc-4c7b-bd44-b87199a848e6"). InnerVolumeSpecName "kube-state-metrics-tls-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.207265 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/216e612b-abc2-4d7c-8b10-28a595de5302-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "216e612b-abc2-4d7c-8b10-28a595de5302" (UID: "216e612b-abc2-4d7c-8b10-28a595de5302"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.240403 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de4a3bc-a01f-424a-8f17-60deaba1f189-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3de4a3bc-a01f-424a-8f17-60deaba1f189" (UID: "3de4a3bc-a01f-424a-8f17-60deaba1f189"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.249683 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lspb9\" (UniqueName: \"kubernetes.io/projected/7aa6497d-3379-49b2-887e-6ed46928266e-kube-api-access-lspb9\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.249713 4826 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/42903a4e-8bdc-4c7b-bd44-b87199a848e6-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.249746 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de4a3bc-a01f-424a-8f17-60deaba1f189-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.249756 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/216e612b-abc2-4d7c-8b10-28a595de5302-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.249767 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42903a4e-8bdc-4c7b-bd44-b87199a848e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.249776 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7aa6497d-3379-49b2-887e-6ed46928266e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.274644 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af36d2e1-464b-4ada-9b91-2c18c52502d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af36d2e1-464b-4ada-9b91-2c18c52502d1" (UID: "af36d2e1-464b-4ada-9b91-2c18c52502d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.293857 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5378dab4-ad0c-4259-a7d2-d3f7e784a142-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5378dab4-ad0c-4259-a7d2-d3f7e784a142" (UID: "5378dab4-ad0c-4259-a7d2-d3f7e784a142"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.295404 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de4a3bc-a01f-424a-8f17-60deaba1f189-config-data" (OuterVolumeSpecName: "config-data") pod "3de4a3bc-a01f-424a-8f17-60deaba1f189" (UID: "3de4a3bc-a01f-424a-8f17-60deaba1f189"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.299601 4826 generic.go:334] "Generic (PLEG): container finished" podID="1794f620-102a-4b9c-9097-713579ec55ad" containerID="f0ac6dfd0c3c53f1c0e6b9f3709b00a0bca023456e5256221df78b7693b4c9bf" exitCode=0 Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.299671 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1794f620-102a-4b9c-9097-713579ec55ad","Type":"ContainerDied","Data":"f0ac6dfd0c3c53f1c0e6b9f3709b00a0bca023456e5256221df78b7693b4c9bf"} Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.315609 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af36d2e1-464b-4ada-9b91-2c18c52502d1-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "af36d2e1-464b-4ada-9b91-2c18c52502d1" (UID: "af36d2e1-464b-4ada-9b91-2c18c52502d1"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.315714 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.315788 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.315844 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zrzbq" event={"ID":"7aa6497d-3379-49b2-887e-6ed46928266e","Type":"ContainerDied","Data":"fc7b03ec4b883ea5fd67e06d394ab7f364ad215d199fd63c20ca7266b4660657"} Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.315915 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.315955 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-c497f4886-n5gtr" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.315984 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.316016 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.316047 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zrzbq" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.316080 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.316111 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6fcd-account-create-update-cht6q" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.316101 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.316314 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-config-data" (OuterVolumeSpecName: "config-data") pod "1beb9e09-4039-4ce6-a33f-0d34e10b1cfe" (UID: "1beb9e09-4039-4ce6-a33f-0d34e10b1cfe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.316458 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6797f89db9-wjtvh" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.316538 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7c8d5fc944-9m8wp" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.316653 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-596b9c7d4-2m8gc" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.316755 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.338518 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/960b6ae0-2577-444e-bc2a-bea4ec2917f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "960b6ae0-2577-444e-bc2a-bea4ec2917f9" (UID: "960b6ae0-2577-444e-bc2a-bea4ec2917f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.339858 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6fcd-account-create-update-cht6q" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.353710 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/216e612b-abc2-4d7c-8b10-28a595de5302-config-data" (OuterVolumeSpecName: "config-data") pod "216e612b-abc2-4d7c-8b10-28a595de5302" (UID: "216e612b-abc2-4d7c-8b10-28a595de5302"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.356601 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/216e612b-abc2-4d7c-8b10-28a595de5302-config-data\") pod \"216e612b-abc2-4d7c-8b10-28a595de5302\" (UID: \"216e612b-abc2-4d7c-8b10-28a595de5302\") " Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.364362 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/960b6ae0-2577-444e-bc2a-bea4ec2917f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.364489 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3de4a3bc-a01f-424a-8f17-60deaba1f189-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.364547 4826 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/af36d2e1-464b-4ada-9b91-2c18c52502d1-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.364567 4826 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5378dab4-ad0c-4259-a7d2-d3f7e784a142-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.364577 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.364587 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af36d2e1-464b-4ada-9b91-2c18c52502d1-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Jan 29 07:06:35 crc kubenswrapper[4826]: W0129 07:06:35.365576 4826 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/216e612b-abc2-4d7c-8b10-28a595de5302/volumes/kubernetes.io~secret/config-data Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.365599 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/216e612b-abc2-4d7c-8b10-28a595de5302-config-data" (OuterVolumeSpecName: "config-data") pod "216e612b-abc2-4d7c-8b10-28a595de5302" (UID: "216e612b-abc2-4d7c-8b10-28a595de5302"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.401874 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca39ae08-94df-4778-8203-bcff5806eff0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ca39ae08-94df-4778-8203-bcff5806eff0" (UID: "ca39ae08-94df-4778-8203-bcff5806eff0"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.407142 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.423708 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/426fe450-4a4d-4048-8ea4-422d39482ceb-config-data" (OuterVolumeSpecName: "config-data") pod "426fe450-4a4d-4048-8ea4-422d39482ceb" (UID: "426fe450-4a4d-4048-8ea4-422d39482ceb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.466312 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c080b978-6895-4067-9dd5-2c23d4d68518-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c080b978-6895-4067-9dd5-2c23d4d68518" (UID: "c080b978-6895-4067-9dd5-2c23d4d68518"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.466352 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca39ae08-94df-4778-8203-bcff5806eff0-config-data" (OuterVolumeSpecName: "config-data") pod "ca39ae08-94df-4778-8203-bcff5806eff0" (UID: "ca39ae08-94df-4778-8203-bcff5806eff0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.473940 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea529cf3-184e-446a-9c6a-759cf1bab14c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea529cf3-184e-446a-9c6a-759cf1bab14c" (UID: "ea529cf3-184e-446a-9c6a-759cf1bab14c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.476702 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1beb9e09-4039-4ce6-a33f-0d34e10b1cfe" (UID: "1beb9e09-4039-4ce6-a33f-0d34e10b1cfe"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.476769 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.480494 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/960b6ae0-2577-444e-bc2a-bea4ec2917f9-config-data" (OuterVolumeSpecName: "config-data") pod "960b6ae0-2577-444e-bc2a-bea4ec2917f9" (UID: "960b6ae0-2577-444e-bc2a-bea4ec2917f9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.480673 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/960b6ae0-2577-444e-bc2a-bea4ec2917f9-config-data\") pod \"960b6ae0-2577-444e-bc2a-bea4ec2917f9\" (UID: \"960b6ae0-2577-444e-bc2a-bea4ec2917f9\") " Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.482189 4826 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c080b978-6895-4067-9dd5-2c23d4d68518-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.482220 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/426fe450-4a4d-4048-8ea4-422d39482ceb-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.482232 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.482250 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ca39ae08-94df-4778-8203-bcff5806eff0-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.482262 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea529cf3-184e-446a-9c6a-759cf1bab14c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.482273 4826 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca39ae08-94df-4778-8203-bcff5806eff0-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.482291 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/216e612b-abc2-4d7c-8b10-28a595de5302-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:35 crc kubenswrapper[4826]: W0129 07:06:35.482406 4826 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/960b6ae0-2577-444e-bc2a-bea4ec2917f9/volumes/kubernetes.io~secret/config-data Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.482422 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/960b6ae0-2577-444e-bc2a-bea4ec2917f9-config-data" (OuterVolumeSpecName: "config-data") pod "960b6ae0-2577-444e-bc2a-bea4ec2917f9" (UID: "960b6ae0-2577-444e-bc2a-bea4ec2917f9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.492666 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42903a4e-8bdc-4c7b-bd44-b87199a848e6-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "42903a4e-8bdc-4c7b-bd44-b87199a848e6" (UID: "42903a4e-8bdc-4c7b-bd44-b87199a848e6"). 
InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.502618 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de4a3bc-a01f-424a-8f17-60deaba1f189-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3de4a3bc-a01f-424a-8f17-60deaba1f189" (UID: "3de4a3bc-a01f-424a-8f17-60deaba1f189"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.531734 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca39ae08-94df-4778-8203-bcff5806eff0-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ca39ae08-94df-4778-8203-bcff5806eff0" (UID: "ca39ae08-94df-4778-8203-bcff5806eff0"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.537076 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-zrzbq"] Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.543655 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1beb9e09-4039-4ce6-a33f-0d34e10b1cfe" (UID: "1beb9e09-4039-4ce6-a33f-0d34e10b1cfe"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.554449 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c080b978-6895-4067-9dd5-2c23d4d68518-config-data" (OuterVolumeSpecName: "config-data") pod "c080b978-6895-4067-9dd5-2c23d4d68518" (UID: "c080b978-6895-4067-9dd5-2c23d4d68518"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.554899 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de4a3bc-a01f-424a-8f17-60deaba1f189-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3de4a3bc-a01f-424a-8f17-60deaba1f189" (UID: "3de4a3bc-a01f-424a-8f17-60deaba1f189"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.584948 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-zrzbq"] Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.601203 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/960b6ae0-2577-444e-bc2a-bea4ec2917f9-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.601233 4826 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.601249 4826 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca39ae08-94df-4778-8203-bcff5806eff0-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.601271 4826 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/42903a4e-8bdc-4c7b-bd44-b87199a848e6-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.601281 4826 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3de4a3bc-a01f-424a-8f17-60deaba1f189-public-tls-certs\") on node 
\"crc\" DevicePath \"\"" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.601290 4826 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3de4a3bc-a01f-424a-8f17-60deaba1f189-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.601312 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c080b978-6895-4067-9dd5-2c23d4d68518-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.609975 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.613476 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1beb9e09-4039-4ce6-a33f-0d34e10b1cfe" (UID: "1beb9e09-4039-4ce6-a33f-0d34e10b1cfe"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.614210 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea529cf3-184e-446a-9c6a-759cf1bab14c-config-data" (OuterVolumeSpecName: "config-data") pod "ea529cf3-184e-446a-9c6a-759cf1bab14c" (UID: "ea529cf3-184e-446a-9c6a-759cf1bab14c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.658831 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.661035 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.661080 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.681538 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.702196 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea529cf3-184e-446a-9c6a-759cf1bab14c-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.702230 4826 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.704872 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.775670 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.805740 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"1794f620-102a-4b9c-9097-713579ec55ad\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.805789 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1794f620-102a-4b9c-9097-713579ec55ad-plugins-conf\") pod \"1794f620-102a-4b9c-9097-713579ec55ad\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.805833 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1794f620-102a-4b9c-9097-713579ec55ad-server-conf\") pod \"1794f620-102a-4b9c-9097-713579ec55ad\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.805862 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhqxf\" (UniqueName: \"kubernetes.io/projected/1794f620-102a-4b9c-9097-713579ec55ad-kube-api-access-rhqxf\") pod \"1794f620-102a-4b9c-9097-713579ec55ad\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.805901 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1794f620-102a-4b9c-9097-713579ec55ad-pod-info\") pod \"1794f620-102a-4b9c-9097-713579ec55ad\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.805923 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/1794f620-102a-4b9c-9097-713579ec55ad-rabbitmq-tls\") pod \"1794f620-102a-4b9c-9097-713579ec55ad\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.805958 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1794f620-102a-4b9c-9097-713579ec55ad-rabbitmq-erlang-cookie\") pod \"1794f620-102a-4b9c-9097-713579ec55ad\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.805976 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1794f620-102a-4b9c-9097-713579ec55ad-rabbitmq-plugins\") pod \"1794f620-102a-4b9c-9097-713579ec55ad\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.806006 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1794f620-102a-4b9c-9097-713579ec55ad-erlang-cookie-secret\") pod \"1794f620-102a-4b9c-9097-713579ec55ad\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.806046 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1794f620-102a-4b9c-9097-713579ec55ad-rabbitmq-confd\") pod \"1794f620-102a-4b9c-9097-713579ec55ad\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.806158 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1794f620-102a-4b9c-9097-713579ec55ad-config-data\") pod \"1794f620-102a-4b9c-9097-713579ec55ad\" (UID: \"1794f620-102a-4b9c-9097-713579ec55ad\") " Jan 29 07:06:35 crc 
kubenswrapper[4826]: I0129 07:06:35.806557 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1794f620-102a-4b9c-9097-713579ec55ad-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "1794f620-102a-4b9c-9097-713579ec55ad" (UID: "1794f620-102a-4b9c-9097-713579ec55ad"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:06:35 crc kubenswrapper[4826]: E0129 07:06:35.806575 4826 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 29 07:06:35 crc kubenswrapper[4826]: E0129 07:06:35.806649 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-config-data podName:0da3bc6b-99a0-4de9-9479-5aaef8bfd81c nodeName:}" failed. No retries permitted until 2026-01-29 07:06:43.806630006 +0000 UTC m=+1387.668423075 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-config-data") pod "rabbitmq-cell1-server-0" (UID: "0da3bc6b-99a0-4de9-9479-5aaef8bfd81c") : configmap "rabbitmq-cell1-config-data" not found Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.806883 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1794f620-102a-4b9c-9097-713579ec55ad-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "1794f620-102a-4b9c-9097-713579ec55ad" (UID: "1794f620-102a-4b9c-9097-713579ec55ad"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.808906 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1794f620-102a-4b9c-9097-713579ec55ad-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "1794f620-102a-4b9c-9097-713579ec55ad" (UID: "1794f620-102a-4b9c-9097-713579ec55ad"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.839413 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1794f620-102a-4b9c-9097-713579ec55ad-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "1794f620-102a-4b9c-9097-713579ec55ad" (UID: "1794f620-102a-4b9c-9097-713579ec55ad"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:06:35 crc kubenswrapper[4826]: I0129 07:06:35.840038 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "1794f620-102a-4b9c-9097-713579ec55ad" (UID: "1794f620-102a-4b9c-9097-713579ec55ad"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:35.842454 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1794f620-102a-4b9c-9097-713579ec55ad-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "1794f620-102a-4b9c-9097-713579ec55ad" (UID: "1794f620-102a-4b9c-9097-713579ec55ad"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:35.855507 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1794f620-102a-4b9c-9097-713579ec55ad-kube-api-access-rhqxf" (OuterVolumeSpecName: "kube-api-access-rhqxf") pod "1794f620-102a-4b9c-9097-713579ec55ad" (UID: "1794f620-102a-4b9c-9097-713579ec55ad"). InnerVolumeSpecName "kube-api-access-rhqxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:35.857453 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/1794f620-102a-4b9c-9097-713579ec55ad-pod-info" (OuterVolumeSpecName: "pod-info") pod "1794f620-102a-4b9c-9097-713579ec55ad" (UID: "1794f620-102a-4b9c-9097-713579ec55ad"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:35.861370 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:35.866870 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:35.901183 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:35.914286 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:35.918855 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhqxf\" (UniqueName: \"kubernetes.io/projected/1794f620-102a-4b9c-9097-713579ec55ad-kube-api-access-rhqxf\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:35.918885 4826 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/1794f620-102a-4b9c-9097-713579ec55ad-pod-info\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:35.918895 4826 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1794f620-102a-4b9c-9097-713579ec55ad-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:35.918904 4826 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1794f620-102a-4b9c-9097-713579ec55ad-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:35.918913 4826 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1794f620-102a-4b9c-9097-713579ec55ad-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:35.918928 4826 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1794f620-102a-4b9c-9097-713579ec55ad-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:35.918952 4826 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:35.918962 4826 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1794f620-102a-4b9c-9097-713579ec55ad-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:36 crc kubenswrapper[4826]: E0129 07:06:35.929018 4826 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 29 07:06:36 crc kubenswrapper[4826]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 
2026-01-29T07:06:28Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 29 07:06:36 crc kubenswrapper[4826]: /etc/init.d/functions: line 589: 393 Alarm clock "$@" Jan 29 07:06:36 crc kubenswrapper[4826]: > execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-4zwtm" message=< Jan 29 07:06:36 crc kubenswrapper[4826]: Exiting ovn-controller (1) [FAILED] Jan 29 07:06:36 crc kubenswrapper[4826]: Killing ovn-controller (1) [ OK ] Jan 29 07:06:36 crc kubenswrapper[4826]: Killing ovn-controller (1) with SIGKILL [ OK ] Jan 29 07:06:36 crc kubenswrapper[4826]: 2026-01-29T07:06:28Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 29 07:06:36 crc kubenswrapper[4826]: /etc/init.d/functions: line 589: 393 Alarm clock "$@" Jan 29 07:06:36 crc kubenswrapper[4826]: > Jan 29 07:06:36 crc kubenswrapper[4826]: E0129 07:06:35.929067 4826 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 29 07:06:36 crc kubenswrapper[4826]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-01-29T07:06:28Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 29 07:06:36 crc kubenswrapper[4826]: /etc/init.d/functions: line 589: 393 Alarm clock "$@" Jan 29 07:06:36 crc kubenswrapper[4826]: > pod="openstack/ovn-controller-4zwtm" podUID="bbbbec70-be7f-4a31-9f97-76d5c78b1cd0" containerName="ovn-controller" containerID="cri-o://9015e9cab42dfd44fe7092ceb6eb5f305eac76d9b101c255e55adbb653135a3c" Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:35.929119 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-4zwtm" podUID="bbbbec70-be7f-4a31-9f97-76d5c78b1cd0" containerName="ovn-controller" containerID="cri-o://9015e9cab42dfd44fe7092ceb6eb5f305eac76d9b101c255e55adbb653135a3c" gracePeriod=22 Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:35.931450 4826 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:35.938525 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:35.980471 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1794f620-102a-4b9c-9097-713579ec55ad-config-data" (OuterVolumeSpecName: "config-data") pod "1794f620-102a-4b9c-9097-713579ec55ad" (UID: "1794f620-102a-4b9c-9097-713579ec55ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:35.981632 4826 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.019508 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-c497f4886-n5gtr"] Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.020997 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1794f620-102a-4b9c-9097-713579ec55ad-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.021023 4826 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.047281 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-c497f4886-n5gtr"] Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.076637 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7c8d5fc944-9m8wp"] Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.077958 4826 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1794f620-102a-4b9c-9097-713579ec55ad-server-conf" (OuterVolumeSpecName: "server-conf") pod "1794f620-102a-4b9c-9097-713579ec55ad" (UID: "1794f620-102a-4b9c-9097-713579ec55ad"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.080930 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7c8d5fc944-9m8wp"] Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.089981 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.099782 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.107866 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-596b9c7d4-2m8gc"] Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.110070 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1794f620-102a-4b9c-9097-713579ec55ad-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "1794f620-102a-4b9c-9097-713579ec55ad" (UID: "1794f620-102a-4b9c-9097-713579ec55ad"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.116373 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-596b9c7d4-2m8gc"]
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.122925 4826 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1794f620-102a-4b9c-9097-713579ec55ad-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.122974 4826 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1794f620-102a-4b9c-9097-713579ec55ad-server-conf\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.123664 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.132279 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.137011 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-6797f89db9-wjtvh"]
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.145846 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-6797f89db9-wjtvh"]
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.331161 4826 generic.go:334] "Generic (PLEG): container finished" podID="34f59971-b32b-4b19-950c-77af3de22fd6" containerID="212d41639781efa740d28e2f69b9a84d9805cc97ad8560cd6ff518592748ede0" exitCode=0
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.331282 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34f59971-b32b-4b19-950c-77af3de22fd6","Type":"ContainerDied","Data":"212d41639781efa740d28e2f69b9a84d9805cc97ad8560cd6ff518592748ede0"}
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.345715 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_17dd6ec1-84fb-4bb3-8700-c8691f059937/ovn-northd/0.log"
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.345783 4826 generic.go:334] "Generic (PLEG): container finished" podID="17dd6ec1-84fb-4bb3-8700-c8691f059937" containerID="7ba9ac04e0850886e890e608a55f11c50e9ca3d5994419279b8b8fc19be3fbd4" exitCode=139
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.345899 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"17dd6ec1-84fb-4bb3-8700-c8691f059937","Type":"ContainerDied","Data":"7ba9ac04e0850886e890e608a55f11c50e9ca3d5994419279b8b8fc19be3fbd4"}
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.358848 4826 generic.go:334] "Generic (PLEG): container finished" podID="0da3bc6b-99a0-4de9-9479-5aaef8bfd81c" containerID="561f44049eef8bcf9743aabf5fda4a13b2156ef6047f470fc4f0c9a570583cb1" exitCode=0
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.358982 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c","Type":"ContainerDied","Data":"561f44049eef8bcf9743aabf5fda4a13b2156ef6047f470fc4f0c9a570583cb1"}
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.371778 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-4zwtm_bbbbec70-be7f-4a31-9f97-76d5c78b1cd0/ovn-controller/0.log"
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.371817 4826 generic.go:334] "Generic (PLEG): container finished" podID="bbbbec70-be7f-4a31-9f97-76d5c78b1cd0" containerID="9015e9cab42dfd44fe7092ceb6eb5f305eac76d9b101c255e55adbb653135a3c" exitCode=137
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.371915 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4zwtm" event={"ID":"bbbbec70-be7f-4a31-9f97-76d5c78b1cd0","Type":"ContainerDied","Data":"9015e9cab42dfd44fe7092ceb6eb5f305eac76d9b101c255e55adbb653135a3c"}
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.385229 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6fcd-account-create-update-cht6q"
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.385258 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1794f620-102a-4b9c-9097-713579ec55ad","Type":"ContainerDied","Data":"ec3729f30c53e6fd1d8004de6ecf626045de1ac8f9fb0ede2094e6e4cd1dba8c"}
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.385345 4826 scope.go:117] "RemoveContainer" containerID="f0ac6dfd0c3c53f1c0e6b9f3709b00a0bca023456e5256221df78b7693b4c9bf"
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.387131 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.420626 4826 scope.go:117] "RemoveContainer" containerID="a285c0e82f869c096c5852cbe3ebb71f48bfdd919cd5f2aa2550ecf47c3da59f"
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.471482 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6fcd-account-create-update-cht6q"]
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.472219 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/keystone-86487d6456-mmjgq" podUID="d9016472-5ff0-4849-bc8a-c1d815d27931" containerName="keystone-api" probeResult="failure" output="Get \"https://10.217.0.148:5000/v3\": read tcp 10.217.0.2:41776->10.217.0.148:5000: read: connection reset by peer"
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.484518 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-4zwtm_bbbbec70-be7f-4a31-9f97-76d5c78b1cd0/ovn-controller/0.log"
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.484610 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4zwtm"
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.522664 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-6fcd-account-create-update-cht6q"]
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.546888 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.557701 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.562828 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-ovn-controller-tls-certs\") pod \"bbbbec70-be7f-4a31-9f97-76d5c78b1cd0\" (UID: \"bbbbec70-be7f-4a31-9f97-76d5c78b1cd0\") "
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.562886 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-var-log-ovn\") pod \"bbbbec70-be7f-4a31-9f97-76d5c78b1cd0\" (UID: \"bbbbec70-be7f-4a31-9f97-76d5c78b1cd0\") "
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.562932 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-combined-ca-bundle\") pod \"bbbbec70-be7f-4a31-9f97-76d5c78b1cd0\" (UID: \"bbbbec70-be7f-4a31-9f97-76d5c78b1cd0\") "
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.562987 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-var-run-ovn\") pod \"bbbbec70-be7f-4a31-9f97-76d5c78b1cd0\" (UID: \"bbbbec70-be7f-4a31-9f97-76d5c78b1cd0\") "
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.563047 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4h6z\" (UniqueName: \"kubernetes.io/projected/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-kube-api-access-n4h6z\") pod \"bbbbec70-be7f-4a31-9f97-76d5c78b1cd0\" (UID: \"bbbbec70-be7f-4a31-9f97-76d5c78b1cd0\") "
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.563076 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-var-run\") pod \"bbbbec70-be7f-4a31-9f97-76d5c78b1cd0\" (UID: \"bbbbec70-be7f-4a31-9f97-76d5c78b1cd0\") "
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.563107 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-scripts\") pod \"bbbbec70-be7f-4a31-9f97-76d5c78b1cd0\" (UID: \"bbbbec70-be7f-4a31-9f97-76d5c78b1cd0\") "
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.563471 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmhjd\" (UniqueName: \"kubernetes.io/projected/99eab789-1139-4666-9fb2-dfddd270bbf2-kube-api-access-gmhjd\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.563489 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99eab789-1139-4666-9fb2-dfddd270bbf2-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.564843 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-scripts" (OuterVolumeSpecName: "scripts") pod "bbbbec70-be7f-4a31-9f97-76d5c78b1cd0" (UID: "bbbbec70-be7f-4a31-9f97-76d5c78b1cd0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.564889 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-var-run" (OuterVolumeSpecName: "var-run") pod "bbbbec70-be7f-4a31-9f97-76d5c78b1cd0" (UID: "bbbbec70-be7f-4a31-9f97-76d5c78b1cd0"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.564910 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "bbbbec70-be7f-4a31-9f97-76d5c78b1cd0" (UID: "bbbbec70-be7f-4a31-9f97-76d5c78b1cd0"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.564927 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "bbbbec70-be7f-4a31-9f97-76d5c78b1cd0" (UID: "bbbbec70-be7f-4a31-9f97-76d5c78b1cd0"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.569852 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-kube-api-access-n4h6z" (OuterVolumeSpecName: "kube-api-access-n4h6z") pod "bbbbec70-be7f-4a31-9f97-76d5c78b1cd0" (UID: "bbbbec70-be7f-4a31-9f97-76d5c78b1cd0"). InnerVolumeSpecName "kube-api-access-n4h6z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.583128 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.602691 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bbbbec70-be7f-4a31-9f97-76d5c78b1cd0" (UID: "bbbbec70-be7f-4a31-9f97-76d5c78b1cd0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.667182 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-pod-info\") pod \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") "
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.667336 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-rabbitmq-confd\") pod \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") "
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.667538 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-plugins-conf\") pod \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") "
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.667595 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-rabbitmq-tls\") pod \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") "
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.667684 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-config-data\") pod \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") "
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.667747 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") "
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.667775 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-erlang-cookie-secret\") pod \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") "
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.667811 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ww4xx\" (UniqueName: \"kubernetes.io/projected/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-kube-api-access-ww4xx\") pod \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") "
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.667858 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-rabbitmq-erlang-cookie\") pod \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") "
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.667914 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-rabbitmq-plugins\") pod \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") "
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.667939 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-server-conf\") pod \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\" (UID: \"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c\") "
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.669666 4826 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-var-log-ovn\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.669703 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.669726 4826 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-var-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.669739 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4h6z\" (UniqueName: \"kubernetes.io/projected/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-kube-api-access-n4h6z\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.669750 4826 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-var-run\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.669762 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.671652 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "0da3bc6b-99a0-4de9-9479-5aaef8bfd81c" (UID: "0da3bc6b-99a0-4de9-9479-5aaef8bfd81c"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.674772 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "0da3bc6b-99a0-4de9-9479-5aaef8bfd81c" (UID: "0da3bc6b-99a0-4de9-9479-5aaef8bfd81c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.675391 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "0da3bc6b-99a0-4de9-9479-5aaef8bfd81c" (UID: "0da3bc6b-99a0-4de9-9479-5aaef8bfd81c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.676147 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "0da3bc6b-99a0-4de9-9479-5aaef8bfd81c" (UID: "0da3bc6b-99a0-4de9-9479-5aaef8bfd81c"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.685038 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "0da3bc6b-99a0-4de9-9479-5aaef8bfd81c" (UID: "0da3bc6b-99a0-4de9-9479-5aaef8bfd81c"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.685425 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "persistence") pod "0da3bc6b-99a0-4de9-9479-5aaef8bfd81c" (UID: "0da3bc6b-99a0-4de9-9479-5aaef8bfd81c"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.696326 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-pod-info" (OuterVolumeSpecName: "pod-info") pod "0da3bc6b-99a0-4de9-9479-5aaef8bfd81c" (UID: "0da3bc6b-99a0-4de9-9479-5aaef8bfd81c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.701367 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "bbbbec70-be7f-4a31-9f97-76d5c78b1cd0" (UID: "bbbbec70-be7f-4a31-9f97-76d5c78b1cd0"). InnerVolumeSpecName "ovn-controller-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.714646 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-config-data" (OuterVolumeSpecName: "config-data") pod "0da3bc6b-99a0-4de9-9479-5aaef8bfd81c" (UID: "0da3bc6b-99a0-4de9-9479-5aaef8bfd81c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.715494 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-kube-api-access-ww4xx" (OuterVolumeSpecName: "kube-api-access-ww4xx") pod "0da3bc6b-99a0-4de9-9479-5aaef8bfd81c" (UID: "0da3bc6b-99a0-4de9-9479-5aaef8bfd81c"). InnerVolumeSpecName "kube-api-access-ww4xx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.754407 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-server-conf" (OuterVolumeSpecName: "server-conf") pod "0da3bc6b-99a0-4de9-9479-5aaef8bfd81c" (UID: "0da3bc6b-99a0-4de9-9479-5aaef8bfd81c"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.772248 4826 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" "
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.772283 4826 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.772311 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ww4xx\" (UniqueName: \"kubernetes.io/projected/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-kube-api-access-ww4xx\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.772320 4826 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.772331 4826 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.772339 4826 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-server-conf\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.772346 4826 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-pod-info\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.772354 4826 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.772362 4826 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.772371 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.772380 4826 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.798722 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "0da3bc6b-99a0-4de9-9479-5aaef8bfd81c" (UID: "0da3bc6b-99a0-4de9-9479-5aaef8bfd81c"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.808266 4826 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc"
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.836684 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="082eb821-de0c-462e-9653-b1c80c8e1d2c" path="/var/lib/kubelet/pods/082eb821-de0c-462e-9653-b1c80c8e1d2c/volumes"
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.838643 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1794f620-102a-4b9c-9097-713579ec55ad" path="/var/lib/kubelet/pods/1794f620-102a-4b9c-9097-713579ec55ad/volumes"
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.839324 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1beb9e09-4039-4ce6-a33f-0d34e10b1cfe" path="/var/lib/kubelet/pods/1beb9e09-4039-4ce6-a33f-0d34e10b1cfe/volumes"
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.840868 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="216e612b-abc2-4d7c-8b10-28a595de5302" path="/var/lib/kubelet/pods/216e612b-abc2-4d7c-8b10-28a595de5302/volumes"
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.841620 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3de4a3bc-a01f-424a-8f17-60deaba1f189" path="/var/lib/kubelet/pods/3de4a3bc-a01f-424a-8f17-60deaba1f189/volumes"
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.846388 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="426fe450-4a4d-4048-8ea4-422d39482ceb" path="/var/lib/kubelet/pods/426fe450-4a4d-4048-8ea4-422d39482ceb/volumes"
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.846935 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42903a4e-8bdc-4c7b-bd44-b87199a848e6" path="/var/lib/kubelet/pods/42903a4e-8bdc-4c7b-bd44-b87199a848e6/volumes"
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.847746 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="454b9218-d564-4664-b1dd-4435fa9c60b7" path="/var/lib/kubelet/pods/454b9218-d564-4664-b1dd-4435fa9c60b7/volumes"
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.849823 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5378dab4-ad0c-4259-a7d2-d3f7e784a142" path="/var/lib/kubelet/pods/5378dab4-ad0c-4259-a7d2-d3f7e784a142/volumes"
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.852825 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68d92d9a-3db4-4400-ac87-6334d2be6184" path="/var/lib/kubelet/pods/68d92d9a-3db4-4400-ac87-6334d2be6184/volumes"
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.853132 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7aa6497d-3379-49b2-887e-6ed46928266e" path="/var/lib/kubelet/pods/7aa6497d-3379-49b2-887e-6ed46928266e/volumes"
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.853588 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83c32a1d-d03e-4b49-8ac3-0c447212ce2a" path="/var/lib/kubelet/pods/83c32a1d-d03e-4b49-8ac3-0c447212ce2a/volumes"
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.853946 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9349a8ff-2652-4dcf-89d9-6d440269be8c" path="/var/lib/kubelet/pods/9349a8ff-2652-4dcf-89d9-6d440269be8c/volumes"
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.855452 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="960b6ae0-2577-444e-bc2a-bea4ec2917f9" path="/var/lib/kubelet/pods/960b6ae0-2577-444e-bc2a-bea4ec2917f9/volumes"
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.856433 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99eab789-1139-4666-9fb2-dfddd270bbf2" path="/var/lib/kubelet/pods/99eab789-1139-4666-9fb2-dfddd270bbf2/volumes"
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.858743 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af36d2e1-464b-4ada-9b91-2c18c52502d1" path="/var/lib/kubelet/pods/af36d2e1-464b-4ada-9b91-2c18c52502d1/volumes"
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.859678 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c080b978-6895-4067-9dd5-2c23d4d68518" path="/var/lib/kubelet/pods/c080b978-6895-4067-9dd5-2c23d4d68518/volumes"
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.860999 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca39ae08-94df-4778-8203-bcff5806eff0" path="/var/lib/kubelet/pods/ca39ae08-94df-4778-8203-bcff5806eff0/volumes"
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.861646 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e27383b1-aba6-4c25-9d4b-3b9cceb2b739" path="/var/lib/kubelet/pods/e27383b1-aba6-4c25-9d4b-3b9cceb2b739/volumes"
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.862203 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea529cf3-184e-446a-9c6a-759cf1bab14c" path="/var/lib/kubelet/pods/ea529cf3-184e-446a-9c6a-759cf1bab14c/volumes"
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.872899 4826 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.872923 4826 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.946794 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.973871 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34f59971-b32b-4b19-950c-77af3de22fd6-scripts\") pod \"34f59971-b32b-4b19-950c-77af3de22fd6\" (UID: \"34f59971-b32b-4b19-950c-77af3de22fd6\") "
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.973920 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34f59971-b32b-4b19-950c-77af3de22fd6-combined-ca-bundle\") pod \"34f59971-b32b-4b19-950c-77af3de22fd6\" (UID: \"34f59971-b32b-4b19-950c-77af3de22fd6\") "
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.973980 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34f59971-b32b-4b19-950c-77af3de22fd6-config-data\") pod \"34f59971-b32b-4b19-950c-77af3de22fd6\" (UID: \"34f59971-b32b-4b19-950c-77af3de22fd6\") "
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.974008 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qk5kl\" (UniqueName: \"kubernetes.io/projected/34f59971-b32b-4b19-950c-77af3de22fd6-kube-api-access-qk5kl\") pod \"34f59971-b32b-4b19-950c-77af3de22fd6\" (UID: \"34f59971-b32b-4b19-950c-77af3de22fd6\") "
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.974055 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34f59971-b32b-4b19-950c-77af3de22fd6-run-httpd\") pod \"34f59971-b32b-4b19-950c-77af3de22fd6\" (UID: \"34f59971-b32b-4b19-950c-77af3de22fd6\") "
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.974074 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/34f59971-b32b-4b19-950c-77af3de22fd6-ceilometer-tls-certs\") pod \"34f59971-b32b-4b19-950c-77af3de22fd6\" (UID: \"34f59971-b32b-4b19-950c-77af3de22fd6\") "
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.974091 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/34f59971-b32b-4b19-950c-77af3de22fd6-sg-core-conf-yaml\") pod \"34f59971-b32b-4b19-950c-77af3de22fd6\" (UID: \"34f59971-b32b-4b19-950c-77af3de22fd6\") "
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.974150 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34f59971-b32b-4b19-950c-77af3de22fd6-log-httpd\") pod \"34f59971-b32b-4b19-950c-77af3de22fd6\" (UID: \"34f59971-b32b-4b19-950c-77af3de22fd6\") "
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.975217 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34f59971-b32b-4b19-950c-77af3de22fd6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "34f59971-b32b-4b19-950c-77af3de22fd6" (UID: "34f59971-b32b-4b19-950c-77af3de22fd6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.975947 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34f59971-b32b-4b19-950c-77af3de22fd6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "34f59971-b32b-4b19-950c-77af3de22fd6" (UID: "34f59971-b32b-4b19-950c-77af3de22fd6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.981561 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34f59971-b32b-4b19-950c-77af3de22fd6-scripts" (OuterVolumeSpecName: "scripts") pod "34f59971-b32b-4b19-950c-77af3de22fd6" (UID: "34f59971-b32b-4b19-950c-77af3de22fd6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:06:36 crc kubenswrapper[4826]: I0129 07:06:36.981638 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34f59971-b32b-4b19-950c-77af3de22fd6-kube-api-access-qk5kl" (OuterVolumeSpecName: "kube-api-access-qk5kl") pod "34f59971-b32b-4b19-950c-77af3de22fd6" (UID: "34f59971-b32b-4b19-950c-77af3de22fd6"). InnerVolumeSpecName "kube-api-access-qk5kl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.037283 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34f59971-b32b-4b19-950c-77af3de22fd6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "34f59971-b32b-4b19-950c-77af3de22fd6" (UID: "34f59971-b32b-4b19-950c-77af3de22fd6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.065167 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34f59971-b32b-4b19-950c-77af3de22fd6-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "34f59971-b32b-4b19-950c-77af3de22fd6" (UID: "34f59971-b32b-4b19-950c-77af3de22fd6"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.068004 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34f59971-b32b-4b19-950c-77af3de22fd6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34f59971-b32b-4b19-950c-77af3de22fd6" (UID: "34f59971-b32b-4b19-950c-77af3de22fd6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.078673 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34f59971-b32b-4b19-950c-77af3de22fd6-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.078704 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34f59971-b32b-4b19-950c-77af3de22fd6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.078723 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qk5kl\" (UniqueName: \"kubernetes.io/projected/34f59971-b32b-4b19-950c-77af3de22fd6-kube-api-access-qk5kl\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.078738 4826 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34f59971-b32b-4b19-950c-77af3de22fd6-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.078747 4826 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/34f59971-b32b-4b19-950c-77af3de22fd6-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.078756 4826 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName:
\"kubernetes.io/secret/34f59971-b32b-4b19-950c-77af3de22fd6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.078763 4826 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34f59971-b32b-4b19-950c-77af3de22fd6-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.089656 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_17dd6ec1-84fb-4bb3-8700-c8691f059937/ovn-northd/0.log" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.089762 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.109909 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.152349 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-86487d6456-mmjgq" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.193129 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34f59971-b32b-4b19-950c-77af3de22fd6-config-data" (OuterVolumeSpecName: "config-data") pod "34f59971-b32b-4b19-950c-77af3de22fd6" (UID: "34f59971-b32b-4b19-950c-77af3de22fd6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.283197 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-combined-ca-bundle\") pod \"d9016472-5ff0-4849-bc8a-c1d815d27931\" (UID: \"d9016472-5ff0-4849-bc8a-c1d815d27931\") " Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.283334 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbxlf\" (UniqueName: \"kubernetes.io/projected/17dd6ec1-84fb-4bb3-8700-c8691f059937-kube-api-access-cbxlf\") pod \"17dd6ec1-84fb-4bb3-8700-c8691f059937\" (UID: \"17dd6ec1-84fb-4bb3-8700-c8691f059937\") " Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.283407 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/229ff3bd-fac5-4bb5-ba1e-9e829c30f45b-combined-ca-bundle\") pod \"229ff3bd-fac5-4bb5-ba1e-9e829c30f45b\" (UID: \"229ff3bd-fac5-4bb5-ba1e-9e829c30f45b\") " Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.283437 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/17dd6ec1-84fb-4bb3-8700-c8691f059937-ovn-rundir\") pod \"17dd6ec1-84fb-4bb3-8700-c8691f059937\" (UID: \"17dd6ec1-84fb-4bb3-8700-c8691f059937\") " Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.283474 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-credential-keys\") pod \"d9016472-5ff0-4849-bc8a-c1d815d27931\" (UID: \"d9016472-5ff0-4849-bc8a-c1d815d27931\") " Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.283515 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-config-data\") pod \"d9016472-5ff0-4849-bc8a-c1d815d27931\" (UID: \"d9016472-5ff0-4849-bc8a-c1d815d27931\") " Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.283550 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-internal-tls-certs\") pod \"d9016472-5ff0-4849-bc8a-c1d815d27931\" (UID: \"d9016472-5ff0-4849-bc8a-c1d815d27931\") " Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.283573 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17dd6ec1-84fb-4bb3-8700-c8691f059937-combined-ca-bundle\") pod \"17dd6ec1-84fb-4bb3-8700-c8691f059937\" (UID: \"17dd6ec1-84fb-4bb3-8700-c8691f059937\") " Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.283594 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-scripts\") pod \"d9016472-5ff0-4849-bc8a-c1d815d27931\" (UID: \"d9016472-5ff0-4849-bc8a-c1d815d27931\") " Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.283630 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zz7sh\" (UniqueName: \"kubernetes.io/projected/229ff3bd-fac5-4bb5-ba1e-9e829c30f45b-kube-api-access-zz7sh\") pod \"229ff3bd-fac5-4bb5-ba1e-9e829c30f45b\" (UID: \"229ff3bd-fac5-4bb5-ba1e-9e829c30f45b\") " Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.283655 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17dd6ec1-84fb-4bb3-8700-c8691f059937-scripts\") pod \"17dd6ec1-84fb-4bb3-8700-c8691f059937\" (UID: \"17dd6ec1-84fb-4bb3-8700-c8691f059937\") " Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 
07:06:37.283670 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17dd6ec1-84fb-4bb3-8700-c8691f059937-config\") pod \"17dd6ec1-84fb-4bb3-8700-c8691f059937\" (UID: \"17dd6ec1-84fb-4bb3-8700-c8691f059937\") " Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.283690 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98tzx\" (UniqueName: \"kubernetes.io/projected/d9016472-5ff0-4849-bc8a-c1d815d27931-kube-api-access-98tzx\") pod \"d9016472-5ff0-4849-bc8a-c1d815d27931\" (UID: \"d9016472-5ff0-4849-bc8a-c1d815d27931\") " Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.283715 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/17dd6ec1-84fb-4bb3-8700-c8691f059937-metrics-certs-tls-certs\") pod \"17dd6ec1-84fb-4bb3-8700-c8691f059937\" (UID: \"17dd6ec1-84fb-4bb3-8700-c8691f059937\") " Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.283732 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-fernet-keys\") pod \"d9016472-5ff0-4849-bc8a-c1d815d27931\" (UID: \"d9016472-5ff0-4849-bc8a-c1d815d27931\") " Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.283764 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-public-tls-certs\") pod \"d9016472-5ff0-4849-bc8a-c1d815d27931\" (UID: \"d9016472-5ff0-4849-bc8a-c1d815d27931\") " Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.283780 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/17dd6ec1-84fb-4bb3-8700-c8691f059937-ovn-northd-tls-certs\") pod 
\"17dd6ec1-84fb-4bb3-8700-c8691f059937\" (UID: \"17dd6ec1-84fb-4bb3-8700-c8691f059937\") " Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.283798 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/229ff3bd-fac5-4bb5-ba1e-9e829c30f45b-config-data\") pod \"229ff3bd-fac5-4bb5-ba1e-9e829c30f45b\" (UID: \"229ff3bd-fac5-4bb5-ba1e-9e829c30f45b\") " Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.284250 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34f59971-b32b-4b19-950c-77af3de22fd6-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.285906 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17dd6ec1-84fb-4bb3-8700-c8691f059937-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "17dd6ec1-84fb-4bb3-8700-c8691f059937" (UID: "17dd6ec1-84fb-4bb3-8700-c8691f059937"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.288694 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-scripts" (OuterVolumeSpecName: "scripts") pod "d9016472-5ff0-4849-bc8a-c1d815d27931" (UID: "d9016472-5ff0-4849-bc8a-c1d815d27931"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.289527 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17dd6ec1-84fb-4bb3-8700-c8691f059937-scripts" (OuterVolumeSpecName: "scripts") pod "17dd6ec1-84fb-4bb3-8700-c8691f059937" (UID: "17dd6ec1-84fb-4bb3-8700-c8691f059937"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.297126 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d9016472-5ff0-4849-bc8a-c1d815d27931" (UID: "d9016472-5ff0-4849-bc8a-c1d815d27931"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.297621 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17dd6ec1-84fb-4bb3-8700-c8691f059937-config" (OuterVolumeSpecName: "config") pod "17dd6ec1-84fb-4bb3-8700-c8691f059937" (UID: "17dd6ec1-84fb-4bb3-8700-c8691f059937"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.298702 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9016472-5ff0-4849-bc8a-c1d815d27931-kube-api-access-98tzx" (OuterVolumeSpecName: "kube-api-access-98tzx") pod "d9016472-5ff0-4849-bc8a-c1d815d27931" (UID: "d9016472-5ff0-4849-bc8a-c1d815d27931"). InnerVolumeSpecName "kube-api-access-98tzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.298775 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17dd6ec1-84fb-4bb3-8700-c8691f059937-kube-api-access-cbxlf" (OuterVolumeSpecName: "kube-api-access-cbxlf") pod "17dd6ec1-84fb-4bb3-8700-c8691f059937" (UID: "17dd6ec1-84fb-4bb3-8700-c8691f059937"). InnerVolumeSpecName "kube-api-access-cbxlf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.299928 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/229ff3bd-fac5-4bb5-ba1e-9e829c30f45b-kube-api-access-zz7sh" (OuterVolumeSpecName: "kube-api-access-zz7sh") pod "229ff3bd-fac5-4bb5-ba1e-9e829c30f45b" (UID: "229ff3bd-fac5-4bb5-ba1e-9e829c30f45b"). InnerVolumeSpecName "kube-api-access-zz7sh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.302939 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d9016472-5ff0-4849-bc8a-c1d815d27931" (UID: "d9016472-5ff0-4849-bc8a-c1d815d27931"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.329359 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/229ff3bd-fac5-4bb5-ba1e-9e829c30f45b-config-data" (OuterVolumeSpecName: "config-data") pod "229ff3bd-fac5-4bb5-ba1e-9e829c30f45b" (UID: "229ff3bd-fac5-4bb5-ba1e-9e829c30f45b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.340519 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d9016472-5ff0-4849-bc8a-c1d815d27931" (UID: "d9016472-5ff0-4849-bc8a-c1d815d27931"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.344747 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-config-data" (OuterVolumeSpecName: "config-data") pod "d9016472-5ff0-4849-bc8a-c1d815d27931" (UID: "d9016472-5ff0-4849-bc8a-c1d815d27931"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.353547 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/229ff3bd-fac5-4bb5-ba1e-9e829c30f45b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "229ff3bd-fac5-4bb5-ba1e-9e829c30f45b" (UID: "229ff3bd-fac5-4bb5-ba1e-9e829c30f45b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.354798 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17dd6ec1-84fb-4bb3-8700-c8691f059937-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "17dd6ec1-84fb-4bb3-8700-c8691f059937" (UID: "17dd6ec1-84fb-4bb3-8700-c8691f059937"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.369228 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d9016472-5ff0-4849-bc8a-c1d815d27931" (UID: "d9016472-5ff0-4849-bc8a-c1d815d27931"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.372474 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d9016472-5ff0-4849-bc8a-c1d815d27931" (UID: "d9016472-5ff0-4849-bc8a-c1d815d27931"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.387483 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17dd6ec1-84fb-4bb3-8700-c8691f059937-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "17dd6ec1-84fb-4bb3-8700-c8691f059937" (UID: "17dd6ec1-84fb-4bb3-8700-c8691f059937"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.387644 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17dd6ec1-84fb-4bb3-8700-c8691f059937-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.387658 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17dd6ec1-84fb-4bb3-8700-c8691f059937-config\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.387667 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98tzx\" (UniqueName: \"kubernetes.io/projected/d9016472-5ff0-4849-bc8a-c1d815d27931-kube-api-access-98tzx\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.387676 4826 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/17dd6ec1-84fb-4bb3-8700-c8691f059937-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.387685 4826 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.387696 4826 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.387704 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/229ff3bd-fac5-4bb5-ba1e-9e829c30f45b-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.387711 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.387719 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbxlf\" (UniqueName: \"kubernetes.io/projected/17dd6ec1-84fb-4bb3-8700-c8691f059937-kube-api-access-cbxlf\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.387727 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/229ff3bd-fac5-4bb5-ba1e-9e829c30f45b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.387735 4826 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/17dd6ec1-84fb-4bb3-8700-c8691f059937-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 
29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.387743 4826 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.387751 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.387759 4826 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.387768 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17dd6ec1-84fb-4bb3-8700-c8691f059937-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.387776 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9016472-5ff0-4849-bc8a-c1d815d27931-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.387783 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zz7sh\" (UniqueName: \"kubernetes.io/projected/229ff3bd-fac5-4bb5-ba1e-9e829c30f45b-kube-api-access-zz7sh\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.388451 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17dd6ec1-84fb-4bb3-8700-c8691f059937-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "17dd6ec1-84fb-4bb3-8700-c8691f059937" (UID: "17dd6ec1-84fb-4bb3-8700-c8691f059937"). 
InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.395975 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_17dd6ec1-84fb-4bb3-8700-c8691f059937/ovn-northd/0.log" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.396210 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"17dd6ec1-84fb-4bb3-8700-c8691f059937","Type":"ContainerDied","Data":"1095560d66bb32e4b21ddb8c2db45cc2e6b9c9229ce2399fd7ef5d5440cc36be"} Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.396246 4826 scope.go:117] "RemoveContainer" containerID="2231f0fb6e7d9114a6e1ce628c3f775fddfa8bbe31c0d18c4ebf95440d9a1023" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.396477 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.408227 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.408329 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0da3bc6b-99a0-4de9-9479-5aaef8bfd81c","Type":"ContainerDied","Data":"d7b05a926283f388e3d5789ea2a0b8a4cd6c939d6df91d3d288ef854c9ded18e"} Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.411586 4826 generic.go:334] "Generic (PLEG): container finished" podID="229ff3bd-fac5-4bb5-ba1e-9e829c30f45b" containerID="50af99804abe9cf6e6f83b558ffc855d23611fff8a7850a26d5eccd9f4c9d2b2" exitCode=0 Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.411661 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"229ff3bd-fac5-4bb5-ba1e-9e829c30f45b","Type":"ContainerDied","Data":"50af99804abe9cf6e6f83b558ffc855d23611fff8a7850a26d5eccd9f4c9d2b2"} Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.411691 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"229ff3bd-fac5-4bb5-ba1e-9e829c30f45b","Type":"ContainerDied","Data":"7b42d0f18b68219a5e0a2872c3674527f3e5e36f99156a6a740edeb9484f5f8e"} Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.411754 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.421030 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-4zwtm_bbbbec70-be7f-4a31-9f97-76d5c78b1cd0/ovn-controller/0.log" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.421486 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4zwtm" event={"ID":"bbbbec70-be7f-4a31-9f97-76d5c78b1cd0","Type":"ContainerDied","Data":"0e2b877e63b4206a7b3f9783b41ecf6a4a7a1003aa85c4707b7aed7613891ad7"} Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.421968 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4zwtm" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.426604 4826 generic.go:334] "Generic (PLEG): container finished" podID="d9016472-5ff0-4849-bc8a-c1d815d27931" containerID="8a6671e350fa8c53b6824335f80df2067d73193d492de3abd1a20ab459f41264" exitCode=0 Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.426656 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-86487d6456-mmjgq" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.426650 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-86487d6456-mmjgq" event={"ID":"d9016472-5ff0-4849-bc8a-c1d815d27931","Type":"ContainerDied","Data":"8a6671e350fa8c53b6824335f80df2067d73193d492de3abd1a20ab459f41264"} Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.426712 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-86487d6456-mmjgq" event={"ID":"d9016472-5ff0-4849-bc8a-c1d815d27931","Type":"ContainerDied","Data":"efb8fd134a87551d54e3fa93081ddbe23f586ddcad219eda821f4bc217938f12"} Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.434782 4826 scope.go:117] "RemoveContainer" containerID="7ba9ac04e0850886e890e608a55f11c50e9ca3d5994419279b8b8fc19be3fbd4" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.455537 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.460708 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.489341 4826 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/17dd6ec1-84fb-4bb3-8700-c8691f059937-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.494973 4826 scope.go:117] "RemoveContainer" containerID="561f44049eef8bcf9743aabf5fda4a13b2156ef6047f470fc4f0c9a570583cb1" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.506210 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34f59971-b32b-4b19-950c-77af3de22fd6","Type":"ContainerDied","Data":"074537ed7501f1b8046e37ea9a5ed2b8b7b61370a032b224f0056899cf352002"} Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.506316 4826 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.521750 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.539708 4826 scope.go:117] "RemoveContainer" containerID="a618aff47b6a4f080c12ed436cdd2e152b8a7acc4f88c6d531c82c61bbd02d8c" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.543774 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.563565 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.570442 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.572855 4826 scope.go:117] "RemoveContainer" containerID="50af99804abe9cf6e6f83b558ffc855d23611fff8a7850a26d5eccd9f4c9d2b2" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.576786 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-4zwtm"] Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.584281 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-4zwtm"] Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.590406 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-86487d6456-mmjgq"] Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.594595 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-86487d6456-mmjgq"] Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.600223 4826 scope.go:117] "RemoveContainer" containerID="50af99804abe9cf6e6f83b558ffc855d23611fff8a7850a26d5eccd9f4c9d2b2" Jan 29 07:06:37 crc kubenswrapper[4826]: E0129 07:06:37.600758 4826 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50af99804abe9cf6e6f83b558ffc855d23611fff8a7850a26d5eccd9f4c9d2b2\": container with ID starting with 50af99804abe9cf6e6f83b558ffc855d23611fff8a7850a26d5eccd9f4c9d2b2 not found: ID does not exist" containerID="50af99804abe9cf6e6f83b558ffc855d23611fff8a7850a26d5eccd9f4c9d2b2" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.600798 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50af99804abe9cf6e6f83b558ffc855d23611fff8a7850a26d5eccd9f4c9d2b2"} err="failed to get container status \"50af99804abe9cf6e6f83b558ffc855d23611fff8a7850a26d5eccd9f4c9d2b2\": rpc error: code = NotFound desc = could not find container \"50af99804abe9cf6e6f83b558ffc855d23611fff8a7850a26d5eccd9f4c9d2b2\": container with ID starting with 50af99804abe9cf6e6f83b558ffc855d23611fff8a7850a26d5eccd9f4c9d2b2 not found: ID does not exist" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.600827 4826 scope.go:117] "RemoveContainer" containerID="9015e9cab42dfd44fe7092ceb6eb5f305eac76d9b101c255e55adbb653135a3c" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.603956 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.610091 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.710504 4826 scope.go:117] "RemoveContainer" containerID="8a6671e350fa8c53b6824335f80df2067d73193d492de3abd1a20ab459f41264" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.730634 4826 scope.go:117] "RemoveContainer" containerID="8a6671e350fa8c53b6824335f80df2067d73193d492de3abd1a20ab459f41264" Jan 29 07:06:37 crc kubenswrapper[4826]: E0129 07:06:37.731283 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8a6671e350fa8c53b6824335f80df2067d73193d492de3abd1a20ab459f41264\": container with ID starting with 8a6671e350fa8c53b6824335f80df2067d73193d492de3abd1a20ab459f41264 not found: ID does not exist" containerID="8a6671e350fa8c53b6824335f80df2067d73193d492de3abd1a20ab459f41264" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.731343 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a6671e350fa8c53b6824335f80df2067d73193d492de3abd1a20ab459f41264"} err="failed to get container status \"8a6671e350fa8c53b6824335f80df2067d73193d492de3abd1a20ab459f41264\": rpc error: code = NotFound desc = could not find container \"8a6671e350fa8c53b6824335f80df2067d73193d492de3abd1a20ab459f41264\": container with ID starting with 8a6671e350fa8c53b6824335f80df2067d73193d492de3abd1a20ab459f41264 not found: ID does not exist" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.731362 4826 scope.go:117] "RemoveContainer" containerID="b9493f3c4087383b633335c756de30375c0411684ed837b040dded2644f790f2" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.749787 4826 scope.go:117] "RemoveContainer" containerID="196b723afa334558daf445ce5209a0b55ad9e7a17b9e293e6f9442cc86628664" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.766335 4826 scope.go:117] "RemoveContainer" containerID="212d41639781efa740d28e2f69b9a84d9805cc97ad8560cd6ff518592748ede0" Jan 29 07:06:37 crc kubenswrapper[4826]: I0129 07:06:37.784068 4826 scope.go:117] "RemoveContainer" containerID="7c7dba5d83accbaa998425c9ed45b4f96e2e00553764c182bf92c212452b3aee" Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.495089 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.522122 4826 generic.go:334] "Generic (PLEG): container finished" podID="426997bd-6ba1-4ebb-b8d3-08be081add91" containerID="efa12df27e8f1e42dacfe2b602d83d6ae2370e8bb4cb8ce8ab4abb28c9e997af" exitCode=0 Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.522268 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.522349 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"426997bd-6ba1-4ebb-b8d3-08be081add91","Type":"ContainerDied","Data":"efa12df27e8f1e42dacfe2b602d83d6ae2370e8bb4cb8ce8ab4abb28c9e997af"} Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.522378 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"426997bd-6ba1-4ebb-b8d3-08be081add91","Type":"ContainerDied","Data":"ef305694fe6ce4067b3cc218917f89bee5278082ec7690a8c116f633d18ceb35"} Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.522461 4826 scope.go:117] "RemoveContainer" containerID="efa12df27e8f1e42dacfe2b602d83d6ae2370e8bb4cb8ce8ab4abb28c9e997af" Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.557627 4826 scope.go:117] "RemoveContainer" containerID="5507b9d03083bbcbf3f503fcf50eaf096b7739e47fb5b4546534bf1849f59544" Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.587529 4826 scope.go:117] "RemoveContainer" containerID="efa12df27e8f1e42dacfe2b602d83d6ae2370e8bb4cb8ce8ab4abb28c9e997af" Jan 29 07:06:38 crc kubenswrapper[4826]: E0129 07:06:38.587886 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efa12df27e8f1e42dacfe2b602d83d6ae2370e8bb4cb8ce8ab4abb28c9e997af\": container with ID starting with efa12df27e8f1e42dacfe2b602d83d6ae2370e8bb4cb8ce8ab4abb28c9e997af not 
found: ID does not exist" containerID="efa12df27e8f1e42dacfe2b602d83d6ae2370e8bb4cb8ce8ab4abb28c9e997af" Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.587935 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efa12df27e8f1e42dacfe2b602d83d6ae2370e8bb4cb8ce8ab4abb28c9e997af"} err="failed to get container status \"efa12df27e8f1e42dacfe2b602d83d6ae2370e8bb4cb8ce8ab4abb28c9e997af\": rpc error: code = NotFound desc = could not find container \"efa12df27e8f1e42dacfe2b602d83d6ae2370e8bb4cb8ce8ab4abb28c9e997af\": container with ID starting with efa12df27e8f1e42dacfe2b602d83d6ae2370e8bb4cb8ce8ab4abb28c9e997af not found: ID does not exist" Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.587962 4826 scope.go:117] "RemoveContainer" containerID="5507b9d03083bbcbf3f503fcf50eaf096b7739e47fb5b4546534bf1849f59544" Jan 29 07:06:38 crc kubenswrapper[4826]: E0129 07:06:38.588173 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5507b9d03083bbcbf3f503fcf50eaf096b7739e47fb5b4546534bf1849f59544\": container with ID starting with 5507b9d03083bbcbf3f503fcf50eaf096b7739e47fb5b4546534bf1849f59544 not found: ID does not exist" containerID="5507b9d03083bbcbf3f503fcf50eaf096b7739e47fb5b4546534bf1849f59544" Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.588196 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5507b9d03083bbcbf3f503fcf50eaf096b7739e47fb5b4546534bf1849f59544"} err="failed to get container status \"5507b9d03083bbcbf3f503fcf50eaf096b7739e47fb5b4546534bf1849f59544\": rpc error: code = NotFound desc = could not find container \"5507b9d03083bbcbf3f503fcf50eaf096b7739e47fb5b4546534bf1849f59544\": container with ID starting with 5507b9d03083bbcbf3f503fcf50eaf096b7739e47fb5b4546534bf1849f59544 not found: ID does not exist" Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.603522 
4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/426997bd-6ba1-4ebb-b8d3-08be081add91-combined-ca-bundle\") pod \"426997bd-6ba1-4ebb-b8d3-08be081add91\" (UID: \"426997bd-6ba1-4ebb-b8d3-08be081add91\") " Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.603566 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/426997bd-6ba1-4ebb-b8d3-08be081add91-operator-scripts\") pod \"426997bd-6ba1-4ebb-b8d3-08be081add91\" (UID: \"426997bd-6ba1-4ebb-b8d3-08be081add91\") " Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.603599 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/426997bd-6ba1-4ebb-b8d3-08be081add91-config-data-default\") pod \"426997bd-6ba1-4ebb-b8d3-08be081add91\" (UID: \"426997bd-6ba1-4ebb-b8d3-08be081add91\") " Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.603651 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sqd5\" (UniqueName: \"kubernetes.io/projected/426997bd-6ba1-4ebb-b8d3-08be081add91-kube-api-access-9sqd5\") pod \"426997bd-6ba1-4ebb-b8d3-08be081add91\" (UID: \"426997bd-6ba1-4ebb-b8d3-08be081add91\") " Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.603704 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"426997bd-6ba1-4ebb-b8d3-08be081add91\" (UID: \"426997bd-6ba1-4ebb-b8d3-08be081add91\") " Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.603729 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/426997bd-6ba1-4ebb-b8d3-08be081add91-galera-tls-certs\") pod 
\"426997bd-6ba1-4ebb-b8d3-08be081add91\" (UID: \"426997bd-6ba1-4ebb-b8d3-08be081add91\") " Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.603746 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/426997bd-6ba1-4ebb-b8d3-08be081add91-config-data-generated\") pod \"426997bd-6ba1-4ebb-b8d3-08be081add91\" (UID: \"426997bd-6ba1-4ebb-b8d3-08be081add91\") " Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.603767 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/426997bd-6ba1-4ebb-b8d3-08be081add91-kolla-config\") pod \"426997bd-6ba1-4ebb-b8d3-08be081add91\" (UID: \"426997bd-6ba1-4ebb-b8d3-08be081add91\") " Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.604880 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/426997bd-6ba1-4ebb-b8d3-08be081add91-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "426997bd-6ba1-4ebb-b8d3-08be081add91" (UID: "426997bd-6ba1-4ebb-b8d3-08be081add91"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.605674 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/426997bd-6ba1-4ebb-b8d3-08be081add91-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "426997bd-6ba1-4ebb-b8d3-08be081add91" (UID: "426997bd-6ba1-4ebb-b8d3-08be081add91"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.607738 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/426997bd-6ba1-4ebb-b8d3-08be081add91-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "426997bd-6ba1-4ebb-b8d3-08be081add91" (UID: "426997bd-6ba1-4ebb-b8d3-08be081add91"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.613501 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/426997bd-6ba1-4ebb-b8d3-08be081add91-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "426997bd-6ba1-4ebb-b8d3-08be081add91" (UID: "426997bd-6ba1-4ebb-b8d3-08be081add91"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.614620 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/426997bd-6ba1-4ebb-b8d3-08be081add91-kube-api-access-9sqd5" (OuterVolumeSpecName: "kube-api-access-9sqd5") pod "426997bd-6ba1-4ebb-b8d3-08be081add91" (UID: "426997bd-6ba1-4ebb-b8d3-08be081add91"). InnerVolumeSpecName "kube-api-access-9sqd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.647565 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "mysql-db") pod "426997bd-6ba1-4ebb-b8d3-08be081add91" (UID: "426997bd-6ba1-4ebb-b8d3-08be081add91"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.661476 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/426997bd-6ba1-4ebb-b8d3-08be081add91-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "426997bd-6ba1-4ebb-b8d3-08be081add91" (UID: "426997bd-6ba1-4ebb-b8d3-08be081add91"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.673555 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/426997bd-6ba1-4ebb-b8d3-08be081add91-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "426997bd-6ba1-4ebb-b8d3-08be081add91" (UID: "426997bd-6ba1-4ebb-b8d3-08be081add91"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.706230 4826 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/426997bd-6ba1-4ebb-b8d3-08be081add91-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.706272 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/426997bd-6ba1-4ebb-b8d3-08be081add91-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.706289 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/426997bd-6ba1-4ebb-b8d3-08be081add91-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.706312 4826 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/426997bd-6ba1-4ebb-b8d3-08be081add91-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.706323 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sqd5\" (UniqueName: \"kubernetes.io/projected/426997bd-6ba1-4ebb-b8d3-08be081add91-kube-api-access-9sqd5\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.706364 4826 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.706375 4826 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/426997bd-6ba1-4ebb-b8d3-08be081add91-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.706385 4826 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/426997bd-6ba1-4ebb-b8d3-08be081add91-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.742813 4826 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.807657 4826 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.816660 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0da3bc6b-99a0-4de9-9479-5aaef8bfd81c" path="/var/lib/kubelet/pods/0da3bc6b-99a0-4de9-9479-5aaef8bfd81c/volumes" Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.817314 4826 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17dd6ec1-84fb-4bb3-8700-c8691f059937" path="/var/lib/kubelet/pods/17dd6ec1-84fb-4bb3-8700-c8691f059937/volumes" Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.817820 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="229ff3bd-fac5-4bb5-ba1e-9e829c30f45b" path="/var/lib/kubelet/pods/229ff3bd-fac5-4bb5-ba1e-9e829c30f45b/volumes" Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.818854 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34f59971-b32b-4b19-950c-77af3de22fd6" path="/var/lib/kubelet/pods/34f59971-b32b-4b19-950c-77af3de22fd6/volumes" Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.819549 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbbbec70-be7f-4a31-9f97-76d5c78b1cd0" path="/var/lib/kubelet/pods/bbbbec70-be7f-4a31-9f97-76d5c78b1cd0/volumes" Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.820146 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9016472-5ff0-4849-bc8a-c1d815d27931" path="/var/lib/kubelet/pods/d9016472-5ff0-4849-bc8a-c1d815d27931/volumes" Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.878515 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 29 07:06:38 crc kubenswrapper[4826]: I0129 07:06:38.891435 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Jan 29 07:06:39 crc kubenswrapper[4826]: E0129 07:06:39.753404 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6 is running failed: container process not found" containerID="55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 07:06:39 crc 
kubenswrapper[4826]: E0129 07:06:39.754327 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6 is running failed: container process not found" containerID="55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 07:06:39 crc kubenswrapper[4826]: E0129 07:06:39.754576 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7279ce2f370fc8321c9e51d6b7f683eb1d9bd5181f88ceec0fa850371165ae08" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 07:06:39 crc kubenswrapper[4826]: E0129 07:06:39.757727 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6 is running failed: container process not found" containerID="55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 07:06:39 crc kubenswrapper[4826]: E0129 07:06:39.757803 4826 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-m2d2v" podUID="2d13cc8c-363d-4dcb-af5f-92318cf72a81" containerName="ovsdb-server" Jan 29 07:06:39 crc kubenswrapper[4826]: E0129 07:06:39.760125 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is 
stopping, stdout: , stderr: , exit code -1" containerID="7279ce2f370fc8321c9e51d6b7f683eb1d9bd5181f88ceec0fa850371165ae08" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 07:06:39 crc kubenswrapper[4826]: E0129 07:06:39.761452 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7279ce2f370fc8321c9e51d6b7f683eb1d9bd5181f88ceec0fa850371165ae08" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 07:06:39 crc kubenswrapper[4826]: E0129 07:06:39.761513 4826 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-m2d2v" podUID="2d13cc8c-363d-4dcb-af5f-92318cf72a81" containerName="ovs-vswitchd" Jan 29 07:06:40 crc kubenswrapper[4826]: I0129 07:06:40.820784 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="426997bd-6ba1-4ebb-b8d3-08be081add91" path="/var/lib/kubelet/pods/426997bd-6ba1-4ebb-b8d3-08be081add91/volumes" Jan 29 07:06:41 crc kubenswrapper[4826]: I0129 07:06:41.066708 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jrjtp" Jan 29 07:06:41 crc kubenswrapper[4826]: I0129 07:06:41.128127 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jrjtp" Jan 29 07:06:41 crc kubenswrapper[4826]: I0129 07:06:41.309080 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jrjtp"] Jan 29 07:06:42 crc kubenswrapper[4826]: I0129 07:06:42.581417 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jrjtp" podUID="bf226f75-106f-4f53-b33b-59f9ebbbefc3" 
containerName="registry-server" containerID="cri-o://add0379d6310866f8debdb534faa7bf14685002e4ca0c63cd2f7b82d237d1c35" gracePeriod=2 Jan 29 07:06:43 crc kubenswrapper[4826]: I0129 07:06:43.591478 4826 generic.go:334] "Generic (PLEG): container finished" podID="bf226f75-106f-4f53-b33b-59f9ebbbefc3" containerID="add0379d6310866f8debdb534faa7bf14685002e4ca0c63cd2f7b82d237d1c35" exitCode=0 Jan 29 07:06:43 crc kubenswrapper[4826]: I0129 07:06:43.591695 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrjtp" event={"ID":"bf226f75-106f-4f53-b33b-59f9ebbbefc3","Type":"ContainerDied","Data":"add0379d6310866f8debdb534faa7bf14685002e4ca0c63cd2f7b82d237d1c35"} Jan 29 07:06:43 crc kubenswrapper[4826]: I0129 07:06:43.591845 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrjtp" event={"ID":"bf226f75-106f-4f53-b33b-59f9ebbbefc3","Type":"ContainerDied","Data":"cfaadf49ccb5cee55cd87bffa659423f7d3f0d5b098726a989003d913660c54c"} Jan 29 07:06:43 crc kubenswrapper[4826]: I0129 07:06:43.591867 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfaadf49ccb5cee55cd87bffa659423f7d3f0d5b098726a989003d913660c54c" Jan 29 07:06:43 crc kubenswrapper[4826]: I0129 07:06:43.605768 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jrjtp" Jan 29 07:06:43 crc kubenswrapper[4826]: I0129 07:06:43.705773 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf226f75-106f-4f53-b33b-59f9ebbbefc3-utilities\") pod \"bf226f75-106f-4f53-b33b-59f9ebbbefc3\" (UID: \"bf226f75-106f-4f53-b33b-59f9ebbbefc3\") " Jan 29 07:06:43 crc kubenswrapper[4826]: I0129 07:06:43.705924 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf226f75-106f-4f53-b33b-59f9ebbbefc3-catalog-content\") pod \"bf226f75-106f-4f53-b33b-59f9ebbbefc3\" (UID: \"bf226f75-106f-4f53-b33b-59f9ebbbefc3\") " Jan 29 07:06:43 crc kubenswrapper[4826]: I0129 07:06:43.705959 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rb8zm\" (UniqueName: \"kubernetes.io/projected/bf226f75-106f-4f53-b33b-59f9ebbbefc3-kube-api-access-rb8zm\") pod \"bf226f75-106f-4f53-b33b-59f9ebbbefc3\" (UID: \"bf226f75-106f-4f53-b33b-59f9ebbbefc3\") " Jan 29 07:06:43 crc kubenswrapper[4826]: I0129 07:06:43.706839 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf226f75-106f-4f53-b33b-59f9ebbbefc3-utilities" (OuterVolumeSpecName: "utilities") pod "bf226f75-106f-4f53-b33b-59f9ebbbefc3" (UID: "bf226f75-106f-4f53-b33b-59f9ebbbefc3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:06:43 crc kubenswrapper[4826]: I0129 07:06:43.715566 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf226f75-106f-4f53-b33b-59f9ebbbefc3-kube-api-access-rb8zm" (OuterVolumeSpecName: "kube-api-access-rb8zm") pod "bf226f75-106f-4f53-b33b-59f9ebbbefc3" (UID: "bf226f75-106f-4f53-b33b-59f9ebbbefc3"). InnerVolumeSpecName "kube-api-access-rb8zm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:06:43 crc kubenswrapper[4826]: I0129 07:06:43.807522 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf226f75-106f-4f53-b33b-59f9ebbbefc3-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:43 crc kubenswrapper[4826]: I0129 07:06:43.807551 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rb8zm\" (UniqueName: \"kubernetes.io/projected/bf226f75-106f-4f53-b33b-59f9ebbbefc3-kube-api-access-rb8zm\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:43 crc kubenswrapper[4826]: I0129 07:06:43.859210 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf226f75-106f-4f53-b33b-59f9ebbbefc3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bf226f75-106f-4f53-b33b-59f9ebbbefc3" (UID: "bf226f75-106f-4f53-b33b-59f9ebbbefc3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:06:43 crc kubenswrapper[4826]: I0129 07:06:43.909420 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf226f75-106f-4f53-b33b-59f9ebbbefc3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 07:06:44 crc kubenswrapper[4826]: I0129 07:06:44.601951 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jrjtp" Jan 29 07:06:44 crc kubenswrapper[4826]: I0129 07:06:44.659040 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jrjtp"] Jan 29 07:06:44 crc kubenswrapper[4826]: I0129 07:06:44.668193 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jrjtp"] Jan 29 07:06:44 crc kubenswrapper[4826]: E0129 07:06:44.752547 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6 is running failed: container process not found" containerID="55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 07:06:44 crc kubenswrapper[4826]: E0129 07:06:44.752990 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6 is running failed: container process not found" containerID="55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 07:06:44 crc kubenswrapper[4826]: E0129 07:06:44.753316 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6 is running failed: container process not found" containerID="55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 07:06:44 crc kubenswrapper[4826]: E0129 07:06:44.753352 4826 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not 
created or running: checking if PID of 55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-m2d2v" podUID="2d13cc8c-363d-4dcb-af5f-92318cf72a81" containerName="ovsdb-server" Jan 29 07:06:44 crc kubenswrapper[4826]: E0129 07:06:44.754787 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7279ce2f370fc8321c9e51d6b7f683eb1d9bd5181f88ceec0fa850371165ae08" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 07:06:44 crc kubenswrapper[4826]: E0129 07:06:44.757123 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7279ce2f370fc8321c9e51d6b7f683eb1d9bd5181f88ceec0fa850371165ae08" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 07:06:44 crc kubenswrapper[4826]: E0129 07:06:44.759189 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7279ce2f370fc8321c9e51d6b7f683eb1d9bd5181f88ceec0fa850371165ae08" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 07:06:44 crc kubenswrapper[4826]: E0129 07:06:44.759349 4826 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-m2d2v" podUID="2d13cc8c-363d-4dcb-af5f-92318cf72a81" containerName="ovs-vswitchd" Jan 29 07:06:44 crc kubenswrapper[4826]: I0129 07:06:44.819827 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="bf226f75-106f-4f53-b33b-59f9ebbbefc3" path="/var/lib/kubelet/pods/bf226f75-106f-4f53-b33b-59f9ebbbefc3/volumes"
Jan 29 07:06:45 crc kubenswrapper[4826]: I0129 07:06:45.947053 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-669bb5748f-zjsxt" podUID="258e4d75-ecca-4001-9f56-aeb39557b326" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.158:9696/\": dial tcp 10.217.0.158:9696: connect: connection refused"
Jan 29 07:06:49 crc kubenswrapper[4826]: E0129 07:06:49.753365 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6 is running failed: container process not found" containerID="55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 29 07:06:49 crc kubenswrapper[4826]: E0129 07:06:49.753756 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6 is running failed: container process not found" containerID="55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 29 07:06:49 crc kubenswrapper[4826]: E0129 07:06:49.754181 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6 is running failed: container process not found" containerID="55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 29 07:06:49 crc kubenswrapper[4826]: E0129 07:06:49.754244 4826 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-m2d2v" podUID="2d13cc8c-363d-4dcb-af5f-92318cf72a81" containerName="ovsdb-server"
Jan 29 07:06:49 crc kubenswrapper[4826]: E0129 07:06:49.756658 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7279ce2f370fc8321c9e51d6b7f683eb1d9bd5181f88ceec0fa850371165ae08" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 29 07:06:49 crc kubenswrapper[4826]: E0129 07:06:49.758599 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7279ce2f370fc8321c9e51d6b7f683eb1d9bd5181f88ceec0fa850371165ae08" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 29 07:06:49 crc kubenswrapper[4826]: E0129 07:06:49.759881 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7279ce2f370fc8321c9e51d6b7f683eb1d9bd5181f88ceec0fa850371165ae08" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 29 07:06:49 crc kubenswrapper[4826]: E0129 07:06:49.759954 4826 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-m2d2v" podUID="2d13cc8c-363d-4dcb-af5f-92318cf72a81" containerName="ovs-vswitchd"
Jan 29 07:06:50 crc kubenswrapper[4826]: I0129 07:06:50.666864 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-669bb5748f-zjsxt"
Jan 29 07:06:50 crc kubenswrapper[4826]: I0129 07:06:50.701212 4826 generic.go:334] "Generic (PLEG): container finished" podID="258e4d75-ecca-4001-9f56-aeb39557b326" containerID="5ef7b36d03b1ef462470c22e1ad57d72fd7e0094c00a4af390d706239a6a169c" exitCode=0
Jan 29 07:06:50 crc kubenswrapper[4826]: I0129 07:06:50.701256 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-669bb5748f-zjsxt" event={"ID":"258e4d75-ecca-4001-9f56-aeb39557b326","Type":"ContainerDied","Data":"5ef7b36d03b1ef462470c22e1ad57d72fd7e0094c00a4af390d706239a6a169c"}
Jan 29 07:06:50 crc kubenswrapper[4826]: I0129 07:06:50.701278 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-669bb5748f-zjsxt" event={"ID":"258e4d75-ecca-4001-9f56-aeb39557b326","Type":"ContainerDied","Data":"3693f8212ab3bdea973eefc6771205df90b07ff7f38ccabae234cd26fecefbff"}
Jan 29 07:06:50 crc kubenswrapper[4826]: I0129 07:06:50.701310 4826 scope.go:117] "RemoveContainer" containerID="9dfce637d6a3b6a7342be469a0635e6d8e5afd174f4e993540d7e2a69349016b"
Jan 29 07:06:50 crc kubenswrapper[4826]: I0129 07:06:50.701289 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-669bb5748f-zjsxt"
Jan 29 07:06:50 crc kubenswrapper[4826]: I0129 07:06:50.724058 4826 scope.go:117] "RemoveContainer" containerID="5ef7b36d03b1ef462470c22e1ad57d72fd7e0094c00a4af390d706239a6a169c"
Jan 29 07:06:50 crc kubenswrapper[4826]: I0129 07:06:50.725529 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/258e4d75-ecca-4001-9f56-aeb39557b326-internal-tls-certs\") pod \"258e4d75-ecca-4001-9f56-aeb39557b326\" (UID: \"258e4d75-ecca-4001-9f56-aeb39557b326\") "
Jan 29 07:06:50 crc kubenswrapper[4826]: I0129 07:06:50.725570 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/258e4d75-ecca-4001-9f56-aeb39557b326-ovndb-tls-certs\") pod \"258e4d75-ecca-4001-9f56-aeb39557b326\" (UID: \"258e4d75-ecca-4001-9f56-aeb39557b326\") "
Jan 29 07:06:50 crc kubenswrapper[4826]: I0129 07:06:50.725616 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/258e4d75-ecca-4001-9f56-aeb39557b326-httpd-config\") pod \"258e4d75-ecca-4001-9f56-aeb39557b326\" (UID: \"258e4d75-ecca-4001-9f56-aeb39557b326\") "
Jan 29 07:06:50 crc kubenswrapper[4826]: I0129 07:06:50.725693 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/258e4d75-ecca-4001-9f56-aeb39557b326-combined-ca-bundle\") pod \"258e4d75-ecca-4001-9f56-aeb39557b326\" (UID: \"258e4d75-ecca-4001-9f56-aeb39557b326\") "
Jan 29 07:06:50 crc kubenswrapper[4826]: I0129 07:06:50.725731 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tp7z9\" (UniqueName: \"kubernetes.io/projected/258e4d75-ecca-4001-9f56-aeb39557b326-kube-api-access-tp7z9\") pod \"258e4d75-ecca-4001-9f56-aeb39557b326\" (UID: \"258e4d75-ecca-4001-9f56-aeb39557b326\") "
Jan 29 07:06:50 crc kubenswrapper[4826]: I0129 07:06:50.725780 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/258e4d75-ecca-4001-9f56-aeb39557b326-config\") pod \"258e4d75-ecca-4001-9f56-aeb39557b326\" (UID: \"258e4d75-ecca-4001-9f56-aeb39557b326\") "
Jan 29 07:06:50 crc kubenswrapper[4826]: I0129 07:06:50.725813 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/258e4d75-ecca-4001-9f56-aeb39557b326-public-tls-certs\") pod \"258e4d75-ecca-4001-9f56-aeb39557b326\" (UID: \"258e4d75-ecca-4001-9f56-aeb39557b326\") "
Jan 29 07:06:50 crc kubenswrapper[4826]: I0129 07:06:50.730890 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/258e4d75-ecca-4001-9f56-aeb39557b326-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "258e4d75-ecca-4001-9f56-aeb39557b326" (UID: "258e4d75-ecca-4001-9f56-aeb39557b326"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:06:50 crc kubenswrapper[4826]: I0129 07:06:50.731363 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/258e4d75-ecca-4001-9f56-aeb39557b326-kube-api-access-tp7z9" (OuterVolumeSpecName: "kube-api-access-tp7z9") pod "258e4d75-ecca-4001-9f56-aeb39557b326" (UID: "258e4d75-ecca-4001-9f56-aeb39557b326"). InnerVolumeSpecName "kube-api-access-tp7z9".
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 07:06:50 crc kubenswrapper[4826]: I0129 07:06:50.761140 4826 scope.go:117] "RemoveContainer" containerID="9dfce637d6a3b6a7342be469a0635e6d8e5afd174f4e993540d7e2a69349016b"
Jan 29 07:06:50 crc kubenswrapper[4826]: E0129 07:06:50.761802 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dfce637d6a3b6a7342be469a0635e6d8e5afd174f4e993540d7e2a69349016b\": container with ID starting with 9dfce637d6a3b6a7342be469a0635e6d8e5afd174f4e993540d7e2a69349016b not found: ID does not exist" containerID="9dfce637d6a3b6a7342be469a0635e6d8e5afd174f4e993540d7e2a69349016b"
Jan 29 07:06:50 crc kubenswrapper[4826]: I0129 07:06:50.761835 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dfce637d6a3b6a7342be469a0635e6d8e5afd174f4e993540d7e2a69349016b"} err="failed to get container status \"9dfce637d6a3b6a7342be469a0635e6d8e5afd174f4e993540d7e2a69349016b\": rpc error: code = NotFound desc = could not find container \"9dfce637d6a3b6a7342be469a0635e6d8e5afd174f4e993540d7e2a69349016b\": container with ID starting with 9dfce637d6a3b6a7342be469a0635e6d8e5afd174f4e993540d7e2a69349016b not found: ID does not exist"
Jan 29 07:06:50 crc kubenswrapper[4826]: I0129 07:06:50.761858 4826 scope.go:117] "RemoveContainer" containerID="5ef7b36d03b1ef462470c22e1ad57d72fd7e0094c00a4af390d706239a6a169c"
Jan 29 07:06:50 crc kubenswrapper[4826]: E0129 07:06:50.762108 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ef7b36d03b1ef462470c22e1ad57d72fd7e0094c00a4af390d706239a6a169c\": container with ID starting with 5ef7b36d03b1ef462470c22e1ad57d72fd7e0094c00a4af390d706239a6a169c not found: ID does not exist" containerID="5ef7b36d03b1ef462470c22e1ad57d72fd7e0094c00a4af390d706239a6a169c"
Jan 29 07:06:50 crc kubenswrapper[4826]: I0129 07:06:50.762134 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ef7b36d03b1ef462470c22e1ad57d72fd7e0094c00a4af390d706239a6a169c"} err="failed to get container status \"5ef7b36d03b1ef462470c22e1ad57d72fd7e0094c00a4af390d706239a6a169c\": rpc error: code = NotFound desc = could not find container \"5ef7b36d03b1ef462470c22e1ad57d72fd7e0094c00a4af390d706239a6a169c\": container with ID starting with 5ef7b36d03b1ef462470c22e1ad57d72fd7e0094c00a4af390d706239a6a169c not found: ID does not exist"
Jan 29 07:06:50 crc kubenswrapper[4826]: I0129 07:06:50.773287 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/258e4d75-ecca-4001-9f56-aeb39557b326-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "258e4d75-ecca-4001-9f56-aeb39557b326" (UID: "258e4d75-ecca-4001-9f56-aeb39557b326"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:06:50 crc kubenswrapper[4826]: I0129 07:06:50.773779 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/258e4d75-ecca-4001-9f56-aeb39557b326-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "258e4d75-ecca-4001-9f56-aeb39557b326" (UID: "258e4d75-ecca-4001-9f56-aeb39557b326"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:06:50 crc kubenswrapper[4826]: I0129 07:06:50.782136 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/258e4d75-ecca-4001-9f56-aeb39557b326-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "258e4d75-ecca-4001-9f56-aeb39557b326" (UID: "258e4d75-ecca-4001-9f56-aeb39557b326"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:06:50 crc kubenswrapper[4826]: I0129 07:06:50.782268 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/258e4d75-ecca-4001-9f56-aeb39557b326-config" (OuterVolumeSpecName: "config") pod "258e4d75-ecca-4001-9f56-aeb39557b326" (UID: "258e4d75-ecca-4001-9f56-aeb39557b326"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:06:50 crc kubenswrapper[4826]: I0129 07:06:50.802282 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/258e4d75-ecca-4001-9f56-aeb39557b326-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "258e4d75-ecca-4001-9f56-aeb39557b326" (UID: "258e4d75-ecca-4001-9f56-aeb39557b326"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:06:50 crc kubenswrapper[4826]: I0129 07:06:50.827751 4826 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/258e4d75-ecca-4001-9f56-aeb39557b326-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:50 crc kubenswrapper[4826]: I0129 07:06:50.827776 4826 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/258e4d75-ecca-4001-9f56-aeb39557b326-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:50 crc kubenswrapper[4826]: I0129 07:06:50.827785 4826 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/258e4d75-ecca-4001-9f56-aeb39557b326-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:50 crc kubenswrapper[4826]: I0129 07:06:50.827793 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/258e4d75-ecca-4001-9f56-aeb39557b326-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:50 crc kubenswrapper[4826]: I0129 07:06:50.827803 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tp7z9\" (UniqueName: \"kubernetes.io/projected/258e4d75-ecca-4001-9f56-aeb39557b326-kube-api-access-tp7z9\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:50 crc kubenswrapper[4826]: I0129 07:06:50.827813 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/258e4d75-ecca-4001-9f56-aeb39557b326-config\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:50 crc kubenswrapper[4826]: I0129 07:06:50.827821 4826 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/258e4d75-ecca-4001-9f56-aeb39557b326-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:51 crc kubenswrapper[4826]: I0129 07:06:51.020274 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-669bb5748f-zjsxt"]
Jan 29 07:06:51 crc kubenswrapper[4826]: I0129 07:06:51.031844 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-669bb5748f-zjsxt"]
Jan 29 07:06:52 crc kubenswrapper[4826]: I0129 07:06:52.831184 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="258e4d75-ecca-4001-9f56-aeb39557b326" path="/var/lib/kubelet/pods/258e4d75-ecca-4001-9f56-aeb39557b326/volumes"
Jan 29 07:06:54 crc kubenswrapper[4826]: E0129 07:06:54.752861 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6 is running failed: container process not found" containerID="55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 29 07:06:54 crc kubenswrapper[4826]: E0129 07:06:54.754432 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6 is running failed: container process not found" containerID="55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 29 07:06:54 crc kubenswrapper[4826]: E0129 07:06:54.754742 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6 is running failed: container process not found" containerID="55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 29 07:06:54 crc kubenswrapper[4826]: E0129 07:06:54.754767 4826 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-m2d2v" podUID="2d13cc8c-363d-4dcb-af5f-92318cf72a81" containerName="ovsdb-server"
Jan 29 07:06:54 crc kubenswrapper[4826]: E0129 07:06:54.755570 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7279ce2f370fc8321c9e51d6b7f683eb1d9bd5181f88ceec0fa850371165ae08" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 29 07:06:54 crc kubenswrapper[4826]: E0129 07:06:54.757316 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7279ce2f370fc8321c9e51d6b7f683eb1d9bd5181f88ceec0fa850371165ae08"
cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 29 07:06:54 crc kubenswrapper[4826]: E0129 07:06:54.758708 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7279ce2f370fc8321c9e51d6b7f683eb1d9bd5181f88ceec0fa850371165ae08" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 29 07:06:54 crc kubenswrapper[4826]: E0129 07:06:54.758756 4826 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-m2d2v" podUID="2d13cc8c-363d-4dcb-af5f-92318cf72a81" containerName="ovs-vswitchd"
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.268760 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.360233 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"85b51a36-8aa5-46e7-b8ab-a7e672c491d7\" (UID: \"85b51a36-8aa5-46e7-b8ab-a7e672c491d7\") "
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.360284 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-etc-swift\") pod \"85b51a36-8aa5-46e7-b8ab-a7e672c491d7\" (UID: \"85b51a36-8aa5-46e7-b8ab-a7e672c491d7\") "
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.360353 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-cache\") pod \"85b51a36-8aa5-46e7-b8ab-a7e672c491d7\" (UID: \"85b51a36-8aa5-46e7-b8ab-a7e672c491d7\") "
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.360403 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s588z\" (UniqueName: \"kubernetes.io/projected/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-kube-api-access-s588z\") pod \"85b51a36-8aa5-46e7-b8ab-a7e672c491d7\" (UID: \"85b51a36-8aa5-46e7-b8ab-a7e672c491d7\") "
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.360433 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-combined-ca-bundle\") pod \"85b51a36-8aa5-46e7-b8ab-a7e672c491d7\" (UID: \"85b51a36-8aa5-46e7-b8ab-a7e672c491d7\") "
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.360521 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-lock\") pod \"85b51a36-8aa5-46e7-b8ab-a7e672c491d7\" (UID: \"85b51a36-8aa5-46e7-b8ab-a7e672c491d7\") "
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.361269 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-lock" (OuterVolumeSpecName: "lock") pod "85b51a36-8aa5-46e7-b8ab-a7e672c491d7" (UID: "85b51a36-8aa5-46e7-b8ab-a7e672c491d7"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.361387 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-cache" (OuterVolumeSpecName: "cache") pod "85b51a36-8aa5-46e7-b8ab-a7e672c491d7" (UID: "85b51a36-8aa5-46e7-b8ab-a7e672c491d7"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.366514 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-kube-api-access-s588z" (OuterVolumeSpecName: "kube-api-access-s588z") pod "85b51a36-8aa5-46e7-b8ab-a7e672c491d7" (UID: "85b51a36-8aa5-46e7-b8ab-a7e672c491d7"). InnerVolumeSpecName "kube-api-access-s588z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.367070 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-m2d2v_2d13cc8c-363d-4dcb-af5f-92318cf72a81/ovs-vswitchd/0.log"
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.367875 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "swift") pod "85b51a36-8aa5-46e7-b8ab-a7e672c491d7" (UID: "85b51a36-8aa5-46e7-b8ab-a7e672c491d7"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.368243 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "85b51a36-8aa5-46e7-b8ab-a7e672c491d7" (UID: "85b51a36-8aa5-46e7-b8ab-a7e672c491d7"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.368427 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-m2d2v"
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.461738 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2d13cc8c-363d-4dcb-af5f-92318cf72a81-etc-ovs\") pod \"2d13cc8c-363d-4dcb-af5f-92318cf72a81\" (UID: \"2d13cc8c-363d-4dcb-af5f-92318cf72a81\") "
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.461785 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2d13cc8c-363d-4dcb-af5f-92318cf72a81-var-run\") pod \"2d13cc8c-363d-4dcb-af5f-92318cf72a81\" (UID: \"2d13cc8c-363d-4dcb-af5f-92318cf72a81\") "
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.461812 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2d13cc8c-363d-4dcb-af5f-92318cf72a81-var-lib\") pod \"2d13cc8c-363d-4dcb-af5f-92318cf72a81\" (UID: \"2d13cc8c-363d-4dcb-af5f-92318cf72a81\") "
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.461845 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d13cc8c-363d-4dcb-af5f-92318cf72a81-scripts\") pod \"2d13cc8c-363d-4dcb-af5f-92318cf72a81\" (UID: \"2d13cc8c-363d-4dcb-af5f-92318cf72a81\") "
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.461857 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d13cc8c-363d-4dcb-af5f-92318cf72a81-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "2d13cc8c-363d-4dcb-af5f-92318cf72a81" (UID: "2d13cc8c-363d-4dcb-af5f-92318cf72a81"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.461888 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d13cc8c-363d-4dcb-af5f-92318cf72a81-var-run" (OuterVolumeSpecName: "var-run") pod "2d13cc8c-363d-4dcb-af5f-92318cf72a81" (UID: "2d13cc8c-363d-4dcb-af5f-92318cf72a81"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.461930 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d13cc8c-363d-4dcb-af5f-92318cf72a81-var-lib" (OuterVolumeSpecName: "var-lib") pod "2d13cc8c-363d-4dcb-af5f-92318cf72a81" (UID: "2d13cc8c-363d-4dcb-af5f-92318cf72a81"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.461967 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2d13cc8c-363d-4dcb-af5f-92318cf72a81-var-log\") pod \"2d13cc8c-363d-4dcb-af5f-92318cf72a81\" (UID: \"2d13cc8c-363d-4dcb-af5f-92318cf72a81\") "
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.462015 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvv57\" (UniqueName: \"kubernetes.io/projected/2d13cc8c-363d-4dcb-af5f-92318cf72a81-kube-api-access-qvv57\") pod \"2d13cc8c-363d-4dcb-af5f-92318cf72a81\" (UID: \"2d13cc8c-363d-4dcb-af5f-92318cf72a81\") "
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.462064 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d13cc8c-363d-4dcb-af5f-92318cf72a81-var-log" (OuterVolumeSpecName: "var-log") pod "2d13cc8c-363d-4dcb-af5f-92318cf72a81" (UID: "2d13cc8c-363d-4dcb-af5f-92318cf72a81"). InnerVolumeSpecName "var-log".
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.462351 4826 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2d13cc8c-363d-4dcb-af5f-92318cf72a81-var-log\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.462376 4826 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-lock\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.462402 4826 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" "
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.462415 4826 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.462429 4826 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2d13cc8c-363d-4dcb-af5f-92318cf72a81-etc-ovs\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.462442 4826 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2d13cc8c-363d-4dcb-af5f-92318cf72a81-var-run\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.462453 4826 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-cache\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.462464 4826 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2d13cc8c-363d-4dcb-af5f-92318cf72a81-var-lib\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.462476 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s588z\" (UniqueName: \"kubernetes.io/projected/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-kube-api-access-s588z\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.462994 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d13cc8c-363d-4dcb-af5f-92318cf72a81-scripts" (OuterVolumeSpecName: "scripts") pod "2d13cc8c-363d-4dcb-af5f-92318cf72a81" (UID: "2d13cc8c-363d-4dcb-af5f-92318cf72a81"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.465691 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d13cc8c-363d-4dcb-af5f-92318cf72a81-kube-api-access-qvv57" (OuterVolumeSpecName: "kube-api-access-qvv57") pod "2d13cc8c-363d-4dcb-af5f-92318cf72a81" (UID: "2d13cc8c-363d-4dcb-af5f-92318cf72a81"). InnerVolumeSpecName "kube-api-access-qvv57". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.487664 4826 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc"
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.564093 4826 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.564448 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d13cc8c-363d-4dcb-af5f-92318cf72a81-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.564631 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvv57\" (UniqueName: \"kubernetes.io/projected/2d13cc8c-363d-4dcb-af5f-92318cf72a81-kube-api-access-qvv57\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.641712 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "85b51a36-8aa5-46e7-b8ab-a7e672c491d7" (UID: "85b51a36-8aa5-46e7-b8ab-a7e672c491d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.666044 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85b51a36-8aa5-46e7-b8ab-a7e672c491d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.787585 4826 generic.go:334] "Generic (PLEG): container finished" podID="2d13cc8c-363d-4dcb-af5f-92318cf72a81" containerID="7279ce2f370fc8321c9e51d6b7f683eb1d9bd5181f88ceec0fa850371165ae08" exitCode=137
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.787740 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-m2d2v"
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.787774 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-m2d2v" event={"ID":"2d13cc8c-363d-4dcb-af5f-92318cf72a81","Type":"ContainerDied","Data":"7279ce2f370fc8321c9e51d6b7f683eb1d9bd5181f88ceec0fa850371165ae08"}
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.788454 4826 scope.go:117] "RemoveContainer" containerID="7279ce2f370fc8321c9e51d6b7f683eb1d9bd5181f88ceec0fa850371165ae08"
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.788399 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-m2d2v" event={"ID":"2d13cc8c-363d-4dcb-af5f-92318cf72a81","Type":"ContainerDied","Data":"0d27e5fdbb1ac758e1918ca2744d38aad57098af1310e0adf874d910b96e24d4"}
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.814046 4826 generic.go:334] "Generic (PLEG): container finished" podID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerID="c55f19f9268bb7c5e036640126bfced1629e2a36cd93c83c41856348935b8605" exitCode=137
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.814593 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.826056 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"85b51a36-8aa5-46e7-b8ab-a7e672c491d7","Type":"ContainerDied","Data":"c55f19f9268bb7c5e036640126bfced1629e2a36cd93c83c41856348935b8605"}
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.826368 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"85b51a36-8aa5-46e7-b8ab-a7e672c491d7","Type":"ContainerDied","Data":"c5ffe9f77bd5baddc552c700e4feec61734365d73b6268a0736162bddafa3c78"}
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.851032 4826 scope.go:117] "RemoveContainer" containerID="55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6"
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.864662 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-m2d2v"]
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.872061 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-m2d2v"]
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.890509 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"]
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.894666 4826 scope.go:117] "RemoveContainer" containerID="cedf10890041c37f6ac64bb119ab4a13de2addc104534b554aebcdd824040d66"
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.897627 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"]
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.924031 4826 scope.go:117] "RemoveContainer" containerID="7279ce2f370fc8321c9e51d6b7f683eb1d9bd5181f88ceec0fa850371165ae08"
Jan 29 07:06:58 crc kubenswrapper[4826]: E0129 07:06:58.924757 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7279ce2f370fc8321c9e51d6b7f683eb1d9bd5181f88ceec0fa850371165ae08\": container with ID starting with 7279ce2f370fc8321c9e51d6b7f683eb1d9bd5181f88ceec0fa850371165ae08 not found: ID does not exist" containerID="7279ce2f370fc8321c9e51d6b7f683eb1d9bd5181f88ceec0fa850371165ae08"
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.924812 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7279ce2f370fc8321c9e51d6b7f683eb1d9bd5181f88ceec0fa850371165ae08"} err="failed to get container status \"7279ce2f370fc8321c9e51d6b7f683eb1d9bd5181f88ceec0fa850371165ae08\": rpc error: code = NotFound desc = could not find container \"7279ce2f370fc8321c9e51d6b7f683eb1d9bd5181f88ceec0fa850371165ae08\": container with ID starting with 7279ce2f370fc8321c9e51d6b7f683eb1d9bd5181f88ceec0fa850371165ae08 not found: ID does not exist"
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.924846 4826 scope.go:117] "RemoveContainer" containerID="55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6"
Jan 29 07:06:58 crc kubenswrapper[4826]: E0129 07:06:58.925228 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6\": container with ID starting with 55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6 not found: ID does not exist" containerID="55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6"
Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.925254 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6"} err="failed to get container status \"55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6\": rpc error: code = NotFound desc = could not find container \"55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6\": container
with ID starting with 55f4b9bfec4354c97a1ee291889ec348aea92eb506499e943c7d7509b0aac5c6 not found: ID does not exist" Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.925271 4826 scope.go:117] "RemoveContainer" containerID="cedf10890041c37f6ac64bb119ab4a13de2addc104534b554aebcdd824040d66" Jan 29 07:06:58 crc kubenswrapper[4826]: E0129 07:06:58.925954 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cedf10890041c37f6ac64bb119ab4a13de2addc104534b554aebcdd824040d66\": container with ID starting with cedf10890041c37f6ac64bb119ab4a13de2addc104534b554aebcdd824040d66 not found: ID does not exist" containerID="cedf10890041c37f6ac64bb119ab4a13de2addc104534b554aebcdd824040d66" Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.926002 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cedf10890041c37f6ac64bb119ab4a13de2addc104534b554aebcdd824040d66"} err="failed to get container status \"cedf10890041c37f6ac64bb119ab4a13de2addc104534b554aebcdd824040d66\": rpc error: code = NotFound desc = could not find container \"cedf10890041c37f6ac64bb119ab4a13de2addc104534b554aebcdd824040d66\": container with ID starting with cedf10890041c37f6ac64bb119ab4a13de2addc104534b554aebcdd824040d66 not found: ID does not exist" Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.926036 4826 scope.go:117] "RemoveContainer" containerID="c55f19f9268bb7c5e036640126bfced1629e2a36cd93c83c41856348935b8605" Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.950167 4826 scope.go:117] "RemoveContainer" containerID="68c20d8c7fd999c8a7abcb16322015c670f6cd5b3f9b312e5fcdd9e5080bab19" Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.970956 4826 scope.go:117] "RemoveContainer" containerID="b8b353bcfe9ffab3fe12e51fc1add01cbe0b68e3df21ff9ba62958988fa40c6a" Jan 29 07:06:58 crc kubenswrapper[4826]: I0129 07:06:58.990469 4826 scope.go:117] "RemoveContainer" 
containerID="efc661e13d0a3ae031a931433c46d983a09a4a66496a4085e3c7846558e05913" Jan 29 07:06:59 crc kubenswrapper[4826]: I0129 07:06:59.007118 4826 scope.go:117] "RemoveContainer" containerID="18c3f34b023f38c953270c50875c33014ac0c890dbc279a1a0f7ee0521285e95" Jan 29 07:06:59 crc kubenswrapper[4826]: I0129 07:06:59.035118 4826 scope.go:117] "RemoveContainer" containerID="38125c7287003d38ece9a28cd533530a5bf461facb796759edb517b616c413a5" Jan 29 07:06:59 crc kubenswrapper[4826]: I0129 07:06:59.054685 4826 scope.go:117] "RemoveContainer" containerID="b530291b6bb170ceb4af6e542a3feb436b2acdf8fb99e834d63a533292236ca3" Jan 29 07:06:59 crc kubenswrapper[4826]: I0129 07:06:59.075042 4826 scope.go:117] "RemoveContainer" containerID="4b4aa223c22b7eac65d63841c37b12133011befa1024ace90050e7ee1a72c510" Jan 29 07:06:59 crc kubenswrapper[4826]: I0129 07:06:59.097863 4826 scope.go:117] "RemoveContainer" containerID="af48e0d4ae8e00f830fb238c9c15333c7f43281d0531755b7ffe26f6fbf4c8c6" Jan 29 07:06:59 crc kubenswrapper[4826]: I0129 07:06:59.130053 4826 scope.go:117] "RemoveContainer" containerID="1aa4a9af19d077a22d1e3b460ff6f966acbc829b62c52982dbfc8cc5b918e542" Jan 29 07:06:59 crc kubenswrapper[4826]: I0129 07:06:59.162644 4826 scope.go:117] "RemoveContainer" containerID="d88ad576d400eb2984c89463da7988fbb988847d02187cc4b64734122dc40271" Jan 29 07:06:59 crc kubenswrapper[4826]: I0129 07:06:59.188824 4826 scope.go:117] "RemoveContainer" containerID="44307ca66666a832b9005e4a358a9e508afcfb3595db492ed5623df991aeca7a" Jan 29 07:06:59 crc kubenswrapper[4826]: I0129 07:06:59.215813 4826 scope.go:117] "RemoveContainer" containerID="a26eddb89adfadac24b26e3ff2294f83b0c4d77a9551462e60d51f7c34fff67e" Jan 29 07:06:59 crc kubenswrapper[4826]: I0129 07:06:59.240213 4826 scope.go:117] "RemoveContainer" containerID="3db554d1986fc0fb902da044bdce99172798bdf604c6dcbc79f7a7a8d3cf339a" Jan 29 07:06:59 crc kubenswrapper[4826]: I0129 07:06:59.267824 4826 scope.go:117] "RemoveContainer" 
containerID="6f8be9e7ae68c9cdb46477974e5f9323232d52e97765253a593be96d962264d1" Jan 29 07:06:59 crc kubenswrapper[4826]: I0129 07:06:59.295513 4826 scope.go:117] "RemoveContainer" containerID="c55f19f9268bb7c5e036640126bfced1629e2a36cd93c83c41856348935b8605" Jan 29 07:06:59 crc kubenswrapper[4826]: E0129 07:06:59.296192 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c55f19f9268bb7c5e036640126bfced1629e2a36cd93c83c41856348935b8605\": container with ID starting with c55f19f9268bb7c5e036640126bfced1629e2a36cd93c83c41856348935b8605 not found: ID does not exist" containerID="c55f19f9268bb7c5e036640126bfced1629e2a36cd93c83c41856348935b8605" Jan 29 07:06:59 crc kubenswrapper[4826]: I0129 07:06:59.296254 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c55f19f9268bb7c5e036640126bfced1629e2a36cd93c83c41856348935b8605"} err="failed to get container status \"c55f19f9268bb7c5e036640126bfced1629e2a36cd93c83c41856348935b8605\": rpc error: code = NotFound desc = could not find container \"c55f19f9268bb7c5e036640126bfced1629e2a36cd93c83c41856348935b8605\": container with ID starting with c55f19f9268bb7c5e036640126bfced1629e2a36cd93c83c41856348935b8605 not found: ID does not exist" Jan 29 07:06:59 crc kubenswrapper[4826]: I0129 07:06:59.296285 4826 scope.go:117] "RemoveContainer" containerID="68c20d8c7fd999c8a7abcb16322015c670f6cd5b3f9b312e5fcdd9e5080bab19" Jan 29 07:06:59 crc kubenswrapper[4826]: E0129 07:06:59.296850 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68c20d8c7fd999c8a7abcb16322015c670f6cd5b3f9b312e5fcdd9e5080bab19\": container with ID starting with 68c20d8c7fd999c8a7abcb16322015c670f6cd5b3f9b312e5fcdd9e5080bab19 not found: ID does not exist" containerID="68c20d8c7fd999c8a7abcb16322015c670f6cd5b3f9b312e5fcdd9e5080bab19" Jan 29 07:06:59 crc 
kubenswrapper[4826]: I0129 07:06:59.296890 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68c20d8c7fd999c8a7abcb16322015c670f6cd5b3f9b312e5fcdd9e5080bab19"} err="failed to get container status \"68c20d8c7fd999c8a7abcb16322015c670f6cd5b3f9b312e5fcdd9e5080bab19\": rpc error: code = NotFound desc = could not find container \"68c20d8c7fd999c8a7abcb16322015c670f6cd5b3f9b312e5fcdd9e5080bab19\": container with ID starting with 68c20d8c7fd999c8a7abcb16322015c670f6cd5b3f9b312e5fcdd9e5080bab19 not found: ID does not exist" Jan 29 07:06:59 crc kubenswrapper[4826]: I0129 07:06:59.296919 4826 scope.go:117] "RemoveContainer" containerID="b8b353bcfe9ffab3fe12e51fc1add01cbe0b68e3df21ff9ba62958988fa40c6a" Jan 29 07:06:59 crc kubenswrapper[4826]: E0129 07:06:59.297266 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8b353bcfe9ffab3fe12e51fc1add01cbe0b68e3df21ff9ba62958988fa40c6a\": container with ID starting with b8b353bcfe9ffab3fe12e51fc1add01cbe0b68e3df21ff9ba62958988fa40c6a not found: ID does not exist" containerID="b8b353bcfe9ffab3fe12e51fc1add01cbe0b68e3df21ff9ba62958988fa40c6a" Jan 29 07:06:59 crc kubenswrapper[4826]: I0129 07:06:59.297360 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8b353bcfe9ffab3fe12e51fc1add01cbe0b68e3df21ff9ba62958988fa40c6a"} err="failed to get container status \"b8b353bcfe9ffab3fe12e51fc1add01cbe0b68e3df21ff9ba62958988fa40c6a\": rpc error: code = NotFound desc = could not find container \"b8b353bcfe9ffab3fe12e51fc1add01cbe0b68e3df21ff9ba62958988fa40c6a\": container with ID starting with b8b353bcfe9ffab3fe12e51fc1add01cbe0b68e3df21ff9ba62958988fa40c6a not found: ID does not exist" Jan 29 07:06:59 crc kubenswrapper[4826]: I0129 07:06:59.297402 4826 scope.go:117] "RemoveContainer" containerID="efc661e13d0a3ae031a931433c46d983a09a4a66496a4085e3c7846558e05913" Jan 29 
07:06:59 crc kubenswrapper[4826]: E0129 07:06:59.297778 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efc661e13d0a3ae031a931433c46d983a09a4a66496a4085e3c7846558e05913\": container with ID starting with efc661e13d0a3ae031a931433c46d983a09a4a66496a4085e3c7846558e05913 not found: ID does not exist" containerID="efc661e13d0a3ae031a931433c46d983a09a4a66496a4085e3c7846558e05913" Jan 29 07:06:59 crc kubenswrapper[4826]: I0129 07:06:59.297832 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efc661e13d0a3ae031a931433c46d983a09a4a66496a4085e3c7846558e05913"} err="failed to get container status \"efc661e13d0a3ae031a931433c46d983a09a4a66496a4085e3c7846558e05913\": rpc error: code = NotFound desc = could not find container \"efc661e13d0a3ae031a931433c46d983a09a4a66496a4085e3c7846558e05913\": container with ID starting with efc661e13d0a3ae031a931433c46d983a09a4a66496a4085e3c7846558e05913 not found: ID does not exist" Jan 29 07:06:59 crc kubenswrapper[4826]: I0129 07:06:59.297855 4826 scope.go:117] "RemoveContainer" containerID="18c3f34b023f38c953270c50875c33014ac0c890dbc279a1a0f7ee0521285e95" Jan 29 07:06:59 crc kubenswrapper[4826]: E0129 07:06:59.298240 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18c3f34b023f38c953270c50875c33014ac0c890dbc279a1a0f7ee0521285e95\": container with ID starting with 18c3f34b023f38c953270c50875c33014ac0c890dbc279a1a0f7ee0521285e95 not found: ID does not exist" containerID="18c3f34b023f38c953270c50875c33014ac0c890dbc279a1a0f7ee0521285e95" Jan 29 07:06:59 crc kubenswrapper[4826]: I0129 07:06:59.298281 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18c3f34b023f38c953270c50875c33014ac0c890dbc279a1a0f7ee0521285e95"} err="failed to get container status 
\"18c3f34b023f38c953270c50875c33014ac0c890dbc279a1a0f7ee0521285e95\": rpc error: code = NotFound desc = could not find container \"18c3f34b023f38c953270c50875c33014ac0c890dbc279a1a0f7ee0521285e95\": container with ID starting with 18c3f34b023f38c953270c50875c33014ac0c890dbc279a1a0f7ee0521285e95 not found: ID does not exist" Jan 29 07:06:59 crc kubenswrapper[4826]: I0129 07:06:59.298331 4826 scope.go:117] "RemoveContainer" containerID="38125c7287003d38ece9a28cd533530a5bf461facb796759edb517b616c413a5" Jan 29 07:06:59 crc kubenswrapper[4826]: E0129 07:06:59.298745 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38125c7287003d38ece9a28cd533530a5bf461facb796759edb517b616c413a5\": container with ID starting with 38125c7287003d38ece9a28cd533530a5bf461facb796759edb517b616c413a5 not found: ID does not exist" containerID="38125c7287003d38ece9a28cd533530a5bf461facb796759edb517b616c413a5" Jan 29 07:06:59 crc kubenswrapper[4826]: I0129 07:06:59.298793 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38125c7287003d38ece9a28cd533530a5bf461facb796759edb517b616c413a5"} err="failed to get container status \"38125c7287003d38ece9a28cd533530a5bf461facb796759edb517b616c413a5\": rpc error: code = NotFound desc = could not find container \"38125c7287003d38ece9a28cd533530a5bf461facb796759edb517b616c413a5\": container with ID starting with 38125c7287003d38ece9a28cd533530a5bf461facb796759edb517b616c413a5 not found: ID does not exist" Jan 29 07:06:59 crc kubenswrapper[4826]: I0129 07:06:59.298829 4826 scope.go:117] "RemoveContainer" containerID="b530291b6bb170ceb4af6e542a3feb436b2acdf8fb99e834d63a533292236ca3" Jan 29 07:06:59 crc kubenswrapper[4826]: E0129 07:06:59.299137 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b530291b6bb170ceb4af6e542a3feb436b2acdf8fb99e834d63a533292236ca3\": container with ID starting with b530291b6bb170ceb4af6e542a3feb436b2acdf8fb99e834d63a533292236ca3 not found: ID does not exist" containerID="b530291b6bb170ceb4af6e542a3feb436b2acdf8fb99e834d63a533292236ca3" Jan 29 07:06:59 crc kubenswrapper[4826]: I0129 07:06:59.299206 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b530291b6bb170ceb4af6e542a3feb436b2acdf8fb99e834d63a533292236ca3"} err="failed to get container status \"b530291b6bb170ceb4af6e542a3feb436b2acdf8fb99e834d63a533292236ca3\": rpc error: code = NotFound desc = could not find container \"b530291b6bb170ceb4af6e542a3feb436b2acdf8fb99e834d63a533292236ca3\": container with ID starting with b530291b6bb170ceb4af6e542a3feb436b2acdf8fb99e834d63a533292236ca3 not found: ID does not exist" Jan 29 07:06:59 crc kubenswrapper[4826]: I0129 07:06:59.299225 4826 scope.go:117] "RemoveContainer" containerID="4b4aa223c22b7eac65d63841c37b12133011befa1024ace90050e7ee1a72c510" Jan 29 07:06:59 crc kubenswrapper[4826]: E0129 07:06:59.299597 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b4aa223c22b7eac65d63841c37b12133011befa1024ace90050e7ee1a72c510\": container with ID starting with 4b4aa223c22b7eac65d63841c37b12133011befa1024ace90050e7ee1a72c510 not found: ID does not exist" containerID="4b4aa223c22b7eac65d63841c37b12133011befa1024ace90050e7ee1a72c510" Jan 29 07:06:59 crc kubenswrapper[4826]: I0129 07:06:59.299627 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b4aa223c22b7eac65d63841c37b12133011befa1024ace90050e7ee1a72c510"} err="failed to get container status \"4b4aa223c22b7eac65d63841c37b12133011befa1024ace90050e7ee1a72c510\": rpc error: code = NotFound desc = could not find container \"4b4aa223c22b7eac65d63841c37b12133011befa1024ace90050e7ee1a72c510\": container with ID 
starting with 4b4aa223c22b7eac65d63841c37b12133011befa1024ace90050e7ee1a72c510 not found: ID does not exist" Jan 29 07:06:59 crc kubenswrapper[4826]: I0129 07:06:59.299645 4826 scope.go:117] "RemoveContainer" containerID="af48e0d4ae8e00f830fb238c9c15333c7f43281d0531755b7ffe26f6fbf4c8c6" Jan 29 07:06:59 crc kubenswrapper[4826]: E0129 07:06:59.300040 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af48e0d4ae8e00f830fb238c9c15333c7f43281d0531755b7ffe26f6fbf4c8c6\": container with ID starting with af48e0d4ae8e00f830fb238c9c15333c7f43281d0531755b7ffe26f6fbf4c8c6 not found: ID does not exist" containerID="af48e0d4ae8e00f830fb238c9c15333c7f43281d0531755b7ffe26f6fbf4c8c6" Jan 29 07:06:59 crc kubenswrapper[4826]: I0129 07:06:59.300072 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af48e0d4ae8e00f830fb238c9c15333c7f43281d0531755b7ffe26f6fbf4c8c6"} err="failed to get container status \"af48e0d4ae8e00f830fb238c9c15333c7f43281d0531755b7ffe26f6fbf4c8c6\": rpc error: code = NotFound desc = could not find container \"af48e0d4ae8e00f830fb238c9c15333c7f43281d0531755b7ffe26f6fbf4c8c6\": container with ID starting with af48e0d4ae8e00f830fb238c9c15333c7f43281d0531755b7ffe26f6fbf4c8c6 not found: ID does not exist" Jan 29 07:06:59 crc kubenswrapper[4826]: I0129 07:06:59.300093 4826 scope.go:117] "RemoveContainer" containerID="1aa4a9af19d077a22d1e3b460ff6f966acbc829b62c52982dbfc8cc5b918e542" Jan 29 07:06:59 crc kubenswrapper[4826]: E0129 07:06:59.302330 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1aa4a9af19d077a22d1e3b460ff6f966acbc829b62c52982dbfc8cc5b918e542\": container with ID starting with 1aa4a9af19d077a22d1e3b460ff6f966acbc829b62c52982dbfc8cc5b918e542 not found: ID does not exist" containerID="1aa4a9af19d077a22d1e3b460ff6f966acbc829b62c52982dbfc8cc5b918e542" Jan 29 
07:06:59 crc kubenswrapper[4826]: I0129 07:06:59.302389 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aa4a9af19d077a22d1e3b460ff6f966acbc829b62c52982dbfc8cc5b918e542"} err="failed to get container status \"1aa4a9af19d077a22d1e3b460ff6f966acbc829b62c52982dbfc8cc5b918e542\": rpc error: code = NotFound desc = could not find container \"1aa4a9af19d077a22d1e3b460ff6f966acbc829b62c52982dbfc8cc5b918e542\": container with ID starting with 1aa4a9af19d077a22d1e3b460ff6f966acbc829b62c52982dbfc8cc5b918e542 not found: ID does not exist" Jan 29 07:06:59 crc kubenswrapper[4826]: I0129 07:06:59.302476 4826 scope.go:117] "RemoveContainer" containerID="d88ad576d400eb2984c89463da7988fbb988847d02187cc4b64734122dc40271" Jan 29 07:06:59 crc kubenswrapper[4826]: E0129 07:06:59.302902 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d88ad576d400eb2984c89463da7988fbb988847d02187cc4b64734122dc40271\": container with ID starting with d88ad576d400eb2984c89463da7988fbb988847d02187cc4b64734122dc40271 not found: ID does not exist" containerID="d88ad576d400eb2984c89463da7988fbb988847d02187cc4b64734122dc40271" Jan 29 07:06:59 crc kubenswrapper[4826]: I0129 07:06:59.302935 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d88ad576d400eb2984c89463da7988fbb988847d02187cc4b64734122dc40271"} err="failed to get container status \"d88ad576d400eb2984c89463da7988fbb988847d02187cc4b64734122dc40271\": rpc error: code = NotFound desc = could not find container \"d88ad576d400eb2984c89463da7988fbb988847d02187cc4b64734122dc40271\": container with ID starting with d88ad576d400eb2984c89463da7988fbb988847d02187cc4b64734122dc40271 not found: ID does not exist" Jan 29 07:06:59 crc kubenswrapper[4826]: I0129 07:06:59.302954 4826 scope.go:117] "RemoveContainer" 
containerID="44307ca66666a832b9005e4a358a9e508afcfb3595db492ed5623df991aeca7a" Jan 29 07:06:59 crc kubenswrapper[4826]: E0129 07:06:59.304055 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44307ca66666a832b9005e4a358a9e508afcfb3595db492ed5623df991aeca7a\": container with ID starting with 44307ca66666a832b9005e4a358a9e508afcfb3595db492ed5623df991aeca7a not found: ID does not exist" containerID="44307ca66666a832b9005e4a358a9e508afcfb3595db492ed5623df991aeca7a" Jan 29 07:06:59 crc kubenswrapper[4826]: I0129 07:06:59.304115 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44307ca66666a832b9005e4a358a9e508afcfb3595db492ed5623df991aeca7a"} err="failed to get container status \"44307ca66666a832b9005e4a358a9e508afcfb3595db492ed5623df991aeca7a\": rpc error: code = NotFound desc = could not find container \"44307ca66666a832b9005e4a358a9e508afcfb3595db492ed5623df991aeca7a\": container with ID starting with 44307ca66666a832b9005e4a358a9e508afcfb3595db492ed5623df991aeca7a not found: ID does not exist" Jan 29 07:06:59 crc kubenswrapper[4826]: I0129 07:06:59.304149 4826 scope.go:117] "RemoveContainer" containerID="a26eddb89adfadac24b26e3ff2294f83b0c4d77a9551462e60d51f7c34fff67e" Jan 29 07:06:59 crc kubenswrapper[4826]: E0129 07:06:59.304625 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a26eddb89adfadac24b26e3ff2294f83b0c4d77a9551462e60d51f7c34fff67e\": container with ID starting with a26eddb89adfadac24b26e3ff2294f83b0c4d77a9551462e60d51f7c34fff67e not found: ID does not exist" containerID="a26eddb89adfadac24b26e3ff2294f83b0c4d77a9551462e60d51f7c34fff67e" Jan 29 07:06:59 crc kubenswrapper[4826]: I0129 07:06:59.304665 4826 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a26eddb89adfadac24b26e3ff2294f83b0c4d77a9551462e60d51f7c34fff67e"} err="failed to get container status \"a26eddb89adfadac24b26e3ff2294f83b0c4d77a9551462e60d51f7c34fff67e\": rpc error: code = NotFound desc = could not find container \"a26eddb89adfadac24b26e3ff2294f83b0c4d77a9551462e60d51f7c34fff67e\": container with ID starting with a26eddb89adfadac24b26e3ff2294f83b0c4d77a9551462e60d51f7c34fff67e not found: ID does not exist" Jan 29 07:06:59 crc kubenswrapper[4826]: I0129 07:06:59.304712 4826 scope.go:117] "RemoveContainer" containerID="3db554d1986fc0fb902da044bdce99172798bdf604c6dcbc79f7a7a8d3cf339a" Jan 29 07:06:59 crc kubenswrapper[4826]: E0129 07:06:59.305149 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3db554d1986fc0fb902da044bdce99172798bdf604c6dcbc79f7a7a8d3cf339a\": container with ID starting with 3db554d1986fc0fb902da044bdce99172798bdf604c6dcbc79f7a7a8d3cf339a not found: ID does not exist" containerID="3db554d1986fc0fb902da044bdce99172798bdf604c6dcbc79f7a7a8d3cf339a" Jan 29 07:06:59 crc kubenswrapper[4826]: I0129 07:06:59.305205 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3db554d1986fc0fb902da044bdce99172798bdf604c6dcbc79f7a7a8d3cf339a"} err="failed to get container status \"3db554d1986fc0fb902da044bdce99172798bdf604c6dcbc79f7a7a8d3cf339a\": rpc error: code = NotFound desc = could not find container \"3db554d1986fc0fb902da044bdce99172798bdf604c6dcbc79f7a7a8d3cf339a\": container with ID starting with 3db554d1986fc0fb902da044bdce99172798bdf604c6dcbc79f7a7a8d3cf339a not found: ID does not exist" Jan 29 07:06:59 crc kubenswrapper[4826]: I0129 07:06:59.305223 4826 scope.go:117] "RemoveContainer" containerID="6f8be9e7ae68c9cdb46477974e5f9323232d52e97765253a593be96d962264d1" Jan 29 07:06:59 crc kubenswrapper[4826]: E0129 07:06:59.305819 4826 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"6f8be9e7ae68c9cdb46477974e5f9323232d52e97765253a593be96d962264d1\": container with ID starting with 6f8be9e7ae68c9cdb46477974e5f9323232d52e97765253a593be96d962264d1 not found: ID does not exist" containerID="6f8be9e7ae68c9cdb46477974e5f9323232d52e97765253a593be96d962264d1" Jan 29 07:06:59 crc kubenswrapper[4826]: I0129 07:06:59.305854 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f8be9e7ae68c9cdb46477974e5f9323232d52e97765253a593be96d962264d1"} err="failed to get container status \"6f8be9e7ae68c9cdb46477974e5f9323232d52e97765253a593be96d962264d1\": rpc error: code = NotFound desc = could not find container \"6f8be9e7ae68c9cdb46477974e5f9323232d52e97765253a593be96d962264d1\": container with ID starting with 6f8be9e7ae68c9cdb46477974e5f9323232d52e97765253a593be96d962264d1 not found: ID does not exist" Jan 29 07:07:00 crc kubenswrapper[4826]: I0129 07:07:00.823771 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d13cc8c-363d-4dcb-af5f-92318cf72a81" path="/var/lib/kubelet/pods/2d13cc8c-363d-4dcb-af5f-92318cf72a81/volumes" Jan 29 07:07:00 crc kubenswrapper[4826]: I0129 07:07:00.825668 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" path="/var/lib/kubelet/pods/85b51a36-8aa5-46e7-b8ab-a7e672c491d7/volumes" Jan 29 07:07:05 crc kubenswrapper[4826]: I0129 07:07:05.656700 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:07:05 crc kubenswrapper[4826]: I0129 07:07:05.657189 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" 
podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:07:05 crc kubenswrapper[4826]: I0129 07:07:05.657269 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" Jan 29 07:07:05 crc kubenswrapper[4826]: I0129 07:07:05.658916 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"577b176493c80a578b39974191ea87b611ce451ac0e7d53efe3f736b701ffd68"} pod="openshift-machine-config-operator/machine-config-daemon-llzmh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 07:07:05 crc kubenswrapper[4826]: I0129 07:07:05.659064 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" containerID="cri-o://577b176493c80a578b39974191ea87b611ce451ac0e7d53efe3f736b701ffd68" gracePeriod=600 Jan 29 07:07:05 crc kubenswrapper[4826]: I0129 07:07:05.891966 4826 generic.go:334] "Generic (PLEG): container finished" podID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerID="577b176493c80a578b39974191ea87b611ce451ac0e7d53efe3f736b701ffd68" exitCode=0 Jan 29 07:07:05 crc kubenswrapper[4826]: I0129 07:07:05.892031 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerDied","Data":"577b176493c80a578b39974191ea87b611ce451ac0e7d53efe3f736b701ffd68"} Jan 29 07:07:05 crc kubenswrapper[4826]: I0129 07:07:05.892571 4826 scope.go:117] "RemoveContainer" 
containerID="2746c36a8cbae641f39bc5b503c4b8bd16a73e3034bddd5ca4705c812e26566f" Jan 29 07:07:06 crc kubenswrapper[4826]: I0129 07:07:06.908669 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerStarted","Data":"96af55d8aa0683eb5c7cbf6b77ee7119086cd3e7df0b1c777101e3bc473785a0"} Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.835057 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vpc8j"] Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.835790 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="426997bd-6ba1-4ebb-b8d3-08be081add91" containerName="galera" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.835802 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="426997bd-6ba1-4ebb-b8d3-08be081add91" containerName="galera" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.835813 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="account-replicator" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.835819 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="account-replicator" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.835830 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf226f75-106f-4f53-b33b-59f9ebbbefc3" containerName="extract-content" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.835836 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf226f75-106f-4f53-b33b-59f9ebbbefc3" containerName="extract-content" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.835848 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca39ae08-94df-4778-8203-bcff5806eff0" containerName="barbican-api-log" Jan 29 07:07:56 crc 
kubenswrapper[4826]: I0129 07:07:56.835854 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca39ae08-94df-4778-8203-bcff5806eff0" containerName="barbican-api-log" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.835863 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf226f75-106f-4f53-b33b-59f9ebbbefc3" containerName="registry-server" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.835868 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf226f75-106f-4f53-b33b-59f9ebbbefc3" containerName="registry-server" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.835879 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="426997bd-6ba1-4ebb-b8d3-08be081add91" containerName="mysql-bootstrap" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.835886 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="426997bd-6ba1-4ebb-b8d3-08be081add91" containerName="mysql-bootstrap" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.835892 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9016472-5ff0-4849-bc8a-c1d815d27931" containerName="keystone-api" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.835897 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9016472-5ff0-4849-bc8a-c1d815d27931" containerName="keystone-api" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.835906 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af36d2e1-464b-4ada-9b91-2c18c52502d1" containerName="memcached" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.835911 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="af36d2e1-464b-4ada-9b91-2c18c52502d1" containerName="memcached" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.835918 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="object-auditor" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 
07:07:56.835923 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="object-auditor" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.835929 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="rsync" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.835934 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="rsync" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.835943 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf226f75-106f-4f53-b33b-59f9ebbbefc3" containerName="extract-utilities" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.835949 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf226f75-106f-4f53-b33b-59f9ebbbefc3" containerName="extract-utilities" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.835957 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17dd6ec1-84fb-4bb3-8700-c8691f059937" containerName="openstack-network-exporter" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.835963 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="17dd6ec1-84fb-4bb3-8700-c8691f059937" containerName="openstack-network-exporter" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.835973 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="229ff3bd-fac5-4bb5-ba1e-9e829c30f45b" containerName="nova-cell0-conductor-conductor" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.835978 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="229ff3bd-fac5-4bb5-ba1e-9e829c30f45b" containerName="nova-cell0-conductor-conductor" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.835986 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d13cc8c-363d-4dcb-af5f-92318cf72a81" containerName="ovsdb-server" Jan 29 07:07:56 crc 
kubenswrapper[4826]: I0129 07:07:56.835991 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d13cc8c-363d-4dcb-af5f-92318cf72a81" containerName="ovsdb-server" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836000 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="container-replicator" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836006 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="container-replicator" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836014 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34f59971-b32b-4b19-950c-77af3de22fd6" containerName="proxy-httpd" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836020 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="34f59971-b32b-4b19-950c-77af3de22fd6" containerName="proxy-httpd" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836029 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="216e612b-abc2-4d7c-8b10-28a595de5302" containerName="barbican-keystone-listener-log" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836035 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="216e612b-abc2-4d7c-8b10-28a595de5302" containerName="barbican-keystone-listener-log" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836041 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3de4a3bc-a01f-424a-8f17-60deaba1f189" containerName="nova-api-log" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836047 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="3de4a3bc-a01f-424a-8f17-60deaba1f189" containerName="nova-api-log" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836056 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="426fe450-4a4d-4048-8ea4-422d39482ceb" containerName="barbican-worker-log" Jan 29 
07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836062 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="426fe450-4a4d-4048-8ea4-422d39482ceb" containerName="barbican-worker-log" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836071 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1794f620-102a-4b9c-9097-713579ec55ad" containerName="setup-container" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836081 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="1794f620-102a-4b9c-9097-713579ec55ad" containerName="setup-container" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836090 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca39ae08-94df-4778-8203-bcff5806eff0" containerName="barbican-api" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836096 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca39ae08-94df-4778-8203-bcff5806eff0" containerName="barbican-api" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836107 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34f59971-b32b-4b19-950c-77af3de22fd6" containerName="ceilometer-central-agent" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836112 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="34f59971-b32b-4b19-950c-77af3de22fd6" containerName="ceilometer-central-agent" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836120 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="container-auditor" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836125 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="container-auditor" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836134 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1beb9e09-4039-4ce6-a33f-0d34e10b1cfe" containerName="placement-api" Jan 
29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836139 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="1beb9e09-4039-4ce6-a33f-0d34e10b1cfe" containerName="placement-api" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836149 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5378dab4-ad0c-4259-a7d2-d3f7e784a142" containerName="glance-log" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836156 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5378dab4-ad0c-4259-a7d2-d3f7e784a142" containerName="glance-log" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836163 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9349a8ff-2652-4dcf-89d9-6d440269be8c" containerName="nova-metadata-log" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836169 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="9349a8ff-2652-4dcf-89d9-6d440269be8c" containerName="nova-metadata-log" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836178 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="object-expirer" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836185 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="object-expirer" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836194 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="object-replicator" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836199 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="object-replicator" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836208 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0da3bc6b-99a0-4de9-9479-5aaef8bfd81c" containerName="setup-container" Jan 29 07:07:56 crc 
kubenswrapper[4826]: I0129 07:07:56.836213 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="0da3bc6b-99a0-4de9-9479-5aaef8bfd81c" containerName="setup-container" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836221 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbbbec70-be7f-4a31-9f97-76d5c78b1cd0" containerName="ovn-controller" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836226 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbbbec70-be7f-4a31-9f97-76d5c78b1cd0" containerName="ovn-controller" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836232 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d13cc8c-363d-4dcb-af5f-92318cf72a81" containerName="ovs-vswitchd" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836237 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d13cc8c-363d-4dcb-af5f-92318cf72a81" containerName="ovs-vswitchd" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836246 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="960b6ae0-2577-444e-bc2a-bea4ec2917f9" containerName="nova-scheduler-scheduler" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836251 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="960b6ae0-2577-444e-bc2a-bea4ec2917f9" containerName="nova-scheduler-scheduler" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836260 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="258e4d75-ecca-4001-9f56-aeb39557b326" containerName="neutron-httpd" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836266 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="258e4d75-ecca-4001-9f56-aeb39557b326" containerName="neutron-httpd" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836273 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="object-server" Jan 29 07:07:56 crc 
kubenswrapper[4826]: I0129 07:07:56.836278 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="object-server" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836284 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42903a4e-8bdc-4c7b-bd44-b87199a848e6" containerName="kube-state-metrics" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836290 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="42903a4e-8bdc-4c7b-bd44-b87199a848e6" containerName="kube-state-metrics" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836311 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34f59971-b32b-4b19-950c-77af3de22fd6" containerName="ceilometer-notification-agent" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836317 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="34f59971-b32b-4b19-950c-77af3de22fd6" containerName="ceilometer-notification-agent" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836324 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="object-updater" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836329 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="object-updater" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836340 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="account-server" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836347 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="account-server" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836353 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5378dab4-ad0c-4259-a7d2-d3f7e784a142" containerName="glance-httpd" Jan 29 
07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836359 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5378dab4-ad0c-4259-a7d2-d3f7e784a142" containerName="glance-httpd" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836369 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1794f620-102a-4b9c-9097-713579ec55ad" containerName="rabbitmq" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836374 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="1794f620-102a-4b9c-9097-713579ec55ad" containerName="rabbitmq" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836381 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1beb9e09-4039-4ce6-a33f-0d34e10b1cfe" containerName="placement-log" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836387 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="1beb9e09-4039-4ce6-a33f-0d34e10b1cfe" containerName="placement-log" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836396 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="container-server" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836402 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="container-server" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836408 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3de4a3bc-a01f-424a-8f17-60deaba1f189" containerName="nova-api-api" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836413 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="3de4a3bc-a01f-424a-8f17-60deaba1f189" containerName="nova-api-api" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836420 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17dd6ec1-84fb-4bb3-8700-c8691f059937" containerName="ovn-northd" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 
07:07:56.836425 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="17dd6ec1-84fb-4bb3-8700-c8691f059937" containerName="ovn-northd" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836434 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="container-updater" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836439 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="container-updater" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836449 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c080b978-6895-4067-9dd5-2c23d4d68518" containerName="glance-httpd" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836454 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="c080b978-6895-4067-9dd5-2c23d4d68518" containerName="glance-httpd" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836460 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="426fe450-4a4d-4048-8ea4-422d39482ceb" containerName="barbican-worker" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836465 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="426fe450-4a4d-4048-8ea4-422d39482ceb" containerName="barbican-worker" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836475 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d13cc8c-363d-4dcb-af5f-92318cf72a81" containerName="ovsdb-server-init" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836481 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d13cc8c-363d-4dcb-af5f-92318cf72a81" containerName="ovsdb-server-init" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836491 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="216e612b-abc2-4d7c-8b10-28a595de5302" containerName="barbican-keystone-listener" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 
07:07:56.836496 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="216e612b-abc2-4d7c-8b10-28a595de5302" containerName="barbican-keystone-listener" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836504 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34f59971-b32b-4b19-950c-77af3de22fd6" containerName="sg-core" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836509 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="34f59971-b32b-4b19-950c-77af3de22fd6" containerName="sg-core" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836518 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c080b978-6895-4067-9dd5-2c23d4d68518" containerName="glance-log" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836523 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="c080b978-6895-4067-9dd5-2c23d4d68518" containerName="glance-log" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836529 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9349a8ff-2652-4dcf-89d9-6d440269be8c" containerName="nova-metadata-metadata" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836534 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="9349a8ff-2652-4dcf-89d9-6d440269be8c" containerName="nova-metadata-metadata" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836544 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="account-auditor" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836550 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="account-auditor" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836556 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea529cf3-184e-446a-9c6a-759cf1bab14c" containerName="cinder-scheduler" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 
07:07:56.836561 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea529cf3-184e-446a-9c6a-759cf1bab14c" containerName="cinder-scheduler" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836569 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea529cf3-184e-446a-9c6a-759cf1bab14c" containerName="probe" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836574 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea529cf3-184e-446a-9c6a-759cf1bab14c" containerName="probe" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836583 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="account-reaper" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836588 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="account-reaper" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836597 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="258e4d75-ecca-4001-9f56-aeb39557b326" containerName="neutron-api" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836602 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="258e4d75-ecca-4001-9f56-aeb39557b326" containerName="neutron-api" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836609 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0da3bc6b-99a0-4de9-9479-5aaef8bfd81c" containerName="rabbitmq" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836614 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="0da3bc6b-99a0-4de9-9479-5aaef8bfd81c" containerName="rabbitmq" Jan 29 07:07:56 crc kubenswrapper[4826]: E0129 07:07:56.836621 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="swift-recon-cron" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836627 4826 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="swift-recon-cron" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836744 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="1beb9e09-4039-4ce6-a33f-0d34e10b1cfe" containerName="placement-api" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836753 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="c080b978-6895-4067-9dd5-2c23d4d68518" containerName="glance-httpd" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836764 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="container-auditor" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836772 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="0da3bc6b-99a0-4de9-9479-5aaef8bfd81c" containerName="rabbitmq" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836782 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="object-expirer" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836791 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="swift-recon-cron" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836800 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="229ff3bd-fac5-4bb5-ba1e-9e829c30f45b" containerName="nova-cell0-conductor-conductor" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836807 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca39ae08-94df-4778-8203-bcff5806eff0" containerName="barbican-api" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836814 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="426fe450-4a4d-4048-8ea4-422d39482ceb" containerName="barbican-worker-log" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836823 4826 
memory_manager.go:354] "RemoveStaleState removing state" podUID="42903a4e-8bdc-4c7b-bd44-b87199a848e6" containerName="kube-state-metrics" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836832 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="account-auditor" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836838 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="426997bd-6ba1-4ebb-b8d3-08be081add91" containerName="galera" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836848 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="3de4a3bc-a01f-424a-8f17-60deaba1f189" containerName="nova-api-log" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836855 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca39ae08-94df-4778-8203-bcff5806eff0" containerName="barbican-api-log" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836863 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="258e4d75-ecca-4001-9f56-aeb39557b326" containerName="neutron-api" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836870 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="9349a8ff-2652-4dcf-89d9-6d440269be8c" containerName="nova-metadata-metadata" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836879 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="216e612b-abc2-4d7c-8b10-28a595de5302" containerName="barbican-keystone-listener" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836887 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf226f75-106f-4f53-b33b-59f9ebbbefc3" containerName="registry-server" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836893 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="3de4a3bc-a01f-424a-8f17-60deaba1f189" containerName="nova-api-api" Jan 29 07:07:56 crc 
kubenswrapper[4826]: I0129 07:07:56.836899 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="216e612b-abc2-4d7c-8b10-28a595de5302" containerName="barbican-keystone-listener-log" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836905 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="34f59971-b32b-4b19-950c-77af3de22fd6" containerName="ceilometer-notification-agent" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836915 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="17dd6ec1-84fb-4bb3-8700-c8691f059937" containerName="openstack-network-exporter" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836921 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="9349a8ff-2652-4dcf-89d9-6d440269be8c" containerName="nova-metadata-log" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836927 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d13cc8c-363d-4dcb-af5f-92318cf72a81" containerName="ovsdb-server" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836935 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="c080b978-6895-4067-9dd5-2c23d4d68518" containerName="glance-log" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836943 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea529cf3-184e-446a-9c6a-759cf1bab14c" containerName="cinder-scheduler" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836949 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="34f59971-b32b-4b19-950c-77af3de22fd6" containerName="sg-core" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836959 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="object-updater" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836969 4826 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="account-reaper" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836977 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="426fe450-4a4d-4048-8ea4-422d39482ceb" containerName="barbican-worker" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836983 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea529cf3-184e-446a-9c6a-759cf1bab14c" containerName="probe" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836990 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="17dd6ec1-84fb-4bb3-8700-c8691f059937" containerName="ovn-northd" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.836996 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="af36d2e1-464b-4ada-9b91-2c18c52502d1" containerName="memcached" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.837001 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="rsync" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.837007 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="34f59971-b32b-4b19-950c-77af3de22fd6" containerName="proxy-httpd" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.837016 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="account-replicator" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.837023 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9016472-5ff0-4849-bc8a-c1d815d27931" containerName="keystone-api" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.837029 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="1beb9e09-4039-4ce6-a33f-0d34e10b1cfe" containerName="placement-log" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.837035 4826 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5378dab4-ad0c-4259-a7d2-d3f7e784a142" containerName="glance-httpd" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.837045 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="1794f620-102a-4b9c-9097-713579ec55ad" containerName="rabbitmq" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.837053 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="object-server" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.837061 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="object-replicator" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.837070 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="container-replicator" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.837076 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="5378dab4-ad0c-4259-a7d2-d3f7e784a142" containerName="glance-log" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.837083 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="container-updater" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.837090 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="object-auditor" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.837098 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="258e4d75-ecca-4001-9f56-aeb39557b326" containerName="neutron-httpd" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.837104 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="container-server" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.837113 4826 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="960b6ae0-2577-444e-bc2a-bea4ec2917f9" containerName="nova-scheduler-scheduler" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.837120 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d13cc8c-363d-4dcb-af5f-92318cf72a81" containerName="ovs-vswitchd" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.837127 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="85b51a36-8aa5-46e7-b8ab-a7e672c491d7" containerName="account-server" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.837135 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbbbec70-be7f-4a31-9f97-76d5c78b1cd0" containerName="ovn-controller" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.837144 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="34f59971-b32b-4b19-950c-77af3de22fd6" containerName="ceilometer-central-agent" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.838015 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vpc8j" Jan 29 07:07:56 crc kubenswrapper[4826]: I0129 07:07:56.845450 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vpc8j"] Jan 29 07:07:57 crc kubenswrapper[4826]: I0129 07:07:57.006152 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ec17a23-d3c2-49db-91de-e9385be0f426-catalog-content\") pod \"certified-operators-vpc8j\" (UID: \"0ec17a23-d3c2-49db-91de-e9385be0f426\") " pod="openshift-marketplace/certified-operators-vpc8j" Jan 29 07:07:57 crc kubenswrapper[4826]: I0129 07:07:57.006276 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ec17a23-d3c2-49db-91de-e9385be0f426-utilities\") pod \"certified-operators-vpc8j\" (UID: \"0ec17a23-d3c2-49db-91de-e9385be0f426\") " pod="openshift-marketplace/certified-operators-vpc8j" Jan 29 07:07:57 crc kubenswrapper[4826]: I0129 07:07:57.006343 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gln45\" (UniqueName: \"kubernetes.io/projected/0ec17a23-d3c2-49db-91de-e9385be0f426-kube-api-access-gln45\") pod \"certified-operators-vpc8j\" (UID: \"0ec17a23-d3c2-49db-91de-e9385be0f426\") " pod="openshift-marketplace/certified-operators-vpc8j" Jan 29 07:07:57 crc kubenswrapper[4826]: I0129 07:07:57.107570 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ec17a23-d3c2-49db-91de-e9385be0f426-catalog-content\") pod \"certified-operators-vpc8j\" (UID: \"0ec17a23-d3c2-49db-91de-e9385be0f426\") " pod="openshift-marketplace/certified-operators-vpc8j" Jan 29 07:07:57 crc kubenswrapper[4826]: I0129 07:07:57.107661 4826 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ec17a23-d3c2-49db-91de-e9385be0f426-utilities\") pod \"certified-operators-vpc8j\" (UID: \"0ec17a23-d3c2-49db-91de-e9385be0f426\") " pod="openshift-marketplace/certified-operators-vpc8j" Jan 29 07:07:57 crc kubenswrapper[4826]: I0129 07:07:57.107710 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gln45\" (UniqueName: \"kubernetes.io/projected/0ec17a23-d3c2-49db-91de-e9385be0f426-kube-api-access-gln45\") pod \"certified-operators-vpc8j\" (UID: \"0ec17a23-d3c2-49db-91de-e9385be0f426\") " pod="openshift-marketplace/certified-operators-vpc8j" Jan 29 07:07:57 crc kubenswrapper[4826]: I0129 07:07:57.108201 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ec17a23-d3c2-49db-91de-e9385be0f426-catalog-content\") pod \"certified-operators-vpc8j\" (UID: \"0ec17a23-d3c2-49db-91de-e9385be0f426\") " pod="openshift-marketplace/certified-operators-vpc8j" Jan 29 07:07:57 crc kubenswrapper[4826]: I0129 07:07:57.108268 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ec17a23-d3c2-49db-91de-e9385be0f426-utilities\") pod \"certified-operators-vpc8j\" (UID: \"0ec17a23-d3c2-49db-91de-e9385be0f426\") " pod="openshift-marketplace/certified-operators-vpc8j" Jan 29 07:07:57 crc kubenswrapper[4826]: I0129 07:07:57.147496 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gln45\" (UniqueName: \"kubernetes.io/projected/0ec17a23-d3c2-49db-91de-e9385be0f426-kube-api-access-gln45\") pod \"certified-operators-vpc8j\" (UID: \"0ec17a23-d3c2-49db-91de-e9385be0f426\") " pod="openshift-marketplace/certified-operators-vpc8j" Jan 29 07:07:57 crc kubenswrapper[4826]: I0129 07:07:57.197639 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vpc8j" Jan 29 07:07:57 crc kubenswrapper[4826]: I0129 07:07:57.468920 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vpc8j"] Jan 29 07:07:57 crc kubenswrapper[4826]: I0129 07:07:57.502536 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vpc8j" event={"ID":"0ec17a23-d3c2-49db-91de-e9385be0f426","Type":"ContainerStarted","Data":"186ea378c5b9b2b2517013ea50f52ab526ec41c298c97d6e8e68ea8c807cc50d"} Jan 29 07:07:58 crc kubenswrapper[4826]: I0129 07:07:58.517684 4826 generic.go:334] "Generic (PLEG): container finished" podID="0ec17a23-d3c2-49db-91de-e9385be0f426" containerID="8b40892f6b7c4035ebf907d486ffb19a4cb700d5217eb6ffa428cbf1349c5282" exitCode=0 Jan 29 07:07:58 crc kubenswrapper[4826]: I0129 07:07:58.518055 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vpc8j" event={"ID":"0ec17a23-d3c2-49db-91de-e9385be0f426","Type":"ContainerDied","Data":"8b40892f6b7c4035ebf907d486ffb19a4cb700d5217eb6ffa428cbf1349c5282"} Jan 29 07:07:58 crc kubenswrapper[4826]: I0129 07:07:58.762818 4826 scope.go:117] "RemoveContainer" containerID="9cb768d646eadba2cb5d45a808c8c8e8c1b8f03c854c19add211340ed6949a2d" Jan 29 07:07:58 crc kubenswrapper[4826]: I0129 07:07:58.801460 4826 scope.go:117] "RemoveContainer" containerID="41887c449ab701a5102ed17452b957ff0bc72de7e1198e0897cd0855d871173f" Jan 29 07:07:58 crc kubenswrapper[4826]: I0129 07:07:58.833975 4826 scope.go:117] "RemoveContainer" containerID="92577bcdcf244adebede7b28b6bb8f3affcb3adeed6e90572060e941115a1be5" Jan 29 07:07:58 crc kubenswrapper[4826]: I0129 07:07:58.862654 4826 scope.go:117] "RemoveContainer" containerID="1937919f8dd64b752c871037ec07858c20e0540d1d4d7464eab4f0a0259be556" Jan 29 07:07:59 crc kubenswrapper[4826]: I0129 07:07:59.534127 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-vpc8j" event={"ID":"0ec17a23-d3c2-49db-91de-e9385be0f426","Type":"ContainerStarted","Data":"2eb55a646e84762aae87fcaec1b25ea1871f22c16071745e31f2eedccce130c4"} Jan 29 07:08:00 crc kubenswrapper[4826]: I0129 07:08:00.554397 4826 generic.go:334] "Generic (PLEG): container finished" podID="0ec17a23-d3c2-49db-91de-e9385be0f426" containerID="2eb55a646e84762aae87fcaec1b25ea1871f22c16071745e31f2eedccce130c4" exitCode=0 Jan 29 07:08:00 crc kubenswrapper[4826]: I0129 07:08:00.554460 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vpc8j" event={"ID":"0ec17a23-d3c2-49db-91de-e9385be0f426","Type":"ContainerDied","Data":"2eb55a646e84762aae87fcaec1b25ea1871f22c16071745e31f2eedccce130c4"} Jan 29 07:08:01 crc kubenswrapper[4826]: I0129 07:08:01.568137 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vpc8j" event={"ID":"0ec17a23-d3c2-49db-91de-e9385be0f426","Type":"ContainerStarted","Data":"bae1ffccc4a4a23cd2b027f746c653f22bb9071c1f73f37dcbc64472921dd164"} Jan 29 07:08:01 crc kubenswrapper[4826]: I0129 07:08:01.663909 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vpc8j" podStartSLOduration=2.876559514 podStartE2EDuration="5.663882739s" podCreationTimestamp="2026-01-29 07:07:56 +0000 UTC" firstStartedPulling="2026-01-29 07:07:58.520126472 +0000 UTC m=+1462.381919581" lastFinishedPulling="2026-01-29 07:08:01.307449697 +0000 UTC m=+1465.169242806" observedRunningTime="2026-01-29 07:08:01.6149063 +0000 UTC m=+1465.476699379" watchObservedRunningTime="2026-01-29 07:08:01.663882739 +0000 UTC m=+1465.525675808" Jan 29 07:08:07 crc kubenswrapper[4826]: I0129 07:08:07.199916 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vpc8j" Jan 29 07:08:07 crc kubenswrapper[4826]: I0129 07:08:07.200792 
4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vpc8j" Jan 29 07:08:07 crc kubenswrapper[4826]: I0129 07:08:07.281165 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vpc8j" Jan 29 07:08:07 crc kubenswrapper[4826]: I0129 07:08:07.694082 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vpc8j" Jan 29 07:08:07 crc kubenswrapper[4826]: I0129 07:08:07.749756 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vpc8j"] Jan 29 07:08:09 crc kubenswrapper[4826]: I0129 07:08:09.646209 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vpc8j" podUID="0ec17a23-d3c2-49db-91de-e9385be0f426" containerName="registry-server" containerID="cri-o://bae1ffccc4a4a23cd2b027f746c653f22bb9071c1f73f37dcbc64472921dd164" gracePeriod=2 Jan 29 07:08:10 crc kubenswrapper[4826]: I0129 07:08:10.659871 4826 generic.go:334] "Generic (PLEG): container finished" podID="0ec17a23-d3c2-49db-91de-e9385be0f426" containerID="bae1ffccc4a4a23cd2b027f746c653f22bb9071c1f73f37dcbc64472921dd164" exitCode=0 Jan 29 07:08:10 crc kubenswrapper[4826]: I0129 07:08:10.659927 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vpc8j" event={"ID":"0ec17a23-d3c2-49db-91de-e9385be0f426","Type":"ContainerDied","Data":"bae1ffccc4a4a23cd2b027f746c653f22bb9071c1f73f37dcbc64472921dd164"} Jan 29 07:08:10 crc kubenswrapper[4826]: I0129 07:08:10.831349 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vpc8j" Jan 29 07:08:10 crc kubenswrapper[4826]: I0129 07:08:10.870564 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ec17a23-d3c2-49db-91de-e9385be0f426-catalog-content\") pod \"0ec17a23-d3c2-49db-91de-e9385be0f426\" (UID: \"0ec17a23-d3c2-49db-91de-e9385be0f426\") " Jan 29 07:08:10 crc kubenswrapper[4826]: I0129 07:08:10.870698 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gln45\" (UniqueName: \"kubernetes.io/projected/0ec17a23-d3c2-49db-91de-e9385be0f426-kube-api-access-gln45\") pod \"0ec17a23-d3c2-49db-91de-e9385be0f426\" (UID: \"0ec17a23-d3c2-49db-91de-e9385be0f426\") " Jan 29 07:08:10 crc kubenswrapper[4826]: I0129 07:08:10.870777 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ec17a23-d3c2-49db-91de-e9385be0f426-utilities\") pod \"0ec17a23-d3c2-49db-91de-e9385be0f426\" (UID: \"0ec17a23-d3c2-49db-91de-e9385be0f426\") " Jan 29 07:08:10 crc kubenswrapper[4826]: I0129 07:08:10.872214 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ec17a23-d3c2-49db-91de-e9385be0f426-utilities" (OuterVolumeSpecName: "utilities") pod "0ec17a23-d3c2-49db-91de-e9385be0f426" (UID: "0ec17a23-d3c2-49db-91de-e9385be0f426"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:08:10 crc kubenswrapper[4826]: I0129 07:08:10.888229 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ec17a23-d3c2-49db-91de-e9385be0f426-kube-api-access-gln45" (OuterVolumeSpecName: "kube-api-access-gln45") pod "0ec17a23-d3c2-49db-91de-e9385be0f426" (UID: "0ec17a23-d3c2-49db-91de-e9385be0f426"). InnerVolumeSpecName "kube-api-access-gln45". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:08:10 crc kubenswrapper[4826]: I0129 07:08:10.976984 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gln45\" (UniqueName: \"kubernetes.io/projected/0ec17a23-d3c2-49db-91de-e9385be0f426-kube-api-access-gln45\") on node \"crc\" DevicePath \"\"" Jan 29 07:08:10 crc kubenswrapper[4826]: I0129 07:08:10.977034 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ec17a23-d3c2-49db-91de-e9385be0f426-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 07:08:11 crc kubenswrapper[4826]: I0129 07:08:11.672552 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vpc8j" event={"ID":"0ec17a23-d3c2-49db-91de-e9385be0f426","Type":"ContainerDied","Data":"186ea378c5b9b2b2517013ea50f52ab526ec41c298c97d6e8e68ea8c807cc50d"} Jan 29 07:08:11 crc kubenswrapper[4826]: I0129 07:08:11.672616 4826 scope.go:117] "RemoveContainer" containerID="bae1ffccc4a4a23cd2b027f746c653f22bb9071c1f73f37dcbc64472921dd164" Jan 29 07:08:11 crc kubenswrapper[4826]: I0129 07:08:11.672636 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vpc8j" Jan 29 07:08:11 crc kubenswrapper[4826]: I0129 07:08:11.697969 4826 scope.go:117] "RemoveContainer" containerID="2eb55a646e84762aae87fcaec1b25ea1871f22c16071745e31f2eedccce130c4" Jan 29 07:08:11 crc kubenswrapper[4826]: I0129 07:08:11.726403 4826 scope.go:117] "RemoveContainer" containerID="8b40892f6b7c4035ebf907d486ffb19a4cb700d5217eb6ffa428cbf1349c5282" Jan 29 07:08:15 crc kubenswrapper[4826]: I0129 07:08:15.990279 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ec17a23-d3c2-49db-91de-e9385be0f426-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0ec17a23-d3c2-49db-91de-e9385be0f426" (UID: "0ec17a23-d3c2-49db-91de-e9385be0f426"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:08:16 crc kubenswrapper[4826]: I0129 07:08:16.060053 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ec17a23-d3c2-49db-91de-e9385be0f426-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 07:08:16 crc kubenswrapper[4826]: I0129 07:08:16.222101 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vpc8j"] Jan 29 07:08:16 crc kubenswrapper[4826]: I0129 07:08:16.234845 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vpc8j"] Jan 29 07:08:16 crc kubenswrapper[4826]: I0129 07:08:16.818440 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ec17a23-d3c2-49db-91de-e9385be0f426" path="/var/lib/kubelet/pods/0ec17a23-d3c2-49db-91de-e9385be0f426/volumes" Jan 29 07:08:33 crc kubenswrapper[4826]: I0129 07:08:33.737097 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cmfn6"] Jan 29 07:08:33 crc kubenswrapper[4826]: E0129 07:08:33.738190 4826 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec17a23-d3c2-49db-91de-e9385be0f426" containerName="extract-content" Jan 29 07:08:33 crc kubenswrapper[4826]: I0129 07:08:33.738206 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec17a23-d3c2-49db-91de-e9385be0f426" containerName="extract-content" Jan 29 07:08:33 crc kubenswrapper[4826]: E0129 07:08:33.738252 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec17a23-d3c2-49db-91de-e9385be0f426" containerName="registry-server" Jan 29 07:08:33 crc kubenswrapper[4826]: I0129 07:08:33.738262 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec17a23-d3c2-49db-91de-e9385be0f426" containerName="registry-server" Jan 29 07:08:33 crc kubenswrapper[4826]: E0129 07:08:33.738282 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec17a23-d3c2-49db-91de-e9385be0f426" containerName="extract-utilities" Jan 29 07:08:33 crc kubenswrapper[4826]: I0129 07:08:33.738291 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec17a23-d3c2-49db-91de-e9385be0f426" containerName="extract-utilities" Jan 29 07:08:33 crc kubenswrapper[4826]: I0129 07:08:33.738529 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ec17a23-d3c2-49db-91de-e9385be0f426" containerName="registry-server" Jan 29 07:08:33 crc kubenswrapper[4826]: I0129 07:08:33.746893 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cmfn6" Jan 29 07:08:33 crc kubenswrapper[4826]: I0129 07:08:33.753454 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cmfn6"] Jan 29 07:08:33 crc kubenswrapper[4826]: I0129 07:08:33.860199 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkzrw\" (UniqueName: \"kubernetes.io/projected/a535eef6-d4a3-4b2c-ac45-8fd536c8c975-kube-api-access-xkzrw\") pod \"redhat-marketplace-cmfn6\" (UID: \"a535eef6-d4a3-4b2c-ac45-8fd536c8c975\") " pod="openshift-marketplace/redhat-marketplace-cmfn6" Jan 29 07:08:33 crc kubenswrapper[4826]: I0129 07:08:33.860672 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a535eef6-d4a3-4b2c-ac45-8fd536c8c975-catalog-content\") pod \"redhat-marketplace-cmfn6\" (UID: \"a535eef6-d4a3-4b2c-ac45-8fd536c8c975\") " pod="openshift-marketplace/redhat-marketplace-cmfn6" Jan 29 07:08:33 crc kubenswrapper[4826]: I0129 07:08:33.860737 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a535eef6-d4a3-4b2c-ac45-8fd536c8c975-utilities\") pod \"redhat-marketplace-cmfn6\" (UID: \"a535eef6-d4a3-4b2c-ac45-8fd536c8c975\") " pod="openshift-marketplace/redhat-marketplace-cmfn6" Jan 29 07:08:33 crc kubenswrapper[4826]: I0129 07:08:33.962349 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a535eef6-d4a3-4b2c-ac45-8fd536c8c975-utilities\") pod \"redhat-marketplace-cmfn6\" (UID: \"a535eef6-d4a3-4b2c-ac45-8fd536c8c975\") " pod="openshift-marketplace/redhat-marketplace-cmfn6" Jan 29 07:08:33 crc kubenswrapper[4826]: I0129 07:08:33.962420 4826 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-xkzrw\" (UniqueName: \"kubernetes.io/projected/a535eef6-d4a3-4b2c-ac45-8fd536c8c975-kube-api-access-xkzrw\") pod \"redhat-marketplace-cmfn6\" (UID: \"a535eef6-d4a3-4b2c-ac45-8fd536c8c975\") " pod="openshift-marketplace/redhat-marketplace-cmfn6" Jan 29 07:08:33 crc kubenswrapper[4826]: I0129 07:08:33.962658 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a535eef6-d4a3-4b2c-ac45-8fd536c8c975-catalog-content\") pod \"redhat-marketplace-cmfn6\" (UID: \"a535eef6-d4a3-4b2c-ac45-8fd536c8c975\") " pod="openshift-marketplace/redhat-marketplace-cmfn6" Jan 29 07:08:33 crc kubenswrapper[4826]: I0129 07:08:33.963007 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a535eef6-d4a3-4b2c-ac45-8fd536c8c975-utilities\") pod \"redhat-marketplace-cmfn6\" (UID: \"a535eef6-d4a3-4b2c-ac45-8fd536c8c975\") " pod="openshift-marketplace/redhat-marketplace-cmfn6" Jan 29 07:08:33 crc kubenswrapper[4826]: I0129 07:08:33.963064 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a535eef6-d4a3-4b2c-ac45-8fd536c8c975-catalog-content\") pod \"redhat-marketplace-cmfn6\" (UID: \"a535eef6-d4a3-4b2c-ac45-8fd536c8c975\") " pod="openshift-marketplace/redhat-marketplace-cmfn6" Jan 29 07:08:33 crc kubenswrapper[4826]: I0129 07:08:33.996149 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkzrw\" (UniqueName: \"kubernetes.io/projected/a535eef6-d4a3-4b2c-ac45-8fd536c8c975-kube-api-access-xkzrw\") pod \"redhat-marketplace-cmfn6\" (UID: \"a535eef6-d4a3-4b2c-ac45-8fd536c8c975\") " pod="openshift-marketplace/redhat-marketplace-cmfn6" Jan 29 07:08:34 crc kubenswrapper[4826]: I0129 07:08:34.077699 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cmfn6" Jan 29 07:08:34 crc kubenswrapper[4826]: I0129 07:08:34.538511 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cmfn6"] Jan 29 07:08:34 crc kubenswrapper[4826]: W0129 07:08:34.547512 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda535eef6_d4a3_4b2c_ac45_8fd536c8c975.slice/crio-e86ec616433a8d24b74b884cf93405cd237ab43f95377cde90bbec334aad67a2 WatchSource:0}: Error finding container e86ec616433a8d24b74b884cf93405cd237ab43f95377cde90bbec334aad67a2: Status 404 returned error can't find the container with id e86ec616433a8d24b74b884cf93405cd237ab43f95377cde90bbec334aad67a2 Jan 29 07:08:34 crc kubenswrapper[4826]: I0129 07:08:34.922459 4826 generic.go:334] "Generic (PLEG): container finished" podID="a535eef6-d4a3-4b2c-ac45-8fd536c8c975" containerID="92831ba7fdb663675b71fad975961ea6cac4b693854cdac7641cb791d374449d" exitCode=0 Jan 29 07:08:34 crc kubenswrapper[4826]: I0129 07:08:34.922588 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cmfn6" event={"ID":"a535eef6-d4a3-4b2c-ac45-8fd536c8c975","Type":"ContainerDied","Data":"92831ba7fdb663675b71fad975961ea6cac4b693854cdac7641cb791d374449d"} Jan 29 07:08:34 crc kubenswrapper[4826]: I0129 07:08:34.922826 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cmfn6" event={"ID":"a535eef6-d4a3-4b2c-ac45-8fd536c8c975","Type":"ContainerStarted","Data":"e86ec616433a8d24b74b884cf93405cd237ab43f95377cde90bbec334aad67a2"} Jan 29 07:08:36 crc kubenswrapper[4826]: I0129 07:08:36.946981 4826 generic.go:334] "Generic (PLEG): container finished" podID="a535eef6-d4a3-4b2c-ac45-8fd536c8c975" containerID="b843ee42251262c00c568f7b6994feceac0cdc118367e06f6b41ed426a917840" exitCode=0 Jan 29 07:08:36 crc kubenswrapper[4826]: I0129 
07:08:36.947062 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cmfn6" event={"ID":"a535eef6-d4a3-4b2c-ac45-8fd536c8c975","Type":"ContainerDied","Data":"b843ee42251262c00c568f7b6994feceac0cdc118367e06f6b41ed426a917840"} Jan 29 07:08:37 crc kubenswrapper[4826]: I0129 07:08:37.959663 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cmfn6" event={"ID":"a535eef6-d4a3-4b2c-ac45-8fd536c8c975","Type":"ContainerStarted","Data":"66168430ca76e8bf3155b514603110db0c017ec5e92b9cc1059db06d35d4ed65"} Jan 29 07:08:37 crc kubenswrapper[4826]: I0129 07:08:37.978542 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cmfn6" podStartSLOduration=2.54337984 podStartE2EDuration="4.978516422s" podCreationTimestamp="2026-01-29 07:08:33 +0000 UTC" firstStartedPulling="2026-01-29 07:08:34.925363779 +0000 UTC m=+1498.787156878" lastFinishedPulling="2026-01-29 07:08:37.360500351 +0000 UTC m=+1501.222293460" observedRunningTime="2026-01-29 07:08:37.977733462 +0000 UTC m=+1501.839526531" watchObservedRunningTime="2026-01-29 07:08:37.978516422 +0000 UTC m=+1501.840309491" Jan 29 07:08:44 crc kubenswrapper[4826]: I0129 07:08:44.078897 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cmfn6" Jan 29 07:08:44 crc kubenswrapper[4826]: I0129 07:08:44.079485 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cmfn6" Jan 29 07:08:44 crc kubenswrapper[4826]: I0129 07:08:44.138664 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cmfn6" Jan 29 07:08:45 crc kubenswrapper[4826]: I0129 07:08:45.078204 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cmfn6" Jan 29 
07:08:45 crc kubenswrapper[4826]: I0129 07:08:45.129759 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cmfn6"] Jan 29 07:08:47 crc kubenswrapper[4826]: I0129 07:08:47.064356 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cmfn6" podUID="a535eef6-d4a3-4b2c-ac45-8fd536c8c975" containerName="registry-server" containerID="cri-o://66168430ca76e8bf3155b514603110db0c017ec5e92b9cc1059db06d35d4ed65" gracePeriod=2 Jan 29 07:08:48 crc kubenswrapper[4826]: I0129 07:08:48.080692 4826 generic.go:334] "Generic (PLEG): container finished" podID="a535eef6-d4a3-4b2c-ac45-8fd536c8c975" containerID="66168430ca76e8bf3155b514603110db0c017ec5e92b9cc1059db06d35d4ed65" exitCode=0 Jan 29 07:08:48 crc kubenswrapper[4826]: I0129 07:08:48.080770 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cmfn6" event={"ID":"a535eef6-d4a3-4b2c-ac45-8fd536c8c975","Type":"ContainerDied","Data":"66168430ca76e8bf3155b514603110db0c017ec5e92b9cc1059db06d35d4ed65"} Jan 29 07:08:48 crc kubenswrapper[4826]: I0129 07:08:48.404783 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cmfn6" Jan 29 07:08:48 crc kubenswrapper[4826]: I0129 07:08:48.483662 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkzrw\" (UniqueName: \"kubernetes.io/projected/a535eef6-d4a3-4b2c-ac45-8fd536c8c975-kube-api-access-xkzrw\") pod \"a535eef6-d4a3-4b2c-ac45-8fd536c8c975\" (UID: \"a535eef6-d4a3-4b2c-ac45-8fd536c8c975\") " Jan 29 07:08:48 crc kubenswrapper[4826]: I0129 07:08:48.483821 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a535eef6-d4a3-4b2c-ac45-8fd536c8c975-utilities\") pod \"a535eef6-d4a3-4b2c-ac45-8fd536c8c975\" (UID: \"a535eef6-d4a3-4b2c-ac45-8fd536c8c975\") " Jan 29 07:08:48 crc kubenswrapper[4826]: I0129 07:08:48.483849 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a535eef6-d4a3-4b2c-ac45-8fd536c8c975-catalog-content\") pod \"a535eef6-d4a3-4b2c-ac45-8fd536c8c975\" (UID: \"a535eef6-d4a3-4b2c-ac45-8fd536c8c975\") " Jan 29 07:08:48 crc kubenswrapper[4826]: I0129 07:08:48.488771 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a535eef6-d4a3-4b2c-ac45-8fd536c8c975-utilities" (OuterVolumeSpecName: "utilities") pod "a535eef6-d4a3-4b2c-ac45-8fd536c8c975" (UID: "a535eef6-d4a3-4b2c-ac45-8fd536c8c975"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:08:48 crc kubenswrapper[4826]: I0129 07:08:48.488796 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a535eef6-d4a3-4b2c-ac45-8fd536c8c975-kube-api-access-xkzrw" (OuterVolumeSpecName: "kube-api-access-xkzrw") pod "a535eef6-d4a3-4b2c-ac45-8fd536c8c975" (UID: "a535eef6-d4a3-4b2c-ac45-8fd536c8c975"). InnerVolumeSpecName "kube-api-access-xkzrw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:08:48 crc kubenswrapper[4826]: I0129 07:08:48.538756 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a535eef6-d4a3-4b2c-ac45-8fd536c8c975-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a535eef6-d4a3-4b2c-ac45-8fd536c8c975" (UID: "a535eef6-d4a3-4b2c-ac45-8fd536c8c975"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:08:48 crc kubenswrapper[4826]: I0129 07:08:48.585260 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a535eef6-d4a3-4b2c-ac45-8fd536c8c975-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 07:08:48 crc kubenswrapper[4826]: I0129 07:08:48.585331 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a535eef6-d4a3-4b2c-ac45-8fd536c8c975-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 07:08:48 crc kubenswrapper[4826]: I0129 07:08:48.585347 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkzrw\" (UniqueName: \"kubernetes.io/projected/a535eef6-d4a3-4b2c-ac45-8fd536c8c975-kube-api-access-xkzrw\") on node \"crc\" DevicePath \"\"" Jan 29 07:08:49 crc kubenswrapper[4826]: I0129 07:08:49.091174 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cmfn6" event={"ID":"a535eef6-d4a3-4b2c-ac45-8fd536c8c975","Type":"ContainerDied","Data":"e86ec616433a8d24b74b884cf93405cd237ab43f95377cde90bbec334aad67a2"} Jan 29 07:08:49 crc kubenswrapper[4826]: I0129 07:08:49.091228 4826 scope.go:117] "RemoveContainer" containerID="66168430ca76e8bf3155b514603110db0c017ec5e92b9cc1059db06d35d4ed65" Jan 29 07:08:49 crc kubenswrapper[4826]: I0129 07:08:49.091251 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cmfn6" Jan 29 07:08:49 crc kubenswrapper[4826]: I0129 07:08:49.121419 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cmfn6"] Jan 29 07:08:49 crc kubenswrapper[4826]: I0129 07:08:49.129401 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cmfn6"] Jan 29 07:08:49 crc kubenswrapper[4826]: I0129 07:08:49.134561 4826 scope.go:117] "RemoveContainer" containerID="b843ee42251262c00c568f7b6994feceac0cdc118367e06f6b41ed426a917840" Jan 29 07:08:49 crc kubenswrapper[4826]: I0129 07:08:49.160508 4826 scope.go:117] "RemoveContainer" containerID="92831ba7fdb663675b71fad975961ea6cac4b693854cdac7641cb791d374449d" Jan 29 07:08:50 crc kubenswrapper[4826]: I0129 07:08:50.824609 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a535eef6-d4a3-4b2c-ac45-8fd536c8c975" path="/var/lib/kubelet/pods/a535eef6-d4a3-4b2c-ac45-8fd536c8c975/volumes" Jan 29 07:08:50 crc kubenswrapper[4826]: I0129 07:08:50.943609 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rj7jf"] Jan 29 07:08:50 crc kubenswrapper[4826]: E0129 07:08:50.943935 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a535eef6-d4a3-4b2c-ac45-8fd536c8c975" containerName="extract-content" Jan 29 07:08:50 crc kubenswrapper[4826]: I0129 07:08:50.943955 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="a535eef6-d4a3-4b2c-ac45-8fd536c8c975" containerName="extract-content" Jan 29 07:08:50 crc kubenswrapper[4826]: E0129 07:08:50.943974 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a535eef6-d4a3-4b2c-ac45-8fd536c8c975" containerName="registry-server" Jan 29 07:08:50 crc kubenswrapper[4826]: I0129 07:08:50.943983 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="a535eef6-d4a3-4b2c-ac45-8fd536c8c975" containerName="registry-server" Jan 
29 07:08:50 crc kubenswrapper[4826]: E0129 07:08:50.944006 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a535eef6-d4a3-4b2c-ac45-8fd536c8c975" containerName="extract-utilities" Jan 29 07:08:50 crc kubenswrapper[4826]: I0129 07:08:50.944015 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="a535eef6-d4a3-4b2c-ac45-8fd536c8c975" containerName="extract-utilities" Jan 29 07:08:50 crc kubenswrapper[4826]: I0129 07:08:50.944175 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="a535eef6-d4a3-4b2c-ac45-8fd536c8c975" containerName="registry-server" Jan 29 07:08:50 crc kubenswrapper[4826]: I0129 07:08:50.945432 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rj7jf" Jan 29 07:08:50 crc kubenswrapper[4826]: I0129 07:08:50.970782 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rj7jf"] Jan 29 07:08:51 crc kubenswrapper[4826]: I0129 07:08:51.027480 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbe9c47a-87e8-488b-862f-70d2d6837701-utilities\") pod \"community-operators-rj7jf\" (UID: \"bbe9c47a-87e8-488b-862f-70d2d6837701\") " pod="openshift-marketplace/community-operators-rj7jf" Jan 29 07:08:51 crc kubenswrapper[4826]: I0129 07:08:51.027563 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxtw7\" (UniqueName: \"kubernetes.io/projected/bbe9c47a-87e8-488b-862f-70d2d6837701-kube-api-access-xxtw7\") pod \"community-operators-rj7jf\" (UID: \"bbe9c47a-87e8-488b-862f-70d2d6837701\") " pod="openshift-marketplace/community-operators-rj7jf" Jan 29 07:08:51 crc kubenswrapper[4826]: I0129 07:08:51.027598 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/bbe9c47a-87e8-488b-862f-70d2d6837701-catalog-content\") pod \"community-operators-rj7jf\" (UID: \"bbe9c47a-87e8-488b-862f-70d2d6837701\") " pod="openshift-marketplace/community-operators-rj7jf" Jan 29 07:08:51 crc kubenswrapper[4826]: I0129 07:08:51.129348 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbe9c47a-87e8-488b-862f-70d2d6837701-utilities\") pod \"community-operators-rj7jf\" (UID: \"bbe9c47a-87e8-488b-862f-70d2d6837701\") " pod="openshift-marketplace/community-operators-rj7jf" Jan 29 07:08:51 crc kubenswrapper[4826]: I0129 07:08:51.129529 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxtw7\" (UniqueName: \"kubernetes.io/projected/bbe9c47a-87e8-488b-862f-70d2d6837701-kube-api-access-xxtw7\") pod \"community-operators-rj7jf\" (UID: \"bbe9c47a-87e8-488b-862f-70d2d6837701\") " pod="openshift-marketplace/community-operators-rj7jf" Jan 29 07:08:51 crc kubenswrapper[4826]: I0129 07:08:51.129629 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbe9c47a-87e8-488b-862f-70d2d6837701-catalog-content\") pod \"community-operators-rj7jf\" (UID: \"bbe9c47a-87e8-488b-862f-70d2d6837701\") " pod="openshift-marketplace/community-operators-rj7jf" Jan 29 07:08:51 crc kubenswrapper[4826]: I0129 07:08:51.130217 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbe9c47a-87e8-488b-862f-70d2d6837701-utilities\") pod \"community-operators-rj7jf\" (UID: \"bbe9c47a-87e8-488b-862f-70d2d6837701\") " pod="openshift-marketplace/community-operators-rj7jf" Jan 29 07:08:51 crc kubenswrapper[4826]: I0129 07:08:51.130242 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/bbe9c47a-87e8-488b-862f-70d2d6837701-catalog-content\") pod \"community-operators-rj7jf\" (UID: \"bbe9c47a-87e8-488b-862f-70d2d6837701\") " pod="openshift-marketplace/community-operators-rj7jf" Jan 29 07:08:51 crc kubenswrapper[4826]: I0129 07:08:51.145529 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxtw7\" (UniqueName: \"kubernetes.io/projected/bbe9c47a-87e8-488b-862f-70d2d6837701-kube-api-access-xxtw7\") pod \"community-operators-rj7jf\" (UID: \"bbe9c47a-87e8-488b-862f-70d2d6837701\") " pod="openshift-marketplace/community-operators-rj7jf" Jan 29 07:08:51 crc kubenswrapper[4826]: I0129 07:08:51.285794 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rj7jf" Jan 29 07:08:51 crc kubenswrapper[4826]: I0129 07:08:51.793256 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rj7jf"] Jan 29 07:08:52 crc kubenswrapper[4826]: I0129 07:08:52.120466 4826 generic.go:334] "Generic (PLEG): container finished" podID="bbe9c47a-87e8-488b-862f-70d2d6837701" containerID="006e3a0686e5939e8ce97742e166594446e3c3369913d7feb89e400cc37b2a0b" exitCode=0 Jan 29 07:08:52 crc kubenswrapper[4826]: I0129 07:08:52.120557 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rj7jf" event={"ID":"bbe9c47a-87e8-488b-862f-70d2d6837701","Type":"ContainerDied","Data":"006e3a0686e5939e8ce97742e166594446e3c3369913d7feb89e400cc37b2a0b"} Jan 29 07:08:52 crc kubenswrapper[4826]: I0129 07:08:52.120971 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rj7jf" event={"ID":"bbe9c47a-87e8-488b-862f-70d2d6837701","Type":"ContainerStarted","Data":"19b0218e8c5cccd8bca8c77653969a6175101b588dad40529d450409394deb00"} Jan 29 07:08:53 crc kubenswrapper[4826]: I0129 07:08:53.135521 4826 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-rj7jf" event={"ID":"bbe9c47a-87e8-488b-862f-70d2d6837701","Type":"ContainerStarted","Data":"5a70785f2f66a61e0ce2790ffba1017c6dcdce0eb7e0d7531d1d2a3807cbe32f"} Jan 29 07:08:54 crc kubenswrapper[4826]: I0129 07:08:54.148833 4826 generic.go:334] "Generic (PLEG): container finished" podID="bbe9c47a-87e8-488b-862f-70d2d6837701" containerID="5a70785f2f66a61e0ce2790ffba1017c6dcdce0eb7e0d7531d1d2a3807cbe32f" exitCode=0 Jan 29 07:08:54 crc kubenswrapper[4826]: I0129 07:08:54.149292 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rj7jf" event={"ID":"bbe9c47a-87e8-488b-862f-70d2d6837701","Type":"ContainerDied","Data":"5a70785f2f66a61e0ce2790ffba1017c6dcdce0eb7e0d7531d1d2a3807cbe32f"} Jan 29 07:08:55 crc kubenswrapper[4826]: I0129 07:08:55.163629 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rj7jf" event={"ID":"bbe9c47a-87e8-488b-862f-70d2d6837701","Type":"ContainerStarted","Data":"600c3454c484cf202be6be0224fc5263078829dbe4ec24395b39276ea0091f6c"} Jan 29 07:08:55 crc kubenswrapper[4826]: I0129 07:08:55.195246 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rj7jf" podStartSLOduration=2.7156205939999998 podStartE2EDuration="5.195229395s" podCreationTimestamp="2026-01-29 07:08:50 +0000 UTC" firstStartedPulling="2026-01-29 07:08:52.122613389 +0000 UTC m=+1515.984406468" lastFinishedPulling="2026-01-29 07:08:54.60222216 +0000 UTC m=+1518.464015269" observedRunningTime="2026-01-29 07:08:55.185841333 +0000 UTC m=+1519.047634412" watchObservedRunningTime="2026-01-29 07:08:55.195229395 +0000 UTC m=+1519.057022474" Jan 29 07:08:59 crc kubenswrapper[4826]: I0129 07:08:59.037276 4826 scope.go:117] "RemoveContainer" containerID="bf86fb4ad75b42745b9024c2a242dcc8628452687b71ca3b6937b01bb71646c9" Jan 29 07:08:59 crc kubenswrapper[4826]: I0129 07:08:59.069092 
4826 scope.go:117] "RemoveContainer" containerID="f36d662feb9b70757b17b28dd352459437ead99aa4d2976e18151a94e375a7af" Jan 29 07:08:59 crc kubenswrapper[4826]: I0129 07:08:59.117245 4826 scope.go:117] "RemoveContainer" containerID="3951b0758c239a5b0edda6fe5d77c334c3b619b6fe73c93f7cc738f9168885ad" Jan 29 07:08:59 crc kubenswrapper[4826]: I0129 07:08:59.152344 4826 scope.go:117] "RemoveContainer" containerID="d2b2d9c846c9f62ce0471ed8392344e2eed633a6d6946652cf52b4e532c79a51" Jan 29 07:08:59 crc kubenswrapper[4826]: I0129 07:08:59.185786 4826 scope.go:117] "RemoveContainer" containerID="ee90b5e3403529b38e18ca6e7743dca447faa8f7d746c8a4314b599953d2bdc2" Jan 29 07:08:59 crc kubenswrapper[4826]: I0129 07:08:59.245063 4826 scope.go:117] "RemoveContainer" containerID="8a1a13c0fa6aa99b5ae28c30e096d59872f199829e1f0dd92715f6a3e51161b6" Jan 29 07:08:59 crc kubenswrapper[4826]: I0129 07:08:59.279210 4826 scope.go:117] "RemoveContainer" containerID="413d8ba5077330d1cb894502ce4b2de16c2c1508b66a60974ea40c424b79ef12" Jan 29 07:08:59 crc kubenswrapper[4826]: I0129 07:08:59.317609 4826 scope.go:117] "RemoveContainer" containerID="b8c90ffd14ba63fdc1141218b4b95071de44fe71869b70a72ff68b6e207fdeab" Jan 29 07:08:59 crc kubenswrapper[4826]: I0129 07:08:59.342483 4826 scope.go:117] "RemoveContainer" containerID="cb11fed4a94e5e8a15ea4adda925e0e38ecf713101a66a332eeccfc99758d0e6" Jan 29 07:08:59 crc kubenswrapper[4826]: I0129 07:08:59.371361 4826 scope.go:117] "RemoveContainer" containerID="3b16dc91c4cd7ce31b4ff6c7a4831ff1ec0c0e0999a5d166cdaf61d18f96c236" Jan 29 07:08:59 crc kubenswrapper[4826]: I0129 07:08:59.400359 4826 scope.go:117] "RemoveContainer" containerID="cf2944944f9f21b3ba12ebc08b9e24adaea20d8108a210e674d22e24563e0980" Jan 29 07:08:59 crc kubenswrapper[4826]: I0129 07:08:59.439637 4826 scope.go:117] "RemoveContainer" containerID="727b87c675e49b86f65ef5c3d398c7e86e56f8ef827c047c69e48782eb0c3773" Jan 29 07:08:59 crc kubenswrapper[4826]: I0129 07:08:59.473889 4826 scope.go:117] 
"RemoveContainer" containerID="ff2e9d1be46f0b2258c946e148a06f55582dc3fdd26410fbc6021f87bfbe528f" Jan 29 07:08:59 crc kubenswrapper[4826]: I0129 07:08:59.491457 4826 scope.go:117] "RemoveContainer" containerID="ee88680810e029e7435b67f77ba4c16aee4d06f22ee550823bd4f344ca8762f9" Jan 29 07:08:59 crc kubenswrapper[4826]: I0129 07:08:59.515743 4826 scope.go:117] "RemoveContainer" containerID="137afef9d47f0a6a5614d4f2d7c8e1432be4bfbe5c3da69b6db4f80ef575253a" Jan 29 07:08:59 crc kubenswrapper[4826]: I0129 07:08:59.535865 4826 scope.go:117] "RemoveContainer" containerID="e1e42bde75bc8599c637749a61aa4b362868edadf725f9ac179d9cfb49bf7db4" Jan 29 07:08:59 crc kubenswrapper[4826]: I0129 07:08:59.566015 4826 scope.go:117] "RemoveContainer" containerID="a480955a0615881272dba83f8f5857808fefb8de24dbded4f680c4605d685e59" Jan 29 07:08:59 crc kubenswrapper[4826]: I0129 07:08:59.598728 4826 scope.go:117] "RemoveContainer" containerID="e5181413a8e5faf280cefd33bbfd96ed69dcab6f05230cff35710cfaa85a847f" Jan 29 07:08:59 crc kubenswrapper[4826]: I0129 07:08:59.625890 4826 scope.go:117] "RemoveContainer" containerID="d549f4b596986e823d47d770b85f06655aed211e20d2127f7baf8c70f10f5971" Jan 29 07:08:59 crc kubenswrapper[4826]: I0129 07:08:59.648815 4826 scope.go:117] "RemoveContainer" containerID="c064964aba665f0e4b9b54a6929bd5f3e07cb3ed36b2f733b8bc30cf4991d621" Jan 29 07:08:59 crc kubenswrapper[4826]: I0129 07:08:59.666091 4826 scope.go:117] "RemoveContainer" containerID="845f1e3e68f92d66cad36f3caa9cbd051acb175e0581aa74b30ede6cbc6d5c03" Jan 29 07:09:01 crc kubenswrapper[4826]: I0129 07:09:01.286738 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rj7jf" Jan 29 07:09:01 crc kubenswrapper[4826]: I0129 07:09:01.286908 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rj7jf" Jan 29 07:09:01 crc kubenswrapper[4826]: I0129 07:09:01.360423 4826 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rj7jf" Jan 29 07:09:02 crc kubenswrapper[4826]: I0129 07:09:02.297924 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rj7jf" Jan 29 07:09:02 crc kubenswrapper[4826]: I0129 07:09:02.349239 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rj7jf"] Jan 29 07:09:04 crc kubenswrapper[4826]: I0129 07:09:04.270447 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rj7jf" podUID="bbe9c47a-87e8-488b-862f-70d2d6837701" containerName="registry-server" containerID="cri-o://600c3454c484cf202be6be0224fc5263078829dbe4ec24395b39276ea0091f6c" gracePeriod=2 Jan 29 07:09:04 crc kubenswrapper[4826]: I0129 07:09:04.751325 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rj7jf" Jan 29 07:09:04 crc kubenswrapper[4826]: I0129 07:09:04.930019 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxtw7\" (UniqueName: \"kubernetes.io/projected/bbe9c47a-87e8-488b-862f-70d2d6837701-kube-api-access-xxtw7\") pod \"bbe9c47a-87e8-488b-862f-70d2d6837701\" (UID: \"bbe9c47a-87e8-488b-862f-70d2d6837701\") " Jan 29 07:09:04 crc kubenswrapper[4826]: I0129 07:09:04.930174 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbe9c47a-87e8-488b-862f-70d2d6837701-catalog-content\") pod \"bbe9c47a-87e8-488b-862f-70d2d6837701\" (UID: \"bbe9c47a-87e8-488b-862f-70d2d6837701\") " Jan 29 07:09:04 crc kubenswrapper[4826]: I0129 07:09:04.930204 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/bbe9c47a-87e8-488b-862f-70d2d6837701-utilities\") pod \"bbe9c47a-87e8-488b-862f-70d2d6837701\" (UID: \"bbe9c47a-87e8-488b-862f-70d2d6837701\") " Jan 29 07:09:04 crc kubenswrapper[4826]: I0129 07:09:04.931620 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbe9c47a-87e8-488b-862f-70d2d6837701-utilities" (OuterVolumeSpecName: "utilities") pod "bbe9c47a-87e8-488b-862f-70d2d6837701" (UID: "bbe9c47a-87e8-488b-862f-70d2d6837701"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:09:04 crc kubenswrapper[4826]: I0129 07:09:04.939686 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbe9c47a-87e8-488b-862f-70d2d6837701-kube-api-access-xxtw7" (OuterVolumeSpecName: "kube-api-access-xxtw7") pod "bbe9c47a-87e8-488b-862f-70d2d6837701" (UID: "bbe9c47a-87e8-488b-862f-70d2d6837701"). InnerVolumeSpecName "kube-api-access-xxtw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:09:04 crc kubenswrapper[4826]: I0129 07:09:04.979754 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbe9c47a-87e8-488b-862f-70d2d6837701-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bbe9c47a-87e8-488b-862f-70d2d6837701" (UID: "bbe9c47a-87e8-488b-862f-70d2d6837701"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:09:05 crc kubenswrapper[4826]: I0129 07:09:05.032596 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbe9c47a-87e8-488b-862f-70d2d6837701-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 07:09:05 crc kubenswrapper[4826]: I0129 07:09:05.032654 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxtw7\" (UniqueName: \"kubernetes.io/projected/bbe9c47a-87e8-488b-862f-70d2d6837701-kube-api-access-xxtw7\") on node \"crc\" DevicePath \"\"" Jan 29 07:09:05 crc kubenswrapper[4826]: I0129 07:09:05.032675 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbe9c47a-87e8-488b-862f-70d2d6837701-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 07:09:05 crc kubenswrapper[4826]: I0129 07:09:05.285493 4826 generic.go:334] "Generic (PLEG): container finished" podID="bbe9c47a-87e8-488b-862f-70d2d6837701" containerID="600c3454c484cf202be6be0224fc5263078829dbe4ec24395b39276ea0091f6c" exitCode=0 Jan 29 07:09:05 crc kubenswrapper[4826]: I0129 07:09:05.285556 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rj7jf" event={"ID":"bbe9c47a-87e8-488b-862f-70d2d6837701","Type":"ContainerDied","Data":"600c3454c484cf202be6be0224fc5263078829dbe4ec24395b39276ea0091f6c"} Jan 29 07:09:05 crc kubenswrapper[4826]: I0129 07:09:05.285608 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rj7jf" event={"ID":"bbe9c47a-87e8-488b-862f-70d2d6837701","Type":"ContainerDied","Data":"19b0218e8c5cccd8bca8c77653969a6175101b588dad40529d450409394deb00"} Jan 29 07:09:05 crc kubenswrapper[4826]: I0129 07:09:05.285635 4826 scope.go:117] "RemoveContainer" containerID="600c3454c484cf202be6be0224fc5263078829dbe4ec24395b39276ea0091f6c" Jan 29 07:09:05 crc kubenswrapper[4826]: I0129 
07:09:05.285639 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rj7jf" Jan 29 07:09:05 crc kubenswrapper[4826]: I0129 07:09:05.329907 4826 scope.go:117] "RemoveContainer" containerID="5a70785f2f66a61e0ce2790ffba1017c6dcdce0eb7e0d7531d1d2a3807cbe32f" Jan 29 07:09:05 crc kubenswrapper[4826]: I0129 07:09:05.336685 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rj7jf"] Jan 29 07:09:05 crc kubenswrapper[4826]: I0129 07:09:05.344429 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rj7jf"] Jan 29 07:09:05 crc kubenswrapper[4826]: I0129 07:09:05.369513 4826 scope.go:117] "RemoveContainer" containerID="006e3a0686e5939e8ce97742e166594446e3c3369913d7feb89e400cc37b2a0b" Jan 29 07:09:05 crc kubenswrapper[4826]: I0129 07:09:05.393632 4826 scope.go:117] "RemoveContainer" containerID="600c3454c484cf202be6be0224fc5263078829dbe4ec24395b39276ea0091f6c" Jan 29 07:09:05 crc kubenswrapper[4826]: E0129 07:09:05.394108 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"600c3454c484cf202be6be0224fc5263078829dbe4ec24395b39276ea0091f6c\": container with ID starting with 600c3454c484cf202be6be0224fc5263078829dbe4ec24395b39276ea0091f6c not found: ID does not exist" containerID="600c3454c484cf202be6be0224fc5263078829dbe4ec24395b39276ea0091f6c" Jan 29 07:09:05 crc kubenswrapper[4826]: I0129 07:09:05.394160 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"600c3454c484cf202be6be0224fc5263078829dbe4ec24395b39276ea0091f6c"} err="failed to get container status \"600c3454c484cf202be6be0224fc5263078829dbe4ec24395b39276ea0091f6c\": rpc error: code = NotFound desc = could not find container \"600c3454c484cf202be6be0224fc5263078829dbe4ec24395b39276ea0091f6c\": container with ID starting with 
600c3454c484cf202be6be0224fc5263078829dbe4ec24395b39276ea0091f6c not found: ID does not exist" Jan 29 07:09:05 crc kubenswrapper[4826]: I0129 07:09:05.394195 4826 scope.go:117] "RemoveContainer" containerID="5a70785f2f66a61e0ce2790ffba1017c6dcdce0eb7e0d7531d1d2a3807cbe32f" Jan 29 07:09:05 crc kubenswrapper[4826]: E0129 07:09:05.395843 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a70785f2f66a61e0ce2790ffba1017c6dcdce0eb7e0d7531d1d2a3807cbe32f\": container with ID starting with 5a70785f2f66a61e0ce2790ffba1017c6dcdce0eb7e0d7531d1d2a3807cbe32f not found: ID does not exist" containerID="5a70785f2f66a61e0ce2790ffba1017c6dcdce0eb7e0d7531d1d2a3807cbe32f" Jan 29 07:09:05 crc kubenswrapper[4826]: I0129 07:09:05.395887 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a70785f2f66a61e0ce2790ffba1017c6dcdce0eb7e0d7531d1d2a3807cbe32f"} err="failed to get container status \"5a70785f2f66a61e0ce2790ffba1017c6dcdce0eb7e0d7531d1d2a3807cbe32f\": rpc error: code = NotFound desc = could not find container \"5a70785f2f66a61e0ce2790ffba1017c6dcdce0eb7e0d7531d1d2a3807cbe32f\": container with ID starting with 5a70785f2f66a61e0ce2790ffba1017c6dcdce0eb7e0d7531d1d2a3807cbe32f not found: ID does not exist" Jan 29 07:09:05 crc kubenswrapper[4826]: I0129 07:09:05.395909 4826 scope.go:117] "RemoveContainer" containerID="006e3a0686e5939e8ce97742e166594446e3c3369913d7feb89e400cc37b2a0b" Jan 29 07:09:05 crc kubenswrapper[4826]: E0129 07:09:05.396204 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"006e3a0686e5939e8ce97742e166594446e3c3369913d7feb89e400cc37b2a0b\": container with ID starting with 006e3a0686e5939e8ce97742e166594446e3c3369913d7feb89e400cc37b2a0b not found: ID does not exist" containerID="006e3a0686e5939e8ce97742e166594446e3c3369913d7feb89e400cc37b2a0b" Jan 29 07:09:05 crc 
kubenswrapper[4826]: I0129 07:09:05.396241 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"006e3a0686e5939e8ce97742e166594446e3c3369913d7feb89e400cc37b2a0b"} err="failed to get container status \"006e3a0686e5939e8ce97742e166594446e3c3369913d7feb89e400cc37b2a0b\": rpc error: code = NotFound desc = could not find container \"006e3a0686e5939e8ce97742e166594446e3c3369913d7feb89e400cc37b2a0b\": container with ID starting with 006e3a0686e5939e8ce97742e166594446e3c3369913d7feb89e400cc37b2a0b not found: ID does not exist" Jan 29 07:09:05 crc kubenswrapper[4826]: I0129 07:09:05.664132 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:09:05 crc kubenswrapper[4826]: I0129 07:09:05.664263 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:09:06 crc kubenswrapper[4826]: I0129 07:09:06.828343 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbe9c47a-87e8-488b-862f-70d2d6837701" path="/var/lib/kubelet/pods/bbe9c47a-87e8-488b-862f-70d2d6837701/volumes" Jan 29 07:09:35 crc kubenswrapper[4826]: I0129 07:09:35.657091 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:09:35 crc kubenswrapper[4826]: I0129 07:09:35.657823 4826 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:10:00 crc kubenswrapper[4826]: I0129 07:10:00.056666 4826 scope.go:117] "RemoveContainer" containerID="c3be21ae398eff67e22c85ef224b354ffbbd7d7ab54b85ce0a04e1584c9e8486" Jan 29 07:10:00 crc kubenswrapper[4826]: I0129 07:10:00.086222 4826 scope.go:117] "RemoveContainer" containerID="581b263ab80d201cfd6ee41c67707a16c2ed78463b0e46ab2d2fbca441113216" Jan 29 07:10:00 crc kubenswrapper[4826]: I0129 07:10:00.129424 4826 scope.go:117] "RemoveContainer" containerID="31b30318fc91eafcd3b97afe85a5e0965b844ee6e31ce721180f3fef71409d0b" Jan 29 07:10:00 crc kubenswrapper[4826]: I0129 07:10:00.147722 4826 scope.go:117] "RemoveContainer" containerID="da6172474a5804740243a88717e98452c0421876963c421e7935fba689bdc058" Jan 29 07:10:00 crc kubenswrapper[4826]: I0129 07:10:00.162768 4826 scope.go:117] "RemoveContainer" containerID="f1db27671e7941a3fe5f409f368278faebc2c9c102ed39040c62e814c55b33f6" Jan 29 07:10:00 crc kubenswrapper[4826]: I0129 07:10:00.186564 4826 scope.go:117] "RemoveContainer" containerID="b086320849aa987d26d49d964fbefd0fcd5dd9b1184d3344ea085e5c42fc14d1" Jan 29 07:10:00 crc kubenswrapper[4826]: I0129 07:10:00.207511 4826 scope.go:117] "RemoveContainer" containerID="f60ca07e9e25fa742e12d26cd7318095e6e3dc16fac1e585a5581d0cf9693fdd" Jan 29 07:10:00 crc kubenswrapper[4826]: I0129 07:10:00.226208 4826 scope.go:117] "RemoveContainer" containerID="639fc19de1ca53b22e5ad7ab867d42fad4024f7c5011feb4c18fae207a13d7e7" Jan 29 07:10:00 crc kubenswrapper[4826]: I0129 07:10:00.242360 4826 scope.go:117] "RemoveContainer" containerID="18ae489dd61bb2195354b5d8f5f9b6e0f384329f3a4ac858dde0d3feffe4b202" Jan 29 07:10:00 crc kubenswrapper[4826]: I0129 07:10:00.259776 
4826 scope.go:117] "RemoveContainer" containerID="b3e85ce7ed4e1d2758772a2b894824794e2f18c81bade5dce37e78c8548d5969" Jan 29 07:10:00 crc kubenswrapper[4826]: I0129 07:10:00.277154 4826 scope.go:117] "RemoveContainer" containerID="0bbf6d62e960b5a682af4aa5e41584da96c6f57e7c9e6126855743ce70f35375" Jan 29 07:10:00 crc kubenswrapper[4826]: I0129 07:10:00.292906 4826 scope.go:117] "RemoveContainer" containerID="3bfc6875edaf5adbf81402c08f2ba55583e837eda4f1c32161d763c93ada8c24" Jan 29 07:10:00 crc kubenswrapper[4826]: I0129 07:10:00.314317 4826 scope.go:117] "RemoveContainer" containerID="c8f387f435d969f8cf85b78f6f85ede7c54c71cb9b6213cda2569b918c6e2d3b" Jan 29 07:10:00 crc kubenswrapper[4826]: I0129 07:10:00.350487 4826 scope.go:117] "RemoveContainer" containerID="4d9bbe77efa1079486e055eb220a04a3be411395b46dcaaf31c558f3d4ccb6f8" Jan 29 07:10:00 crc kubenswrapper[4826]: I0129 07:10:00.373316 4826 scope.go:117] "RemoveContainer" containerID="16b794547221f9aaaeba5d721b9f980ce5d5698a660a295e46a0036d996a08a9" Jan 29 07:10:00 crc kubenswrapper[4826]: I0129 07:10:00.399283 4826 scope.go:117] "RemoveContainer" containerID="3817b6b8e4ea595c7ced258dd6bfd2af338287753b307239e9914dc8d293a791" Jan 29 07:10:05 crc kubenswrapper[4826]: I0129 07:10:05.656249 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:10:05 crc kubenswrapper[4826]: I0129 07:10:05.656975 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:10:05 crc kubenswrapper[4826]: I0129 07:10:05.657052 
4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" Jan 29 07:10:05 crc kubenswrapper[4826]: I0129 07:10:05.657758 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"96af55d8aa0683eb5c7cbf6b77ee7119086cd3e7df0b1c777101e3bc473785a0"} pod="openshift-machine-config-operator/machine-config-daemon-llzmh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 07:10:05 crc kubenswrapper[4826]: I0129 07:10:05.657839 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" containerID="cri-o://96af55d8aa0683eb5c7cbf6b77ee7119086cd3e7df0b1c777101e3bc473785a0" gracePeriod=600 Jan 29 07:10:05 crc kubenswrapper[4826]: E0129 07:10:05.842419 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:10:05 crc kubenswrapper[4826]: I0129 07:10:05.878432 4826 generic.go:334] "Generic (PLEG): container finished" podID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerID="96af55d8aa0683eb5c7cbf6b77ee7119086cd3e7df0b1c777101e3bc473785a0" exitCode=0 Jan 29 07:10:05 crc kubenswrapper[4826]: I0129 07:10:05.878472 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" 
event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerDied","Data":"96af55d8aa0683eb5c7cbf6b77ee7119086cd3e7df0b1c777101e3bc473785a0"} Jan 29 07:10:05 crc kubenswrapper[4826]: I0129 07:10:05.878509 4826 scope.go:117] "RemoveContainer" containerID="577b176493c80a578b39974191ea87b611ce451ac0e7d53efe3f736b701ffd68" Jan 29 07:10:05 crc kubenswrapper[4826]: I0129 07:10:05.879524 4826 scope.go:117] "RemoveContainer" containerID="96af55d8aa0683eb5c7cbf6b77ee7119086cd3e7df0b1c777101e3bc473785a0" Jan 29 07:10:05 crc kubenswrapper[4826]: E0129 07:10:05.879998 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:10:16 crc kubenswrapper[4826]: I0129 07:10:16.813924 4826 scope.go:117] "RemoveContainer" containerID="96af55d8aa0683eb5c7cbf6b77ee7119086cd3e7df0b1c777101e3bc473785a0" Jan 29 07:10:16 crc kubenswrapper[4826]: E0129 07:10:16.814977 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:10:28 crc kubenswrapper[4826]: I0129 07:10:28.809015 4826 scope.go:117] "RemoveContainer" containerID="96af55d8aa0683eb5c7cbf6b77ee7119086cd3e7df0b1c777101e3bc473785a0" Jan 29 07:10:28 crc kubenswrapper[4826]: E0129 07:10:28.809861 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:10:41 crc kubenswrapper[4826]: I0129 07:10:41.809027 4826 scope.go:117] "RemoveContainer" containerID="96af55d8aa0683eb5c7cbf6b77ee7119086cd3e7df0b1c777101e3bc473785a0" Jan 29 07:10:41 crc kubenswrapper[4826]: E0129 07:10:41.809911 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:10:53 crc kubenswrapper[4826]: I0129 07:10:53.808730 4826 scope.go:117] "RemoveContainer" containerID="96af55d8aa0683eb5c7cbf6b77ee7119086cd3e7df0b1c777101e3bc473785a0" Jan 29 07:10:53 crc kubenswrapper[4826]: E0129 07:10:53.809673 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:11:00 crc kubenswrapper[4826]: I0129 07:11:00.563088 4826 scope.go:117] "RemoveContainer" containerID="fc743e44b2c9d827e60ea724bbe7d974a44f3ee74e7756065624a98438ca7451" Jan 29 07:11:00 crc kubenswrapper[4826]: I0129 07:11:00.582440 4826 scope.go:117] "RemoveContainer" 
containerID="bd6aa32cd0f15e492ee0631ae9d7045ced48634f793b26fba31fc94692e39ffc" Jan 29 07:11:00 crc kubenswrapper[4826]: I0129 07:11:00.616176 4826 scope.go:117] "RemoveContainer" containerID="4677b9febca43835d07a5261efafaae8aef936a63e7457f76d3eb9bfc3d8d34e" Jan 29 07:11:00 crc kubenswrapper[4826]: I0129 07:11:00.642843 4826 scope.go:117] "RemoveContainer" containerID="c6920cb0da3a23ded2fd24cbb3779d77c24038298111549583e3ace69ab21a37" Jan 29 07:11:00 crc kubenswrapper[4826]: I0129 07:11:00.681052 4826 scope.go:117] "RemoveContainer" containerID="191e40e872573258f2a0551d032e4f1395ced10fe9b16a57036a4eba25a4925f" Jan 29 07:11:00 crc kubenswrapper[4826]: I0129 07:11:00.695575 4826 scope.go:117] "RemoveContainer" containerID="dda22345193f3efa59fe420778353c99c634977df85688616a4c520ba59db42f" Jan 29 07:11:00 crc kubenswrapper[4826]: I0129 07:11:00.711523 4826 scope.go:117] "RemoveContainer" containerID="0418191ed0b042868c16dbc2385f651ded763d791ad329ba66d9fa1fc9bf6d89" Jan 29 07:11:00 crc kubenswrapper[4826]: I0129 07:11:00.728968 4826 scope.go:117] "RemoveContainer" containerID="a79a2b51e9072617acf015809bd0e0041cd28d831aae105ff145d129c0de79cf" Jan 29 07:11:00 crc kubenswrapper[4826]: I0129 07:11:00.748710 4826 scope.go:117] "RemoveContainer" containerID="63052738398e84954be9279b95df1f9ed8911464f4d8a722134201ecbd948294" Jan 29 07:11:00 crc kubenswrapper[4826]: I0129 07:11:00.777465 4826 scope.go:117] "RemoveContainer" containerID="a70ebef7793fc16d23eb3cf7086d5a32ae98490e208816e9aff22022cb690fb9" Jan 29 07:11:00 crc kubenswrapper[4826]: I0129 07:11:00.800856 4826 scope.go:117] "RemoveContainer" containerID="33a75c8196166d38a593255be21141b79abb974c28717d5282e9fca7bf70f313" Jan 29 07:11:00 crc kubenswrapper[4826]: I0129 07:11:00.855188 4826 scope.go:117] "RemoveContainer" containerID="f6aa23acc47e692957ab07731c46d27ddbb133feea525282ac5021374f9f89a9" Jan 29 07:11:00 crc kubenswrapper[4826]: I0129 07:11:00.877184 4826 scope.go:117] "RemoveContainer" 
containerID="df2585b24365121c60a8b8502943ca22fe5a1a6fc73f6f2496cfd131bb0cca5c" Jan 29 07:11:00 crc kubenswrapper[4826]: I0129 07:11:00.899895 4826 scope.go:117] "RemoveContainer" containerID="6873be83fc4a55210d906801985bb6bb4c3941be3e78f955610aac735d9256bd" Jan 29 07:11:00 crc kubenswrapper[4826]: I0129 07:11:00.940920 4826 scope.go:117] "RemoveContainer" containerID="8d5b3fff199854dae6cde1a71f974779070350c89f28ec58fbdde4598e08fca8" Jan 29 07:11:00 crc kubenswrapper[4826]: I0129 07:11:00.960757 4826 scope.go:117] "RemoveContainer" containerID="cc5c9fe906766fdcc27e8bc446b6ae193bd58eed0a3689382a15788e0c5569f2" Jan 29 07:11:05 crc kubenswrapper[4826]: I0129 07:11:05.809043 4826 scope.go:117] "RemoveContainer" containerID="96af55d8aa0683eb5c7cbf6b77ee7119086cd3e7df0b1c777101e3bc473785a0" Jan 29 07:11:05 crc kubenswrapper[4826]: E0129 07:11:05.809642 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:11:20 crc kubenswrapper[4826]: I0129 07:11:20.808739 4826 scope.go:117] "RemoveContainer" containerID="96af55d8aa0683eb5c7cbf6b77ee7119086cd3e7df0b1c777101e3bc473785a0" Jan 29 07:11:20 crc kubenswrapper[4826]: E0129 07:11:20.809827 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:11:35 crc 
kubenswrapper[4826]: I0129 07:11:35.810399 4826 scope.go:117] "RemoveContainer" containerID="96af55d8aa0683eb5c7cbf6b77ee7119086cd3e7df0b1c777101e3bc473785a0"
Jan 29 07:11:35 crc kubenswrapper[4826]: E0129 07:11:35.811708 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 07:11:47 crc kubenswrapper[4826]: I0129 07:11:47.808865 4826 scope.go:117] "RemoveContainer" containerID="96af55d8aa0683eb5c7cbf6b77ee7119086cd3e7df0b1c777101e3bc473785a0"
Jan 29 07:11:47 crc kubenswrapper[4826]: E0129 07:11:47.810040 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 07:12:00 crc kubenswrapper[4826]: I0129 07:12:00.809321 4826 scope.go:117] "RemoveContainer" containerID="96af55d8aa0683eb5c7cbf6b77ee7119086cd3e7df0b1c777101e3bc473785a0"
Jan 29 07:12:00 crc kubenswrapper[4826]: E0129 07:12:00.809889 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 07:12:01 crc kubenswrapper[4826]: I0129 07:12:01.176406 4826 scope.go:117] "RemoveContainer" containerID="38411dbe679ef8c89b951c34afc01e2b04f84066915dd175b2c3a4d60a5cccb1"
Jan 29 07:12:01 crc kubenswrapper[4826]: I0129 07:12:01.201548 4826 scope.go:117] "RemoveContainer" containerID="aa923caa75a4ef623de542ee9505460d14e9077582229b30f18ddbd944849073"
Jan 29 07:12:01 crc kubenswrapper[4826]: I0129 07:12:01.222661 4826 scope.go:117] "RemoveContainer" containerID="768c302445504a9f1d3eff35fdd9007e37101a52925f9f686a83584515eeb5c2"
Jan 29 07:12:01 crc kubenswrapper[4826]: I0129 07:12:01.236021 4826 scope.go:117] "RemoveContainer" containerID="c2c81f5f8792fc508f613a497ea4d651ee5e12ec7d124a7e01884a02455c9ffc"
Jan 29 07:12:01 crc kubenswrapper[4826]: I0129 07:12:01.276020 4826 scope.go:117] "RemoveContainer" containerID="bddf8e03297919dd378b31ed57d97c8e00f8aa6eb7eb5c177dd2b26d1146eb32"
Jan 29 07:12:01 crc kubenswrapper[4826]: I0129 07:12:01.301243 4826 scope.go:117] "RemoveContainer" containerID="b50ac83dbd4ed6b1a94d8b1a7e79a0a2f0cbe1ecca7f07d779adbd94338a2040"
Jan 29 07:12:01 crc kubenswrapper[4826]: I0129 07:12:01.318863 4826 scope.go:117] "RemoveContainer" containerID="d9368ce3c4b22eb7ed796c96a8b8b0a80f4ad3b81b2110fb2465fa6b3f09ec54"
Jan 29 07:12:14 crc kubenswrapper[4826]: I0129 07:12:14.102028 4826 scope.go:117] "RemoveContainer" containerID="96af55d8aa0683eb5c7cbf6b77ee7119086cd3e7df0b1c777101e3bc473785a0"
Jan 29 07:12:14 crc kubenswrapper[4826]: E0129 07:12:14.102678 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 07:12:28 crc kubenswrapper[4826]: I0129 07:12:28.809080 4826 scope.go:117] "RemoveContainer" containerID="96af55d8aa0683eb5c7cbf6b77ee7119086cd3e7df0b1c777101e3bc473785a0"
Jan 29 07:12:28 crc kubenswrapper[4826]: E0129 07:12:28.809950 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 07:12:39 crc kubenswrapper[4826]: I0129 07:12:39.809004 4826 scope.go:117] "RemoveContainer" containerID="96af55d8aa0683eb5c7cbf6b77ee7119086cd3e7df0b1c777101e3bc473785a0"
Jan 29 07:12:39 crc kubenswrapper[4826]: E0129 07:12:39.810089 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 07:12:53 crc kubenswrapper[4826]: I0129 07:12:53.810242 4826 scope.go:117] "RemoveContainer" containerID="96af55d8aa0683eb5c7cbf6b77ee7119086cd3e7df0b1c777101e3bc473785a0"
Jan 29 07:12:53 crc kubenswrapper[4826]: E0129 07:12:53.811639 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 07:13:01 crc kubenswrapper[4826]: I0129 07:13:01.399762 4826 scope.go:117] "RemoveContainer" containerID="7d37d3a61c1d81543cb62525459e1b9aa5156ff41900e162e2c6a48e863800d4"
Jan 29 07:13:01 crc kubenswrapper[4826]: I0129 07:13:01.436780 4826 scope.go:117] "RemoveContainer" containerID="600e4e259dc70cf7553b00525bc8575413856859a15dc568c7f670972c1cf37a"
Jan 29 07:13:01 crc kubenswrapper[4826]: I0129 07:13:01.486283 4826 scope.go:117] "RemoveContainer" containerID="add0379d6310866f8debdb534faa7bf14685002e4ca0c63cd2f7b82d237d1c35"
Jan 29 07:13:05 crc kubenswrapper[4826]: I0129 07:13:05.808512 4826 scope.go:117] "RemoveContainer" containerID="96af55d8aa0683eb5c7cbf6b77ee7119086cd3e7df0b1c777101e3bc473785a0"
Jan 29 07:13:05 crc kubenswrapper[4826]: E0129 07:13:05.809195 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 07:13:17 crc kubenswrapper[4826]: I0129 07:13:17.808790 4826 scope.go:117] "RemoveContainer" containerID="96af55d8aa0683eb5c7cbf6b77ee7119086cd3e7df0b1c777101e3bc473785a0"
Jan 29 07:13:17 crc kubenswrapper[4826]: E0129 07:13:17.809714 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 07:13:32 crc kubenswrapper[4826]: I0129 07:13:32.809347 4826 scope.go:117] "RemoveContainer" containerID="96af55d8aa0683eb5c7cbf6b77ee7119086cd3e7df0b1c777101e3bc473785a0"
Jan 29 07:13:32 crc kubenswrapper[4826]: E0129 07:13:32.810089 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 07:13:43 crc kubenswrapper[4826]: I0129 07:13:43.809338 4826 scope.go:117] "RemoveContainer" containerID="96af55d8aa0683eb5c7cbf6b77ee7119086cd3e7df0b1c777101e3bc473785a0"
Jan 29 07:13:43 crc kubenswrapper[4826]: E0129 07:13:43.810144 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 07:13:54 crc kubenswrapper[4826]: I0129 07:13:54.809989 4826 scope.go:117] "RemoveContainer" containerID="96af55d8aa0683eb5c7cbf6b77ee7119086cd3e7df0b1c777101e3bc473785a0"
Jan 29 07:13:54 crc kubenswrapper[4826]: E0129 07:13:54.811385 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 07:14:06 crc kubenswrapper[4826]: I0129 07:14:06.813046 4826 scope.go:117] "RemoveContainer" containerID="96af55d8aa0683eb5c7cbf6b77ee7119086cd3e7df0b1c777101e3bc473785a0"
Jan 29 07:14:06 crc kubenswrapper[4826]: E0129 07:14:06.813923 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 07:14:17 crc kubenswrapper[4826]: I0129 07:14:17.809502 4826 scope.go:117] "RemoveContainer" containerID="96af55d8aa0683eb5c7cbf6b77ee7119086cd3e7df0b1c777101e3bc473785a0"
Jan 29 07:14:17 crc kubenswrapper[4826]: E0129 07:14:17.810604 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 07:14:30 crc kubenswrapper[4826]: I0129 07:14:30.808355 4826 scope.go:117] "RemoveContainer" containerID="96af55d8aa0683eb5c7cbf6b77ee7119086cd3e7df0b1c777101e3bc473785a0"
Jan 29 07:14:30 crc kubenswrapper[4826]: E0129 07:14:30.809058 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 07:14:45 crc kubenswrapper[4826]: I0129 07:14:45.809099 4826 scope.go:117] "RemoveContainer" containerID="96af55d8aa0683eb5c7cbf6b77ee7119086cd3e7df0b1c777101e3bc473785a0"
Jan 29 07:14:45 crc kubenswrapper[4826]: E0129 07:14:45.810162 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 07:14:56 crc kubenswrapper[4826]: I0129 07:14:56.817380 4826 scope.go:117] "RemoveContainer" containerID="96af55d8aa0683eb5c7cbf6b77ee7119086cd3e7df0b1c777101e3bc473785a0"
Jan 29 07:14:56 crc kubenswrapper[4826]: E0129 07:14:56.818761 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 07:15:00 crc kubenswrapper[4826]: I0129 07:15:00.180446 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494515-4v6n4"]
Jan 29 07:15:00 crc kubenswrapper[4826]: E0129 07:15:00.181148 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbe9c47a-87e8-488b-862f-70d2d6837701" containerName="registry-server"
Jan 29 07:15:00 crc kubenswrapper[4826]: I0129 07:15:00.181165 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbe9c47a-87e8-488b-862f-70d2d6837701" containerName="registry-server"
Jan 29 07:15:00 crc kubenswrapper[4826]: E0129 07:15:00.181204 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbe9c47a-87e8-488b-862f-70d2d6837701" containerName="extract-utilities"
Jan 29 07:15:00 crc kubenswrapper[4826]: I0129 07:15:00.181214 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbe9c47a-87e8-488b-862f-70d2d6837701" containerName="extract-utilities"
Jan 29 07:15:00 crc kubenswrapper[4826]: E0129 07:15:00.181228 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbe9c47a-87e8-488b-862f-70d2d6837701" containerName="extract-content"
Jan 29 07:15:00 crc kubenswrapper[4826]: I0129 07:15:00.181236 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbe9c47a-87e8-488b-862f-70d2d6837701" containerName="extract-content"
Jan 29 07:15:00 crc kubenswrapper[4826]: I0129 07:15:00.181452 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbe9c47a-87e8-488b-862f-70d2d6837701" containerName="registry-server"
Jan 29 07:15:00 crc kubenswrapper[4826]: I0129 07:15:00.182099 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494515-4v6n4"
Jan 29 07:15:00 crc kubenswrapper[4826]: I0129 07:15:00.185081 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 29 07:15:00 crc kubenswrapper[4826]: I0129 07:15:00.185876 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 29 07:15:00 crc kubenswrapper[4826]: I0129 07:15:00.195659 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494515-4v6n4"]
Jan 29 07:15:00 crc kubenswrapper[4826]: I0129 07:15:00.247644 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6nfw\" (UniqueName: \"kubernetes.io/projected/a83d0c70-48d2-4019-be7c-0df2e68c51c9-kube-api-access-p6nfw\") pod \"collect-profiles-29494515-4v6n4\" (UID: \"a83d0c70-48d2-4019-be7c-0df2e68c51c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494515-4v6n4"
Jan 29 07:15:00 crc kubenswrapper[4826]: I0129 07:15:00.247726 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a83d0c70-48d2-4019-be7c-0df2e68c51c9-secret-volume\") pod \"collect-profiles-29494515-4v6n4\" (UID: \"a83d0c70-48d2-4019-be7c-0df2e68c51c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494515-4v6n4"
Jan 29 07:15:00 crc kubenswrapper[4826]: I0129 07:15:00.247759 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a83d0c70-48d2-4019-be7c-0df2e68c51c9-config-volume\") pod \"collect-profiles-29494515-4v6n4\" (UID: \"a83d0c70-48d2-4019-be7c-0df2e68c51c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494515-4v6n4"
Jan 29 07:15:00 crc kubenswrapper[4826]: I0129 07:15:00.349128 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6nfw\" (UniqueName: \"kubernetes.io/projected/a83d0c70-48d2-4019-be7c-0df2e68c51c9-kube-api-access-p6nfw\") pod \"collect-profiles-29494515-4v6n4\" (UID: \"a83d0c70-48d2-4019-be7c-0df2e68c51c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494515-4v6n4"
Jan 29 07:15:00 crc kubenswrapper[4826]: I0129 07:15:00.349274 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a83d0c70-48d2-4019-be7c-0df2e68c51c9-secret-volume\") pod \"collect-profiles-29494515-4v6n4\" (UID: \"a83d0c70-48d2-4019-be7c-0df2e68c51c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494515-4v6n4"
Jan 29 07:15:00 crc kubenswrapper[4826]: I0129 07:15:00.349366 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a83d0c70-48d2-4019-be7c-0df2e68c51c9-config-volume\") pod \"collect-profiles-29494515-4v6n4\" (UID: \"a83d0c70-48d2-4019-be7c-0df2e68c51c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494515-4v6n4"
Jan 29 07:15:00 crc kubenswrapper[4826]: I0129 07:15:00.350566 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a83d0c70-48d2-4019-be7c-0df2e68c51c9-config-volume\") pod \"collect-profiles-29494515-4v6n4\" (UID: \"a83d0c70-48d2-4019-be7c-0df2e68c51c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494515-4v6n4"
Jan 29 07:15:00 crc kubenswrapper[4826]: I0129 07:15:00.356442 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a83d0c70-48d2-4019-be7c-0df2e68c51c9-secret-volume\") pod \"collect-profiles-29494515-4v6n4\" (UID: \"a83d0c70-48d2-4019-be7c-0df2e68c51c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494515-4v6n4"
Jan 29 07:15:00 crc kubenswrapper[4826]: I0129 07:15:00.374051 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6nfw\" (UniqueName: \"kubernetes.io/projected/a83d0c70-48d2-4019-be7c-0df2e68c51c9-kube-api-access-p6nfw\") pod \"collect-profiles-29494515-4v6n4\" (UID: \"a83d0c70-48d2-4019-be7c-0df2e68c51c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494515-4v6n4"
Jan 29 07:15:00 crc kubenswrapper[4826]: I0129 07:15:00.508257 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494515-4v6n4"
Jan 29 07:15:00 crc kubenswrapper[4826]: I0129 07:15:00.971168 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494515-4v6n4"]
Jan 29 07:15:01 crc kubenswrapper[4826]: I0129 07:15:01.935228 4826 generic.go:334] "Generic (PLEG): container finished" podID="a83d0c70-48d2-4019-be7c-0df2e68c51c9" containerID="23e37bf1db5d58773c0607ca48c45358bf4c862698164d3ac4558d86047aa7ac" exitCode=0
Jan 29 07:15:01 crc kubenswrapper[4826]: I0129 07:15:01.935333 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494515-4v6n4" event={"ID":"a83d0c70-48d2-4019-be7c-0df2e68c51c9","Type":"ContainerDied","Data":"23e37bf1db5d58773c0607ca48c45358bf4c862698164d3ac4558d86047aa7ac"}
Jan 29 07:15:01 crc kubenswrapper[4826]: I0129 07:15:01.935374 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494515-4v6n4" event={"ID":"a83d0c70-48d2-4019-be7c-0df2e68c51c9","Type":"ContainerStarted","Data":"1fae958d095bd26df30656c340ee886dca6e68ced7717adaf70bef1eae51d2a4"}
Jan 29 07:15:03 crc kubenswrapper[4826]: I0129 07:15:03.302615 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494515-4v6n4"
Jan 29 07:15:03 crc kubenswrapper[4826]: I0129 07:15:03.500909 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6nfw\" (UniqueName: \"kubernetes.io/projected/a83d0c70-48d2-4019-be7c-0df2e68c51c9-kube-api-access-p6nfw\") pod \"a83d0c70-48d2-4019-be7c-0df2e68c51c9\" (UID: \"a83d0c70-48d2-4019-be7c-0df2e68c51c9\") "
Jan 29 07:15:03 crc kubenswrapper[4826]: I0129 07:15:03.501021 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a83d0c70-48d2-4019-be7c-0df2e68c51c9-config-volume\") pod \"a83d0c70-48d2-4019-be7c-0df2e68c51c9\" (UID: \"a83d0c70-48d2-4019-be7c-0df2e68c51c9\") "
Jan 29 07:15:03 crc kubenswrapper[4826]: I0129 07:15:03.501215 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a83d0c70-48d2-4019-be7c-0df2e68c51c9-secret-volume\") pod \"a83d0c70-48d2-4019-be7c-0df2e68c51c9\" (UID: \"a83d0c70-48d2-4019-be7c-0df2e68c51c9\") "
Jan 29 07:15:03 crc kubenswrapper[4826]: I0129 07:15:03.501987 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a83d0c70-48d2-4019-be7c-0df2e68c51c9-config-volume" (OuterVolumeSpecName: "config-volume") pod "a83d0c70-48d2-4019-be7c-0df2e68c51c9" (UID: "a83d0c70-48d2-4019-be7c-0df2e68c51c9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 07:15:03 crc kubenswrapper[4826]: I0129 07:15:03.507001 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a83d0c70-48d2-4019-be7c-0df2e68c51c9-kube-api-access-p6nfw" (OuterVolumeSpecName: "kube-api-access-p6nfw") pod "a83d0c70-48d2-4019-be7c-0df2e68c51c9" (UID: "a83d0c70-48d2-4019-be7c-0df2e68c51c9"). InnerVolumeSpecName "kube-api-access-p6nfw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 07:15:03 crc kubenswrapper[4826]: I0129 07:15:03.508712 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a83d0c70-48d2-4019-be7c-0df2e68c51c9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a83d0c70-48d2-4019-be7c-0df2e68c51c9" (UID: "a83d0c70-48d2-4019-be7c-0df2e68c51c9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 07:15:03 crc kubenswrapper[4826]: I0129 07:15:03.602752 4826 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a83d0c70-48d2-4019-be7c-0df2e68c51c9-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 29 07:15:03 crc kubenswrapper[4826]: I0129 07:15:03.603069 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6nfw\" (UniqueName: \"kubernetes.io/projected/a83d0c70-48d2-4019-be7c-0df2e68c51c9-kube-api-access-p6nfw\") on node \"crc\" DevicePath \"\""
Jan 29 07:15:03 crc kubenswrapper[4826]: I0129 07:15:03.603153 4826 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a83d0c70-48d2-4019-be7c-0df2e68c51c9-config-volume\") on node \"crc\" DevicePath \"\""
Jan 29 07:15:03 crc kubenswrapper[4826]: I0129 07:15:03.955639 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494515-4v6n4" event={"ID":"a83d0c70-48d2-4019-be7c-0df2e68c51c9","Type":"ContainerDied","Data":"1fae958d095bd26df30656c340ee886dca6e68ced7717adaf70bef1eae51d2a4"}
Jan 29 07:15:03 crc kubenswrapper[4826]: I0129 07:15:03.955711 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fae958d095bd26df30656c340ee886dca6e68ced7717adaf70bef1eae51d2a4"
Jan 29 07:15:03 crc kubenswrapper[4826]: I0129 07:15:03.955728 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494515-4v6n4"
Jan 29 07:15:07 crc kubenswrapper[4826]: I0129 07:15:07.809615 4826 scope.go:117] "RemoveContainer" containerID="96af55d8aa0683eb5c7cbf6b77ee7119086cd3e7df0b1c777101e3bc473785a0"
Jan 29 07:15:09 crc kubenswrapper[4826]: I0129 07:15:09.004356 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerStarted","Data":"285c448faefe70cc4e0b94ee6286bfb2b42948aa328ed316b0407ab940baa937"}
Jan 29 07:16:21 crc kubenswrapper[4826]: I0129 07:16:21.352727 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nknd4"]
Jan 29 07:16:21 crc kubenswrapper[4826]: E0129 07:16:21.354448 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a83d0c70-48d2-4019-be7c-0df2e68c51c9" containerName="collect-profiles"
Jan 29 07:16:21 crc kubenswrapper[4826]: I0129 07:16:21.354486 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="a83d0c70-48d2-4019-be7c-0df2e68c51c9" containerName="collect-profiles"
Jan 29 07:16:21 crc kubenswrapper[4826]: I0129 07:16:21.354988 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="a83d0c70-48d2-4019-be7c-0df2e68c51c9" containerName="collect-profiles"
Jan 29 07:16:21 crc kubenswrapper[4826]: I0129 07:16:21.361469 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nknd4"
Jan 29 07:16:21 crc kubenswrapper[4826]: I0129 07:16:21.371821 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nknd4"]
Jan 29 07:16:21 crc kubenswrapper[4826]: I0129 07:16:21.429967 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch4zf\" (UniqueName: \"kubernetes.io/projected/558fab10-2595-4384-8444-d781063bd6ef-kube-api-access-ch4zf\") pod \"redhat-operators-nknd4\" (UID: \"558fab10-2595-4384-8444-d781063bd6ef\") " pod="openshift-marketplace/redhat-operators-nknd4"
Jan 29 07:16:21 crc kubenswrapper[4826]: I0129 07:16:21.430034 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/558fab10-2595-4384-8444-d781063bd6ef-catalog-content\") pod \"redhat-operators-nknd4\" (UID: \"558fab10-2595-4384-8444-d781063bd6ef\") " pod="openshift-marketplace/redhat-operators-nknd4"
Jan 29 07:16:21 crc kubenswrapper[4826]: I0129 07:16:21.430113 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/558fab10-2595-4384-8444-d781063bd6ef-utilities\") pod \"redhat-operators-nknd4\" (UID: \"558fab10-2595-4384-8444-d781063bd6ef\") " pod="openshift-marketplace/redhat-operators-nknd4"
Jan 29 07:16:21 crc kubenswrapper[4826]: I0129 07:16:21.532330 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch4zf\" (UniqueName: \"kubernetes.io/projected/558fab10-2595-4384-8444-d781063bd6ef-kube-api-access-ch4zf\") pod \"redhat-operators-nknd4\" (UID: \"558fab10-2595-4384-8444-d781063bd6ef\") " pod="openshift-marketplace/redhat-operators-nknd4"
Jan 29 07:16:21 crc kubenswrapper[4826]: I0129 07:16:21.532412 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/558fab10-2595-4384-8444-d781063bd6ef-catalog-content\") pod \"redhat-operators-nknd4\" (UID: \"558fab10-2595-4384-8444-d781063bd6ef\") " pod="openshift-marketplace/redhat-operators-nknd4"
Jan 29 07:16:21 crc kubenswrapper[4826]: I0129 07:16:21.532487 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/558fab10-2595-4384-8444-d781063bd6ef-utilities\") pod \"redhat-operators-nknd4\" (UID: \"558fab10-2595-4384-8444-d781063bd6ef\") " pod="openshift-marketplace/redhat-operators-nknd4"
Jan 29 07:16:21 crc kubenswrapper[4826]: I0129 07:16:21.533162 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/558fab10-2595-4384-8444-d781063bd6ef-catalog-content\") pod \"redhat-operators-nknd4\" (UID: \"558fab10-2595-4384-8444-d781063bd6ef\") " pod="openshift-marketplace/redhat-operators-nknd4"
Jan 29 07:16:21 crc kubenswrapper[4826]: I0129 07:16:21.533276 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/558fab10-2595-4384-8444-d781063bd6ef-utilities\") pod \"redhat-operators-nknd4\" (UID: \"558fab10-2595-4384-8444-d781063bd6ef\") " pod="openshift-marketplace/redhat-operators-nknd4"
Jan 29 07:16:21 crc kubenswrapper[4826]: I0129 07:16:21.553427 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch4zf\" (UniqueName: \"kubernetes.io/projected/558fab10-2595-4384-8444-d781063bd6ef-kube-api-access-ch4zf\") pod \"redhat-operators-nknd4\" (UID: \"558fab10-2595-4384-8444-d781063bd6ef\") " pod="openshift-marketplace/redhat-operators-nknd4"
Jan 29 07:16:21 crc kubenswrapper[4826]: I0129 07:16:21.693833 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nknd4"
Jan 29 07:16:21 crc kubenswrapper[4826]: I0129 07:16:21.941995 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nknd4"]
Jan 29 07:16:21 crc kubenswrapper[4826]: I0129 07:16:21.977581 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nknd4" event={"ID":"558fab10-2595-4384-8444-d781063bd6ef","Type":"ContainerStarted","Data":"0da2049df73c489629e162c0c366432dfc53702d046a28553766bafdfb110d70"}
Jan 29 07:16:22 crc kubenswrapper[4826]: I0129 07:16:22.986179 4826 generic.go:334] "Generic (PLEG): container finished" podID="558fab10-2595-4384-8444-d781063bd6ef" containerID="1bdc383587cf75f884889a6cfc95d23de66f345b5503c86112068c03bf25d598" exitCode=0
Jan 29 07:16:22 crc kubenswrapper[4826]: I0129 07:16:22.986243 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nknd4" event={"ID":"558fab10-2595-4384-8444-d781063bd6ef","Type":"ContainerDied","Data":"1bdc383587cf75f884889a6cfc95d23de66f345b5503c86112068c03bf25d598"}
Jan 29 07:16:22 crc kubenswrapper[4826]: I0129 07:16:22.988445 4826 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 29 07:16:23 crc kubenswrapper[4826]: I0129 07:16:23.995953 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nknd4" event={"ID":"558fab10-2595-4384-8444-d781063bd6ef","Type":"ContainerStarted","Data":"8a9133c1301d34a6cd4f61b062525da9ff24897b76fa6d193eb8d27056231c2c"}
Jan 29 07:16:25 crc kubenswrapper[4826]: I0129 07:16:25.006979 4826 generic.go:334] "Generic (PLEG): container finished" podID="558fab10-2595-4384-8444-d781063bd6ef" containerID="8a9133c1301d34a6cd4f61b062525da9ff24897b76fa6d193eb8d27056231c2c" exitCode=0
Jan 29 07:16:25 crc kubenswrapper[4826]: I0129 07:16:25.007048 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nknd4" event={"ID":"558fab10-2595-4384-8444-d781063bd6ef","Type":"ContainerDied","Data":"8a9133c1301d34a6cd4f61b062525da9ff24897b76fa6d193eb8d27056231c2c"}
Jan 29 07:16:26 crc kubenswrapper[4826]: I0129 07:16:26.018467 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nknd4" event={"ID":"558fab10-2595-4384-8444-d781063bd6ef","Type":"ContainerStarted","Data":"0a27cec1c0da2be9f80ab160bf97f0149fd669d3fda7a81df50d1859851d7558"}
Jan 29 07:16:26 crc kubenswrapper[4826]: I0129 07:16:26.052518 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nknd4" podStartSLOduration=2.644474906 podStartE2EDuration="5.052482678s" podCreationTimestamp="2026-01-29 07:16:21 +0000 UTC" firstStartedPulling="2026-01-29 07:16:22.988173129 +0000 UTC m=+1966.849966188" lastFinishedPulling="2026-01-29 07:16:25.396180861 +0000 UTC m=+1969.257973960" observedRunningTime="2026-01-29 07:16:26.048292548 +0000 UTC m=+1969.910085637" watchObservedRunningTime="2026-01-29 07:16:26.052482678 +0000 UTC m=+1969.914275767"
Jan 29 07:16:31 crc kubenswrapper[4826]: I0129 07:16:31.694621 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nknd4"
Jan 29 07:16:31 crc kubenswrapper[4826]: I0129 07:16:31.694692 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nknd4"
Jan 29 07:16:32 crc kubenswrapper[4826]: I0129 07:16:32.751156 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nknd4" podUID="558fab10-2595-4384-8444-d781063bd6ef" containerName="registry-server" probeResult="failure" output=<
Jan 29 07:16:32 crc kubenswrapper[4826]: timeout: failed to connect service ":50051" within 1s
Jan 29 07:16:32 crc kubenswrapper[4826]: >
Jan 29 07:16:41 crc kubenswrapper[4826]: I0129 07:16:41.755171 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nknd4"
Jan 29 07:16:41 crc kubenswrapper[4826]: I0129 07:16:41.815937 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nknd4"
Jan 29 07:16:41 crc kubenswrapper[4826]: I0129 07:16:41.992149 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nknd4"]
Jan 29 07:16:43 crc kubenswrapper[4826]: I0129 07:16:43.172375 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nknd4" podUID="558fab10-2595-4384-8444-d781063bd6ef" containerName="registry-server" containerID="cri-o://0a27cec1c0da2be9f80ab160bf97f0149fd669d3fda7a81df50d1859851d7558" gracePeriod=2
Jan 29 07:16:43 crc kubenswrapper[4826]: I0129 07:16:43.657422 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nknd4"
Jan 29 07:16:43 crc kubenswrapper[4826]: I0129 07:16:43.710902 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ch4zf\" (UniqueName: \"kubernetes.io/projected/558fab10-2595-4384-8444-d781063bd6ef-kube-api-access-ch4zf\") pod \"558fab10-2595-4384-8444-d781063bd6ef\" (UID: \"558fab10-2595-4384-8444-d781063bd6ef\") "
Jan 29 07:16:43 crc kubenswrapper[4826]: I0129 07:16:43.710978 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/558fab10-2595-4384-8444-d781063bd6ef-utilities\") pod \"558fab10-2595-4384-8444-d781063bd6ef\" (UID: \"558fab10-2595-4384-8444-d781063bd6ef\") "
Jan 29 07:16:43 crc kubenswrapper[4826]: I0129 07:16:43.711059 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/558fab10-2595-4384-8444-d781063bd6ef-catalog-content\") pod \"558fab10-2595-4384-8444-d781063bd6ef\" (UID: \"558fab10-2595-4384-8444-d781063bd6ef\") "
Jan 29 07:16:43 crc kubenswrapper[4826]: I0129 07:16:43.715673 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/558fab10-2595-4384-8444-d781063bd6ef-kube-api-access-ch4zf" (OuterVolumeSpecName: "kube-api-access-ch4zf") pod "558fab10-2595-4384-8444-d781063bd6ef" (UID: "558fab10-2595-4384-8444-d781063bd6ef"). InnerVolumeSpecName "kube-api-access-ch4zf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 07:16:43 crc kubenswrapper[4826]: I0129 07:16:43.726215 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/558fab10-2595-4384-8444-d781063bd6ef-utilities" (OuterVolumeSpecName: "utilities") pod "558fab10-2595-4384-8444-d781063bd6ef" (UID: "558fab10-2595-4384-8444-d781063bd6ef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 07:16:43 crc kubenswrapper[4826]: I0129 07:16:43.821019 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ch4zf\" (UniqueName: \"kubernetes.io/projected/558fab10-2595-4384-8444-d781063bd6ef-kube-api-access-ch4zf\") on node \"crc\" DevicePath \"\""
Jan 29 07:16:43 crc kubenswrapper[4826]: I0129 07:16:43.821061 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/558fab10-2595-4384-8444-d781063bd6ef-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 07:16:43 crc kubenswrapper[4826]: I0129 07:16:43.843397 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/558fab10-2595-4384-8444-d781063bd6ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "558fab10-2595-4384-8444-d781063bd6ef" (UID: "558fab10-2595-4384-8444-d781063bd6ef"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:16:43 crc kubenswrapper[4826]: I0129 07:16:43.922639 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/558fab10-2595-4384-8444-d781063bd6ef-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 07:16:44 crc kubenswrapper[4826]: I0129 07:16:44.181772 4826 generic.go:334] "Generic (PLEG): container finished" podID="558fab10-2595-4384-8444-d781063bd6ef" containerID="0a27cec1c0da2be9f80ab160bf97f0149fd669d3fda7a81df50d1859851d7558" exitCode=0 Jan 29 07:16:44 crc kubenswrapper[4826]: I0129 07:16:44.181826 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nknd4" event={"ID":"558fab10-2595-4384-8444-d781063bd6ef","Type":"ContainerDied","Data":"0a27cec1c0da2be9f80ab160bf97f0149fd669d3fda7a81df50d1859851d7558"} Jan 29 07:16:44 crc kubenswrapper[4826]: I0129 07:16:44.181859 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nknd4" event={"ID":"558fab10-2595-4384-8444-d781063bd6ef","Type":"ContainerDied","Data":"0da2049df73c489629e162c0c366432dfc53702d046a28553766bafdfb110d70"} Jan 29 07:16:44 crc kubenswrapper[4826]: I0129 07:16:44.181882 4826 scope.go:117] "RemoveContainer" containerID="0a27cec1c0da2be9f80ab160bf97f0149fd669d3fda7a81df50d1859851d7558" Jan 29 07:16:44 crc kubenswrapper[4826]: I0129 07:16:44.182033 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nknd4" Jan 29 07:16:44 crc kubenswrapper[4826]: I0129 07:16:44.207185 4826 scope.go:117] "RemoveContainer" containerID="8a9133c1301d34a6cd4f61b062525da9ff24897b76fa6d193eb8d27056231c2c" Jan 29 07:16:44 crc kubenswrapper[4826]: I0129 07:16:44.234408 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nknd4"] Jan 29 07:16:44 crc kubenswrapper[4826]: I0129 07:16:44.234459 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nknd4"] Jan 29 07:16:44 crc kubenswrapper[4826]: I0129 07:16:44.251470 4826 scope.go:117] "RemoveContainer" containerID="1bdc383587cf75f884889a6cfc95d23de66f345b5503c86112068c03bf25d598" Jan 29 07:16:44 crc kubenswrapper[4826]: I0129 07:16:44.278244 4826 scope.go:117] "RemoveContainer" containerID="0a27cec1c0da2be9f80ab160bf97f0149fd669d3fda7a81df50d1859851d7558" Jan 29 07:16:44 crc kubenswrapper[4826]: E0129 07:16:44.278803 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a27cec1c0da2be9f80ab160bf97f0149fd669d3fda7a81df50d1859851d7558\": container with ID starting with 0a27cec1c0da2be9f80ab160bf97f0149fd669d3fda7a81df50d1859851d7558 not found: ID does not exist" containerID="0a27cec1c0da2be9f80ab160bf97f0149fd669d3fda7a81df50d1859851d7558" Jan 29 07:16:44 crc kubenswrapper[4826]: I0129 07:16:44.278838 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a27cec1c0da2be9f80ab160bf97f0149fd669d3fda7a81df50d1859851d7558"} err="failed to get container status \"0a27cec1c0da2be9f80ab160bf97f0149fd669d3fda7a81df50d1859851d7558\": rpc error: code = NotFound desc = could not find container \"0a27cec1c0da2be9f80ab160bf97f0149fd669d3fda7a81df50d1859851d7558\": container with ID starting with 0a27cec1c0da2be9f80ab160bf97f0149fd669d3fda7a81df50d1859851d7558 not found: ID does 
not exist" Jan 29 07:16:44 crc kubenswrapper[4826]: I0129 07:16:44.278866 4826 scope.go:117] "RemoveContainer" containerID="8a9133c1301d34a6cd4f61b062525da9ff24897b76fa6d193eb8d27056231c2c" Jan 29 07:16:44 crc kubenswrapper[4826]: E0129 07:16:44.279756 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a9133c1301d34a6cd4f61b062525da9ff24897b76fa6d193eb8d27056231c2c\": container with ID starting with 8a9133c1301d34a6cd4f61b062525da9ff24897b76fa6d193eb8d27056231c2c not found: ID does not exist" containerID="8a9133c1301d34a6cd4f61b062525da9ff24897b76fa6d193eb8d27056231c2c" Jan 29 07:16:44 crc kubenswrapper[4826]: I0129 07:16:44.279857 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a9133c1301d34a6cd4f61b062525da9ff24897b76fa6d193eb8d27056231c2c"} err="failed to get container status \"8a9133c1301d34a6cd4f61b062525da9ff24897b76fa6d193eb8d27056231c2c\": rpc error: code = NotFound desc = could not find container \"8a9133c1301d34a6cd4f61b062525da9ff24897b76fa6d193eb8d27056231c2c\": container with ID starting with 8a9133c1301d34a6cd4f61b062525da9ff24897b76fa6d193eb8d27056231c2c not found: ID does not exist" Jan 29 07:16:44 crc kubenswrapper[4826]: I0129 07:16:44.279920 4826 scope.go:117] "RemoveContainer" containerID="1bdc383587cf75f884889a6cfc95d23de66f345b5503c86112068c03bf25d598" Jan 29 07:16:44 crc kubenswrapper[4826]: E0129 07:16:44.280376 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bdc383587cf75f884889a6cfc95d23de66f345b5503c86112068c03bf25d598\": container with ID starting with 1bdc383587cf75f884889a6cfc95d23de66f345b5503c86112068c03bf25d598 not found: ID does not exist" containerID="1bdc383587cf75f884889a6cfc95d23de66f345b5503c86112068c03bf25d598" Jan 29 07:16:44 crc kubenswrapper[4826]: I0129 07:16:44.280432 4826 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bdc383587cf75f884889a6cfc95d23de66f345b5503c86112068c03bf25d598"} err="failed to get container status \"1bdc383587cf75f884889a6cfc95d23de66f345b5503c86112068c03bf25d598\": rpc error: code = NotFound desc = could not find container \"1bdc383587cf75f884889a6cfc95d23de66f345b5503c86112068c03bf25d598\": container with ID starting with 1bdc383587cf75f884889a6cfc95d23de66f345b5503c86112068c03bf25d598 not found: ID does not exist" Jan 29 07:16:44 crc kubenswrapper[4826]: I0129 07:16:44.822714 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="558fab10-2595-4384-8444-d781063bd6ef" path="/var/lib/kubelet/pods/558fab10-2595-4384-8444-d781063bd6ef/volumes" Jan 29 07:17:35 crc kubenswrapper[4826]: I0129 07:17:35.655861 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:17:35 crc kubenswrapper[4826]: I0129 07:17:35.656431 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:18:05 crc kubenswrapper[4826]: I0129 07:18:05.656691 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:18:05 crc kubenswrapper[4826]: I0129 07:18:05.657712 4826 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:18:20 crc kubenswrapper[4826]: I0129 07:18:20.804030 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5pfbz"] Jan 29 07:18:20 crc kubenswrapper[4826]: E0129 07:18:20.805036 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="558fab10-2595-4384-8444-d781063bd6ef" containerName="extract-content" Jan 29 07:18:20 crc kubenswrapper[4826]: I0129 07:18:20.805057 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="558fab10-2595-4384-8444-d781063bd6ef" containerName="extract-content" Jan 29 07:18:20 crc kubenswrapper[4826]: E0129 07:18:20.805096 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="558fab10-2595-4384-8444-d781063bd6ef" containerName="registry-server" Jan 29 07:18:20 crc kubenswrapper[4826]: I0129 07:18:20.805106 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="558fab10-2595-4384-8444-d781063bd6ef" containerName="registry-server" Jan 29 07:18:20 crc kubenswrapper[4826]: E0129 07:18:20.805126 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="558fab10-2595-4384-8444-d781063bd6ef" containerName="extract-utilities" Jan 29 07:18:20 crc kubenswrapper[4826]: I0129 07:18:20.805136 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="558fab10-2595-4384-8444-d781063bd6ef" containerName="extract-utilities" Jan 29 07:18:20 crc kubenswrapper[4826]: I0129 07:18:20.805361 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="558fab10-2595-4384-8444-d781063bd6ef" containerName="registry-server" Jan 29 07:18:20 crc kubenswrapper[4826]: I0129 07:18:20.806925 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5pfbz" Jan 29 07:18:20 crc kubenswrapper[4826]: I0129 07:18:20.824427 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5pfbz"] Jan 29 07:18:20 crc kubenswrapper[4826]: I0129 07:18:20.930581 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr2qt\" (UniqueName: \"kubernetes.io/projected/2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a-kube-api-access-qr2qt\") pod \"certified-operators-5pfbz\" (UID: \"2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a\") " pod="openshift-marketplace/certified-operators-5pfbz" Jan 29 07:18:20 crc kubenswrapper[4826]: I0129 07:18:20.930894 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a-utilities\") pod \"certified-operators-5pfbz\" (UID: \"2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a\") " pod="openshift-marketplace/certified-operators-5pfbz" Jan 29 07:18:20 crc kubenswrapper[4826]: I0129 07:18:20.930981 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a-catalog-content\") pod \"certified-operators-5pfbz\" (UID: \"2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a\") " pod="openshift-marketplace/certified-operators-5pfbz" Jan 29 07:18:21 crc kubenswrapper[4826]: I0129 07:18:21.032343 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a-catalog-content\") pod \"certified-operators-5pfbz\" (UID: \"2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a\") " pod="openshift-marketplace/certified-operators-5pfbz" Jan 29 07:18:21 crc kubenswrapper[4826]: I0129 07:18:21.032484 4826 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qr2qt\" (UniqueName: \"kubernetes.io/projected/2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a-kube-api-access-qr2qt\") pod \"certified-operators-5pfbz\" (UID: \"2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a\") " pod="openshift-marketplace/certified-operators-5pfbz" Jan 29 07:18:21 crc kubenswrapper[4826]: I0129 07:18:21.032515 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a-utilities\") pod \"certified-operators-5pfbz\" (UID: \"2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a\") " pod="openshift-marketplace/certified-operators-5pfbz" Jan 29 07:18:21 crc kubenswrapper[4826]: I0129 07:18:21.033317 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a-utilities\") pod \"certified-operators-5pfbz\" (UID: \"2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a\") " pod="openshift-marketplace/certified-operators-5pfbz" Jan 29 07:18:21 crc kubenswrapper[4826]: I0129 07:18:21.033550 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a-catalog-content\") pod \"certified-operators-5pfbz\" (UID: \"2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a\") " pod="openshift-marketplace/certified-operators-5pfbz" Jan 29 07:18:21 crc kubenswrapper[4826]: I0129 07:18:21.056663 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qr2qt\" (UniqueName: \"kubernetes.io/projected/2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a-kube-api-access-qr2qt\") pod \"certified-operators-5pfbz\" (UID: \"2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a\") " pod="openshift-marketplace/certified-operators-5pfbz" Jan 29 07:18:21 crc kubenswrapper[4826]: I0129 07:18:21.134898 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5pfbz" Jan 29 07:18:21 crc kubenswrapper[4826]: I0129 07:18:21.675016 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5pfbz"] Jan 29 07:18:22 crc kubenswrapper[4826]: I0129 07:18:22.001751 4826 generic.go:334] "Generic (PLEG): container finished" podID="2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a" containerID="7185337a657ce5a88408f94a77815b3f4656b5275f0e87f27c8c6461ac0e0b22" exitCode=0 Jan 29 07:18:22 crc kubenswrapper[4826]: I0129 07:18:22.001809 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5pfbz" event={"ID":"2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a","Type":"ContainerDied","Data":"7185337a657ce5a88408f94a77815b3f4656b5275f0e87f27c8c6461ac0e0b22"} Jan 29 07:18:22 crc kubenswrapper[4826]: I0129 07:18:22.001862 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5pfbz" event={"ID":"2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a","Type":"ContainerStarted","Data":"76b356d53c01c7b32ccb7ef11c462a7aead7007ee440e88dbe9bac65b08549aa"} Jan 29 07:18:23 crc kubenswrapper[4826]: I0129 07:18:23.012372 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5pfbz" event={"ID":"2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a","Type":"ContainerStarted","Data":"ffaa2e4c4f0fba8f8751fb097a251c5b468d4a09a56a5b51f41a7dc3361f9b86"} Jan 29 07:18:24 crc kubenswrapper[4826]: I0129 07:18:24.024801 4826 generic.go:334] "Generic (PLEG): container finished" podID="2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a" containerID="ffaa2e4c4f0fba8f8751fb097a251c5b468d4a09a56a5b51f41a7dc3361f9b86" exitCode=0 Jan 29 07:18:24 crc kubenswrapper[4826]: I0129 07:18:24.024860 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5pfbz" 
event={"ID":"2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a","Type":"ContainerDied","Data":"ffaa2e4c4f0fba8f8751fb097a251c5b468d4a09a56a5b51f41a7dc3361f9b86"} Jan 29 07:18:25 crc kubenswrapper[4826]: I0129 07:18:25.057503 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5pfbz" event={"ID":"2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a","Type":"ContainerStarted","Data":"e599dcc8b699eac32bbbfe429360c2102707d72562ed22421c0746831aba820b"} Jan 29 07:18:25 crc kubenswrapper[4826]: I0129 07:18:25.083711 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5pfbz" podStartSLOduration=2.633931353 podStartE2EDuration="5.083686869s" podCreationTimestamp="2026-01-29 07:18:20 +0000 UTC" firstStartedPulling="2026-01-29 07:18:22.004471578 +0000 UTC m=+2085.866264657" lastFinishedPulling="2026-01-29 07:18:24.454227094 +0000 UTC m=+2088.316020173" observedRunningTime="2026-01-29 07:18:25.080344781 +0000 UTC m=+2088.942137870" watchObservedRunningTime="2026-01-29 07:18:25.083686869 +0000 UTC m=+2088.945479958" Jan 29 07:18:31 crc kubenswrapper[4826]: I0129 07:18:31.135720 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5pfbz" Jan 29 07:18:31 crc kubenswrapper[4826]: I0129 07:18:31.136360 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5pfbz" Jan 29 07:18:31 crc kubenswrapper[4826]: I0129 07:18:31.204113 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5pfbz" Jan 29 07:18:32 crc kubenswrapper[4826]: I0129 07:18:32.187264 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5pfbz" Jan 29 07:18:32 crc kubenswrapper[4826]: I0129 07:18:32.251013 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-5pfbz"] Jan 29 07:18:34 crc kubenswrapper[4826]: I0129 07:18:34.158835 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5pfbz" podUID="2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a" containerName="registry-server" containerID="cri-o://e599dcc8b699eac32bbbfe429360c2102707d72562ed22421c0746831aba820b" gracePeriod=2 Jan 29 07:18:34 crc kubenswrapper[4826]: I0129 07:18:34.653634 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5pfbz" Jan 29 07:18:34 crc kubenswrapper[4826]: I0129 07:18:34.761145 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qr2qt\" (UniqueName: \"kubernetes.io/projected/2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a-kube-api-access-qr2qt\") pod \"2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a\" (UID: \"2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a\") " Jan 29 07:18:34 crc kubenswrapper[4826]: I0129 07:18:34.761333 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a-catalog-content\") pod \"2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a\" (UID: \"2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a\") " Jan 29 07:18:34 crc kubenswrapper[4826]: I0129 07:18:34.761373 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a-utilities\") pod \"2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a\" (UID: \"2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a\") " Jan 29 07:18:34 crc kubenswrapper[4826]: I0129 07:18:34.763018 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a-utilities" (OuterVolumeSpecName: "utilities") pod "2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a" (UID: 
"2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:18:34 crc kubenswrapper[4826]: I0129 07:18:34.770446 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a-kube-api-access-qr2qt" (OuterVolumeSpecName: "kube-api-access-qr2qt") pod "2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a" (UID: "2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a"). InnerVolumeSpecName "kube-api-access-qr2qt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:18:34 crc kubenswrapper[4826]: I0129 07:18:34.830503 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a" (UID: "2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:18:34 crc kubenswrapper[4826]: I0129 07:18:34.863993 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qr2qt\" (UniqueName: \"kubernetes.io/projected/2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a-kube-api-access-qr2qt\") on node \"crc\" DevicePath \"\"" Jan 29 07:18:34 crc kubenswrapper[4826]: I0129 07:18:34.864039 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 07:18:34 crc kubenswrapper[4826]: I0129 07:18:34.864058 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 07:18:35 crc kubenswrapper[4826]: I0129 07:18:35.169810 4826 generic.go:334] "Generic (PLEG): container finished" 
podID="2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a" containerID="e599dcc8b699eac32bbbfe429360c2102707d72562ed22421c0746831aba820b" exitCode=0 Jan 29 07:18:35 crc kubenswrapper[4826]: I0129 07:18:35.169848 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5pfbz" event={"ID":"2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a","Type":"ContainerDied","Data":"e599dcc8b699eac32bbbfe429360c2102707d72562ed22421c0746831aba820b"} Jan 29 07:18:35 crc kubenswrapper[4826]: I0129 07:18:35.169940 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5pfbz" event={"ID":"2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a","Type":"ContainerDied","Data":"76b356d53c01c7b32ccb7ef11c462a7aead7007ee440e88dbe9bac65b08549aa"} Jan 29 07:18:35 crc kubenswrapper[4826]: I0129 07:18:35.169954 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5pfbz" Jan 29 07:18:35 crc kubenswrapper[4826]: I0129 07:18:35.169969 4826 scope.go:117] "RemoveContainer" containerID="e599dcc8b699eac32bbbfe429360c2102707d72562ed22421c0746831aba820b" Jan 29 07:18:35 crc kubenswrapper[4826]: I0129 07:18:35.193017 4826 scope.go:117] "RemoveContainer" containerID="ffaa2e4c4f0fba8f8751fb097a251c5b468d4a09a56a5b51f41a7dc3361f9b86" Jan 29 07:18:35 crc kubenswrapper[4826]: I0129 07:18:35.233689 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5pfbz"] Jan 29 07:18:35 crc kubenswrapper[4826]: I0129 07:18:35.234248 4826 scope.go:117] "RemoveContainer" containerID="7185337a657ce5a88408f94a77815b3f4656b5275f0e87f27c8c6461ac0e0b22" Jan 29 07:18:35 crc kubenswrapper[4826]: I0129 07:18:35.243247 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5pfbz"] Jan 29 07:18:35 crc kubenswrapper[4826]: I0129 07:18:35.267436 4826 scope.go:117] "RemoveContainer" 
containerID="e599dcc8b699eac32bbbfe429360c2102707d72562ed22421c0746831aba820b" Jan 29 07:18:35 crc kubenswrapper[4826]: E0129 07:18:35.268253 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e599dcc8b699eac32bbbfe429360c2102707d72562ed22421c0746831aba820b\": container with ID starting with e599dcc8b699eac32bbbfe429360c2102707d72562ed22421c0746831aba820b not found: ID does not exist" containerID="e599dcc8b699eac32bbbfe429360c2102707d72562ed22421c0746831aba820b" Jan 29 07:18:35 crc kubenswrapper[4826]: I0129 07:18:35.268489 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e599dcc8b699eac32bbbfe429360c2102707d72562ed22421c0746831aba820b"} err="failed to get container status \"e599dcc8b699eac32bbbfe429360c2102707d72562ed22421c0746831aba820b\": rpc error: code = NotFound desc = could not find container \"e599dcc8b699eac32bbbfe429360c2102707d72562ed22421c0746831aba820b\": container with ID starting with e599dcc8b699eac32bbbfe429360c2102707d72562ed22421c0746831aba820b not found: ID does not exist" Jan 29 07:18:35 crc kubenswrapper[4826]: I0129 07:18:35.268532 4826 scope.go:117] "RemoveContainer" containerID="ffaa2e4c4f0fba8f8751fb097a251c5b468d4a09a56a5b51f41a7dc3361f9b86" Jan 29 07:18:35 crc kubenswrapper[4826]: E0129 07:18:35.269538 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffaa2e4c4f0fba8f8751fb097a251c5b468d4a09a56a5b51f41a7dc3361f9b86\": container with ID starting with ffaa2e4c4f0fba8f8751fb097a251c5b468d4a09a56a5b51f41a7dc3361f9b86 not found: ID does not exist" containerID="ffaa2e4c4f0fba8f8751fb097a251c5b468d4a09a56a5b51f41a7dc3361f9b86" Jan 29 07:18:35 crc kubenswrapper[4826]: I0129 07:18:35.269589 4826 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ffaa2e4c4f0fba8f8751fb097a251c5b468d4a09a56a5b51f41a7dc3361f9b86"} err="failed to get container status \"ffaa2e4c4f0fba8f8751fb097a251c5b468d4a09a56a5b51f41a7dc3361f9b86\": rpc error: code = NotFound desc = could not find container \"ffaa2e4c4f0fba8f8751fb097a251c5b468d4a09a56a5b51f41a7dc3361f9b86\": container with ID starting with ffaa2e4c4f0fba8f8751fb097a251c5b468d4a09a56a5b51f41a7dc3361f9b86 not found: ID does not exist" Jan 29 07:18:35 crc kubenswrapper[4826]: I0129 07:18:35.269658 4826 scope.go:117] "RemoveContainer" containerID="7185337a657ce5a88408f94a77815b3f4656b5275f0e87f27c8c6461ac0e0b22" Jan 29 07:18:35 crc kubenswrapper[4826]: E0129 07:18:35.270255 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7185337a657ce5a88408f94a77815b3f4656b5275f0e87f27c8c6461ac0e0b22\": container with ID starting with 7185337a657ce5a88408f94a77815b3f4656b5275f0e87f27c8c6461ac0e0b22 not found: ID does not exist" containerID="7185337a657ce5a88408f94a77815b3f4656b5275f0e87f27c8c6461ac0e0b22" Jan 29 07:18:35 crc kubenswrapper[4826]: I0129 07:18:35.270319 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7185337a657ce5a88408f94a77815b3f4656b5275f0e87f27c8c6461ac0e0b22"} err="failed to get container status \"7185337a657ce5a88408f94a77815b3f4656b5275f0e87f27c8c6461ac0e0b22\": rpc error: code = NotFound desc = could not find container \"7185337a657ce5a88408f94a77815b3f4656b5275f0e87f27c8c6461ac0e0b22\": container with ID starting with 7185337a657ce5a88408f94a77815b3f4656b5275f0e87f27c8c6461ac0e0b22 not found: ID does not exist" Jan 29 07:18:35 crc kubenswrapper[4826]: I0129 07:18:35.656465 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:18:35 crc kubenswrapper[4826]: I0129 07:18:35.656529 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:18:35 crc kubenswrapper[4826]: I0129 07:18:35.656579 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" Jan 29 07:18:35 crc kubenswrapper[4826]: I0129 07:18:35.657250 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"285c448faefe70cc4e0b94ee6286bfb2b42948aa328ed316b0407ab940baa937"} pod="openshift-machine-config-operator/machine-config-daemon-llzmh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 07:18:35 crc kubenswrapper[4826]: I0129 07:18:35.657366 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" containerID="cri-o://285c448faefe70cc4e0b94ee6286bfb2b42948aa328ed316b0407ab940baa937" gracePeriod=600 Jan 29 07:18:36 crc kubenswrapper[4826]: I0129 07:18:36.189895 4826 generic.go:334] "Generic (PLEG): container finished" podID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerID="285c448faefe70cc4e0b94ee6286bfb2b42948aa328ed316b0407ab940baa937" exitCode=0 Jan 29 07:18:36 crc kubenswrapper[4826]: I0129 07:18:36.190534 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" 
event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerDied","Data":"285c448faefe70cc4e0b94ee6286bfb2b42948aa328ed316b0407ab940baa937"} Jan 29 07:18:36 crc kubenswrapper[4826]: I0129 07:18:36.190585 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerStarted","Data":"d8cd3b515c3f79f8ee54f918524910942356a8f519f05d6543e0f77e254b72cc"} Jan 29 07:18:36 crc kubenswrapper[4826]: I0129 07:18:36.190615 4826 scope.go:117] "RemoveContainer" containerID="96af55d8aa0683eb5c7cbf6b77ee7119086cd3e7df0b1c777101e3bc473785a0" Jan 29 07:18:36 crc kubenswrapper[4826]: I0129 07:18:36.826682 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a" path="/var/lib/kubelet/pods/2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a/volumes" Jan 29 07:19:34 crc kubenswrapper[4826]: I0129 07:19:34.418341 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vj7pq"] Jan 29 07:19:34 crc kubenswrapper[4826]: E0129 07:19:34.419835 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a" containerName="extract-utilities" Jan 29 07:19:34 crc kubenswrapper[4826]: I0129 07:19:34.419867 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a" containerName="extract-utilities" Jan 29 07:19:34 crc kubenswrapper[4826]: E0129 07:19:34.419921 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a" containerName="registry-server" Jan 29 07:19:34 crc kubenswrapper[4826]: I0129 07:19:34.419942 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a" containerName="registry-server" Jan 29 07:19:34 crc kubenswrapper[4826]: E0129 07:19:34.419979 4826 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a" containerName="extract-content" Jan 29 07:19:34 crc kubenswrapper[4826]: I0129 07:19:34.419997 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a" containerName="extract-content" Jan 29 07:19:34 crc kubenswrapper[4826]: I0129 07:19:34.420399 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="2667e3c9-32d7-4fe9-80ac-9d1ac4ac5d6a" containerName="registry-server" Jan 29 07:19:34 crc kubenswrapper[4826]: I0129 07:19:34.422915 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vj7pq" Jan 29 07:19:34 crc kubenswrapper[4826]: I0129 07:19:34.432890 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vj7pq"] Jan 29 07:19:34 crc kubenswrapper[4826]: I0129 07:19:34.528085 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/162c1253-363e-4c47-b2ea-a8e498d13689-utilities\") pod \"redhat-marketplace-vj7pq\" (UID: \"162c1253-363e-4c47-b2ea-a8e498d13689\") " pod="openshift-marketplace/redhat-marketplace-vj7pq" Jan 29 07:19:34 crc kubenswrapper[4826]: I0129 07:19:34.528178 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/162c1253-363e-4c47-b2ea-a8e498d13689-catalog-content\") pod \"redhat-marketplace-vj7pq\" (UID: \"162c1253-363e-4c47-b2ea-a8e498d13689\") " pod="openshift-marketplace/redhat-marketplace-vj7pq" Jan 29 07:19:34 crc kubenswrapper[4826]: I0129 07:19:34.528226 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9jqm\" (UniqueName: \"kubernetes.io/projected/162c1253-363e-4c47-b2ea-a8e498d13689-kube-api-access-d9jqm\") pod \"redhat-marketplace-vj7pq\" (UID: 
\"162c1253-363e-4c47-b2ea-a8e498d13689\") " pod="openshift-marketplace/redhat-marketplace-vj7pq" Jan 29 07:19:34 crc kubenswrapper[4826]: I0129 07:19:34.629873 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/162c1253-363e-4c47-b2ea-a8e498d13689-catalog-content\") pod \"redhat-marketplace-vj7pq\" (UID: \"162c1253-363e-4c47-b2ea-a8e498d13689\") " pod="openshift-marketplace/redhat-marketplace-vj7pq" Jan 29 07:19:34 crc kubenswrapper[4826]: I0129 07:19:34.629955 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9jqm\" (UniqueName: \"kubernetes.io/projected/162c1253-363e-4c47-b2ea-a8e498d13689-kube-api-access-d9jqm\") pod \"redhat-marketplace-vj7pq\" (UID: \"162c1253-363e-4c47-b2ea-a8e498d13689\") " pod="openshift-marketplace/redhat-marketplace-vj7pq" Jan 29 07:19:34 crc kubenswrapper[4826]: I0129 07:19:34.630008 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/162c1253-363e-4c47-b2ea-a8e498d13689-utilities\") pod \"redhat-marketplace-vj7pq\" (UID: \"162c1253-363e-4c47-b2ea-a8e498d13689\") " pod="openshift-marketplace/redhat-marketplace-vj7pq" Jan 29 07:19:34 crc kubenswrapper[4826]: I0129 07:19:34.630637 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/162c1253-363e-4c47-b2ea-a8e498d13689-catalog-content\") pod \"redhat-marketplace-vj7pq\" (UID: \"162c1253-363e-4c47-b2ea-a8e498d13689\") " pod="openshift-marketplace/redhat-marketplace-vj7pq" Jan 29 07:19:34 crc kubenswrapper[4826]: I0129 07:19:34.630703 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/162c1253-363e-4c47-b2ea-a8e498d13689-utilities\") pod \"redhat-marketplace-vj7pq\" (UID: \"162c1253-363e-4c47-b2ea-a8e498d13689\") " 
pod="openshift-marketplace/redhat-marketplace-vj7pq" Jan 29 07:19:34 crc kubenswrapper[4826]: I0129 07:19:34.660391 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9jqm\" (UniqueName: \"kubernetes.io/projected/162c1253-363e-4c47-b2ea-a8e498d13689-kube-api-access-d9jqm\") pod \"redhat-marketplace-vj7pq\" (UID: \"162c1253-363e-4c47-b2ea-a8e498d13689\") " pod="openshift-marketplace/redhat-marketplace-vj7pq" Jan 29 07:19:34 crc kubenswrapper[4826]: I0129 07:19:34.786619 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vj7pq" Jan 29 07:19:35 crc kubenswrapper[4826]: I0129 07:19:35.075737 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vj7pq"] Jan 29 07:19:35 crc kubenswrapper[4826]: I0129 07:19:35.723133 4826 generic.go:334] "Generic (PLEG): container finished" podID="162c1253-363e-4c47-b2ea-a8e498d13689" containerID="b9380a030eefc78c1075693c89bf23d68e73ce108a5e397cc55ec605de9a6d59" exitCode=0 Jan 29 07:19:35 crc kubenswrapper[4826]: I0129 07:19:35.723254 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vj7pq" event={"ID":"162c1253-363e-4c47-b2ea-a8e498d13689","Type":"ContainerDied","Data":"b9380a030eefc78c1075693c89bf23d68e73ce108a5e397cc55ec605de9a6d59"} Jan 29 07:19:35 crc kubenswrapper[4826]: I0129 07:19:35.723542 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vj7pq" event={"ID":"162c1253-363e-4c47-b2ea-a8e498d13689","Type":"ContainerStarted","Data":"375c922536e176a053be33f6cdccdc32398c01eb204e01c2d19790f587ed1461"} Jan 29 07:19:36 crc kubenswrapper[4826]: I0129 07:19:36.738155 4826 generic.go:334] "Generic (PLEG): container finished" podID="162c1253-363e-4c47-b2ea-a8e498d13689" containerID="2c66d8d22bb3b82ceda8ad494a48547baf1fe2e91acf269eae311f5184cb6f8a" exitCode=0 Jan 29 07:19:36 crc 
kubenswrapper[4826]: I0129 07:19:36.738255 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vj7pq" event={"ID":"162c1253-363e-4c47-b2ea-a8e498d13689","Type":"ContainerDied","Data":"2c66d8d22bb3b82ceda8ad494a48547baf1fe2e91acf269eae311f5184cb6f8a"} Jan 29 07:19:36 crc kubenswrapper[4826]: I0129 07:19:36.828054 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-f4zzw"] Jan 29 07:19:36 crc kubenswrapper[4826]: I0129 07:19:36.831795 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f4zzw" Jan 29 07:19:36 crc kubenswrapper[4826]: I0129 07:19:36.842852 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f4zzw"] Jan 29 07:19:36 crc kubenswrapper[4826]: I0129 07:19:36.967901 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0e02b62-fd90-4733-853a-d8b6f561206f-catalog-content\") pod \"community-operators-f4zzw\" (UID: \"d0e02b62-fd90-4733-853a-d8b6f561206f\") " pod="openshift-marketplace/community-operators-f4zzw" Jan 29 07:19:36 crc kubenswrapper[4826]: I0129 07:19:36.968041 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0e02b62-fd90-4733-853a-d8b6f561206f-utilities\") pod \"community-operators-f4zzw\" (UID: \"d0e02b62-fd90-4733-853a-d8b6f561206f\") " pod="openshift-marketplace/community-operators-f4zzw" Jan 29 07:19:36 crc kubenswrapper[4826]: I0129 07:19:36.968170 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p45t\" (UniqueName: \"kubernetes.io/projected/d0e02b62-fd90-4733-853a-d8b6f561206f-kube-api-access-6p45t\") pod \"community-operators-f4zzw\" (UID: 
\"d0e02b62-fd90-4733-853a-d8b6f561206f\") " pod="openshift-marketplace/community-operators-f4zzw" Jan 29 07:19:37 crc kubenswrapper[4826]: I0129 07:19:37.098231 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6p45t\" (UniqueName: \"kubernetes.io/projected/d0e02b62-fd90-4733-853a-d8b6f561206f-kube-api-access-6p45t\") pod \"community-operators-f4zzw\" (UID: \"d0e02b62-fd90-4733-853a-d8b6f561206f\") " pod="openshift-marketplace/community-operators-f4zzw" Jan 29 07:19:37 crc kubenswrapper[4826]: I0129 07:19:37.098318 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0e02b62-fd90-4733-853a-d8b6f561206f-catalog-content\") pod \"community-operators-f4zzw\" (UID: \"d0e02b62-fd90-4733-853a-d8b6f561206f\") " pod="openshift-marketplace/community-operators-f4zzw" Jan 29 07:19:37 crc kubenswrapper[4826]: I0129 07:19:37.098406 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0e02b62-fd90-4733-853a-d8b6f561206f-utilities\") pod \"community-operators-f4zzw\" (UID: \"d0e02b62-fd90-4733-853a-d8b6f561206f\") " pod="openshift-marketplace/community-operators-f4zzw" Jan 29 07:19:37 crc kubenswrapper[4826]: I0129 07:19:37.098912 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0e02b62-fd90-4733-853a-d8b6f561206f-utilities\") pod \"community-operators-f4zzw\" (UID: \"d0e02b62-fd90-4733-853a-d8b6f561206f\") " pod="openshift-marketplace/community-operators-f4zzw" Jan 29 07:19:37 crc kubenswrapper[4826]: I0129 07:19:37.098978 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0e02b62-fd90-4733-853a-d8b6f561206f-catalog-content\") pod \"community-operators-f4zzw\" (UID: \"d0e02b62-fd90-4733-853a-d8b6f561206f\") 
" pod="openshift-marketplace/community-operators-f4zzw" Jan 29 07:19:37 crc kubenswrapper[4826]: I0129 07:19:37.122941 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6p45t\" (UniqueName: \"kubernetes.io/projected/d0e02b62-fd90-4733-853a-d8b6f561206f-kube-api-access-6p45t\") pod \"community-operators-f4zzw\" (UID: \"d0e02b62-fd90-4733-853a-d8b6f561206f\") " pod="openshift-marketplace/community-operators-f4zzw" Jan 29 07:19:37 crc kubenswrapper[4826]: I0129 07:19:37.165424 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f4zzw" Jan 29 07:19:37 crc kubenswrapper[4826]: I0129 07:19:37.477610 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f4zzw"] Jan 29 07:19:37 crc kubenswrapper[4826]: I0129 07:19:37.746570 4826 generic.go:334] "Generic (PLEG): container finished" podID="d0e02b62-fd90-4733-853a-d8b6f561206f" containerID="7d62c0d00dfc277de8400c47e25d7f53aebcac9a316673ce07b599789079e104" exitCode=0 Jan 29 07:19:37 crc kubenswrapper[4826]: I0129 07:19:37.747506 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4zzw" event={"ID":"d0e02b62-fd90-4733-853a-d8b6f561206f","Type":"ContainerDied","Data":"7d62c0d00dfc277de8400c47e25d7f53aebcac9a316673ce07b599789079e104"} Jan 29 07:19:37 crc kubenswrapper[4826]: I0129 07:19:37.747556 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4zzw" event={"ID":"d0e02b62-fd90-4733-853a-d8b6f561206f","Type":"ContainerStarted","Data":"11151258e65a5e14a3f4d34ca736f65164eca787003168b0b2b6985b4b0941ab"} Jan 29 07:19:37 crc kubenswrapper[4826]: I0129 07:19:37.750430 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vj7pq" 
event={"ID":"162c1253-363e-4c47-b2ea-a8e498d13689","Type":"ContainerStarted","Data":"00f4368ccf3c63bc56ebf75d70ee2d0b7715a1847eda07e29a31f513af051806"} Jan 29 07:19:37 crc kubenswrapper[4826]: I0129 07:19:37.794009 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vj7pq" podStartSLOduration=2.303539575 podStartE2EDuration="3.793981784s" podCreationTimestamp="2026-01-29 07:19:34 +0000 UTC" firstStartedPulling="2026-01-29 07:19:35.725424444 +0000 UTC m=+2159.587217553" lastFinishedPulling="2026-01-29 07:19:37.215866693 +0000 UTC m=+2161.077659762" observedRunningTime="2026-01-29 07:19:37.791115748 +0000 UTC m=+2161.652908857" watchObservedRunningTime="2026-01-29 07:19:37.793981784 +0000 UTC m=+2161.655774883" Jan 29 07:19:39 crc kubenswrapper[4826]: I0129 07:19:39.767333 4826 generic.go:334] "Generic (PLEG): container finished" podID="d0e02b62-fd90-4733-853a-d8b6f561206f" containerID="5e059516618c04e990c5024092a013d011a2e86540517635270b3c3e07f8c380" exitCode=0 Jan 29 07:19:39 crc kubenswrapper[4826]: I0129 07:19:39.767698 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4zzw" event={"ID":"d0e02b62-fd90-4733-853a-d8b6f561206f","Type":"ContainerDied","Data":"5e059516618c04e990c5024092a013d011a2e86540517635270b3c3e07f8c380"} Jan 29 07:19:41 crc kubenswrapper[4826]: I0129 07:19:41.799115 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4zzw" event={"ID":"d0e02b62-fd90-4733-853a-d8b6f561206f","Type":"ContainerStarted","Data":"f4781ddbac7b7785505b986b02c9b0d08ebfc841a7be948f346c07e44ee0d35f"} Jan 29 07:19:41 crc kubenswrapper[4826]: I0129 07:19:41.830587 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-f4zzw" podStartSLOduration=2.299551312 podStartE2EDuration="5.830572301s" podCreationTimestamp="2026-01-29 07:19:36 +0000 UTC" 
firstStartedPulling="2026-01-29 07:19:37.748952056 +0000 UTC m=+2161.610745125" lastFinishedPulling="2026-01-29 07:19:41.279973005 +0000 UTC m=+2165.141766114" observedRunningTime="2026-01-29 07:19:41.830007436 +0000 UTC m=+2165.691800505" watchObservedRunningTime="2026-01-29 07:19:41.830572301 +0000 UTC m=+2165.692365370" Jan 29 07:19:44 crc kubenswrapper[4826]: I0129 07:19:44.787511 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vj7pq" Jan 29 07:19:44 crc kubenswrapper[4826]: I0129 07:19:44.789149 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vj7pq" Jan 29 07:19:44 crc kubenswrapper[4826]: I0129 07:19:44.868677 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vj7pq" Jan 29 07:19:44 crc kubenswrapper[4826]: I0129 07:19:44.943694 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vj7pq" Jan 29 07:19:45 crc kubenswrapper[4826]: I0129 07:19:45.117838 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vj7pq"] Jan 29 07:19:46 crc kubenswrapper[4826]: I0129 07:19:46.838433 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vj7pq" podUID="162c1253-363e-4c47-b2ea-a8e498d13689" containerName="registry-server" containerID="cri-o://00f4368ccf3c63bc56ebf75d70ee2d0b7715a1847eda07e29a31f513af051806" gracePeriod=2 Jan 29 07:19:47 crc kubenswrapper[4826]: I0129 07:19:47.190125 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-f4zzw" Jan 29 07:19:47 crc kubenswrapper[4826]: I0129 07:19:47.190180 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-f4zzw" Jan 29 07:19:47 crc kubenswrapper[4826]: I0129 07:19:47.270788 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-f4zzw" Jan 29 07:19:47 crc kubenswrapper[4826]: I0129 07:19:47.820393 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vj7pq" Jan 29 07:19:47 crc kubenswrapper[4826]: I0129 07:19:47.865148 4826 generic.go:334] "Generic (PLEG): container finished" podID="162c1253-363e-4c47-b2ea-a8e498d13689" containerID="00f4368ccf3c63bc56ebf75d70ee2d0b7715a1847eda07e29a31f513af051806" exitCode=0 Jan 29 07:19:47 crc kubenswrapper[4826]: I0129 07:19:47.865203 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vj7pq" event={"ID":"162c1253-363e-4c47-b2ea-a8e498d13689","Type":"ContainerDied","Data":"00f4368ccf3c63bc56ebf75d70ee2d0b7715a1847eda07e29a31f513af051806"} Jan 29 07:19:47 crc kubenswrapper[4826]: I0129 07:19:47.865307 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vj7pq" event={"ID":"162c1253-363e-4c47-b2ea-a8e498d13689","Type":"ContainerDied","Data":"375c922536e176a053be33f6cdccdc32398c01eb204e01c2d19790f587ed1461"} Jan 29 07:19:47 crc kubenswrapper[4826]: I0129 07:19:47.865336 4826 scope.go:117] "RemoveContainer" containerID="00f4368ccf3c63bc56ebf75d70ee2d0b7715a1847eda07e29a31f513af051806" Jan 29 07:19:47 crc kubenswrapper[4826]: I0129 07:19:47.865451 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vj7pq" Jan 29 07:19:47 crc kubenswrapper[4826]: I0129 07:19:47.891466 4826 scope.go:117] "RemoveContainer" containerID="2c66d8d22bb3b82ceda8ad494a48547baf1fe2e91acf269eae311f5184cb6f8a" Jan 29 07:19:47 crc kubenswrapper[4826]: I0129 07:19:47.929987 4826 scope.go:117] "RemoveContainer" containerID="b9380a030eefc78c1075693c89bf23d68e73ce108a5e397cc55ec605de9a6d59" Jan 29 07:19:47 crc kubenswrapper[4826]: I0129 07:19:47.937513 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-f4zzw" Jan 29 07:19:47 crc kubenswrapper[4826]: I0129 07:19:47.951919 4826 scope.go:117] "RemoveContainer" containerID="00f4368ccf3c63bc56ebf75d70ee2d0b7715a1847eda07e29a31f513af051806" Jan 29 07:19:47 crc kubenswrapper[4826]: E0129 07:19:47.952401 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00f4368ccf3c63bc56ebf75d70ee2d0b7715a1847eda07e29a31f513af051806\": container with ID starting with 00f4368ccf3c63bc56ebf75d70ee2d0b7715a1847eda07e29a31f513af051806 not found: ID does not exist" containerID="00f4368ccf3c63bc56ebf75d70ee2d0b7715a1847eda07e29a31f513af051806" Jan 29 07:19:47 crc kubenswrapper[4826]: I0129 07:19:47.952445 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00f4368ccf3c63bc56ebf75d70ee2d0b7715a1847eda07e29a31f513af051806"} err="failed to get container status \"00f4368ccf3c63bc56ebf75d70ee2d0b7715a1847eda07e29a31f513af051806\": rpc error: code = NotFound desc = could not find container \"00f4368ccf3c63bc56ebf75d70ee2d0b7715a1847eda07e29a31f513af051806\": container with ID starting with 00f4368ccf3c63bc56ebf75d70ee2d0b7715a1847eda07e29a31f513af051806 not found: ID does not exist" Jan 29 07:19:47 crc kubenswrapper[4826]: I0129 07:19:47.952476 4826 scope.go:117] "RemoveContainer" 
containerID="2c66d8d22bb3b82ceda8ad494a48547baf1fe2e91acf269eae311f5184cb6f8a" Jan 29 07:19:47 crc kubenswrapper[4826]: E0129 07:19:47.952824 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c66d8d22bb3b82ceda8ad494a48547baf1fe2e91acf269eae311f5184cb6f8a\": container with ID starting with 2c66d8d22bb3b82ceda8ad494a48547baf1fe2e91acf269eae311f5184cb6f8a not found: ID does not exist" containerID="2c66d8d22bb3b82ceda8ad494a48547baf1fe2e91acf269eae311f5184cb6f8a" Jan 29 07:19:47 crc kubenswrapper[4826]: I0129 07:19:47.952862 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c66d8d22bb3b82ceda8ad494a48547baf1fe2e91acf269eae311f5184cb6f8a"} err="failed to get container status \"2c66d8d22bb3b82ceda8ad494a48547baf1fe2e91acf269eae311f5184cb6f8a\": rpc error: code = NotFound desc = could not find container \"2c66d8d22bb3b82ceda8ad494a48547baf1fe2e91acf269eae311f5184cb6f8a\": container with ID starting with 2c66d8d22bb3b82ceda8ad494a48547baf1fe2e91acf269eae311f5184cb6f8a not found: ID does not exist" Jan 29 07:19:47 crc kubenswrapper[4826]: I0129 07:19:47.952884 4826 scope.go:117] "RemoveContainer" containerID="b9380a030eefc78c1075693c89bf23d68e73ce108a5e397cc55ec605de9a6d59" Jan 29 07:19:47 crc kubenswrapper[4826]: E0129 07:19:47.953336 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9380a030eefc78c1075693c89bf23d68e73ce108a5e397cc55ec605de9a6d59\": container with ID starting with b9380a030eefc78c1075693c89bf23d68e73ce108a5e397cc55ec605de9a6d59 not found: ID does not exist" containerID="b9380a030eefc78c1075693c89bf23d68e73ce108a5e397cc55ec605de9a6d59" Jan 29 07:19:47 crc kubenswrapper[4826]: I0129 07:19:47.953379 4826 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b9380a030eefc78c1075693c89bf23d68e73ce108a5e397cc55ec605de9a6d59"} err="failed to get container status \"b9380a030eefc78c1075693c89bf23d68e73ce108a5e397cc55ec605de9a6d59\": rpc error: code = NotFound desc = could not find container \"b9380a030eefc78c1075693c89bf23d68e73ce108a5e397cc55ec605de9a6d59\": container with ID starting with b9380a030eefc78c1075693c89bf23d68e73ce108a5e397cc55ec605de9a6d59 not found: ID does not exist" Jan 29 07:19:47 crc kubenswrapper[4826]: I0129 07:19:47.981097 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9jqm\" (UniqueName: \"kubernetes.io/projected/162c1253-363e-4c47-b2ea-a8e498d13689-kube-api-access-d9jqm\") pod \"162c1253-363e-4c47-b2ea-a8e498d13689\" (UID: \"162c1253-363e-4c47-b2ea-a8e498d13689\") " Jan 29 07:19:47 crc kubenswrapper[4826]: I0129 07:19:47.981276 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/162c1253-363e-4c47-b2ea-a8e498d13689-utilities\") pod \"162c1253-363e-4c47-b2ea-a8e498d13689\" (UID: \"162c1253-363e-4c47-b2ea-a8e498d13689\") " Jan 29 07:19:47 crc kubenswrapper[4826]: I0129 07:19:47.981425 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/162c1253-363e-4c47-b2ea-a8e498d13689-catalog-content\") pod \"162c1253-363e-4c47-b2ea-a8e498d13689\" (UID: \"162c1253-363e-4c47-b2ea-a8e498d13689\") " Jan 29 07:19:47 crc kubenswrapper[4826]: I0129 07:19:47.982682 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/162c1253-363e-4c47-b2ea-a8e498d13689-utilities" (OuterVolumeSpecName: "utilities") pod "162c1253-363e-4c47-b2ea-a8e498d13689" (UID: "162c1253-363e-4c47-b2ea-a8e498d13689"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:19:47 crc kubenswrapper[4826]: I0129 07:19:47.990870 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/162c1253-363e-4c47-b2ea-a8e498d13689-kube-api-access-d9jqm" (OuterVolumeSpecName: "kube-api-access-d9jqm") pod "162c1253-363e-4c47-b2ea-a8e498d13689" (UID: "162c1253-363e-4c47-b2ea-a8e498d13689"). InnerVolumeSpecName "kube-api-access-d9jqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:19:48 crc kubenswrapper[4826]: I0129 07:19:48.010571 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/162c1253-363e-4c47-b2ea-a8e498d13689-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "162c1253-363e-4c47-b2ea-a8e498d13689" (UID: "162c1253-363e-4c47-b2ea-a8e498d13689"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:19:48 crc kubenswrapper[4826]: I0129 07:19:48.082831 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/162c1253-363e-4c47-b2ea-a8e498d13689-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 07:19:48 crc kubenswrapper[4826]: I0129 07:19:48.082868 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9jqm\" (UniqueName: \"kubernetes.io/projected/162c1253-363e-4c47-b2ea-a8e498d13689-kube-api-access-d9jqm\") on node \"crc\" DevicePath \"\"" Jan 29 07:19:48 crc kubenswrapper[4826]: I0129 07:19:48.082883 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/162c1253-363e-4c47-b2ea-a8e498d13689-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 07:19:48 crc kubenswrapper[4826]: I0129 07:19:48.198620 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vj7pq"] Jan 29 07:19:48 crc kubenswrapper[4826]: I0129 
07:19:48.207459 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vj7pq"] Jan 29 07:19:48 crc kubenswrapper[4826]: I0129 07:19:48.826935 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="162c1253-363e-4c47-b2ea-a8e498d13689" path="/var/lib/kubelet/pods/162c1253-363e-4c47-b2ea-a8e498d13689/volumes" Jan 29 07:19:50 crc kubenswrapper[4826]: I0129 07:19:50.322668 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f4zzw"] Jan 29 07:19:50 crc kubenswrapper[4826]: I0129 07:19:50.323481 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-f4zzw" podUID="d0e02b62-fd90-4733-853a-d8b6f561206f" containerName="registry-server" containerID="cri-o://f4781ddbac7b7785505b986b02c9b0d08ebfc841a7be948f346c07e44ee0d35f" gracePeriod=2 Jan 29 07:19:50 crc kubenswrapper[4826]: I0129 07:19:50.802472 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f4zzw" Jan 29 07:19:50 crc kubenswrapper[4826]: I0129 07:19:50.899731 4826 generic.go:334] "Generic (PLEG): container finished" podID="d0e02b62-fd90-4733-853a-d8b6f561206f" containerID="f4781ddbac7b7785505b986b02c9b0d08ebfc841a7be948f346c07e44ee0d35f" exitCode=0 Jan 29 07:19:50 crc kubenswrapper[4826]: I0129 07:19:50.899778 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4zzw" event={"ID":"d0e02b62-fd90-4733-853a-d8b6f561206f","Type":"ContainerDied","Data":"f4781ddbac7b7785505b986b02c9b0d08ebfc841a7be948f346c07e44ee0d35f"} Jan 29 07:19:50 crc kubenswrapper[4826]: I0129 07:19:50.899805 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4zzw" event={"ID":"d0e02b62-fd90-4733-853a-d8b6f561206f","Type":"ContainerDied","Data":"11151258e65a5e14a3f4d34ca736f65164eca787003168b0b2b6985b4b0941ab"} Jan 29 07:19:50 crc kubenswrapper[4826]: I0129 07:19:50.899855 4826 scope.go:117] "RemoveContainer" containerID="f4781ddbac7b7785505b986b02c9b0d08ebfc841a7be948f346c07e44ee0d35f" Jan 29 07:19:50 crc kubenswrapper[4826]: I0129 07:19:50.899925 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f4zzw" Jan 29 07:19:50 crc kubenswrapper[4826]: I0129 07:19:50.940525 4826 scope.go:117] "RemoveContainer" containerID="5e059516618c04e990c5024092a013d011a2e86540517635270b3c3e07f8c380" Jan 29 07:19:50 crc kubenswrapper[4826]: I0129 07:19:50.940589 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0e02b62-fd90-4733-853a-d8b6f561206f-catalog-content\") pod \"d0e02b62-fd90-4733-853a-d8b6f561206f\" (UID: \"d0e02b62-fd90-4733-853a-d8b6f561206f\") " Jan 29 07:19:50 crc kubenswrapper[4826]: I0129 07:19:50.941208 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0e02b62-fd90-4733-853a-d8b6f561206f-utilities\") pod \"d0e02b62-fd90-4733-853a-d8b6f561206f\" (UID: \"d0e02b62-fd90-4733-853a-d8b6f561206f\") " Jan 29 07:19:50 crc kubenswrapper[4826]: I0129 07:19:50.941352 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6p45t\" (UniqueName: \"kubernetes.io/projected/d0e02b62-fd90-4733-853a-d8b6f561206f-kube-api-access-6p45t\") pod \"d0e02b62-fd90-4733-853a-d8b6f561206f\" (UID: \"d0e02b62-fd90-4733-853a-d8b6f561206f\") " Jan 29 07:19:50 crc kubenswrapper[4826]: I0129 07:19:50.943053 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0e02b62-fd90-4733-853a-d8b6f561206f-utilities" (OuterVolumeSpecName: "utilities") pod "d0e02b62-fd90-4733-853a-d8b6f561206f" (UID: "d0e02b62-fd90-4733-853a-d8b6f561206f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:19:50 crc kubenswrapper[4826]: I0129 07:19:50.958733 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0e02b62-fd90-4733-853a-d8b6f561206f-kube-api-access-6p45t" (OuterVolumeSpecName: "kube-api-access-6p45t") pod "d0e02b62-fd90-4733-853a-d8b6f561206f" (UID: "d0e02b62-fd90-4733-853a-d8b6f561206f"). InnerVolumeSpecName "kube-api-access-6p45t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:19:51 crc kubenswrapper[4826]: I0129 07:19:51.009497 4826 scope.go:117] "RemoveContainer" containerID="7d62c0d00dfc277de8400c47e25d7f53aebcac9a316673ce07b599789079e104" Jan 29 07:19:51 crc kubenswrapper[4826]: I0129 07:19:51.049389 4826 scope.go:117] "RemoveContainer" containerID="f4781ddbac7b7785505b986b02c9b0d08ebfc841a7be948f346c07e44ee0d35f" Jan 29 07:19:51 crc kubenswrapper[4826]: I0129 07:19:51.049953 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0e02b62-fd90-4733-853a-d8b6f561206f-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 07:19:51 crc kubenswrapper[4826]: I0129 07:19:51.049966 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6p45t\" (UniqueName: \"kubernetes.io/projected/d0e02b62-fd90-4733-853a-d8b6f561206f-kube-api-access-6p45t\") on node \"crc\" DevicePath \"\"" Jan 29 07:19:51 crc kubenswrapper[4826]: E0129 07:19:51.053954 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4781ddbac7b7785505b986b02c9b0d08ebfc841a7be948f346c07e44ee0d35f\": container with ID starting with f4781ddbac7b7785505b986b02c9b0d08ebfc841a7be948f346c07e44ee0d35f not found: ID does not exist" containerID="f4781ddbac7b7785505b986b02c9b0d08ebfc841a7be948f346c07e44ee0d35f" Jan 29 07:19:51 crc kubenswrapper[4826]: I0129 07:19:51.053989 4826 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"f4781ddbac7b7785505b986b02c9b0d08ebfc841a7be948f346c07e44ee0d35f"} err="failed to get container status \"f4781ddbac7b7785505b986b02c9b0d08ebfc841a7be948f346c07e44ee0d35f\": rpc error: code = NotFound desc = could not find container \"f4781ddbac7b7785505b986b02c9b0d08ebfc841a7be948f346c07e44ee0d35f\": container with ID starting with f4781ddbac7b7785505b986b02c9b0d08ebfc841a7be948f346c07e44ee0d35f not found: ID does not exist" Jan 29 07:19:51 crc kubenswrapper[4826]: I0129 07:19:51.054008 4826 scope.go:117] "RemoveContainer" containerID="5e059516618c04e990c5024092a013d011a2e86540517635270b3c3e07f8c380" Jan 29 07:19:51 crc kubenswrapper[4826]: E0129 07:19:51.060369 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e059516618c04e990c5024092a013d011a2e86540517635270b3c3e07f8c380\": container with ID starting with 5e059516618c04e990c5024092a013d011a2e86540517635270b3c3e07f8c380 not found: ID does not exist" containerID="5e059516618c04e990c5024092a013d011a2e86540517635270b3c3e07f8c380" Jan 29 07:19:51 crc kubenswrapper[4826]: I0129 07:19:51.060407 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e059516618c04e990c5024092a013d011a2e86540517635270b3c3e07f8c380"} err="failed to get container status \"5e059516618c04e990c5024092a013d011a2e86540517635270b3c3e07f8c380\": rpc error: code = NotFound desc = could not find container \"5e059516618c04e990c5024092a013d011a2e86540517635270b3c3e07f8c380\": container with ID starting with 5e059516618c04e990c5024092a013d011a2e86540517635270b3c3e07f8c380 not found: ID does not exist" Jan 29 07:19:51 crc kubenswrapper[4826]: I0129 07:19:51.060423 4826 scope.go:117] "RemoveContainer" containerID="7d62c0d00dfc277de8400c47e25d7f53aebcac9a316673ce07b599789079e104" Jan 29 07:19:51 crc kubenswrapper[4826]: E0129 07:19:51.073441 4826 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d62c0d00dfc277de8400c47e25d7f53aebcac9a316673ce07b599789079e104\": container with ID starting with 7d62c0d00dfc277de8400c47e25d7f53aebcac9a316673ce07b599789079e104 not found: ID does not exist" containerID="7d62c0d00dfc277de8400c47e25d7f53aebcac9a316673ce07b599789079e104" Jan 29 07:19:51 crc kubenswrapper[4826]: I0129 07:19:51.073487 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d62c0d00dfc277de8400c47e25d7f53aebcac9a316673ce07b599789079e104"} err="failed to get container status \"7d62c0d00dfc277de8400c47e25d7f53aebcac9a316673ce07b599789079e104\": rpc error: code = NotFound desc = could not find container \"7d62c0d00dfc277de8400c47e25d7f53aebcac9a316673ce07b599789079e104\": container with ID starting with 7d62c0d00dfc277de8400c47e25d7f53aebcac9a316673ce07b599789079e104 not found: ID does not exist" Jan 29 07:19:51 crc kubenswrapper[4826]: I0129 07:19:51.073776 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0e02b62-fd90-4733-853a-d8b6f561206f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d0e02b62-fd90-4733-853a-d8b6f561206f" (UID: "d0e02b62-fd90-4733-853a-d8b6f561206f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:19:51 crc kubenswrapper[4826]: I0129 07:19:51.150791 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0e02b62-fd90-4733-853a-d8b6f561206f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 07:19:51 crc kubenswrapper[4826]: I0129 07:19:51.245658 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f4zzw"] Jan 29 07:19:51 crc kubenswrapper[4826]: I0129 07:19:51.251894 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-f4zzw"] Jan 29 07:19:52 crc kubenswrapper[4826]: I0129 07:19:52.827029 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0e02b62-fd90-4733-853a-d8b6f561206f" path="/var/lib/kubelet/pods/d0e02b62-fd90-4733-853a-d8b6f561206f/volumes" Jan 29 07:21:05 crc kubenswrapper[4826]: I0129 07:21:05.656423 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:21:05 crc kubenswrapper[4826]: I0129 07:21:05.657039 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:21:35 crc kubenswrapper[4826]: I0129 07:21:35.656378 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Jan 29 07:21:35 crc kubenswrapper[4826]: I0129 07:21:35.657403 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:22:05 crc kubenswrapper[4826]: I0129 07:22:05.656814 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:22:05 crc kubenswrapper[4826]: I0129 07:22:05.657611 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:22:05 crc kubenswrapper[4826]: I0129 07:22:05.657683 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" Jan 29 07:22:05 crc kubenswrapper[4826]: I0129 07:22:05.658665 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d8cd3b515c3f79f8ee54f918524910942356a8f519f05d6543e0f77e254b72cc"} pod="openshift-machine-config-operator/machine-config-daemon-llzmh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 07:22:05 crc kubenswrapper[4826]: I0129 07:22:05.658768 4826 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" containerID="cri-o://d8cd3b515c3f79f8ee54f918524910942356a8f519f05d6543e0f77e254b72cc" gracePeriod=600 Jan 29 07:22:05 crc kubenswrapper[4826]: E0129 07:22:05.787924 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:22:06 crc kubenswrapper[4826]: I0129 07:22:06.172463 4826 generic.go:334] "Generic (PLEG): container finished" podID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerID="d8cd3b515c3f79f8ee54f918524910942356a8f519f05d6543e0f77e254b72cc" exitCode=0 Jan 29 07:22:06 crc kubenswrapper[4826]: I0129 07:22:06.172526 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerDied","Data":"d8cd3b515c3f79f8ee54f918524910942356a8f519f05d6543e0f77e254b72cc"} Jan 29 07:22:06 crc kubenswrapper[4826]: I0129 07:22:06.172576 4826 scope.go:117] "RemoveContainer" containerID="285c448faefe70cc4e0b94ee6286bfb2b42948aa328ed316b0407ab940baa937" Jan 29 07:22:06 crc kubenswrapper[4826]: I0129 07:22:06.173720 4826 scope.go:117] "RemoveContainer" containerID="d8cd3b515c3f79f8ee54f918524910942356a8f519f05d6543e0f77e254b72cc" Jan 29 07:22:06 crc kubenswrapper[4826]: E0129 07:22:06.174332 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:22:17 crc kubenswrapper[4826]: I0129 07:22:17.809641 4826 scope.go:117] "RemoveContainer" containerID="d8cd3b515c3f79f8ee54f918524910942356a8f519f05d6543e0f77e254b72cc" Jan 29 07:22:17 crc kubenswrapper[4826]: E0129 07:22:17.810797 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:22:30 crc kubenswrapper[4826]: I0129 07:22:30.809563 4826 scope.go:117] "RemoveContainer" containerID="d8cd3b515c3f79f8ee54f918524910942356a8f519f05d6543e0f77e254b72cc" Jan 29 07:22:30 crc kubenswrapper[4826]: E0129 07:22:30.810407 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:22:44 crc kubenswrapper[4826]: I0129 07:22:44.809627 4826 scope.go:117] "RemoveContainer" containerID="d8cd3b515c3f79f8ee54f918524910942356a8f519f05d6543e0f77e254b72cc" Jan 29 07:22:44 crc kubenswrapper[4826]: E0129 07:22:44.810738 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:22:58 crc kubenswrapper[4826]: I0129 07:22:58.808961 4826 scope.go:117] "RemoveContainer" containerID="d8cd3b515c3f79f8ee54f918524910942356a8f519f05d6543e0f77e254b72cc" Jan 29 07:22:58 crc kubenswrapper[4826]: E0129 07:22:58.810049 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:23:11 crc kubenswrapper[4826]: I0129 07:23:11.808475 4826 scope.go:117] "RemoveContainer" containerID="d8cd3b515c3f79f8ee54f918524910942356a8f519f05d6543e0f77e254b72cc" Jan 29 07:23:11 crc kubenswrapper[4826]: E0129 07:23:11.809069 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:23:26 crc kubenswrapper[4826]: I0129 07:23:26.817053 4826 scope.go:117] "RemoveContainer" containerID="d8cd3b515c3f79f8ee54f918524910942356a8f519f05d6543e0f77e254b72cc" Jan 29 07:23:26 crc kubenswrapper[4826]: E0129 07:23:26.818820 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:23:39 crc kubenswrapper[4826]: I0129 07:23:39.809119 4826 scope.go:117] "RemoveContainer" containerID="d8cd3b515c3f79f8ee54f918524910942356a8f519f05d6543e0f77e254b72cc" Jan 29 07:23:39 crc kubenswrapper[4826]: E0129 07:23:39.810119 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:23:53 crc kubenswrapper[4826]: I0129 07:23:53.809082 4826 scope.go:117] "RemoveContainer" containerID="d8cd3b515c3f79f8ee54f918524910942356a8f519f05d6543e0f77e254b72cc" Jan 29 07:23:53 crc kubenswrapper[4826]: E0129 07:23:53.809875 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:24:06 crc kubenswrapper[4826]: I0129 07:24:06.818674 4826 scope.go:117] "RemoveContainer" containerID="d8cd3b515c3f79f8ee54f918524910942356a8f519f05d6543e0f77e254b72cc" Jan 29 07:24:06 crc kubenswrapper[4826]: E0129 07:24:06.819906 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:24:20 crc kubenswrapper[4826]: I0129 07:24:20.809380 4826 scope.go:117] "RemoveContainer" containerID="d8cd3b515c3f79f8ee54f918524910942356a8f519f05d6543e0f77e254b72cc" Jan 29 07:24:20 crc kubenswrapper[4826]: E0129 07:24:20.810177 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:24:33 crc kubenswrapper[4826]: I0129 07:24:33.808987 4826 scope.go:117] "RemoveContainer" containerID="d8cd3b515c3f79f8ee54f918524910942356a8f519f05d6543e0f77e254b72cc" Jan 29 07:24:33 crc kubenswrapper[4826]: E0129 07:24:33.809898 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:24:47 crc kubenswrapper[4826]: I0129 07:24:47.809244 4826 scope.go:117] "RemoveContainer" containerID="d8cd3b515c3f79f8ee54f918524910942356a8f519f05d6543e0f77e254b72cc" Jan 29 07:24:47 crc kubenswrapper[4826]: E0129 07:24:47.810223 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:25:01 crc kubenswrapper[4826]: I0129 07:25:01.809422 4826 scope.go:117] "RemoveContainer" containerID="d8cd3b515c3f79f8ee54f918524910942356a8f519f05d6543e0f77e254b72cc" Jan 29 07:25:01 crc kubenswrapper[4826]: E0129 07:25:01.810581 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:25:13 crc kubenswrapper[4826]: I0129 07:25:13.808515 4826 scope.go:117] "RemoveContainer" containerID="d8cd3b515c3f79f8ee54f918524910942356a8f519f05d6543e0f77e254b72cc" Jan 29 07:25:13 crc kubenswrapper[4826]: E0129 07:25:13.809197 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:25:28 crc kubenswrapper[4826]: I0129 07:25:28.809263 4826 scope.go:117] "RemoveContainer" containerID="d8cd3b515c3f79f8ee54f918524910942356a8f519f05d6543e0f77e254b72cc" Jan 29 07:25:28 crc kubenswrapper[4826]: E0129 07:25:28.810341 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:25:43 crc kubenswrapper[4826]: I0129 07:25:43.810218 4826 scope.go:117] "RemoveContainer" containerID="d8cd3b515c3f79f8ee54f918524910942356a8f519f05d6543e0f77e254b72cc" Jan 29 07:25:43 crc kubenswrapper[4826]: E0129 07:25:43.811632 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:25:56 crc kubenswrapper[4826]: I0129 07:25:56.812765 4826 scope.go:117] "RemoveContainer" containerID="d8cd3b515c3f79f8ee54f918524910942356a8f519f05d6543e0f77e254b72cc" Jan 29 07:25:56 crc kubenswrapper[4826]: E0129 07:25:56.813624 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:26:11 crc kubenswrapper[4826]: I0129 07:26:11.809495 4826 scope.go:117] "RemoveContainer" containerID="d8cd3b515c3f79f8ee54f918524910942356a8f519f05d6543e0f77e254b72cc" Jan 29 07:26:11 crc kubenswrapper[4826]: E0129 07:26:11.810803 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:26:26 crc kubenswrapper[4826]: I0129 07:26:26.816975 4826 scope.go:117] "RemoveContainer" containerID="d8cd3b515c3f79f8ee54f918524910942356a8f519f05d6543e0f77e254b72cc" Jan 29 07:26:26 crc kubenswrapper[4826]: E0129 07:26:26.818343 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:26:41 crc kubenswrapper[4826]: I0129 07:26:41.809490 4826 scope.go:117] "RemoveContainer" containerID="d8cd3b515c3f79f8ee54f918524910942356a8f519f05d6543e0f77e254b72cc" Jan 29 07:26:41 crc kubenswrapper[4826]: E0129 07:26:41.810534 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:26:54 crc kubenswrapper[4826]: I0129 07:26:54.812123 4826 scope.go:117] "RemoveContainer" containerID="d8cd3b515c3f79f8ee54f918524910942356a8f519f05d6543e0f77e254b72cc" Jan 29 07:26:54 crc kubenswrapper[4826]: E0129 07:26:54.813288 4826 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:27:07 crc kubenswrapper[4826]: I0129 07:27:07.809673 4826 scope.go:117] "RemoveContainer" containerID="d8cd3b515c3f79f8ee54f918524910942356a8f519f05d6543e0f77e254b72cc" Jan 29 07:27:09 crc kubenswrapper[4826]: I0129 07:27:09.055538 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerStarted","Data":"cdf0954d547df06d481856ed121f2f9ee617f65f5bcb0c58b6a4636bb42863aa"} Jan 29 07:27:37 crc kubenswrapper[4826]: I0129 07:27:37.557368 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xfsc2"] Jan 29 07:27:37 crc kubenswrapper[4826]: E0129 07:27:37.558077 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="162c1253-363e-4c47-b2ea-a8e498d13689" containerName="registry-server" Jan 29 07:27:37 crc kubenswrapper[4826]: I0129 07:27:37.558092 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="162c1253-363e-4c47-b2ea-a8e498d13689" containerName="registry-server" Jan 29 07:27:37 crc kubenswrapper[4826]: E0129 07:27:37.558104 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0e02b62-fd90-4733-853a-d8b6f561206f" containerName="registry-server" Jan 29 07:27:37 crc kubenswrapper[4826]: I0129 07:27:37.558112 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0e02b62-fd90-4733-853a-d8b6f561206f" containerName="registry-server" Jan 29 07:27:37 crc kubenswrapper[4826]: E0129 07:27:37.558129 4826 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d0e02b62-fd90-4733-853a-d8b6f561206f" containerName="extract-content" Jan 29 07:27:37 crc kubenswrapper[4826]: I0129 07:27:37.558137 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0e02b62-fd90-4733-853a-d8b6f561206f" containerName="extract-content" Jan 29 07:27:37 crc kubenswrapper[4826]: E0129 07:27:37.558155 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="162c1253-363e-4c47-b2ea-a8e498d13689" containerName="extract-utilities" Jan 29 07:27:37 crc kubenswrapper[4826]: I0129 07:27:37.558163 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="162c1253-363e-4c47-b2ea-a8e498d13689" containerName="extract-utilities" Jan 29 07:27:37 crc kubenswrapper[4826]: E0129 07:27:37.558174 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0e02b62-fd90-4733-853a-d8b6f561206f" containerName="extract-utilities" Jan 29 07:27:37 crc kubenswrapper[4826]: I0129 07:27:37.558181 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0e02b62-fd90-4733-853a-d8b6f561206f" containerName="extract-utilities" Jan 29 07:27:37 crc kubenswrapper[4826]: E0129 07:27:37.558195 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="162c1253-363e-4c47-b2ea-a8e498d13689" containerName="extract-content" Jan 29 07:27:37 crc kubenswrapper[4826]: I0129 07:27:37.558202 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="162c1253-363e-4c47-b2ea-a8e498d13689" containerName="extract-content" Jan 29 07:27:37 crc kubenswrapper[4826]: I0129 07:27:37.558388 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="162c1253-363e-4c47-b2ea-a8e498d13689" containerName="registry-server" Jan 29 07:27:37 crc kubenswrapper[4826]: I0129 07:27:37.558410 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0e02b62-fd90-4733-853a-d8b6f561206f" containerName="registry-server" Jan 29 07:27:37 crc kubenswrapper[4826]: I0129 07:27:37.559633 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xfsc2" Jan 29 07:27:37 crc kubenswrapper[4826]: I0129 07:27:37.571935 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xfsc2"] Jan 29 07:27:37 crc kubenswrapper[4826]: I0129 07:27:37.611828 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9634541d-32d4-474f-a220-897fd4af2757-utilities\") pod \"redhat-operators-xfsc2\" (UID: \"9634541d-32d4-474f-a220-897fd4af2757\") " pod="openshift-marketplace/redhat-operators-xfsc2" Jan 29 07:27:37 crc kubenswrapper[4826]: I0129 07:27:37.611931 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssnjq\" (UniqueName: \"kubernetes.io/projected/9634541d-32d4-474f-a220-897fd4af2757-kube-api-access-ssnjq\") pod \"redhat-operators-xfsc2\" (UID: \"9634541d-32d4-474f-a220-897fd4af2757\") " pod="openshift-marketplace/redhat-operators-xfsc2" Jan 29 07:27:37 crc kubenswrapper[4826]: I0129 07:27:37.612004 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9634541d-32d4-474f-a220-897fd4af2757-catalog-content\") pod \"redhat-operators-xfsc2\" (UID: \"9634541d-32d4-474f-a220-897fd4af2757\") " pod="openshift-marketplace/redhat-operators-xfsc2" Jan 29 07:27:37 crc kubenswrapper[4826]: I0129 07:27:37.713456 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9634541d-32d4-474f-a220-897fd4af2757-utilities\") pod \"redhat-operators-xfsc2\" (UID: \"9634541d-32d4-474f-a220-897fd4af2757\") " pod="openshift-marketplace/redhat-operators-xfsc2" Jan 29 07:27:37 crc kubenswrapper[4826]: I0129 07:27:37.713536 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-ssnjq\" (UniqueName: \"kubernetes.io/projected/9634541d-32d4-474f-a220-897fd4af2757-kube-api-access-ssnjq\") pod \"redhat-operators-xfsc2\" (UID: \"9634541d-32d4-474f-a220-897fd4af2757\") " pod="openshift-marketplace/redhat-operators-xfsc2" Jan 29 07:27:37 crc kubenswrapper[4826]: I0129 07:27:37.713587 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9634541d-32d4-474f-a220-897fd4af2757-catalog-content\") pod \"redhat-operators-xfsc2\" (UID: \"9634541d-32d4-474f-a220-897fd4af2757\") " pod="openshift-marketplace/redhat-operators-xfsc2" Jan 29 07:27:37 crc kubenswrapper[4826]: I0129 07:27:37.714012 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9634541d-32d4-474f-a220-897fd4af2757-utilities\") pod \"redhat-operators-xfsc2\" (UID: \"9634541d-32d4-474f-a220-897fd4af2757\") " pod="openshift-marketplace/redhat-operators-xfsc2" Jan 29 07:27:37 crc kubenswrapper[4826]: I0129 07:27:37.714067 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9634541d-32d4-474f-a220-897fd4af2757-catalog-content\") pod \"redhat-operators-xfsc2\" (UID: \"9634541d-32d4-474f-a220-897fd4af2757\") " pod="openshift-marketplace/redhat-operators-xfsc2" Jan 29 07:27:37 crc kubenswrapper[4826]: I0129 07:27:37.733098 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssnjq\" (UniqueName: \"kubernetes.io/projected/9634541d-32d4-474f-a220-897fd4af2757-kube-api-access-ssnjq\") pod \"redhat-operators-xfsc2\" (UID: \"9634541d-32d4-474f-a220-897fd4af2757\") " pod="openshift-marketplace/redhat-operators-xfsc2" Jan 29 07:27:37 crc kubenswrapper[4826]: I0129 07:27:37.889694 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xfsc2" Jan 29 07:27:38 crc kubenswrapper[4826]: I0129 07:27:38.324617 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xfsc2"] Jan 29 07:27:38 crc kubenswrapper[4826]: I0129 07:27:38.393346 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xfsc2" event={"ID":"9634541d-32d4-474f-a220-897fd4af2757","Type":"ContainerStarted","Data":"dd8f7ba44c1f744c318b0d385030deb8eb4a645e7baf6a004102e2ec6bdcdc91"} Jan 29 07:27:39 crc kubenswrapper[4826]: I0129 07:27:39.406676 4826 generic.go:334] "Generic (PLEG): container finished" podID="9634541d-32d4-474f-a220-897fd4af2757" containerID="68c4a2840d93fa3a118746267cb295323e94548871d7e29b6b3acda4239b45fe" exitCode=0 Jan 29 07:27:39 crc kubenswrapper[4826]: I0129 07:27:39.406743 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xfsc2" event={"ID":"9634541d-32d4-474f-a220-897fd4af2757","Type":"ContainerDied","Data":"68c4a2840d93fa3a118746267cb295323e94548871d7e29b6b3acda4239b45fe"} Jan 29 07:27:39 crc kubenswrapper[4826]: I0129 07:27:39.410056 4826 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 07:27:41 crc kubenswrapper[4826]: I0129 07:27:41.430277 4826 generic.go:334] "Generic (PLEG): container finished" podID="9634541d-32d4-474f-a220-897fd4af2757" containerID="32c2126290b026d9c609b9e1c091ecbf41bd7d57634c01f697ea29e56297f86d" exitCode=0 Jan 29 07:27:41 crc kubenswrapper[4826]: I0129 07:27:41.430434 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xfsc2" event={"ID":"9634541d-32d4-474f-a220-897fd4af2757","Type":"ContainerDied","Data":"32c2126290b026d9c609b9e1c091ecbf41bd7d57634c01f697ea29e56297f86d"} Jan 29 07:27:42 crc kubenswrapper[4826]: I0129 07:27:42.453666 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-xfsc2" event={"ID":"9634541d-32d4-474f-a220-897fd4af2757","Type":"ContainerStarted","Data":"177386b85e1ede17ab0708e31085d2c4352111de50ac75ef1079c0d32decce3e"} Jan 29 07:27:42 crc kubenswrapper[4826]: I0129 07:27:42.473518 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xfsc2" podStartSLOduration=3.055465454 podStartE2EDuration="5.473500672s" podCreationTimestamp="2026-01-29 07:27:37 +0000 UTC" firstStartedPulling="2026-01-29 07:27:39.40943085 +0000 UTC m=+2643.271223969" lastFinishedPulling="2026-01-29 07:27:41.827466108 +0000 UTC m=+2645.689259187" observedRunningTime="2026-01-29 07:27:42.469780053 +0000 UTC m=+2646.331573132" watchObservedRunningTime="2026-01-29 07:27:42.473500672 +0000 UTC m=+2646.335293741" Jan 29 07:27:47 crc kubenswrapper[4826]: I0129 07:27:47.890118 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xfsc2" Jan 29 07:27:47 crc kubenswrapper[4826]: I0129 07:27:47.890712 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xfsc2" Jan 29 07:27:48 crc kubenswrapper[4826]: I0129 07:27:48.949362 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xfsc2" podUID="9634541d-32d4-474f-a220-897fd4af2757" containerName="registry-server" probeResult="failure" output=< Jan 29 07:27:48 crc kubenswrapper[4826]: timeout: failed to connect service ":50051" within 1s Jan 29 07:27:48 crc kubenswrapper[4826]: > Jan 29 07:27:57 crc kubenswrapper[4826]: I0129 07:27:57.953818 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xfsc2" Jan 29 07:27:58 crc kubenswrapper[4826]: I0129 07:27:58.019109 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xfsc2" 
Jan 29 07:27:58 crc kubenswrapper[4826]: I0129 07:27:58.204038 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xfsc2"] Jan 29 07:27:59 crc kubenswrapper[4826]: I0129 07:27:59.583037 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xfsc2" podUID="9634541d-32d4-474f-a220-897fd4af2757" containerName="registry-server" containerID="cri-o://177386b85e1ede17ab0708e31085d2c4352111de50ac75ef1079c0d32decce3e" gracePeriod=2 Jan 29 07:28:00 crc kubenswrapper[4826]: I0129 07:28:00.090778 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xfsc2" Jan 29 07:28:00 crc kubenswrapper[4826]: I0129 07:28:00.169793 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssnjq\" (UniqueName: \"kubernetes.io/projected/9634541d-32d4-474f-a220-897fd4af2757-kube-api-access-ssnjq\") pod \"9634541d-32d4-474f-a220-897fd4af2757\" (UID: \"9634541d-32d4-474f-a220-897fd4af2757\") " Jan 29 07:28:00 crc kubenswrapper[4826]: I0129 07:28:00.170176 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9634541d-32d4-474f-a220-897fd4af2757-utilities\") pod \"9634541d-32d4-474f-a220-897fd4af2757\" (UID: \"9634541d-32d4-474f-a220-897fd4af2757\") " Jan 29 07:28:00 crc kubenswrapper[4826]: I0129 07:28:00.170405 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9634541d-32d4-474f-a220-897fd4af2757-catalog-content\") pod \"9634541d-32d4-474f-a220-897fd4af2757\" (UID: \"9634541d-32d4-474f-a220-897fd4af2757\") " Jan 29 07:28:00 crc kubenswrapper[4826]: I0129 07:28:00.170951 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/9634541d-32d4-474f-a220-897fd4af2757-utilities" (OuterVolumeSpecName: "utilities") pod "9634541d-32d4-474f-a220-897fd4af2757" (UID: "9634541d-32d4-474f-a220-897fd4af2757"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:28:00 crc kubenswrapper[4826]: I0129 07:28:00.175961 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9634541d-32d4-474f-a220-897fd4af2757-kube-api-access-ssnjq" (OuterVolumeSpecName: "kube-api-access-ssnjq") pod "9634541d-32d4-474f-a220-897fd4af2757" (UID: "9634541d-32d4-474f-a220-897fd4af2757"). InnerVolumeSpecName "kube-api-access-ssnjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:28:00 crc kubenswrapper[4826]: I0129 07:28:00.271993 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssnjq\" (UniqueName: \"kubernetes.io/projected/9634541d-32d4-474f-a220-897fd4af2757-kube-api-access-ssnjq\") on node \"crc\" DevicePath \"\"" Jan 29 07:28:00 crc kubenswrapper[4826]: I0129 07:28:00.272031 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9634541d-32d4-474f-a220-897fd4af2757-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 07:28:00 crc kubenswrapper[4826]: I0129 07:28:00.317017 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9634541d-32d4-474f-a220-897fd4af2757-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9634541d-32d4-474f-a220-897fd4af2757" (UID: "9634541d-32d4-474f-a220-897fd4af2757"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:28:00 crc kubenswrapper[4826]: I0129 07:28:00.372867 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9634541d-32d4-474f-a220-897fd4af2757-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 07:28:00 crc kubenswrapper[4826]: I0129 07:28:00.596779 4826 generic.go:334] "Generic (PLEG): container finished" podID="9634541d-32d4-474f-a220-897fd4af2757" containerID="177386b85e1ede17ab0708e31085d2c4352111de50ac75ef1079c0d32decce3e" exitCode=0 Jan 29 07:28:00 crc kubenswrapper[4826]: I0129 07:28:00.596833 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xfsc2" event={"ID":"9634541d-32d4-474f-a220-897fd4af2757","Type":"ContainerDied","Data":"177386b85e1ede17ab0708e31085d2c4352111de50ac75ef1079c0d32decce3e"} Jan 29 07:28:00 crc kubenswrapper[4826]: I0129 07:28:00.597084 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xfsc2" event={"ID":"9634541d-32d4-474f-a220-897fd4af2757","Type":"ContainerDied","Data":"dd8f7ba44c1f744c318b0d385030deb8eb4a645e7baf6a004102e2ec6bdcdc91"} Jan 29 07:28:00 crc kubenswrapper[4826]: I0129 07:28:00.597110 4826 scope.go:117] "RemoveContainer" containerID="177386b85e1ede17ab0708e31085d2c4352111de50ac75ef1079c0d32decce3e" Jan 29 07:28:00 crc kubenswrapper[4826]: I0129 07:28:00.596987 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xfsc2" Jan 29 07:28:00 crc kubenswrapper[4826]: I0129 07:28:00.631931 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xfsc2"] Jan 29 07:28:00 crc kubenswrapper[4826]: I0129 07:28:00.636442 4826 scope.go:117] "RemoveContainer" containerID="32c2126290b026d9c609b9e1c091ecbf41bd7d57634c01f697ea29e56297f86d" Jan 29 07:28:00 crc kubenswrapper[4826]: I0129 07:28:00.660911 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xfsc2"] Jan 29 07:28:00 crc kubenswrapper[4826]: I0129 07:28:00.672390 4826 scope.go:117] "RemoveContainer" containerID="68c4a2840d93fa3a118746267cb295323e94548871d7e29b6b3acda4239b45fe" Jan 29 07:28:00 crc kubenswrapper[4826]: I0129 07:28:00.708769 4826 scope.go:117] "RemoveContainer" containerID="177386b85e1ede17ab0708e31085d2c4352111de50ac75ef1079c0d32decce3e" Jan 29 07:28:00 crc kubenswrapper[4826]: E0129 07:28:00.709277 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"177386b85e1ede17ab0708e31085d2c4352111de50ac75ef1079c0d32decce3e\": container with ID starting with 177386b85e1ede17ab0708e31085d2c4352111de50ac75ef1079c0d32decce3e not found: ID does not exist" containerID="177386b85e1ede17ab0708e31085d2c4352111de50ac75ef1079c0d32decce3e" Jan 29 07:28:00 crc kubenswrapper[4826]: I0129 07:28:00.709341 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"177386b85e1ede17ab0708e31085d2c4352111de50ac75ef1079c0d32decce3e"} err="failed to get container status \"177386b85e1ede17ab0708e31085d2c4352111de50ac75ef1079c0d32decce3e\": rpc error: code = NotFound desc = could not find container \"177386b85e1ede17ab0708e31085d2c4352111de50ac75ef1079c0d32decce3e\": container with ID starting with 177386b85e1ede17ab0708e31085d2c4352111de50ac75ef1079c0d32decce3e not found: ID does 
not exist" Jan 29 07:28:00 crc kubenswrapper[4826]: I0129 07:28:00.709367 4826 scope.go:117] "RemoveContainer" containerID="32c2126290b026d9c609b9e1c091ecbf41bd7d57634c01f697ea29e56297f86d" Jan 29 07:28:00 crc kubenswrapper[4826]: E0129 07:28:00.709592 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32c2126290b026d9c609b9e1c091ecbf41bd7d57634c01f697ea29e56297f86d\": container with ID starting with 32c2126290b026d9c609b9e1c091ecbf41bd7d57634c01f697ea29e56297f86d not found: ID does not exist" containerID="32c2126290b026d9c609b9e1c091ecbf41bd7d57634c01f697ea29e56297f86d" Jan 29 07:28:00 crc kubenswrapper[4826]: I0129 07:28:00.709622 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32c2126290b026d9c609b9e1c091ecbf41bd7d57634c01f697ea29e56297f86d"} err="failed to get container status \"32c2126290b026d9c609b9e1c091ecbf41bd7d57634c01f697ea29e56297f86d\": rpc error: code = NotFound desc = could not find container \"32c2126290b026d9c609b9e1c091ecbf41bd7d57634c01f697ea29e56297f86d\": container with ID starting with 32c2126290b026d9c609b9e1c091ecbf41bd7d57634c01f697ea29e56297f86d not found: ID does not exist" Jan 29 07:28:00 crc kubenswrapper[4826]: I0129 07:28:00.709639 4826 scope.go:117] "RemoveContainer" containerID="68c4a2840d93fa3a118746267cb295323e94548871d7e29b6b3acda4239b45fe" Jan 29 07:28:00 crc kubenswrapper[4826]: E0129 07:28:00.709850 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68c4a2840d93fa3a118746267cb295323e94548871d7e29b6b3acda4239b45fe\": container with ID starting with 68c4a2840d93fa3a118746267cb295323e94548871d7e29b6b3acda4239b45fe not found: ID does not exist" containerID="68c4a2840d93fa3a118746267cb295323e94548871d7e29b6b3acda4239b45fe" Jan 29 07:28:00 crc kubenswrapper[4826]: I0129 07:28:00.709899 4826 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68c4a2840d93fa3a118746267cb295323e94548871d7e29b6b3acda4239b45fe"} err="failed to get container status \"68c4a2840d93fa3a118746267cb295323e94548871d7e29b6b3acda4239b45fe\": rpc error: code = NotFound desc = could not find container \"68c4a2840d93fa3a118746267cb295323e94548871d7e29b6b3acda4239b45fe\": container with ID starting with 68c4a2840d93fa3a118746267cb295323e94548871d7e29b6b3acda4239b45fe not found: ID does not exist" Jan 29 07:28:00 crc kubenswrapper[4826]: I0129 07:28:00.816699 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9634541d-32d4-474f-a220-897fd4af2757" path="/var/lib/kubelet/pods/9634541d-32d4-474f-a220-897fd4af2757/volumes" Jan 29 07:29:35 crc kubenswrapper[4826]: I0129 07:29:35.656135 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:29:35 crc kubenswrapper[4826]: I0129 07:29:35.656886 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:30:00 crc kubenswrapper[4826]: I0129 07:30:00.175144 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494530-zp45v"] Jan 29 07:30:00 crc kubenswrapper[4826]: E0129 07:30:00.176257 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9634541d-32d4-474f-a220-897fd4af2757" containerName="registry-server" Jan 29 07:30:00 crc kubenswrapper[4826]: I0129 07:30:00.176280 4826 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="9634541d-32d4-474f-a220-897fd4af2757" containerName="registry-server" Jan 29 07:30:00 crc kubenswrapper[4826]: E0129 07:30:00.176336 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9634541d-32d4-474f-a220-897fd4af2757" containerName="extract-utilities" Jan 29 07:30:00 crc kubenswrapper[4826]: I0129 07:30:00.176350 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="9634541d-32d4-474f-a220-897fd4af2757" containerName="extract-utilities" Jan 29 07:30:00 crc kubenswrapper[4826]: E0129 07:30:00.176395 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9634541d-32d4-474f-a220-897fd4af2757" containerName="extract-content" Jan 29 07:30:00 crc kubenswrapper[4826]: I0129 07:30:00.176408 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="9634541d-32d4-474f-a220-897fd4af2757" containerName="extract-content" Jan 29 07:30:00 crc kubenswrapper[4826]: I0129 07:30:00.176652 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="9634541d-32d4-474f-a220-897fd4af2757" containerName="registry-server" Jan 29 07:30:00 crc kubenswrapper[4826]: I0129 07:30:00.177461 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494530-zp45v" Jan 29 07:30:00 crc kubenswrapper[4826]: I0129 07:30:00.190057 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494530-zp45v"] Jan 29 07:30:00 crc kubenswrapper[4826]: I0129 07:30:00.203157 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 07:30:00 crc kubenswrapper[4826]: I0129 07:30:00.203568 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 07:30:00 crc kubenswrapper[4826]: I0129 07:30:00.352629 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfs5s\" (UniqueName: \"kubernetes.io/projected/927e0d81-7950-4dec-a31a-88b7fff7b462-kube-api-access-kfs5s\") pod \"collect-profiles-29494530-zp45v\" (UID: \"927e0d81-7950-4dec-a31a-88b7fff7b462\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494530-zp45v" Jan 29 07:30:00 crc kubenswrapper[4826]: I0129 07:30:00.352898 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/927e0d81-7950-4dec-a31a-88b7fff7b462-config-volume\") pod \"collect-profiles-29494530-zp45v\" (UID: \"927e0d81-7950-4dec-a31a-88b7fff7b462\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494530-zp45v" Jan 29 07:30:00 crc kubenswrapper[4826]: I0129 07:30:00.352946 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/927e0d81-7950-4dec-a31a-88b7fff7b462-secret-volume\") pod \"collect-profiles-29494530-zp45v\" (UID: \"927e0d81-7950-4dec-a31a-88b7fff7b462\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29494530-zp45v" Jan 29 07:30:00 crc kubenswrapper[4826]: I0129 07:30:00.453904 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/927e0d81-7950-4dec-a31a-88b7fff7b462-config-volume\") pod \"collect-profiles-29494530-zp45v\" (UID: \"927e0d81-7950-4dec-a31a-88b7fff7b462\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494530-zp45v" Jan 29 07:30:00 crc kubenswrapper[4826]: I0129 07:30:00.454010 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/927e0d81-7950-4dec-a31a-88b7fff7b462-secret-volume\") pod \"collect-profiles-29494530-zp45v\" (UID: \"927e0d81-7950-4dec-a31a-88b7fff7b462\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494530-zp45v" Jan 29 07:30:00 crc kubenswrapper[4826]: I0129 07:30:00.454116 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfs5s\" (UniqueName: \"kubernetes.io/projected/927e0d81-7950-4dec-a31a-88b7fff7b462-kube-api-access-kfs5s\") pod \"collect-profiles-29494530-zp45v\" (UID: \"927e0d81-7950-4dec-a31a-88b7fff7b462\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494530-zp45v" Jan 29 07:30:00 crc kubenswrapper[4826]: I0129 07:30:00.456204 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/927e0d81-7950-4dec-a31a-88b7fff7b462-config-volume\") pod \"collect-profiles-29494530-zp45v\" (UID: \"927e0d81-7950-4dec-a31a-88b7fff7b462\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494530-zp45v" Jan 29 07:30:00 crc kubenswrapper[4826]: I0129 07:30:00.477587 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/927e0d81-7950-4dec-a31a-88b7fff7b462-secret-volume\") pod \"collect-profiles-29494530-zp45v\" (UID: \"927e0d81-7950-4dec-a31a-88b7fff7b462\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494530-zp45v" Jan 29 07:30:00 crc kubenswrapper[4826]: I0129 07:30:00.482500 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfs5s\" (UniqueName: \"kubernetes.io/projected/927e0d81-7950-4dec-a31a-88b7fff7b462-kube-api-access-kfs5s\") pod \"collect-profiles-29494530-zp45v\" (UID: \"927e0d81-7950-4dec-a31a-88b7fff7b462\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494530-zp45v" Jan 29 07:30:00 crc kubenswrapper[4826]: I0129 07:30:00.530795 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494530-zp45v" Jan 29 07:30:00 crc kubenswrapper[4826]: I0129 07:30:00.978452 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494530-zp45v"] Jan 29 07:30:01 crc kubenswrapper[4826]: I0129 07:30:01.686718 4826 generic.go:334] "Generic (PLEG): container finished" podID="927e0d81-7950-4dec-a31a-88b7fff7b462" containerID="7940dff32b057f62728bc628f3fadc620a74b1c0ebd2d239b3b128d23d0396f7" exitCode=0 Jan 29 07:30:01 crc kubenswrapper[4826]: I0129 07:30:01.687012 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494530-zp45v" event={"ID":"927e0d81-7950-4dec-a31a-88b7fff7b462","Type":"ContainerDied","Data":"7940dff32b057f62728bc628f3fadc620a74b1c0ebd2d239b3b128d23d0396f7"} Jan 29 07:30:01 crc kubenswrapper[4826]: I0129 07:30:01.687058 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494530-zp45v" 
event={"ID":"927e0d81-7950-4dec-a31a-88b7fff7b462","Type":"ContainerStarted","Data":"527d9ccb13bfefa759a8531f09b4f81aa42e109593aee7c80f355d3a788f6d99"} Jan 29 07:30:03 crc kubenswrapper[4826]: I0129 07:30:03.057903 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494530-zp45v" Jan 29 07:30:03 crc kubenswrapper[4826]: I0129 07:30:03.202740 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfs5s\" (UniqueName: \"kubernetes.io/projected/927e0d81-7950-4dec-a31a-88b7fff7b462-kube-api-access-kfs5s\") pod \"927e0d81-7950-4dec-a31a-88b7fff7b462\" (UID: \"927e0d81-7950-4dec-a31a-88b7fff7b462\") " Jan 29 07:30:03 crc kubenswrapper[4826]: I0129 07:30:03.202850 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/927e0d81-7950-4dec-a31a-88b7fff7b462-secret-volume\") pod \"927e0d81-7950-4dec-a31a-88b7fff7b462\" (UID: \"927e0d81-7950-4dec-a31a-88b7fff7b462\") " Jan 29 07:30:03 crc kubenswrapper[4826]: I0129 07:30:03.202907 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/927e0d81-7950-4dec-a31a-88b7fff7b462-config-volume\") pod \"927e0d81-7950-4dec-a31a-88b7fff7b462\" (UID: \"927e0d81-7950-4dec-a31a-88b7fff7b462\") " Jan 29 07:30:03 crc kubenswrapper[4826]: I0129 07:30:03.204171 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/927e0d81-7950-4dec-a31a-88b7fff7b462-config-volume" (OuterVolumeSpecName: "config-volume") pod "927e0d81-7950-4dec-a31a-88b7fff7b462" (UID: "927e0d81-7950-4dec-a31a-88b7fff7b462"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:30:03 crc kubenswrapper[4826]: I0129 07:30:03.211623 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/927e0d81-7950-4dec-a31a-88b7fff7b462-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "927e0d81-7950-4dec-a31a-88b7fff7b462" (UID: "927e0d81-7950-4dec-a31a-88b7fff7b462"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:30:03 crc kubenswrapper[4826]: I0129 07:30:03.211715 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/927e0d81-7950-4dec-a31a-88b7fff7b462-kube-api-access-kfs5s" (OuterVolumeSpecName: "kube-api-access-kfs5s") pod "927e0d81-7950-4dec-a31a-88b7fff7b462" (UID: "927e0d81-7950-4dec-a31a-88b7fff7b462"). InnerVolumeSpecName "kube-api-access-kfs5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:30:03 crc kubenswrapper[4826]: I0129 07:30:03.305395 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfs5s\" (UniqueName: \"kubernetes.io/projected/927e0d81-7950-4dec-a31a-88b7fff7b462-kube-api-access-kfs5s\") on node \"crc\" DevicePath \"\"" Jan 29 07:30:03 crc kubenswrapper[4826]: I0129 07:30:03.305503 4826 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/927e0d81-7950-4dec-a31a-88b7fff7b462-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 07:30:03 crc kubenswrapper[4826]: I0129 07:30:03.305526 4826 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/927e0d81-7950-4dec-a31a-88b7fff7b462-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 07:30:03 crc kubenswrapper[4826]: I0129 07:30:03.708521 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494530-zp45v" 
event={"ID":"927e0d81-7950-4dec-a31a-88b7fff7b462","Type":"ContainerDied","Data":"527d9ccb13bfefa759a8531f09b4f81aa42e109593aee7c80f355d3a788f6d99"} Jan 29 07:30:03 crc kubenswrapper[4826]: I0129 07:30:03.708589 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="527d9ccb13bfefa759a8531f09b4f81aa42e109593aee7c80f355d3a788f6d99" Jan 29 07:30:03 crc kubenswrapper[4826]: I0129 07:30:03.708643 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494530-zp45v" Jan 29 07:30:04 crc kubenswrapper[4826]: I0129 07:30:04.156444 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494485-bfbgl"] Jan 29 07:30:04 crc kubenswrapper[4826]: I0129 07:30:04.163354 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494485-bfbgl"] Jan 29 07:30:04 crc kubenswrapper[4826]: I0129 07:30:04.824930 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18a5f5c9-454a-476e-9bfb-1fe5abcf95f9" path="/var/lib/kubelet/pods/18a5f5c9-454a-476e-9bfb-1fe5abcf95f9/volumes" Jan 29 07:30:05 crc kubenswrapper[4826]: I0129 07:30:05.657008 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:30:05 crc kubenswrapper[4826]: I0129 07:30:05.657102 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:30:07 crc 
kubenswrapper[4826]: I0129 07:30:07.848545 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dq4t2"] Jan 29 07:30:07 crc kubenswrapper[4826]: E0129 07:30:07.849670 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="927e0d81-7950-4dec-a31a-88b7fff7b462" containerName="collect-profiles" Jan 29 07:30:07 crc kubenswrapper[4826]: I0129 07:30:07.849708 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="927e0d81-7950-4dec-a31a-88b7fff7b462" containerName="collect-profiles" Jan 29 07:30:07 crc kubenswrapper[4826]: I0129 07:30:07.850102 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="927e0d81-7950-4dec-a31a-88b7fff7b462" containerName="collect-profiles" Jan 29 07:30:07 crc kubenswrapper[4826]: I0129 07:30:07.852614 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dq4t2" Jan 29 07:30:07 crc kubenswrapper[4826]: I0129 07:30:07.862460 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dq4t2"] Jan 29 07:30:07 crc kubenswrapper[4826]: I0129 07:30:07.990318 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3a6f3f3-c805-412e-8ad0-9774dba31990-catalog-content\") pod \"redhat-marketplace-dq4t2\" (UID: \"f3a6f3f3-c805-412e-8ad0-9774dba31990\") " pod="openshift-marketplace/redhat-marketplace-dq4t2" Jan 29 07:30:07 crc kubenswrapper[4826]: I0129 07:30:07.990835 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3a6f3f3-c805-412e-8ad0-9774dba31990-utilities\") pod \"redhat-marketplace-dq4t2\" (UID: \"f3a6f3f3-c805-412e-8ad0-9774dba31990\") " pod="openshift-marketplace/redhat-marketplace-dq4t2" Jan 29 07:30:07 crc kubenswrapper[4826]: I0129 07:30:07.990947 4826 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw9jp\" (UniqueName: \"kubernetes.io/projected/f3a6f3f3-c805-412e-8ad0-9774dba31990-kube-api-access-mw9jp\") pod \"redhat-marketplace-dq4t2\" (UID: \"f3a6f3f3-c805-412e-8ad0-9774dba31990\") " pod="openshift-marketplace/redhat-marketplace-dq4t2" Jan 29 07:30:08 crc kubenswrapper[4826]: I0129 07:30:08.092948 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3a6f3f3-c805-412e-8ad0-9774dba31990-utilities\") pod \"redhat-marketplace-dq4t2\" (UID: \"f3a6f3f3-c805-412e-8ad0-9774dba31990\") " pod="openshift-marketplace/redhat-marketplace-dq4t2" Jan 29 07:30:08 crc kubenswrapper[4826]: I0129 07:30:08.093086 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw9jp\" (UniqueName: \"kubernetes.io/projected/f3a6f3f3-c805-412e-8ad0-9774dba31990-kube-api-access-mw9jp\") pod \"redhat-marketplace-dq4t2\" (UID: \"f3a6f3f3-c805-412e-8ad0-9774dba31990\") " pod="openshift-marketplace/redhat-marketplace-dq4t2" Jan 29 07:30:08 crc kubenswrapper[4826]: I0129 07:30:08.093463 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3a6f3f3-c805-412e-8ad0-9774dba31990-catalog-content\") pod \"redhat-marketplace-dq4t2\" (UID: \"f3a6f3f3-c805-412e-8ad0-9774dba31990\") " pod="openshift-marketplace/redhat-marketplace-dq4t2" Jan 29 07:30:08 crc kubenswrapper[4826]: I0129 07:30:08.093942 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3a6f3f3-c805-412e-8ad0-9774dba31990-utilities\") pod \"redhat-marketplace-dq4t2\" (UID: \"f3a6f3f3-c805-412e-8ad0-9774dba31990\") " pod="openshift-marketplace/redhat-marketplace-dq4t2" Jan 29 07:30:08 crc kubenswrapper[4826]: I0129 07:30:08.094256 4826 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3a6f3f3-c805-412e-8ad0-9774dba31990-catalog-content\") pod \"redhat-marketplace-dq4t2\" (UID: \"f3a6f3f3-c805-412e-8ad0-9774dba31990\") " pod="openshift-marketplace/redhat-marketplace-dq4t2" Jan 29 07:30:08 crc kubenswrapper[4826]: I0129 07:30:08.127961 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw9jp\" (UniqueName: \"kubernetes.io/projected/f3a6f3f3-c805-412e-8ad0-9774dba31990-kube-api-access-mw9jp\") pod \"redhat-marketplace-dq4t2\" (UID: \"f3a6f3f3-c805-412e-8ad0-9774dba31990\") " pod="openshift-marketplace/redhat-marketplace-dq4t2" Jan 29 07:30:08 crc kubenswrapper[4826]: I0129 07:30:08.202711 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dq4t2" Jan 29 07:30:08 crc kubenswrapper[4826]: I0129 07:30:08.640049 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dq4t2"] Jan 29 07:30:08 crc kubenswrapper[4826]: I0129 07:30:08.759482 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dq4t2" event={"ID":"f3a6f3f3-c805-412e-8ad0-9774dba31990","Type":"ContainerStarted","Data":"77e8ca80f083a67dc0dce8a69e9535a5a897bd94da5e12e5e023b65391186c8b"} Jan 29 07:30:09 crc kubenswrapper[4826]: I0129 07:30:09.632530 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-27944"] Jan 29 07:30:09 crc kubenswrapper[4826]: I0129 07:30:09.635967 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-27944" Jan 29 07:30:09 crc kubenswrapper[4826]: I0129 07:30:09.642742 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-27944"] Jan 29 07:30:09 crc kubenswrapper[4826]: I0129 07:30:09.732255 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8189a2aa-3bdd-4616-b700-0afcc2363e57-utilities\") pod \"certified-operators-27944\" (UID: \"8189a2aa-3bdd-4616-b700-0afcc2363e57\") " pod="openshift-marketplace/certified-operators-27944" Jan 29 07:30:09 crc kubenswrapper[4826]: I0129 07:30:09.732664 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8189a2aa-3bdd-4616-b700-0afcc2363e57-catalog-content\") pod \"certified-operators-27944\" (UID: \"8189a2aa-3bdd-4616-b700-0afcc2363e57\") " pod="openshift-marketplace/certified-operators-27944" Jan 29 07:30:09 crc kubenswrapper[4826]: I0129 07:30:09.732689 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djgk5\" (UniqueName: \"kubernetes.io/projected/8189a2aa-3bdd-4616-b700-0afcc2363e57-kube-api-access-djgk5\") pod \"certified-operators-27944\" (UID: \"8189a2aa-3bdd-4616-b700-0afcc2363e57\") " pod="openshift-marketplace/certified-operators-27944" Jan 29 07:30:09 crc kubenswrapper[4826]: I0129 07:30:09.766274 4826 generic.go:334] "Generic (PLEG): container finished" podID="f3a6f3f3-c805-412e-8ad0-9774dba31990" containerID="a11fe3f193d5690552d42902640c63b8ce38adf5691dc7f41d9e1b0a5dba4fbd" exitCode=0 Jan 29 07:30:09 crc kubenswrapper[4826]: I0129 07:30:09.766347 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dq4t2" 
event={"ID":"f3a6f3f3-c805-412e-8ad0-9774dba31990","Type":"ContainerDied","Data":"a11fe3f193d5690552d42902640c63b8ce38adf5691dc7f41d9e1b0a5dba4fbd"} Jan 29 07:30:09 crc kubenswrapper[4826]: I0129 07:30:09.834248 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8189a2aa-3bdd-4616-b700-0afcc2363e57-utilities\") pod \"certified-operators-27944\" (UID: \"8189a2aa-3bdd-4616-b700-0afcc2363e57\") " pod="openshift-marketplace/certified-operators-27944" Jan 29 07:30:09 crc kubenswrapper[4826]: I0129 07:30:09.834715 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8189a2aa-3bdd-4616-b700-0afcc2363e57-utilities\") pod \"certified-operators-27944\" (UID: \"8189a2aa-3bdd-4616-b700-0afcc2363e57\") " pod="openshift-marketplace/certified-operators-27944" Jan 29 07:30:09 crc kubenswrapper[4826]: I0129 07:30:09.835585 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8189a2aa-3bdd-4616-b700-0afcc2363e57-catalog-content\") pod \"certified-operators-27944\" (UID: \"8189a2aa-3bdd-4616-b700-0afcc2363e57\") " pod="openshift-marketplace/certified-operators-27944" Jan 29 07:30:09 crc kubenswrapper[4826]: I0129 07:30:09.835620 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djgk5\" (UniqueName: \"kubernetes.io/projected/8189a2aa-3bdd-4616-b700-0afcc2363e57-kube-api-access-djgk5\") pod \"certified-operators-27944\" (UID: \"8189a2aa-3bdd-4616-b700-0afcc2363e57\") " pod="openshift-marketplace/certified-operators-27944" Jan 29 07:30:09 crc kubenswrapper[4826]: I0129 07:30:09.836043 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8189a2aa-3bdd-4616-b700-0afcc2363e57-catalog-content\") pod 
\"certified-operators-27944\" (UID: \"8189a2aa-3bdd-4616-b700-0afcc2363e57\") " pod="openshift-marketplace/certified-operators-27944" Jan 29 07:30:09 crc kubenswrapper[4826]: I0129 07:30:09.855906 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djgk5\" (UniqueName: \"kubernetes.io/projected/8189a2aa-3bdd-4616-b700-0afcc2363e57-kube-api-access-djgk5\") pod \"certified-operators-27944\" (UID: \"8189a2aa-3bdd-4616-b700-0afcc2363e57\") " pod="openshift-marketplace/certified-operators-27944" Jan 29 07:30:09 crc kubenswrapper[4826]: I0129 07:30:09.979573 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-27944" Jan 29 07:30:10 crc kubenswrapper[4826]: I0129 07:30:10.407215 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-27944"] Jan 29 07:30:10 crc kubenswrapper[4826]: W0129 07:30:10.411494 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8189a2aa_3bdd_4616_b700_0afcc2363e57.slice/crio-c3906d87133990e1dc0504096e535069fedb88205438ab1030826bafc374c362 WatchSource:0}: Error finding container c3906d87133990e1dc0504096e535069fedb88205438ab1030826bafc374c362: Status 404 returned error can't find the container with id c3906d87133990e1dc0504096e535069fedb88205438ab1030826bafc374c362 Jan 29 07:30:10 crc kubenswrapper[4826]: I0129 07:30:10.777192 4826 generic.go:334] "Generic (PLEG): container finished" podID="8189a2aa-3bdd-4616-b700-0afcc2363e57" containerID="9fd1d01e776b3e2a9fee746f713ab0ea868132f23b66b5a7513cb57c2e5620d0" exitCode=0 Jan 29 07:30:10 crc kubenswrapper[4826]: I0129 07:30:10.777262 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-27944" 
event={"ID":"8189a2aa-3bdd-4616-b700-0afcc2363e57","Type":"ContainerDied","Data":"9fd1d01e776b3e2a9fee746f713ab0ea868132f23b66b5a7513cb57c2e5620d0"} Jan 29 07:30:10 crc kubenswrapper[4826]: I0129 07:30:10.777287 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-27944" event={"ID":"8189a2aa-3bdd-4616-b700-0afcc2363e57","Type":"ContainerStarted","Data":"c3906d87133990e1dc0504096e535069fedb88205438ab1030826bafc374c362"} Jan 29 07:30:10 crc kubenswrapper[4826]: I0129 07:30:10.792958 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dq4t2" event={"ID":"f3a6f3f3-c805-412e-8ad0-9774dba31990","Type":"ContainerStarted","Data":"7168692a6db450aca2eda48919e10ffcb11f071e9de5f469050117beed9cfdb8"} Jan 29 07:30:11 crc kubenswrapper[4826]: I0129 07:30:11.030267 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mh5s7"] Jan 29 07:30:11 crc kubenswrapper[4826]: I0129 07:30:11.034507 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mh5s7" Jan 29 07:30:11 crc kubenswrapper[4826]: I0129 07:30:11.035946 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mh5s7"] Jan 29 07:30:11 crc kubenswrapper[4826]: I0129 07:30:11.050140 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad-catalog-content\") pod \"community-operators-mh5s7\" (UID: \"a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad\") " pod="openshift-marketplace/community-operators-mh5s7" Jan 29 07:30:11 crc kubenswrapper[4826]: I0129 07:30:11.050191 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bg9j\" (UniqueName: \"kubernetes.io/projected/a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad-kube-api-access-7bg9j\") pod \"community-operators-mh5s7\" (UID: \"a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad\") " pod="openshift-marketplace/community-operators-mh5s7" Jan 29 07:30:11 crc kubenswrapper[4826]: I0129 07:30:11.050234 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad-utilities\") pod \"community-operators-mh5s7\" (UID: \"a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad\") " pod="openshift-marketplace/community-operators-mh5s7" Jan 29 07:30:11 crc kubenswrapper[4826]: I0129 07:30:11.151610 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad-catalog-content\") pod \"community-operators-mh5s7\" (UID: \"a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad\") " pod="openshift-marketplace/community-operators-mh5s7" Jan 29 07:30:11 crc kubenswrapper[4826]: I0129 07:30:11.151685 4826 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7bg9j\" (UniqueName: \"kubernetes.io/projected/a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad-kube-api-access-7bg9j\") pod \"community-operators-mh5s7\" (UID: \"a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad\") " pod="openshift-marketplace/community-operators-mh5s7" Jan 29 07:30:11 crc kubenswrapper[4826]: I0129 07:30:11.151737 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad-utilities\") pod \"community-operators-mh5s7\" (UID: \"a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad\") " pod="openshift-marketplace/community-operators-mh5s7" Jan 29 07:30:11 crc kubenswrapper[4826]: I0129 07:30:11.152373 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad-catalog-content\") pod \"community-operators-mh5s7\" (UID: \"a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad\") " pod="openshift-marketplace/community-operators-mh5s7" Jan 29 07:30:11 crc kubenswrapper[4826]: I0129 07:30:11.152523 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad-utilities\") pod \"community-operators-mh5s7\" (UID: \"a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad\") " pod="openshift-marketplace/community-operators-mh5s7" Jan 29 07:30:11 crc kubenswrapper[4826]: I0129 07:30:11.172953 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bg9j\" (UniqueName: \"kubernetes.io/projected/a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad-kube-api-access-7bg9j\") pod \"community-operators-mh5s7\" (UID: \"a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad\") " pod="openshift-marketplace/community-operators-mh5s7" Jan 29 07:30:11 crc kubenswrapper[4826]: I0129 07:30:11.365748 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mh5s7" Jan 29 07:30:11 crc kubenswrapper[4826]: I0129 07:30:11.804906 4826 generic.go:334] "Generic (PLEG): container finished" podID="f3a6f3f3-c805-412e-8ad0-9774dba31990" containerID="7168692a6db450aca2eda48919e10ffcb11f071e9de5f469050117beed9cfdb8" exitCode=0 Jan 29 07:30:11 crc kubenswrapper[4826]: I0129 07:30:11.805483 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dq4t2" event={"ID":"f3a6f3f3-c805-412e-8ad0-9774dba31990","Type":"ContainerDied","Data":"7168692a6db450aca2eda48919e10ffcb11f071e9de5f469050117beed9cfdb8"} Jan 29 07:30:11 crc kubenswrapper[4826]: I0129 07:30:11.808153 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-27944" event={"ID":"8189a2aa-3bdd-4616-b700-0afcc2363e57","Type":"ContainerStarted","Data":"eae256b4e57a86629e8180281a3cff87ea92ae6859500a282730d404b9cc4aa6"} Jan 29 07:30:11 crc kubenswrapper[4826]: I0129 07:30:11.872008 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mh5s7"] Jan 29 07:30:12 crc kubenswrapper[4826]: I0129 07:30:12.817953 4826 generic.go:334] "Generic (PLEG): container finished" podID="a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad" containerID="4afab7329d4a4c6d6549c5e8911d967ac1e56a65d65316369a40fa910aad8a4f" exitCode=0 Jan 29 07:30:12 crc kubenswrapper[4826]: I0129 07:30:12.821069 4826 generic.go:334] "Generic (PLEG): container finished" podID="8189a2aa-3bdd-4616-b700-0afcc2363e57" containerID="eae256b4e57a86629e8180281a3cff87ea92ae6859500a282730d404b9cc4aa6" exitCode=0 Jan 29 07:30:12 crc kubenswrapper[4826]: I0129 07:30:12.823889 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mh5s7" event={"ID":"a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad","Type":"ContainerDied","Data":"4afab7329d4a4c6d6549c5e8911d967ac1e56a65d65316369a40fa910aad8a4f"} Jan 29 
07:30:12 crc kubenswrapper[4826]: I0129 07:30:12.823944 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mh5s7" event={"ID":"a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad","Type":"ContainerStarted","Data":"350335aef77ab244fa4be2e92f390a166e0fee9841e376e48d67884d3ca1f94e"} Jan 29 07:30:12 crc kubenswrapper[4826]: I0129 07:30:12.823960 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-27944" event={"ID":"8189a2aa-3bdd-4616-b700-0afcc2363e57","Type":"ContainerDied","Data":"eae256b4e57a86629e8180281a3cff87ea92ae6859500a282730d404b9cc4aa6"} Jan 29 07:30:12 crc kubenswrapper[4826]: I0129 07:30:12.826326 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dq4t2" event={"ID":"f3a6f3f3-c805-412e-8ad0-9774dba31990","Type":"ContainerStarted","Data":"2e2dbf61ae5816174f7ba0506aa1f5eaf3797f73dffa21d9cf2e9de6a826baf3"} Jan 29 07:30:12 crc kubenswrapper[4826]: I0129 07:30:12.908173 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dq4t2" podStartSLOduration=3.469010842 podStartE2EDuration="5.908141817s" podCreationTimestamp="2026-01-29 07:30:07 +0000 UTC" firstStartedPulling="2026-01-29 07:30:09.768120365 +0000 UTC m=+2793.629913434" lastFinishedPulling="2026-01-29 07:30:12.20725134 +0000 UTC m=+2796.069044409" observedRunningTime="2026-01-29 07:30:12.897176484 +0000 UTC m=+2796.758969563" watchObservedRunningTime="2026-01-29 07:30:12.908141817 +0000 UTC m=+2796.769934926" Jan 29 07:30:14 crc kubenswrapper[4826]: I0129 07:30:14.491405 4826 generic.go:334] "Generic (PLEG): container finished" podID="a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad" containerID="643439f39fb67a04ae22b90be02b4a8e096caf7f4f420a1bb64e346f00f1e307" exitCode=0 Jan 29 07:30:14 crc kubenswrapper[4826]: I0129 07:30:14.491712 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-mh5s7" event={"ID":"a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad","Type":"ContainerDied","Data":"643439f39fb67a04ae22b90be02b4a8e096caf7f4f420a1bb64e346f00f1e307"} Jan 29 07:30:14 crc kubenswrapper[4826]: I0129 07:30:14.502185 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-27944" event={"ID":"8189a2aa-3bdd-4616-b700-0afcc2363e57","Type":"ContainerStarted","Data":"bdfd10ec2bbf59eaa88a5fefc47e5a3d478297cd2f1e8a27ad1844fc879fa3c7"} Jan 29 07:30:14 crc kubenswrapper[4826]: I0129 07:30:14.539806 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-27944" podStartSLOduration=3.094438189 podStartE2EDuration="5.539782848s" podCreationTimestamp="2026-01-29 07:30:09 +0000 UTC" firstStartedPulling="2026-01-29 07:30:10.780543159 +0000 UTC m=+2794.642336218" lastFinishedPulling="2026-01-29 07:30:13.225887808 +0000 UTC m=+2797.087680877" observedRunningTime="2026-01-29 07:30:14.529999597 +0000 UTC m=+2798.391792706" watchObservedRunningTime="2026-01-29 07:30:14.539782848 +0000 UTC m=+2798.401575957" Jan 29 07:30:15 crc kubenswrapper[4826]: I0129 07:30:15.512250 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mh5s7" event={"ID":"a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad","Type":"ContainerStarted","Data":"f87955c36693567d69b8b8c209a8c9195f4e1dad3dc9a4c08e9d25d3f3283078"} Jan 29 07:30:15 crc kubenswrapper[4826]: I0129 07:30:15.537215 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mh5s7" podStartSLOduration=3.43687665 podStartE2EDuration="5.537192931s" podCreationTimestamp="2026-01-29 07:30:10 +0000 UTC" firstStartedPulling="2026-01-29 07:30:12.819954743 +0000 UTC m=+2796.681747822" lastFinishedPulling="2026-01-29 07:30:14.920271034 +0000 UTC m=+2798.782064103" observedRunningTime="2026-01-29 
07:30:15.534509539 +0000 UTC m=+2799.396302638" watchObservedRunningTime="2026-01-29 07:30:15.537192931 +0000 UTC m=+2799.398986030" Jan 29 07:30:18 crc kubenswrapper[4826]: I0129 07:30:18.204592 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dq4t2" Jan 29 07:30:18 crc kubenswrapper[4826]: I0129 07:30:18.204941 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dq4t2" Jan 29 07:30:18 crc kubenswrapper[4826]: I0129 07:30:18.283389 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dq4t2" Jan 29 07:30:18 crc kubenswrapper[4826]: I0129 07:30:18.604793 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dq4t2" Jan 29 07:30:19 crc kubenswrapper[4826]: I0129 07:30:19.610353 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dq4t2"] Jan 29 07:30:19 crc kubenswrapper[4826]: I0129 07:30:19.980501 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-27944" Jan 29 07:30:19 crc kubenswrapper[4826]: I0129 07:30:19.980599 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-27944" Jan 29 07:30:20 crc kubenswrapper[4826]: I0129 07:30:20.051407 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-27944" Jan 29 07:30:20 crc kubenswrapper[4826]: I0129 07:30:20.551012 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dq4t2" podUID="f3a6f3f3-c805-412e-8ad0-9774dba31990" containerName="registry-server" containerID="cri-o://2e2dbf61ae5816174f7ba0506aa1f5eaf3797f73dffa21d9cf2e9de6a826baf3" 
gracePeriod=2 Jan 29 07:30:20 crc kubenswrapper[4826]: I0129 07:30:20.623888 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-27944" Jan 29 07:30:20 crc kubenswrapper[4826]: I0129 07:30:20.971740 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dq4t2" Jan 29 07:30:21 crc kubenswrapper[4826]: I0129 07:30:21.080860 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mw9jp\" (UniqueName: \"kubernetes.io/projected/f3a6f3f3-c805-412e-8ad0-9774dba31990-kube-api-access-mw9jp\") pod \"f3a6f3f3-c805-412e-8ad0-9774dba31990\" (UID: \"f3a6f3f3-c805-412e-8ad0-9774dba31990\") " Jan 29 07:30:21 crc kubenswrapper[4826]: I0129 07:30:21.080957 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3a6f3f3-c805-412e-8ad0-9774dba31990-catalog-content\") pod \"f3a6f3f3-c805-412e-8ad0-9774dba31990\" (UID: \"f3a6f3f3-c805-412e-8ad0-9774dba31990\") " Jan 29 07:30:21 crc kubenswrapper[4826]: I0129 07:30:21.080998 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3a6f3f3-c805-412e-8ad0-9774dba31990-utilities\") pod \"f3a6f3f3-c805-412e-8ad0-9774dba31990\" (UID: \"f3a6f3f3-c805-412e-8ad0-9774dba31990\") " Jan 29 07:30:21 crc kubenswrapper[4826]: I0129 07:30:21.082014 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3a6f3f3-c805-412e-8ad0-9774dba31990-utilities" (OuterVolumeSpecName: "utilities") pod "f3a6f3f3-c805-412e-8ad0-9774dba31990" (UID: "f3a6f3f3-c805-412e-8ad0-9774dba31990"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:30:21 crc kubenswrapper[4826]: I0129 07:30:21.090192 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3a6f3f3-c805-412e-8ad0-9774dba31990-kube-api-access-mw9jp" (OuterVolumeSpecName: "kube-api-access-mw9jp") pod "f3a6f3f3-c805-412e-8ad0-9774dba31990" (UID: "f3a6f3f3-c805-412e-8ad0-9774dba31990"). InnerVolumeSpecName "kube-api-access-mw9jp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:30:21 crc kubenswrapper[4826]: I0129 07:30:21.105616 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3a6f3f3-c805-412e-8ad0-9774dba31990-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f3a6f3f3-c805-412e-8ad0-9774dba31990" (UID: "f3a6f3f3-c805-412e-8ad0-9774dba31990"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:30:21 crc kubenswrapper[4826]: I0129 07:30:21.182688 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mw9jp\" (UniqueName: \"kubernetes.io/projected/f3a6f3f3-c805-412e-8ad0-9774dba31990-kube-api-access-mw9jp\") on node \"crc\" DevicePath \"\"" Jan 29 07:30:21 crc kubenswrapper[4826]: I0129 07:30:21.182998 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3a6f3f3-c805-412e-8ad0-9774dba31990-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 07:30:21 crc kubenswrapper[4826]: I0129 07:30:21.183060 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3a6f3f3-c805-412e-8ad0-9774dba31990-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 07:30:21 crc kubenswrapper[4826]: I0129 07:30:21.366131 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mh5s7" Jan 29 07:30:21 crc 
kubenswrapper[4826]: I0129 07:30:21.366271 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mh5s7" Jan 29 07:30:21 crc kubenswrapper[4826]: I0129 07:30:21.437196 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mh5s7" Jan 29 07:30:21 crc kubenswrapper[4826]: I0129 07:30:21.560637 4826 generic.go:334] "Generic (PLEG): container finished" podID="f3a6f3f3-c805-412e-8ad0-9774dba31990" containerID="2e2dbf61ae5816174f7ba0506aa1f5eaf3797f73dffa21d9cf2e9de6a826baf3" exitCode=0 Jan 29 07:30:21 crc kubenswrapper[4826]: I0129 07:30:21.560703 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dq4t2" Jan 29 07:30:21 crc kubenswrapper[4826]: I0129 07:30:21.560724 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dq4t2" event={"ID":"f3a6f3f3-c805-412e-8ad0-9774dba31990","Type":"ContainerDied","Data":"2e2dbf61ae5816174f7ba0506aa1f5eaf3797f73dffa21d9cf2e9de6a826baf3"} Jan 29 07:30:21 crc kubenswrapper[4826]: I0129 07:30:21.560815 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dq4t2" event={"ID":"f3a6f3f3-c805-412e-8ad0-9774dba31990","Type":"ContainerDied","Data":"77e8ca80f083a67dc0dce8a69e9535a5a897bd94da5e12e5e023b65391186c8b"} Jan 29 07:30:21 crc kubenswrapper[4826]: I0129 07:30:21.560848 4826 scope.go:117] "RemoveContainer" containerID="2e2dbf61ae5816174f7ba0506aa1f5eaf3797f73dffa21d9cf2e9de6a826baf3" Jan 29 07:30:21 crc kubenswrapper[4826]: I0129 07:30:21.601045 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dq4t2"] Jan 29 07:30:21 crc kubenswrapper[4826]: I0129 07:30:21.601098 4826 scope.go:117] "RemoveContainer" containerID="7168692a6db450aca2eda48919e10ffcb11f071e9de5f469050117beed9cfdb8" Jan 29 07:30:21 
crc kubenswrapper[4826]: I0129 07:30:21.609170 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dq4t2"] Jan 29 07:30:21 crc kubenswrapper[4826]: I0129 07:30:21.623965 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mh5s7" Jan 29 07:30:21 crc kubenswrapper[4826]: I0129 07:30:21.631327 4826 scope.go:117] "RemoveContainer" containerID="a11fe3f193d5690552d42902640c63b8ce38adf5691dc7f41d9e1b0a5dba4fbd" Jan 29 07:30:21 crc kubenswrapper[4826]: I0129 07:30:21.656676 4826 scope.go:117] "RemoveContainer" containerID="2e2dbf61ae5816174f7ba0506aa1f5eaf3797f73dffa21d9cf2e9de6a826baf3" Jan 29 07:30:21 crc kubenswrapper[4826]: E0129 07:30:21.657243 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e2dbf61ae5816174f7ba0506aa1f5eaf3797f73dffa21d9cf2e9de6a826baf3\": container with ID starting with 2e2dbf61ae5816174f7ba0506aa1f5eaf3797f73dffa21d9cf2e9de6a826baf3 not found: ID does not exist" containerID="2e2dbf61ae5816174f7ba0506aa1f5eaf3797f73dffa21d9cf2e9de6a826baf3" Jan 29 07:30:21 crc kubenswrapper[4826]: I0129 07:30:21.657284 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e2dbf61ae5816174f7ba0506aa1f5eaf3797f73dffa21d9cf2e9de6a826baf3"} err="failed to get container status \"2e2dbf61ae5816174f7ba0506aa1f5eaf3797f73dffa21d9cf2e9de6a826baf3\": rpc error: code = NotFound desc = could not find container \"2e2dbf61ae5816174f7ba0506aa1f5eaf3797f73dffa21d9cf2e9de6a826baf3\": container with ID starting with 2e2dbf61ae5816174f7ba0506aa1f5eaf3797f73dffa21d9cf2e9de6a826baf3 not found: ID does not exist" Jan 29 07:30:21 crc kubenswrapper[4826]: I0129 07:30:21.657326 4826 scope.go:117] "RemoveContainer" containerID="7168692a6db450aca2eda48919e10ffcb11f071e9de5f469050117beed9cfdb8" Jan 29 07:30:21 crc kubenswrapper[4826]: E0129 
07:30:21.657759 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7168692a6db450aca2eda48919e10ffcb11f071e9de5f469050117beed9cfdb8\": container with ID starting with 7168692a6db450aca2eda48919e10ffcb11f071e9de5f469050117beed9cfdb8 not found: ID does not exist" containerID="7168692a6db450aca2eda48919e10ffcb11f071e9de5f469050117beed9cfdb8" Jan 29 07:30:21 crc kubenswrapper[4826]: I0129 07:30:21.657804 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7168692a6db450aca2eda48919e10ffcb11f071e9de5f469050117beed9cfdb8"} err="failed to get container status \"7168692a6db450aca2eda48919e10ffcb11f071e9de5f469050117beed9cfdb8\": rpc error: code = NotFound desc = could not find container \"7168692a6db450aca2eda48919e10ffcb11f071e9de5f469050117beed9cfdb8\": container with ID starting with 7168692a6db450aca2eda48919e10ffcb11f071e9de5f469050117beed9cfdb8 not found: ID does not exist" Jan 29 07:30:21 crc kubenswrapper[4826]: I0129 07:30:21.657832 4826 scope.go:117] "RemoveContainer" containerID="a11fe3f193d5690552d42902640c63b8ce38adf5691dc7f41d9e1b0a5dba4fbd" Jan 29 07:30:21 crc kubenswrapper[4826]: E0129 07:30:21.658121 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a11fe3f193d5690552d42902640c63b8ce38adf5691dc7f41d9e1b0a5dba4fbd\": container with ID starting with a11fe3f193d5690552d42902640c63b8ce38adf5691dc7f41d9e1b0a5dba4fbd not found: ID does not exist" containerID="a11fe3f193d5690552d42902640c63b8ce38adf5691dc7f41d9e1b0a5dba4fbd" Jan 29 07:30:21 crc kubenswrapper[4826]: I0129 07:30:21.658151 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a11fe3f193d5690552d42902640c63b8ce38adf5691dc7f41d9e1b0a5dba4fbd"} err="failed to get container status \"a11fe3f193d5690552d42902640c63b8ce38adf5691dc7f41d9e1b0a5dba4fbd\": rpc 
error: code = NotFound desc = could not find container \"a11fe3f193d5690552d42902640c63b8ce38adf5691dc7f41d9e1b0a5dba4fbd\": container with ID starting with a11fe3f193d5690552d42902640c63b8ce38adf5691dc7f41d9e1b0a5dba4fbd not found: ID does not exist" Jan 29 07:30:22 crc kubenswrapper[4826]: I0129 07:30:22.409096 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-27944"] Jan 29 07:30:22 crc kubenswrapper[4826]: I0129 07:30:22.570031 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-27944" podUID="8189a2aa-3bdd-4616-b700-0afcc2363e57" containerName="registry-server" containerID="cri-o://bdfd10ec2bbf59eaa88a5fefc47e5a3d478297cd2f1e8a27ad1844fc879fa3c7" gracePeriod=2 Jan 29 07:30:22 crc kubenswrapper[4826]: I0129 07:30:22.826590 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3a6f3f3-c805-412e-8ad0-9774dba31990" path="/var/lib/kubelet/pods/f3a6f3f3-c805-412e-8ad0-9774dba31990/volumes" Jan 29 07:30:23 crc kubenswrapper[4826]: I0129 07:30:23.022794 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-27944" Jan 29 07:30:23 crc kubenswrapper[4826]: I0129 07:30:23.113532 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8189a2aa-3bdd-4616-b700-0afcc2363e57-utilities\") pod \"8189a2aa-3bdd-4616-b700-0afcc2363e57\" (UID: \"8189a2aa-3bdd-4616-b700-0afcc2363e57\") " Jan 29 07:30:23 crc kubenswrapper[4826]: I0129 07:30:23.113617 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8189a2aa-3bdd-4616-b700-0afcc2363e57-catalog-content\") pod \"8189a2aa-3bdd-4616-b700-0afcc2363e57\" (UID: \"8189a2aa-3bdd-4616-b700-0afcc2363e57\") " Jan 29 07:30:23 crc kubenswrapper[4826]: I0129 07:30:23.113756 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djgk5\" (UniqueName: \"kubernetes.io/projected/8189a2aa-3bdd-4616-b700-0afcc2363e57-kube-api-access-djgk5\") pod \"8189a2aa-3bdd-4616-b700-0afcc2363e57\" (UID: \"8189a2aa-3bdd-4616-b700-0afcc2363e57\") " Jan 29 07:30:23 crc kubenswrapper[4826]: I0129 07:30:23.115289 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8189a2aa-3bdd-4616-b700-0afcc2363e57-utilities" (OuterVolumeSpecName: "utilities") pod "8189a2aa-3bdd-4616-b700-0afcc2363e57" (UID: "8189a2aa-3bdd-4616-b700-0afcc2363e57"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:30:23 crc kubenswrapper[4826]: I0129 07:30:23.119457 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8189a2aa-3bdd-4616-b700-0afcc2363e57-kube-api-access-djgk5" (OuterVolumeSpecName: "kube-api-access-djgk5") pod "8189a2aa-3bdd-4616-b700-0afcc2363e57" (UID: "8189a2aa-3bdd-4616-b700-0afcc2363e57"). InnerVolumeSpecName "kube-api-access-djgk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:30:23 crc kubenswrapper[4826]: I0129 07:30:23.168279 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8189a2aa-3bdd-4616-b700-0afcc2363e57-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8189a2aa-3bdd-4616-b700-0afcc2363e57" (UID: "8189a2aa-3bdd-4616-b700-0afcc2363e57"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:30:23 crc kubenswrapper[4826]: I0129 07:30:23.215157 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djgk5\" (UniqueName: \"kubernetes.io/projected/8189a2aa-3bdd-4616-b700-0afcc2363e57-kube-api-access-djgk5\") on node \"crc\" DevicePath \"\"" Jan 29 07:30:23 crc kubenswrapper[4826]: I0129 07:30:23.215200 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8189a2aa-3bdd-4616-b700-0afcc2363e57-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 07:30:23 crc kubenswrapper[4826]: I0129 07:30:23.215218 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8189a2aa-3bdd-4616-b700-0afcc2363e57-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 07:30:23 crc kubenswrapper[4826]: I0129 07:30:23.582483 4826 generic.go:334] "Generic (PLEG): container finished" podID="8189a2aa-3bdd-4616-b700-0afcc2363e57" containerID="bdfd10ec2bbf59eaa88a5fefc47e5a3d478297cd2f1e8a27ad1844fc879fa3c7" exitCode=0 Jan 29 07:30:23 crc kubenswrapper[4826]: I0129 07:30:23.582602 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-27944" event={"ID":"8189a2aa-3bdd-4616-b700-0afcc2363e57","Type":"ContainerDied","Data":"bdfd10ec2bbf59eaa88a5fefc47e5a3d478297cd2f1e8a27ad1844fc879fa3c7"} Jan 29 07:30:23 crc kubenswrapper[4826]: I0129 07:30:23.582675 4826 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-27944" event={"ID":"8189a2aa-3bdd-4616-b700-0afcc2363e57","Type":"ContainerDied","Data":"c3906d87133990e1dc0504096e535069fedb88205438ab1030826bafc374c362"} Jan 29 07:30:23 crc kubenswrapper[4826]: I0129 07:30:23.582708 4826 scope.go:117] "RemoveContainer" containerID="bdfd10ec2bbf59eaa88a5fefc47e5a3d478297cd2f1e8a27ad1844fc879fa3c7" Jan 29 07:30:23 crc kubenswrapper[4826]: I0129 07:30:23.582621 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-27944" Jan 29 07:30:23 crc kubenswrapper[4826]: I0129 07:30:23.624964 4826 scope.go:117] "RemoveContainer" containerID="eae256b4e57a86629e8180281a3cff87ea92ae6859500a282730d404b9cc4aa6" Jan 29 07:30:23 crc kubenswrapper[4826]: I0129 07:30:23.630467 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-27944"] Jan 29 07:30:23 crc kubenswrapper[4826]: I0129 07:30:23.639836 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-27944"] Jan 29 07:30:23 crc kubenswrapper[4826]: I0129 07:30:23.667482 4826 scope.go:117] "RemoveContainer" containerID="9fd1d01e776b3e2a9fee746f713ab0ea868132f23b66b5a7513cb57c2e5620d0" Jan 29 07:30:23 crc kubenswrapper[4826]: I0129 07:30:23.705646 4826 scope.go:117] "RemoveContainer" containerID="bdfd10ec2bbf59eaa88a5fefc47e5a3d478297cd2f1e8a27ad1844fc879fa3c7" Jan 29 07:30:23 crc kubenswrapper[4826]: E0129 07:30:23.706599 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdfd10ec2bbf59eaa88a5fefc47e5a3d478297cd2f1e8a27ad1844fc879fa3c7\": container with ID starting with bdfd10ec2bbf59eaa88a5fefc47e5a3d478297cd2f1e8a27ad1844fc879fa3c7 not found: ID does not exist" containerID="bdfd10ec2bbf59eaa88a5fefc47e5a3d478297cd2f1e8a27ad1844fc879fa3c7" Jan 29 07:30:23 crc kubenswrapper[4826]: I0129 
07:30:23.706628 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdfd10ec2bbf59eaa88a5fefc47e5a3d478297cd2f1e8a27ad1844fc879fa3c7"} err="failed to get container status \"bdfd10ec2bbf59eaa88a5fefc47e5a3d478297cd2f1e8a27ad1844fc879fa3c7\": rpc error: code = NotFound desc = could not find container \"bdfd10ec2bbf59eaa88a5fefc47e5a3d478297cd2f1e8a27ad1844fc879fa3c7\": container with ID starting with bdfd10ec2bbf59eaa88a5fefc47e5a3d478297cd2f1e8a27ad1844fc879fa3c7 not found: ID does not exist" Jan 29 07:30:23 crc kubenswrapper[4826]: I0129 07:30:23.706648 4826 scope.go:117] "RemoveContainer" containerID="eae256b4e57a86629e8180281a3cff87ea92ae6859500a282730d404b9cc4aa6" Jan 29 07:30:23 crc kubenswrapper[4826]: E0129 07:30:23.710695 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eae256b4e57a86629e8180281a3cff87ea92ae6859500a282730d404b9cc4aa6\": container with ID starting with eae256b4e57a86629e8180281a3cff87ea92ae6859500a282730d404b9cc4aa6 not found: ID does not exist" containerID="eae256b4e57a86629e8180281a3cff87ea92ae6859500a282730d404b9cc4aa6" Jan 29 07:30:23 crc kubenswrapper[4826]: I0129 07:30:23.710725 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eae256b4e57a86629e8180281a3cff87ea92ae6859500a282730d404b9cc4aa6"} err="failed to get container status \"eae256b4e57a86629e8180281a3cff87ea92ae6859500a282730d404b9cc4aa6\": rpc error: code = NotFound desc = could not find container \"eae256b4e57a86629e8180281a3cff87ea92ae6859500a282730d404b9cc4aa6\": container with ID starting with eae256b4e57a86629e8180281a3cff87ea92ae6859500a282730d404b9cc4aa6 not found: ID does not exist" Jan 29 07:30:23 crc kubenswrapper[4826]: I0129 07:30:23.710744 4826 scope.go:117] "RemoveContainer" containerID="9fd1d01e776b3e2a9fee746f713ab0ea868132f23b66b5a7513cb57c2e5620d0" Jan 29 07:30:23 crc 
kubenswrapper[4826]: E0129 07:30:23.711113 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fd1d01e776b3e2a9fee746f713ab0ea868132f23b66b5a7513cb57c2e5620d0\": container with ID starting with 9fd1d01e776b3e2a9fee746f713ab0ea868132f23b66b5a7513cb57c2e5620d0 not found: ID does not exist" containerID="9fd1d01e776b3e2a9fee746f713ab0ea868132f23b66b5a7513cb57c2e5620d0" Jan 29 07:30:23 crc kubenswrapper[4826]: I0129 07:30:23.711143 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fd1d01e776b3e2a9fee746f713ab0ea868132f23b66b5a7513cb57c2e5620d0"} err="failed to get container status \"9fd1d01e776b3e2a9fee746f713ab0ea868132f23b66b5a7513cb57c2e5620d0\": rpc error: code = NotFound desc = could not find container \"9fd1d01e776b3e2a9fee746f713ab0ea868132f23b66b5a7513cb57c2e5620d0\": container with ID starting with 9fd1d01e776b3e2a9fee746f713ab0ea868132f23b66b5a7513cb57c2e5620d0 not found: ID does not exist" Jan 29 07:30:24 crc kubenswrapper[4826]: I0129 07:30:24.823529 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8189a2aa-3bdd-4616-b700-0afcc2363e57" path="/var/lib/kubelet/pods/8189a2aa-3bdd-4616-b700-0afcc2363e57/volumes" Jan 29 07:30:24 crc kubenswrapper[4826]: I0129 07:30:24.824741 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mh5s7"] Jan 29 07:30:24 crc kubenswrapper[4826]: I0129 07:30:24.825153 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mh5s7" podUID="a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad" containerName="registry-server" containerID="cri-o://f87955c36693567d69b8b8c209a8c9195f4e1dad3dc9a4c08e9d25d3f3283078" gracePeriod=2 Jan 29 07:30:25 crc kubenswrapper[4826]: I0129 07:30:25.315446 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mh5s7" Jan 29 07:30:25 crc kubenswrapper[4826]: I0129 07:30:25.449451 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad-catalog-content\") pod \"a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad\" (UID: \"a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad\") " Jan 29 07:30:25 crc kubenswrapper[4826]: I0129 07:30:25.449626 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad-utilities\") pod \"a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad\" (UID: \"a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad\") " Jan 29 07:30:25 crc kubenswrapper[4826]: I0129 07:30:25.449693 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bg9j\" (UniqueName: \"kubernetes.io/projected/a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad-kube-api-access-7bg9j\") pod \"a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad\" (UID: \"a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad\") " Jan 29 07:30:25 crc kubenswrapper[4826]: I0129 07:30:25.451070 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad-utilities" (OuterVolumeSpecName: "utilities") pod "a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad" (UID: "a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:30:25 crc kubenswrapper[4826]: I0129 07:30:25.456476 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad-kube-api-access-7bg9j" (OuterVolumeSpecName: "kube-api-access-7bg9j") pod "a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad" (UID: "a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad"). InnerVolumeSpecName "kube-api-access-7bg9j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:30:25 crc kubenswrapper[4826]: I0129 07:30:25.545869 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad" (UID: "a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:30:25 crc kubenswrapper[4826]: I0129 07:30:25.551478 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bg9j\" (UniqueName: \"kubernetes.io/projected/a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad-kube-api-access-7bg9j\") on node \"crc\" DevicePath \"\"" Jan 29 07:30:25 crc kubenswrapper[4826]: I0129 07:30:25.551511 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 07:30:25 crc kubenswrapper[4826]: I0129 07:30:25.551524 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 07:30:25 crc kubenswrapper[4826]: I0129 07:30:25.603487 4826 generic.go:334] "Generic (PLEG): container finished" podID="a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad" containerID="f87955c36693567d69b8b8c209a8c9195f4e1dad3dc9a4c08e9d25d3f3283078" exitCode=0 Jan 29 07:30:25 crc kubenswrapper[4826]: I0129 07:30:25.603530 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mh5s7" event={"ID":"a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad","Type":"ContainerDied","Data":"f87955c36693567d69b8b8c209a8c9195f4e1dad3dc9a4c08e9d25d3f3283078"} Jan 29 07:30:25 crc kubenswrapper[4826]: I0129 07:30:25.603560 4826 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-mh5s7" event={"ID":"a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad","Type":"ContainerDied","Data":"350335aef77ab244fa4be2e92f390a166e0fee9841e376e48d67884d3ca1f94e"} Jan 29 07:30:25 crc kubenswrapper[4826]: I0129 07:30:25.603574 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mh5s7" Jan 29 07:30:25 crc kubenswrapper[4826]: I0129 07:30:25.603585 4826 scope.go:117] "RemoveContainer" containerID="f87955c36693567d69b8b8c209a8c9195f4e1dad3dc9a4c08e9d25d3f3283078" Jan 29 07:30:25 crc kubenswrapper[4826]: I0129 07:30:25.627525 4826 scope.go:117] "RemoveContainer" containerID="643439f39fb67a04ae22b90be02b4a8e096caf7f4f420a1bb64e346f00f1e307" Jan 29 07:30:25 crc kubenswrapper[4826]: I0129 07:30:25.662242 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mh5s7"] Jan 29 07:30:25 crc kubenswrapper[4826]: I0129 07:30:25.671448 4826 scope.go:117] "RemoveContainer" containerID="4afab7329d4a4c6d6549c5e8911d967ac1e56a65d65316369a40fa910aad8a4f" Jan 29 07:30:25 crc kubenswrapper[4826]: I0129 07:30:25.672944 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mh5s7"] Jan 29 07:30:25 crc kubenswrapper[4826]: I0129 07:30:25.701428 4826 scope.go:117] "RemoveContainer" containerID="f87955c36693567d69b8b8c209a8c9195f4e1dad3dc9a4c08e9d25d3f3283078" Jan 29 07:30:25 crc kubenswrapper[4826]: E0129 07:30:25.702130 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f87955c36693567d69b8b8c209a8c9195f4e1dad3dc9a4c08e9d25d3f3283078\": container with ID starting with f87955c36693567d69b8b8c209a8c9195f4e1dad3dc9a4c08e9d25d3f3283078 not found: ID does not exist" containerID="f87955c36693567d69b8b8c209a8c9195f4e1dad3dc9a4c08e9d25d3f3283078" Jan 29 07:30:25 crc kubenswrapper[4826]: I0129 
07:30:25.702183 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f87955c36693567d69b8b8c209a8c9195f4e1dad3dc9a4c08e9d25d3f3283078"} err="failed to get container status \"f87955c36693567d69b8b8c209a8c9195f4e1dad3dc9a4c08e9d25d3f3283078\": rpc error: code = NotFound desc = could not find container \"f87955c36693567d69b8b8c209a8c9195f4e1dad3dc9a4c08e9d25d3f3283078\": container with ID starting with f87955c36693567d69b8b8c209a8c9195f4e1dad3dc9a4c08e9d25d3f3283078 not found: ID does not exist" Jan 29 07:30:25 crc kubenswrapper[4826]: I0129 07:30:25.702224 4826 scope.go:117] "RemoveContainer" containerID="643439f39fb67a04ae22b90be02b4a8e096caf7f4f420a1bb64e346f00f1e307" Jan 29 07:30:25 crc kubenswrapper[4826]: E0129 07:30:25.703203 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"643439f39fb67a04ae22b90be02b4a8e096caf7f4f420a1bb64e346f00f1e307\": container with ID starting with 643439f39fb67a04ae22b90be02b4a8e096caf7f4f420a1bb64e346f00f1e307 not found: ID does not exist" containerID="643439f39fb67a04ae22b90be02b4a8e096caf7f4f420a1bb64e346f00f1e307" Jan 29 07:30:25 crc kubenswrapper[4826]: I0129 07:30:25.703273 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"643439f39fb67a04ae22b90be02b4a8e096caf7f4f420a1bb64e346f00f1e307"} err="failed to get container status \"643439f39fb67a04ae22b90be02b4a8e096caf7f4f420a1bb64e346f00f1e307\": rpc error: code = NotFound desc = could not find container \"643439f39fb67a04ae22b90be02b4a8e096caf7f4f420a1bb64e346f00f1e307\": container with ID starting with 643439f39fb67a04ae22b90be02b4a8e096caf7f4f420a1bb64e346f00f1e307 not found: ID does not exist" Jan 29 07:30:25 crc kubenswrapper[4826]: I0129 07:30:25.703344 4826 scope.go:117] "RemoveContainer" containerID="4afab7329d4a4c6d6549c5e8911d967ac1e56a65d65316369a40fa910aad8a4f" Jan 29 07:30:25 crc 
kubenswrapper[4826]: E0129 07:30:25.703728 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4afab7329d4a4c6d6549c5e8911d967ac1e56a65d65316369a40fa910aad8a4f\": container with ID starting with 4afab7329d4a4c6d6549c5e8911d967ac1e56a65d65316369a40fa910aad8a4f not found: ID does not exist" containerID="4afab7329d4a4c6d6549c5e8911d967ac1e56a65d65316369a40fa910aad8a4f" Jan 29 07:30:25 crc kubenswrapper[4826]: I0129 07:30:25.703775 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4afab7329d4a4c6d6549c5e8911d967ac1e56a65d65316369a40fa910aad8a4f"} err="failed to get container status \"4afab7329d4a4c6d6549c5e8911d967ac1e56a65d65316369a40fa910aad8a4f\": rpc error: code = NotFound desc = could not find container \"4afab7329d4a4c6d6549c5e8911d967ac1e56a65d65316369a40fa910aad8a4f\": container with ID starting with 4afab7329d4a4c6d6549c5e8911d967ac1e56a65d65316369a40fa910aad8a4f not found: ID does not exist" Jan 29 07:30:26 crc kubenswrapper[4826]: I0129 07:30:26.830758 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad" path="/var/lib/kubelet/pods/a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad/volumes" Jan 29 07:30:35 crc kubenswrapper[4826]: I0129 07:30:35.656760 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:30:35 crc kubenswrapper[4826]: I0129 07:30:35.657441 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 29 07:30:35 crc kubenswrapper[4826]: I0129 07:30:35.657505 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" Jan 29 07:30:35 crc kubenswrapper[4826]: I0129 07:30:35.658250 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cdf0954d547df06d481856ed121f2f9ee617f65f5bcb0c58b6a4636bb42863aa"} pod="openshift-machine-config-operator/machine-config-daemon-llzmh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 07:30:35 crc kubenswrapper[4826]: I0129 07:30:35.658388 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" containerID="cri-o://cdf0954d547df06d481856ed121f2f9ee617f65f5bcb0c58b6a4636bb42863aa" gracePeriod=600 Jan 29 07:30:36 crc kubenswrapper[4826]: I0129 07:30:36.699073 4826 generic.go:334] "Generic (PLEG): container finished" podID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerID="cdf0954d547df06d481856ed121f2f9ee617f65f5bcb0c58b6a4636bb42863aa" exitCode=0 Jan 29 07:30:36 crc kubenswrapper[4826]: I0129 07:30:36.699159 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerDied","Data":"cdf0954d547df06d481856ed121f2f9ee617f65f5bcb0c58b6a4636bb42863aa"} Jan 29 07:30:36 crc kubenswrapper[4826]: I0129 07:30:36.700581 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" 
event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerStarted","Data":"d2fd7f64ecc62fc03e15853e261af4caebe3db2298a46a4c012da43d34be2d88"} Jan 29 07:30:36 crc kubenswrapper[4826]: I0129 07:30:36.700642 4826 scope.go:117] "RemoveContainer" containerID="d8cd3b515c3f79f8ee54f918524910942356a8f519f05d6543e0f77e254b72cc" Jan 29 07:31:01 crc kubenswrapper[4826]: I0129 07:31:01.952651 4826 scope.go:117] "RemoveContainer" containerID="d3ad29132ec253628533207eb0006a9734d96a886c878edc8148e899ebeab089" Jan 29 07:33:05 crc kubenswrapper[4826]: I0129 07:33:05.656423 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:33:05 crc kubenswrapper[4826]: I0129 07:33:05.657252 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:33:35 crc kubenswrapper[4826]: I0129 07:33:35.656244 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:33:35 crc kubenswrapper[4826]: I0129 07:33:35.657021 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 29 07:34:05 crc kubenswrapper[4826]: I0129 07:34:05.657158 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:34:05 crc kubenswrapper[4826]: I0129 07:34:05.657804 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:34:05 crc kubenswrapper[4826]: I0129 07:34:05.657866 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" Jan 29 07:34:05 crc kubenswrapper[4826]: I0129 07:34:05.659166 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d2fd7f64ecc62fc03e15853e261af4caebe3db2298a46a4c012da43d34be2d88"} pod="openshift-machine-config-operator/machine-config-daemon-llzmh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 07:34:05 crc kubenswrapper[4826]: I0129 07:34:05.659506 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" containerID="cri-o://d2fd7f64ecc62fc03e15853e261af4caebe3db2298a46a4c012da43d34be2d88" gracePeriod=600 Jan 29 07:34:05 crc kubenswrapper[4826]: E0129 07:34:05.822385 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:34:06 crc kubenswrapper[4826]: I0129 07:34:06.074707 4826 generic.go:334] "Generic (PLEG): container finished" podID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerID="d2fd7f64ecc62fc03e15853e261af4caebe3db2298a46a4c012da43d34be2d88" exitCode=0 Jan 29 07:34:06 crc kubenswrapper[4826]: I0129 07:34:06.074813 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerDied","Data":"d2fd7f64ecc62fc03e15853e261af4caebe3db2298a46a4c012da43d34be2d88"} Jan 29 07:34:06 crc kubenswrapper[4826]: I0129 07:34:06.074908 4826 scope.go:117] "RemoveContainer" containerID="cdf0954d547df06d481856ed121f2f9ee617f65f5bcb0c58b6a4636bb42863aa" Jan 29 07:34:06 crc kubenswrapper[4826]: I0129 07:34:06.076100 4826 scope.go:117] "RemoveContainer" containerID="d2fd7f64ecc62fc03e15853e261af4caebe3db2298a46a4c012da43d34be2d88" Jan 29 07:34:06 crc kubenswrapper[4826]: E0129 07:34:06.076674 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:34:20 crc kubenswrapper[4826]: I0129 07:34:20.809401 4826 scope.go:117] "RemoveContainer" containerID="d2fd7f64ecc62fc03e15853e261af4caebe3db2298a46a4c012da43d34be2d88" Jan 29 07:34:20 crc kubenswrapper[4826]: E0129 07:34:20.810589 4826 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:34:33 crc kubenswrapper[4826]: I0129 07:34:33.810379 4826 scope.go:117] "RemoveContainer" containerID="d2fd7f64ecc62fc03e15853e261af4caebe3db2298a46a4c012da43d34be2d88" Jan 29 07:34:33 crc kubenswrapper[4826]: E0129 07:34:33.811627 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:34:46 crc kubenswrapper[4826]: I0129 07:34:46.814667 4826 scope.go:117] "RemoveContainer" containerID="d2fd7f64ecc62fc03e15853e261af4caebe3db2298a46a4c012da43d34be2d88" Jan 29 07:34:46 crc kubenswrapper[4826]: E0129 07:34:46.815634 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:34:58 crc kubenswrapper[4826]: I0129 07:34:58.809334 4826 scope.go:117] "RemoveContainer" containerID="d2fd7f64ecc62fc03e15853e261af4caebe3db2298a46a4c012da43d34be2d88" Jan 29 07:34:58 crc kubenswrapper[4826]: E0129 07:34:58.810231 4826 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:35:09 crc kubenswrapper[4826]: I0129 07:35:09.809453 4826 scope.go:117] "RemoveContainer" containerID="d2fd7f64ecc62fc03e15853e261af4caebe3db2298a46a4c012da43d34be2d88" Jan 29 07:35:09 crc kubenswrapper[4826]: E0129 07:35:09.810518 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:35:23 crc kubenswrapper[4826]: I0129 07:35:23.809159 4826 scope.go:117] "RemoveContainer" containerID="d2fd7f64ecc62fc03e15853e261af4caebe3db2298a46a4c012da43d34be2d88" Jan 29 07:35:23 crc kubenswrapper[4826]: E0129 07:35:23.810485 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:35:35 crc kubenswrapper[4826]: I0129 07:35:35.808842 4826 scope.go:117] "RemoveContainer" containerID="d2fd7f64ecc62fc03e15853e261af4caebe3db2298a46a4c012da43d34be2d88" Jan 29 07:35:35 crc kubenswrapper[4826]: E0129 
07:35:35.809845 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:35:49 crc kubenswrapper[4826]: I0129 07:35:49.809466 4826 scope.go:117] "RemoveContainer" containerID="d2fd7f64ecc62fc03e15853e261af4caebe3db2298a46a4c012da43d34be2d88" Jan 29 07:35:49 crc kubenswrapper[4826]: E0129 07:35:49.810537 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:36:03 crc kubenswrapper[4826]: I0129 07:36:03.809193 4826 scope.go:117] "RemoveContainer" containerID="d2fd7f64ecc62fc03e15853e261af4caebe3db2298a46a4c012da43d34be2d88" Jan 29 07:36:03 crc kubenswrapper[4826]: E0129 07:36:03.810439 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:36:17 crc kubenswrapper[4826]: I0129 07:36:17.808751 4826 scope.go:117] "RemoveContainer" containerID="d2fd7f64ecc62fc03e15853e261af4caebe3db2298a46a4c012da43d34be2d88" Jan 29 07:36:17 crc 
kubenswrapper[4826]: E0129 07:36:17.809669 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:36:30 crc kubenswrapper[4826]: I0129 07:36:30.809736 4826 scope.go:117] "RemoveContainer" containerID="d2fd7f64ecc62fc03e15853e261af4caebe3db2298a46a4c012da43d34be2d88" Jan 29 07:36:30 crc kubenswrapper[4826]: E0129 07:36:30.810970 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:36:44 crc kubenswrapper[4826]: I0129 07:36:44.808847 4826 scope.go:117] "RemoveContainer" containerID="d2fd7f64ecc62fc03e15853e261af4caebe3db2298a46a4c012da43d34be2d88" Jan 29 07:36:44 crc kubenswrapper[4826]: E0129 07:36:44.809746 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:36:58 crc kubenswrapper[4826]: I0129 07:36:58.809105 4826 scope.go:117] "RemoveContainer" containerID="d2fd7f64ecc62fc03e15853e261af4caebe3db2298a46a4c012da43d34be2d88" Jan 
29 07:36:58 crc kubenswrapper[4826]: E0129 07:36:58.810078 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:37:12 crc kubenswrapper[4826]: I0129 07:37:12.808756 4826 scope.go:117] "RemoveContainer" containerID="d2fd7f64ecc62fc03e15853e261af4caebe3db2298a46a4c012da43d34be2d88" Jan 29 07:37:12 crc kubenswrapper[4826]: E0129 07:37:12.809436 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:37:24 crc kubenswrapper[4826]: I0129 07:37:24.809151 4826 scope.go:117] "RemoveContainer" containerID="d2fd7f64ecc62fc03e15853e261af4caebe3db2298a46a4c012da43d34be2d88" Jan 29 07:37:24 crc kubenswrapper[4826]: E0129 07:37:24.809962 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:37:37 crc kubenswrapper[4826]: I0129 07:37:37.809426 4826 scope.go:117] "RemoveContainer" 
containerID="d2fd7f64ecc62fc03e15853e261af4caebe3db2298a46a4c012da43d34be2d88" Jan 29 07:37:37 crc kubenswrapper[4826]: E0129 07:37:37.810357 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:37:51 crc kubenswrapper[4826]: I0129 07:37:51.809075 4826 scope.go:117] "RemoveContainer" containerID="d2fd7f64ecc62fc03e15853e261af4caebe3db2298a46a4c012da43d34be2d88" Jan 29 07:37:51 crc kubenswrapper[4826]: E0129 07:37:51.809969 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:38:04 crc kubenswrapper[4826]: I0129 07:38:04.809148 4826 scope.go:117] "RemoveContainer" containerID="d2fd7f64ecc62fc03e15853e261af4caebe3db2298a46a4c012da43d34be2d88" Jan 29 07:38:04 crc kubenswrapper[4826]: E0129 07:38:04.809691 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:38:17 crc kubenswrapper[4826]: I0129 07:38:17.809424 4826 scope.go:117] 
"RemoveContainer" containerID="d2fd7f64ecc62fc03e15853e261af4caebe3db2298a46a4c012da43d34be2d88" Jan 29 07:38:17 crc kubenswrapper[4826]: E0129 07:38:17.810509 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:38:30 crc kubenswrapper[4826]: I0129 07:38:30.809332 4826 scope.go:117] "RemoveContainer" containerID="d2fd7f64ecc62fc03e15853e261af4caebe3db2298a46a4c012da43d34be2d88" Jan 29 07:38:30 crc kubenswrapper[4826]: E0129 07:38:30.810142 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:38:45 crc kubenswrapper[4826]: I0129 07:38:45.809213 4826 scope.go:117] "RemoveContainer" containerID="d2fd7f64ecc62fc03e15853e261af4caebe3db2298a46a4c012da43d34be2d88" Jan 29 07:38:45 crc kubenswrapper[4826]: E0129 07:38:45.810264 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:38:58 crc kubenswrapper[4826]: I0129 07:38:58.810563 
4826 scope.go:117] "RemoveContainer" containerID="d2fd7f64ecc62fc03e15853e261af4caebe3db2298a46a4c012da43d34be2d88" Jan 29 07:38:58 crc kubenswrapper[4826]: E0129 07:38:58.811652 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:39:12 crc kubenswrapper[4826]: I0129 07:39:12.808318 4826 scope.go:117] "RemoveContainer" containerID="d2fd7f64ecc62fc03e15853e261af4caebe3db2298a46a4c012da43d34be2d88" Jan 29 07:39:13 crc kubenswrapper[4826]: I0129 07:39:13.220476 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerStarted","Data":"6367fbfc28c163bfbd82327957921632863c9214b9f6e813fb3474721055374b"} Jan 29 07:40:40 crc kubenswrapper[4826]: I0129 07:40:40.710017 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d8pwg"] Jan 29 07:40:40 crc kubenswrapper[4826]: E0129 07:40:40.711014 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8189a2aa-3bdd-4616-b700-0afcc2363e57" containerName="extract-content" Jan 29 07:40:40 crc kubenswrapper[4826]: I0129 07:40:40.711035 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="8189a2aa-3bdd-4616-b700-0afcc2363e57" containerName="extract-content" Jan 29 07:40:40 crc kubenswrapper[4826]: E0129 07:40:40.711059 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3a6f3f3-c805-412e-8ad0-9774dba31990" containerName="extract-content" Jan 29 07:40:40 crc kubenswrapper[4826]: I0129 07:40:40.711075 4826 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="f3a6f3f3-c805-412e-8ad0-9774dba31990" containerName="extract-content" Jan 29 07:40:40 crc kubenswrapper[4826]: E0129 07:40:40.711104 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad" containerName="extract-utilities" Jan 29 07:40:40 crc kubenswrapper[4826]: I0129 07:40:40.711119 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad" containerName="extract-utilities" Jan 29 07:40:40 crc kubenswrapper[4826]: E0129 07:40:40.711138 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3a6f3f3-c805-412e-8ad0-9774dba31990" containerName="extract-utilities" Jan 29 07:40:40 crc kubenswrapper[4826]: I0129 07:40:40.711151 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3a6f3f3-c805-412e-8ad0-9774dba31990" containerName="extract-utilities" Jan 29 07:40:40 crc kubenswrapper[4826]: E0129 07:40:40.711179 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8189a2aa-3bdd-4616-b700-0afcc2363e57" containerName="extract-utilities" Jan 29 07:40:40 crc kubenswrapper[4826]: I0129 07:40:40.711191 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="8189a2aa-3bdd-4616-b700-0afcc2363e57" containerName="extract-utilities" Jan 29 07:40:40 crc kubenswrapper[4826]: E0129 07:40:40.711208 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8189a2aa-3bdd-4616-b700-0afcc2363e57" containerName="registry-server" Jan 29 07:40:40 crc kubenswrapper[4826]: I0129 07:40:40.711221 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="8189a2aa-3bdd-4616-b700-0afcc2363e57" containerName="registry-server" Jan 29 07:40:40 crc kubenswrapper[4826]: E0129 07:40:40.711242 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad" containerName="extract-content" Jan 29 07:40:40 crc kubenswrapper[4826]: I0129 07:40:40.711254 4826 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad" containerName="extract-content" Jan 29 07:40:40 crc kubenswrapper[4826]: E0129 07:40:40.711284 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad" containerName="registry-server" Jan 29 07:40:40 crc kubenswrapper[4826]: I0129 07:40:40.711324 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad" containerName="registry-server" Jan 29 07:40:40 crc kubenswrapper[4826]: E0129 07:40:40.711340 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3a6f3f3-c805-412e-8ad0-9774dba31990" containerName="registry-server" Jan 29 07:40:40 crc kubenswrapper[4826]: I0129 07:40:40.711355 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3a6f3f3-c805-412e-8ad0-9774dba31990" containerName="registry-server" Jan 29 07:40:40 crc kubenswrapper[4826]: I0129 07:40:40.711596 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3a6f3f3-c805-412e-8ad0-9774dba31990" containerName="registry-server" Jan 29 07:40:40 crc kubenswrapper[4826]: I0129 07:40:40.711623 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9bf7e8f-923d-4cd7-88e6-a4bcdd54f3ad" containerName="registry-server" Jan 29 07:40:40 crc kubenswrapper[4826]: I0129 07:40:40.711652 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="8189a2aa-3bdd-4616-b700-0afcc2363e57" containerName="registry-server" Jan 29 07:40:40 crc kubenswrapper[4826]: I0129 07:40:40.713672 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d8pwg" Jan 29 07:40:40 crc kubenswrapper[4826]: I0129 07:40:40.724941 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d8pwg"] Jan 29 07:40:40 crc kubenswrapper[4826]: I0129 07:40:40.807152 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2zdq\" (UniqueName: \"kubernetes.io/projected/56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9-kube-api-access-x2zdq\") pod \"community-operators-d8pwg\" (UID: \"56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9\") " pod="openshift-marketplace/community-operators-d8pwg" Jan 29 07:40:40 crc kubenswrapper[4826]: I0129 07:40:40.807532 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9-utilities\") pod \"community-operators-d8pwg\" (UID: \"56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9\") " pod="openshift-marketplace/community-operators-d8pwg" Jan 29 07:40:40 crc kubenswrapper[4826]: I0129 07:40:40.807666 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9-catalog-content\") pod \"community-operators-d8pwg\" (UID: \"56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9\") " pod="openshift-marketplace/community-operators-d8pwg" Jan 29 07:40:40 crc kubenswrapper[4826]: I0129 07:40:40.909380 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2zdq\" (UniqueName: \"kubernetes.io/projected/56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9-kube-api-access-x2zdq\") pod \"community-operators-d8pwg\" (UID: \"56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9\") " pod="openshift-marketplace/community-operators-d8pwg" Jan 29 07:40:40 crc kubenswrapper[4826]: I0129 07:40:40.909472 4826 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9-utilities\") pod \"community-operators-d8pwg\" (UID: \"56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9\") " pod="openshift-marketplace/community-operators-d8pwg" Jan 29 07:40:40 crc kubenswrapper[4826]: I0129 07:40:40.909511 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9-catalog-content\") pod \"community-operators-d8pwg\" (UID: \"56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9\") " pod="openshift-marketplace/community-operators-d8pwg" Jan 29 07:40:40 crc kubenswrapper[4826]: I0129 07:40:40.910336 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9-utilities\") pod \"community-operators-d8pwg\" (UID: \"56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9\") " pod="openshift-marketplace/community-operators-d8pwg" Jan 29 07:40:40 crc kubenswrapper[4826]: I0129 07:40:40.910652 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9-catalog-content\") pod \"community-operators-d8pwg\" (UID: \"56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9\") " pod="openshift-marketplace/community-operators-d8pwg" Jan 29 07:40:40 crc kubenswrapper[4826]: I0129 07:40:40.934283 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2zdq\" (UniqueName: \"kubernetes.io/projected/56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9-kube-api-access-x2zdq\") pod \"community-operators-d8pwg\" (UID: \"56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9\") " pod="openshift-marketplace/community-operators-d8pwg" Jan 29 07:40:41 crc kubenswrapper[4826]: I0129 07:40:41.047070 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d8pwg" Jan 29 07:40:41 crc kubenswrapper[4826]: I0129 07:40:41.539677 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d8pwg"] Jan 29 07:40:41 crc kubenswrapper[4826]: I0129 07:40:41.986723 4826 generic.go:334] "Generic (PLEG): container finished" podID="56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9" containerID="a28d89294fcff0779daf2dc70888bc6d46ff0016ccfc13b3e84f815e2aefaa75" exitCode=0 Jan 29 07:40:41 crc kubenswrapper[4826]: I0129 07:40:41.986806 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d8pwg" event={"ID":"56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9","Type":"ContainerDied","Data":"a28d89294fcff0779daf2dc70888bc6d46ff0016ccfc13b3e84f815e2aefaa75"} Jan 29 07:40:41 crc kubenswrapper[4826]: I0129 07:40:41.986879 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d8pwg" event={"ID":"56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9","Type":"ContainerStarted","Data":"626429b9e7619afc549f10bebd5e96aa657531fd68ad3c301290e46d02b39bb6"} Jan 29 07:40:41 crc kubenswrapper[4826]: I0129 07:40:41.989137 4826 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 07:40:42 crc kubenswrapper[4826]: I0129 07:40:42.996482 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d8pwg" event={"ID":"56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9","Type":"ContainerStarted","Data":"e3657b904bd48bff5f0425797af5965b7cfcafcfb34993fbf1db19792c0f470a"} Jan 29 07:40:44 crc kubenswrapper[4826]: I0129 07:40:44.006908 4826 generic.go:334] "Generic (PLEG): container finished" podID="56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9" containerID="e3657b904bd48bff5f0425797af5965b7cfcafcfb34993fbf1db19792c0f470a" exitCode=0 Jan 29 07:40:44 crc kubenswrapper[4826]: I0129 07:40:44.006969 4826 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-d8pwg" event={"ID":"56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9","Type":"ContainerDied","Data":"e3657b904bd48bff5f0425797af5965b7cfcafcfb34993fbf1db19792c0f470a"} Jan 29 07:40:45 crc kubenswrapper[4826]: I0129 07:40:45.013412 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d8pwg" event={"ID":"56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9","Type":"ContainerStarted","Data":"70e09bd94e7046ea9a15f7f1c5761e4843436cda8192f54d36ddf0ce94003317"} Jan 29 07:40:45 crc kubenswrapper[4826]: I0129 07:40:45.036860 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d8pwg" podStartSLOduration=2.60907692 podStartE2EDuration="5.036843984s" podCreationTimestamp="2026-01-29 07:40:40 +0000 UTC" firstStartedPulling="2026-01-29 07:40:41.988725473 +0000 UTC m=+3425.850518582" lastFinishedPulling="2026-01-29 07:40:44.416492547 +0000 UTC m=+3428.278285646" observedRunningTime="2026-01-29 07:40:45.033353602 +0000 UTC m=+3428.895146671" watchObservedRunningTime="2026-01-29 07:40:45.036843984 +0000 UTC m=+3428.898637053" Jan 29 07:40:51 crc kubenswrapper[4826]: I0129 07:40:51.047742 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d8pwg" Jan 29 07:40:51 crc kubenswrapper[4826]: I0129 07:40:51.048465 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d8pwg" Jan 29 07:40:51 crc kubenswrapper[4826]: I0129 07:40:51.120784 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d8pwg" Jan 29 07:40:51 crc kubenswrapper[4826]: I0129 07:40:51.541367 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-st5wh"] Jan 29 07:40:51 crc kubenswrapper[4826]: I0129 07:40:51.543190 4826 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-st5wh" Jan 29 07:40:51 crc kubenswrapper[4826]: I0129 07:40:51.570959 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-st5wh"] Jan 29 07:40:51 crc kubenswrapper[4826]: I0129 07:40:51.695460 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vptrt\" (UniqueName: \"kubernetes.io/projected/9bfeea0d-e54d-48f0-8758-a15707da76c9-kube-api-access-vptrt\") pod \"certified-operators-st5wh\" (UID: \"9bfeea0d-e54d-48f0-8758-a15707da76c9\") " pod="openshift-marketplace/certified-operators-st5wh" Jan 29 07:40:51 crc kubenswrapper[4826]: I0129 07:40:51.695576 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bfeea0d-e54d-48f0-8758-a15707da76c9-utilities\") pod \"certified-operators-st5wh\" (UID: \"9bfeea0d-e54d-48f0-8758-a15707da76c9\") " pod="openshift-marketplace/certified-operators-st5wh" Jan 29 07:40:51 crc kubenswrapper[4826]: I0129 07:40:51.695650 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bfeea0d-e54d-48f0-8758-a15707da76c9-catalog-content\") pod \"certified-operators-st5wh\" (UID: \"9bfeea0d-e54d-48f0-8758-a15707da76c9\") " pod="openshift-marketplace/certified-operators-st5wh" Jan 29 07:40:51 crc kubenswrapper[4826]: I0129 07:40:51.797233 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vptrt\" (UniqueName: \"kubernetes.io/projected/9bfeea0d-e54d-48f0-8758-a15707da76c9-kube-api-access-vptrt\") pod \"certified-operators-st5wh\" (UID: \"9bfeea0d-e54d-48f0-8758-a15707da76c9\") " pod="openshift-marketplace/certified-operators-st5wh" Jan 29 07:40:51 crc kubenswrapper[4826]: I0129 
07:40:51.797332 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bfeea0d-e54d-48f0-8758-a15707da76c9-utilities\") pod \"certified-operators-st5wh\" (UID: \"9bfeea0d-e54d-48f0-8758-a15707da76c9\") " pod="openshift-marketplace/certified-operators-st5wh" Jan 29 07:40:51 crc kubenswrapper[4826]: I0129 07:40:51.797381 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bfeea0d-e54d-48f0-8758-a15707da76c9-catalog-content\") pod \"certified-operators-st5wh\" (UID: \"9bfeea0d-e54d-48f0-8758-a15707da76c9\") " pod="openshift-marketplace/certified-operators-st5wh" Jan 29 07:40:51 crc kubenswrapper[4826]: I0129 07:40:51.797942 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bfeea0d-e54d-48f0-8758-a15707da76c9-utilities\") pod \"certified-operators-st5wh\" (UID: \"9bfeea0d-e54d-48f0-8758-a15707da76c9\") " pod="openshift-marketplace/certified-operators-st5wh" Jan 29 07:40:51 crc kubenswrapper[4826]: I0129 07:40:51.797980 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bfeea0d-e54d-48f0-8758-a15707da76c9-catalog-content\") pod \"certified-operators-st5wh\" (UID: \"9bfeea0d-e54d-48f0-8758-a15707da76c9\") " pod="openshift-marketplace/certified-operators-st5wh" Jan 29 07:40:51 crc kubenswrapper[4826]: I0129 07:40:51.822994 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vptrt\" (UniqueName: \"kubernetes.io/projected/9bfeea0d-e54d-48f0-8758-a15707da76c9-kube-api-access-vptrt\") pod \"certified-operators-st5wh\" (UID: \"9bfeea0d-e54d-48f0-8758-a15707da76c9\") " pod="openshift-marketplace/certified-operators-st5wh" Jan 29 07:40:51 crc kubenswrapper[4826]: I0129 07:40:51.901392 4826 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-st5wh" Jan 29 07:40:52 crc kubenswrapper[4826]: I0129 07:40:52.137158 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d8pwg" Jan 29 07:40:52 crc kubenswrapper[4826]: I0129 07:40:52.386085 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-st5wh"] Jan 29 07:40:52 crc kubenswrapper[4826]: W0129 07:40:52.397598 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bfeea0d_e54d_48f0_8758_a15707da76c9.slice/crio-a6a8395980b62a3a9d2a4e25ed2e5c23d338df12cdab5e058dc3cbb4b453d241 WatchSource:0}: Error finding container a6a8395980b62a3a9d2a4e25ed2e5c23d338df12cdab5e058dc3cbb4b453d241: Status 404 returned error can't find the container with id a6a8395980b62a3a9d2a4e25ed2e5c23d338df12cdab5e058dc3cbb4b453d241 Jan 29 07:40:53 crc kubenswrapper[4826]: I0129 07:40:53.079477 4826 generic.go:334] "Generic (PLEG): container finished" podID="9bfeea0d-e54d-48f0-8758-a15707da76c9" containerID="5055f1f9f81ba88db41ad45c03653cfcf91b52b63ca7349cbe7036685e9082f3" exitCode=0 Jan 29 07:40:53 crc kubenswrapper[4826]: I0129 07:40:53.079596 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-st5wh" event={"ID":"9bfeea0d-e54d-48f0-8758-a15707da76c9","Type":"ContainerDied","Data":"5055f1f9f81ba88db41ad45c03653cfcf91b52b63ca7349cbe7036685e9082f3"} Jan 29 07:40:53 crc kubenswrapper[4826]: I0129 07:40:53.080093 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-st5wh" event={"ID":"9bfeea0d-e54d-48f0-8758-a15707da76c9","Type":"ContainerStarted","Data":"a6a8395980b62a3a9d2a4e25ed2e5c23d338df12cdab5e058dc3cbb4b453d241"} Jan 29 07:40:54 crc kubenswrapper[4826]: I0129 07:40:54.090088 4826 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-st5wh" event={"ID":"9bfeea0d-e54d-48f0-8758-a15707da76c9","Type":"ContainerStarted","Data":"735077b8946c4ee8f77dc2c2226b3c403db15b66c42264dd861b63fe7e6721f9"} Jan 29 07:40:54 crc kubenswrapper[4826]: I0129 07:40:54.553370 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7t9xh"] Jan 29 07:40:54 crc kubenswrapper[4826]: I0129 07:40:54.555411 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7t9xh" Jan 29 07:40:54 crc kubenswrapper[4826]: I0129 07:40:54.562876 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7t9xh"] Jan 29 07:40:54 crc kubenswrapper[4826]: I0129 07:40:54.656614 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542-utilities\") pod \"redhat-marketplace-7t9xh\" (UID: \"2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542\") " pod="openshift-marketplace/redhat-marketplace-7t9xh" Jan 29 07:40:54 crc kubenswrapper[4826]: I0129 07:40:54.656746 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljdzh\" (UniqueName: \"kubernetes.io/projected/2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542-kube-api-access-ljdzh\") pod \"redhat-marketplace-7t9xh\" (UID: \"2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542\") " pod="openshift-marketplace/redhat-marketplace-7t9xh" Jan 29 07:40:54 crc kubenswrapper[4826]: I0129 07:40:54.656784 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542-catalog-content\") pod \"redhat-marketplace-7t9xh\" (UID: \"2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542\") " 
pod="openshift-marketplace/redhat-marketplace-7t9xh" Jan 29 07:40:54 crc kubenswrapper[4826]: I0129 07:40:54.736853 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mlpm2"] Jan 29 07:40:54 crc kubenswrapper[4826]: I0129 07:40:54.740217 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mlpm2" Jan 29 07:40:54 crc kubenswrapper[4826]: I0129 07:40:54.761262 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542-utilities\") pod \"redhat-marketplace-7t9xh\" (UID: \"2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542\") " pod="openshift-marketplace/redhat-marketplace-7t9xh" Jan 29 07:40:54 crc kubenswrapper[4826]: I0129 07:40:54.761410 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljdzh\" (UniqueName: \"kubernetes.io/projected/2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542-kube-api-access-ljdzh\") pod \"redhat-marketplace-7t9xh\" (UID: \"2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542\") " pod="openshift-marketplace/redhat-marketplace-7t9xh" Jan 29 07:40:54 crc kubenswrapper[4826]: I0129 07:40:54.761452 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542-catalog-content\") pod \"redhat-marketplace-7t9xh\" (UID: \"2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542\") " pod="openshift-marketplace/redhat-marketplace-7t9xh" Jan 29 07:40:54 crc kubenswrapper[4826]: I0129 07:40:54.762072 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542-catalog-content\") pod \"redhat-marketplace-7t9xh\" (UID: \"2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542\") " pod="openshift-marketplace/redhat-marketplace-7t9xh" Jan 29 07:40:54 
crc kubenswrapper[4826]: I0129 07:40:54.762721 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mlpm2"] Jan 29 07:40:54 crc kubenswrapper[4826]: I0129 07:40:54.763767 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542-utilities\") pod \"redhat-marketplace-7t9xh\" (UID: \"2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542\") " pod="openshift-marketplace/redhat-marketplace-7t9xh" Jan 29 07:40:54 crc kubenswrapper[4826]: I0129 07:40:54.787154 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljdzh\" (UniqueName: \"kubernetes.io/projected/2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542-kube-api-access-ljdzh\") pod \"redhat-marketplace-7t9xh\" (UID: \"2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542\") " pod="openshift-marketplace/redhat-marketplace-7t9xh" Jan 29 07:40:54 crc kubenswrapper[4826]: I0129 07:40:54.862380 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gkf6\" (UniqueName: \"kubernetes.io/projected/48a0e9a8-0223-4290-a8c5-97fac7a4ba02-kube-api-access-7gkf6\") pod \"redhat-operators-mlpm2\" (UID: \"48a0e9a8-0223-4290-a8c5-97fac7a4ba02\") " pod="openshift-marketplace/redhat-operators-mlpm2" Jan 29 07:40:54 crc kubenswrapper[4826]: I0129 07:40:54.862516 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48a0e9a8-0223-4290-a8c5-97fac7a4ba02-catalog-content\") pod \"redhat-operators-mlpm2\" (UID: \"48a0e9a8-0223-4290-a8c5-97fac7a4ba02\") " pod="openshift-marketplace/redhat-operators-mlpm2" Jan 29 07:40:54 crc kubenswrapper[4826]: I0129 07:40:54.862551 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/48a0e9a8-0223-4290-a8c5-97fac7a4ba02-utilities\") pod \"redhat-operators-mlpm2\" (UID: \"48a0e9a8-0223-4290-a8c5-97fac7a4ba02\") " pod="openshift-marketplace/redhat-operators-mlpm2" Jan 29 07:40:54 crc kubenswrapper[4826]: I0129 07:40:54.919214 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7t9xh" Jan 29 07:40:54 crc kubenswrapper[4826]: I0129 07:40:54.964222 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gkf6\" (UniqueName: \"kubernetes.io/projected/48a0e9a8-0223-4290-a8c5-97fac7a4ba02-kube-api-access-7gkf6\") pod \"redhat-operators-mlpm2\" (UID: \"48a0e9a8-0223-4290-a8c5-97fac7a4ba02\") " pod="openshift-marketplace/redhat-operators-mlpm2" Jan 29 07:40:54 crc kubenswrapper[4826]: I0129 07:40:54.964364 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48a0e9a8-0223-4290-a8c5-97fac7a4ba02-catalog-content\") pod \"redhat-operators-mlpm2\" (UID: \"48a0e9a8-0223-4290-a8c5-97fac7a4ba02\") " pod="openshift-marketplace/redhat-operators-mlpm2" Jan 29 07:40:54 crc kubenswrapper[4826]: I0129 07:40:54.964418 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48a0e9a8-0223-4290-a8c5-97fac7a4ba02-utilities\") pod \"redhat-operators-mlpm2\" (UID: \"48a0e9a8-0223-4290-a8c5-97fac7a4ba02\") " pod="openshift-marketplace/redhat-operators-mlpm2" Jan 29 07:40:54 crc kubenswrapper[4826]: I0129 07:40:54.965007 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48a0e9a8-0223-4290-a8c5-97fac7a4ba02-utilities\") pod \"redhat-operators-mlpm2\" (UID: \"48a0e9a8-0223-4290-a8c5-97fac7a4ba02\") " pod="openshift-marketplace/redhat-operators-mlpm2" Jan 29 07:40:54 crc kubenswrapper[4826]: I0129 
07:40:54.965099 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48a0e9a8-0223-4290-a8c5-97fac7a4ba02-catalog-content\") pod \"redhat-operators-mlpm2\" (UID: \"48a0e9a8-0223-4290-a8c5-97fac7a4ba02\") " pod="openshift-marketplace/redhat-operators-mlpm2" Jan 29 07:40:54 crc kubenswrapper[4826]: I0129 07:40:54.981974 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gkf6\" (UniqueName: \"kubernetes.io/projected/48a0e9a8-0223-4290-a8c5-97fac7a4ba02-kube-api-access-7gkf6\") pod \"redhat-operators-mlpm2\" (UID: \"48a0e9a8-0223-4290-a8c5-97fac7a4ba02\") " pod="openshift-marketplace/redhat-operators-mlpm2" Jan 29 07:40:55 crc kubenswrapper[4826]: I0129 07:40:55.071667 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mlpm2" Jan 29 07:40:55 crc kubenswrapper[4826]: I0129 07:40:55.102018 4826 generic.go:334] "Generic (PLEG): container finished" podID="9bfeea0d-e54d-48f0-8758-a15707da76c9" containerID="735077b8946c4ee8f77dc2c2226b3c403db15b66c42264dd861b63fe7e6721f9" exitCode=0 Jan 29 07:40:55 crc kubenswrapper[4826]: I0129 07:40:55.102061 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-st5wh" event={"ID":"9bfeea0d-e54d-48f0-8758-a15707da76c9","Type":"ContainerDied","Data":"735077b8946c4ee8f77dc2c2226b3c403db15b66c42264dd861b63fe7e6721f9"} Jan 29 07:40:55 crc kubenswrapper[4826]: I0129 07:40:55.166955 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7t9xh"] Jan 29 07:40:55 crc kubenswrapper[4826]: W0129 07:40:55.182594 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ac70e0b_b8ef_4f3b_8cea_c4ff54dfd542.slice/crio-426bb950b5c7806cd351c5c4b2914f82f912544b8a965dacbf5bc51cfac3421e WatchSource:0}: 
Error finding container 426bb950b5c7806cd351c5c4b2914f82f912544b8a965dacbf5bc51cfac3421e: Status 404 returned error can't find the container with id 426bb950b5c7806cd351c5c4b2914f82f912544b8a965dacbf5bc51cfac3421e Jan 29 07:40:55 crc kubenswrapper[4826]: I0129 07:40:55.346966 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mlpm2"] Jan 29 07:40:55 crc kubenswrapper[4826]: W0129 07:40:55.430471 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48a0e9a8_0223_4290_a8c5_97fac7a4ba02.slice/crio-2d7b111e1075a18d0fe9ef0af21f17217d3b2a08fd43bf3b369a01355764d2c7 WatchSource:0}: Error finding container 2d7b111e1075a18d0fe9ef0af21f17217d3b2a08fd43bf3b369a01355764d2c7: Status 404 returned error can't find the container with id 2d7b111e1075a18d0fe9ef0af21f17217d3b2a08fd43bf3b369a01355764d2c7 Jan 29 07:40:56 crc kubenswrapper[4826]: I0129 07:40:56.110606 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-st5wh" event={"ID":"9bfeea0d-e54d-48f0-8758-a15707da76c9","Type":"ContainerStarted","Data":"861a0ada051cd78032044fbd70064a06624ff4623674424aeff589507cbb38c9"} Jan 29 07:40:56 crc kubenswrapper[4826]: I0129 07:40:56.113164 4826 generic.go:334] "Generic (PLEG): container finished" podID="48a0e9a8-0223-4290-a8c5-97fac7a4ba02" containerID="48e4cddf76de45dde2afbfa7e13b87484d8e42abfec61bd9286feec8323247ad" exitCode=0 Jan 29 07:40:56 crc kubenswrapper[4826]: I0129 07:40:56.113260 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mlpm2" event={"ID":"48a0e9a8-0223-4290-a8c5-97fac7a4ba02","Type":"ContainerDied","Data":"48e4cddf76de45dde2afbfa7e13b87484d8e42abfec61bd9286feec8323247ad"} Jan 29 07:40:56 crc kubenswrapper[4826]: I0129 07:40:56.113370 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mlpm2" 
event={"ID":"48a0e9a8-0223-4290-a8c5-97fac7a4ba02","Type":"ContainerStarted","Data":"2d7b111e1075a18d0fe9ef0af21f17217d3b2a08fd43bf3b369a01355764d2c7"} Jan 29 07:40:56 crc kubenswrapper[4826]: I0129 07:40:56.115547 4826 generic.go:334] "Generic (PLEG): container finished" podID="2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542" containerID="444c4a5d1783841adb9d7eb990f5069159a13a4d1378b37f55826b439e7ca58f" exitCode=0 Jan 29 07:40:56 crc kubenswrapper[4826]: I0129 07:40:56.115592 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7t9xh" event={"ID":"2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542","Type":"ContainerDied","Data":"444c4a5d1783841adb9d7eb990f5069159a13a4d1378b37f55826b439e7ca58f"} Jan 29 07:40:56 crc kubenswrapper[4826]: I0129 07:40:56.115623 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7t9xh" event={"ID":"2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542","Type":"ContainerStarted","Data":"426bb950b5c7806cd351c5c4b2914f82f912544b8a965dacbf5bc51cfac3421e"} Jan 29 07:40:56 crc kubenswrapper[4826]: I0129 07:40:56.130444 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-st5wh" podStartSLOduration=2.276582245 podStartE2EDuration="5.130426713s" podCreationTimestamp="2026-01-29 07:40:51 +0000 UTC" firstStartedPulling="2026-01-29 07:40:53.082001484 +0000 UTC m=+3436.943794583" lastFinishedPulling="2026-01-29 07:40:55.935845982 +0000 UTC m=+3439.797639051" observedRunningTime="2026-01-29 07:40:56.128489592 +0000 UTC m=+3439.990282661" watchObservedRunningTime="2026-01-29 07:40:56.130426713 +0000 UTC m=+3439.992219792" Jan 29 07:40:56 crc kubenswrapper[4826]: I0129 07:40:56.931150 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d8pwg"] Jan 29 07:40:56 crc kubenswrapper[4826]: I0129 07:40:56.931564 4826 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-d8pwg" podUID="56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9" containerName="registry-server" containerID="cri-o://70e09bd94e7046ea9a15f7f1c5761e4843436cda8192f54d36ddf0ce94003317" gracePeriod=2 Jan 29 07:40:57 crc kubenswrapper[4826]: I0129 07:40:57.126187 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mlpm2" event={"ID":"48a0e9a8-0223-4290-a8c5-97fac7a4ba02","Type":"ContainerStarted","Data":"8cb083743d4ba94895ea604432cad1d28bb5fb2c56766563cb4f05d89e3ff7e8"} Jan 29 07:40:57 crc kubenswrapper[4826]: I0129 07:40:57.136820 4826 generic.go:334] "Generic (PLEG): container finished" podID="2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542" containerID="a0f0119d9bdb4dd67677da88e6ea2b8836d4f8a0256188a9232cb8db1b6207d8" exitCode=0 Jan 29 07:40:57 crc kubenswrapper[4826]: I0129 07:40:57.136914 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7t9xh" event={"ID":"2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542","Type":"ContainerDied","Data":"a0f0119d9bdb4dd67677da88e6ea2b8836d4f8a0256188a9232cb8db1b6207d8"} Jan 29 07:40:57 crc kubenswrapper[4826]: I0129 07:40:57.139327 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d8pwg" event={"ID":"56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9","Type":"ContainerDied","Data":"70e09bd94e7046ea9a15f7f1c5761e4843436cda8192f54d36ddf0ce94003317"} Jan 29 07:40:57 crc kubenswrapper[4826]: I0129 07:40:57.139283 4826 generic.go:334] "Generic (PLEG): container finished" podID="56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9" containerID="70e09bd94e7046ea9a15f7f1c5761e4843436cda8192f54d36ddf0ce94003317" exitCode=0 Jan 29 07:40:57 crc kubenswrapper[4826]: I0129 07:40:57.322847 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d8pwg" Jan 29 07:40:57 crc kubenswrapper[4826]: I0129 07:40:57.345587 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9-catalog-content\") pod \"56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9\" (UID: \"56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9\") " Jan 29 07:40:57 crc kubenswrapper[4826]: I0129 07:40:57.345651 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9-utilities\") pod \"56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9\" (UID: \"56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9\") " Jan 29 07:40:57 crc kubenswrapper[4826]: I0129 07:40:57.345772 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2zdq\" (UniqueName: \"kubernetes.io/projected/56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9-kube-api-access-x2zdq\") pod \"56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9\" (UID: \"56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9\") " Jan 29 07:40:57 crc kubenswrapper[4826]: I0129 07:40:57.346612 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9-utilities" (OuterVolumeSpecName: "utilities") pod "56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9" (UID: "56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:40:57 crc kubenswrapper[4826]: I0129 07:40:57.352495 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9-kube-api-access-x2zdq" (OuterVolumeSpecName: "kube-api-access-x2zdq") pod "56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9" (UID: "56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9"). InnerVolumeSpecName "kube-api-access-x2zdq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:40:57 crc kubenswrapper[4826]: I0129 07:40:57.395844 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9" (UID: "56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:40:57 crc kubenswrapper[4826]: I0129 07:40:57.447166 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2zdq\" (UniqueName: \"kubernetes.io/projected/56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9-kube-api-access-x2zdq\") on node \"crc\" DevicePath \"\"" Jan 29 07:40:57 crc kubenswrapper[4826]: I0129 07:40:57.447531 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 07:40:57 crc kubenswrapper[4826]: I0129 07:40:57.447610 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 07:40:58 crc kubenswrapper[4826]: I0129 07:40:58.154999 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7t9xh" event={"ID":"2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542","Type":"ContainerStarted","Data":"78beeb1b531e7cdc4dfd955ba818173316f775df061c7f3f49ea751843eb4751"} Jan 29 07:40:58 crc kubenswrapper[4826]: I0129 07:40:58.163535 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d8pwg" Jan 29 07:40:58 crc kubenswrapper[4826]: I0129 07:40:58.163700 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d8pwg" event={"ID":"56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9","Type":"ContainerDied","Data":"626429b9e7619afc549f10bebd5e96aa657531fd68ad3c301290e46d02b39bb6"} Jan 29 07:40:58 crc kubenswrapper[4826]: I0129 07:40:58.163758 4826 scope.go:117] "RemoveContainer" containerID="70e09bd94e7046ea9a15f7f1c5761e4843436cda8192f54d36ddf0ce94003317" Jan 29 07:40:58 crc kubenswrapper[4826]: I0129 07:40:58.166125 4826 generic.go:334] "Generic (PLEG): container finished" podID="48a0e9a8-0223-4290-a8c5-97fac7a4ba02" containerID="8cb083743d4ba94895ea604432cad1d28bb5fb2c56766563cb4f05d89e3ff7e8" exitCode=0 Jan 29 07:40:58 crc kubenswrapper[4826]: I0129 07:40:58.166157 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mlpm2" event={"ID":"48a0e9a8-0223-4290-a8c5-97fac7a4ba02","Type":"ContainerDied","Data":"8cb083743d4ba94895ea604432cad1d28bb5fb2c56766563cb4f05d89e3ff7e8"} Jan 29 07:40:58 crc kubenswrapper[4826]: I0129 07:40:58.185256 4826 scope.go:117] "RemoveContainer" containerID="e3657b904bd48bff5f0425797af5965b7cfcafcfb34993fbf1db19792c0f470a" Jan 29 07:40:58 crc kubenswrapper[4826]: I0129 07:40:58.187781 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7t9xh" podStartSLOduration=2.588378108 podStartE2EDuration="4.187755179s" podCreationTimestamp="2026-01-29 07:40:54 +0000 UTC" firstStartedPulling="2026-01-29 07:40:56.11856069 +0000 UTC m=+3439.980353759" lastFinishedPulling="2026-01-29 07:40:57.717937761 +0000 UTC m=+3441.579730830" observedRunningTime="2026-01-29 07:40:58.182786547 +0000 UTC m=+3442.044579646" watchObservedRunningTime="2026-01-29 07:40:58.187755179 +0000 UTC m=+3442.049548248" Jan 29 07:40:58 crc 
kubenswrapper[4826]: I0129 07:40:58.217582 4826 scope.go:117] "RemoveContainer" containerID="a28d89294fcff0779daf2dc70888bc6d46ff0016ccfc13b3e84f815e2aefaa75" Jan 29 07:40:58 crc kubenswrapper[4826]: I0129 07:40:58.227710 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d8pwg"] Jan 29 07:40:58 crc kubenswrapper[4826]: I0129 07:40:58.255853 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-d8pwg"] Jan 29 07:40:58 crc kubenswrapper[4826]: I0129 07:40:58.816740 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9" path="/var/lib/kubelet/pods/56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9/volumes" Jan 29 07:40:59 crc kubenswrapper[4826]: I0129 07:40:59.194930 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mlpm2" event={"ID":"48a0e9a8-0223-4290-a8c5-97fac7a4ba02","Type":"ContainerStarted","Data":"a702a0006762d52042654c308daa7b890ec546f03bfba83c0dd3a4c1f1d155d9"} Jan 29 07:40:59 crc kubenswrapper[4826]: I0129 07:40:59.215393 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mlpm2" podStartSLOduration=2.806446997 podStartE2EDuration="5.215372094s" podCreationTimestamp="2026-01-29 07:40:54 +0000 UTC" firstStartedPulling="2026-01-29 07:40:56.114529003 +0000 UTC m=+3439.976322072" lastFinishedPulling="2026-01-29 07:40:58.5234541 +0000 UTC m=+3442.385247169" observedRunningTime="2026-01-29 07:40:59.214998925 +0000 UTC m=+3443.076792014" watchObservedRunningTime="2026-01-29 07:40:59.215372094 +0000 UTC m=+3443.077165163" Jan 29 07:41:01 crc kubenswrapper[4826]: I0129 07:41:01.902107 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-st5wh" Jan 29 07:41:01 crc kubenswrapper[4826]: I0129 07:41:01.902161 4826 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-st5wh" Jan 29 07:41:01 crc kubenswrapper[4826]: I0129 07:41:01.963689 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-st5wh" Jan 29 07:41:02 crc kubenswrapper[4826]: I0129 07:41:02.299544 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-st5wh" Jan 29 07:41:04 crc kubenswrapper[4826]: I0129 07:41:04.137093 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-st5wh"] Jan 29 07:41:04 crc kubenswrapper[4826]: I0129 07:41:04.256280 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-st5wh" podUID="9bfeea0d-e54d-48f0-8758-a15707da76c9" containerName="registry-server" containerID="cri-o://861a0ada051cd78032044fbd70064a06624ff4623674424aeff589507cbb38c9" gracePeriod=2 Jan 29 07:41:04 crc kubenswrapper[4826]: I0129 07:41:04.919441 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7t9xh" Jan 29 07:41:04 crc kubenswrapper[4826]: I0129 07:41:04.920489 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7t9xh" Jan 29 07:41:04 crc kubenswrapper[4826]: I0129 07:41:04.988469 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7t9xh" Jan 29 07:41:05 crc kubenswrapper[4826]: I0129 07:41:05.072857 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mlpm2" Jan 29 07:41:05 crc kubenswrapper[4826]: I0129 07:41:05.072976 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mlpm2" Jan 29 07:41:05 crc kubenswrapper[4826]: 
I0129 07:41:05.311910 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7t9xh" Jan 29 07:41:06 crc kubenswrapper[4826]: I0129 07:41:06.122136 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mlpm2" podUID="48a0e9a8-0223-4290-a8c5-97fac7a4ba02" containerName="registry-server" probeResult="failure" output=< Jan 29 07:41:06 crc kubenswrapper[4826]: timeout: failed to connect service ":50051" within 1s Jan 29 07:41:06 crc kubenswrapper[4826]: > Jan 29 07:41:06 crc kubenswrapper[4826]: I0129 07:41:06.281040 4826 generic.go:334] "Generic (PLEG): container finished" podID="9bfeea0d-e54d-48f0-8758-a15707da76c9" containerID="861a0ada051cd78032044fbd70064a06624ff4623674424aeff589507cbb38c9" exitCode=0 Jan 29 07:41:06 crc kubenswrapper[4826]: I0129 07:41:06.281195 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-st5wh" event={"ID":"9bfeea0d-e54d-48f0-8758-a15707da76c9","Type":"ContainerDied","Data":"861a0ada051cd78032044fbd70064a06624ff4623674424aeff589507cbb38c9"} Jan 29 07:41:06 crc kubenswrapper[4826]: I0129 07:41:06.509928 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-st5wh" Jan 29 07:41:06 crc kubenswrapper[4826]: I0129 07:41:06.680898 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vptrt\" (UniqueName: \"kubernetes.io/projected/9bfeea0d-e54d-48f0-8758-a15707da76c9-kube-api-access-vptrt\") pod \"9bfeea0d-e54d-48f0-8758-a15707da76c9\" (UID: \"9bfeea0d-e54d-48f0-8758-a15707da76c9\") " Jan 29 07:41:06 crc kubenswrapper[4826]: I0129 07:41:06.681018 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bfeea0d-e54d-48f0-8758-a15707da76c9-utilities\") pod \"9bfeea0d-e54d-48f0-8758-a15707da76c9\" (UID: \"9bfeea0d-e54d-48f0-8758-a15707da76c9\") " Jan 29 07:41:06 crc kubenswrapper[4826]: I0129 07:41:06.681049 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bfeea0d-e54d-48f0-8758-a15707da76c9-catalog-content\") pod \"9bfeea0d-e54d-48f0-8758-a15707da76c9\" (UID: \"9bfeea0d-e54d-48f0-8758-a15707da76c9\") " Jan 29 07:41:06 crc kubenswrapper[4826]: I0129 07:41:06.682799 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bfeea0d-e54d-48f0-8758-a15707da76c9-utilities" (OuterVolumeSpecName: "utilities") pod "9bfeea0d-e54d-48f0-8758-a15707da76c9" (UID: "9bfeea0d-e54d-48f0-8758-a15707da76c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:41:06 crc kubenswrapper[4826]: I0129 07:41:06.689762 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bfeea0d-e54d-48f0-8758-a15707da76c9-kube-api-access-vptrt" (OuterVolumeSpecName: "kube-api-access-vptrt") pod "9bfeea0d-e54d-48f0-8758-a15707da76c9" (UID: "9bfeea0d-e54d-48f0-8758-a15707da76c9"). InnerVolumeSpecName "kube-api-access-vptrt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:41:06 crc kubenswrapper[4826]: I0129 07:41:06.755531 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bfeea0d-e54d-48f0-8758-a15707da76c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9bfeea0d-e54d-48f0-8758-a15707da76c9" (UID: "9bfeea0d-e54d-48f0-8758-a15707da76c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:41:06 crc kubenswrapper[4826]: I0129 07:41:06.782948 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vptrt\" (UniqueName: \"kubernetes.io/projected/9bfeea0d-e54d-48f0-8758-a15707da76c9-kube-api-access-vptrt\") on node \"crc\" DevicePath \"\"" Jan 29 07:41:06 crc kubenswrapper[4826]: I0129 07:41:06.782985 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bfeea0d-e54d-48f0-8758-a15707da76c9-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 07:41:06 crc kubenswrapper[4826]: I0129 07:41:06.782994 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bfeea0d-e54d-48f0-8758-a15707da76c9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 07:41:07 crc kubenswrapper[4826]: I0129 07:41:07.294393 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-st5wh" event={"ID":"9bfeea0d-e54d-48f0-8758-a15707da76c9","Type":"ContainerDied","Data":"a6a8395980b62a3a9d2a4e25ed2e5c23d338df12cdab5e058dc3cbb4b453d241"} Jan 29 07:41:07 crc kubenswrapper[4826]: I0129 07:41:07.294425 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-st5wh" Jan 29 07:41:07 crc kubenswrapper[4826]: I0129 07:41:07.294530 4826 scope.go:117] "RemoveContainer" containerID="861a0ada051cd78032044fbd70064a06624ff4623674424aeff589507cbb38c9" Jan 29 07:41:07 crc kubenswrapper[4826]: I0129 07:41:07.321711 4826 scope.go:117] "RemoveContainer" containerID="735077b8946c4ee8f77dc2c2226b3c403db15b66c42264dd861b63fe7e6721f9" Jan 29 07:41:07 crc kubenswrapper[4826]: I0129 07:41:07.330861 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-st5wh"] Jan 29 07:41:07 crc kubenswrapper[4826]: I0129 07:41:07.344784 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-st5wh"] Jan 29 07:41:07 crc kubenswrapper[4826]: I0129 07:41:07.350490 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7t9xh"] Jan 29 07:41:07 crc kubenswrapper[4826]: I0129 07:41:07.351893 4826 scope.go:117] "RemoveContainer" containerID="5055f1f9f81ba88db41ad45c03653cfcf91b52b63ca7349cbe7036685e9082f3" Jan 29 07:41:08 crc kubenswrapper[4826]: I0129 07:41:08.307331 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7t9xh" podUID="2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542" containerName="registry-server" containerID="cri-o://78beeb1b531e7cdc4dfd955ba818173316f775df061c7f3f49ea751843eb4751" gracePeriod=2 Jan 29 07:41:08 crc kubenswrapper[4826]: I0129 07:41:08.708622 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7t9xh" Jan 29 07:41:08 crc kubenswrapper[4826]: I0129 07:41:08.811870 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542-utilities\") pod \"2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542\" (UID: \"2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542\") " Jan 29 07:41:08 crc kubenswrapper[4826]: I0129 07:41:08.811929 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljdzh\" (UniqueName: \"kubernetes.io/projected/2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542-kube-api-access-ljdzh\") pod \"2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542\" (UID: \"2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542\") " Jan 29 07:41:08 crc kubenswrapper[4826]: I0129 07:41:08.812019 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542-catalog-content\") pod \"2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542\" (UID: \"2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542\") " Jan 29 07:41:08 crc kubenswrapper[4826]: I0129 07:41:08.812662 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542-utilities" (OuterVolumeSpecName: "utilities") pod "2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542" (UID: "2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:41:08 crc kubenswrapper[4826]: I0129 07:41:08.817654 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bfeea0d-e54d-48f0-8758-a15707da76c9" path="/var/lib/kubelet/pods/9bfeea0d-e54d-48f0-8758-a15707da76c9/volumes" Jan 29 07:41:08 crc kubenswrapper[4826]: I0129 07:41:08.820501 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542-kube-api-access-ljdzh" (OuterVolumeSpecName: "kube-api-access-ljdzh") pod "2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542" (UID: "2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542"). InnerVolumeSpecName "kube-api-access-ljdzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:41:08 crc kubenswrapper[4826]: I0129 07:41:08.838822 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542" (UID: "2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:41:08 crc kubenswrapper[4826]: I0129 07:41:08.914036 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 07:41:08 crc kubenswrapper[4826]: I0129 07:41:08.914071 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljdzh\" (UniqueName: \"kubernetes.io/projected/2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542-kube-api-access-ljdzh\") on node \"crc\" DevicePath \"\"" Jan 29 07:41:08 crc kubenswrapper[4826]: I0129 07:41:08.914081 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 07:41:09 crc kubenswrapper[4826]: I0129 07:41:09.320388 4826 generic.go:334] "Generic (PLEG): container finished" podID="2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542" containerID="78beeb1b531e7cdc4dfd955ba818173316f775df061c7f3f49ea751843eb4751" exitCode=0 Jan 29 07:41:09 crc kubenswrapper[4826]: I0129 07:41:09.320482 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7t9xh" event={"ID":"2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542","Type":"ContainerDied","Data":"78beeb1b531e7cdc4dfd955ba818173316f775df061c7f3f49ea751843eb4751"} Jan 29 07:41:09 crc kubenswrapper[4826]: I0129 07:41:09.320521 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7t9xh" event={"ID":"2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542","Type":"ContainerDied","Data":"426bb950b5c7806cd351c5c4b2914f82f912544b8a965dacbf5bc51cfac3421e"} Jan 29 07:41:09 crc kubenswrapper[4826]: I0129 07:41:09.320556 4826 scope.go:117] "RemoveContainer" containerID="78beeb1b531e7cdc4dfd955ba818173316f775df061c7f3f49ea751843eb4751" Jan 29 07:41:09 crc kubenswrapper[4826]: I0129 
07:41:09.320566 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7t9xh" Jan 29 07:41:09 crc kubenswrapper[4826]: I0129 07:41:09.357991 4826 scope.go:117] "RemoveContainer" containerID="a0f0119d9bdb4dd67677da88e6ea2b8836d4f8a0256188a9232cb8db1b6207d8" Jan 29 07:41:09 crc kubenswrapper[4826]: I0129 07:41:09.366444 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7t9xh"] Jan 29 07:41:09 crc kubenswrapper[4826]: I0129 07:41:09.375491 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7t9xh"] Jan 29 07:41:09 crc kubenswrapper[4826]: I0129 07:41:09.383516 4826 scope.go:117] "RemoveContainer" containerID="444c4a5d1783841adb9d7eb990f5069159a13a4d1378b37f55826b439e7ca58f" Jan 29 07:41:09 crc kubenswrapper[4826]: I0129 07:41:09.401466 4826 scope.go:117] "RemoveContainer" containerID="78beeb1b531e7cdc4dfd955ba818173316f775df061c7f3f49ea751843eb4751" Jan 29 07:41:09 crc kubenswrapper[4826]: E0129 07:41:09.402155 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78beeb1b531e7cdc4dfd955ba818173316f775df061c7f3f49ea751843eb4751\": container with ID starting with 78beeb1b531e7cdc4dfd955ba818173316f775df061c7f3f49ea751843eb4751 not found: ID does not exist" containerID="78beeb1b531e7cdc4dfd955ba818173316f775df061c7f3f49ea751843eb4751" Jan 29 07:41:09 crc kubenswrapper[4826]: I0129 07:41:09.402218 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78beeb1b531e7cdc4dfd955ba818173316f775df061c7f3f49ea751843eb4751"} err="failed to get container status \"78beeb1b531e7cdc4dfd955ba818173316f775df061c7f3f49ea751843eb4751\": rpc error: code = NotFound desc = could not find container \"78beeb1b531e7cdc4dfd955ba818173316f775df061c7f3f49ea751843eb4751\": container with ID starting with 
78beeb1b531e7cdc4dfd955ba818173316f775df061c7f3f49ea751843eb4751 not found: ID does not exist" Jan 29 07:41:09 crc kubenswrapper[4826]: I0129 07:41:09.402270 4826 scope.go:117] "RemoveContainer" containerID="a0f0119d9bdb4dd67677da88e6ea2b8836d4f8a0256188a9232cb8db1b6207d8" Jan 29 07:41:09 crc kubenswrapper[4826]: E0129 07:41:09.402843 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0f0119d9bdb4dd67677da88e6ea2b8836d4f8a0256188a9232cb8db1b6207d8\": container with ID starting with a0f0119d9bdb4dd67677da88e6ea2b8836d4f8a0256188a9232cb8db1b6207d8 not found: ID does not exist" containerID="a0f0119d9bdb4dd67677da88e6ea2b8836d4f8a0256188a9232cb8db1b6207d8" Jan 29 07:41:09 crc kubenswrapper[4826]: I0129 07:41:09.402878 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0f0119d9bdb4dd67677da88e6ea2b8836d4f8a0256188a9232cb8db1b6207d8"} err="failed to get container status \"a0f0119d9bdb4dd67677da88e6ea2b8836d4f8a0256188a9232cb8db1b6207d8\": rpc error: code = NotFound desc = could not find container \"a0f0119d9bdb4dd67677da88e6ea2b8836d4f8a0256188a9232cb8db1b6207d8\": container with ID starting with a0f0119d9bdb4dd67677da88e6ea2b8836d4f8a0256188a9232cb8db1b6207d8 not found: ID does not exist" Jan 29 07:41:09 crc kubenswrapper[4826]: I0129 07:41:09.402897 4826 scope.go:117] "RemoveContainer" containerID="444c4a5d1783841adb9d7eb990f5069159a13a4d1378b37f55826b439e7ca58f" Jan 29 07:41:09 crc kubenswrapper[4826]: E0129 07:41:09.403455 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"444c4a5d1783841adb9d7eb990f5069159a13a4d1378b37f55826b439e7ca58f\": container with ID starting with 444c4a5d1783841adb9d7eb990f5069159a13a4d1378b37f55826b439e7ca58f not found: ID does not exist" containerID="444c4a5d1783841adb9d7eb990f5069159a13a4d1378b37f55826b439e7ca58f" Jan 29 07:41:09 crc 
kubenswrapper[4826]: I0129 07:41:09.403488 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"444c4a5d1783841adb9d7eb990f5069159a13a4d1378b37f55826b439e7ca58f"} err="failed to get container status \"444c4a5d1783841adb9d7eb990f5069159a13a4d1378b37f55826b439e7ca58f\": rpc error: code = NotFound desc = could not find container \"444c4a5d1783841adb9d7eb990f5069159a13a4d1378b37f55826b439e7ca58f\": container with ID starting with 444c4a5d1783841adb9d7eb990f5069159a13a4d1378b37f55826b439e7ca58f not found: ID does not exist" Jan 29 07:41:10 crc kubenswrapper[4826]: I0129 07:41:10.831948 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542" path="/var/lib/kubelet/pods/2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542/volumes" Jan 29 07:41:15 crc kubenswrapper[4826]: I0129 07:41:15.127144 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mlpm2" Jan 29 07:41:15 crc kubenswrapper[4826]: I0129 07:41:15.190193 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mlpm2" Jan 29 07:41:15 crc kubenswrapper[4826]: I0129 07:41:15.368755 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mlpm2"] Jan 29 07:41:16 crc kubenswrapper[4826]: I0129 07:41:16.377892 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mlpm2" podUID="48a0e9a8-0223-4290-a8c5-97fac7a4ba02" containerName="registry-server" containerID="cri-o://a702a0006762d52042654c308daa7b890ec546f03bfba83c0dd3a4c1f1d155d9" gracePeriod=2 Jan 29 07:41:16 crc kubenswrapper[4826]: I0129 07:41:16.819268 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mlpm2" Jan 29 07:41:16 crc kubenswrapper[4826]: I0129 07:41:16.848147 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48a0e9a8-0223-4290-a8c5-97fac7a4ba02-catalog-content\") pod \"48a0e9a8-0223-4290-a8c5-97fac7a4ba02\" (UID: \"48a0e9a8-0223-4290-a8c5-97fac7a4ba02\") " Jan 29 07:41:16 crc kubenswrapper[4826]: I0129 07:41:16.848283 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gkf6\" (UniqueName: \"kubernetes.io/projected/48a0e9a8-0223-4290-a8c5-97fac7a4ba02-kube-api-access-7gkf6\") pod \"48a0e9a8-0223-4290-a8c5-97fac7a4ba02\" (UID: \"48a0e9a8-0223-4290-a8c5-97fac7a4ba02\") " Jan 29 07:41:16 crc kubenswrapper[4826]: I0129 07:41:16.848431 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48a0e9a8-0223-4290-a8c5-97fac7a4ba02-utilities\") pod \"48a0e9a8-0223-4290-a8c5-97fac7a4ba02\" (UID: \"48a0e9a8-0223-4290-a8c5-97fac7a4ba02\") " Jan 29 07:41:16 crc kubenswrapper[4826]: I0129 07:41:16.849557 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48a0e9a8-0223-4290-a8c5-97fac7a4ba02-utilities" (OuterVolumeSpecName: "utilities") pod "48a0e9a8-0223-4290-a8c5-97fac7a4ba02" (UID: "48a0e9a8-0223-4290-a8c5-97fac7a4ba02"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:41:16 crc kubenswrapper[4826]: I0129 07:41:16.849794 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48a0e9a8-0223-4290-a8c5-97fac7a4ba02-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 07:41:16 crc kubenswrapper[4826]: I0129 07:41:16.887229 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48a0e9a8-0223-4290-a8c5-97fac7a4ba02-kube-api-access-7gkf6" (OuterVolumeSpecName: "kube-api-access-7gkf6") pod "48a0e9a8-0223-4290-a8c5-97fac7a4ba02" (UID: "48a0e9a8-0223-4290-a8c5-97fac7a4ba02"). InnerVolumeSpecName "kube-api-access-7gkf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:41:16 crc kubenswrapper[4826]: I0129 07:41:16.950692 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gkf6\" (UniqueName: \"kubernetes.io/projected/48a0e9a8-0223-4290-a8c5-97fac7a4ba02-kube-api-access-7gkf6\") on node \"crc\" DevicePath \"\"" Jan 29 07:41:16 crc kubenswrapper[4826]: I0129 07:41:16.989427 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48a0e9a8-0223-4290-a8c5-97fac7a4ba02-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48a0e9a8-0223-4290-a8c5-97fac7a4ba02" (UID: "48a0e9a8-0223-4290-a8c5-97fac7a4ba02"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:41:17 crc kubenswrapper[4826]: I0129 07:41:17.052939 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48a0e9a8-0223-4290-a8c5-97fac7a4ba02-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 07:41:17 crc kubenswrapper[4826]: I0129 07:41:17.390496 4826 generic.go:334] "Generic (PLEG): container finished" podID="48a0e9a8-0223-4290-a8c5-97fac7a4ba02" containerID="a702a0006762d52042654c308daa7b890ec546f03bfba83c0dd3a4c1f1d155d9" exitCode=0 Jan 29 07:41:17 crc kubenswrapper[4826]: I0129 07:41:17.390599 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mlpm2" Jan 29 07:41:17 crc kubenswrapper[4826]: I0129 07:41:17.391527 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mlpm2" event={"ID":"48a0e9a8-0223-4290-a8c5-97fac7a4ba02","Type":"ContainerDied","Data":"a702a0006762d52042654c308daa7b890ec546f03bfba83c0dd3a4c1f1d155d9"} Jan 29 07:41:17 crc kubenswrapper[4826]: I0129 07:41:17.391728 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mlpm2" event={"ID":"48a0e9a8-0223-4290-a8c5-97fac7a4ba02","Type":"ContainerDied","Data":"2d7b111e1075a18d0fe9ef0af21f17217d3b2a08fd43bf3b369a01355764d2c7"} Jan 29 07:41:17 crc kubenswrapper[4826]: I0129 07:41:17.391766 4826 scope.go:117] "RemoveContainer" containerID="a702a0006762d52042654c308daa7b890ec546f03bfba83c0dd3a4c1f1d155d9" Jan 29 07:41:17 crc kubenswrapper[4826]: I0129 07:41:17.424431 4826 scope.go:117] "RemoveContainer" containerID="8cb083743d4ba94895ea604432cad1d28bb5fb2c56766563cb4f05d89e3ff7e8" Jan 29 07:41:17 crc kubenswrapper[4826]: I0129 07:41:17.428929 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mlpm2"] Jan 29 07:41:17 crc kubenswrapper[4826]: I0129 
07:41:17.435905 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mlpm2"] Jan 29 07:41:17 crc kubenswrapper[4826]: I0129 07:41:17.472490 4826 scope.go:117] "RemoveContainer" containerID="48e4cddf76de45dde2afbfa7e13b87484d8e42abfec61bd9286feec8323247ad" Jan 29 07:41:17 crc kubenswrapper[4826]: I0129 07:41:17.493277 4826 scope.go:117] "RemoveContainer" containerID="a702a0006762d52042654c308daa7b890ec546f03bfba83c0dd3a4c1f1d155d9" Jan 29 07:41:17 crc kubenswrapper[4826]: E0129 07:41:17.493803 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a702a0006762d52042654c308daa7b890ec546f03bfba83c0dd3a4c1f1d155d9\": container with ID starting with a702a0006762d52042654c308daa7b890ec546f03bfba83c0dd3a4c1f1d155d9 not found: ID does not exist" containerID="a702a0006762d52042654c308daa7b890ec546f03bfba83c0dd3a4c1f1d155d9" Jan 29 07:41:17 crc kubenswrapper[4826]: I0129 07:41:17.493844 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a702a0006762d52042654c308daa7b890ec546f03bfba83c0dd3a4c1f1d155d9"} err="failed to get container status \"a702a0006762d52042654c308daa7b890ec546f03bfba83c0dd3a4c1f1d155d9\": rpc error: code = NotFound desc = could not find container \"a702a0006762d52042654c308daa7b890ec546f03bfba83c0dd3a4c1f1d155d9\": container with ID starting with a702a0006762d52042654c308daa7b890ec546f03bfba83c0dd3a4c1f1d155d9 not found: ID does not exist" Jan 29 07:41:17 crc kubenswrapper[4826]: I0129 07:41:17.493870 4826 scope.go:117] "RemoveContainer" containerID="8cb083743d4ba94895ea604432cad1d28bb5fb2c56766563cb4f05d89e3ff7e8" Jan 29 07:41:17 crc kubenswrapper[4826]: E0129 07:41:17.494719 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cb083743d4ba94895ea604432cad1d28bb5fb2c56766563cb4f05d89e3ff7e8\": container with ID 
starting with 8cb083743d4ba94895ea604432cad1d28bb5fb2c56766563cb4f05d89e3ff7e8 not found: ID does not exist" containerID="8cb083743d4ba94895ea604432cad1d28bb5fb2c56766563cb4f05d89e3ff7e8" Jan 29 07:41:17 crc kubenswrapper[4826]: I0129 07:41:17.494761 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cb083743d4ba94895ea604432cad1d28bb5fb2c56766563cb4f05d89e3ff7e8"} err="failed to get container status \"8cb083743d4ba94895ea604432cad1d28bb5fb2c56766563cb4f05d89e3ff7e8\": rpc error: code = NotFound desc = could not find container \"8cb083743d4ba94895ea604432cad1d28bb5fb2c56766563cb4f05d89e3ff7e8\": container with ID starting with 8cb083743d4ba94895ea604432cad1d28bb5fb2c56766563cb4f05d89e3ff7e8 not found: ID does not exist" Jan 29 07:41:17 crc kubenswrapper[4826]: I0129 07:41:17.494781 4826 scope.go:117] "RemoveContainer" containerID="48e4cddf76de45dde2afbfa7e13b87484d8e42abfec61bd9286feec8323247ad" Jan 29 07:41:17 crc kubenswrapper[4826]: E0129 07:41:17.495343 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48e4cddf76de45dde2afbfa7e13b87484d8e42abfec61bd9286feec8323247ad\": container with ID starting with 48e4cddf76de45dde2afbfa7e13b87484d8e42abfec61bd9286feec8323247ad not found: ID does not exist" containerID="48e4cddf76de45dde2afbfa7e13b87484d8e42abfec61bd9286feec8323247ad" Jan 29 07:41:17 crc kubenswrapper[4826]: I0129 07:41:17.495441 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48e4cddf76de45dde2afbfa7e13b87484d8e42abfec61bd9286feec8323247ad"} err="failed to get container status \"48e4cddf76de45dde2afbfa7e13b87484d8e42abfec61bd9286feec8323247ad\": rpc error: code = NotFound desc = could not find container \"48e4cddf76de45dde2afbfa7e13b87484d8e42abfec61bd9286feec8323247ad\": container with ID starting with 48e4cddf76de45dde2afbfa7e13b87484d8e42abfec61bd9286feec8323247ad not found: 
ID does not exist" Jan 29 07:41:18 crc kubenswrapper[4826]: I0129 07:41:18.819523 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48a0e9a8-0223-4290-a8c5-97fac7a4ba02" path="/var/lib/kubelet/pods/48a0e9a8-0223-4290-a8c5-97fac7a4ba02/volumes" Jan 29 07:41:35 crc kubenswrapper[4826]: I0129 07:41:35.656056 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:41:35 crc kubenswrapper[4826]: I0129 07:41:35.656769 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:42:05 crc kubenswrapper[4826]: I0129 07:42:05.656735 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:42:05 crc kubenswrapper[4826]: I0129 07:42:05.657221 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:42:35 crc kubenswrapper[4826]: I0129 07:42:35.656952 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:42:35 crc kubenswrapper[4826]: I0129 07:42:35.659937 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:42:35 crc kubenswrapper[4826]: I0129 07:42:35.660206 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" Jan 29 07:42:35 crc kubenswrapper[4826]: I0129 07:42:35.661277 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6367fbfc28c163bfbd82327957921632863c9214b9f6e813fb3474721055374b"} pod="openshift-machine-config-operator/machine-config-daemon-llzmh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 07:42:35 crc kubenswrapper[4826]: I0129 07:42:35.661629 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" containerID="cri-o://6367fbfc28c163bfbd82327957921632863c9214b9f6e813fb3474721055374b" gracePeriod=600 Jan 29 07:42:36 crc kubenswrapper[4826]: I0129 07:42:36.127097 4826 generic.go:334] "Generic (PLEG): container finished" podID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerID="6367fbfc28c163bfbd82327957921632863c9214b9f6e813fb3474721055374b" exitCode=0 Jan 29 07:42:36 crc kubenswrapper[4826]: I0129 07:42:36.127185 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" 
event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerDied","Data":"6367fbfc28c163bfbd82327957921632863c9214b9f6e813fb3474721055374b"} Jan 29 07:42:36 crc kubenswrapper[4826]: I0129 07:42:36.127524 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerStarted","Data":"f18573a32e2ff34eb8f189624513c77e9ac0986e9e09ebc8ad008507d09dbf71"} Jan 29 07:42:36 crc kubenswrapper[4826]: I0129 07:42:36.127556 4826 scope.go:117] "RemoveContainer" containerID="d2fd7f64ecc62fc03e15853e261af4caebe3db2298a46a4c012da43d34be2d88" Jan 29 07:45:00 crc kubenswrapper[4826]: I0129 07:45:00.192641 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494545-wk5wl"] Jan 29 07:45:00 crc kubenswrapper[4826]: E0129 07:45:00.193971 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9" containerName="extract-utilities" Jan 29 07:45:00 crc kubenswrapper[4826]: I0129 07:45:00.193998 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9" containerName="extract-utilities" Jan 29 07:45:00 crc kubenswrapper[4826]: E0129 07:45:00.194017 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48a0e9a8-0223-4290-a8c5-97fac7a4ba02" containerName="extract-content" Jan 29 07:45:00 crc kubenswrapper[4826]: I0129 07:45:00.194028 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="48a0e9a8-0223-4290-a8c5-97fac7a4ba02" containerName="extract-content" Jan 29 07:45:00 crc kubenswrapper[4826]: E0129 07:45:00.194049 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bfeea0d-e54d-48f0-8758-a15707da76c9" containerName="extract-content" Jan 29 07:45:00 crc kubenswrapper[4826]: I0129 07:45:00.194063 4826 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9bfeea0d-e54d-48f0-8758-a15707da76c9" containerName="extract-content" Jan 29 07:45:00 crc kubenswrapper[4826]: E0129 07:45:00.194076 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9" containerName="registry-server" Jan 29 07:45:00 crc kubenswrapper[4826]: I0129 07:45:00.194087 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9" containerName="registry-server" Jan 29 07:45:00 crc kubenswrapper[4826]: E0129 07:45:00.194102 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542" containerName="extract-content" Jan 29 07:45:00 crc kubenswrapper[4826]: I0129 07:45:00.194127 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542" containerName="extract-content" Jan 29 07:45:00 crc kubenswrapper[4826]: E0129 07:45:00.194146 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48a0e9a8-0223-4290-a8c5-97fac7a4ba02" containerName="extract-utilities" Jan 29 07:45:00 crc kubenswrapper[4826]: I0129 07:45:00.194156 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="48a0e9a8-0223-4290-a8c5-97fac7a4ba02" containerName="extract-utilities" Jan 29 07:45:00 crc kubenswrapper[4826]: E0129 07:45:00.194171 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bfeea0d-e54d-48f0-8758-a15707da76c9" containerName="extract-utilities" Jan 29 07:45:00 crc kubenswrapper[4826]: I0129 07:45:00.194180 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bfeea0d-e54d-48f0-8758-a15707da76c9" containerName="extract-utilities" Jan 29 07:45:00 crc kubenswrapper[4826]: E0129 07:45:00.194201 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bfeea0d-e54d-48f0-8758-a15707da76c9" containerName="registry-server" Jan 29 07:45:00 crc kubenswrapper[4826]: I0129 07:45:00.194211 4826 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9bfeea0d-e54d-48f0-8758-a15707da76c9" containerName="registry-server" Jan 29 07:45:00 crc kubenswrapper[4826]: E0129 07:45:00.194231 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542" containerName="registry-server" Jan 29 07:45:00 crc kubenswrapper[4826]: I0129 07:45:00.194241 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542" containerName="registry-server" Jan 29 07:45:00 crc kubenswrapper[4826]: E0129 07:45:00.194258 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542" containerName="extract-utilities" Jan 29 07:45:00 crc kubenswrapper[4826]: I0129 07:45:00.194269 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542" containerName="extract-utilities" Jan 29 07:45:00 crc kubenswrapper[4826]: E0129 07:45:00.194289 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9" containerName="extract-content" Jan 29 07:45:00 crc kubenswrapper[4826]: I0129 07:45:00.194320 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9" containerName="extract-content" Jan 29 07:45:00 crc kubenswrapper[4826]: E0129 07:45:00.194334 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48a0e9a8-0223-4290-a8c5-97fac7a4ba02" containerName="registry-server" Jan 29 07:45:00 crc kubenswrapper[4826]: I0129 07:45:00.194344 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="48a0e9a8-0223-4290-a8c5-97fac7a4ba02" containerName="registry-server" Jan 29 07:45:00 crc kubenswrapper[4826]: I0129 07:45:00.194554 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ac70e0b-b8ef-4f3b-8cea-c4ff54dfd542" containerName="registry-server" Jan 29 07:45:00 crc kubenswrapper[4826]: I0129 07:45:00.194589 4826 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="56ebb7c3-4957-4aae-8a00-b14e8a8d5eb9" containerName="registry-server" Jan 29 07:45:00 crc kubenswrapper[4826]: I0129 07:45:00.194604 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bfeea0d-e54d-48f0-8758-a15707da76c9" containerName="registry-server" Jan 29 07:45:00 crc kubenswrapper[4826]: I0129 07:45:00.194631 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="48a0e9a8-0223-4290-a8c5-97fac7a4ba02" containerName="registry-server" Jan 29 07:45:00 crc kubenswrapper[4826]: I0129 07:45:00.195347 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494545-wk5wl" Jan 29 07:45:00 crc kubenswrapper[4826]: I0129 07:45:00.200622 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494545-wk5wl"] Jan 29 07:45:00 crc kubenswrapper[4826]: I0129 07:45:00.202818 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 07:45:00 crc kubenswrapper[4826]: I0129 07:45:00.203032 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 07:45:00 crc kubenswrapper[4826]: I0129 07:45:00.337350 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a9d9073-69e4-4d4b-92cc-da505d00c9f8-config-volume\") pod \"collect-profiles-29494545-wk5wl\" (UID: \"7a9d9073-69e4-4d4b-92cc-da505d00c9f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494545-wk5wl" Jan 29 07:45:00 crc kubenswrapper[4826]: I0129 07:45:00.337405 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/7a9d9073-69e4-4d4b-92cc-da505d00c9f8-secret-volume\") pod \"collect-profiles-29494545-wk5wl\" (UID: \"7a9d9073-69e4-4d4b-92cc-da505d00c9f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494545-wk5wl" Jan 29 07:45:00 crc kubenswrapper[4826]: I0129 07:45:00.337492 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l82lf\" (UniqueName: \"kubernetes.io/projected/7a9d9073-69e4-4d4b-92cc-da505d00c9f8-kube-api-access-l82lf\") pod \"collect-profiles-29494545-wk5wl\" (UID: \"7a9d9073-69e4-4d4b-92cc-da505d00c9f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494545-wk5wl" Jan 29 07:45:00 crc kubenswrapper[4826]: I0129 07:45:00.438461 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a9d9073-69e4-4d4b-92cc-da505d00c9f8-config-volume\") pod \"collect-profiles-29494545-wk5wl\" (UID: \"7a9d9073-69e4-4d4b-92cc-da505d00c9f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494545-wk5wl" Jan 29 07:45:00 crc kubenswrapper[4826]: I0129 07:45:00.438501 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a9d9073-69e4-4d4b-92cc-da505d00c9f8-secret-volume\") pod \"collect-profiles-29494545-wk5wl\" (UID: \"7a9d9073-69e4-4d4b-92cc-da505d00c9f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494545-wk5wl" Jan 29 07:45:00 crc kubenswrapper[4826]: I0129 07:45:00.438569 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l82lf\" (UniqueName: \"kubernetes.io/projected/7a9d9073-69e4-4d4b-92cc-da505d00c9f8-kube-api-access-l82lf\") pod \"collect-profiles-29494545-wk5wl\" (UID: \"7a9d9073-69e4-4d4b-92cc-da505d00c9f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494545-wk5wl" Jan 29 07:45:00 crc 
kubenswrapper[4826]: I0129 07:45:00.439726 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a9d9073-69e4-4d4b-92cc-da505d00c9f8-config-volume\") pod \"collect-profiles-29494545-wk5wl\" (UID: \"7a9d9073-69e4-4d4b-92cc-da505d00c9f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494545-wk5wl" Jan 29 07:45:00 crc kubenswrapper[4826]: I0129 07:45:00.450079 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a9d9073-69e4-4d4b-92cc-da505d00c9f8-secret-volume\") pod \"collect-profiles-29494545-wk5wl\" (UID: \"7a9d9073-69e4-4d4b-92cc-da505d00c9f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494545-wk5wl" Jan 29 07:45:00 crc kubenswrapper[4826]: I0129 07:45:00.468902 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l82lf\" (UniqueName: \"kubernetes.io/projected/7a9d9073-69e4-4d4b-92cc-da505d00c9f8-kube-api-access-l82lf\") pod \"collect-profiles-29494545-wk5wl\" (UID: \"7a9d9073-69e4-4d4b-92cc-da505d00c9f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494545-wk5wl" Jan 29 07:45:00 crc kubenswrapper[4826]: I0129 07:45:00.533015 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494545-wk5wl" Jan 29 07:45:00 crc kubenswrapper[4826]: I0129 07:45:00.986820 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494545-wk5wl"] Jan 29 07:45:01 crc kubenswrapper[4826]: I0129 07:45:01.432091 4826 generic.go:334] "Generic (PLEG): container finished" podID="7a9d9073-69e4-4d4b-92cc-da505d00c9f8" containerID="0dfe868933abf2f50e60ac2d60fd97f504e8848de44102585ebb7bd454da9c19" exitCode=0 Jan 29 07:45:01 crc kubenswrapper[4826]: I0129 07:45:01.432245 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494545-wk5wl" event={"ID":"7a9d9073-69e4-4d4b-92cc-da505d00c9f8","Type":"ContainerDied","Data":"0dfe868933abf2f50e60ac2d60fd97f504e8848de44102585ebb7bd454da9c19"} Jan 29 07:45:01 crc kubenswrapper[4826]: I0129 07:45:01.432397 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494545-wk5wl" event={"ID":"7a9d9073-69e4-4d4b-92cc-da505d00c9f8","Type":"ContainerStarted","Data":"81062c0496b290deee8ff9c6c720c5c60d7fd795a6464d0e03edf39f11bfe3ea"} Jan 29 07:45:02 crc kubenswrapper[4826]: I0129 07:45:02.767815 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494545-wk5wl" Jan 29 07:45:02 crc kubenswrapper[4826]: I0129 07:45:02.773217 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a9d9073-69e4-4d4b-92cc-da505d00c9f8-config-volume\") pod \"7a9d9073-69e4-4d4b-92cc-da505d00c9f8\" (UID: \"7a9d9073-69e4-4d4b-92cc-da505d00c9f8\") " Jan 29 07:45:02 crc kubenswrapper[4826]: I0129 07:45:02.773273 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a9d9073-69e4-4d4b-92cc-da505d00c9f8-secret-volume\") pod \"7a9d9073-69e4-4d4b-92cc-da505d00c9f8\" (UID: \"7a9d9073-69e4-4d4b-92cc-da505d00c9f8\") " Jan 29 07:45:02 crc kubenswrapper[4826]: I0129 07:45:02.773348 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l82lf\" (UniqueName: \"kubernetes.io/projected/7a9d9073-69e4-4d4b-92cc-da505d00c9f8-kube-api-access-l82lf\") pod \"7a9d9073-69e4-4d4b-92cc-da505d00c9f8\" (UID: \"7a9d9073-69e4-4d4b-92cc-da505d00c9f8\") " Jan 29 07:45:02 crc kubenswrapper[4826]: I0129 07:45:02.774272 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a9d9073-69e4-4d4b-92cc-da505d00c9f8-config-volume" (OuterVolumeSpecName: "config-volume") pod "7a9d9073-69e4-4d4b-92cc-da505d00c9f8" (UID: "7a9d9073-69e4-4d4b-92cc-da505d00c9f8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:45:02 crc kubenswrapper[4826]: I0129 07:45:02.778863 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a9d9073-69e4-4d4b-92cc-da505d00c9f8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7a9d9073-69e4-4d4b-92cc-da505d00c9f8" (UID: "7a9d9073-69e4-4d4b-92cc-da505d00c9f8"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 07:45:02 crc kubenswrapper[4826]: I0129 07:45:02.779257 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a9d9073-69e4-4d4b-92cc-da505d00c9f8-kube-api-access-l82lf" (OuterVolumeSpecName: "kube-api-access-l82lf") pod "7a9d9073-69e4-4d4b-92cc-da505d00c9f8" (UID: "7a9d9073-69e4-4d4b-92cc-da505d00c9f8"). InnerVolumeSpecName "kube-api-access-l82lf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:45:02 crc kubenswrapper[4826]: I0129 07:45:02.874406 4826 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a9d9073-69e4-4d4b-92cc-da505d00c9f8-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 07:45:02 crc kubenswrapper[4826]: I0129 07:45:02.874676 4826 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a9d9073-69e4-4d4b-92cc-da505d00c9f8-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 07:45:02 crc kubenswrapper[4826]: I0129 07:45:02.874773 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l82lf\" (UniqueName: \"kubernetes.io/projected/7a9d9073-69e4-4d4b-92cc-da505d00c9f8-kube-api-access-l82lf\") on node \"crc\" DevicePath \"\"" Jan 29 07:45:03 crc kubenswrapper[4826]: I0129 07:45:03.468760 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494545-wk5wl" event={"ID":"7a9d9073-69e4-4d4b-92cc-da505d00c9f8","Type":"ContainerDied","Data":"81062c0496b290deee8ff9c6c720c5c60d7fd795a6464d0e03edf39f11bfe3ea"} Jan 29 07:45:03 crc kubenswrapper[4826]: I0129 07:45:03.469039 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81062c0496b290deee8ff9c6c720c5c60d7fd795a6464d0e03edf39f11bfe3ea" Jan 29 07:45:03 crc kubenswrapper[4826]: I0129 07:45:03.469105 4826 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494545-wk5wl" Jan 29 07:45:03 crc kubenswrapper[4826]: I0129 07:45:03.848603 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494500-k6wff"] Jan 29 07:45:03 crc kubenswrapper[4826]: I0129 07:45:03.855729 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494500-k6wff"] Jan 29 07:45:04 crc kubenswrapper[4826]: I0129 07:45:04.821241 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cb515ad-6a84-4938-908a-6dc478741980" path="/var/lib/kubelet/pods/5cb515ad-6a84-4938-908a-6dc478741980/volumes" Jan 29 07:45:05 crc kubenswrapper[4826]: I0129 07:45:05.656573 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:45:05 crc kubenswrapper[4826]: I0129 07:45:05.656650 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:45:35 crc kubenswrapper[4826]: I0129 07:45:35.657076 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:45:35 crc kubenswrapper[4826]: I0129 07:45:35.657844 4826 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:46:02 crc kubenswrapper[4826]: I0129 07:46:02.319408 4826 scope.go:117] "RemoveContainer" containerID="bace1320263e2e5532c3e0a676af294fb7a50246d49111ab9c57c2bae4e29017" Jan 29 07:46:05 crc kubenswrapper[4826]: I0129 07:46:05.656428 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 07:46:05 crc kubenswrapper[4826]: I0129 07:46:05.656985 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 07:46:05 crc kubenswrapper[4826]: I0129 07:46:05.657036 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" Jan 29 07:46:05 crc kubenswrapper[4826]: I0129 07:46:05.657664 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f18573a32e2ff34eb8f189624513c77e9ac0986e9e09ebc8ad008507d09dbf71"} pod="openshift-machine-config-operator/machine-config-daemon-llzmh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 07:46:05 crc kubenswrapper[4826]: I0129 07:46:05.657709 4826 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" containerID="cri-o://f18573a32e2ff34eb8f189624513c77e9ac0986e9e09ebc8ad008507d09dbf71" gracePeriod=600 Jan 29 07:46:05 crc kubenswrapper[4826]: E0129 07:46:05.787393 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:46:05 crc kubenswrapper[4826]: I0129 07:46:05.995175 4826 generic.go:334] "Generic (PLEG): container finished" podID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerID="f18573a32e2ff34eb8f189624513c77e9ac0986e9e09ebc8ad008507d09dbf71" exitCode=0 Jan 29 07:46:05 crc kubenswrapper[4826]: I0129 07:46:05.995794 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerDied","Data":"f18573a32e2ff34eb8f189624513c77e9ac0986e9e09ebc8ad008507d09dbf71"} Jan 29 07:46:05 crc kubenswrapper[4826]: I0129 07:46:05.995874 4826 scope.go:117] "RemoveContainer" containerID="6367fbfc28c163bfbd82327957921632863c9214b9f6e813fb3474721055374b" Jan 29 07:46:05 crc kubenswrapper[4826]: I0129 07:46:05.996774 4826 scope.go:117] "RemoveContainer" containerID="f18573a32e2ff34eb8f189624513c77e9ac0986e9e09ebc8ad008507d09dbf71" Jan 29 07:46:05 crc kubenswrapper[4826]: E0129 07:46:05.997166 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:46:20 crc kubenswrapper[4826]: I0129 07:46:20.809337 4826 scope.go:117] "RemoveContainer" containerID="f18573a32e2ff34eb8f189624513c77e9ac0986e9e09ebc8ad008507d09dbf71" Jan 29 07:46:20 crc kubenswrapper[4826]: E0129 07:46:20.811271 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:46:34 crc kubenswrapper[4826]: I0129 07:46:34.808651 4826 scope.go:117] "RemoveContainer" containerID="f18573a32e2ff34eb8f189624513c77e9ac0986e9e09ebc8ad008507d09dbf71" Jan 29 07:46:34 crc kubenswrapper[4826]: E0129 07:46:34.809375 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:46:46 crc kubenswrapper[4826]: I0129 07:46:46.813048 4826 scope.go:117] "RemoveContainer" containerID="f18573a32e2ff34eb8f189624513c77e9ac0986e9e09ebc8ad008507d09dbf71" Jan 29 07:46:46 crc kubenswrapper[4826]: E0129 07:46:46.813826 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:47:01 crc kubenswrapper[4826]: I0129 07:47:01.809619 4826 scope.go:117] "RemoveContainer" containerID="f18573a32e2ff34eb8f189624513c77e9ac0986e9e09ebc8ad008507d09dbf71" Jan 29 07:47:01 crc kubenswrapper[4826]: E0129 07:47:01.810476 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:47:14 crc kubenswrapper[4826]: I0129 07:47:14.808586 4826 scope.go:117] "RemoveContainer" containerID="f18573a32e2ff34eb8f189624513c77e9ac0986e9e09ebc8ad008507d09dbf71" Jan 29 07:47:14 crc kubenswrapper[4826]: E0129 07:47:14.809328 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:47:26 crc kubenswrapper[4826]: I0129 07:47:26.817833 4826 scope.go:117] "RemoveContainer" containerID="f18573a32e2ff34eb8f189624513c77e9ac0986e9e09ebc8ad008507d09dbf71" Jan 29 07:47:26 crc kubenswrapper[4826]: E0129 07:47:26.818712 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:47:39 crc kubenswrapper[4826]: I0129 07:47:39.811332 4826 scope.go:117] "RemoveContainer" containerID="f18573a32e2ff34eb8f189624513c77e9ac0986e9e09ebc8ad008507d09dbf71" Jan 29 07:47:39 crc kubenswrapper[4826]: E0129 07:47:39.812463 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:47:50 crc kubenswrapper[4826]: I0129 07:47:50.808955 4826 scope.go:117] "RemoveContainer" containerID="f18573a32e2ff34eb8f189624513c77e9ac0986e9e09ebc8ad008507d09dbf71" Jan 29 07:47:50 crc kubenswrapper[4826]: E0129 07:47:50.811977 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:48:02 crc kubenswrapper[4826]: I0129 07:48:02.808912 4826 scope.go:117] "RemoveContainer" containerID="f18573a32e2ff34eb8f189624513c77e9ac0986e9e09ebc8ad008507d09dbf71" Jan 29 07:48:02 crc kubenswrapper[4826]: E0129 07:48:02.809833 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:48:14 crc kubenswrapper[4826]: I0129 07:48:14.809647 4826 scope.go:117] "RemoveContainer" containerID="f18573a32e2ff34eb8f189624513c77e9ac0986e9e09ebc8ad008507d09dbf71" Jan 29 07:48:14 crc kubenswrapper[4826]: E0129 07:48:14.810564 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:48:29 crc kubenswrapper[4826]: I0129 07:48:29.809841 4826 scope.go:117] "RemoveContainer" containerID="f18573a32e2ff34eb8f189624513c77e9ac0986e9e09ebc8ad008507d09dbf71" Jan 29 07:48:29 crc kubenswrapper[4826]: E0129 07:48:29.810717 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:48:44 crc kubenswrapper[4826]: I0129 07:48:44.809230 4826 scope.go:117] "RemoveContainer" containerID="f18573a32e2ff34eb8f189624513c77e9ac0986e9e09ebc8ad008507d09dbf71" Jan 29 07:48:44 crc kubenswrapper[4826]: E0129 07:48:44.809943 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:48:59 crc kubenswrapper[4826]: I0129 07:48:59.809099 4826 scope.go:117] "RemoveContainer" containerID="f18573a32e2ff34eb8f189624513c77e9ac0986e9e09ebc8ad008507d09dbf71" Jan 29 07:48:59 crc kubenswrapper[4826]: E0129 07:48:59.809709 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:49:12 crc kubenswrapper[4826]: I0129 07:49:12.809548 4826 scope.go:117] "RemoveContainer" containerID="f18573a32e2ff34eb8f189624513c77e9ac0986e9e09ebc8ad008507d09dbf71" Jan 29 07:49:12 crc kubenswrapper[4826]: E0129 07:49:12.810783 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:49:23 crc kubenswrapper[4826]: I0129 07:49:23.808891 4826 scope.go:117] "RemoveContainer" containerID="f18573a32e2ff34eb8f189624513c77e9ac0986e9e09ebc8ad008507d09dbf71" Jan 29 07:49:23 crc kubenswrapper[4826]: E0129 07:49:23.810007 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:49:36 crc kubenswrapper[4826]: I0129 07:49:36.812032 4826 scope.go:117] "RemoveContainer" containerID="f18573a32e2ff34eb8f189624513c77e9ac0986e9e09ebc8ad008507d09dbf71" Jan 29 07:49:36 crc kubenswrapper[4826]: E0129 07:49:36.813045 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:49:48 crc kubenswrapper[4826]: I0129 07:49:48.809899 4826 scope.go:117] "RemoveContainer" containerID="f18573a32e2ff34eb8f189624513c77e9ac0986e9e09ebc8ad008507d09dbf71" Jan 29 07:49:48 crc kubenswrapper[4826]: E0129 07:49:48.811179 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:50:01 crc kubenswrapper[4826]: I0129 07:50:01.809061 4826 scope.go:117] "RemoveContainer" containerID="f18573a32e2ff34eb8f189624513c77e9ac0986e9e09ebc8ad008507d09dbf71" Jan 29 07:50:01 crc kubenswrapper[4826]: E0129 07:50:01.811075 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:50:15 crc kubenswrapper[4826]: I0129 07:50:15.809401 4826 scope.go:117] "RemoveContainer" containerID="f18573a32e2ff34eb8f189624513c77e9ac0986e9e09ebc8ad008507d09dbf71" Jan 29 07:50:15 crc kubenswrapper[4826]: E0129 07:50:15.810460 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:50:29 crc kubenswrapper[4826]: I0129 07:50:29.808982 4826 scope.go:117] "RemoveContainer" containerID="f18573a32e2ff34eb8f189624513c77e9ac0986e9e09ebc8ad008507d09dbf71" Jan 29 07:50:29 crc kubenswrapper[4826]: E0129 07:50:29.809967 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:50:42 crc kubenswrapper[4826]: I0129 07:50:42.809919 4826 scope.go:117] "RemoveContainer" containerID="f18573a32e2ff34eb8f189624513c77e9ac0986e9e09ebc8ad008507d09dbf71" Jan 29 07:50:42 crc kubenswrapper[4826]: E0129 07:50:42.811189 4826 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:50:55 crc kubenswrapper[4826]: I0129 07:50:55.685962 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lh6fr"] Jan 29 07:50:55 crc kubenswrapper[4826]: E0129 07:50:55.687255 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a9d9073-69e4-4d4b-92cc-da505d00c9f8" containerName="collect-profiles" Jan 29 07:50:55 crc kubenswrapper[4826]: I0129 07:50:55.687279 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a9d9073-69e4-4d4b-92cc-da505d00c9f8" containerName="collect-profiles" Jan 29 07:50:55 crc kubenswrapper[4826]: I0129 07:50:55.687575 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a9d9073-69e4-4d4b-92cc-da505d00c9f8" containerName="collect-profiles" Jan 29 07:50:55 crc kubenswrapper[4826]: I0129 07:50:55.689672 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lh6fr" Jan 29 07:50:55 crc kubenswrapper[4826]: I0129 07:50:55.721077 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lh6fr"] Jan 29 07:50:55 crc kubenswrapper[4826]: I0129 07:50:55.786178 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2699fd23-307a-48bd-8f37-f96472c832c5-catalog-content\") pod \"community-operators-lh6fr\" (UID: \"2699fd23-307a-48bd-8f37-f96472c832c5\") " pod="openshift-marketplace/community-operators-lh6fr" Jan 29 07:50:55 crc kubenswrapper[4826]: I0129 07:50:55.786269 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2699fd23-307a-48bd-8f37-f96472c832c5-utilities\") pod \"community-operators-lh6fr\" (UID: \"2699fd23-307a-48bd-8f37-f96472c832c5\") " pod="openshift-marketplace/community-operators-lh6fr" Jan 29 07:50:55 crc kubenswrapper[4826]: I0129 07:50:55.786513 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kttph\" (UniqueName: \"kubernetes.io/projected/2699fd23-307a-48bd-8f37-f96472c832c5-kube-api-access-kttph\") pod \"community-operators-lh6fr\" (UID: \"2699fd23-307a-48bd-8f37-f96472c832c5\") " pod="openshift-marketplace/community-operators-lh6fr" Jan 29 07:50:55 crc kubenswrapper[4826]: I0129 07:50:55.879255 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2c229"] Jan 29 07:50:55 crc kubenswrapper[4826]: I0129 07:50:55.881548 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2c229" Jan 29 07:50:55 crc kubenswrapper[4826]: I0129 07:50:55.887624 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2699fd23-307a-48bd-8f37-f96472c832c5-catalog-content\") pod \"community-operators-lh6fr\" (UID: \"2699fd23-307a-48bd-8f37-f96472c832c5\") " pod="openshift-marketplace/community-operators-lh6fr" Jan 29 07:50:55 crc kubenswrapper[4826]: I0129 07:50:55.887707 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd26d70a-0057-4168-8a7c-12fb272ac5e4-utilities\") pod \"redhat-operators-2c229\" (UID: \"bd26d70a-0057-4168-8a7c-12fb272ac5e4\") " pod="openshift-marketplace/redhat-operators-2c229" Jan 29 07:50:55 crc kubenswrapper[4826]: I0129 07:50:55.887741 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd26d70a-0057-4168-8a7c-12fb272ac5e4-catalog-content\") pod \"redhat-operators-2c229\" (UID: \"bd26d70a-0057-4168-8a7c-12fb272ac5e4\") " pod="openshift-marketplace/redhat-operators-2c229" Jan 29 07:50:55 crc kubenswrapper[4826]: I0129 07:50:55.888193 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2699fd23-307a-48bd-8f37-f96472c832c5-utilities\") pod \"community-operators-lh6fr\" (UID: \"2699fd23-307a-48bd-8f37-f96472c832c5\") " pod="openshift-marketplace/community-operators-lh6fr" Jan 29 07:50:55 crc kubenswrapper[4826]: I0129 07:50:55.888388 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2cr2\" (UniqueName: \"kubernetes.io/projected/bd26d70a-0057-4168-8a7c-12fb272ac5e4-kube-api-access-c2cr2\") pod \"redhat-operators-2c229\" (UID: 
\"bd26d70a-0057-4168-8a7c-12fb272ac5e4\") " pod="openshift-marketplace/redhat-operators-2c229" Jan 29 07:50:55 crc kubenswrapper[4826]: I0129 07:50:55.888599 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2699fd23-307a-48bd-8f37-f96472c832c5-catalog-content\") pod \"community-operators-lh6fr\" (UID: \"2699fd23-307a-48bd-8f37-f96472c832c5\") " pod="openshift-marketplace/community-operators-lh6fr" Jan 29 07:50:55 crc kubenswrapper[4826]: I0129 07:50:55.888688 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kttph\" (UniqueName: \"kubernetes.io/projected/2699fd23-307a-48bd-8f37-f96472c832c5-kube-api-access-kttph\") pod \"community-operators-lh6fr\" (UID: \"2699fd23-307a-48bd-8f37-f96472c832c5\") " pod="openshift-marketplace/community-operators-lh6fr" Jan 29 07:50:55 crc kubenswrapper[4826]: I0129 07:50:55.888719 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2699fd23-307a-48bd-8f37-f96472c832c5-utilities\") pod \"community-operators-lh6fr\" (UID: \"2699fd23-307a-48bd-8f37-f96472c832c5\") " pod="openshift-marketplace/community-operators-lh6fr" Jan 29 07:50:55 crc kubenswrapper[4826]: I0129 07:50:55.889525 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2c229"] Jan 29 07:50:55 crc kubenswrapper[4826]: I0129 07:50:55.918252 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kttph\" (UniqueName: \"kubernetes.io/projected/2699fd23-307a-48bd-8f37-f96472c832c5-kube-api-access-kttph\") pod \"community-operators-lh6fr\" (UID: \"2699fd23-307a-48bd-8f37-f96472c832c5\") " pod="openshift-marketplace/community-operators-lh6fr" Jan 29 07:50:55 crc kubenswrapper[4826]: I0129 07:50:55.989883 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/bd26d70a-0057-4168-8a7c-12fb272ac5e4-utilities\") pod \"redhat-operators-2c229\" (UID: \"bd26d70a-0057-4168-8a7c-12fb272ac5e4\") " pod="openshift-marketplace/redhat-operators-2c229" Jan 29 07:50:55 crc kubenswrapper[4826]: I0129 07:50:55.990495 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd26d70a-0057-4168-8a7c-12fb272ac5e4-utilities\") pod \"redhat-operators-2c229\" (UID: \"bd26d70a-0057-4168-8a7c-12fb272ac5e4\") " pod="openshift-marketplace/redhat-operators-2c229" Jan 29 07:50:55 crc kubenswrapper[4826]: I0129 07:50:55.990522 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd26d70a-0057-4168-8a7c-12fb272ac5e4-catalog-content\") pod \"redhat-operators-2c229\" (UID: \"bd26d70a-0057-4168-8a7c-12fb272ac5e4\") " pod="openshift-marketplace/redhat-operators-2c229" Jan 29 07:50:55 crc kubenswrapper[4826]: I0129 07:50:55.990772 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2cr2\" (UniqueName: \"kubernetes.io/projected/bd26d70a-0057-4168-8a7c-12fb272ac5e4-kube-api-access-c2cr2\") pod \"redhat-operators-2c229\" (UID: \"bd26d70a-0057-4168-8a7c-12fb272ac5e4\") " pod="openshift-marketplace/redhat-operators-2c229" Jan 29 07:50:55 crc kubenswrapper[4826]: I0129 07:50:55.991081 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd26d70a-0057-4168-8a7c-12fb272ac5e4-catalog-content\") pod \"redhat-operators-2c229\" (UID: \"bd26d70a-0057-4168-8a7c-12fb272ac5e4\") " pod="openshift-marketplace/redhat-operators-2c229" Jan 29 07:50:56 crc kubenswrapper[4826]: I0129 07:50:56.011582 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2cr2\" (UniqueName: 
\"kubernetes.io/projected/bd26d70a-0057-4168-8a7c-12fb272ac5e4-kube-api-access-c2cr2\") pod \"redhat-operators-2c229\" (UID: \"bd26d70a-0057-4168-8a7c-12fb272ac5e4\") " pod="openshift-marketplace/redhat-operators-2c229" Jan 29 07:50:56 crc kubenswrapper[4826]: I0129 07:50:56.021213 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lh6fr" Jan 29 07:50:56 crc kubenswrapper[4826]: I0129 07:50:56.199744 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2c229" Jan 29 07:50:56 crc kubenswrapper[4826]: I0129 07:50:56.487917 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2c229"] Jan 29 07:50:56 crc kubenswrapper[4826]: I0129 07:50:56.538511 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lh6fr"] Jan 29 07:50:56 crc kubenswrapper[4826]: I0129 07:50:56.710529 4826 generic.go:334] "Generic (PLEG): container finished" podID="2699fd23-307a-48bd-8f37-f96472c832c5" containerID="6ad3609b4912fbc8b4d5af464e73f0724623106247b4e13425f0d50e79a3438e" exitCode=0 Jan 29 07:50:56 crc kubenswrapper[4826]: I0129 07:50:56.710586 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lh6fr" event={"ID":"2699fd23-307a-48bd-8f37-f96472c832c5","Type":"ContainerDied","Data":"6ad3609b4912fbc8b4d5af464e73f0724623106247b4e13425f0d50e79a3438e"} Jan 29 07:50:56 crc kubenswrapper[4826]: I0129 07:50:56.710610 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lh6fr" event={"ID":"2699fd23-307a-48bd-8f37-f96472c832c5","Type":"ContainerStarted","Data":"f4507e41c9654db501b4a8eb811ca4beaf8d39f33ec4bd428b2f3c45ec4bc76a"} Jan 29 07:50:56 crc kubenswrapper[4826]: I0129 07:50:56.714734 4826 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Jan 29 07:50:56 crc kubenswrapper[4826]: I0129 07:50:56.716646 4826 generic.go:334] "Generic (PLEG): container finished" podID="bd26d70a-0057-4168-8a7c-12fb272ac5e4" containerID="670e9f785c9156704716473954a3d62cdc8062b20df17445d0ad75a17cca3b59" exitCode=0 Jan 29 07:50:56 crc kubenswrapper[4826]: I0129 07:50:56.716684 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2c229" event={"ID":"bd26d70a-0057-4168-8a7c-12fb272ac5e4","Type":"ContainerDied","Data":"670e9f785c9156704716473954a3d62cdc8062b20df17445d0ad75a17cca3b59"} Jan 29 07:50:56 crc kubenswrapper[4826]: I0129 07:50:56.716709 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2c229" event={"ID":"bd26d70a-0057-4168-8a7c-12fb272ac5e4","Type":"ContainerStarted","Data":"1da90e18ef8c8f37c0661656f6e601252c36fbf6f394a11947f23d9eedb26032"} Jan 29 07:50:56 crc kubenswrapper[4826]: I0129 07:50:56.813002 4826 scope.go:117] "RemoveContainer" containerID="f18573a32e2ff34eb8f189624513c77e9ac0986e9e09ebc8ad008507d09dbf71" Jan 29 07:50:56 crc kubenswrapper[4826]: E0129 07:50:56.813515 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:50:57 crc kubenswrapper[4826]: I0129 07:50:57.724915 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lh6fr" event={"ID":"2699fd23-307a-48bd-8f37-f96472c832c5","Type":"ContainerStarted","Data":"f8f9c04ef4270c5f65bbe4041f8fef17b0bedd8ae5200e4a5acd577e4a6c8789"} Jan 29 07:50:57 crc kubenswrapper[4826]: I0129 
07:50:57.726844 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2c229" event={"ID":"bd26d70a-0057-4168-8a7c-12fb272ac5e4","Type":"ContainerStarted","Data":"265b91a5b584e9a74cfd888f4255bad2e135f19c2b5cfd0d559a0799dfec2f17"} Jan 29 07:50:58 crc kubenswrapper[4826]: I0129 07:50:58.279014 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q7fbc"] Jan 29 07:50:58 crc kubenswrapper[4826]: I0129 07:50:58.281574 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q7fbc" Jan 29 07:50:58 crc kubenswrapper[4826]: I0129 07:50:58.316019 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q7fbc"] Jan 29 07:50:58 crc kubenswrapper[4826]: I0129 07:50:58.423139 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2c1a7ca-39cf-4d13-a70a-3bb998e0f274-utilities\") pod \"certified-operators-q7fbc\" (UID: \"a2c1a7ca-39cf-4d13-a70a-3bb998e0f274\") " pod="openshift-marketplace/certified-operators-q7fbc" Jan 29 07:50:58 crc kubenswrapper[4826]: I0129 07:50:58.423215 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6l2d\" (UniqueName: \"kubernetes.io/projected/a2c1a7ca-39cf-4d13-a70a-3bb998e0f274-kube-api-access-v6l2d\") pod \"certified-operators-q7fbc\" (UID: \"a2c1a7ca-39cf-4d13-a70a-3bb998e0f274\") " pod="openshift-marketplace/certified-operators-q7fbc" Jan 29 07:50:58 crc kubenswrapper[4826]: I0129 07:50:58.423282 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2c1a7ca-39cf-4d13-a70a-3bb998e0f274-catalog-content\") pod \"certified-operators-q7fbc\" (UID: 
\"a2c1a7ca-39cf-4d13-a70a-3bb998e0f274\") " pod="openshift-marketplace/certified-operators-q7fbc" Jan 29 07:50:58 crc kubenswrapper[4826]: I0129 07:50:58.524256 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2c1a7ca-39cf-4d13-a70a-3bb998e0f274-utilities\") pod \"certified-operators-q7fbc\" (UID: \"a2c1a7ca-39cf-4d13-a70a-3bb998e0f274\") " pod="openshift-marketplace/certified-operators-q7fbc" Jan 29 07:50:58 crc kubenswrapper[4826]: I0129 07:50:58.524337 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6l2d\" (UniqueName: \"kubernetes.io/projected/a2c1a7ca-39cf-4d13-a70a-3bb998e0f274-kube-api-access-v6l2d\") pod \"certified-operators-q7fbc\" (UID: \"a2c1a7ca-39cf-4d13-a70a-3bb998e0f274\") " pod="openshift-marketplace/certified-operators-q7fbc" Jan 29 07:50:58 crc kubenswrapper[4826]: I0129 07:50:58.524384 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2c1a7ca-39cf-4d13-a70a-3bb998e0f274-catalog-content\") pod \"certified-operators-q7fbc\" (UID: \"a2c1a7ca-39cf-4d13-a70a-3bb998e0f274\") " pod="openshift-marketplace/certified-operators-q7fbc" Jan 29 07:50:58 crc kubenswrapper[4826]: I0129 07:50:58.524996 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2c1a7ca-39cf-4d13-a70a-3bb998e0f274-utilities\") pod \"certified-operators-q7fbc\" (UID: \"a2c1a7ca-39cf-4d13-a70a-3bb998e0f274\") " pod="openshift-marketplace/certified-operators-q7fbc" Jan 29 07:50:58 crc kubenswrapper[4826]: I0129 07:50:58.525115 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2c1a7ca-39cf-4d13-a70a-3bb998e0f274-catalog-content\") pod \"certified-operators-q7fbc\" (UID: \"a2c1a7ca-39cf-4d13-a70a-3bb998e0f274\") 
" pod="openshift-marketplace/certified-operators-q7fbc" Jan 29 07:50:58 crc kubenswrapper[4826]: I0129 07:50:58.543155 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6l2d\" (UniqueName: \"kubernetes.io/projected/a2c1a7ca-39cf-4d13-a70a-3bb998e0f274-kube-api-access-v6l2d\") pod \"certified-operators-q7fbc\" (UID: \"a2c1a7ca-39cf-4d13-a70a-3bb998e0f274\") " pod="openshift-marketplace/certified-operators-q7fbc" Jan 29 07:50:58 crc kubenswrapper[4826]: I0129 07:50:58.602523 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q7fbc" Jan 29 07:50:58 crc kubenswrapper[4826]: I0129 07:50:58.752620 4826 generic.go:334] "Generic (PLEG): container finished" podID="2699fd23-307a-48bd-8f37-f96472c832c5" containerID="f8f9c04ef4270c5f65bbe4041f8fef17b0bedd8ae5200e4a5acd577e4a6c8789" exitCode=0 Jan 29 07:50:58 crc kubenswrapper[4826]: I0129 07:50:58.752916 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lh6fr" event={"ID":"2699fd23-307a-48bd-8f37-f96472c832c5","Type":"ContainerDied","Data":"f8f9c04ef4270c5f65bbe4041f8fef17b0bedd8ae5200e4a5acd577e4a6c8789"} Jan 29 07:50:58 crc kubenswrapper[4826]: I0129 07:50:58.758428 4826 generic.go:334] "Generic (PLEG): container finished" podID="bd26d70a-0057-4168-8a7c-12fb272ac5e4" containerID="265b91a5b584e9a74cfd888f4255bad2e135f19c2b5cfd0d559a0799dfec2f17" exitCode=0 Jan 29 07:50:58 crc kubenswrapper[4826]: I0129 07:50:58.758469 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2c229" event={"ID":"bd26d70a-0057-4168-8a7c-12fb272ac5e4","Type":"ContainerDied","Data":"265b91a5b584e9a74cfd888f4255bad2e135f19c2b5cfd0d559a0799dfec2f17"} Jan 29 07:50:58 crc kubenswrapper[4826]: I0129 07:50:58.894367 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bd8sn"] Jan 29 07:50:58 crc 
kubenswrapper[4826]: I0129 07:50:58.934378 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bd8sn" Jan 29 07:50:58 crc kubenswrapper[4826]: I0129 07:50:58.937246 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q7fbc"] Jan 29 07:50:59 crc kubenswrapper[4826]: I0129 07:50:58.999893 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bd8sn"] Jan 29 07:50:59 crc kubenswrapper[4826]: I0129 07:50:59.048848 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66632125-cfa7-49d5-8bf4-1ed4d57e5f30-catalog-content\") pod \"redhat-marketplace-bd8sn\" (UID: \"66632125-cfa7-49d5-8bf4-1ed4d57e5f30\") " pod="openshift-marketplace/redhat-marketplace-bd8sn" Jan 29 07:50:59 crc kubenswrapper[4826]: I0129 07:50:59.048919 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fz68\" (UniqueName: \"kubernetes.io/projected/66632125-cfa7-49d5-8bf4-1ed4d57e5f30-kube-api-access-8fz68\") pod \"redhat-marketplace-bd8sn\" (UID: \"66632125-cfa7-49d5-8bf4-1ed4d57e5f30\") " pod="openshift-marketplace/redhat-marketplace-bd8sn" Jan 29 07:50:59 crc kubenswrapper[4826]: I0129 07:50:59.048961 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66632125-cfa7-49d5-8bf4-1ed4d57e5f30-utilities\") pod \"redhat-marketplace-bd8sn\" (UID: \"66632125-cfa7-49d5-8bf4-1ed4d57e5f30\") " pod="openshift-marketplace/redhat-marketplace-bd8sn" Jan 29 07:50:59 crc kubenswrapper[4826]: I0129 07:50:59.149748 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/66632125-cfa7-49d5-8bf4-1ed4d57e5f30-utilities\") pod \"redhat-marketplace-bd8sn\" (UID: \"66632125-cfa7-49d5-8bf4-1ed4d57e5f30\") " pod="openshift-marketplace/redhat-marketplace-bd8sn" Jan 29 07:50:59 crc kubenswrapper[4826]: I0129 07:50:59.149813 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66632125-cfa7-49d5-8bf4-1ed4d57e5f30-catalog-content\") pod \"redhat-marketplace-bd8sn\" (UID: \"66632125-cfa7-49d5-8bf4-1ed4d57e5f30\") " pod="openshift-marketplace/redhat-marketplace-bd8sn" Jan 29 07:50:59 crc kubenswrapper[4826]: I0129 07:50:59.149874 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fz68\" (UniqueName: \"kubernetes.io/projected/66632125-cfa7-49d5-8bf4-1ed4d57e5f30-kube-api-access-8fz68\") pod \"redhat-marketplace-bd8sn\" (UID: \"66632125-cfa7-49d5-8bf4-1ed4d57e5f30\") " pod="openshift-marketplace/redhat-marketplace-bd8sn" Jan 29 07:50:59 crc kubenswrapper[4826]: I0129 07:50:59.150178 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66632125-cfa7-49d5-8bf4-1ed4d57e5f30-utilities\") pod \"redhat-marketplace-bd8sn\" (UID: \"66632125-cfa7-49d5-8bf4-1ed4d57e5f30\") " pod="openshift-marketplace/redhat-marketplace-bd8sn" Jan 29 07:50:59 crc kubenswrapper[4826]: I0129 07:50:59.150508 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66632125-cfa7-49d5-8bf4-1ed4d57e5f30-catalog-content\") pod \"redhat-marketplace-bd8sn\" (UID: \"66632125-cfa7-49d5-8bf4-1ed4d57e5f30\") " pod="openshift-marketplace/redhat-marketplace-bd8sn" Jan 29 07:50:59 crc kubenswrapper[4826]: I0129 07:50:59.171888 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fz68\" (UniqueName: 
\"kubernetes.io/projected/66632125-cfa7-49d5-8bf4-1ed4d57e5f30-kube-api-access-8fz68\") pod \"redhat-marketplace-bd8sn\" (UID: \"66632125-cfa7-49d5-8bf4-1ed4d57e5f30\") " pod="openshift-marketplace/redhat-marketplace-bd8sn" Jan 29 07:50:59 crc kubenswrapper[4826]: I0129 07:50:59.348584 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bd8sn" Jan 29 07:50:59 crc kubenswrapper[4826]: I0129 07:50:59.604146 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bd8sn"] Jan 29 07:50:59 crc kubenswrapper[4826]: I0129 07:50:59.771371 4826 generic.go:334] "Generic (PLEG): container finished" podID="a2c1a7ca-39cf-4d13-a70a-3bb998e0f274" containerID="792144b425f1ef9260719c14c8d91dd5b4ef23098cc2cdee92f1307d43b99c7b" exitCode=0 Jan 29 07:50:59 crc kubenswrapper[4826]: I0129 07:50:59.771472 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7fbc" event={"ID":"a2c1a7ca-39cf-4d13-a70a-3bb998e0f274","Type":"ContainerDied","Data":"792144b425f1ef9260719c14c8d91dd5b4ef23098cc2cdee92f1307d43b99c7b"} Jan 29 07:50:59 crc kubenswrapper[4826]: I0129 07:50:59.771502 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7fbc" event={"ID":"a2c1a7ca-39cf-4d13-a70a-3bb998e0f274","Type":"ContainerStarted","Data":"8e9ed5aed4c29030cdec5d81b70793f880ee01abcecedcc02adba13163e1f993"} Jan 29 07:50:59 crc kubenswrapper[4826]: I0129 07:50:59.777111 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lh6fr" event={"ID":"2699fd23-307a-48bd-8f37-f96472c832c5","Type":"ContainerStarted","Data":"e420f99d9a2e63889997d1890cc29e96b8374570307bc08804146f27f86cb082"} Jan 29 07:50:59 crc kubenswrapper[4826]: I0129 07:50:59.787486 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2c229" 
event={"ID":"bd26d70a-0057-4168-8a7c-12fb272ac5e4","Type":"ContainerStarted","Data":"2a2b764f66db92ebc89b6e9501fd5fe06a4c70c421f17f99d563cd384dd7ef16"} Jan 29 07:50:59 crc kubenswrapper[4826]: I0129 07:50:59.789359 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bd8sn" event={"ID":"66632125-cfa7-49d5-8bf4-1ed4d57e5f30","Type":"ContainerStarted","Data":"beef7f0547969d6b9357646dcaa040b705ac482cb98338e610bafc179733bd04"} Jan 29 07:50:59 crc kubenswrapper[4826]: I0129 07:50:59.822694 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2c229" podStartSLOduration=2.38512948 podStartE2EDuration="4.822679808s" podCreationTimestamp="2026-01-29 07:50:55 +0000 UTC" firstStartedPulling="2026-01-29 07:50:56.717977918 +0000 UTC m=+4040.579770987" lastFinishedPulling="2026-01-29 07:50:59.155528236 +0000 UTC m=+4043.017321315" observedRunningTime="2026-01-29 07:50:59.81743769 +0000 UTC m=+4043.679230759" watchObservedRunningTime="2026-01-29 07:50:59.822679808 +0000 UTC m=+4043.684472877" Jan 29 07:50:59 crc kubenswrapper[4826]: I0129 07:50:59.863077 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lh6fr" podStartSLOduration=2.350163379 podStartE2EDuration="4.863057631s" podCreationTimestamp="2026-01-29 07:50:55 +0000 UTC" firstStartedPulling="2026-01-29 07:50:56.714510017 +0000 UTC m=+4040.576303086" lastFinishedPulling="2026-01-29 07:50:59.227404269 +0000 UTC m=+4043.089197338" observedRunningTime="2026-01-29 07:50:59.856668143 +0000 UTC m=+4043.718461232" watchObservedRunningTime="2026-01-29 07:50:59.863057631 +0000 UTC m=+4043.724850710" Jan 29 07:51:00 crc kubenswrapper[4826]: I0129 07:51:00.799533 4826 generic.go:334] "Generic (PLEG): container finished" podID="66632125-cfa7-49d5-8bf4-1ed4d57e5f30" containerID="6a45c8537367ca7b2e25c75821f811aca9b16108c613c3e2f06b75f308bc5d28" exitCode=0 Jan 
29 07:51:00 crc kubenswrapper[4826]: I0129 07:51:00.799697 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bd8sn" event={"ID":"66632125-cfa7-49d5-8bf4-1ed4d57e5f30","Type":"ContainerDied","Data":"6a45c8537367ca7b2e25c75821f811aca9b16108c613c3e2f06b75f308bc5d28"} Jan 29 07:51:00 crc kubenswrapper[4826]: I0129 07:51:00.802915 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7fbc" event={"ID":"a2c1a7ca-39cf-4d13-a70a-3bb998e0f274","Type":"ContainerStarted","Data":"8eaff784b39f74e66d8dd4a009f14f35dee63ab520a4f9ba87a9a3b6811cd9a4"} Jan 29 07:51:01 crc kubenswrapper[4826]: I0129 07:51:01.811315 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bd8sn" event={"ID":"66632125-cfa7-49d5-8bf4-1ed4d57e5f30","Type":"ContainerStarted","Data":"ab3b42803747f07c276dd76bc1a660ed15804290fbc82796f73f027a2a1c2a7f"} Jan 29 07:51:01 crc kubenswrapper[4826]: I0129 07:51:01.813633 4826 generic.go:334] "Generic (PLEG): container finished" podID="a2c1a7ca-39cf-4d13-a70a-3bb998e0f274" containerID="8eaff784b39f74e66d8dd4a009f14f35dee63ab520a4f9ba87a9a3b6811cd9a4" exitCode=0 Jan 29 07:51:01 crc kubenswrapper[4826]: I0129 07:51:01.813677 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7fbc" event={"ID":"a2c1a7ca-39cf-4d13-a70a-3bb998e0f274","Type":"ContainerDied","Data":"8eaff784b39f74e66d8dd4a009f14f35dee63ab520a4f9ba87a9a3b6811cd9a4"} Jan 29 07:51:02 crc kubenswrapper[4826]: I0129 07:51:02.846110 4826 generic.go:334] "Generic (PLEG): container finished" podID="66632125-cfa7-49d5-8bf4-1ed4d57e5f30" containerID="ab3b42803747f07c276dd76bc1a660ed15804290fbc82796f73f027a2a1c2a7f" exitCode=0 Jan 29 07:51:02 crc kubenswrapper[4826]: I0129 07:51:02.847314 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bd8sn" 
event={"ID":"66632125-cfa7-49d5-8bf4-1ed4d57e5f30","Type":"ContainerDied","Data":"ab3b42803747f07c276dd76bc1a660ed15804290fbc82796f73f027a2a1c2a7f"} Jan 29 07:51:02 crc kubenswrapper[4826]: I0129 07:51:02.854876 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7fbc" event={"ID":"a2c1a7ca-39cf-4d13-a70a-3bb998e0f274","Type":"ContainerStarted","Data":"821c64f34c382b61c8c445542c7496781af0c8ae9ac2136b6348b8c329c39a41"} Jan 29 07:51:02 crc kubenswrapper[4826]: I0129 07:51:02.904070 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-q7fbc" podStartSLOduration=2.453308127 podStartE2EDuration="4.904049523s" podCreationTimestamp="2026-01-29 07:50:58 +0000 UTC" firstStartedPulling="2026-01-29 07:50:59.773751139 +0000 UTC m=+4043.635544208" lastFinishedPulling="2026-01-29 07:51:02.224492535 +0000 UTC m=+4046.086285604" observedRunningTime="2026-01-29 07:51:02.902711907 +0000 UTC m=+4046.764504976" watchObservedRunningTime="2026-01-29 07:51:02.904049523 +0000 UTC m=+4046.765842592" Jan 29 07:51:03 crc kubenswrapper[4826]: I0129 07:51:03.863782 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bd8sn" event={"ID":"66632125-cfa7-49d5-8bf4-1ed4d57e5f30","Type":"ContainerStarted","Data":"31255dc3ba7a43c43163a949532ce5c186981a36a7f2c3c7675b6d54a07078c3"} Jan 29 07:51:03 crc kubenswrapper[4826]: I0129 07:51:03.886841 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bd8sn" podStartSLOduration=3.379710266 podStartE2EDuration="5.886825077s" podCreationTimestamp="2026-01-29 07:50:58 +0000 UTC" firstStartedPulling="2026-01-29 07:51:00.803197762 +0000 UTC m=+4044.664990841" lastFinishedPulling="2026-01-29 07:51:03.310312563 +0000 UTC m=+4047.172105652" observedRunningTime="2026-01-29 07:51:03.880515691 +0000 UTC m=+4047.742308760" 
watchObservedRunningTime="2026-01-29 07:51:03.886825077 +0000 UTC m=+4047.748618146" Jan 29 07:51:06 crc kubenswrapper[4826]: I0129 07:51:06.022160 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lh6fr" Jan 29 07:51:06 crc kubenswrapper[4826]: I0129 07:51:06.022219 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lh6fr" Jan 29 07:51:06 crc kubenswrapper[4826]: I0129 07:51:06.086550 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lh6fr" Jan 29 07:51:06 crc kubenswrapper[4826]: I0129 07:51:06.200934 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2c229" Jan 29 07:51:06 crc kubenswrapper[4826]: I0129 07:51:06.201614 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2c229" Jan 29 07:51:06 crc kubenswrapper[4826]: I0129 07:51:06.269127 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2c229" Jan 29 07:51:06 crc kubenswrapper[4826]: I0129 07:51:06.952204 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2c229" Jan 29 07:51:06 crc kubenswrapper[4826]: I0129 07:51:06.960821 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lh6fr" Jan 29 07:51:08 crc kubenswrapper[4826]: I0129 07:51:08.602683 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-q7fbc" Jan 29 07:51:08 crc kubenswrapper[4826]: I0129 07:51:08.603604 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q7fbc" Jan 29 07:51:08 crc 
kubenswrapper[4826]: I0129 07:51:08.669470 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-q7fbc" Jan 29 07:51:08 crc kubenswrapper[4826]: I0129 07:51:08.671956 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lh6fr"] Jan 29 07:51:08 crc kubenswrapper[4826]: I0129 07:51:08.809703 4826 scope.go:117] "RemoveContainer" containerID="f18573a32e2ff34eb8f189624513c77e9ac0986e9e09ebc8ad008507d09dbf71" Jan 29 07:51:08 crc kubenswrapper[4826]: I0129 07:51:08.905433 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lh6fr" podUID="2699fd23-307a-48bd-8f37-f96472c832c5" containerName="registry-server" containerID="cri-o://e420f99d9a2e63889997d1890cc29e96b8374570307bc08804146f27f86cb082" gracePeriod=2 Jan 29 07:51:09 crc kubenswrapper[4826]: I0129 07:51:09.271065 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2c229"] Jan 29 07:51:09 crc kubenswrapper[4826]: I0129 07:51:09.349583 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bd8sn" Jan 29 07:51:09 crc kubenswrapper[4826]: I0129 07:51:09.349675 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bd8sn" Jan 29 07:51:09 crc kubenswrapper[4826]: I0129 07:51:09.444598 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bd8sn" Jan 29 07:51:09 crc kubenswrapper[4826]: I0129 07:51:09.449937 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q7fbc" Jan 29 07:51:09 crc kubenswrapper[4826]: I0129 07:51:09.919201 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerStarted","Data":"f80afad52e1ff4b5e6ce1a0e6aeec4ffdbea690f31e442a801b15dad967bf2ed"} Jan 29 07:51:09 crc kubenswrapper[4826]: I0129 07:51:09.923877 4826 generic.go:334] "Generic (PLEG): container finished" podID="2699fd23-307a-48bd-8f37-f96472c832c5" containerID="e420f99d9a2e63889997d1890cc29e96b8374570307bc08804146f27f86cb082" exitCode=0 Jan 29 07:51:09 crc kubenswrapper[4826]: I0129 07:51:09.924065 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lh6fr" event={"ID":"2699fd23-307a-48bd-8f37-f96472c832c5","Type":"ContainerDied","Data":"e420f99d9a2e63889997d1890cc29e96b8374570307bc08804146f27f86cb082"} Jan 29 07:51:09 crc kubenswrapper[4826]: I0129 07:51:09.924828 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2c229" podUID="bd26d70a-0057-4168-8a7c-12fb272ac5e4" containerName="registry-server" containerID="cri-o://2a2b764f66db92ebc89b6e9501fd5fe06a4c70c421f17f99d563cd384dd7ef16" gracePeriod=2 Jan 29 07:51:10 crc kubenswrapper[4826]: I0129 07:51:10.012024 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bd8sn" Jan 29 07:51:10 crc kubenswrapper[4826]: I0129 07:51:10.325887 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2c229" Jan 29 07:51:10 crc kubenswrapper[4826]: I0129 07:51:10.422667 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd26d70a-0057-4168-8a7c-12fb272ac5e4-utilities\") pod \"bd26d70a-0057-4168-8a7c-12fb272ac5e4\" (UID: \"bd26d70a-0057-4168-8a7c-12fb272ac5e4\") " Jan 29 07:51:10 crc kubenswrapper[4826]: I0129 07:51:10.422730 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd26d70a-0057-4168-8a7c-12fb272ac5e4-catalog-content\") pod \"bd26d70a-0057-4168-8a7c-12fb272ac5e4\" (UID: \"bd26d70a-0057-4168-8a7c-12fb272ac5e4\") " Jan 29 07:51:10 crc kubenswrapper[4826]: I0129 07:51:10.422844 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2cr2\" (UniqueName: \"kubernetes.io/projected/bd26d70a-0057-4168-8a7c-12fb272ac5e4-kube-api-access-c2cr2\") pod \"bd26d70a-0057-4168-8a7c-12fb272ac5e4\" (UID: \"bd26d70a-0057-4168-8a7c-12fb272ac5e4\") " Jan 29 07:51:10 crc kubenswrapper[4826]: I0129 07:51:10.423915 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd26d70a-0057-4168-8a7c-12fb272ac5e4-utilities" (OuterVolumeSpecName: "utilities") pod "bd26d70a-0057-4168-8a7c-12fb272ac5e4" (UID: "bd26d70a-0057-4168-8a7c-12fb272ac5e4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:51:10 crc kubenswrapper[4826]: I0129 07:51:10.429314 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd26d70a-0057-4168-8a7c-12fb272ac5e4-kube-api-access-c2cr2" (OuterVolumeSpecName: "kube-api-access-c2cr2") pod "bd26d70a-0057-4168-8a7c-12fb272ac5e4" (UID: "bd26d70a-0057-4168-8a7c-12fb272ac5e4"). InnerVolumeSpecName "kube-api-access-c2cr2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:51:10 crc kubenswrapper[4826]: I0129 07:51:10.525331 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2cr2\" (UniqueName: \"kubernetes.io/projected/bd26d70a-0057-4168-8a7c-12fb272ac5e4-kube-api-access-c2cr2\") on node \"crc\" DevicePath \"\"" Jan 29 07:51:10 crc kubenswrapper[4826]: I0129 07:51:10.525383 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd26d70a-0057-4168-8a7c-12fb272ac5e4-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 07:51:10 crc kubenswrapper[4826]: I0129 07:51:10.565748 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd26d70a-0057-4168-8a7c-12fb272ac5e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bd26d70a-0057-4168-8a7c-12fb272ac5e4" (UID: "bd26d70a-0057-4168-8a7c-12fb272ac5e4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:51:10 crc kubenswrapper[4826]: I0129 07:51:10.626945 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd26d70a-0057-4168-8a7c-12fb272ac5e4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 07:51:10 crc kubenswrapper[4826]: I0129 07:51:10.738682 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lh6fr" Jan 29 07:51:10 crc kubenswrapper[4826]: I0129 07:51:10.830166 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2699fd23-307a-48bd-8f37-f96472c832c5-utilities\") pod \"2699fd23-307a-48bd-8f37-f96472c832c5\" (UID: \"2699fd23-307a-48bd-8f37-f96472c832c5\") " Jan 29 07:51:10 crc kubenswrapper[4826]: I0129 07:51:10.830227 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2699fd23-307a-48bd-8f37-f96472c832c5-catalog-content\") pod \"2699fd23-307a-48bd-8f37-f96472c832c5\" (UID: \"2699fd23-307a-48bd-8f37-f96472c832c5\") " Jan 29 07:51:10 crc kubenswrapper[4826]: I0129 07:51:10.830352 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kttph\" (UniqueName: \"kubernetes.io/projected/2699fd23-307a-48bd-8f37-f96472c832c5-kube-api-access-kttph\") pod \"2699fd23-307a-48bd-8f37-f96472c832c5\" (UID: \"2699fd23-307a-48bd-8f37-f96472c832c5\") " Jan 29 07:51:10 crc kubenswrapper[4826]: I0129 07:51:10.831565 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2699fd23-307a-48bd-8f37-f96472c832c5-utilities" (OuterVolumeSpecName: "utilities") pod "2699fd23-307a-48bd-8f37-f96472c832c5" (UID: "2699fd23-307a-48bd-8f37-f96472c832c5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:51:10 crc kubenswrapper[4826]: I0129 07:51:10.885902 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2699fd23-307a-48bd-8f37-f96472c832c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2699fd23-307a-48bd-8f37-f96472c832c5" (UID: "2699fd23-307a-48bd-8f37-f96472c832c5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 07:51:10 crc kubenswrapper[4826]: I0129 07:51:10.932428 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2699fd23-307a-48bd-8f37-f96472c832c5-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 07:51:10 crc kubenswrapper[4826]: I0129 07:51:10.934013 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2699fd23-307a-48bd-8f37-f96472c832c5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 07:51:10 crc kubenswrapper[4826]: I0129 07:51:10.936376 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lh6fr" event={"ID":"2699fd23-307a-48bd-8f37-f96472c832c5","Type":"ContainerDied","Data":"f4507e41c9654db501b4a8eb811ca4beaf8d39f33ec4bd428b2f3c45ec4bc76a"} Jan 29 07:51:10 crc kubenswrapper[4826]: I0129 07:51:10.936392 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lh6fr" Jan 29 07:51:10 crc kubenswrapper[4826]: I0129 07:51:10.936424 4826 scope.go:117] "RemoveContainer" containerID="e420f99d9a2e63889997d1890cc29e96b8374570307bc08804146f27f86cb082" Jan 29 07:51:10 crc kubenswrapper[4826]: I0129 07:51:10.938905 4826 generic.go:334] "Generic (PLEG): container finished" podID="bd26d70a-0057-4168-8a7c-12fb272ac5e4" containerID="2a2b764f66db92ebc89b6e9501fd5fe06a4c70c421f17f99d563cd384dd7ef16" exitCode=0 Jan 29 07:51:10 crc kubenswrapper[4826]: I0129 07:51:10.939023 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2c229" event={"ID":"bd26d70a-0057-4168-8a7c-12fb272ac5e4","Type":"ContainerDied","Data":"2a2b764f66db92ebc89b6e9501fd5fe06a4c70c421f17f99d563cd384dd7ef16"} Jan 29 07:51:10 crc kubenswrapper[4826]: I0129 07:51:10.939093 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2c229" event={"ID":"bd26d70a-0057-4168-8a7c-12fb272ac5e4","Type":"ContainerDied","Data":"1da90e18ef8c8f37c0661656f6e601252c36fbf6f394a11947f23d9eedb26032"} Jan 29 07:51:10 crc kubenswrapper[4826]: I0129 07:51:10.940404 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2c229" Jan 29 07:51:10 crc kubenswrapper[4826]: I0129 07:51:10.969664 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2c229"] Jan 29 07:51:10 crc kubenswrapper[4826]: I0129 07:51:10.971505 4826 scope.go:117] "RemoveContainer" containerID="f8f9c04ef4270c5f65bbe4041f8fef17b0bedd8ae5200e4a5acd577e4a6c8789" Jan 29 07:51:10 crc kubenswrapper[4826]: I0129 07:51:10.978475 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2c229"] Jan 29 07:51:11 crc kubenswrapper[4826]: I0129 07:51:11.299354 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2699fd23-307a-48bd-8f37-f96472c832c5-kube-api-access-kttph" (OuterVolumeSpecName: "kube-api-access-kttph") pod "2699fd23-307a-48bd-8f37-f96472c832c5" (UID: "2699fd23-307a-48bd-8f37-f96472c832c5"). InnerVolumeSpecName "kube-api-access-kttph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:51:11 crc kubenswrapper[4826]: I0129 07:51:11.312567 4826 scope.go:117] "RemoveContainer" containerID="6ad3609b4912fbc8b4d5af464e73f0724623106247b4e13425f0d50e79a3438e" Jan 29 07:51:11 crc kubenswrapper[4826]: I0129 07:51:11.340791 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kttph\" (UniqueName: \"kubernetes.io/projected/2699fd23-307a-48bd-8f37-f96472c832c5-kube-api-access-kttph\") on node \"crc\" DevicePath \"\"" Jan 29 07:51:11 crc kubenswrapper[4826]: I0129 07:51:11.341755 4826 scope.go:117] "RemoveContainer" containerID="2a2b764f66db92ebc89b6e9501fd5fe06a4c70c421f17f99d563cd384dd7ef16" Jan 29 07:51:11 crc kubenswrapper[4826]: I0129 07:51:11.366077 4826 scope.go:117] "RemoveContainer" containerID="265b91a5b584e9a74cfd888f4255bad2e135f19c2b5cfd0d559a0799dfec2f17" Jan 29 07:51:11 crc kubenswrapper[4826]: I0129 07:51:11.386830 4826 scope.go:117] "RemoveContainer" containerID="670e9f785c9156704716473954a3d62cdc8062b20df17445d0ad75a17cca3b59" Jan 29 07:51:11 crc kubenswrapper[4826]: I0129 07:51:11.421916 4826 scope.go:117] "RemoveContainer" containerID="2a2b764f66db92ebc89b6e9501fd5fe06a4c70c421f17f99d563cd384dd7ef16" Jan 29 07:51:11 crc kubenswrapper[4826]: E0129 07:51:11.422644 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a2b764f66db92ebc89b6e9501fd5fe06a4c70c421f17f99d563cd384dd7ef16\": container with ID starting with 2a2b764f66db92ebc89b6e9501fd5fe06a4c70c421f17f99d563cd384dd7ef16 not found: ID does not exist" containerID="2a2b764f66db92ebc89b6e9501fd5fe06a4c70c421f17f99d563cd384dd7ef16" Jan 29 07:51:11 crc kubenswrapper[4826]: I0129 07:51:11.422698 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a2b764f66db92ebc89b6e9501fd5fe06a4c70c421f17f99d563cd384dd7ef16"} err="failed to get container status 
\"2a2b764f66db92ebc89b6e9501fd5fe06a4c70c421f17f99d563cd384dd7ef16\": rpc error: code = NotFound desc = could not find container \"2a2b764f66db92ebc89b6e9501fd5fe06a4c70c421f17f99d563cd384dd7ef16\": container with ID starting with 2a2b764f66db92ebc89b6e9501fd5fe06a4c70c421f17f99d563cd384dd7ef16 not found: ID does not exist" Jan 29 07:51:11 crc kubenswrapper[4826]: I0129 07:51:11.422732 4826 scope.go:117] "RemoveContainer" containerID="265b91a5b584e9a74cfd888f4255bad2e135f19c2b5cfd0d559a0799dfec2f17" Jan 29 07:51:11 crc kubenswrapper[4826]: E0129 07:51:11.423256 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"265b91a5b584e9a74cfd888f4255bad2e135f19c2b5cfd0d559a0799dfec2f17\": container with ID starting with 265b91a5b584e9a74cfd888f4255bad2e135f19c2b5cfd0d559a0799dfec2f17 not found: ID does not exist" containerID="265b91a5b584e9a74cfd888f4255bad2e135f19c2b5cfd0d559a0799dfec2f17" Jan 29 07:51:11 crc kubenswrapper[4826]: I0129 07:51:11.423339 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"265b91a5b584e9a74cfd888f4255bad2e135f19c2b5cfd0d559a0799dfec2f17"} err="failed to get container status \"265b91a5b584e9a74cfd888f4255bad2e135f19c2b5cfd0d559a0799dfec2f17\": rpc error: code = NotFound desc = could not find container \"265b91a5b584e9a74cfd888f4255bad2e135f19c2b5cfd0d559a0799dfec2f17\": container with ID starting with 265b91a5b584e9a74cfd888f4255bad2e135f19c2b5cfd0d559a0799dfec2f17 not found: ID does not exist" Jan 29 07:51:11 crc kubenswrapper[4826]: I0129 07:51:11.423366 4826 scope.go:117] "RemoveContainer" containerID="670e9f785c9156704716473954a3d62cdc8062b20df17445d0ad75a17cca3b59" Jan 29 07:51:11 crc kubenswrapper[4826]: E0129 07:51:11.423851 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"670e9f785c9156704716473954a3d62cdc8062b20df17445d0ad75a17cca3b59\": container with ID starting with 670e9f785c9156704716473954a3d62cdc8062b20df17445d0ad75a17cca3b59 not found: ID does not exist" containerID="670e9f785c9156704716473954a3d62cdc8062b20df17445d0ad75a17cca3b59"
Jan 29 07:51:11 crc kubenswrapper[4826]: I0129 07:51:11.423897 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"670e9f785c9156704716473954a3d62cdc8062b20df17445d0ad75a17cca3b59"} err="failed to get container status \"670e9f785c9156704716473954a3d62cdc8062b20df17445d0ad75a17cca3b59\": rpc error: code = NotFound desc = could not find container \"670e9f785c9156704716473954a3d62cdc8062b20df17445d0ad75a17cca3b59\": container with ID starting with 670e9f785c9156704716473954a3d62cdc8062b20df17445d0ad75a17cca3b59 not found: ID does not exist"
Jan 29 07:51:11 crc kubenswrapper[4826]: I0129 07:51:11.572119 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lh6fr"]
Jan 29 07:51:11 crc kubenswrapper[4826]: I0129 07:51:11.580892 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lh6fr"]
Jan 29 07:51:11 crc kubenswrapper[4826]: I0129 07:51:11.669751 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q7fbc"]
Jan 29 07:51:11 crc kubenswrapper[4826]: I0129 07:51:11.951467 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-q7fbc" podUID="a2c1a7ca-39cf-4d13-a70a-3bb998e0f274" containerName="registry-server" containerID="cri-o://821c64f34c382b61c8c445542c7496781af0c8ae9ac2136b6348b8c329c39a41" gracePeriod=2
Jan 29 07:51:12 crc kubenswrapper[4826]: I0129 07:51:12.395408 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q7fbc"
Jan 29 07:51:12 crc kubenswrapper[4826]: I0129 07:51:12.557723 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6l2d\" (UniqueName: \"kubernetes.io/projected/a2c1a7ca-39cf-4d13-a70a-3bb998e0f274-kube-api-access-v6l2d\") pod \"a2c1a7ca-39cf-4d13-a70a-3bb998e0f274\" (UID: \"a2c1a7ca-39cf-4d13-a70a-3bb998e0f274\") "
Jan 29 07:51:12 crc kubenswrapper[4826]: I0129 07:51:12.557879 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2c1a7ca-39cf-4d13-a70a-3bb998e0f274-catalog-content\") pod \"a2c1a7ca-39cf-4d13-a70a-3bb998e0f274\" (UID: \"a2c1a7ca-39cf-4d13-a70a-3bb998e0f274\") "
Jan 29 07:51:12 crc kubenswrapper[4826]: I0129 07:51:12.557906 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2c1a7ca-39cf-4d13-a70a-3bb998e0f274-utilities\") pod \"a2c1a7ca-39cf-4d13-a70a-3bb998e0f274\" (UID: \"a2c1a7ca-39cf-4d13-a70a-3bb998e0f274\") "
Jan 29 07:51:12 crc kubenswrapper[4826]: I0129 07:51:12.558807 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2c1a7ca-39cf-4d13-a70a-3bb998e0f274-utilities" (OuterVolumeSpecName: "utilities") pod "a2c1a7ca-39cf-4d13-a70a-3bb998e0f274" (UID: "a2c1a7ca-39cf-4d13-a70a-3bb998e0f274"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 07:51:12 crc kubenswrapper[4826]: I0129 07:51:12.562824 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2c1a7ca-39cf-4d13-a70a-3bb998e0f274-kube-api-access-v6l2d" (OuterVolumeSpecName: "kube-api-access-v6l2d") pod "a2c1a7ca-39cf-4d13-a70a-3bb998e0f274" (UID: "a2c1a7ca-39cf-4d13-a70a-3bb998e0f274"). InnerVolumeSpecName "kube-api-access-v6l2d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 07:51:12 crc kubenswrapper[4826]: I0129 07:51:12.603436 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2c1a7ca-39cf-4d13-a70a-3bb998e0f274-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a2c1a7ca-39cf-4d13-a70a-3bb998e0f274" (UID: "a2c1a7ca-39cf-4d13-a70a-3bb998e0f274"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 07:51:12 crc kubenswrapper[4826]: I0129 07:51:12.659273 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6l2d\" (UniqueName: \"kubernetes.io/projected/a2c1a7ca-39cf-4d13-a70a-3bb998e0f274-kube-api-access-v6l2d\") on node \"crc\" DevicePath \"\""
Jan 29 07:51:12 crc kubenswrapper[4826]: I0129 07:51:12.659364 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2c1a7ca-39cf-4d13-a70a-3bb998e0f274-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 07:51:12 crc kubenswrapper[4826]: I0129 07:51:12.659391 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2c1a7ca-39cf-4d13-a70a-3bb998e0f274-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 07:51:12 crc kubenswrapper[4826]: I0129 07:51:12.819182 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2699fd23-307a-48bd-8f37-f96472c832c5" path="/var/lib/kubelet/pods/2699fd23-307a-48bd-8f37-f96472c832c5/volumes"
Jan 29 07:51:12 crc kubenswrapper[4826]: I0129 07:51:12.820195 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd26d70a-0057-4168-8a7c-12fb272ac5e4" path="/var/lib/kubelet/pods/bd26d70a-0057-4168-8a7c-12fb272ac5e4/volumes"
Jan 29 07:51:12 crc kubenswrapper[4826]: I0129 07:51:12.959960 4826 generic.go:334] "Generic (PLEG): container finished" podID="a2c1a7ca-39cf-4d13-a70a-3bb998e0f274" containerID="821c64f34c382b61c8c445542c7496781af0c8ae9ac2136b6348b8c329c39a41" exitCode=0
Jan 29 07:51:12 crc kubenswrapper[4826]: I0129 07:51:12.960000 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7fbc" event={"ID":"a2c1a7ca-39cf-4d13-a70a-3bb998e0f274","Type":"ContainerDied","Data":"821c64f34c382b61c8c445542c7496781af0c8ae9ac2136b6348b8c329c39a41"}
Jan 29 07:51:12 crc kubenswrapper[4826]: I0129 07:51:12.960013 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q7fbc"
Jan 29 07:51:12 crc kubenswrapper[4826]: I0129 07:51:12.960027 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7fbc" event={"ID":"a2c1a7ca-39cf-4d13-a70a-3bb998e0f274","Type":"ContainerDied","Data":"8e9ed5aed4c29030cdec5d81b70793f880ee01abcecedcc02adba13163e1f993"}
Jan 29 07:51:12 crc kubenswrapper[4826]: I0129 07:51:12.960047 4826 scope.go:117] "RemoveContainer" containerID="821c64f34c382b61c8c445542c7496781af0c8ae9ac2136b6348b8c329c39a41"
Jan 29 07:51:12 crc kubenswrapper[4826]: I0129 07:51:12.983944 4826 scope.go:117] "RemoveContainer" containerID="8eaff784b39f74e66d8dd4a009f14f35dee63ab520a4f9ba87a9a3b6811cd9a4"
Jan 29 07:51:13 crc kubenswrapper[4826]: I0129 07:51:13.003657 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q7fbc"]
Jan 29 07:51:13 crc kubenswrapper[4826]: I0129 07:51:13.014364 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-q7fbc"]
Jan 29 07:51:13 crc kubenswrapper[4826]: I0129 07:51:13.024744 4826 scope.go:117] "RemoveContainer" containerID="792144b425f1ef9260719c14c8d91dd5b4ef23098cc2cdee92f1307d43b99c7b"
Jan 29 07:51:13 crc kubenswrapper[4826]: I0129 07:51:13.056878 4826 scope.go:117] "RemoveContainer" containerID="821c64f34c382b61c8c445542c7496781af0c8ae9ac2136b6348b8c329c39a41"
Jan 29 07:51:13 crc kubenswrapper[4826]: E0129 07:51:13.057397 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"821c64f34c382b61c8c445542c7496781af0c8ae9ac2136b6348b8c329c39a41\": container with ID starting with 821c64f34c382b61c8c445542c7496781af0c8ae9ac2136b6348b8c329c39a41 not found: ID does not exist" containerID="821c64f34c382b61c8c445542c7496781af0c8ae9ac2136b6348b8c329c39a41"
Jan 29 07:51:13 crc kubenswrapper[4826]: I0129 07:51:13.057434 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"821c64f34c382b61c8c445542c7496781af0c8ae9ac2136b6348b8c329c39a41"} err="failed to get container status \"821c64f34c382b61c8c445542c7496781af0c8ae9ac2136b6348b8c329c39a41\": rpc error: code = NotFound desc = could not find container \"821c64f34c382b61c8c445542c7496781af0c8ae9ac2136b6348b8c329c39a41\": container with ID starting with 821c64f34c382b61c8c445542c7496781af0c8ae9ac2136b6348b8c329c39a41 not found: ID does not exist"
Jan 29 07:51:13 crc kubenswrapper[4826]: I0129 07:51:13.057457 4826 scope.go:117] "RemoveContainer" containerID="8eaff784b39f74e66d8dd4a009f14f35dee63ab520a4f9ba87a9a3b6811cd9a4"
Jan 29 07:51:13 crc kubenswrapper[4826]: E0129 07:51:13.057720 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8eaff784b39f74e66d8dd4a009f14f35dee63ab520a4f9ba87a9a3b6811cd9a4\": container with ID starting with 8eaff784b39f74e66d8dd4a009f14f35dee63ab520a4f9ba87a9a3b6811cd9a4 not found: ID does not exist" containerID="8eaff784b39f74e66d8dd4a009f14f35dee63ab520a4f9ba87a9a3b6811cd9a4"
Jan 29 07:51:13 crc kubenswrapper[4826]: I0129 07:51:13.057743 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8eaff784b39f74e66d8dd4a009f14f35dee63ab520a4f9ba87a9a3b6811cd9a4"} err="failed to get container status \"8eaff784b39f74e66d8dd4a009f14f35dee63ab520a4f9ba87a9a3b6811cd9a4\": rpc error: code = NotFound desc = could not find container \"8eaff784b39f74e66d8dd4a009f14f35dee63ab520a4f9ba87a9a3b6811cd9a4\": container with ID starting with 8eaff784b39f74e66d8dd4a009f14f35dee63ab520a4f9ba87a9a3b6811cd9a4 not found: ID does not exist"
Jan 29 07:51:13 crc kubenswrapper[4826]: I0129 07:51:13.057756 4826 scope.go:117] "RemoveContainer" containerID="792144b425f1ef9260719c14c8d91dd5b4ef23098cc2cdee92f1307d43b99c7b"
Jan 29 07:51:13 crc kubenswrapper[4826]: E0129 07:51:13.057962 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"792144b425f1ef9260719c14c8d91dd5b4ef23098cc2cdee92f1307d43b99c7b\": container with ID starting with 792144b425f1ef9260719c14c8d91dd5b4ef23098cc2cdee92f1307d43b99c7b not found: ID does not exist" containerID="792144b425f1ef9260719c14c8d91dd5b4ef23098cc2cdee92f1307d43b99c7b"
Jan 29 07:51:13 crc kubenswrapper[4826]: I0129 07:51:13.057981 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"792144b425f1ef9260719c14c8d91dd5b4ef23098cc2cdee92f1307d43b99c7b"} err="failed to get container status \"792144b425f1ef9260719c14c8d91dd5b4ef23098cc2cdee92f1307d43b99c7b\": rpc error: code = NotFound desc = could not find container \"792144b425f1ef9260719c14c8d91dd5b4ef23098cc2cdee92f1307d43b99c7b\": container with ID starting with 792144b425f1ef9260719c14c8d91dd5b4ef23098cc2cdee92f1307d43b99c7b not found: ID does not exist"
Jan 29 07:51:14 crc kubenswrapper[4826]: I0129 07:51:14.061917 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bd8sn"]
Jan 29 07:51:14 crc kubenswrapper[4826]: I0129 07:51:14.062168 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bd8sn" podUID="66632125-cfa7-49d5-8bf4-1ed4d57e5f30" containerName="registry-server" containerID="cri-o://31255dc3ba7a43c43163a949532ce5c186981a36a7f2c3c7675b6d54a07078c3" gracePeriod=2
Jan 29 07:51:14 crc kubenswrapper[4826]: I0129 07:51:14.818181 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2c1a7ca-39cf-4d13-a70a-3bb998e0f274" path="/var/lib/kubelet/pods/a2c1a7ca-39cf-4d13-a70a-3bb998e0f274/volumes"
Jan 29 07:51:14 crc kubenswrapper[4826]: I0129 07:51:14.978254 4826 generic.go:334] "Generic (PLEG): container finished" podID="66632125-cfa7-49d5-8bf4-1ed4d57e5f30" containerID="31255dc3ba7a43c43163a949532ce5c186981a36a7f2c3c7675b6d54a07078c3" exitCode=0
Jan 29 07:51:14 crc kubenswrapper[4826]: I0129 07:51:14.978333 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bd8sn" event={"ID":"66632125-cfa7-49d5-8bf4-1ed4d57e5f30","Type":"ContainerDied","Data":"31255dc3ba7a43c43163a949532ce5c186981a36a7f2c3c7675b6d54a07078c3"}
Jan 29 07:51:15 crc kubenswrapper[4826]: I0129 07:51:15.275275 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bd8sn"
Jan 29 07:51:15 crc kubenswrapper[4826]: I0129 07:51:15.408240 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66632125-cfa7-49d5-8bf4-1ed4d57e5f30-utilities\") pod \"66632125-cfa7-49d5-8bf4-1ed4d57e5f30\" (UID: \"66632125-cfa7-49d5-8bf4-1ed4d57e5f30\") "
Jan 29 07:51:15 crc kubenswrapper[4826]: I0129 07:51:15.410531 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fz68\" (UniqueName: \"kubernetes.io/projected/66632125-cfa7-49d5-8bf4-1ed4d57e5f30-kube-api-access-8fz68\") pod \"66632125-cfa7-49d5-8bf4-1ed4d57e5f30\" (UID: \"66632125-cfa7-49d5-8bf4-1ed4d57e5f30\") "
Jan 29 07:51:15 crc kubenswrapper[4826]: I0129 07:51:15.410673 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66632125-cfa7-49d5-8bf4-1ed4d57e5f30-catalog-content\") pod \"66632125-cfa7-49d5-8bf4-1ed4d57e5f30\" (UID: \"66632125-cfa7-49d5-8bf4-1ed4d57e5f30\") "
Jan 29 07:51:15 crc kubenswrapper[4826]: I0129 07:51:15.421949 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66632125-cfa7-49d5-8bf4-1ed4d57e5f30-utilities" (OuterVolumeSpecName: "utilities") pod "66632125-cfa7-49d5-8bf4-1ed4d57e5f30" (UID: "66632125-cfa7-49d5-8bf4-1ed4d57e5f30"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 07:51:15 crc kubenswrapper[4826]: I0129 07:51:15.436443 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66632125-cfa7-49d5-8bf4-1ed4d57e5f30-kube-api-access-8fz68" (OuterVolumeSpecName: "kube-api-access-8fz68") pod "66632125-cfa7-49d5-8bf4-1ed4d57e5f30" (UID: "66632125-cfa7-49d5-8bf4-1ed4d57e5f30"). InnerVolumeSpecName "kube-api-access-8fz68". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 07:51:15 crc kubenswrapper[4826]: I0129 07:51:15.448230 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66632125-cfa7-49d5-8bf4-1ed4d57e5f30-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "66632125-cfa7-49d5-8bf4-1ed4d57e5f30" (UID: "66632125-cfa7-49d5-8bf4-1ed4d57e5f30"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 07:51:15 crc kubenswrapper[4826]: I0129 07:51:15.513456 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66632125-cfa7-49d5-8bf4-1ed4d57e5f30-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 07:51:15 crc kubenswrapper[4826]: I0129 07:51:15.513512 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fz68\" (UniqueName: \"kubernetes.io/projected/66632125-cfa7-49d5-8bf4-1ed4d57e5f30-kube-api-access-8fz68\") on node \"crc\" DevicePath \"\""
Jan 29 07:51:15 crc kubenswrapper[4826]: I0129 07:51:15.513528 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66632125-cfa7-49d5-8bf4-1ed4d57e5f30-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 07:51:15 crc kubenswrapper[4826]: I0129 07:51:15.992480 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bd8sn" event={"ID":"66632125-cfa7-49d5-8bf4-1ed4d57e5f30","Type":"ContainerDied","Data":"beef7f0547969d6b9357646dcaa040b705ac482cb98338e610bafc179733bd04"}
Jan 29 07:51:15 crc kubenswrapper[4826]: I0129 07:51:15.992552 4826 scope.go:117] "RemoveContainer" containerID="31255dc3ba7a43c43163a949532ce5c186981a36a7f2c3c7675b6d54a07078c3"
Jan 29 07:51:15 crc kubenswrapper[4826]: I0129 07:51:15.992646 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bd8sn"
Jan 29 07:51:16 crc kubenswrapper[4826]: I0129 07:51:16.027265 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bd8sn"]
Jan 29 07:51:16 crc kubenswrapper[4826]: I0129 07:51:16.031334 4826 scope.go:117] "RemoveContainer" containerID="ab3b42803747f07c276dd76bc1a660ed15804290fbc82796f73f027a2a1c2a7f"
Jan 29 07:51:16 crc kubenswrapper[4826]: I0129 07:51:16.031995 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bd8sn"]
Jan 29 07:51:16 crc kubenswrapper[4826]: I0129 07:51:16.065329 4826 scope.go:117] "RemoveContainer" containerID="6a45c8537367ca7b2e25c75821f811aca9b16108c613c3e2f06b75f308bc5d28"
Jan 29 07:51:16 crc kubenswrapper[4826]: I0129 07:51:16.823644 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66632125-cfa7-49d5-8bf4-1ed4d57e5f30" path="/var/lib/kubelet/pods/66632125-cfa7-49d5-8bf4-1ed4d57e5f30/volumes"
Jan 29 07:53:35 crc kubenswrapper[4826]: I0129 07:53:35.656666 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 07:53:35 crc kubenswrapper[4826]: I0129 07:53:35.659100 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 07:54:05 crc kubenswrapper[4826]: I0129 07:54:05.657133 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 07:54:05 crc kubenswrapper[4826]: I0129 07:54:05.658151 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 07:54:35 crc kubenswrapper[4826]: I0129 07:54:35.657153 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 07:54:35 crc kubenswrapper[4826]: I0129 07:54:35.657969 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 07:54:35 crc kubenswrapper[4826]: I0129 07:54:35.658044 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-llzmh"
Jan 29 07:54:35 crc kubenswrapper[4826]: I0129 07:54:35.659128 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f80afad52e1ff4b5e6ce1a0e6aeec4ffdbea690f31e442a801b15dad967bf2ed"} pod="openshift-machine-config-operator/machine-config-daemon-llzmh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 29 07:54:35 crc kubenswrapper[4826]: I0129 07:54:35.659254 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" containerID="cri-o://f80afad52e1ff4b5e6ce1a0e6aeec4ffdbea690f31e442a801b15dad967bf2ed" gracePeriod=600
Jan 29 07:54:35 crc kubenswrapper[4826]: I0129 07:54:35.951056 4826 generic.go:334] "Generic (PLEG): container finished" podID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerID="f80afad52e1ff4b5e6ce1a0e6aeec4ffdbea690f31e442a801b15dad967bf2ed" exitCode=0
Jan 29 07:54:35 crc kubenswrapper[4826]: I0129 07:54:35.951210 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerDied","Data":"f80afad52e1ff4b5e6ce1a0e6aeec4ffdbea690f31e442a801b15dad967bf2ed"}
Jan 29 07:54:35 crc kubenswrapper[4826]: I0129 07:54:35.951965 4826 scope.go:117] "RemoveContainer" containerID="f18573a32e2ff34eb8f189624513c77e9ac0986e9e09ebc8ad008507d09dbf71"
Jan 29 07:54:36 crc kubenswrapper[4826]: I0129 07:54:36.963025 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerStarted","Data":"91d43ff4726de7f88ea18b0c649f8fb814edd6a2376ec59b98b6b3d03af51e74"}
Jan 29 07:56:35 crc kubenswrapper[4826]: I0129 07:56:35.656358 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 07:56:35 crc kubenswrapper[4826]: I0129 07:56:35.657234 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 07:57:05 crc kubenswrapper[4826]: I0129 07:57:05.656380 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 07:57:05 crc kubenswrapper[4826]: I0129 07:57:05.657002 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 07:57:35 crc kubenswrapper[4826]: I0129 07:57:35.657323 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 07:57:35 crc kubenswrapper[4826]: I0129 07:57:35.658315 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 07:57:35 crc kubenswrapper[4826]: I0129 07:57:35.658386 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-llzmh"
Jan 29 07:57:35 crc kubenswrapper[4826]: I0129 07:57:35.659199 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"91d43ff4726de7f88ea18b0c649f8fb814edd6a2376ec59b98b6b3d03af51e74"} pod="openshift-machine-config-operator/machine-config-daemon-llzmh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 29 07:57:35 crc kubenswrapper[4826]: I0129 07:57:35.659485 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" containerID="cri-o://91d43ff4726de7f88ea18b0c649f8fb814edd6a2376ec59b98b6b3d03af51e74" gracePeriod=600
Jan 29 07:57:35 crc kubenswrapper[4826]: E0129 07:57:35.811620 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 07:57:36 crc kubenswrapper[4826]: I0129 07:57:36.628936 4826 generic.go:334] "Generic (PLEG): container finished" podID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerID="91d43ff4726de7f88ea18b0c649f8fb814edd6a2376ec59b98b6b3d03af51e74" exitCode=0
Jan 29 07:57:36 crc kubenswrapper[4826]: I0129 07:57:36.628995 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerDied","Data":"91d43ff4726de7f88ea18b0c649f8fb814edd6a2376ec59b98b6b3d03af51e74"}
Jan 29 07:57:36 crc kubenswrapper[4826]: I0129 07:57:36.629080 4826 scope.go:117] "RemoveContainer" containerID="f80afad52e1ff4b5e6ce1a0e6aeec4ffdbea690f31e442a801b15dad967bf2ed"
Jan 29 07:57:36 crc kubenswrapper[4826]: I0129 07:57:36.629879 4826 scope.go:117] "RemoveContainer" containerID="91d43ff4726de7f88ea18b0c649f8fb814edd6a2376ec59b98b6b3d03af51e74"
Jan 29 07:57:36 crc kubenswrapper[4826]: E0129 07:57:36.630357 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 07:57:49 crc kubenswrapper[4826]: I0129 07:57:49.809450 4826 scope.go:117] "RemoveContainer" containerID="91d43ff4726de7f88ea18b0c649f8fb814edd6a2376ec59b98b6b3d03af51e74"
Jan 29 07:57:49 crc kubenswrapper[4826]: E0129 07:57:49.810936 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 07:58:00 crc kubenswrapper[4826]: I0129 07:58:00.809366 4826 scope.go:117] "RemoveContainer" containerID="91d43ff4726de7f88ea18b0c649f8fb814edd6a2376ec59b98b6b3d03af51e74"
Jan 29 07:58:00 crc kubenswrapper[4826]: E0129 07:58:00.810040 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 07:58:14 crc kubenswrapper[4826]: I0129 07:58:14.808889 4826 scope.go:117] "RemoveContainer" containerID="91d43ff4726de7f88ea18b0c649f8fb814edd6a2376ec59b98b6b3d03af51e74"
Jan 29 07:58:14 crc kubenswrapper[4826]: E0129 07:58:14.809671 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 07:58:25 crc kubenswrapper[4826]: I0129 07:58:25.809787 4826 scope.go:117] "RemoveContainer" containerID="91d43ff4726de7f88ea18b0c649f8fb814edd6a2376ec59b98b6b3d03af51e74"
Jan 29 07:58:25 crc kubenswrapper[4826]: E0129 07:58:25.811185 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 07:58:38 crc kubenswrapper[4826]: I0129 07:58:38.809724 4826 scope.go:117] "RemoveContainer" containerID="91d43ff4726de7f88ea18b0c649f8fb814edd6a2376ec59b98b6b3d03af51e74"
Jan 29 07:58:38 crc kubenswrapper[4826]: E0129 07:58:38.810767 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 07:58:52 crc kubenswrapper[4826]: I0129 07:58:52.809560 4826 scope.go:117] "RemoveContainer" containerID="91d43ff4726de7f88ea18b0c649f8fb814edd6a2376ec59b98b6b3d03af51e74"
Jan 29 07:58:52 crc kubenswrapper[4826]: E0129 07:58:52.810551 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 07:59:07 crc kubenswrapper[4826]: I0129 07:59:07.809395 4826 scope.go:117] "RemoveContainer" containerID="91d43ff4726de7f88ea18b0c649f8fb814edd6a2376ec59b98b6b3d03af51e74"
Jan 29 07:59:07 crc kubenswrapper[4826]: E0129 07:59:07.810658 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 07:59:22 crc kubenswrapper[4826]: I0129 07:59:22.809475 4826 scope.go:117] "RemoveContainer" containerID="91d43ff4726de7f88ea18b0c649f8fb814edd6a2376ec59b98b6b3d03af51e74"
Jan 29 07:59:22 crc kubenswrapper[4826]: E0129 07:59:22.810675 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 07:59:24 crc kubenswrapper[4826]: I0129 07:59:24.997081 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-z4wvn"]
Jan 29 07:59:25 crc kubenswrapper[4826]: I0129 07:59:25.008001 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-z4wvn"]
Jan 29 07:59:25 crc kubenswrapper[4826]: I0129 07:59:25.135646 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-xxlzx"]
Jan 29 07:59:25 crc kubenswrapper[4826]: E0129 07:59:25.135993 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2699fd23-307a-48bd-8f37-f96472c832c5" containerName="extract-utilities"
Jan 29 07:59:25 crc kubenswrapper[4826]: I0129 07:59:25.136020 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="2699fd23-307a-48bd-8f37-f96472c832c5" containerName="extract-utilities"
Jan 29 07:59:25 crc kubenswrapper[4826]: E0129 07:59:25.136034 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd26d70a-0057-4168-8a7c-12fb272ac5e4" containerName="extract-content"
Jan 29 07:59:25 crc kubenswrapper[4826]: I0129 07:59:25.136042 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd26d70a-0057-4168-8a7c-12fb272ac5e4" containerName="extract-content"
Jan 29 07:59:25 crc kubenswrapper[4826]: E0129 07:59:25.136060 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66632125-cfa7-49d5-8bf4-1ed4d57e5f30" containerName="extract-content"
Jan 29 07:59:25 crc kubenswrapper[4826]: I0129 07:59:25.136068 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="66632125-cfa7-49d5-8bf4-1ed4d57e5f30" containerName="extract-content"
Jan 29 07:59:25 crc kubenswrapper[4826]: E0129 07:59:25.136081 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66632125-cfa7-49d5-8bf4-1ed4d57e5f30" containerName="extract-utilities"
Jan 29 07:59:25 crc kubenswrapper[4826]: I0129 07:59:25.136088 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="66632125-cfa7-49d5-8bf4-1ed4d57e5f30" containerName="extract-utilities"
Jan 29 07:59:25 crc kubenswrapper[4826]: E0129 07:59:25.136100 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66632125-cfa7-49d5-8bf4-1ed4d57e5f30" containerName="registry-server"
Jan 29 07:59:25 crc kubenswrapper[4826]: I0129 07:59:25.136107 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="66632125-cfa7-49d5-8bf4-1ed4d57e5f30" containerName="registry-server"
Jan 29 07:59:25 crc kubenswrapper[4826]: E0129 07:59:25.136119 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2c1a7ca-39cf-4d13-a70a-3bb998e0f274" containerName="registry-server"
Jan 29 07:59:25 crc kubenswrapper[4826]: I0129 07:59:25.136128 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2c1a7ca-39cf-4d13-a70a-3bb998e0f274" containerName="registry-server"
Jan 29 07:59:25 crc kubenswrapper[4826]: E0129 07:59:25.136145 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2c1a7ca-39cf-4d13-a70a-3bb998e0f274" containerName="extract-utilities"
Jan 29 07:59:25 crc kubenswrapper[4826]: I0129 07:59:25.136152 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2c1a7ca-39cf-4d13-a70a-3bb998e0f274" containerName="extract-utilities"
Jan 29 07:59:25 crc kubenswrapper[4826]: E0129 07:59:25.136164 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd26d70a-0057-4168-8a7c-12fb272ac5e4" containerName="extract-utilities"
Jan 29 07:59:25 crc kubenswrapper[4826]: I0129 07:59:25.136174 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd26d70a-0057-4168-8a7c-12fb272ac5e4" containerName="extract-utilities"
Jan 29 07:59:25 crc kubenswrapper[4826]: E0129 07:59:25.136188 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2699fd23-307a-48bd-8f37-f96472c832c5" containerName="registry-server"
Jan 29 07:59:25 crc kubenswrapper[4826]: I0129 07:59:25.136196 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="2699fd23-307a-48bd-8f37-f96472c832c5" containerName="registry-server"
Jan 29 07:59:25 crc kubenswrapper[4826]: E0129 07:59:25.136205 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2c1a7ca-39cf-4d13-a70a-3bb998e0f274" containerName="extract-content"
Jan 29 07:59:25 crc kubenswrapper[4826]: I0129 07:59:25.136212 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2c1a7ca-39cf-4d13-a70a-3bb998e0f274" containerName="extract-content"
Jan 29 07:59:25 crc kubenswrapper[4826]: E0129 07:59:25.136230 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd26d70a-0057-4168-8a7c-12fb272ac5e4" containerName="registry-server"
Jan 29 07:59:25 crc kubenswrapper[4826]: I0129 07:59:25.136237 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd26d70a-0057-4168-8a7c-12fb272ac5e4" containerName="registry-server"
Jan 29 07:59:25 crc kubenswrapper[4826]: E0129 07:59:25.136252 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2699fd23-307a-48bd-8f37-f96472c832c5" containerName="extract-content"
Jan 29 07:59:25 crc kubenswrapper[4826]: I0129 07:59:25.136260 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="2699fd23-307a-48bd-8f37-f96472c832c5" containerName="extract-content"
Jan 29 07:59:25 crc kubenswrapper[4826]: I0129 07:59:25.137392 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="66632125-cfa7-49d5-8bf4-1ed4d57e5f30" containerName="registry-server"
Jan 29 07:59:25 crc kubenswrapper[4826]: I0129 07:59:25.137426 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2c1a7ca-39cf-4d13-a70a-3bb998e0f274" containerName="registry-server"
Jan 29 07:59:25 crc kubenswrapper[4826]: I0129 07:59:25.137448 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd26d70a-0057-4168-8a7c-12fb272ac5e4" containerName="registry-server"
Jan 29 07:59:25 crc kubenswrapper[4826]: I0129 07:59:25.137471 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="2699fd23-307a-48bd-8f37-f96472c832c5" containerName="registry-server"
Jan 29 07:59:25 crc kubenswrapper[4826]: I0129 07:59:25.138165 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-xxlzx"
Jan 29 07:59:25 crc kubenswrapper[4826]: I0129 07:59:25.141003 4826 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-9wdl9"
Jan 29 07:59:25 crc kubenswrapper[4826]: I0129 07:59:25.143469 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt"
Jan 29 07:59:25 crc kubenswrapper[4826]: I0129 07:59:25.143591 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage"
Jan 29 07:59:25 crc kubenswrapper[4826]: I0129 07:59:25.143813 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt"
Jan 29 07:59:25 crc kubenswrapper[4826]: I0129 07:59:25.152263 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-xxlzx"]
Jan 29 07:59:25 crc kubenswrapper[4826]: I0129 07:59:25.172369 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/ca4bae39-6932-4513-a21c-553b6624a260-node-mnt\") pod \"crc-storage-crc-xxlzx\" (UID: \"ca4bae39-6932-4513-a21c-553b6624a260\") " pod="crc-storage/crc-storage-crc-xxlzx"
Jan 29 07:59:25 crc kubenswrapper[4826]: I0129 07:59:25.172495 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: 
\"kubernetes.io/configmap/ca4bae39-6932-4513-a21c-553b6624a260-crc-storage\") pod \"crc-storage-crc-xxlzx\" (UID: \"ca4bae39-6932-4513-a21c-553b6624a260\") " pod="crc-storage/crc-storage-crc-xxlzx" Jan 29 07:59:25 crc kubenswrapper[4826]: I0129 07:59:25.172666 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tv8x\" (UniqueName: \"kubernetes.io/projected/ca4bae39-6932-4513-a21c-553b6624a260-kube-api-access-5tv8x\") pod \"crc-storage-crc-xxlzx\" (UID: \"ca4bae39-6932-4513-a21c-553b6624a260\") " pod="crc-storage/crc-storage-crc-xxlzx" Jan 29 07:59:25 crc kubenswrapper[4826]: I0129 07:59:25.273788 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tv8x\" (UniqueName: \"kubernetes.io/projected/ca4bae39-6932-4513-a21c-553b6624a260-kube-api-access-5tv8x\") pod \"crc-storage-crc-xxlzx\" (UID: \"ca4bae39-6932-4513-a21c-553b6624a260\") " pod="crc-storage/crc-storage-crc-xxlzx" Jan 29 07:59:25 crc kubenswrapper[4826]: I0129 07:59:25.273900 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/ca4bae39-6932-4513-a21c-553b6624a260-node-mnt\") pod \"crc-storage-crc-xxlzx\" (UID: \"ca4bae39-6932-4513-a21c-553b6624a260\") " pod="crc-storage/crc-storage-crc-xxlzx" Jan 29 07:59:25 crc kubenswrapper[4826]: I0129 07:59:25.273994 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/ca4bae39-6932-4513-a21c-553b6624a260-crc-storage\") pod \"crc-storage-crc-xxlzx\" (UID: \"ca4bae39-6932-4513-a21c-553b6624a260\") " pod="crc-storage/crc-storage-crc-xxlzx" Jan 29 07:59:25 crc kubenswrapper[4826]: I0129 07:59:25.274469 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/ca4bae39-6932-4513-a21c-553b6624a260-node-mnt\") pod 
\"crc-storage-crc-xxlzx\" (UID: \"ca4bae39-6932-4513-a21c-553b6624a260\") " pod="crc-storage/crc-storage-crc-xxlzx" Jan 29 07:59:25 crc kubenswrapper[4826]: I0129 07:59:25.275040 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/ca4bae39-6932-4513-a21c-553b6624a260-crc-storage\") pod \"crc-storage-crc-xxlzx\" (UID: \"ca4bae39-6932-4513-a21c-553b6624a260\") " pod="crc-storage/crc-storage-crc-xxlzx" Jan 29 07:59:25 crc kubenswrapper[4826]: I0129 07:59:25.297056 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tv8x\" (UniqueName: \"kubernetes.io/projected/ca4bae39-6932-4513-a21c-553b6624a260-kube-api-access-5tv8x\") pod \"crc-storage-crc-xxlzx\" (UID: \"ca4bae39-6932-4513-a21c-553b6624a260\") " pod="crc-storage/crc-storage-crc-xxlzx" Jan 29 07:59:25 crc kubenswrapper[4826]: I0129 07:59:25.464032 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-xxlzx" Jan 29 07:59:25 crc kubenswrapper[4826]: I0129 07:59:25.972872 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-xxlzx"] Jan 29 07:59:25 crc kubenswrapper[4826]: I0129 07:59:25.987214 4826 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 07:59:26 crc kubenswrapper[4826]: I0129 07:59:26.588223 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-xxlzx" event={"ID":"ca4bae39-6932-4513-a21c-553b6624a260","Type":"ContainerStarted","Data":"e3607892a1f9bd5a963b5dddad052970ef340ad897f1c11bccc45152df811f57"} Jan 29 07:59:26 crc kubenswrapper[4826]: I0129 07:59:26.815858 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d160df0-c2ef-4694-a83f-bc8fef8a272f" path="/var/lib/kubelet/pods/1d160df0-c2ef-4694-a83f-bc8fef8a272f/volumes" Jan 29 07:59:27 crc kubenswrapper[4826]: I0129 07:59:27.595658 4826 
generic.go:334] "Generic (PLEG): container finished" podID="ca4bae39-6932-4513-a21c-553b6624a260" containerID="f07f00c73960c8df45c1ade2a0c7eed70c15e5d7f67b649fa70a8788406a466c" exitCode=0 Jan 29 07:59:27 crc kubenswrapper[4826]: I0129 07:59:27.595853 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-xxlzx" event={"ID":"ca4bae39-6932-4513-a21c-553b6624a260","Type":"ContainerDied","Data":"f07f00c73960c8df45c1ade2a0c7eed70c15e5d7f67b649fa70a8788406a466c"} Jan 29 07:59:28 crc kubenswrapper[4826]: I0129 07:59:28.986282 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-xxlzx" Jan 29 07:59:29 crc kubenswrapper[4826]: I0129 07:59:29.037649 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/ca4bae39-6932-4513-a21c-553b6624a260-crc-storage\") pod \"ca4bae39-6932-4513-a21c-553b6624a260\" (UID: \"ca4bae39-6932-4513-a21c-553b6624a260\") " Jan 29 07:59:29 crc kubenswrapper[4826]: I0129 07:59:29.037752 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tv8x\" (UniqueName: \"kubernetes.io/projected/ca4bae39-6932-4513-a21c-553b6624a260-kube-api-access-5tv8x\") pod \"ca4bae39-6932-4513-a21c-553b6624a260\" (UID: \"ca4bae39-6932-4513-a21c-553b6624a260\") " Jan 29 07:59:29 crc kubenswrapper[4826]: I0129 07:59:29.037918 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/ca4bae39-6932-4513-a21c-553b6624a260-node-mnt\") pod \"ca4bae39-6932-4513-a21c-553b6624a260\" (UID: \"ca4bae39-6932-4513-a21c-553b6624a260\") " Jan 29 07:59:29 crc kubenswrapper[4826]: I0129 07:59:29.038050 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca4bae39-6932-4513-a21c-553b6624a260-node-mnt" (OuterVolumeSpecName: "node-mnt") pod 
"ca4bae39-6932-4513-a21c-553b6624a260" (UID: "ca4bae39-6932-4513-a21c-553b6624a260"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 07:59:29 crc kubenswrapper[4826]: I0129 07:59:29.038618 4826 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/ca4bae39-6932-4513-a21c-553b6624a260-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 29 07:59:29 crc kubenswrapper[4826]: I0129 07:59:29.043192 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca4bae39-6932-4513-a21c-553b6624a260-kube-api-access-5tv8x" (OuterVolumeSpecName: "kube-api-access-5tv8x") pod "ca4bae39-6932-4513-a21c-553b6624a260" (UID: "ca4bae39-6932-4513-a21c-553b6624a260"). InnerVolumeSpecName "kube-api-access-5tv8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:59:29 crc kubenswrapper[4826]: I0129 07:59:29.053454 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca4bae39-6932-4513-a21c-553b6624a260-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "ca4bae39-6932-4513-a21c-553b6624a260" (UID: "ca4bae39-6932-4513-a21c-553b6624a260"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:59:29 crc kubenswrapper[4826]: I0129 07:59:29.139144 4826 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/ca4bae39-6932-4513-a21c-553b6624a260-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 29 07:59:29 crc kubenswrapper[4826]: I0129 07:59:29.139185 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tv8x\" (UniqueName: \"kubernetes.io/projected/ca4bae39-6932-4513-a21c-553b6624a260-kube-api-access-5tv8x\") on node \"crc\" DevicePath \"\"" Jan 29 07:59:29 crc kubenswrapper[4826]: I0129 07:59:29.616858 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-xxlzx" event={"ID":"ca4bae39-6932-4513-a21c-553b6624a260","Type":"ContainerDied","Data":"e3607892a1f9bd5a963b5dddad052970ef340ad897f1c11bccc45152df811f57"} Jan 29 07:59:29 crc kubenswrapper[4826]: I0129 07:59:29.616920 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3607892a1f9bd5a963b5dddad052970ef340ad897f1c11bccc45152df811f57" Jan 29 07:59:29 crc kubenswrapper[4826]: I0129 07:59:29.616929 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-xxlzx" Jan 29 07:59:31 crc kubenswrapper[4826]: I0129 07:59:31.309206 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-xxlzx"] Jan 29 07:59:31 crc kubenswrapper[4826]: I0129 07:59:31.318485 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-xxlzx"] Jan 29 07:59:31 crc kubenswrapper[4826]: I0129 07:59:31.433788 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-gblnm"] Jan 29 07:59:31 crc kubenswrapper[4826]: E0129 07:59:31.434215 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca4bae39-6932-4513-a21c-553b6624a260" containerName="storage" Jan 29 07:59:31 crc kubenswrapper[4826]: I0129 07:59:31.434247 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca4bae39-6932-4513-a21c-553b6624a260" containerName="storage" Jan 29 07:59:31 crc kubenswrapper[4826]: I0129 07:59:31.434567 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca4bae39-6932-4513-a21c-553b6624a260" containerName="storage" Jan 29 07:59:31 crc kubenswrapper[4826]: I0129 07:59:31.435286 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-gblnm" Jan 29 07:59:31 crc kubenswrapper[4826]: I0129 07:59:31.438675 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 29 07:59:31 crc kubenswrapper[4826]: I0129 07:59:31.438693 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 29 07:59:31 crc kubenswrapper[4826]: I0129 07:59:31.439462 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 29 07:59:31 crc kubenswrapper[4826]: I0129 07:59:31.439882 4826 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-9wdl9" Jan 29 07:59:31 crc kubenswrapper[4826]: I0129 07:59:31.448068 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-gblnm"] Jan 29 07:59:31 crc kubenswrapper[4826]: I0129 07:59:31.480072 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/41a71c5e-7d16-47a8-a70e-cd62077b0822-crc-storage\") pod \"crc-storage-crc-gblnm\" (UID: \"41a71c5e-7d16-47a8-a70e-cd62077b0822\") " pod="crc-storage/crc-storage-crc-gblnm" Jan 29 07:59:31 crc kubenswrapper[4826]: I0129 07:59:31.480218 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/41a71c5e-7d16-47a8-a70e-cd62077b0822-node-mnt\") pod \"crc-storage-crc-gblnm\" (UID: \"41a71c5e-7d16-47a8-a70e-cd62077b0822\") " pod="crc-storage/crc-storage-crc-gblnm" Jan 29 07:59:31 crc kubenswrapper[4826]: I0129 07:59:31.480340 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lqs4\" (UniqueName: \"kubernetes.io/projected/41a71c5e-7d16-47a8-a70e-cd62077b0822-kube-api-access-2lqs4\") pod \"crc-storage-crc-gblnm\" (UID: 
\"41a71c5e-7d16-47a8-a70e-cd62077b0822\") " pod="crc-storage/crc-storage-crc-gblnm" Jan 29 07:59:31 crc kubenswrapper[4826]: I0129 07:59:31.582264 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/41a71c5e-7d16-47a8-a70e-cd62077b0822-crc-storage\") pod \"crc-storage-crc-gblnm\" (UID: \"41a71c5e-7d16-47a8-a70e-cd62077b0822\") " pod="crc-storage/crc-storage-crc-gblnm" Jan 29 07:59:31 crc kubenswrapper[4826]: I0129 07:59:31.582380 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/41a71c5e-7d16-47a8-a70e-cd62077b0822-node-mnt\") pod \"crc-storage-crc-gblnm\" (UID: \"41a71c5e-7d16-47a8-a70e-cd62077b0822\") " pod="crc-storage/crc-storage-crc-gblnm" Jan 29 07:59:31 crc kubenswrapper[4826]: I0129 07:59:31.582441 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lqs4\" (UniqueName: \"kubernetes.io/projected/41a71c5e-7d16-47a8-a70e-cd62077b0822-kube-api-access-2lqs4\") pod \"crc-storage-crc-gblnm\" (UID: \"41a71c5e-7d16-47a8-a70e-cd62077b0822\") " pod="crc-storage/crc-storage-crc-gblnm" Jan 29 07:59:31 crc kubenswrapper[4826]: I0129 07:59:31.582842 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/41a71c5e-7d16-47a8-a70e-cd62077b0822-node-mnt\") pod \"crc-storage-crc-gblnm\" (UID: \"41a71c5e-7d16-47a8-a70e-cd62077b0822\") " pod="crc-storage/crc-storage-crc-gblnm" Jan 29 07:59:31 crc kubenswrapper[4826]: I0129 07:59:31.584828 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/41a71c5e-7d16-47a8-a70e-cd62077b0822-crc-storage\") pod \"crc-storage-crc-gblnm\" (UID: \"41a71c5e-7d16-47a8-a70e-cd62077b0822\") " pod="crc-storage/crc-storage-crc-gblnm" Jan 29 07:59:31 crc kubenswrapper[4826]: I0129 07:59:31.604683 4826 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lqs4\" (UniqueName: \"kubernetes.io/projected/41a71c5e-7d16-47a8-a70e-cd62077b0822-kube-api-access-2lqs4\") pod \"crc-storage-crc-gblnm\" (UID: \"41a71c5e-7d16-47a8-a70e-cd62077b0822\") " pod="crc-storage/crc-storage-crc-gblnm" Jan 29 07:59:31 crc kubenswrapper[4826]: I0129 07:59:31.768251 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-gblnm" Jan 29 07:59:32 crc kubenswrapper[4826]: I0129 07:59:32.295048 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-gblnm"] Jan 29 07:59:32 crc kubenswrapper[4826]: W0129 07:59:32.298188 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41a71c5e_7d16_47a8_a70e_cd62077b0822.slice/crio-bc91610257ee782113a9461bc31f4d44e59f5275972288c68b1632179048603b WatchSource:0}: Error finding container bc91610257ee782113a9461bc31f4d44e59f5275972288c68b1632179048603b: Status 404 returned error can't find the container with id bc91610257ee782113a9461bc31f4d44e59f5275972288c68b1632179048603b Jan 29 07:59:32 crc kubenswrapper[4826]: I0129 07:59:32.639969 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-gblnm" event={"ID":"41a71c5e-7d16-47a8-a70e-cd62077b0822","Type":"ContainerStarted","Data":"bc91610257ee782113a9461bc31f4d44e59f5275972288c68b1632179048603b"} Jan 29 07:59:32 crc kubenswrapper[4826]: I0129 07:59:32.826280 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca4bae39-6932-4513-a21c-553b6624a260" path="/var/lib/kubelet/pods/ca4bae39-6932-4513-a21c-553b6624a260/volumes" Jan 29 07:59:33 crc kubenswrapper[4826]: I0129 07:59:33.651826 4826 generic.go:334] "Generic (PLEG): container finished" podID="41a71c5e-7d16-47a8-a70e-cd62077b0822" containerID="5edc72682e16f3d0fe7304ce9cd6dbca000e17f01221689421d0bcec03708602" 
exitCode=0 Jan 29 07:59:33 crc kubenswrapper[4826]: I0129 07:59:33.651907 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-gblnm" event={"ID":"41a71c5e-7d16-47a8-a70e-cd62077b0822","Type":"ContainerDied","Data":"5edc72682e16f3d0fe7304ce9cd6dbca000e17f01221689421d0bcec03708602"} Jan 29 07:59:33 crc kubenswrapper[4826]: I0129 07:59:33.808745 4826 scope.go:117] "RemoveContainer" containerID="91d43ff4726de7f88ea18b0c649f8fb814edd6a2376ec59b98b6b3d03af51e74" Jan 29 07:59:33 crc kubenswrapper[4826]: E0129 07:59:33.809142 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 07:59:35 crc kubenswrapper[4826]: I0129 07:59:35.094976 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-gblnm" Jan 29 07:59:35 crc kubenswrapper[4826]: I0129 07:59:35.149434 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lqs4\" (UniqueName: \"kubernetes.io/projected/41a71c5e-7d16-47a8-a70e-cd62077b0822-kube-api-access-2lqs4\") pod \"41a71c5e-7d16-47a8-a70e-cd62077b0822\" (UID: \"41a71c5e-7d16-47a8-a70e-cd62077b0822\") " Jan 29 07:59:35 crc kubenswrapper[4826]: I0129 07:59:35.149510 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/41a71c5e-7d16-47a8-a70e-cd62077b0822-crc-storage\") pod \"41a71c5e-7d16-47a8-a70e-cd62077b0822\" (UID: \"41a71c5e-7d16-47a8-a70e-cd62077b0822\") " Jan 29 07:59:35 crc kubenswrapper[4826]: I0129 07:59:35.149568 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/41a71c5e-7d16-47a8-a70e-cd62077b0822-node-mnt\") pod \"41a71c5e-7d16-47a8-a70e-cd62077b0822\" (UID: \"41a71c5e-7d16-47a8-a70e-cd62077b0822\") " Jan 29 07:59:35 crc kubenswrapper[4826]: I0129 07:59:35.149958 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41a71c5e-7d16-47a8-a70e-cd62077b0822-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "41a71c5e-7d16-47a8-a70e-cd62077b0822" (UID: "41a71c5e-7d16-47a8-a70e-cd62077b0822"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 07:59:35 crc kubenswrapper[4826]: I0129 07:59:35.156654 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41a71c5e-7d16-47a8-a70e-cd62077b0822-kube-api-access-2lqs4" (OuterVolumeSpecName: "kube-api-access-2lqs4") pod "41a71c5e-7d16-47a8-a70e-cd62077b0822" (UID: "41a71c5e-7d16-47a8-a70e-cd62077b0822"). InnerVolumeSpecName "kube-api-access-2lqs4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 07:59:35 crc kubenswrapper[4826]: I0129 07:59:35.178889 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41a71c5e-7d16-47a8-a70e-cd62077b0822-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "41a71c5e-7d16-47a8-a70e-cd62077b0822" (UID: "41a71c5e-7d16-47a8-a70e-cd62077b0822"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 07:59:35 crc kubenswrapper[4826]: I0129 07:59:35.250931 4826 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/41a71c5e-7d16-47a8-a70e-cd62077b0822-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 29 07:59:35 crc kubenswrapper[4826]: I0129 07:59:35.250987 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lqs4\" (UniqueName: \"kubernetes.io/projected/41a71c5e-7d16-47a8-a70e-cd62077b0822-kube-api-access-2lqs4\") on node \"crc\" DevicePath \"\"" Jan 29 07:59:35 crc kubenswrapper[4826]: I0129 07:59:35.251006 4826 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/41a71c5e-7d16-47a8-a70e-cd62077b0822-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 29 07:59:35 crc kubenswrapper[4826]: I0129 07:59:35.675727 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-gblnm" event={"ID":"41a71c5e-7d16-47a8-a70e-cd62077b0822","Type":"ContainerDied","Data":"bc91610257ee782113a9461bc31f4d44e59f5275972288c68b1632179048603b"} Jan 29 07:59:35 crc kubenswrapper[4826]: I0129 07:59:35.675819 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc91610257ee782113a9461bc31f4d44e59f5275972288c68b1632179048603b" Jan 29 07:59:35 crc kubenswrapper[4826]: I0129 07:59:35.675838 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-gblnm" Jan 29 07:59:48 crc kubenswrapper[4826]: I0129 07:59:48.809260 4826 scope.go:117] "RemoveContainer" containerID="91d43ff4726de7f88ea18b0c649f8fb814edd6a2376ec59b98b6b3d03af51e74" Jan 29 07:59:48 crc kubenswrapper[4826]: E0129 07:59:48.810143 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:00:00 crc kubenswrapper[4826]: I0129 08:00:00.200998 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494560-rzll5"] Jan 29 08:00:00 crc kubenswrapper[4826]: E0129 08:00:00.202432 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41a71c5e-7d16-47a8-a70e-cd62077b0822" containerName="storage" Jan 29 08:00:00 crc kubenswrapper[4826]: I0129 08:00:00.202466 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="41a71c5e-7d16-47a8-a70e-cd62077b0822" containerName="storage" Jan 29 08:00:00 crc kubenswrapper[4826]: I0129 08:00:00.202841 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="41a71c5e-7d16-47a8-a70e-cd62077b0822" containerName="storage" Jan 29 08:00:00 crc kubenswrapper[4826]: I0129 08:00:00.203884 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494560-rzll5" Jan 29 08:00:00 crc kubenswrapper[4826]: I0129 08:00:00.210178 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 08:00:00 crc kubenswrapper[4826]: I0129 08:00:00.211245 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 08:00:00 crc kubenswrapper[4826]: I0129 08:00:00.216443 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494560-rzll5"] Jan 29 08:00:00 crc kubenswrapper[4826]: I0129 08:00:00.396970 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ca60dfe-57df-4127-830d-692ddd8e3c7b-config-volume\") pod \"collect-profiles-29494560-rzll5\" (UID: \"6ca60dfe-57df-4127-830d-692ddd8e3c7b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494560-rzll5" Jan 29 08:00:00 crc kubenswrapper[4826]: I0129 08:00:00.397183 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6ca60dfe-57df-4127-830d-692ddd8e3c7b-secret-volume\") pod \"collect-profiles-29494560-rzll5\" (UID: \"6ca60dfe-57df-4127-830d-692ddd8e3c7b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494560-rzll5" Jan 29 08:00:00 crc kubenswrapper[4826]: I0129 08:00:00.397482 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdmws\" (UniqueName: \"kubernetes.io/projected/6ca60dfe-57df-4127-830d-692ddd8e3c7b-kube-api-access-xdmws\") pod \"collect-profiles-29494560-rzll5\" (UID: \"6ca60dfe-57df-4127-830d-692ddd8e3c7b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29494560-rzll5" Jan 29 08:00:00 crc kubenswrapper[4826]: I0129 08:00:00.499878 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdmws\" (UniqueName: \"kubernetes.io/projected/6ca60dfe-57df-4127-830d-692ddd8e3c7b-kube-api-access-xdmws\") pod \"collect-profiles-29494560-rzll5\" (UID: \"6ca60dfe-57df-4127-830d-692ddd8e3c7b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494560-rzll5" Jan 29 08:00:00 crc kubenswrapper[4826]: I0129 08:00:00.500561 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ca60dfe-57df-4127-830d-692ddd8e3c7b-config-volume\") pod \"collect-profiles-29494560-rzll5\" (UID: \"6ca60dfe-57df-4127-830d-692ddd8e3c7b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494560-rzll5" Jan 29 08:00:00 crc kubenswrapper[4826]: I0129 08:00:00.500834 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6ca60dfe-57df-4127-830d-692ddd8e3c7b-secret-volume\") pod \"collect-profiles-29494560-rzll5\" (UID: \"6ca60dfe-57df-4127-830d-692ddd8e3c7b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494560-rzll5" Jan 29 08:00:00 crc kubenswrapper[4826]: I0129 08:00:00.502101 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ca60dfe-57df-4127-830d-692ddd8e3c7b-config-volume\") pod \"collect-profiles-29494560-rzll5\" (UID: \"6ca60dfe-57df-4127-830d-692ddd8e3c7b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494560-rzll5" Jan 29 08:00:00 crc kubenswrapper[4826]: I0129 08:00:00.510895 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/6ca60dfe-57df-4127-830d-692ddd8e3c7b-secret-volume\") pod \"collect-profiles-29494560-rzll5\" (UID: \"6ca60dfe-57df-4127-830d-692ddd8e3c7b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494560-rzll5" Jan 29 08:00:00 crc kubenswrapper[4826]: I0129 08:00:00.539792 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdmws\" (UniqueName: \"kubernetes.io/projected/6ca60dfe-57df-4127-830d-692ddd8e3c7b-kube-api-access-xdmws\") pod \"collect-profiles-29494560-rzll5\" (UID: \"6ca60dfe-57df-4127-830d-692ddd8e3c7b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494560-rzll5" Jan 29 08:00:00 crc kubenswrapper[4826]: I0129 08:00:00.545344 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494560-rzll5" Jan 29 08:00:01 crc kubenswrapper[4826]: I0129 08:00:01.055620 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494560-rzll5"] Jan 29 08:00:01 crc kubenswrapper[4826]: I0129 08:00:01.919664 4826 generic.go:334] "Generic (PLEG): container finished" podID="6ca60dfe-57df-4127-830d-692ddd8e3c7b" containerID="cad5124349c793b4ad8d4893fa5b0248fc922e32ac2801f8622a14d757dcce50" exitCode=0 Jan 29 08:00:01 crc kubenswrapper[4826]: I0129 08:00:01.919728 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494560-rzll5" event={"ID":"6ca60dfe-57df-4127-830d-692ddd8e3c7b","Type":"ContainerDied","Data":"cad5124349c793b4ad8d4893fa5b0248fc922e32ac2801f8622a14d757dcce50"} Jan 29 08:00:01 crc kubenswrapper[4826]: I0129 08:00:01.919769 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494560-rzll5" 
event={"ID":"6ca60dfe-57df-4127-830d-692ddd8e3c7b","Type":"ContainerStarted","Data":"5de6d7b374a1978488f7e3d0ca3b7845e2ce243f032981ceb5573c8dabaa6dc0"} Jan 29 08:00:02 crc kubenswrapper[4826]: I0129 08:00:02.657192 4826 scope.go:117] "RemoveContainer" containerID="6c05d6f827d0d2f9a0b07269fc670a11f080cc6df58f7041b420ccc38ecbd9d8" Jan 29 08:00:03 crc kubenswrapper[4826]: I0129 08:00:03.287139 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494560-rzll5" Jan 29 08:00:03 crc kubenswrapper[4826]: I0129 08:00:03.446835 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdmws\" (UniqueName: \"kubernetes.io/projected/6ca60dfe-57df-4127-830d-692ddd8e3c7b-kube-api-access-xdmws\") pod \"6ca60dfe-57df-4127-830d-692ddd8e3c7b\" (UID: \"6ca60dfe-57df-4127-830d-692ddd8e3c7b\") " Jan 29 08:00:03 crc kubenswrapper[4826]: I0129 08:00:03.447456 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ca60dfe-57df-4127-830d-692ddd8e3c7b-config-volume\") pod \"6ca60dfe-57df-4127-830d-692ddd8e3c7b\" (UID: \"6ca60dfe-57df-4127-830d-692ddd8e3c7b\") " Jan 29 08:00:03 crc kubenswrapper[4826]: I0129 08:00:03.448532 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6ca60dfe-57df-4127-830d-692ddd8e3c7b-secret-volume\") pod \"6ca60dfe-57df-4127-830d-692ddd8e3c7b\" (UID: \"6ca60dfe-57df-4127-830d-692ddd8e3c7b\") " Jan 29 08:00:03 crc kubenswrapper[4826]: I0129 08:00:03.448897 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ca60dfe-57df-4127-830d-692ddd8e3c7b-config-volume" (OuterVolumeSpecName: "config-volume") pod "6ca60dfe-57df-4127-830d-692ddd8e3c7b" (UID: "6ca60dfe-57df-4127-830d-692ddd8e3c7b"). 
InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:00:03 crc kubenswrapper[4826]: I0129 08:00:03.449459 4826 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ca60dfe-57df-4127-830d-692ddd8e3c7b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 08:00:03 crc kubenswrapper[4826]: I0129 08:00:03.455234 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ca60dfe-57df-4127-830d-692ddd8e3c7b-kube-api-access-xdmws" (OuterVolumeSpecName: "kube-api-access-xdmws") pod "6ca60dfe-57df-4127-830d-692ddd8e3c7b" (UID: "6ca60dfe-57df-4127-830d-692ddd8e3c7b"). InnerVolumeSpecName "kube-api-access-xdmws". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:00:03 crc kubenswrapper[4826]: I0129 08:00:03.455894 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ca60dfe-57df-4127-830d-692ddd8e3c7b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6ca60dfe-57df-4127-830d-692ddd8e3c7b" (UID: "6ca60dfe-57df-4127-830d-692ddd8e3c7b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:00:03 crc kubenswrapper[4826]: I0129 08:00:03.550699 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdmws\" (UniqueName: \"kubernetes.io/projected/6ca60dfe-57df-4127-830d-692ddd8e3c7b-kube-api-access-xdmws\") on node \"crc\" DevicePath \"\"" Jan 29 08:00:03 crc kubenswrapper[4826]: I0129 08:00:03.550756 4826 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6ca60dfe-57df-4127-830d-692ddd8e3c7b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 08:00:03 crc kubenswrapper[4826]: I0129 08:00:03.809285 4826 scope.go:117] "RemoveContainer" containerID="91d43ff4726de7f88ea18b0c649f8fb814edd6a2376ec59b98b6b3d03af51e74" Jan 29 08:00:03 crc kubenswrapper[4826]: E0129 08:00:03.809832 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:00:03 crc kubenswrapper[4826]: I0129 08:00:03.940448 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494560-rzll5" event={"ID":"6ca60dfe-57df-4127-830d-692ddd8e3c7b","Type":"ContainerDied","Data":"5de6d7b374a1978488f7e3d0ca3b7845e2ce243f032981ceb5573c8dabaa6dc0"} Jan 29 08:00:03 crc kubenswrapper[4826]: I0129 08:00:03.940788 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5de6d7b374a1978488f7e3d0ca3b7845e2ce243f032981ceb5573c8dabaa6dc0" Jan 29 08:00:03 crc kubenswrapper[4826]: I0129 08:00:03.940546 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494560-rzll5" Jan 29 08:00:04 crc kubenswrapper[4826]: I0129 08:00:04.383478 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494515-4v6n4"] Jan 29 08:00:04 crc kubenswrapper[4826]: I0129 08:00:04.388440 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494515-4v6n4"] Jan 29 08:00:04 crc kubenswrapper[4826]: I0129 08:00:04.826016 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a83d0c70-48d2-4019-be7c-0df2e68c51c9" path="/var/lib/kubelet/pods/a83d0c70-48d2-4019-be7c-0df2e68c51c9/volumes" Jan 29 08:00:14 crc kubenswrapper[4826]: I0129 08:00:14.808830 4826 scope.go:117] "RemoveContainer" containerID="91d43ff4726de7f88ea18b0c649f8fb814edd6a2376ec59b98b6b3d03af51e74" Jan 29 08:00:14 crc kubenswrapper[4826]: E0129 08:00:14.809880 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:00:28 crc kubenswrapper[4826]: I0129 08:00:28.808665 4826 scope.go:117] "RemoveContainer" containerID="91d43ff4726de7f88ea18b0c649f8fb814edd6a2376ec59b98b6b3d03af51e74" Jan 29 08:00:28 crc kubenswrapper[4826]: E0129 08:00:28.810148 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:00:40 crc kubenswrapper[4826]: I0129 08:00:40.809267 4826 scope.go:117] "RemoveContainer" containerID="91d43ff4726de7f88ea18b0c649f8fb814edd6a2376ec59b98b6b3d03af51e74" Jan 29 08:00:40 crc kubenswrapper[4826]: E0129 08:00:40.810065 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:00:54 crc kubenswrapper[4826]: I0129 08:00:54.808485 4826 scope.go:117] "RemoveContainer" containerID="91d43ff4726de7f88ea18b0c649f8fb814edd6a2376ec59b98b6b3d03af51e74" Jan 29 08:00:54 crc kubenswrapper[4826]: E0129 08:00:54.809366 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:01:02 crc kubenswrapper[4826]: I0129 08:01:02.725280 4826 scope.go:117] "RemoveContainer" containerID="23e37bf1db5d58773c0607ca48c45358bf4c862698164d3ac4558d86047aa7ac" Jan 29 08:01:09 crc kubenswrapper[4826]: I0129 08:01:09.809484 4826 scope.go:117] "RemoveContainer" containerID="91d43ff4726de7f88ea18b0c649f8fb814edd6a2376ec59b98b6b3d03af51e74" Jan 29 08:01:09 crc kubenswrapper[4826]: E0129 08:01:09.810756 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:01:23 crc kubenswrapper[4826]: I0129 08:01:23.809534 4826 scope.go:117] "RemoveContainer" containerID="91d43ff4726de7f88ea18b0c649f8fb814edd6a2376ec59b98b6b3d03af51e74" Jan 29 08:01:23 crc kubenswrapper[4826]: E0129 08:01:23.810530 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:01:35 crc kubenswrapper[4826]: I0129 08:01:35.809222 4826 scope.go:117] "RemoveContainer" containerID="91d43ff4726de7f88ea18b0c649f8fb814edd6a2376ec59b98b6b3d03af51e74" Jan 29 08:01:35 crc kubenswrapper[4826]: E0129 08:01:35.810665 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:01:41 crc kubenswrapper[4826]: I0129 08:01:41.405011 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vjkcx"] Jan 29 08:01:41 crc kubenswrapper[4826]: E0129 08:01:41.405982 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ca60dfe-57df-4127-830d-692ddd8e3c7b" 
containerName="collect-profiles" Jan 29 08:01:41 crc kubenswrapper[4826]: I0129 08:01:41.405999 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ca60dfe-57df-4127-830d-692ddd8e3c7b" containerName="collect-profiles" Jan 29 08:01:41 crc kubenswrapper[4826]: I0129 08:01:41.406203 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ca60dfe-57df-4127-830d-692ddd8e3c7b" containerName="collect-profiles" Jan 29 08:01:41 crc kubenswrapper[4826]: I0129 08:01:41.407412 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vjkcx" Jan 29 08:01:41 crc kubenswrapper[4826]: I0129 08:01:41.442270 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vjkcx"] Jan 29 08:01:41 crc kubenswrapper[4826]: I0129 08:01:41.473339 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnvt2\" (UniqueName: \"kubernetes.io/projected/3755e5ca-8975-4cd7-8765-2acd160c3307-kube-api-access-vnvt2\") pod \"redhat-marketplace-vjkcx\" (UID: \"3755e5ca-8975-4cd7-8765-2acd160c3307\") " pod="openshift-marketplace/redhat-marketplace-vjkcx" Jan 29 08:01:41 crc kubenswrapper[4826]: I0129 08:01:41.473557 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3755e5ca-8975-4cd7-8765-2acd160c3307-utilities\") pod \"redhat-marketplace-vjkcx\" (UID: \"3755e5ca-8975-4cd7-8765-2acd160c3307\") " pod="openshift-marketplace/redhat-marketplace-vjkcx" Jan 29 08:01:41 crc kubenswrapper[4826]: I0129 08:01:41.473634 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3755e5ca-8975-4cd7-8765-2acd160c3307-catalog-content\") pod \"redhat-marketplace-vjkcx\" (UID: \"3755e5ca-8975-4cd7-8765-2acd160c3307\") " 
pod="openshift-marketplace/redhat-marketplace-vjkcx" Jan 29 08:01:41 crc kubenswrapper[4826]: I0129 08:01:41.574256 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3755e5ca-8975-4cd7-8765-2acd160c3307-utilities\") pod \"redhat-marketplace-vjkcx\" (UID: \"3755e5ca-8975-4cd7-8765-2acd160c3307\") " pod="openshift-marketplace/redhat-marketplace-vjkcx" Jan 29 08:01:41 crc kubenswrapper[4826]: I0129 08:01:41.574330 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3755e5ca-8975-4cd7-8765-2acd160c3307-catalog-content\") pod \"redhat-marketplace-vjkcx\" (UID: \"3755e5ca-8975-4cd7-8765-2acd160c3307\") " pod="openshift-marketplace/redhat-marketplace-vjkcx" Jan 29 08:01:41 crc kubenswrapper[4826]: I0129 08:01:41.574367 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnvt2\" (UniqueName: \"kubernetes.io/projected/3755e5ca-8975-4cd7-8765-2acd160c3307-kube-api-access-vnvt2\") pod \"redhat-marketplace-vjkcx\" (UID: \"3755e5ca-8975-4cd7-8765-2acd160c3307\") " pod="openshift-marketplace/redhat-marketplace-vjkcx" Jan 29 08:01:41 crc kubenswrapper[4826]: I0129 08:01:41.575811 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3755e5ca-8975-4cd7-8765-2acd160c3307-catalog-content\") pod \"redhat-marketplace-vjkcx\" (UID: \"3755e5ca-8975-4cd7-8765-2acd160c3307\") " pod="openshift-marketplace/redhat-marketplace-vjkcx" Jan 29 08:01:41 crc kubenswrapper[4826]: I0129 08:01:41.576204 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3755e5ca-8975-4cd7-8765-2acd160c3307-utilities\") pod \"redhat-marketplace-vjkcx\" (UID: \"3755e5ca-8975-4cd7-8765-2acd160c3307\") " pod="openshift-marketplace/redhat-marketplace-vjkcx" 
Jan 29 08:01:41 crc kubenswrapper[4826]: I0129 08:01:41.783855 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54c698c8ff-xn9gh"] Jan 29 08:01:41 crc kubenswrapper[4826]: I0129 08:01:41.785290 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54c698c8ff-xn9gh" Jan 29 08:01:41 crc kubenswrapper[4826]: I0129 08:01:41.788075 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 29 08:01:41 crc kubenswrapper[4826]: I0129 08:01:41.789103 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 29 08:01:41 crc kubenswrapper[4826]: I0129 08:01:41.789251 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 29 08:01:41 crc kubenswrapper[4826]: I0129 08:01:41.789837 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-979474d99-hxwgz"] Jan 29 08:01:41 crc kubenswrapper[4826]: I0129 08:01:41.794167 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 29 08:01:41 crc kubenswrapper[4826]: I0129 08:01:41.796760 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-fnkhn" Jan 29 08:01:41 crc kubenswrapper[4826]: I0129 08:01:41.798555 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-979474d99-hxwgz" Jan 29 08:01:41 crc kubenswrapper[4826]: I0129 08:01:41.800605 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnvt2\" (UniqueName: \"kubernetes.io/projected/3755e5ca-8975-4cd7-8765-2acd160c3307-kube-api-access-vnvt2\") pod \"redhat-marketplace-vjkcx\" (UID: \"3755e5ca-8975-4cd7-8765-2acd160c3307\") " pod="openshift-marketplace/redhat-marketplace-vjkcx" Jan 29 08:01:41 crc kubenswrapper[4826]: I0129 08:01:41.825544 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-979474d99-hxwgz"] Jan 29 08:01:41 crc kubenswrapper[4826]: I0129 08:01:41.831083 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54c698c8ff-xn9gh"] Jan 29 08:01:41 crc kubenswrapper[4826]: I0129 08:01:41.878995 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35e533f3-05a6-4364-b392-39b96f0fbea9-dns-svc\") pod \"dnsmasq-dns-54c698c8ff-xn9gh\" (UID: \"35e533f3-05a6-4364-b392-39b96f0fbea9\") " pod="openstack/dnsmasq-dns-54c698c8ff-xn9gh" Jan 29 08:01:41 crc kubenswrapper[4826]: I0129 08:01:41.879041 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c2l9\" (UniqueName: \"kubernetes.io/projected/0b451083-b71d-4c92-91ef-89009a9a7389-kube-api-access-8c2l9\") pod \"dnsmasq-dns-979474d99-hxwgz\" (UID: \"0b451083-b71d-4c92-91ef-89009a9a7389\") " pod="openstack/dnsmasq-dns-979474d99-hxwgz" Jan 29 08:01:41 crc kubenswrapper[4826]: I0129 08:01:41.879064 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b451083-b71d-4c92-91ef-89009a9a7389-config\") pod \"dnsmasq-dns-979474d99-hxwgz\" (UID: \"0b451083-b71d-4c92-91ef-89009a9a7389\") " pod="openstack/dnsmasq-dns-979474d99-hxwgz" 
Jan 29 08:01:41 crc kubenswrapper[4826]: I0129 08:01:41.879087 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz9zv\" (UniqueName: \"kubernetes.io/projected/35e533f3-05a6-4364-b392-39b96f0fbea9-kube-api-access-kz9zv\") pod \"dnsmasq-dns-54c698c8ff-xn9gh\" (UID: \"35e533f3-05a6-4364-b392-39b96f0fbea9\") " pod="openstack/dnsmasq-dns-54c698c8ff-xn9gh" Jan 29 08:01:41 crc kubenswrapper[4826]: I0129 08:01:41.879150 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35e533f3-05a6-4364-b392-39b96f0fbea9-config\") pod \"dnsmasq-dns-54c698c8ff-xn9gh\" (UID: \"35e533f3-05a6-4364-b392-39b96f0fbea9\") " pod="openstack/dnsmasq-dns-54c698c8ff-xn9gh" Jan 29 08:01:41 crc kubenswrapper[4826]: I0129 08:01:41.998630 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c2l9\" (UniqueName: \"kubernetes.io/projected/0b451083-b71d-4c92-91ef-89009a9a7389-kube-api-access-8c2l9\") pod \"dnsmasq-dns-979474d99-hxwgz\" (UID: \"0b451083-b71d-4c92-91ef-89009a9a7389\") " pod="openstack/dnsmasq-dns-979474d99-hxwgz" Jan 29 08:01:41 crc kubenswrapper[4826]: I0129 08:01:41.998733 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b451083-b71d-4c92-91ef-89009a9a7389-config\") pod \"dnsmasq-dns-979474d99-hxwgz\" (UID: \"0b451083-b71d-4c92-91ef-89009a9a7389\") " pod="openstack/dnsmasq-dns-979474d99-hxwgz" Jan 29 08:01:41 crc kubenswrapper[4826]: I0129 08:01:41.998809 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz9zv\" (UniqueName: \"kubernetes.io/projected/35e533f3-05a6-4364-b392-39b96f0fbea9-kube-api-access-kz9zv\") pod \"dnsmasq-dns-54c698c8ff-xn9gh\" (UID: \"35e533f3-05a6-4364-b392-39b96f0fbea9\") " pod="openstack/dnsmasq-dns-54c698c8ff-xn9gh" Jan 29 
08:01:41 crc kubenswrapper[4826]: I0129 08:01:41.998943 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35e533f3-05a6-4364-b392-39b96f0fbea9-config\") pod \"dnsmasq-dns-54c698c8ff-xn9gh\" (UID: \"35e533f3-05a6-4364-b392-39b96f0fbea9\") " pod="openstack/dnsmasq-dns-54c698c8ff-xn9gh" Jan 29 08:01:41 crc kubenswrapper[4826]: I0129 08:01:41.999174 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35e533f3-05a6-4364-b392-39b96f0fbea9-dns-svc\") pod \"dnsmasq-dns-54c698c8ff-xn9gh\" (UID: \"35e533f3-05a6-4364-b392-39b96f0fbea9\") " pod="openstack/dnsmasq-dns-54c698c8ff-xn9gh" Jan 29 08:01:42 crc kubenswrapper[4826]: I0129 08:01:42.000260 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35e533f3-05a6-4364-b392-39b96f0fbea9-dns-svc\") pod \"dnsmasq-dns-54c698c8ff-xn9gh\" (UID: \"35e533f3-05a6-4364-b392-39b96f0fbea9\") " pod="openstack/dnsmasq-dns-54c698c8ff-xn9gh" Jan 29 08:01:42 crc kubenswrapper[4826]: I0129 08:01:42.001770 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b451083-b71d-4c92-91ef-89009a9a7389-config\") pod \"dnsmasq-dns-979474d99-hxwgz\" (UID: \"0b451083-b71d-4c92-91ef-89009a9a7389\") " pod="openstack/dnsmasq-dns-979474d99-hxwgz" Jan 29 08:01:42 crc kubenswrapper[4826]: I0129 08:01:42.002358 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35e533f3-05a6-4364-b392-39b96f0fbea9-config\") pod \"dnsmasq-dns-54c698c8ff-xn9gh\" (UID: \"35e533f3-05a6-4364-b392-39b96f0fbea9\") " pod="openstack/dnsmasq-dns-54c698c8ff-xn9gh" Jan 29 08:01:42 crc kubenswrapper[4826]: I0129 08:01:42.035821 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vjkcx" Jan 29 08:01:42 crc kubenswrapper[4826]: I0129 08:01:42.050193 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz9zv\" (UniqueName: \"kubernetes.io/projected/35e533f3-05a6-4364-b392-39b96f0fbea9-kube-api-access-kz9zv\") pod \"dnsmasq-dns-54c698c8ff-xn9gh\" (UID: \"35e533f3-05a6-4364-b392-39b96f0fbea9\") " pod="openstack/dnsmasq-dns-54c698c8ff-xn9gh" Jan 29 08:01:42 crc kubenswrapper[4826]: I0129 08:01:42.074761 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c2l9\" (UniqueName: \"kubernetes.io/projected/0b451083-b71d-4c92-91ef-89009a9a7389-kube-api-access-8c2l9\") pod \"dnsmasq-dns-979474d99-hxwgz\" (UID: \"0b451083-b71d-4c92-91ef-89009a9a7389\") " pod="openstack/dnsmasq-dns-979474d99-hxwgz" Jan 29 08:01:42 crc kubenswrapper[4826]: I0129 08:01:42.108740 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54c698c8ff-xn9gh" Jan 29 08:01:42 crc kubenswrapper[4826]: I0129 08:01:42.139017 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54c698c8ff-xn9gh"] Jan 29 08:01:42 crc kubenswrapper[4826]: I0129 08:01:42.139468 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-979474d99-hxwgz" Jan 29 08:01:42 crc kubenswrapper[4826]: I0129 08:01:42.170674 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-678974fb59-lrzsc"] Jan 29 08:01:42 crc kubenswrapper[4826]: I0129 08:01:42.171727 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-678974fb59-lrzsc" Jan 29 08:01:42 crc kubenswrapper[4826]: I0129 08:01:42.191098 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-678974fb59-lrzsc"] Jan 29 08:01:42 crc kubenswrapper[4826]: I0129 08:01:42.205568 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e26fece8-b284-4d94-9f0f-8b86d1910819-config\") pod \"dnsmasq-dns-678974fb59-lrzsc\" (UID: \"e26fece8-b284-4d94-9f0f-8b86d1910819\") " pod="openstack/dnsmasq-dns-678974fb59-lrzsc" Jan 29 08:01:42 crc kubenswrapper[4826]: I0129 08:01:42.205624 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzzgr\" (UniqueName: \"kubernetes.io/projected/e26fece8-b284-4d94-9f0f-8b86d1910819-kube-api-access-rzzgr\") pod \"dnsmasq-dns-678974fb59-lrzsc\" (UID: \"e26fece8-b284-4d94-9f0f-8b86d1910819\") " pod="openstack/dnsmasq-dns-678974fb59-lrzsc" Jan 29 08:01:42 crc kubenswrapper[4826]: I0129 08:01:42.205654 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e26fece8-b284-4d94-9f0f-8b86d1910819-dns-svc\") pod \"dnsmasq-dns-678974fb59-lrzsc\" (UID: \"e26fece8-b284-4d94-9f0f-8b86d1910819\") " pod="openstack/dnsmasq-dns-678974fb59-lrzsc" Jan 29 08:01:42 crc kubenswrapper[4826]: I0129 08:01:42.310123 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e26fece8-b284-4d94-9f0f-8b86d1910819-config\") pod \"dnsmasq-dns-678974fb59-lrzsc\" (UID: \"e26fece8-b284-4d94-9f0f-8b86d1910819\") " pod="openstack/dnsmasq-dns-678974fb59-lrzsc" Jan 29 08:01:42 crc kubenswrapper[4826]: I0129 08:01:42.310185 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzzgr\" (UniqueName: 
\"kubernetes.io/projected/e26fece8-b284-4d94-9f0f-8b86d1910819-kube-api-access-rzzgr\") pod \"dnsmasq-dns-678974fb59-lrzsc\" (UID: \"e26fece8-b284-4d94-9f0f-8b86d1910819\") " pod="openstack/dnsmasq-dns-678974fb59-lrzsc" Jan 29 08:01:42 crc kubenswrapper[4826]: I0129 08:01:42.310207 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e26fece8-b284-4d94-9f0f-8b86d1910819-dns-svc\") pod \"dnsmasq-dns-678974fb59-lrzsc\" (UID: \"e26fece8-b284-4d94-9f0f-8b86d1910819\") " pod="openstack/dnsmasq-dns-678974fb59-lrzsc" Jan 29 08:01:42 crc kubenswrapper[4826]: I0129 08:01:42.311235 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e26fece8-b284-4d94-9f0f-8b86d1910819-dns-svc\") pod \"dnsmasq-dns-678974fb59-lrzsc\" (UID: \"e26fece8-b284-4d94-9f0f-8b86d1910819\") " pod="openstack/dnsmasq-dns-678974fb59-lrzsc" Jan 29 08:01:42 crc kubenswrapper[4826]: I0129 08:01:42.311805 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e26fece8-b284-4d94-9f0f-8b86d1910819-config\") pod \"dnsmasq-dns-678974fb59-lrzsc\" (UID: \"e26fece8-b284-4d94-9f0f-8b86d1910819\") " pod="openstack/dnsmasq-dns-678974fb59-lrzsc" Jan 29 08:01:42 crc kubenswrapper[4826]: I0129 08:01:42.352144 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzzgr\" (UniqueName: \"kubernetes.io/projected/e26fece8-b284-4d94-9f0f-8b86d1910819-kube-api-access-rzzgr\") pod \"dnsmasq-dns-678974fb59-lrzsc\" (UID: \"e26fece8-b284-4d94-9f0f-8b86d1910819\") " pod="openstack/dnsmasq-dns-678974fb59-lrzsc" Jan 29 08:01:42 crc kubenswrapper[4826]: I0129 08:01:42.496727 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-678974fb59-lrzsc" Jan 29 08:01:42 crc kubenswrapper[4826]: I0129 08:01:42.657408 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vjkcx"] Jan 29 08:01:42 crc kubenswrapper[4826]: I0129 08:01:42.737222 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54c698c8ff-xn9gh"] Jan 29 08:01:42 crc kubenswrapper[4826]: I0129 08:01:42.773096 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-979474d99-hxwgz"] Jan 29 08:01:42 crc kubenswrapper[4826]: I0129 08:01:42.820954 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-979474d99-hxwgz"] Jan 29 08:01:42 crc kubenswrapper[4826]: I0129 08:01:42.849015 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-576ffbb965-vmkd8"] Jan 29 08:01:42 crc kubenswrapper[4826]: I0129 08:01:42.850114 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-576ffbb965-vmkd8" Jan 29 08:01:42 crc kubenswrapper[4826]: I0129 08:01:42.871373 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-576ffbb965-vmkd8"] Jan 29 08:01:42 crc kubenswrapper[4826]: I0129 08:01:42.936801 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n4xh\" (UniqueName: \"kubernetes.io/projected/5af500c6-d9f7-460d-9969-e3f444fb0c82-kube-api-access-5n4xh\") pod \"dnsmasq-dns-576ffbb965-vmkd8\" (UID: \"5af500c6-d9f7-460d-9969-e3f444fb0c82\") " pod="openstack/dnsmasq-dns-576ffbb965-vmkd8" Jan 29 08:01:42 crc kubenswrapper[4826]: I0129 08:01:42.936977 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5af500c6-d9f7-460d-9969-e3f444fb0c82-dns-svc\") pod \"dnsmasq-dns-576ffbb965-vmkd8\" (UID: \"5af500c6-d9f7-460d-9969-e3f444fb0c82\") " pod="openstack/dnsmasq-dns-576ffbb965-vmkd8" Jan 29 08:01:42 crc kubenswrapper[4826]: I0129 08:01:42.937027 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5af500c6-d9f7-460d-9969-e3f444fb0c82-config\") pod \"dnsmasq-dns-576ffbb965-vmkd8\" (UID: \"5af500c6-d9f7-460d-9969-e3f444fb0c82\") " pod="openstack/dnsmasq-dns-576ffbb965-vmkd8" Jan 29 08:01:42 crc kubenswrapper[4826]: I0129 08:01:42.950516 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54c698c8ff-xn9gh" event={"ID":"35e533f3-05a6-4364-b392-39b96f0fbea9","Type":"ContainerStarted","Data":"c29a099689853609a73ecd17f2e515dbc42728ba6822db980b3e528d8cce4372"} Jan 29 08:01:42 crc kubenswrapper[4826]: I0129 08:01:42.960126 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-979474d99-hxwgz" 
event={"ID":"0b451083-b71d-4c92-91ef-89009a9a7389","Type":"ContainerStarted","Data":"55814ac00b9e4c8cb2ec82e44e762244e71195c634d39051498a70a60cf03894"}
Jan 29 08:01:42 crc kubenswrapper[4826]: I0129 08:01:42.965275 4826 generic.go:334] "Generic (PLEG): container finished" podID="3755e5ca-8975-4cd7-8765-2acd160c3307" containerID="4c2f1d114468eb970a31e5028cf5482a7f5196f1c143077700d008d68ed54020" exitCode=0
Jan 29 08:01:42 crc kubenswrapper[4826]: I0129 08:01:42.965355 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjkcx" event={"ID":"3755e5ca-8975-4cd7-8765-2acd160c3307","Type":"ContainerDied","Data":"4c2f1d114468eb970a31e5028cf5482a7f5196f1c143077700d008d68ed54020"}
Jan 29 08:01:42 crc kubenswrapper[4826]: I0129 08:01:42.965383 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjkcx" event={"ID":"3755e5ca-8975-4cd7-8765-2acd160c3307","Type":"ContainerStarted","Data":"c303f8e0d4a2e09bc27590d761202dfa3f1686000761930b45eb0a9cb5783c24"}
Jan 29 08:01:42 crc kubenswrapper[4826]: I0129 08:01:42.972022 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-678974fb59-lrzsc"]
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.038386 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5af500c6-d9f7-460d-9969-e3f444fb0c82-dns-svc\") pod \"dnsmasq-dns-576ffbb965-vmkd8\" (UID: \"5af500c6-d9f7-460d-9969-e3f444fb0c82\") " pod="openstack/dnsmasq-dns-576ffbb965-vmkd8"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.038442 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5af500c6-d9f7-460d-9969-e3f444fb0c82-config\") pod \"dnsmasq-dns-576ffbb965-vmkd8\" (UID: \"5af500c6-d9f7-460d-9969-e3f444fb0c82\") " pod="openstack/dnsmasq-dns-576ffbb965-vmkd8"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.038528 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n4xh\" (UniqueName: \"kubernetes.io/projected/5af500c6-d9f7-460d-9969-e3f444fb0c82-kube-api-access-5n4xh\") pod \"dnsmasq-dns-576ffbb965-vmkd8\" (UID: \"5af500c6-d9f7-460d-9969-e3f444fb0c82\") " pod="openstack/dnsmasq-dns-576ffbb965-vmkd8"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.039553 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5af500c6-d9f7-460d-9969-e3f444fb0c82-dns-svc\") pod \"dnsmasq-dns-576ffbb965-vmkd8\" (UID: \"5af500c6-d9f7-460d-9969-e3f444fb0c82\") " pod="openstack/dnsmasq-dns-576ffbb965-vmkd8"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.040228 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5af500c6-d9f7-460d-9969-e3f444fb0c82-config\") pod \"dnsmasq-dns-576ffbb965-vmkd8\" (UID: \"5af500c6-d9f7-460d-9969-e3f444fb0c82\") " pod="openstack/dnsmasq-dns-576ffbb965-vmkd8"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.056683 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n4xh\" (UniqueName: \"kubernetes.io/projected/5af500c6-d9f7-460d-9969-e3f444fb0c82-kube-api-access-5n4xh\") pod \"dnsmasq-dns-576ffbb965-vmkd8\" (UID: \"5af500c6-d9f7-460d-9969-e3f444fb0c82\") " pod="openstack/dnsmasq-dns-576ffbb965-vmkd8"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.216759 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-576ffbb965-vmkd8"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.350322 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.351849 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.358215 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.358879 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.359048 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-qqkhd"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.362383 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.362422 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.362466 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.364701 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.397230 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.448270 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.448331 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.448358 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27qzg\" (UniqueName: \"kubernetes.io/projected/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-kube-api-access-27qzg\") pod \"rabbitmq-server-0\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.448419 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.448439 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.448470 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dc9c77b2-9802-49f3-8dde-c05d3c74540d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9c77b2-9802-49f3-8dde-c05d3c74540d\") pod \"rabbitmq-server-0\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.448497 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.448519 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-config-data\") pod \"rabbitmq-server-0\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.448541 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.448556 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.448581 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.543371 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-576ffbb965-vmkd8"]
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.550248 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.550283 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.550335 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dc9c77b2-9802-49f3-8dde-c05d3c74540d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9c77b2-9802-49f3-8dde-c05d3c74540d\") pod \"rabbitmq-server-0\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.550362 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.550407 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-config-data\") pod \"rabbitmq-server-0\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.550429 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.550445 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.550472 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.550513 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.550533 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.550555 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27qzg\" (UniqueName: \"kubernetes.io/projected/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-kube-api-access-27qzg\") pod \"rabbitmq-server-0\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.551980 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.552232 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.552472 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.565318 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.568597 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-config-data\") pod \"rabbitmq-server-0\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.568816 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.569371 4826 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.569478 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dc9c77b2-9802-49f3-8dde-c05d3c74540d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9c77b2-9802-49f3-8dde-c05d3c74540d\") pod \"rabbitmq-server-0\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e1df1097f9d829b19f128c6a54903625abdcd9215cb4cc34e0d59165afb70809/globalmount\"" pod="openstack/rabbitmq-server-0"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.571124 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.575030 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27qzg\" (UniqueName: \"kubernetes.io/projected/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-kube-api-access-27qzg\") pod \"rabbitmq-server-0\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.575872 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.576252 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.605334 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dc9c77b2-9802-49f3-8dde-c05d3c74540d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9c77b2-9802-49f3-8dde-c05d3c74540d\") pod \"rabbitmq-server-0\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.707513 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.980065 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.983606 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.989984 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.990463 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-tkp8v"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.990633 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.990749 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.990872 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.990930 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.991062 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 29 08:01:43 crc kubenswrapper[4826]: I0129 08:01:43.991847 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.018084 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjkcx" event={"ID":"3755e5ca-8975-4cd7-8765-2acd160c3307","Type":"ContainerStarted","Data":"b5ac97f4c334c48878c36818b9f3fd144078b3d27c35a5090803f81d9cb96438"}
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.028000 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-576ffbb965-vmkd8" event={"ID":"5af500c6-d9f7-460d-9969-e3f444fb0c82","Type":"ContainerStarted","Data":"16aba1ac9ad3114552e77ea0933d11a93a3db6970f1a79a7ed89fe931da28c9b"}
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.039415 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-678974fb59-lrzsc" event={"ID":"e26fece8-b284-4d94-9f0f-8b86d1910819","Type":"ContainerStarted","Data":"1fbbb2e4db0b18c2f0479bcc2a6053af8c5355e31ccef2ff9a770c4fc0d7879e"}
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.058151 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.058228 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.058846 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.059523 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.059633 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.059731 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.059833 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.059926 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-009411f1-5778-43a5-ae31-5b0d483ea442\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-009411f1-5778-43a5-ae31-5b0d483ea442\") pod \"rabbitmq-cell1-server-0\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.059965 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.060193 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.060258 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp6zv\" (UniqueName: \"kubernetes.io/projected/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-kube-api-access-qp6zv\") pod \"rabbitmq-cell1-server-0\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.161685 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.161781 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.161826 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.161869 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-009411f1-5778-43a5-ae31-5b0d483ea442\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-009411f1-5778-43a5-ae31-5b0d483ea442\") pod \"rabbitmq-cell1-server-0\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.161902 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.161930 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.161956 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp6zv\" (UniqueName: \"kubernetes.io/projected/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-kube-api-access-qp6zv\") pod \"rabbitmq-cell1-server-0\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.161987 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.162013 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.162054 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.162089 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.162553 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.163165 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.163510 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.163535 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.165475 4826 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.165533 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-009411f1-5778-43a5-ae31-5b0d483ea442\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-009411f1-5778-43a5-ae31-5b0d483ea442\") pod \"rabbitmq-cell1-server-0\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/dec26431f86c0a2fa5fbacb3eb57039fa841352388ca49c72cadb71db320a17a/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.167536 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.169637 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.171590 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.174755 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.176476 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.179216 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp6zv\" (UniqueName: \"kubernetes.io/projected/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-kube-api-access-qp6zv\") pod \"rabbitmq-cell1-server-0\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: W0129 08:01:44.187163 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9305a20_f5b6_4d8c_84f3_19e8bbd6aadf.slice/crio-b3adbc2a3fc33fef4ba81dc5f7855b3c1b08274a59569595ede705102ceef049 WatchSource:0}: Error finding container b3adbc2a3fc33fef4ba81dc5f7855b3c1b08274a59569595ede705102ceef049: Status 404 returned error can't find the container with id b3adbc2a3fc33fef4ba81dc5f7855b3c1b08274a59569595ede705102ceef049
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.195828 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.201739 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-009411f1-5778-43a5-ae31-5b0d483ea442\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-009411f1-5778-43a5-ae31-5b0d483ea442\") pod \"rabbitmq-cell1-server-0\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.311009 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.543176 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 29 08:01:44 crc kubenswrapper[4826]: W0129 08:01:44.559192 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod376bc5a2_a062_4c32_ae34_ea2d16f3e1c8.slice/crio-3c16f685f11f15760a340d53f9596f954f158f59de0eb4ef15ccff0f416632b2 WatchSource:0}: Error finding container 3c16f685f11f15760a340d53f9596f954f158f59de0eb4ef15ccff0f416632b2: Status 404 returned error can't find the container with id 3c16f685f11f15760a340d53f9596f954f158f59de0eb4ef15ccff0f416632b2
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.641710 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.642847 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.649763 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-tp2fx"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.649935 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.652347 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.652530 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.654239 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.656721 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.689752 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae\") " pod="openstack/openstack-galera-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.689844 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae-kolla-config\") pod \"openstack-galera-0\" (UID: \"b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae\") " pod="openstack/openstack-galera-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.689870 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae\") " pod="openstack/openstack-galera-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.689892 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxx7b\" (UniqueName: \"kubernetes.io/projected/b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae-kube-api-access-rxx7b\") pod \"openstack-galera-0\" (UID: \"b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae\") " pod="openstack/openstack-galera-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.689912 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae\") " pod="openstack/openstack-galera-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.689938 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae-config-data-default\") pod \"openstack-galera-0\" (UID: \"b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae\") " pod="openstack/openstack-galera-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.689976 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae\") " pod="openstack/openstack-galera-0"
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.690434 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-b1c3584c-649a-4e8f-8786-2c626d51d50d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1c3584c-649a-4e8f-8786-2c626d51d50d\") pod \"openstack-galera-0\" (UID: \"b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae\") " pod="openstack/openstack-galera-0" Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.791919 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b1c3584c-649a-4e8f-8786-2c626d51d50d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1c3584c-649a-4e8f-8786-2c626d51d50d\") pod \"openstack-galera-0\" (UID: \"b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae\") " pod="openstack/openstack-galera-0" Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.792005 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae\") " pod="openstack/openstack-galera-0" Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.792071 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae-kolla-config\") pod \"openstack-galera-0\" (UID: \"b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae\") " pod="openstack/openstack-galera-0" Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.792091 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae\") " pod="openstack/openstack-galera-0" Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.792114 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxx7b\" (UniqueName: 
\"kubernetes.io/projected/b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae-kube-api-access-rxx7b\") pod \"openstack-galera-0\" (UID: \"b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae\") " pod="openstack/openstack-galera-0" Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.792152 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae\") " pod="openstack/openstack-galera-0" Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.792176 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae-config-data-default\") pod \"openstack-galera-0\" (UID: \"b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae\") " pod="openstack/openstack-galera-0" Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.792219 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae\") " pod="openstack/openstack-galera-0" Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.797240 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae-kolla-config\") pod \"openstack-galera-0\" (UID: \"b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae\") " pod="openstack/openstack-galera-0" Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.797985 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae-config-data-generated\") pod \"openstack-galera-0\" (UID: 
\"b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae\") " pod="openstack/openstack-galera-0" Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.799692 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae\") " pod="openstack/openstack-galera-0" Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.801553 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae-config-data-default\") pod \"openstack-galera-0\" (UID: \"b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae\") " pod="openstack/openstack-galera-0" Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.804736 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae\") " pod="openstack/openstack-galera-0" Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.805874 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae\") " pod="openstack/openstack-galera-0" Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.805954 4826 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.805979 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b1c3584c-649a-4e8f-8786-2c626d51d50d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1c3584c-649a-4e8f-8786-2c626d51d50d\") pod \"openstack-galera-0\" (UID: \"b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5777ee48e236f6161dedbe183060e11b989d079d256d917770fc827523ae8c03/globalmount\"" pod="openstack/openstack-galera-0" Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.821721 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxx7b\" (UniqueName: \"kubernetes.io/projected/b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae-kube-api-access-rxx7b\") pod \"openstack-galera-0\" (UID: \"b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae\") " pod="openstack/openstack-galera-0" Jan 29 08:01:44 crc kubenswrapper[4826]: I0129 08:01:44.883017 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b1c3584c-649a-4e8f-8786-2c626d51d50d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1c3584c-649a-4e8f-8786-2c626d51d50d\") pod \"openstack-galera-0\" (UID: \"b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae\") " pod="openstack/openstack-galera-0" Jan 29 08:01:45 crc kubenswrapper[4826]: I0129 08:01:45.003839 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 29 08:01:45 crc kubenswrapper[4826]: I0129 08:01:45.071629 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf","Type":"ContainerStarted","Data":"b3adbc2a3fc33fef4ba81dc5f7855b3c1b08274a59569595ede705102ceef049"} Jan 29 08:01:45 crc kubenswrapper[4826]: I0129 08:01:45.074475 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8","Type":"ContainerStarted","Data":"3c16f685f11f15760a340d53f9596f954f158f59de0eb4ef15ccff0f416632b2"} Jan 29 08:01:45 crc kubenswrapper[4826]: I0129 08:01:45.082334 4826 generic.go:334] "Generic (PLEG): container finished" podID="3755e5ca-8975-4cd7-8765-2acd160c3307" containerID="b5ac97f4c334c48878c36818b9f3fd144078b3d27c35a5090803f81d9cb96438" exitCode=0 Jan 29 08:01:45 crc kubenswrapper[4826]: I0129 08:01:45.082377 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjkcx" event={"ID":"3755e5ca-8975-4cd7-8765-2acd160c3307","Type":"ContainerDied","Data":"b5ac97f4c334c48878c36818b9f3fd144078b3d27c35a5090803f81d9cb96438"} Jan 29 08:01:45 crc kubenswrapper[4826]: I0129 08:01:45.440570 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 29 08:01:45 crc kubenswrapper[4826]: I0129 08:01:45.855477 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 08:01:45 crc kubenswrapper[4826]: I0129 08:01:45.857018 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 08:01:45 crc kubenswrapper[4826]: I0129 08:01:45.866218 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 08:01:45 crc kubenswrapper[4826]: I0129 08:01:45.895384 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 29 08:01:45 crc kubenswrapper[4826]: I0129 08:01:45.895826 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 29 08:01:45 crc kubenswrapper[4826]: I0129 08:01:45.895988 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-68kzv" Jan 29 08:01:45 crc kubenswrapper[4826]: I0129 08:01:45.896341 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.012491 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dpn6\" (UniqueName: \"kubernetes.io/projected/0a52085e-690a-44e3-a1f3-d4d668b6ff8e-kube-api-access-7dpn6\") pod \"openstack-cell1-galera-0\" (UID: \"0a52085e-690a-44e3-a1f3-d4d668b6ff8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.012567 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a52085e-690a-44e3-a1f3-d4d668b6ff8e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"0a52085e-690a-44e3-a1f3-d4d668b6ff8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.013948 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0a52085e-690a-44e3-a1f3-d4d668b6ff8e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"0a52085e-690a-44e3-a1f3-d4d668b6ff8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.014022 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0a52085e-690a-44e3-a1f3-d4d668b6ff8e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"0a52085e-690a-44e3-a1f3-d4d668b6ff8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.014070 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-25975bff-5b61-4104-afba-383867b91c1a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-25975bff-5b61-4104-afba-383867b91c1a\") pod \"openstack-cell1-galera-0\" (UID: \"0a52085e-690a-44e3-a1f3-d4d668b6ff8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.014102 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0a52085e-690a-44e3-a1f3-d4d668b6ff8e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"0a52085e-690a-44e3-a1f3-d4d668b6ff8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.014246 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0a52085e-690a-44e3-a1f3-d4d668b6ff8e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"0a52085e-690a-44e3-a1f3-d4d668b6ff8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.014280 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a52085e-690a-44e3-a1f3-d4d668b6ff8e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"0a52085e-690a-44e3-a1f3-d4d668b6ff8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.093649 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjkcx" event={"ID":"3755e5ca-8975-4cd7-8765-2acd160c3307","Type":"ContainerStarted","Data":"171b19dc15fcde6b340d70b559a125033753e5e9c31a22cbb7c21df45efc9294"} Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.095088 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae","Type":"ContainerStarted","Data":"e90072fef16d94fcc9f289be603c461affb0d6e19fea03aceee1f23756b49ad5"} Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.118986 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vjkcx" podStartSLOduration=2.403120877 podStartE2EDuration="5.11896497s" podCreationTimestamp="2026-01-29 08:01:41 +0000 UTC" firstStartedPulling="2026-01-29 08:01:42.966609388 +0000 UTC m=+4686.828402457" lastFinishedPulling="2026-01-29 08:01:45.682453491 +0000 UTC m=+4689.544246550" observedRunningTime="2026-01-29 08:01:46.114599566 +0000 UTC m=+4689.976392635" watchObservedRunningTime="2026-01-29 08:01:46.11896497 +0000 UTC m=+4689.980758039" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.120085 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0a52085e-690a-44e3-a1f3-d4d668b6ff8e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"0a52085e-690a-44e3-a1f3-d4d668b6ff8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 
08:01:46.120134 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-25975bff-5b61-4104-afba-383867b91c1a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-25975bff-5b61-4104-afba-383867b91c1a\") pod \"openstack-cell1-galera-0\" (UID: \"0a52085e-690a-44e3-a1f3-d4d668b6ff8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.120154 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0a52085e-690a-44e3-a1f3-d4d668b6ff8e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"0a52085e-690a-44e3-a1f3-d4d668b6ff8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.120201 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0a52085e-690a-44e3-a1f3-d4d668b6ff8e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"0a52085e-690a-44e3-a1f3-d4d668b6ff8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.120218 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a52085e-690a-44e3-a1f3-d4d668b6ff8e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"0a52085e-690a-44e3-a1f3-d4d668b6ff8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.120261 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dpn6\" (UniqueName: \"kubernetes.io/projected/0a52085e-690a-44e3-a1f3-d4d668b6ff8e-kube-api-access-7dpn6\") pod \"openstack-cell1-galera-0\" (UID: \"0a52085e-690a-44e3-a1f3-d4d668b6ff8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 
08:01:46.120321 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a52085e-690a-44e3-a1f3-d4d668b6ff8e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"0a52085e-690a-44e3-a1f3-d4d668b6ff8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.120360 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a52085e-690a-44e3-a1f3-d4d668b6ff8e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"0a52085e-690a-44e3-a1f3-d4d668b6ff8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.120876 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0a52085e-690a-44e3-a1f3-d4d668b6ff8e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"0a52085e-690a-44e3-a1f3-d4d668b6ff8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.121237 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0a52085e-690a-44e3-a1f3-d4d668b6ff8e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"0a52085e-690a-44e3-a1f3-d4d668b6ff8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.122091 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0a52085e-690a-44e3-a1f3-d4d668b6ff8e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"0a52085e-690a-44e3-a1f3-d4d668b6ff8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.122183 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a52085e-690a-44e3-a1f3-d4d668b6ff8e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"0a52085e-690a-44e3-a1f3-d4d668b6ff8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.123613 4826 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.123708 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-25975bff-5b61-4104-afba-383867b91c1a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-25975bff-5b61-4104-afba-383867b91c1a\") pod \"openstack-cell1-galera-0\" (UID: \"0a52085e-690a-44e3-a1f3-d4d668b6ff8e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f7e3e4733feb462324baf6050db7c245f9c8cb34cdec57d71f7ac823e69f13a7/globalmount\"" pod="openstack/openstack-cell1-galera-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.130799 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a52085e-690a-44e3-a1f3-d4d668b6ff8e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"0a52085e-690a-44e3-a1f3-d4d668b6ff8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.132011 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a52085e-690a-44e3-a1f3-d4d668b6ff8e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"0a52085e-690a-44e3-a1f3-d4d668b6ff8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.148955 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dpn6\" (UniqueName: 
\"kubernetes.io/projected/0a52085e-690a-44e3-a1f3-d4d668b6ff8e-kube-api-access-7dpn6\") pod \"openstack-cell1-galera-0\" (UID: \"0a52085e-690a-44e3-a1f3-d4d668b6ff8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.171867 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-25975bff-5b61-4104-afba-383867b91c1a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-25975bff-5b61-4104-afba-383867b91c1a\") pod \"openstack-cell1-galera-0\" (UID: \"0a52085e-690a-44e3-a1f3-d4d668b6ff8e\") " pod="openstack/openstack-cell1-galera-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.215228 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.215579 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.220411 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.221614 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.223023 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-ggh68" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.223393 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.224279 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.325105 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d01c5a41-f62d-442e-ab5f-69d0abbfa549-kolla-config\") pod \"memcached-0\" (UID: \"d01c5a41-f62d-442e-ab5f-69d0abbfa549\") " pod="openstack/memcached-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.325408 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d01c5a41-f62d-442e-ab5f-69d0abbfa549-config-data\") pod \"memcached-0\" (UID: \"d01c5a41-f62d-442e-ab5f-69d0abbfa549\") " pod="openstack/memcached-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.325767 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d01c5a41-f62d-442e-ab5f-69d0abbfa549-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d01c5a41-f62d-442e-ab5f-69d0abbfa549\") " pod="openstack/memcached-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.325828 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-cvrv6\" (UniqueName: \"kubernetes.io/projected/d01c5a41-f62d-442e-ab5f-69d0abbfa549-kube-api-access-cvrv6\") pod \"memcached-0\" (UID: \"d01c5a41-f62d-442e-ab5f-69d0abbfa549\") " pod="openstack/memcached-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.325843 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d01c5a41-f62d-442e-ab5f-69d0abbfa549-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d01c5a41-f62d-442e-ab5f-69d0abbfa549\") " pod="openstack/memcached-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.427143 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d01c5a41-f62d-442e-ab5f-69d0abbfa549-config-data\") pod \"memcached-0\" (UID: \"d01c5a41-f62d-442e-ab5f-69d0abbfa549\") " pod="openstack/memcached-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.427195 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d01c5a41-f62d-442e-ab5f-69d0abbfa549-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d01c5a41-f62d-442e-ab5f-69d0abbfa549\") " pod="openstack/memcached-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.427249 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvrv6\" (UniqueName: \"kubernetes.io/projected/d01c5a41-f62d-442e-ab5f-69d0abbfa549-kube-api-access-cvrv6\") pod \"memcached-0\" (UID: \"d01c5a41-f62d-442e-ab5f-69d0abbfa549\") " pod="openstack/memcached-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.427266 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d01c5a41-f62d-442e-ab5f-69d0abbfa549-combined-ca-bundle\") pod \"memcached-0\" (UID: 
\"d01c5a41-f62d-442e-ab5f-69d0abbfa549\") " pod="openstack/memcached-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.427317 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d01c5a41-f62d-442e-ab5f-69d0abbfa549-kolla-config\") pod \"memcached-0\" (UID: \"d01c5a41-f62d-442e-ab5f-69d0abbfa549\") " pod="openstack/memcached-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.428470 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d01c5a41-f62d-442e-ab5f-69d0abbfa549-config-data\") pod \"memcached-0\" (UID: \"d01c5a41-f62d-442e-ab5f-69d0abbfa549\") " pod="openstack/memcached-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.431220 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d01c5a41-f62d-442e-ab5f-69d0abbfa549-kolla-config\") pod \"memcached-0\" (UID: \"d01c5a41-f62d-442e-ab5f-69d0abbfa549\") " pod="openstack/memcached-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.432363 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d01c5a41-f62d-442e-ab5f-69d0abbfa549-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d01c5a41-f62d-442e-ab5f-69d0abbfa549\") " pod="openstack/memcached-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.436966 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d01c5a41-f62d-442e-ab5f-69d0abbfa549-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d01c5a41-f62d-442e-ab5f-69d0abbfa549\") " pod="openstack/memcached-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.447004 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvrv6\" (UniqueName: 
\"kubernetes.io/projected/d01c5a41-f62d-442e-ab5f-69d0abbfa549-kube-api-access-cvrv6\") pod \"memcached-0\" (UID: \"d01c5a41-f62d-442e-ab5f-69d0abbfa549\") " pod="openstack/memcached-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.596536 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 29 08:01:46 crc kubenswrapper[4826]: I0129 08:01:46.692018 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 08:01:46 crc kubenswrapper[4826]: W0129 08:01:46.713309 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a52085e_690a_44e3_a1f3_d4d668b6ff8e.slice/crio-c45b130082b39a601da16f1286c5b1cf65f59169fea25a193178b31f0c174d5f WatchSource:0}: Error finding container c45b130082b39a601da16f1286c5b1cf65f59169fea25a193178b31f0c174d5f: Status 404 returned error can't find the container with id c45b130082b39a601da16f1286c5b1cf65f59169fea25a193178b31f0c174d5f Jan 29 08:01:47 crc kubenswrapper[4826]: I0129 08:01:47.046984 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 29 08:01:47 crc kubenswrapper[4826]: W0129 08:01:47.054756 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd01c5a41_f62d_442e_ab5f_69d0abbfa549.slice/crio-c21dedf94025692517713d71cef2100ca2a21da348303664bfe3fb42dc202a19 WatchSource:0}: Error finding container c21dedf94025692517713d71cef2100ca2a21da348303664bfe3fb42dc202a19: Status 404 returned error can't find the container with id c21dedf94025692517713d71cef2100ca2a21da348303664bfe3fb42dc202a19 Jan 29 08:01:47 crc kubenswrapper[4826]: I0129 08:01:47.114000 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"0a52085e-690a-44e3-a1f3-d4d668b6ff8e","Type":"ContainerStarted","Data":"c45b130082b39a601da16f1286c5b1cf65f59169fea25a193178b31f0c174d5f"} Jan 29 08:01:47 crc kubenswrapper[4826]: I0129 08:01:47.117010 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d01c5a41-f62d-442e-ab5f-69d0abbfa549","Type":"ContainerStarted","Data":"c21dedf94025692517713d71cef2100ca2a21da348303664bfe3fb42dc202a19"} Jan 29 08:01:49 crc kubenswrapper[4826]: I0129 08:01:49.808640 4826 scope.go:117] "RemoveContainer" containerID="91d43ff4726de7f88ea18b0c649f8fb814edd6a2376ec59b98b6b3d03af51e74" Jan 29 08:01:49 crc kubenswrapper[4826]: E0129 08:01:49.811697 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:01:52 crc kubenswrapper[4826]: I0129 08:01:52.043721 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vjkcx" Jan 29 08:01:52 crc kubenswrapper[4826]: I0129 08:01:52.043884 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vjkcx" Jan 29 08:01:52 crc kubenswrapper[4826]: I0129 08:01:52.099350 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vjkcx" Jan 29 08:01:52 crc kubenswrapper[4826]: I0129 08:01:52.191582 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vjkcx" Jan 29 08:01:52 crc kubenswrapper[4826]: I0129 08:01:52.330447 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-vjkcx"] Jan 29 08:01:54 crc kubenswrapper[4826]: I0129 08:01:54.164064 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vjkcx" podUID="3755e5ca-8975-4cd7-8765-2acd160c3307" containerName="registry-server" containerID="cri-o://171b19dc15fcde6b340d70b559a125033753e5e9c31a22cbb7c21df45efc9294" gracePeriod=2 Jan 29 08:01:55 crc kubenswrapper[4826]: I0129 08:01:55.174146 4826 generic.go:334] "Generic (PLEG): container finished" podID="3755e5ca-8975-4cd7-8765-2acd160c3307" containerID="171b19dc15fcde6b340d70b559a125033753e5e9c31a22cbb7c21df45efc9294" exitCode=0 Jan 29 08:01:55 crc kubenswrapper[4826]: I0129 08:01:55.174191 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjkcx" event={"ID":"3755e5ca-8975-4cd7-8765-2acd160c3307","Type":"ContainerDied","Data":"171b19dc15fcde6b340d70b559a125033753e5e9c31a22cbb7c21df45efc9294"} Jan 29 08:01:55 crc kubenswrapper[4826]: I0129 08:01:55.310290 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vjkcx" Jan 29 08:01:55 crc kubenswrapper[4826]: I0129 08:01:55.394737 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3755e5ca-8975-4cd7-8765-2acd160c3307-utilities\") pod \"3755e5ca-8975-4cd7-8765-2acd160c3307\" (UID: \"3755e5ca-8975-4cd7-8765-2acd160c3307\") " Jan 29 08:01:55 crc kubenswrapper[4826]: I0129 08:01:55.394843 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnvt2\" (UniqueName: \"kubernetes.io/projected/3755e5ca-8975-4cd7-8765-2acd160c3307-kube-api-access-vnvt2\") pod \"3755e5ca-8975-4cd7-8765-2acd160c3307\" (UID: \"3755e5ca-8975-4cd7-8765-2acd160c3307\") " Jan 29 08:01:55 crc kubenswrapper[4826]: I0129 08:01:55.396588 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3755e5ca-8975-4cd7-8765-2acd160c3307-utilities" (OuterVolumeSpecName: "utilities") pod "3755e5ca-8975-4cd7-8765-2acd160c3307" (UID: "3755e5ca-8975-4cd7-8765-2acd160c3307"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:01:55 crc kubenswrapper[4826]: I0129 08:01:55.401531 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3755e5ca-8975-4cd7-8765-2acd160c3307-kube-api-access-vnvt2" (OuterVolumeSpecName: "kube-api-access-vnvt2") pod "3755e5ca-8975-4cd7-8765-2acd160c3307" (UID: "3755e5ca-8975-4cd7-8765-2acd160c3307"). InnerVolumeSpecName "kube-api-access-vnvt2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:01:55 crc kubenswrapper[4826]: I0129 08:01:55.496330 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3755e5ca-8975-4cd7-8765-2acd160c3307-catalog-content\") pod \"3755e5ca-8975-4cd7-8765-2acd160c3307\" (UID: \"3755e5ca-8975-4cd7-8765-2acd160c3307\") " Jan 29 08:01:55 crc kubenswrapper[4826]: I0129 08:01:55.496882 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnvt2\" (UniqueName: \"kubernetes.io/projected/3755e5ca-8975-4cd7-8765-2acd160c3307-kube-api-access-vnvt2\") on node \"crc\" DevicePath \"\"" Jan 29 08:01:55 crc kubenswrapper[4826]: I0129 08:01:55.496905 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3755e5ca-8975-4cd7-8765-2acd160c3307-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:01:55 crc kubenswrapper[4826]: I0129 08:01:55.532524 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3755e5ca-8975-4cd7-8765-2acd160c3307-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3755e5ca-8975-4cd7-8765-2acd160c3307" (UID: "3755e5ca-8975-4cd7-8765-2acd160c3307"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:01:55 crc kubenswrapper[4826]: I0129 08:01:55.598145 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3755e5ca-8975-4cd7-8765-2acd160c3307-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:01:56 crc kubenswrapper[4826]: I0129 08:01:56.183034 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjkcx" event={"ID":"3755e5ca-8975-4cd7-8765-2acd160c3307","Type":"ContainerDied","Data":"c303f8e0d4a2e09bc27590d761202dfa3f1686000761930b45eb0a9cb5783c24"} Jan 29 08:01:56 crc kubenswrapper[4826]: I0129 08:01:56.183080 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vjkcx" Jan 29 08:01:56 crc kubenswrapper[4826]: I0129 08:01:56.183093 4826 scope.go:117] "RemoveContainer" containerID="171b19dc15fcde6b340d70b559a125033753e5e9c31a22cbb7c21df45efc9294" Jan 29 08:01:56 crc kubenswrapper[4826]: I0129 08:01:56.215434 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vjkcx"] Jan 29 08:01:56 crc kubenswrapper[4826]: I0129 08:01:56.223427 4826 scope.go:117] "RemoveContainer" containerID="b5ac97f4c334c48878c36818b9f3fd144078b3d27c35a5090803f81d9cb96438" Jan 29 08:01:56 crc kubenswrapper[4826]: I0129 08:01:56.228575 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vjkcx"] Jan 29 08:01:56 crc kubenswrapper[4826]: I0129 08:01:56.246613 4826 scope.go:117] "RemoveContainer" containerID="4c2f1d114468eb970a31e5028cf5482a7f5196f1c143077700d008d68ed54020" Jan 29 08:01:56 crc kubenswrapper[4826]: I0129 08:01:56.821257 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3755e5ca-8975-4cd7-8765-2acd160c3307" path="/var/lib/kubelet/pods/3755e5ca-8975-4cd7-8765-2acd160c3307/volumes" Jan 29 08:02:02 crc 
kubenswrapper[4826]: I0129 08:02:02.576660 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9wpxt"] Jan 29 08:02:02 crc kubenswrapper[4826]: E0129 08:02:02.577664 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3755e5ca-8975-4cd7-8765-2acd160c3307" containerName="extract-utilities" Jan 29 08:02:02 crc kubenswrapper[4826]: I0129 08:02:02.577694 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="3755e5ca-8975-4cd7-8765-2acd160c3307" containerName="extract-utilities" Jan 29 08:02:02 crc kubenswrapper[4826]: E0129 08:02:02.577704 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3755e5ca-8975-4cd7-8765-2acd160c3307" containerName="extract-content" Jan 29 08:02:02 crc kubenswrapper[4826]: I0129 08:02:02.577713 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="3755e5ca-8975-4cd7-8765-2acd160c3307" containerName="extract-content" Jan 29 08:02:02 crc kubenswrapper[4826]: E0129 08:02:02.577730 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3755e5ca-8975-4cd7-8765-2acd160c3307" containerName="registry-server" Jan 29 08:02:02 crc kubenswrapper[4826]: I0129 08:02:02.577737 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="3755e5ca-8975-4cd7-8765-2acd160c3307" containerName="registry-server" Jan 29 08:02:02 crc kubenswrapper[4826]: I0129 08:02:02.577873 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="3755e5ca-8975-4cd7-8765-2acd160c3307" containerName="registry-server" Jan 29 08:02:02 crc kubenswrapper[4826]: I0129 08:02:02.579003 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9wpxt" Jan 29 08:02:02 crc kubenswrapper[4826]: I0129 08:02:02.589315 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9wpxt"] Jan 29 08:02:02 crc kubenswrapper[4826]: I0129 08:02:02.755944 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx5hl\" (UniqueName: \"kubernetes.io/projected/f5680354-0ec5-418e-a681-1af56a789b73-kube-api-access-fx5hl\") pod \"certified-operators-9wpxt\" (UID: \"f5680354-0ec5-418e-a681-1af56a789b73\") " pod="openshift-marketplace/certified-operators-9wpxt" Jan 29 08:02:02 crc kubenswrapper[4826]: I0129 08:02:02.756462 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5680354-0ec5-418e-a681-1af56a789b73-utilities\") pod \"certified-operators-9wpxt\" (UID: \"f5680354-0ec5-418e-a681-1af56a789b73\") " pod="openshift-marketplace/certified-operators-9wpxt" Jan 29 08:02:02 crc kubenswrapper[4826]: I0129 08:02:02.756657 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5680354-0ec5-418e-a681-1af56a789b73-catalog-content\") pod \"certified-operators-9wpxt\" (UID: \"f5680354-0ec5-418e-a681-1af56a789b73\") " pod="openshift-marketplace/certified-operators-9wpxt" Jan 29 08:02:02 crc kubenswrapper[4826]: I0129 08:02:02.858709 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5680354-0ec5-418e-a681-1af56a789b73-utilities\") pod \"certified-operators-9wpxt\" (UID: \"f5680354-0ec5-418e-a681-1af56a789b73\") " pod="openshift-marketplace/certified-operators-9wpxt" Jan 29 08:02:02 crc kubenswrapper[4826]: I0129 08:02:02.858845 4826 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5680354-0ec5-418e-a681-1af56a789b73-catalog-content\") pod \"certified-operators-9wpxt\" (UID: \"f5680354-0ec5-418e-a681-1af56a789b73\") " pod="openshift-marketplace/certified-operators-9wpxt" Jan 29 08:02:02 crc kubenswrapper[4826]: I0129 08:02:02.858920 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx5hl\" (UniqueName: \"kubernetes.io/projected/f5680354-0ec5-418e-a681-1af56a789b73-kube-api-access-fx5hl\") pod \"certified-operators-9wpxt\" (UID: \"f5680354-0ec5-418e-a681-1af56a789b73\") " pod="openshift-marketplace/certified-operators-9wpxt" Jan 29 08:02:02 crc kubenswrapper[4826]: I0129 08:02:02.859149 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5680354-0ec5-418e-a681-1af56a789b73-utilities\") pod \"certified-operators-9wpxt\" (UID: \"f5680354-0ec5-418e-a681-1af56a789b73\") " pod="openshift-marketplace/certified-operators-9wpxt" Jan 29 08:02:02 crc kubenswrapper[4826]: I0129 08:02:02.859450 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5680354-0ec5-418e-a681-1af56a789b73-catalog-content\") pod \"certified-operators-9wpxt\" (UID: \"f5680354-0ec5-418e-a681-1af56a789b73\") " pod="openshift-marketplace/certified-operators-9wpxt" Jan 29 08:02:02 crc kubenswrapper[4826]: I0129 08:02:02.883881 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx5hl\" (UniqueName: \"kubernetes.io/projected/f5680354-0ec5-418e-a681-1af56a789b73-kube-api-access-fx5hl\") pod \"certified-operators-9wpxt\" (UID: \"f5680354-0ec5-418e-a681-1af56a789b73\") " pod="openshift-marketplace/certified-operators-9wpxt" Jan 29 08:02:02 crc kubenswrapper[4826]: I0129 08:02:02.915801 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9wpxt" Jan 29 08:02:03 crc kubenswrapper[4826]: I0129 08:02:03.808971 4826 scope.go:117] "RemoveContainer" containerID="91d43ff4726de7f88ea18b0c649f8fb814edd6a2376ec59b98b6b3d03af51e74" Jan 29 08:02:03 crc kubenswrapper[4826]: E0129 08:02:03.809429 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:02:14 crc kubenswrapper[4826]: I0129 08:02:14.813259 4826 scope.go:117] "RemoveContainer" containerID="91d43ff4726de7f88ea18b0c649f8fb814edd6a2376ec59b98b6b3d03af51e74" Jan 29 08:02:14 crc kubenswrapper[4826]: E0129 08:02:14.814800 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:02:19 crc kubenswrapper[4826]: E0129 08:02:19.387105 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:b130bed8e4e0ff029dd29fba80441dc6" Jan 29 08:02:19 crc kubenswrapper[4826]: E0129 08:02:19.387654 4826 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:b130bed8e4e0ff029dd29fba80441dc6" Jan 29 08:02:19 crc kubenswrapper[4826]: E0129 08:02:19.387809 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:b130bed8e4e0ff029dd29fba80441dc6,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7dpn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscal
ation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(0a52085e-690a-44e3-a1f3-d4d668b6ff8e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:02:19 crc kubenswrapper[4826]: E0129 08:02:19.388972 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="0a52085e-690a-44e3-a1f3-d4d668b6ff8e" Jan 29 08:02:19 crc kubenswrapper[4826]: E0129 08:02:19.403221 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:b130bed8e4e0ff029dd29fba80441dc6" Jan 29 08:02:19 crc kubenswrapper[4826]: E0129 08:02:19.403265 4826 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:b130bed8e4e0ff029dd29fba80441dc6" Jan 29 08:02:19 crc kubenswrapper[4826]: E0129 08:02:19.403457 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:b130bed8e4e0ff029dd29fba80441dc6,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 
's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qp6zv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorP
rofile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(376bc5a2-a062-4c32-ae34-ea2d16f3e1c8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:02:19 crc kubenswrapper[4826]: E0129 08:02:19.404598 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="376bc5a2-a062-4c32-ae34-ea2d16f3e1c8" Jan 29 08:02:20 crc kubenswrapper[4826]: E0129 08:02:20.532445 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:b130bed8e4e0ff029dd29fba80441dc6" Jan 29 08:02:20 crc kubenswrapper[4826]: E0129 08:02:20.532839 4826 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:b130bed8e4e0ff029dd29fba80441dc6" Jan 29 08:02:20 crc kubenswrapper[4826]: E0129 08:02:20.533091 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:b130bed8e4e0ff029dd29fba80441dc6,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n8chc6h5bh56fh546hb7hc8h67h5bchffh577h697h5b5h5bdh59bhf6hf4h558hb5h578h595h5cchfbh644h59ch7fh654h547h587h5cbh5d5h8fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rzzgr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-678974fb59-lrzsc_openstack(e26fece8-b284-4d94-9f0f-8b86d1910819): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:02:20 crc kubenswrapper[4826]: E0129 08:02:20.534985 4826 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-678974fb59-lrzsc" podUID="e26fece8-b284-4d94-9f0f-8b86d1910819" Jan 29 08:02:20 crc kubenswrapper[4826]: E0129 08:02:20.539637 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:b130bed8e4e0ff029dd29fba80441dc6" Jan 29 08:02:20 crc kubenswrapper[4826]: E0129 08:02:20.539685 4826 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:b130bed8e4e0ff029dd29fba80441dc6" Jan 29 08:02:20 crc kubenswrapper[4826]: E0129 08:02:20.539815 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:b130bed8e4e0ff029dd29fba80441dc6,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n564h564h676h699hcdh67bh66hfdh569h545h648h94h546h696h668h89h96h667h575h595h5d9h584h8dhbdh697h54bhb7h58fh5c9hd8h5cdh5c7q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5n4xh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-576ffbb965-vmkd8_openstack(5af500c6-d9f7-460d-9969-e3f444fb0c82): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:02:20 crc kubenswrapper[4826]: E0129 08:02:20.541477 4826 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-576ffbb965-vmkd8" podUID="5af500c6-d9f7-460d-9969-e3f444fb0c82" Jan 29 08:02:20 crc kubenswrapper[4826]: E0129 08:02:20.570234 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:b130bed8e4e0ff029dd29fba80441dc6" Jan 29 08:02:20 crc kubenswrapper[4826]: E0129 08:02:20.570284 4826 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:b130bed8e4e0ff029dd29fba80441dc6" Jan 29 08:02:20 crc kubenswrapper[4826]: E0129 08:02:20.570510 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:b130bed8e4e0ff029dd29fba80441dc6,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n697h54dhb7h666h69h76h59ch55ch65ch596h8h79h5c8h57hc8hfch5d7h697h79h698h5fch644hf9h54chbfh655hfchcbh5f8h646h5f7h89q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kz9zv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-54c698c8ff-xn9gh_openstack(35e533f3-05a6-4364-b392-39b96f0fbea9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:02:20 crc kubenswrapper[4826]: E0129 08:02:20.571673 4826 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-54c698c8ff-xn9gh" podUID="35e533f3-05a6-4364-b392-39b96f0fbea9" Jan 29 08:02:20 crc kubenswrapper[4826]: E0129 08:02:20.574277 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:b130bed8e4e0ff029dd29fba80441dc6" Jan 29 08:02:20 crc kubenswrapper[4826]: E0129 08:02:20.574366 4826 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:b130bed8e4e0ff029dd29fba80441dc6" Jan 29 08:02:20 crc kubenswrapper[4826]: E0129 08:02:20.574488 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:b130bed8e4e0ff029dd29fba80441dc6,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n647h57bh695h68dh54fhf5hc5h67h5d4hb6h696h685h54ch6h599h5c5h679h74h689h644h5c8h64ch555h5c6h5dh569h698h59fh66ch57bh5b9hb7q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8c2l9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-979474d99-hxwgz_openstack(0b451083-b71d-4c92-91ef-89009a9a7389): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 08:02:20 crc kubenswrapper[4826]: E0129 08:02:20.575680 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context 
canceled\"" pod="openstack/dnsmasq-dns-979474d99-hxwgz" podUID="0b451083-b71d-4c92-91ef-89009a9a7389" Jan 29 08:02:20 crc kubenswrapper[4826]: I0129 08:02:20.913812 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9wpxt"] Jan 29 08:02:20 crc kubenswrapper[4826]: W0129 08:02:20.916599 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5680354_0ec5_418e_a681_1af56a789b73.slice/crio-03d91905ed5cef67098dd3ff314150e4d15d6032dd18551c91cd13a68e22b55e WatchSource:0}: Error finding container 03d91905ed5cef67098dd3ff314150e4d15d6032dd18551c91cd13a68e22b55e: Status 404 returned error can't find the container with id 03d91905ed5cef67098dd3ff314150e4d15d6032dd18551c91cd13a68e22b55e Jan 29 08:02:21 crc kubenswrapper[4826]: I0129 08:02:21.452070 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae","Type":"ContainerStarted","Data":"680cfce53ead21cfdaea302a2ffa81fa42e08fcc184fbac6e01bbaf329877408"} Jan 29 08:02:21 crc kubenswrapper[4826]: I0129 08:02:21.454723 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0a52085e-690a-44e3-a1f3-d4d668b6ff8e","Type":"ContainerStarted","Data":"c8219b676d1a7e29297f449f7f89bcfb088ec793603f3b6b11a124455729135a"} Jan 29 08:02:21 crc kubenswrapper[4826]: I0129 08:02:21.456717 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d01c5a41-f62d-442e-ab5f-69d0abbfa549","Type":"ContainerStarted","Data":"b024ad8623e9c260e2de1dd02132fb27264fd9382d84f43a3a8b0993a1ce4daf"} Jan 29 08:02:21 crc kubenswrapper[4826]: I0129 08:02:21.457125 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 29 08:02:21 crc kubenswrapper[4826]: I0129 08:02:21.462710 4826 generic.go:334] "Generic (PLEG): container 
finished" podID="f5680354-0ec5-418e-a681-1af56a789b73" containerID="fd3f3759a7c58b6f63f74a64c2a5836f477504a8af5c440ef8d95f0d15bf1da7" exitCode=0 Jan 29 08:02:21 crc kubenswrapper[4826]: I0129 08:02:21.463505 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wpxt" event={"ID":"f5680354-0ec5-418e-a681-1af56a789b73","Type":"ContainerDied","Data":"fd3f3759a7c58b6f63f74a64c2a5836f477504a8af5c440ef8d95f0d15bf1da7"} Jan 29 08:02:21 crc kubenswrapper[4826]: I0129 08:02:21.463548 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wpxt" event={"ID":"f5680354-0ec5-418e-a681-1af56a789b73","Type":"ContainerStarted","Data":"03d91905ed5cef67098dd3ff314150e4d15d6032dd18551c91cd13a68e22b55e"} Jan 29 08:02:21 crc kubenswrapper[4826]: E0129 08:02:21.466542 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:b130bed8e4e0ff029dd29fba80441dc6\\\"\"" pod="openstack/dnsmasq-dns-678974fb59-lrzsc" podUID="e26fece8-b284-4d94-9f0f-8b86d1910819" Jan 29 08:02:21 crc kubenswrapper[4826]: E0129 08:02:21.466595 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:b130bed8e4e0ff029dd29fba80441dc6\\\"\"" pod="openstack/dnsmasq-dns-576ffbb965-vmkd8" podUID="5af500c6-d9f7-460d-9969-e3f444fb0c82" Jan 29 08:02:21 crc kubenswrapper[4826]: I0129 08:02:21.565029 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=3.261607669 podStartE2EDuration="35.565000252s" podCreationTimestamp="2026-01-29 08:01:46 +0000 UTC" firstStartedPulling="2026-01-29 08:01:47.05741041 +0000 UTC m=+4690.919203479" 
lastFinishedPulling="2026-01-29 08:02:19.360802983 +0000 UTC m=+4723.222596062" observedRunningTime="2026-01-29 08:02:21.559830436 +0000 UTC m=+4725.421623495" watchObservedRunningTime="2026-01-29 08:02:21.565000252 +0000 UTC m=+4725.426793321" Jan 29 08:02:22 crc kubenswrapper[4826]: I0129 08:02:22.301906 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-979474d99-hxwgz" Jan 29 08:02:22 crc kubenswrapper[4826]: I0129 08:02:22.304817 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54c698c8ff-xn9gh" Jan 29 08:02:22 crc kubenswrapper[4826]: I0129 08:02:22.333160 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8c2l9\" (UniqueName: \"kubernetes.io/projected/0b451083-b71d-4c92-91ef-89009a9a7389-kube-api-access-8c2l9\") pod \"0b451083-b71d-4c92-91ef-89009a9a7389\" (UID: \"0b451083-b71d-4c92-91ef-89009a9a7389\") " Jan 29 08:02:22 crc kubenswrapper[4826]: I0129 08:02:22.333545 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b451083-b71d-4c92-91ef-89009a9a7389-config\") pod \"0b451083-b71d-4c92-91ef-89009a9a7389\" (UID: \"0b451083-b71d-4c92-91ef-89009a9a7389\") " Jan 29 08:02:22 crc kubenswrapper[4826]: I0129 08:02:22.333751 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35e533f3-05a6-4364-b392-39b96f0fbea9-dns-svc\") pod \"35e533f3-05a6-4364-b392-39b96f0fbea9\" (UID: \"35e533f3-05a6-4364-b392-39b96f0fbea9\") " Jan 29 08:02:22 crc kubenswrapper[4826]: I0129 08:02:22.334012 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kz9zv\" (UniqueName: \"kubernetes.io/projected/35e533f3-05a6-4364-b392-39b96f0fbea9-kube-api-access-kz9zv\") pod \"35e533f3-05a6-4364-b392-39b96f0fbea9\" (UID: 
\"35e533f3-05a6-4364-b392-39b96f0fbea9\") " Jan 29 08:02:22 crc kubenswrapper[4826]: I0129 08:02:22.336341 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b451083-b71d-4c92-91ef-89009a9a7389-config" (OuterVolumeSpecName: "config") pod "0b451083-b71d-4c92-91ef-89009a9a7389" (UID: "0b451083-b71d-4c92-91ef-89009a9a7389"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:02:22 crc kubenswrapper[4826]: I0129 08:02:22.337564 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35e533f3-05a6-4364-b392-39b96f0fbea9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "35e533f3-05a6-4364-b392-39b96f0fbea9" (UID: "35e533f3-05a6-4364-b392-39b96f0fbea9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:02:22 crc kubenswrapper[4826]: I0129 08:02:22.340989 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35e533f3-05a6-4364-b392-39b96f0fbea9-kube-api-access-kz9zv" (OuterVolumeSpecName: "kube-api-access-kz9zv") pod "35e533f3-05a6-4364-b392-39b96f0fbea9" (UID: "35e533f3-05a6-4364-b392-39b96f0fbea9"). InnerVolumeSpecName "kube-api-access-kz9zv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:02:22 crc kubenswrapper[4826]: I0129 08:02:22.344633 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b451083-b71d-4c92-91ef-89009a9a7389-kube-api-access-8c2l9" (OuterVolumeSpecName: "kube-api-access-8c2l9") pod "0b451083-b71d-4c92-91ef-89009a9a7389" (UID: "0b451083-b71d-4c92-91ef-89009a9a7389"). InnerVolumeSpecName "kube-api-access-8c2l9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:02:22 crc kubenswrapper[4826]: I0129 08:02:22.435828 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35e533f3-05a6-4364-b392-39b96f0fbea9-config\") pod \"35e533f3-05a6-4364-b392-39b96f0fbea9\" (UID: \"35e533f3-05a6-4364-b392-39b96f0fbea9\") " Jan 29 08:02:22 crc kubenswrapper[4826]: I0129 08:02:22.436352 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35e533f3-05a6-4364-b392-39b96f0fbea9-config" (OuterVolumeSpecName: "config") pod "35e533f3-05a6-4364-b392-39b96f0fbea9" (UID: "35e533f3-05a6-4364-b392-39b96f0fbea9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:02:22 crc kubenswrapper[4826]: I0129 08:02:22.436467 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8c2l9\" (UniqueName: \"kubernetes.io/projected/0b451083-b71d-4c92-91ef-89009a9a7389-kube-api-access-8c2l9\") on node \"crc\" DevicePath \"\"" Jan 29 08:02:22 crc kubenswrapper[4826]: I0129 08:02:22.436492 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b451083-b71d-4c92-91ef-89009a9a7389-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:02:22 crc kubenswrapper[4826]: I0129 08:02:22.436528 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35e533f3-05a6-4364-b392-39b96f0fbea9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 08:02:22 crc kubenswrapper[4826]: I0129 08:02:22.436546 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kz9zv\" (UniqueName: \"kubernetes.io/projected/35e533f3-05a6-4364-b392-39b96f0fbea9-kube-api-access-kz9zv\") on node \"crc\" DevicePath \"\"" Jan 29 08:02:22 crc kubenswrapper[4826]: I0129 08:02:22.436562 4826 reconciler_common.go:293] "Volume detached for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/35e533f3-05a6-4364-b392-39b96f0fbea9-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:02:22 crc kubenswrapper[4826]: I0129 08:02:22.470992 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8","Type":"ContainerStarted","Data":"2cae4c388ac46da0d0758a2c53e1c6f81d8e9f1b0ea830e0a281452453dd4f20"} Jan 29 08:02:22 crc kubenswrapper[4826]: I0129 08:02:22.472665 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wpxt" event={"ID":"f5680354-0ec5-418e-a681-1af56a789b73","Type":"ContainerStarted","Data":"cea5c07c62e503f85f41d4e021af21a920fa5b786df704da18ba5495c6dd3fbd"} Jan 29 08:02:22 crc kubenswrapper[4826]: I0129 08:02:22.473955 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54c698c8ff-xn9gh" event={"ID":"35e533f3-05a6-4364-b392-39b96f0fbea9","Type":"ContainerDied","Data":"c29a099689853609a73ecd17f2e515dbc42728ba6822db980b3e528d8cce4372"} Jan 29 08:02:22 crc kubenswrapper[4826]: I0129 08:02:22.473969 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54c698c8ff-xn9gh" Jan 29 08:02:22 crc kubenswrapper[4826]: I0129 08:02:22.475134 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-979474d99-hxwgz" event={"ID":"0b451083-b71d-4c92-91ef-89009a9a7389","Type":"ContainerDied","Data":"55814ac00b9e4c8cb2ec82e44e762244e71195c634d39051498a70a60cf03894"} Jan 29 08:02:22 crc kubenswrapper[4826]: I0129 08:02:22.475195 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-979474d99-hxwgz" Jan 29 08:02:22 crc kubenswrapper[4826]: I0129 08:02:22.477391 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf","Type":"ContainerStarted","Data":"2be5f188e6ec8202e6f83fc5ea5558d869009b79d2cbe45a0c4bcebf63b079ae"} Jan 29 08:02:22 crc kubenswrapper[4826]: I0129 08:02:22.527047 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54c698c8ff-xn9gh"] Jan 29 08:02:22 crc kubenswrapper[4826]: I0129 08:02:22.533164 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54c698c8ff-xn9gh"] Jan 29 08:02:22 crc kubenswrapper[4826]: I0129 08:02:22.605395 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-979474d99-hxwgz"] Jan 29 08:02:22 crc kubenswrapper[4826]: I0129 08:02:22.611837 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-979474d99-hxwgz"] Jan 29 08:02:22 crc kubenswrapper[4826]: I0129 08:02:22.819858 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b451083-b71d-4c92-91ef-89009a9a7389" path="/var/lib/kubelet/pods/0b451083-b71d-4c92-91ef-89009a9a7389/volumes" Jan 29 08:02:22 crc kubenswrapper[4826]: I0129 08:02:22.820618 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35e533f3-05a6-4364-b392-39b96f0fbea9" path="/var/lib/kubelet/pods/35e533f3-05a6-4364-b392-39b96f0fbea9/volumes" Jan 29 08:02:23 crc kubenswrapper[4826]: I0129 08:02:23.489896 4826 generic.go:334] "Generic (PLEG): container finished" podID="f5680354-0ec5-418e-a681-1af56a789b73" containerID="cea5c07c62e503f85f41d4e021af21a920fa5b786df704da18ba5495c6dd3fbd" exitCode=0 Jan 29 08:02:23 crc kubenswrapper[4826]: I0129 08:02:23.489962 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wpxt" 
event={"ID":"f5680354-0ec5-418e-a681-1af56a789b73","Type":"ContainerDied","Data":"cea5c07c62e503f85f41d4e021af21a920fa5b786df704da18ba5495c6dd3fbd"} Jan 29 08:02:24 crc kubenswrapper[4826]: I0129 08:02:24.500857 4826 generic.go:334] "Generic (PLEG): container finished" podID="b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae" containerID="680cfce53ead21cfdaea302a2ffa81fa42e08fcc184fbac6e01bbaf329877408" exitCode=0 Jan 29 08:02:24 crc kubenswrapper[4826]: I0129 08:02:24.500960 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae","Type":"ContainerDied","Data":"680cfce53ead21cfdaea302a2ffa81fa42e08fcc184fbac6e01bbaf329877408"} Jan 29 08:02:24 crc kubenswrapper[4826]: I0129 08:02:24.510180 4826 generic.go:334] "Generic (PLEG): container finished" podID="0a52085e-690a-44e3-a1f3-d4d668b6ff8e" containerID="c8219b676d1a7e29297f449f7f89bcfb088ec793603f3b6b11a124455729135a" exitCode=0 Jan 29 08:02:24 crc kubenswrapper[4826]: I0129 08:02:24.510282 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0a52085e-690a-44e3-a1f3-d4d668b6ff8e","Type":"ContainerDied","Data":"c8219b676d1a7e29297f449f7f89bcfb088ec793603f3b6b11a124455729135a"} Jan 29 08:02:24 crc kubenswrapper[4826]: I0129 08:02:24.517374 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wpxt" event={"ID":"f5680354-0ec5-418e-a681-1af56a789b73","Type":"ContainerStarted","Data":"0eccc1ee0fd2c2aebe4822424cca48980f88a708cefa7e3f6ceb2c7dd32be6e8"} Jan 29 08:02:24 crc kubenswrapper[4826]: I0129 08:02:24.599579 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9wpxt" podStartSLOduration=20.119821082 podStartE2EDuration="22.59955535s" podCreationTimestamp="2026-01-29 08:02:02 +0000 UTC" firstStartedPulling="2026-01-29 08:02:21.466146465 +0000 UTC m=+4725.327939534" 
lastFinishedPulling="2026-01-29 08:02:23.945880693 +0000 UTC m=+4727.807673802" observedRunningTime="2026-01-29 08:02:24.59612017 +0000 UTC m=+4728.457913249" watchObservedRunningTime="2026-01-29 08:02:24.59955535 +0000 UTC m=+4728.461348419" Jan 29 08:02:25 crc kubenswrapper[4826]: I0129 08:02:25.133604 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-grkxk"] Jan 29 08:02:25 crc kubenswrapper[4826]: I0129 08:02:25.135742 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-grkxk" Jan 29 08:02:25 crc kubenswrapper[4826]: I0129 08:02:25.144893 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-grkxk"] Jan 29 08:02:25 crc kubenswrapper[4826]: I0129 08:02:25.185623 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmx2n\" (UniqueName: \"kubernetes.io/projected/df57a550-0fc4-4c54-8632-4954fe75856c-kube-api-access-cmx2n\") pod \"redhat-operators-grkxk\" (UID: \"df57a550-0fc4-4c54-8632-4954fe75856c\") " pod="openshift-marketplace/redhat-operators-grkxk" Jan 29 08:02:25 crc kubenswrapper[4826]: I0129 08:02:25.185678 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df57a550-0fc4-4c54-8632-4954fe75856c-utilities\") pod \"redhat-operators-grkxk\" (UID: \"df57a550-0fc4-4c54-8632-4954fe75856c\") " pod="openshift-marketplace/redhat-operators-grkxk" Jan 29 08:02:25 crc kubenswrapper[4826]: I0129 08:02:25.185716 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df57a550-0fc4-4c54-8632-4954fe75856c-catalog-content\") pod \"redhat-operators-grkxk\" (UID: \"df57a550-0fc4-4c54-8632-4954fe75856c\") " pod="openshift-marketplace/redhat-operators-grkxk" 
Jan 29 08:02:25 crc kubenswrapper[4826]: I0129 08:02:25.287100 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmx2n\" (UniqueName: \"kubernetes.io/projected/df57a550-0fc4-4c54-8632-4954fe75856c-kube-api-access-cmx2n\") pod \"redhat-operators-grkxk\" (UID: \"df57a550-0fc4-4c54-8632-4954fe75856c\") " pod="openshift-marketplace/redhat-operators-grkxk" Jan 29 08:02:25 crc kubenswrapper[4826]: I0129 08:02:25.287168 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df57a550-0fc4-4c54-8632-4954fe75856c-utilities\") pod \"redhat-operators-grkxk\" (UID: \"df57a550-0fc4-4c54-8632-4954fe75856c\") " pod="openshift-marketplace/redhat-operators-grkxk" Jan 29 08:02:25 crc kubenswrapper[4826]: I0129 08:02:25.287205 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df57a550-0fc4-4c54-8632-4954fe75856c-catalog-content\") pod \"redhat-operators-grkxk\" (UID: \"df57a550-0fc4-4c54-8632-4954fe75856c\") " pod="openshift-marketplace/redhat-operators-grkxk" Jan 29 08:02:25 crc kubenswrapper[4826]: I0129 08:02:25.287906 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df57a550-0fc4-4c54-8632-4954fe75856c-catalog-content\") pod \"redhat-operators-grkxk\" (UID: \"df57a550-0fc4-4c54-8632-4954fe75856c\") " pod="openshift-marketplace/redhat-operators-grkxk" Jan 29 08:02:25 crc kubenswrapper[4826]: I0129 08:02:25.287917 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df57a550-0fc4-4c54-8632-4954fe75856c-utilities\") pod \"redhat-operators-grkxk\" (UID: \"df57a550-0fc4-4c54-8632-4954fe75856c\") " pod="openshift-marketplace/redhat-operators-grkxk" Jan 29 08:02:25 crc kubenswrapper[4826]: I0129 08:02:25.317997 4826 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmx2n\" (UniqueName: \"kubernetes.io/projected/df57a550-0fc4-4c54-8632-4954fe75856c-kube-api-access-cmx2n\") pod \"redhat-operators-grkxk\" (UID: \"df57a550-0fc4-4c54-8632-4954fe75856c\") " pod="openshift-marketplace/redhat-operators-grkxk" Jan 29 08:02:25 crc kubenswrapper[4826]: I0129 08:02:25.453972 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-grkxk" Jan 29 08:02:25 crc kubenswrapper[4826]: I0129 08:02:25.551886 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae","Type":"ContainerStarted","Data":"7b7c96af2353c5a3cc0f87d02f582f920914c74b149ab142879bcc8346a98278"} Jan 29 08:02:25 crc kubenswrapper[4826]: I0129 08:02:25.557509 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0a52085e-690a-44e3-a1f3-d4d668b6ff8e","Type":"ContainerStarted","Data":"e6819af3b5796ad24a9073942ffbb5fdb7fca6f745b5346d5236024bfd48602c"} Jan 29 08:02:25 crc kubenswrapper[4826]: I0129 08:02:25.582223 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=7.579567731 podStartE2EDuration="42.58220925s" podCreationTimestamp="2026-01-29 08:01:43 +0000 UTC" firstStartedPulling="2026-01-29 08:01:45.451651327 +0000 UTC m=+4689.313444396" lastFinishedPulling="2026-01-29 08:02:20.454292806 +0000 UTC m=+4724.316085915" observedRunningTime="2026-01-29 08:02:25.577798634 +0000 UTC m=+4729.439591703" watchObservedRunningTime="2026-01-29 08:02:25.58220925 +0000 UTC m=+4729.444002319" Jan 29 08:02:25 crc kubenswrapper[4826]: I0129 08:02:25.616162 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371995.23863 podStartE2EDuration="41.616144962s" 
podCreationTimestamp="2026-01-29 08:01:44 +0000 UTC" firstStartedPulling="2026-01-29 08:01:46.728664511 +0000 UTC m=+4690.590457580" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:02:25.612899166 +0000 UTC m=+4729.474692235" watchObservedRunningTime="2026-01-29 08:02:25.616144962 +0000 UTC m=+4729.477938021" Jan 29 08:02:25 crc kubenswrapper[4826]: I0129 08:02:25.982987 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-grkxk"] Jan 29 08:02:26 crc kubenswrapper[4826]: I0129 08:02:26.217104 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 29 08:02:26 crc kubenswrapper[4826]: I0129 08:02:26.217423 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 29 08:02:26 crc kubenswrapper[4826]: I0129 08:02:26.573952 4826 generic.go:334] "Generic (PLEG): container finished" podID="df57a550-0fc4-4c54-8632-4954fe75856c" containerID="85df5376c294a1c568a4db2c3a4fae193003204391bb6a8443435f533d3fa21a" exitCode=0 Jan 29 08:02:26 crc kubenswrapper[4826]: I0129 08:02:26.575129 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-grkxk" event={"ID":"df57a550-0fc4-4c54-8632-4954fe75856c","Type":"ContainerDied","Data":"85df5376c294a1c568a4db2c3a4fae193003204391bb6a8443435f533d3fa21a"} Jan 29 08:02:26 crc kubenswrapper[4826]: I0129 08:02:26.575158 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-grkxk" event={"ID":"df57a550-0fc4-4c54-8632-4954fe75856c","Type":"ContainerStarted","Data":"6e11d09d800c5eda38c862e69ef28a425b161af7d1370fd848eec623584eea61"} Jan 29 08:02:26 crc kubenswrapper[4826]: I0129 08:02:26.598026 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 29 08:02:27 crc kubenswrapper[4826]: I0129 08:02:27.587664 
4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-grkxk" event={"ID":"df57a550-0fc4-4c54-8632-4954fe75856c","Type":"ContainerStarted","Data":"012465c3e397fad04a6996d6cb7259ee422598c07cd39ac5b15be942a44984b3"} Jan 29 08:02:28 crc kubenswrapper[4826]: I0129 08:02:28.597748 4826 generic.go:334] "Generic (PLEG): container finished" podID="df57a550-0fc4-4c54-8632-4954fe75856c" containerID="012465c3e397fad04a6996d6cb7259ee422598c07cd39ac5b15be942a44984b3" exitCode=0 Jan 29 08:02:28 crc kubenswrapper[4826]: I0129 08:02:28.597815 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-grkxk" event={"ID":"df57a550-0fc4-4c54-8632-4954fe75856c","Type":"ContainerDied","Data":"012465c3e397fad04a6996d6cb7259ee422598c07cd39ac5b15be942a44984b3"} Jan 29 08:02:28 crc kubenswrapper[4826]: I0129 08:02:28.808921 4826 scope.go:117] "RemoveContainer" containerID="91d43ff4726de7f88ea18b0c649f8fb814edd6a2376ec59b98b6b3d03af51e74" Jan 29 08:02:28 crc kubenswrapper[4826]: E0129 08:02:28.809359 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:02:29 crc kubenswrapper[4826]: I0129 08:02:29.611622 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-grkxk" event={"ID":"df57a550-0fc4-4c54-8632-4954fe75856c","Type":"ContainerStarted","Data":"5f9a500040f2402d5d97cbf816e286f8a5e93f935a0ad1662cd16ad67fbfc712"} Jan 29 08:02:29 crc kubenswrapper[4826]: I0129 08:02:29.641092 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-grkxk" podStartSLOduration=1.8007095290000001 podStartE2EDuration="4.641070983s" podCreationTimestamp="2026-01-29 08:02:25 +0000 UTC" firstStartedPulling="2026-01-29 08:02:26.57584779 +0000 UTC m=+4730.437640869" lastFinishedPulling="2026-01-29 08:02:29.416209244 +0000 UTC m=+4733.278002323" observedRunningTime="2026-01-29 08:02:29.634603293 +0000 UTC m=+4733.496396372" watchObservedRunningTime="2026-01-29 08:02:29.641070983 +0000 UTC m=+4733.502864062" Jan 29 08:02:32 crc kubenswrapper[4826]: I0129 08:02:32.332693 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 29 08:02:32 crc kubenswrapper[4826]: I0129 08:02:32.448364 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 29 08:02:32 crc kubenswrapper[4826]: I0129 08:02:32.916157 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9wpxt" Jan 29 08:02:32 crc kubenswrapper[4826]: I0129 08:02:32.916249 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9wpxt" Jan 29 08:02:33 crc kubenswrapper[4826]: I0129 08:02:33.357166 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9wpxt" Jan 29 08:02:33 crc kubenswrapper[4826]: I0129 08:02:33.653922 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-576ffbb965-vmkd8" event={"ID":"5af500c6-d9f7-460d-9969-e3f444fb0c82","Type":"ContainerStarted","Data":"90bfde05dbf0dfc8cb68c6d870da786179b207f0ebd5cd30bcb761d9bbd34c0a"} Jan 29 08:02:33 crc kubenswrapper[4826]: I0129 08:02:33.735862 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9wpxt" Jan 29 08:02:34 crc kubenswrapper[4826]: I0129 08:02:34.324353 
4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9wpxt"] Jan 29 08:02:34 crc kubenswrapper[4826]: I0129 08:02:34.664636 4826 generic.go:334] "Generic (PLEG): container finished" podID="5af500c6-d9f7-460d-9969-e3f444fb0c82" containerID="90bfde05dbf0dfc8cb68c6d870da786179b207f0ebd5cd30bcb761d9bbd34c0a" exitCode=0 Jan 29 08:02:34 crc kubenswrapper[4826]: I0129 08:02:34.665565 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-576ffbb965-vmkd8" event={"ID":"5af500c6-d9f7-460d-9969-e3f444fb0c82","Type":"ContainerDied","Data":"90bfde05dbf0dfc8cb68c6d870da786179b207f0ebd5cd30bcb761d9bbd34c0a"} Jan 29 08:02:34 crc kubenswrapper[4826]: I0129 08:02:34.845501 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-zqdgc"] Jan 29 08:02:34 crc kubenswrapper[4826]: I0129 08:02:34.846522 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zqdgc" Jan 29 08:02:34 crc kubenswrapper[4826]: I0129 08:02:34.849383 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 29 08:02:34 crc kubenswrapper[4826]: I0129 08:02:34.868758 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zqdgc"] Jan 29 08:02:35 crc kubenswrapper[4826]: I0129 08:02:35.003982 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 29 08:02:35 crc kubenswrapper[4826]: I0129 08:02:35.004032 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 29 08:02:35 crc kubenswrapper[4826]: I0129 08:02:35.043775 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/1bc0f11a-4196-43d6-94b8-cfc9b9f0713c-operator-scripts\") pod \"root-account-create-update-zqdgc\" (UID: \"1bc0f11a-4196-43d6-94b8-cfc9b9f0713c\") " pod="openstack/root-account-create-update-zqdgc" Jan 29 08:02:35 crc kubenswrapper[4826]: I0129 08:02:35.043816 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttkmq\" (UniqueName: \"kubernetes.io/projected/1bc0f11a-4196-43d6-94b8-cfc9b9f0713c-kube-api-access-ttkmq\") pod \"root-account-create-update-zqdgc\" (UID: \"1bc0f11a-4196-43d6-94b8-cfc9b9f0713c\") " pod="openstack/root-account-create-update-zqdgc" Jan 29 08:02:35 crc kubenswrapper[4826]: I0129 08:02:35.095715 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 29 08:02:35 crc kubenswrapper[4826]: I0129 08:02:35.145627 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bc0f11a-4196-43d6-94b8-cfc9b9f0713c-operator-scripts\") pod \"root-account-create-update-zqdgc\" (UID: \"1bc0f11a-4196-43d6-94b8-cfc9b9f0713c\") " pod="openstack/root-account-create-update-zqdgc" Jan 29 08:02:35 crc kubenswrapper[4826]: I0129 08:02:35.146064 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttkmq\" (UniqueName: \"kubernetes.io/projected/1bc0f11a-4196-43d6-94b8-cfc9b9f0713c-kube-api-access-ttkmq\") pod \"root-account-create-update-zqdgc\" (UID: \"1bc0f11a-4196-43d6-94b8-cfc9b9f0713c\") " pod="openstack/root-account-create-update-zqdgc" Jan 29 08:02:35 crc kubenswrapper[4826]: I0129 08:02:35.146443 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bc0f11a-4196-43d6-94b8-cfc9b9f0713c-operator-scripts\") pod \"root-account-create-update-zqdgc\" (UID: \"1bc0f11a-4196-43d6-94b8-cfc9b9f0713c\") " 
pod="openstack/root-account-create-update-zqdgc" Jan 29 08:02:35 crc kubenswrapper[4826]: I0129 08:02:35.167084 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttkmq\" (UniqueName: \"kubernetes.io/projected/1bc0f11a-4196-43d6-94b8-cfc9b9f0713c-kube-api-access-ttkmq\") pod \"root-account-create-update-zqdgc\" (UID: \"1bc0f11a-4196-43d6-94b8-cfc9b9f0713c\") " pod="openstack/root-account-create-update-zqdgc" Jan 29 08:02:35 crc kubenswrapper[4826]: I0129 08:02:35.220197 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zqdgc" Jan 29 08:02:35 crc kubenswrapper[4826]: I0129 08:02:35.454603 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-grkxk" Jan 29 08:02:35 crc kubenswrapper[4826]: I0129 08:02:35.454647 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-grkxk" Jan 29 08:02:35 crc kubenswrapper[4826]: I0129 08:02:35.672713 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9wpxt" podUID="f5680354-0ec5-418e-a681-1af56a789b73" containerName="registry-server" containerID="cri-o://0eccc1ee0fd2c2aebe4822424cca48980f88a708cefa7e3f6ceb2c7dd32be6e8" gracePeriod=2 Jan 29 08:02:35 crc kubenswrapper[4826]: I0129 08:02:35.712934 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zqdgc"] Jan 29 08:02:35 crc kubenswrapper[4826]: I0129 08:02:35.802940 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 29 08:02:36 crc kubenswrapper[4826]: I0129 08:02:36.137671 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9wpxt" Jan 29 08:02:36 crc kubenswrapper[4826]: I0129 08:02:36.278517 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fx5hl\" (UniqueName: \"kubernetes.io/projected/f5680354-0ec5-418e-a681-1af56a789b73-kube-api-access-fx5hl\") pod \"f5680354-0ec5-418e-a681-1af56a789b73\" (UID: \"f5680354-0ec5-418e-a681-1af56a789b73\") " Jan 29 08:02:36 crc kubenswrapper[4826]: I0129 08:02:36.278968 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5680354-0ec5-418e-a681-1af56a789b73-catalog-content\") pod \"f5680354-0ec5-418e-a681-1af56a789b73\" (UID: \"f5680354-0ec5-418e-a681-1af56a789b73\") " Jan 29 08:02:36 crc kubenswrapper[4826]: I0129 08:02:36.279103 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5680354-0ec5-418e-a681-1af56a789b73-utilities\") pod \"f5680354-0ec5-418e-a681-1af56a789b73\" (UID: \"f5680354-0ec5-418e-a681-1af56a789b73\") " Jan 29 08:02:36 crc kubenswrapper[4826]: I0129 08:02:36.279951 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5680354-0ec5-418e-a681-1af56a789b73-utilities" (OuterVolumeSpecName: "utilities") pod "f5680354-0ec5-418e-a681-1af56a789b73" (UID: "f5680354-0ec5-418e-a681-1af56a789b73"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:02:36 crc kubenswrapper[4826]: I0129 08:02:36.294019 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5680354-0ec5-418e-a681-1af56a789b73-kube-api-access-fx5hl" (OuterVolumeSpecName: "kube-api-access-fx5hl") pod "f5680354-0ec5-418e-a681-1af56a789b73" (UID: "f5680354-0ec5-418e-a681-1af56a789b73"). InnerVolumeSpecName "kube-api-access-fx5hl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:02:36 crc kubenswrapper[4826]: I0129 08:02:36.329067 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5680354-0ec5-418e-a681-1af56a789b73-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5680354-0ec5-418e-a681-1af56a789b73" (UID: "f5680354-0ec5-418e-a681-1af56a789b73"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:02:36 crc kubenswrapper[4826]: I0129 08:02:36.380839 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5680354-0ec5-418e-a681-1af56a789b73-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:02:36 crc kubenswrapper[4826]: I0129 08:02:36.380872 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fx5hl\" (UniqueName: \"kubernetes.io/projected/f5680354-0ec5-418e-a681-1af56a789b73-kube-api-access-fx5hl\") on node \"crc\" DevicePath \"\"" Jan 29 08:02:36 crc kubenswrapper[4826]: I0129 08:02:36.380884 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5680354-0ec5-418e-a681-1af56a789b73-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:02:36 crc kubenswrapper[4826]: I0129 08:02:36.532149 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-grkxk" podUID="df57a550-0fc4-4c54-8632-4954fe75856c" containerName="registry-server" probeResult="failure" output=< Jan 29 08:02:36 crc kubenswrapper[4826]: timeout: failed to connect service ":50051" within 1s Jan 29 08:02:36 crc kubenswrapper[4826]: > Jan 29 08:02:36 crc kubenswrapper[4826]: I0129 08:02:36.686124 4826 generic.go:334] "Generic (PLEG): container finished" podID="1bc0f11a-4196-43d6-94b8-cfc9b9f0713c" containerID="817a5f95f307240cd2a67ca35028c75fefb430c3777588c3c7c0002ffcf45078" exitCode=0 Jan 29 08:02:36 
crc kubenswrapper[4826]: I0129 08:02:36.686247 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zqdgc" event={"ID":"1bc0f11a-4196-43d6-94b8-cfc9b9f0713c","Type":"ContainerDied","Data":"817a5f95f307240cd2a67ca35028c75fefb430c3777588c3c7c0002ffcf45078"} Jan 29 08:02:36 crc kubenswrapper[4826]: I0129 08:02:36.686335 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zqdgc" event={"ID":"1bc0f11a-4196-43d6-94b8-cfc9b9f0713c","Type":"ContainerStarted","Data":"b1d7cd8a349b6bcd87f07b94244d412ae83a52cfc13bc3fb0824eb8b72a09925"} Jan 29 08:02:36 crc kubenswrapper[4826]: I0129 08:02:36.691933 4826 generic.go:334] "Generic (PLEG): container finished" podID="f5680354-0ec5-418e-a681-1af56a789b73" containerID="0eccc1ee0fd2c2aebe4822424cca48980f88a708cefa7e3f6ceb2c7dd32be6e8" exitCode=0 Jan 29 08:02:36 crc kubenswrapper[4826]: I0129 08:02:36.692013 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wpxt" event={"ID":"f5680354-0ec5-418e-a681-1af56a789b73","Type":"ContainerDied","Data":"0eccc1ee0fd2c2aebe4822424cca48980f88a708cefa7e3f6ceb2c7dd32be6e8"} Jan 29 08:02:36 crc kubenswrapper[4826]: I0129 08:02:36.692044 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wpxt" event={"ID":"f5680354-0ec5-418e-a681-1af56a789b73","Type":"ContainerDied","Data":"03d91905ed5cef67098dd3ff314150e4d15d6032dd18551c91cd13a68e22b55e"} Jan 29 08:02:36 crc kubenswrapper[4826]: I0129 08:02:36.692065 4826 scope.go:117] "RemoveContainer" containerID="0eccc1ee0fd2c2aebe4822424cca48980f88a708cefa7e3f6ceb2c7dd32be6e8" Jan 29 08:02:36 crc kubenswrapper[4826]: I0129 08:02:36.692128 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9wpxt" Jan 29 08:02:36 crc kubenswrapper[4826]: I0129 08:02:36.697369 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-576ffbb965-vmkd8" event={"ID":"5af500c6-d9f7-460d-9969-e3f444fb0c82","Type":"ContainerStarted","Data":"f8b72bfe960e84e2de81b8ba6e5934127cceae272945f2fa7a42796d2d07e072"} Jan 29 08:02:36 crc kubenswrapper[4826]: I0129 08:02:36.698032 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-576ffbb965-vmkd8" Jan 29 08:02:36 crc kubenswrapper[4826]: I0129 08:02:36.702425 4826 generic.go:334] "Generic (PLEG): container finished" podID="e26fece8-b284-4d94-9f0f-8b86d1910819" containerID="bf86d9a950d21b8c8bdffcad5d4b65bf16252831f7006aaae1818c7d6ad86af6" exitCode=0 Jan 29 08:02:36 crc kubenswrapper[4826]: I0129 08:02:36.702883 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-678974fb59-lrzsc" event={"ID":"e26fece8-b284-4d94-9f0f-8b86d1910819","Type":"ContainerDied","Data":"bf86d9a950d21b8c8bdffcad5d4b65bf16252831f7006aaae1818c7d6ad86af6"} Jan 29 08:02:36 crc kubenswrapper[4826]: I0129 08:02:36.739277 4826 scope.go:117] "RemoveContainer" containerID="cea5c07c62e503f85f41d4e021af21a920fa5b786df704da18ba5495c6dd3fbd" Jan 29 08:02:36 crc kubenswrapper[4826]: I0129 08:02:36.792748 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-576ffbb965-vmkd8" podStartSLOduration=5.337952972 podStartE2EDuration="54.792731344s" podCreationTimestamp="2026-01-29 08:01:42 +0000 UTC" firstStartedPulling="2026-01-29 08:01:43.608200617 +0000 UTC m=+4687.469993686" lastFinishedPulling="2026-01-29 08:02:33.062978949 +0000 UTC m=+4736.924772058" observedRunningTime="2026-01-29 08:02:36.792230121 +0000 UTC m=+4740.654023190" watchObservedRunningTime="2026-01-29 08:02:36.792731344 +0000 UTC m=+4740.654524413" Jan 29 08:02:36 crc kubenswrapper[4826]: I0129 
08:02:36.803253 4826 scope.go:117] "RemoveContainer" containerID="fd3f3759a7c58b6f63f74a64c2a5836f477504a8af5c440ef8d95f0d15bf1da7" Jan 29 08:02:36 crc kubenswrapper[4826]: I0129 08:02:36.827839 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9wpxt"] Jan 29 08:02:36 crc kubenswrapper[4826]: I0129 08:02:36.849474 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9wpxt"] Jan 29 08:02:36 crc kubenswrapper[4826]: I0129 08:02:36.859455 4826 scope.go:117] "RemoveContainer" containerID="0eccc1ee0fd2c2aebe4822424cca48980f88a708cefa7e3f6ceb2c7dd32be6e8" Jan 29 08:02:36 crc kubenswrapper[4826]: E0129 08:02:36.859876 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0eccc1ee0fd2c2aebe4822424cca48980f88a708cefa7e3f6ceb2c7dd32be6e8\": container with ID starting with 0eccc1ee0fd2c2aebe4822424cca48980f88a708cefa7e3f6ceb2c7dd32be6e8 not found: ID does not exist" containerID="0eccc1ee0fd2c2aebe4822424cca48980f88a708cefa7e3f6ceb2c7dd32be6e8" Jan 29 08:02:36 crc kubenswrapper[4826]: I0129 08:02:36.859910 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0eccc1ee0fd2c2aebe4822424cca48980f88a708cefa7e3f6ceb2c7dd32be6e8"} err="failed to get container status \"0eccc1ee0fd2c2aebe4822424cca48980f88a708cefa7e3f6ceb2c7dd32be6e8\": rpc error: code = NotFound desc = could not find container \"0eccc1ee0fd2c2aebe4822424cca48980f88a708cefa7e3f6ceb2c7dd32be6e8\": container with ID starting with 0eccc1ee0fd2c2aebe4822424cca48980f88a708cefa7e3f6ceb2c7dd32be6e8 not found: ID does not exist" Jan 29 08:02:36 crc kubenswrapper[4826]: I0129 08:02:36.859938 4826 scope.go:117] "RemoveContainer" containerID="cea5c07c62e503f85f41d4e021af21a920fa5b786df704da18ba5495c6dd3fbd" Jan 29 08:02:36 crc kubenswrapper[4826]: E0129 08:02:36.860287 4826 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cea5c07c62e503f85f41d4e021af21a920fa5b786df704da18ba5495c6dd3fbd\": container with ID starting with cea5c07c62e503f85f41d4e021af21a920fa5b786df704da18ba5495c6dd3fbd not found: ID does not exist" containerID="cea5c07c62e503f85f41d4e021af21a920fa5b786df704da18ba5495c6dd3fbd" Jan 29 08:02:36 crc kubenswrapper[4826]: I0129 08:02:36.860350 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cea5c07c62e503f85f41d4e021af21a920fa5b786df704da18ba5495c6dd3fbd"} err="failed to get container status \"cea5c07c62e503f85f41d4e021af21a920fa5b786df704da18ba5495c6dd3fbd\": rpc error: code = NotFound desc = could not find container \"cea5c07c62e503f85f41d4e021af21a920fa5b786df704da18ba5495c6dd3fbd\": container with ID starting with cea5c07c62e503f85f41d4e021af21a920fa5b786df704da18ba5495c6dd3fbd not found: ID does not exist" Jan 29 08:02:36 crc kubenswrapper[4826]: I0129 08:02:36.860369 4826 scope.go:117] "RemoveContainer" containerID="fd3f3759a7c58b6f63f74a64c2a5836f477504a8af5c440ef8d95f0d15bf1da7" Jan 29 08:02:36 crc kubenswrapper[4826]: E0129 08:02:36.860803 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd3f3759a7c58b6f63f74a64c2a5836f477504a8af5c440ef8d95f0d15bf1da7\": container with ID starting with fd3f3759a7c58b6f63f74a64c2a5836f477504a8af5c440ef8d95f0d15bf1da7 not found: ID does not exist" containerID="fd3f3759a7c58b6f63f74a64c2a5836f477504a8af5c440ef8d95f0d15bf1da7" Jan 29 08:02:36 crc kubenswrapper[4826]: I0129 08:02:36.860829 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd3f3759a7c58b6f63f74a64c2a5836f477504a8af5c440ef8d95f0d15bf1da7"} err="failed to get container status \"fd3f3759a7c58b6f63f74a64c2a5836f477504a8af5c440ef8d95f0d15bf1da7\": rpc error: code = NotFound desc = could not find container 
\"fd3f3759a7c58b6f63f74a64c2a5836f477504a8af5c440ef8d95f0d15bf1da7\": container with ID starting with fd3f3759a7c58b6f63f74a64c2a5836f477504a8af5c440ef8d95f0d15bf1da7 not found: ID does not exist" Jan 29 08:02:37 crc kubenswrapper[4826]: I0129 08:02:37.716012 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-678974fb59-lrzsc" event={"ID":"e26fece8-b284-4d94-9f0f-8b86d1910819","Type":"ContainerStarted","Data":"8e8609bf56b17cfbb015813c9aa3bb701cadc592d52e7f092d5db7f106130195"} Jan 29 08:02:37 crc kubenswrapper[4826]: I0129 08:02:37.747791 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-678974fb59-lrzsc" podStartSLOduration=-9223371981.107006 podStartE2EDuration="55.74776975s" podCreationTimestamp="2026-01-29 08:01:42 +0000 UTC" firstStartedPulling="2026-01-29 08:01:43.004894034 +0000 UTC m=+4686.866687103" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:02:37.741193707 +0000 UTC m=+4741.602986806" watchObservedRunningTime="2026-01-29 08:02:37.74776975 +0000 UTC m=+4741.609562829" Jan 29 08:02:38 crc kubenswrapper[4826]: I0129 08:02:38.087532 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zqdgc" Jan 29 08:02:38 crc kubenswrapper[4826]: I0129 08:02:38.210751 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttkmq\" (UniqueName: \"kubernetes.io/projected/1bc0f11a-4196-43d6-94b8-cfc9b9f0713c-kube-api-access-ttkmq\") pod \"1bc0f11a-4196-43d6-94b8-cfc9b9f0713c\" (UID: \"1bc0f11a-4196-43d6-94b8-cfc9b9f0713c\") " Jan 29 08:02:38 crc kubenswrapper[4826]: I0129 08:02:38.210878 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bc0f11a-4196-43d6-94b8-cfc9b9f0713c-operator-scripts\") pod \"1bc0f11a-4196-43d6-94b8-cfc9b9f0713c\" (UID: \"1bc0f11a-4196-43d6-94b8-cfc9b9f0713c\") " Jan 29 08:02:38 crc kubenswrapper[4826]: I0129 08:02:38.211453 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bc0f11a-4196-43d6-94b8-cfc9b9f0713c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1bc0f11a-4196-43d6-94b8-cfc9b9f0713c" (UID: "1bc0f11a-4196-43d6-94b8-cfc9b9f0713c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:02:38 crc kubenswrapper[4826]: I0129 08:02:38.217463 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bc0f11a-4196-43d6-94b8-cfc9b9f0713c-kube-api-access-ttkmq" (OuterVolumeSpecName: "kube-api-access-ttkmq") pod "1bc0f11a-4196-43d6-94b8-cfc9b9f0713c" (UID: "1bc0f11a-4196-43d6-94b8-cfc9b9f0713c"). InnerVolumeSpecName "kube-api-access-ttkmq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:02:38 crc kubenswrapper[4826]: I0129 08:02:38.313194 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttkmq\" (UniqueName: \"kubernetes.io/projected/1bc0f11a-4196-43d6-94b8-cfc9b9f0713c-kube-api-access-ttkmq\") on node \"crc\" DevicePath \"\"" Jan 29 08:02:38 crc kubenswrapper[4826]: I0129 08:02:38.313244 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bc0f11a-4196-43d6-94b8-cfc9b9f0713c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:02:38 crc kubenswrapper[4826]: I0129 08:02:38.726516 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zqdgc" event={"ID":"1bc0f11a-4196-43d6-94b8-cfc9b9f0713c","Type":"ContainerDied","Data":"b1d7cd8a349b6bcd87f07b94244d412ae83a52cfc13bc3fb0824eb8b72a09925"} Jan 29 08:02:38 crc kubenswrapper[4826]: I0129 08:02:38.727802 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1d7cd8a349b6bcd87f07b94244d412ae83a52cfc13bc3fb0824eb8b72a09925" Jan 29 08:02:38 crc kubenswrapper[4826]: I0129 08:02:38.726597 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zqdgc" Jan 29 08:02:38 crc kubenswrapper[4826]: I0129 08:02:38.820042 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5680354-0ec5-418e-a681-1af56a789b73" path="/var/lib/kubelet/pods/f5680354-0ec5-418e-a681-1af56a789b73/volumes" Jan 29 08:02:42 crc kubenswrapper[4826]: I0129 08:02:42.497633 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-678974fb59-lrzsc" Jan 29 08:02:42 crc kubenswrapper[4826]: I0129 08:02:42.498572 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-678974fb59-lrzsc" Jan 29 08:02:42 crc kubenswrapper[4826]: I0129 08:02:42.812364 4826 scope.go:117] "RemoveContainer" containerID="91d43ff4726de7f88ea18b0c649f8fb814edd6a2376ec59b98b6b3d03af51e74" Jan 29 08:02:43 crc kubenswrapper[4826]: I0129 08:02:43.219592 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-576ffbb965-vmkd8" Jan 29 08:02:43 crc kubenswrapper[4826]: I0129 08:02:43.305928 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-678974fb59-lrzsc"] Jan 29 08:02:43 crc kubenswrapper[4826]: I0129 08:02:43.625380 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-zqdgc"] Jan 29 08:02:43 crc kubenswrapper[4826]: I0129 08:02:43.628197 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-zqdgc"] Jan 29 08:02:43 crc kubenswrapper[4826]: I0129 08:02:43.704731 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-v488d"] Jan 29 08:02:43 crc kubenswrapper[4826]: E0129 08:02:43.712601 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5680354-0ec5-418e-a681-1af56a789b73" containerName="extract-utilities" Jan 29 08:02:43 crc kubenswrapper[4826]: I0129 08:02:43.712698 4826 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f5680354-0ec5-418e-a681-1af56a789b73" containerName="extract-utilities" Jan 29 08:02:43 crc kubenswrapper[4826]: E0129 08:02:43.712803 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5680354-0ec5-418e-a681-1af56a789b73" containerName="extract-content" Jan 29 08:02:43 crc kubenswrapper[4826]: I0129 08:02:43.712881 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5680354-0ec5-418e-a681-1af56a789b73" containerName="extract-content" Jan 29 08:02:43 crc kubenswrapper[4826]: E0129 08:02:43.712982 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5680354-0ec5-418e-a681-1af56a789b73" containerName="registry-server" Jan 29 08:02:43 crc kubenswrapper[4826]: I0129 08:02:43.713047 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5680354-0ec5-418e-a681-1af56a789b73" containerName="registry-server" Jan 29 08:02:43 crc kubenswrapper[4826]: E0129 08:02:43.713122 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bc0f11a-4196-43d6-94b8-cfc9b9f0713c" containerName="mariadb-account-create-update" Jan 29 08:02:43 crc kubenswrapper[4826]: I0129 08:02:43.713181 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bc0f11a-4196-43d6-94b8-cfc9b9f0713c" containerName="mariadb-account-create-update" Jan 29 08:02:43 crc kubenswrapper[4826]: I0129 08:02:43.713404 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bc0f11a-4196-43d6-94b8-cfc9b9f0713c" containerName="mariadb-account-create-update" Jan 29 08:02:43 crc kubenswrapper[4826]: I0129 08:02:43.713491 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5680354-0ec5-418e-a681-1af56a789b73" containerName="registry-server" Jan 29 08:02:43 crc kubenswrapper[4826]: I0129 08:02:43.714081 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-v488d" Jan 29 08:02:43 crc kubenswrapper[4826]: I0129 08:02:43.718961 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 29 08:02:43 crc kubenswrapper[4826]: I0129 08:02:43.723211 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-v488d"] Jan 29 08:02:43 crc kubenswrapper[4826]: I0129 08:02:43.773540 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-678974fb59-lrzsc" podUID="e26fece8-b284-4d94-9f0f-8b86d1910819" containerName="dnsmasq-dns" containerID="cri-o://8e8609bf56b17cfbb015813c9aa3bb701cadc592d52e7f092d5db7f106130195" gracePeriod=10 Jan 29 08:02:43 crc kubenswrapper[4826]: I0129 08:02:43.773760 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerStarted","Data":"1fff344b8a55b0b4a3914b487742a26e3d0886958427c810117738efef01b20e"} Jan 29 08:02:43 crc kubenswrapper[4826]: I0129 08:02:43.823636 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7swj\" (UniqueName: \"kubernetes.io/projected/9b11520f-13c9-4211-8543-5fccafb422ad-kube-api-access-z7swj\") pod \"root-account-create-update-v488d\" (UID: \"9b11520f-13c9-4211-8543-5fccafb422ad\") " pod="openstack/root-account-create-update-v488d" Jan 29 08:02:43 crc kubenswrapper[4826]: I0129 08:02:43.824368 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b11520f-13c9-4211-8543-5fccafb422ad-operator-scripts\") pod \"root-account-create-update-v488d\" (UID: \"9b11520f-13c9-4211-8543-5fccafb422ad\") " pod="openstack/root-account-create-update-v488d" Jan 29 08:02:43 crc 
kubenswrapper[4826]: I0129 08:02:43.926802 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7swj\" (UniqueName: \"kubernetes.io/projected/9b11520f-13c9-4211-8543-5fccafb422ad-kube-api-access-z7swj\") pod \"root-account-create-update-v488d\" (UID: \"9b11520f-13c9-4211-8543-5fccafb422ad\") " pod="openstack/root-account-create-update-v488d" Jan 29 08:02:43 crc kubenswrapper[4826]: I0129 08:02:43.927079 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b11520f-13c9-4211-8543-5fccafb422ad-operator-scripts\") pod \"root-account-create-update-v488d\" (UID: \"9b11520f-13c9-4211-8543-5fccafb422ad\") " pod="openstack/root-account-create-update-v488d" Jan 29 08:02:43 crc kubenswrapper[4826]: I0129 08:02:43.927889 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b11520f-13c9-4211-8543-5fccafb422ad-operator-scripts\") pod \"root-account-create-update-v488d\" (UID: \"9b11520f-13c9-4211-8543-5fccafb422ad\") " pod="openstack/root-account-create-update-v488d" Jan 29 08:02:44 crc kubenswrapper[4826]: I0129 08:02:44.003157 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7swj\" (UniqueName: \"kubernetes.io/projected/9b11520f-13c9-4211-8543-5fccafb422ad-kube-api-access-z7swj\") pod \"root-account-create-update-v488d\" (UID: \"9b11520f-13c9-4211-8543-5fccafb422ad\") " pod="openstack/root-account-create-update-v488d" Jan 29 08:02:44 crc kubenswrapper[4826]: I0129 08:02:44.034462 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-v488d" Jan 29 08:02:44 crc kubenswrapper[4826]: I0129 08:02:44.612173 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-678974fb59-lrzsc" Jan 29 08:02:44 crc kubenswrapper[4826]: I0129 08:02:44.742808 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e26fece8-b284-4d94-9f0f-8b86d1910819-config\") pod \"e26fece8-b284-4d94-9f0f-8b86d1910819\" (UID: \"e26fece8-b284-4d94-9f0f-8b86d1910819\") " Jan 29 08:02:44 crc kubenswrapper[4826]: I0129 08:02:44.742917 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzzgr\" (UniqueName: \"kubernetes.io/projected/e26fece8-b284-4d94-9f0f-8b86d1910819-kube-api-access-rzzgr\") pod \"e26fece8-b284-4d94-9f0f-8b86d1910819\" (UID: \"e26fece8-b284-4d94-9f0f-8b86d1910819\") " Jan 29 08:02:44 crc kubenswrapper[4826]: I0129 08:02:44.742965 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e26fece8-b284-4d94-9f0f-8b86d1910819-dns-svc\") pod \"e26fece8-b284-4d94-9f0f-8b86d1910819\" (UID: \"e26fece8-b284-4d94-9f0f-8b86d1910819\") " Jan 29 08:02:44 crc kubenswrapper[4826]: I0129 08:02:44.763733 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e26fece8-b284-4d94-9f0f-8b86d1910819-kube-api-access-rzzgr" (OuterVolumeSpecName: "kube-api-access-rzzgr") pod "e26fece8-b284-4d94-9f0f-8b86d1910819" (UID: "e26fece8-b284-4d94-9f0f-8b86d1910819"). InnerVolumeSpecName "kube-api-access-rzzgr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:02:44 crc kubenswrapper[4826]: I0129 08:02:44.785962 4826 generic.go:334] "Generic (PLEG): container finished" podID="e26fece8-b284-4d94-9f0f-8b86d1910819" containerID="8e8609bf56b17cfbb015813c9aa3bb701cadc592d52e7f092d5db7f106130195" exitCode=0 Jan 29 08:02:44 crc kubenswrapper[4826]: I0129 08:02:44.786010 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-678974fb59-lrzsc" event={"ID":"e26fece8-b284-4d94-9f0f-8b86d1910819","Type":"ContainerDied","Data":"8e8609bf56b17cfbb015813c9aa3bb701cadc592d52e7f092d5db7f106130195"} Jan 29 08:02:44 crc kubenswrapper[4826]: I0129 08:02:44.786039 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-678974fb59-lrzsc" event={"ID":"e26fece8-b284-4d94-9f0f-8b86d1910819","Type":"ContainerDied","Data":"1fbbb2e4db0b18c2f0479bcc2a6053af8c5355e31ccef2ff9a770c4fc0d7879e"} Jan 29 08:02:44 crc kubenswrapper[4826]: I0129 08:02:44.786058 4826 scope.go:117] "RemoveContainer" containerID="8e8609bf56b17cfbb015813c9aa3bb701cadc592d52e7f092d5db7f106130195" Jan 29 08:02:44 crc kubenswrapper[4826]: I0129 08:02:44.786188 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-678974fb59-lrzsc" Jan 29 08:02:44 crc kubenswrapper[4826]: I0129 08:02:44.801373 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e26fece8-b284-4d94-9f0f-8b86d1910819-config" (OuterVolumeSpecName: "config") pod "e26fece8-b284-4d94-9f0f-8b86d1910819" (UID: "e26fece8-b284-4d94-9f0f-8b86d1910819"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:02:44 crc kubenswrapper[4826]: I0129 08:02:44.808963 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e26fece8-b284-4d94-9f0f-8b86d1910819-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e26fece8-b284-4d94-9f0f-8b86d1910819" (UID: "e26fece8-b284-4d94-9f0f-8b86d1910819"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:02:44 crc kubenswrapper[4826]: I0129 08:02:44.821446 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bc0f11a-4196-43d6-94b8-cfc9b9f0713c" path="/var/lib/kubelet/pods/1bc0f11a-4196-43d6-94b8-cfc9b9f0713c/volumes" Jan 29 08:02:44 crc kubenswrapper[4826]: I0129 08:02:44.825336 4826 scope.go:117] "RemoveContainer" containerID="bf86d9a950d21b8c8bdffcad5d4b65bf16252831f7006aaae1818c7d6ad86af6" Jan 29 08:02:44 crc kubenswrapper[4826]: I0129 08:02:44.846938 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e26fece8-b284-4d94-9f0f-8b86d1910819-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:02:44 crc kubenswrapper[4826]: I0129 08:02:44.847117 4826 scope.go:117] "RemoveContainer" containerID="8e8609bf56b17cfbb015813c9aa3bb701cadc592d52e7f092d5db7f106130195" Jan 29 08:02:44 crc kubenswrapper[4826]: I0129 08:02:44.847547 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzzgr\" (UniqueName: \"kubernetes.io/projected/e26fece8-b284-4d94-9f0f-8b86d1910819-kube-api-access-rzzgr\") on node \"crc\" DevicePath \"\"" Jan 29 08:02:44 crc kubenswrapper[4826]: I0129 08:02:44.847571 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e26fece8-b284-4d94-9f0f-8b86d1910819-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 08:02:44 crc kubenswrapper[4826]: E0129 08:02:44.847574 4826 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"8e8609bf56b17cfbb015813c9aa3bb701cadc592d52e7f092d5db7f106130195\": container with ID starting with 8e8609bf56b17cfbb015813c9aa3bb701cadc592d52e7f092d5db7f106130195 not found: ID does not exist" containerID="8e8609bf56b17cfbb015813c9aa3bb701cadc592d52e7f092d5db7f106130195" Jan 29 08:02:44 crc kubenswrapper[4826]: I0129 08:02:44.847604 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e8609bf56b17cfbb015813c9aa3bb701cadc592d52e7f092d5db7f106130195"} err="failed to get container status \"8e8609bf56b17cfbb015813c9aa3bb701cadc592d52e7f092d5db7f106130195\": rpc error: code = NotFound desc = could not find container \"8e8609bf56b17cfbb015813c9aa3bb701cadc592d52e7f092d5db7f106130195\": container with ID starting with 8e8609bf56b17cfbb015813c9aa3bb701cadc592d52e7f092d5db7f106130195 not found: ID does not exist" Jan 29 08:02:44 crc kubenswrapper[4826]: I0129 08:02:44.847626 4826 scope.go:117] "RemoveContainer" containerID="bf86d9a950d21b8c8bdffcad5d4b65bf16252831f7006aaae1818c7d6ad86af6" Jan 29 08:02:44 crc kubenswrapper[4826]: E0129 08:02:44.848205 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf86d9a950d21b8c8bdffcad5d4b65bf16252831f7006aaae1818c7d6ad86af6\": container with ID starting with bf86d9a950d21b8c8bdffcad5d4b65bf16252831f7006aaae1818c7d6ad86af6 not found: ID does not exist" containerID="bf86d9a950d21b8c8bdffcad5d4b65bf16252831f7006aaae1818c7d6ad86af6" Jan 29 08:02:44 crc kubenswrapper[4826]: I0129 08:02:44.848231 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf86d9a950d21b8c8bdffcad5d4b65bf16252831f7006aaae1818c7d6ad86af6"} err="failed to get container status \"bf86d9a950d21b8c8bdffcad5d4b65bf16252831f7006aaae1818c7d6ad86af6\": rpc error: code = NotFound desc = could not find container 
\"bf86d9a950d21b8c8bdffcad5d4b65bf16252831f7006aaae1818c7d6ad86af6\": container with ID starting with bf86d9a950d21b8c8bdffcad5d4b65bf16252831f7006aaae1818c7d6ad86af6 not found: ID does not exist" Jan 29 08:02:45 crc kubenswrapper[4826]: I0129 08:02:45.021614 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-v488d"] Jan 29 08:02:45 crc kubenswrapper[4826]: I0129 08:02:45.137706 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-678974fb59-lrzsc"] Jan 29 08:02:45 crc kubenswrapper[4826]: I0129 08:02:45.145563 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-678974fb59-lrzsc"] Jan 29 08:02:45 crc kubenswrapper[4826]: I0129 08:02:45.552850 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-grkxk" Jan 29 08:02:45 crc kubenswrapper[4826]: I0129 08:02:45.634368 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-grkxk" Jan 29 08:02:45 crc kubenswrapper[4826]: I0129 08:02:45.801870 4826 generic.go:334] "Generic (PLEG): container finished" podID="9b11520f-13c9-4211-8543-5fccafb422ad" containerID="c1ad3845712b39a0059997603369da605485bee6656e374de156ba79ac3b693a" exitCode=0 Jan 29 08:02:45 crc kubenswrapper[4826]: I0129 08:02:45.802024 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-v488d" event={"ID":"9b11520f-13c9-4211-8543-5fccafb422ad","Type":"ContainerDied","Data":"c1ad3845712b39a0059997603369da605485bee6656e374de156ba79ac3b693a"} Jan 29 08:02:45 crc kubenswrapper[4826]: I0129 08:02:45.802107 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-v488d" event={"ID":"9b11520f-13c9-4211-8543-5fccafb422ad","Type":"ContainerStarted","Data":"4c034f7635a75d5b3e18ccdd7e740e757f656230f4826773d99df90784488593"} Jan 29 08:02:45 crc 
kubenswrapper[4826]: I0129 08:02:45.810585 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-grkxk"] Jan 29 08:02:46 crc kubenswrapper[4826]: I0129 08:02:46.813437 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-grkxk" podUID="df57a550-0fc4-4c54-8632-4954fe75856c" containerName="registry-server" containerID="cri-o://5f9a500040f2402d5d97cbf816e286f8a5e93f935a0ad1662cd16ad67fbfc712" gracePeriod=2 Jan 29 08:02:46 crc kubenswrapper[4826]: I0129 08:02:46.819288 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e26fece8-b284-4d94-9f0f-8b86d1910819" path="/var/lib/kubelet/pods/e26fece8-b284-4d94-9f0f-8b86d1910819/volumes" Jan 29 08:02:47 crc kubenswrapper[4826]: I0129 08:02:47.191676 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-v488d" Jan 29 08:02:47 crc kubenswrapper[4826]: I0129 08:02:47.288090 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-grkxk" Jan 29 08:02:47 crc kubenswrapper[4826]: I0129 08:02:47.388174 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7swj\" (UniqueName: \"kubernetes.io/projected/9b11520f-13c9-4211-8543-5fccafb422ad-kube-api-access-z7swj\") pod \"9b11520f-13c9-4211-8543-5fccafb422ad\" (UID: \"9b11520f-13c9-4211-8543-5fccafb422ad\") " Jan 29 08:02:47 crc kubenswrapper[4826]: I0129 08:02:47.388249 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b11520f-13c9-4211-8543-5fccafb422ad-operator-scripts\") pod \"9b11520f-13c9-4211-8543-5fccafb422ad\" (UID: \"9b11520f-13c9-4211-8543-5fccafb422ad\") " Jan 29 08:02:47 crc kubenswrapper[4826]: I0129 08:02:47.389101 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b11520f-13c9-4211-8543-5fccafb422ad-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9b11520f-13c9-4211-8543-5fccafb422ad" (UID: "9b11520f-13c9-4211-8543-5fccafb422ad"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:02:47 crc kubenswrapper[4826]: I0129 08:02:47.389529 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b11520f-13c9-4211-8543-5fccafb422ad-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:02:47 crc kubenswrapper[4826]: I0129 08:02:47.394659 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b11520f-13c9-4211-8543-5fccafb422ad-kube-api-access-z7swj" (OuterVolumeSpecName: "kube-api-access-z7swj") pod "9b11520f-13c9-4211-8543-5fccafb422ad" (UID: "9b11520f-13c9-4211-8543-5fccafb422ad"). InnerVolumeSpecName "kube-api-access-z7swj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:02:47 crc kubenswrapper[4826]: I0129 08:02:47.490751 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df57a550-0fc4-4c54-8632-4954fe75856c-utilities\") pod \"df57a550-0fc4-4c54-8632-4954fe75856c\" (UID: \"df57a550-0fc4-4c54-8632-4954fe75856c\") " Jan 29 08:02:47 crc kubenswrapper[4826]: I0129 08:02:47.491147 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmx2n\" (UniqueName: \"kubernetes.io/projected/df57a550-0fc4-4c54-8632-4954fe75856c-kube-api-access-cmx2n\") pod \"df57a550-0fc4-4c54-8632-4954fe75856c\" (UID: \"df57a550-0fc4-4c54-8632-4954fe75856c\") " Jan 29 08:02:47 crc kubenswrapper[4826]: I0129 08:02:47.491232 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df57a550-0fc4-4c54-8632-4954fe75856c-catalog-content\") pod \"df57a550-0fc4-4c54-8632-4954fe75856c\" (UID: \"df57a550-0fc4-4c54-8632-4954fe75856c\") " Jan 29 08:02:47 crc kubenswrapper[4826]: I0129 08:02:47.491459 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7swj\" (UniqueName: \"kubernetes.io/projected/9b11520f-13c9-4211-8543-5fccafb422ad-kube-api-access-z7swj\") on node \"crc\" DevicePath \"\"" Jan 29 08:02:47 crc kubenswrapper[4826]: I0129 08:02:47.492332 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df57a550-0fc4-4c54-8632-4954fe75856c-utilities" (OuterVolumeSpecName: "utilities") pod "df57a550-0fc4-4c54-8632-4954fe75856c" (UID: "df57a550-0fc4-4c54-8632-4954fe75856c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:02:47 crc kubenswrapper[4826]: I0129 08:02:47.496369 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df57a550-0fc4-4c54-8632-4954fe75856c-kube-api-access-cmx2n" (OuterVolumeSpecName: "kube-api-access-cmx2n") pod "df57a550-0fc4-4c54-8632-4954fe75856c" (UID: "df57a550-0fc4-4c54-8632-4954fe75856c"). InnerVolumeSpecName "kube-api-access-cmx2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:02:47 crc kubenswrapper[4826]: I0129 08:02:47.593482 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmx2n\" (UniqueName: \"kubernetes.io/projected/df57a550-0fc4-4c54-8632-4954fe75856c-kube-api-access-cmx2n\") on node \"crc\" DevicePath \"\"" Jan 29 08:02:47 crc kubenswrapper[4826]: I0129 08:02:47.593526 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df57a550-0fc4-4c54-8632-4954fe75856c-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:02:47 crc kubenswrapper[4826]: I0129 08:02:47.629104 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df57a550-0fc4-4c54-8632-4954fe75856c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df57a550-0fc4-4c54-8632-4954fe75856c" (UID: "df57a550-0fc4-4c54-8632-4954fe75856c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:02:47 crc kubenswrapper[4826]: I0129 08:02:47.695044 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df57a550-0fc4-4c54-8632-4954fe75856c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:02:47 crc kubenswrapper[4826]: I0129 08:02:47.821537 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-v488d" Jan 29 08:02:47 crc kubenswrapper[4826]: I0129 08:02:47.821595 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-v488d" event={"ID":"9b11520f-13c9-4211-8543-5fccafb422ad","Type":"ContainerDied","Data":"4c034f7635a75d5b3e18ccdd7e740e757f656230f4826773d99df90784488593"} Jan 29 08:02:47 crc kubenswrapper[4826]: I0129 08:02:47.821646 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c034f7635a75d5b3e18ccdd7e740e757f656230f4826773d99df90784488593" Jan 29 08:02:47 crc kubenswrapper[4826]: I0129 08:02:47.825145 4826 generic.go:334] "Generic (PLEG): container finished" podID="df57a550-0fc4-4c54-8632-4954fe75856c" containerID="5f9a500040f2402d5d97cbf816e286f8a5e93f935a0ad1662cd16ad67fbfc712" exitCode=0 Jan 29 08:02:47 crc kubenswrapper[4826]: I0129 08:02:47.825207 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-grkxk" event={"ID":"df57a550-0fc4-4c54-8632-4954fe75856c","Type":"ContainerDied","Data":"5f9a500040f2402d5d97cbf816e286f8a5e93f935a0ad1662cd16ad67fbfc712"} Jan 29 08:02:47 crc kubenswrapper[4826]: I0129 08:02:47.825243 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-grkxk" Jan 29 08:02:47 crc kubenswrapper[4826]: I0129 08:02:47.825269 4826 scope.go:117] "RemoveContainer" containerID="5f9a500040f2402d5d97cbf816e286f8a5e93f935a0ad1662cd16ad67fbfc712" Jan 29 08:02:47 crc kubenswrapper[4826]: I0129 08:02:47.825248 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-grkxk" event={"ID":"df57a550-0fc4-4c54-8632-4954fe75856c","Type":"ContainerDied","Data":"6e11d09d800c5eda38c862e69ef28a425b161af7d1370fd848eec623584eea61"} Jan 29 08:02:47 crc kubenswrapper[4826]: I0129 08:02:47.863670 4826 scope.go:117] "RemoveContainer" containerID="012465c3e397fad04a6996d6cb7259ee422598c07cd39ac5b15be942a44984b3" Jan 29 08:02:47 crc kubenswrapper[4826]: I0129 08:02:47.878094 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-grkxk"] Jan 29 08:02:47 crc kubenswrapper[4826]: I0129 08:02:47.893612 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-grkxk"] Jan 29 08:02:47 crc kubenswrapper[4826]: I0129 08:02:47.900352 4826 scope.go:117] "RemoveContainer" containerID="85df5376c294a1c568a4db2c3a4fae193003204391bb6a8443435f533d3fa21a" Jan 29 08:02:47 crc kubenswrapper[4826]: I0129 08:02:47.920245 4826 scope.go:117] "RemoveContainer" containerID="5f9a500040f2402d5d97cbf816e286f8a5e93f935a0ad1662cd16ad67fbfc712" Jan 29 08:02:47 crc kubenswrapper[4826]: E0129 08:02:47.920800 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f9a500040f2402d5d97cbf816e286f8a5e93f935a0ad1662cd16ad67fbfc712\": container with ID starting with 5f9a500040f2402d5d97cbf816e286f8a5e93f935a0ad1662cd16ad67fbfc712 not found: ID does not exist" containerID="5f9a500040f2402d5d97cbf816e286f8a5e93f935a0ad1662cd16ad67fbfc712" Jan 29 08:02:47 crc kubenswrapper[4826]: I0129 08:02:47.920839 4826 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f9a500040f2402d5d97cbf816e286f8a5e93f935a0ad1662cd16ad67fbfc712"} err="failed to get container status \"5f9a500040f2402d5d97cbf816e286f8a5e93f935a0ad1662cd16ad67fbfc712\": rpc error: code = NotFound desc = could not find container \"5f9a500040f2402d5d97cbf816e286f8a5e93f935a0ad1662cd16ad67fbfc712\": container with ID starting with 5f9a500040f2402d5d97cbf816e286f8a5e93f935a0ad1662cd16ad67fbfc712 not found: ID does not exist" Jan 29 08:02:47 crc kubenswrapper[4826]: I0129 08:02:47.920861 4826 scope.go:117] "RemoveContainer" containerID="012465c3e397fad04a6996d6cb7259ee422598c07cd39ac5b15be942a44984b3" Jan 29 08:02:47 crc kubenswrapper[4826]: E0129 08:02:47.921350 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"012465c3e397fad04a6996d6cb7259ee422598c07cd39ac5b15be942a44984b3\": container with ID starting with 012465c3e397fad04a6996d6cb7259ee422598c07cd39ac5b15be942a44984b3 not found: ID does not exist" containerID="012465c3e397fad04a6996d6cb7259ee422598c07cd39ac5b15be942a44984b3" Jan 29 08:02:47 crc kubenswrapper[4826]: I0129 08:02:47.921493 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"012465c3e397fad04a6996d6cb7259ee422598c07cd39ac5b15be942a44984b3"} err="failed to get container status \"012465c3e397fad04a6996d6cb7259ee422598c07cd39ac5b15be942a44984b3\": rpc error: code = NotFound desc = could not find container \"012465c3e397fad04a6996d6cb7259ee422598c07cd39ac5b15be942a44984b3\": container with ID starting with 012465c3e397fad04a6996d6cb7259ee422598c07cd39ac5b15be942a44984b3 not found: ID does not exist" Jan 29 08:02:47 crc kubenswrapper[4826]: I0129 08:02:47.923436 4826 scope.go:117] "RemoveContainer" containerID="85df5376c294a1c568a4db2c3a4fae193003204391bb6a8443435f533d3fa21a" Jan 29 08:02:47 crc kubenswrapper[4826]: E0129 
08:02:47.924569 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85df5376c294a1c568a4db2c3a4fae193003204391bb6a8443435f533d3fa21a\": container with ID starting with 85df5376c294a1c568a4db2c3a4fae193003204391bb6a8443435f533d3fa21a not found: ID does not exist" containerID="85df5376c294a1c568a4db2c3a4fae193003204391bb6a8443435f533d3fa21a" Jan 29 08:02:47 crc kubenswrapper[4826]: I0129 08:02:47.924638 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85df5376c294a1c568a4db2c3a4fae193003204391bb6a8443435f533d3fa21a"} err="failed to get container status \"85df5376c294a1c568a4db2c3a4fae193003204391bb6a8443435f533d3fa21a\": rpc error: code = NotFound desc = could not find container \"85df5376c294a1c568a4db2c3a4fae193003204391bb6a8443435f533d3fa21a\": container with ID starting with 85df5376c294a1c568a4db2c3a4fae193003204391bb6a8443435f533d3fa21a not found: ID does not exist" Jan 29 08:02:48 crc kubenswrapper[4826]: I0129 08:02:48.825576 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df57a550-0fc4-4c54-8632-4954fe75856c" path="/var/lib/kubelet/pods/df57a550-0fc4-4c54-8632-4954fe75856c/volumes" Jan 29 08:02:55 crc kubenswrapper[4826]: I0129 08:02:55.906852 4826 generic.go:334] "Generic (PLEG): container finished" podID="f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf" containerID="2be5f188e6ec8202e6f83fc5ea5558d869009b79d2cbe45a0c4bcebf63b079ae" exitCode=0 Jan 29 08:02:55 crc kubenswrapper[4826]: I0129 08:02:55.906929 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf","Type":"ContainerDied","Data":"2be5f188e6ec8202e6f83fc5ea5558d869009b79d2cbe45a0c4bcebf63b079ae"} Jan 29 08:02:55 crc kubenswrapper[4826]: I0129 08:02:55.911349 4826 generic.go:334] "Generic (PLEG): container finished" podID="376bc5a2-a062-4c32-ae34-ea2d16f3e1c8" 
containerID="2cae4c388ac46da0d0758a2c53e1c6f81d8e9f1b0ea830e0a281452453dd4f20" exitCode=0 Jan 29 08:02:55 crc kubenswrapper[4826]: I0129 08:02:55.911379 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8","Type":"ContainerDied","Data":"2cae4c388ac46da0d0758a2c53e1c6f81d8e9f1b0ea830e0a281452453dd4f20"} Jan 29 08:02:56 crc kubenswrapper[4826]: I0129 08:02:56.921718 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf","Type":"ContainerStarted","Data":"04d533c3263c5bb27c62af06206108331301dab7b31d4a532ade56b40415ffb9"} Jan 29 08:02:56 crc kubenswrapper[4826]: I0129 08:02:56.922499 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 29 08:02:56 crc kubenswrapper[4826]: I0129 08:02:56.925007 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8","Type":"ContainerStarted","Data":"0ddf6daeff3e08ebcdd0225bdadf4e120ac88a2257b2489823451505b1e9312d"} Jan 29 08:02:56 crc kubenswrapper[4826]: I0129 08:02:56.925276 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:02:56 crc kubenswrapper[4826]: I0129 08:02:56.994737 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.731757027 podStartE2EDuration="1m14.994712114s" podCreationTimestamp="2026-01-29 08:01:42 +0000 UTC" firstStartedPulling="2026-01-29 08:01:44.191321729 +0000 UTC m=+4688.053114798" lastFinishedPulling="2026-01-29 08:02:20.454276796 +0000 UTC m=+4724.316069885" observedRunningTime="2026-01-29 08:02:56.963488374 +0000 UTC m=+4760.825281453" watchObservedRunningTime="2026-01-29 08:02:56.994712114 +0000 UTC m=+4760.856505213" Jan 29 08:02:57 crc 
kubenswrapper[4826]: I0129 08:02:57.001862 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371961.852932 podStartE2EDuration="1m15.001843482s" podCreationTimestamp="2026-01-29 08:01:42 +0000 UTC" firstStartedPulling="2026-01-29 08:01:44.568213373 +0000 UTC m=+4688.430006442" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:02:56.991381847 +0000 UTC m=+4760.853174926" watchObservedRunningTime="2026-01-29 08:02:57.001843482 +0000 UTC m=+4760.863636561" Jan 29 08:03:13 crc kubenswrapper[4826]: I0129 08:03:13.713635 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 29 08:03:14 crc kubenswrapper[4826]: I0129 08:03:14.314566 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:03:20 crc kubenswrapper[4826]: I0129 08:03:20.601643 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cf4cf88bf-259rh"] Jan 29 08:03:20 crc kubenswrapper[4826]: E0129 08:03:20.603569 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e26fece8-b284-4d94-9f0f-8b86d1910819" containerName="dnsmasq-dns" Jan 29 08:03:20 crc kubenswrapper[4826]: I0129 08:03:20.603672 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="e26fece8-b284-4d94-9f0f-8b86d1910819" containerName="dnsmasq-dns" Jan 29 08:03:20 crc kubenswrapper[4826]: E0129 08:03:20.603757 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df57a550-0fc4-4c54-8632-4954fe75856c" containerName="registry-server" Jan 29 08:03:20 crc kubenswrapper[4826]: I0129 08:03:20.603836 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="df57a550-0fc4-4c54-8632-4954fe75856c" containerName="registry-server" Jan 29 08:03:20 crc kubenswrapper[4826]: E0129 08:03:20.603934 4826 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="df57a550-0fc4-4c54-8632-4954fe75856c" containerName="extract-content" Jan 29 08:03:20 crc kubenswrapper[4826]: I0129 08:03:20.604000 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="df57a550-0fc4-4c54-8632-4954fe75856c" containerName="extract-content" Jan 29 08:03:20 crc kubenswrapper[4826]: E0129 08:03:20.604060 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e26fece8-b284-4d94-9f0f-8b86d1910819" containerName="init" Jan 29 08:03:20 crc kubenswrapper[4826]: I0129 08:03:20.604116 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="e26fece8-b284-4d94-9f0f-8b86d1910819" containerName="init" Jan 29 08:03:20 crc kubenswrapper[4826]: E0129 08:03:20.604183 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df57a550-0fc4-4c54-8632-4954fe75856c" containerName="extract-utilities" Jan 29 08:03:20 crc kubenswrapper[4826]: I0129 08:03:20.604245 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="df57a550-0fc4-4c54-8632-4954fe75856c" containerName="extract-utilities" Jan 29 08:03:20 crc kubenswrapper[4826]: E0129 08:03:20.604336 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b11520f-13c9-4211-8543-5fccafb422ad" containerName="mariadb-account-create-update" Jan 29 08:03:20 crc kubenswrapper[4826]: I0129 08:03:20.604402 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b11520f-13c9-4211-8543-5fccafb422ad" containerName="mariadb-account-create-update" Jan 29 08:03:20 crc kubenswrapper[4826]: I0129 08:03:20.604603 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="df57a550-0fc4-4c54-8632-4954fe75856c" containerName="registry-server" Jan 29 08:03:20 crc kubenswrapper[4826]: I0129 08:03:20.604683 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="e26fece8-b284-4d94-9f0f-8b86d1910819" containerName="dnsmasq-dns" Jan 29 08:03:20 crc kubenswrapper[4826]: I0129 08:03:20.604749 4826 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="9b11520f-13c9-4211-8543-5fccafb422ad" containerName="mariadb-account-create-update" Jan 29 08:03:20 crc kubenswrapper[4826]: I0129 08:03:20.606853 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cf4cf88bf-259rh" Jan 29 08:03:20 crc kubenswrapper[4826]: I0129 08:03:20.614129 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cf4cf88bf-259rh"] Jan 29 08:03:20 crc kubenswrapper[4826]: I0129 08:03:20.750922 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55efbdae-46d2-40b3-8ce0-482112cdeb2b-config\") pod \"dnsmasq-dns-6cf4cf88bf-259rh\" (UID: \"55efbdae-46d2-40b3-8ce0-482112cdeb2b\") " pod="openstack/dnsmasq-dns-6cf4cf88bf-259rh" Jan 29 08:03:20 crc kubenswrapper[4826]: I0129 08:03:20.751243 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55efbdae-46d2-40b3-8ce0-482112cdeb2b-dns-svc\") pod \"dnsmasq-dns-6cf4cf88bf-259rh\" (UID: \"55efbdae-46d2-40b3-8ce0-482112cdeb2b\") " pod="openstack/dnsmasq-dns-6cf4cf88bf-259rh" Jan 29 08:03:20 crc kubenswrapper[4826]: I0129 08:03:20.751412 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9lx2\" (UniqueName: \"kubernetes.io/projected/55efbdae-46d2-40b3-8ce0-482112cdeb2b-kube-api-access-l9lx2\") pod \"dnsmasq-dns-6cf4cf88bf-259rh\" (UID: \"55efbdae-46d2-40b3-8ce0-482112cdeb2b\") " pod="openstack/dnsmasq-dns-6cf4cf88bf-259rh" Jan 29 08:03:20 crc kubenswrapper[4826]: I0129 08:03:20.852586 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9lx2\" (UniqueName: \"kubernetes.io/projected/55efbdae-46d2-40b3-8ce0-482112cdeb2b-kube-api-access-l9lx2\") pod \"dnsmasq-dns-6cf4cf88bf-259rh\" (UID: 
\"55efbdae-46d2-40b3-8ce0-482112cdeb2b\") " pod="openstack/dnsmasq-dns-6cf4cf88bf-259rh" Jan 29 08:03:20 crc kubenswrapper[4826]: I0129 08:03:20.852991 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55efbdae-46d2-40b3-8ce0-482112cdeb2b-config\") pod \"dnsmasq-dns-6cf4cf88bf-259rh\" (UID: \"55efbdae-46d2-40b3-8ce0-482112cdeb2b\") " pod="openstack/dnsmasq-dns-6cf4cf88bf-259rh" Jan 29 08:03:20 crc kubenswrapper[4826]: I0129 08:03:20.853254 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55efbdae-46d2-40b3-8ce0-482112cdeb2b-dns-svc\") pod \"dnsmasq-dns-6cf4cf88bf-259rh\" (UID: \"55efbdae-46d2-40b3-8ce0-482112cdeb2b\") " pod="openstack/dnsmasq-dns-6cf4cf88bf-259rh" Jan 29 08:03:20 crc kubenswrapper[4826]: I0129 08:03:20.854350 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55efbdae-46d2-40b3-8ce0-482112cdeb2b-config\") pod \"dnsmasq-dns-6cf4cf88bf-259rh\" (UID: \"55efbdae-46d2-40b3-8ce0-482112cdeb2b\") " pod="openstack/dnsmasq-dns-6cf4cf88bf-259rh" Jan 29 08:03:20 crc kubenswrapper[4826]: I0129 08:03:20.854673 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55efbdae-46d2-40b3-8ce0-482112cdeb2b-dns-svc\") pod \"dnsmasq-dns-6cf4cf88bf-259rh\" (UID: \"55efbdae-46d2-40b3-8ce0-482112cdeb2b\") " pod="openstack/dnsmasq-dns-6cf4cf88bf-259rh" Jan 29 08:03:20 crc kubenswrapper[4826]: I0129 08:03:20.878039 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9lx2\" (UniqueName: \"kubernetes.io/projected/55efbdae-46d2-40b3-8ce0-482112cdeb2b-kube-api-access-l9lx2\") pod \"dnsmasq-dns-6cf4cf88bf-259rh\" (UID: \"55efbdae-46d2-40b3-8ce0-482112cdeb2b\") " pod="openstack/dnsmasq-dns-6cf4cf88bf-259rh" Jan 29 08:03:20 crc 
kubenswrapper[4826]: I0129 08:03:20.931674 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cf4cf88bf-259rh" Jan 29 08:03:21 crc kubenswrapper[4826]: I0129 08:03:21.414087 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cf4cf88bf-259rh"] Jan 29 08:03:21 crc kubenswrapper[4826]: I0129 08:03:21.509396 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 08:03:22 crc kubenswrapper[4826]: I0129 08:03:22.046172 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 08:03:22 crc kubenswrapper[4826]: I0129 08:03:22.172944 4826 generic.go:334] "Generic (PLEG): container finished" podID="55efbdae-46d2-40b3-8ce0-482112cdeb2b" containerID="4dcbd755b79c5cedcd58ad8f78cbe022c654d62d55e9995554ee26fb59f6a266" exitCode=0 Jan 29 08:03:22 crc kubenswrapper[4826]: I0129 08:03:22.173010 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cf4cf88bf-259rh" event={"ID":"55efbdae-46d2-40b3-8ce0-482112cdeb2b","Type":"ContainerDied","Data":"4dcbd755b79c5cedcd58ad8f78cbe022c654d62d55e9995554ee26fb59f6a266"} Jan 29 08:03:22 crc kubenswrapper[4826]: I0129 08:03:22.173107 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cf4cf88bf-259rh" event={"ID":"55efbdae-46d2-40b3-8ce0-482112cdeb2b","Type":"ContainerStarted","Data":"4a874d3acb5fc495893a6c3e2830e68235c1f84ee012d7ef02eb2cee4a71ddc5"} Jan 29 08:03:23 crc kubenswrapper[4826]: I0129 08:03:23.183333 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cf4cf88bf-259rh" event={"ID":"55efbdae-46d2-40b3-8ce0-482112cdeb2b","Type":"ContainerStarted","Data":"0e8fd8d21e89cab4935a601407d27a2b474d066c7e9aac3f9518cba0d516abd1"} Jan 29 08:03:23 crc kubenswrapper[4826]: I0129 08:03:23.185069 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-6cf4cf88bf-259rh" Jan 29 08:03:23 crc kubenswrapper[4826]: I0129 08:03:23.206553 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6cf4cf88bf-259rh" podStartSLOduration=3.20652065 podStartE2EDuration="3.20652065s" podCreationTimestamp="2026-01-29 08:03:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:03:23.201711464 +0000 UTC m=+4787.063504573" watchObservedRunningTime="2026-01-29 08:03:23.20652065 +0000 UTC m=+4787.068313759" Jan 29 08:03:25 crc kubenswrapper[4826]: I0129 08:03:25.488190 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf" containerName="rabbitmq" containerID="cri-o://04d533c3263c5bb27c62af06206108331301dab7b31d4a532ade56b40415ffb9" gracePeriod=604797 Jan 29 08:03:26 crc kubenswrapper[4826]: I0129 08:03:26.518132 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="376bc5a2-a062-4c32-ae34-ea2d16f3e1c8" containerName="rabbitmq" containerID="cri-o://0ddf6daeff3e08ebcdd0225bdadf4e120ac88a2257b2489823451505b1e9312d" gracePeriod=604796 Jan 29 08:03:30 crc kubenswrapper[4826]: I0129 08:03:30.933531 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6cf4cf88bf-259rh" Jan 29 08:03:31 crc kubenswrapper[4826]: I0129 08:03:31.009578 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-576ffbb965-vmkd8"] Jan 29 08:03:31 crc kubenswrapper[4826]: I0129 08:03:31.009951 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-576ffbb965-vmkd8" podUID="5af500c6-d9f7-460d-9969-e3f444fb0c82" containerName="dnsmasq-dns" 
containerID="cri-o://f8b72bfe960e84e2de81b8ba6e5934127cceae272945f2fa7a42796d2d07e072" gracePeriod=10 Jan 29 08:03:31 crc kubenswrapper[4826]: I0129 08:03:31.267523 4826 generic.go:334] "Generic (PLEG): container finished" podID="5af500c6-d9f7-460d-9969-e3f444fb0c82" containerID="f8b72bfe960e84e2de81b8ba6e5934127cceae272945f2fa7a42796d2d07e072" exitCode=0 Jan 29 08:03:31 crc kubenswrapper[4826]: I0129 08:03:31.267573 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-576ffbb965-vmkd8" event={"ID":"5af500c6-d9f7-460d-9969-e3f444fb0c82","Type":"ContainerDied","Data":"f8b72bfe960e84e2de81b8ba6e5934127cceae272945f2fa7a42796d2d07e072"} Jan 29 08:03:31 crc kubenswrapper[4826]: I0129 08:03:31.425394 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-576ffbb965-vmkd8" Jan 29 08:03:31 crc kubenswrapper[4826]: I0129 08:03:31.534889 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5af500c6-d9f7-460d-9969-e3f444fb0c82-config\") pod \"5af500c6-d9f7-460d-9969-e3f444fb0c82\" (UID: \"5af500c6-d9f7-460d-9969-e3f444fb0c82\") " Jan 29 08:03:31 crc kubenswrapper[4826]: I0129 08:03:31.535010 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5af500c6-d9f7-460d-9969-e3f444fb0c82-dns-svc\") pod \"5af500c6-d9f7-460d-9969-e3f444fb0c82\" (UID: \"5af500c6-d9f7-460d-9969-e3f444fb0c82\") " Jan 29 08:03:31 crc kubenswrapper[4826]: I0129 08:03:31.535140 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5n4xh\" (UniqueName: \"kubernetes.io/projected/5af500c6-d9f7-460d-9969-e3f444fb0c82-kube-api-access-5n4xh\") pod \"5af500c6-d9f7-460d-9969-e3f444fb0c82\" (UID: \"5af500c6-d9f7-460d-9969-e3f444fb0c82\") " Jan 29 08:03:31 crc kubenswrapper[4826]: I0129 08:03:31.542389 4826 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5af500c6-d9f7-460d-9969-e3f444fb0c82-kube-api-access-5n4xh" (OuterVolumeSpecName: "kube-api-access-5n4xh") pod "5af500c6-d9f7-460d-9969-e3f444fb0c82" (UID: "5af500c6-d9f7-460d-9969-e3f444fb0c82"). InnerVolumeSpecName "kube-api-access-5n4xh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:03:31 crc kubenswrapper[4826]: I0129 08:03:31.567905 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5af500c6-d9f7-460d-9969-e3f444fb0c82-config" (OuterVolumeSpecName: "config") pod "5af500c6-d9f7-460d-9969-e3f444fb0c82" (UID: "5af500c6-d9f7-460d-9969-e3f444fb0c82"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:03:31 crc kubenswrapper[4826]: I0129 08:03:31.580225 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5af500c6-d9f7-460d-9969-e3f444fb0c82-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5af500c6-d9f7-460d-9969-e3f444fb0c82" (UID: "5af500c6-d9f7-460d-9969-e3f444fb0c82"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:03:31 crc kubenswrapper[4826]: I0129 08:03:31.637324 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5af500c6-d9f7-460d-9969-e3f444fb0c82-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:03:31 crc kubenswrapper[4826]: I0129 08:03:31.637350 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5af500c6-d9f7-460d-9969-e3f444fb0c82-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 08:03:31 crc kubenswrapper[4826]: I0129 08:03:31.637360 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5n4xh\" (UniqueName: \"kubernetes.io/projected/5af500c6-d9f7-460d-9969-e3f444fb0c82-kube-api-access-5n4xh\") on node \"crc\" DevicePath \"\"" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.226581 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.297772 4826 generic.go:334] "Generic (PLEG): container finished" podID="f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf" containerID="04d533c3263c5bb27c62af06206108331301dab7b31d4a532ade56b40415ffb9" exitCode=0 Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.297904 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf","Type":"ContainerDied","Data":"04d533c3263c5bb27c62af06206108331301dab7b31d4a532ade56b40415ffb9"} Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.297948 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf","Type":"ContainerDied","Data":"b3adbc2a3fc33fef4ba81dc5f7855b3c1b08274a59569595ede705102ceef049"} Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.297977 4826 scope.go:117] "RemoveContainer" 
containerID="04d533c3263c5bb27c62af06206108331301dab7b31d4a532ade56b40415ffb9" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.298154 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.304427 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-576ffbb965-vmkd8" event={"ID":"5af500c6-d9f7-460d-9969-e3f444fb0c82","Type":"ContainerDied","Data":"16aba1ac9ad3114552e77ea0933d11a93a3db6970f1a79a7ed89fe931da28c9b"} Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.304549 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-576ffbb965-vmkd8" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.336753 4826 scope.go:117] "RemoveContainer" containerID="2be5f188e6ec8202e6f83fc5ea5558d869009b79d2cbe45a0c4bcebf63b079ae" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.350094 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-pod-info\") pod \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.350179 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-erlang-cookie-secret\") pod \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.350235 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-rabbitmq-erlang-cookie\") pod \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\" (UID: 
\"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.350281 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-plugins-conf\") pod \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.350346 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27qzg\" (UniqueName: \"kubernetes.io/projected/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-kube-api-access-27qzg\") pod \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.350376 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-server-conf\") pod \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.350432 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-rabbitmq-confd\") pod \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.350453 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-rabbitmq-plugins\") pod \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.350475 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-rabbitmq-tls\") pod \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.350613 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9c77b2-9802-49f3-8dde-c05d3c74540d\") pod \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.350644 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-config-data\") pod \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.351261 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf" (UID: "f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.352110 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf" (UID: "f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.355386 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf" (UID: "f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.357571 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-pod-info" (OuterVolumeSpecName: "pod-info") pod "f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf" (UID: "f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.358137 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf" (UID: "f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.360531 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-kube-api-access-27qzg" (OuterVolumeSpecName: "kube-api-access-27qzg") pod "f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf" (UID: "f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf"). InnerVolumeSpecName "kube-api-access-27qzg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.373239 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf" (UID: "f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.380587 4826 scope.go:117] "RemoveContainer" containerID="04d533c3263c5bb27c62af06206108331301dab7b31d4a532ade56b40415ffb9" Jan 29 08:03:32 crc kubenswrapper[4826]: E0129 08:03:32.381075 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04d533c3263c5bb27c62af06206108331301dab7b31d4a532ade56b40415ffb9\": container with ID starting with 04d533c3263c5bb27c62af06206108331301dab7b31d4a532ade56b40415ffb9 not found: ID does not exist" containerID="04d533c3263c5bb27c62af06206108331301dab7b31d4a532ade56b40415ffb9" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.381115 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04d533c3263c5bb27c62af06206108331301dab7b31d4a532ade56b40415ffb9"} err="failed to get container status \"04d533c3263c5bb27c62af06206108331301dab7b31d4a532ade56b40415ffb9\": rpc error: code = NotFound desc = could not find container \"04d533c3263c5bb27c62af06206108331301dab7b31d4a532ade56b40415ffb9\": container with ID starting with 04d533c3263c5bb27c62af06206108331301dab7b31d4a532ade56b40415ffb9 not found: ID does not exist" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.381141 4826 scope.go:117] "RemoveContainer" containerID="2be5f188e6ec8202e6f83fc5ea5558d869009b79d2cbe45a0c4bcebf63b079ae" Jan 29 08:03:32 crc kubenswrapper[4826]: E0129 08:03:32.381478 4826 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2be5f188e6ec8202e6f83fc5ea5558d869009b79d2cbe45a0c4bcebf63b079ae\": container with ID starting with 2be5f188e6ec8202e6f83fc5ea5558d869009b79d2cbe45a0c4bcebf63b079ae not found: ID does not exist" containerID="2be5f188e6ec8202e6f83fc5ea5558d869009b79d2cbe45a0c4bcebf63b079ae" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.381512 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2be5f188e6ec8202e6f83fc5ea5558d869009b79d2cbe45a0c4bcebf63b079ae"} err="failed to get container status \"2be5f188e6ec8202e6f83fc5ea5558d869009b79d2cbe45a0c4bcebf63b079ae\": rpc error: code = NotFound desc = could not find container \"2be5f188e6ec8202e6f83fc5ea5558d869009b79d2cbe45a0c4bcebf63b079ae\": container with ID starting with 2be5f188e6ec8202e6f83fc5ea5558d869009b79d2cbe45a0c4bcebf63b079ae not found: ID does not exist" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.381530 4826 scope.go:117] "RemoveContainer" containerID="f8b72bfe960e84e2de81b8ba6e5934127cceae272945f2fa7a42796d2d07e072" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.394863 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9c77b2-9802-49f3-8dde-c05d3c74540d" (OuterVolumeSpecName: "persistence") pod "f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf" (UID: "f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf"). InnerVolumeSpecName "pvc-dc9c77b2-9802-49f3-8dde-c05d3c74540d". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.397905 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-576ffbb965-vmkd8"] Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.406147 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-config-data" (OuterVolumeSpecName: "config-data") pod "f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf" (UID: "f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.408125 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-server-conf" (OuterVolumeSpecName: "server-conf") pod "f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf" (UID: "f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.413879 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-576ffbb965-vmkd8"] Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.452035 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf" (UID: "f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.452261 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-rabbitmq-confd\") pod \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\" (UID: \"f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf\") " Jan 29 08:03:32 crc kubenswrapper[4826]: W0129 08:03:32.452413 4826 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf/volumes/kubernetes.io~projected/rabbitmq-confd Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.452476 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf" (UID: "f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.452646 4826 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.452660 4826 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.452669 4826 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.452700 4826 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-dc9c77b2-9802-49f3-8dde-c05d3c74540d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9c77b2-9802-49f3-8dde-c05d3c74540d\") on node \"crc\" " Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.452711 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.452719 4826 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-pod-info\") on node \"crc\" DevicePath \"\"" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.452727 4826 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 29 08:03:32 crc 
kubenswrapper[4826]: I0129 08:03:32.453417 4826 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.453436 4826 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.453445 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27qzg\" (UniqueName: \"kubernetes.io/projected/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-kube-api-access-27qzg\") on node \"crc\" DevicePath \"\"" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.453453 4826 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf-server-conf\") on node \"crc\" DevicePath \"\"" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.458452 4826 scope.go:117] "RemoveContainer" containerID="90bfde05dbf0dfc8cb68c6d870da786179b207f0ebd5cd30bcb761d9bbd34c0a" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.468934 4826 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.469243 4826 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-dc9c77b2-9802-49f3-8dde-c05d3c74540d" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9c77b2-9802-49f3-8dde-c05d3c74540d") on node "crc" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.555138 4826 reconciler_common.go:293] "Volume detached for volume \"pvc-dc9c77b2-9802-49f3-8dde-c05d3c74540d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9c77b2-9802-49f3-8dde-c05d3c74540d\") on node \"crc\" DevicePath \"\"" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.636191 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.646691 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.661870 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 08:03:32 crc kubenswrapper[4826]: E0129 08:03:32.662221 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5af500c6-d9f7-460d-9969-e3f444fb0c82" containerName="dnsmasq-dns" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.662249 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5af500c6-d9f7-460d-9969-e3f444fb0c82" containerName="dnsmasq-dns" Jan 29 08:03:32 crc kubenswrapper[4826]: E0129 08:03:32.662262 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf" containerName="rabbitmq" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.662272 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf" containerName="rabbitmq" Jan 29 08:03:32 crc kubenswrapper[4826]: E0129 08:03:32.662286 4826 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5af500c6-d9f7-460d-9969-e3f444fb0c82" containerName="init" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.662312 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5af500c6-d9f7-460d-9969-e3f444fb0c82" containerName="init" Jan 29 08:03:32 crc kubenswrapper[4826]: E0129 08:03:32.662322 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf" containerName="setup-container" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.662329 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf" containerName="setup-container" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.662500 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="5af500c6-d9f7-460d-9969-e3f444fb0c82" containerName="dnsmasq-dns" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.662521 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf" containerName="rabbitmq" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.663486 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.665150 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.665686 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-qqkhd" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.665868 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.665972 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.669291 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.669427 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.669443 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.683953 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.759841 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7\") " pod="openstack/rabbitmq-server-0" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.759936 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7\") " pod="openstack/rabbitmq-server-0" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.760077 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7-config-data\") pod \"rabbitmq-server-0\" (UID: \"cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7\") " pod="openstack/rabbitmq-server-0" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.760184 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7\") " pod="openstack/rabbitmq-server-0" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.760342 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7\") " pod="openstack/rabbitmq-server-0" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.760423 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dc9c77b2-9802-49f3-8dde-c05d3c74540d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9c77b2-9802-49f3-8dde-c05d3c74540d\") pod \"rabbitmq-server-0\" (UID: \"cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7\") " pod="openstack/rabbitmq-server-0" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.760475 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7\") " pod="openstack/rabbitmq-server-0" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.760538 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7\") " pod="openstack/rabbitmq-server-0" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.760575 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7\") " pod="openstack/rabbitmq-server-0" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.761004 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7\") " pod="openstack/rabbitmq-server-0" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.761184 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9h5f\" (UniqueName: \"kubernetes.io/projected/cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7-kube-api-access-t9h5f\") pod \"rabbitmq-server-0\" (UID: \"cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7\") " pod="openstack/rabbitmq-server-0" Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.833649 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5af500c6-d9f7-460d-9969-e3f444fb0c82" path="/var/lib/kubelet/pods/5af500c6-d9f7-460d-9969-e3f444fb0c82/volumes" 
Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.835344 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf" path="/var/lib/kubelet/pods/f9305a20-f5b6-4d8c-84f3-19e8bbd6aadf/volumes"
Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.862316 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.862369 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.862441 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dc9c77b2-9802-49f3-8dde-c05d3c74540d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9c77b2-9802-49f3-8dde-c05d3c74540d\") pod \"rabbitmq-server-0\" (UID: \"cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.862468 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.862512 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.862538 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.862567 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.862600 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9h5f\" (UniqueName: \"kubernetes.io/projected/cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7-kube-api-access-t9h5f\") pod \"rabbitmq-server-0\" (UID: \"cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.862653 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.862675 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.862713 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7-config-data\") pod \"rabbitmq-server-0\" (UID: \"cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.863631 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7-config-data\") pod \"rabbitmq-server-0\" (UID: \"cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.866003 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.866696 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.868699 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.868946 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.869943 4826 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.869968 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dc9c77b2-9802-49f3-8dde-c05d3c74540d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9c77b2-9802-49f3-8dde-c05d3c74540d\") pod \"rabbitmq-server-0\" (UID: \"cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e1df1097f9d829b19f128c6a54903625abdcd9215cb4cc34e0d59165afb70809/globalmount\"" pod="openstack/rabbitmq-server-0"
Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.870413 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.871564 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.875155 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.884800 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.885117 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9h5f\" (UniqueName: \"kubernetes.io/projected/cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7-kube-api-access-t9h5f\") pod \"rabbitmq-server-0\" (UID: \"cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:03:32 crc kubenswrapper[4826]: I0129 08:03:32.906081 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dc9c77b2-9802-49f3-8dde-c05d3c74540d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dc9c77b2-9802-49f3-8dde-c05d3c74540d\") pod \"rabbitmq-server-0\" (UID: \"cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7\") " pod="openstack/rabbitmq-server-0"
Jan 29 08:03:33 crc kubenswrapper[4826]: I0129 08:03:33.010267 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:03:33 crc kubenswrapper[4826]: I0129 08:03:33.037090 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 29 08:03:33 crc kubenswrapper[4826]: I0129 08:03:33.168094 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-plugins-conf\") pod \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") "
Jan 29 08:03:33 crc kubenswrapper[4826]: I0129 08:03:33.168135 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-pod-info\") pod \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") "
Jan 29 08:03:33 crc kubenswrapper[4826]: I0129 08:03:33.168174 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qp6zv\" (UniqueName: \"kubernetes.io/projected/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-kube-api-access-qp6zv\") pod \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") "
Jan 29 08:03:33 crc kubenswrapper[4826]: I0129 08:03:33.168250 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-server-conf\") pod \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") "
Jan 29 08:03:33 crc kubenswrapper[4826]: I0129 08:03:33.168275 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-rabbitmq-tls\") pod \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") "
Jan 29 08:03:33 crc kubenswrapper[4826]: I0129 08:03:33.168356 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-rabbitmq-plugins\") pod \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") "
Jan 29 08:03:33 crc kubenswrapper[4826]: I0129 08:03:33.168407 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-config-data\") pod \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") "
Jan 29 08:03:33 crc kubenswrapper[4826]: I0129 08:03:33.168431 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-rabbitmq-confd\") pod \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") "
Jan 29 08:03:33 crc kubenswrapper[4826]: I0129 08:03:33.168469 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-rabbitmq-erlang-cookie\") pod \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") "
Jan 29 08:03:33 crc kubenswrapper[4826]: I0129 08:03:33.168493 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-erlang-cookie-secret\") pod \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") "
Jan 29 08:03:33 crc kubenswrapper[4826]: I0129 08:03:33.168638 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-009411f1-5778-43a5-ae31-5b0d483ea442\") pod \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\" (UID: \"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8\") "
Jan 29 08:03:33 crc kubenswrapper[4826]: I0129 08:03:33.169700 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "376bc5a2-a062-4c32-ae34-ea2d16f3e1c8" (UID: "376bc5a2-a062-4c32-ae34-ea2d16f3e1c8"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 08:03:33 crc kubenswrapper[4826]: I0129 08:03:33.169733 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "376bc5a2-a062-4c32-ae34-ea2d16f3e1c8" (UID: "376bc5a2-a062-4c32-ae34-ea2d16f3e1c8"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 08:03:33 crc kubenswrapper[4826]: I0129 08:03:33.170166 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "376bc5a2-a062-4c32-ae34-ea2d16f3e1c8" (UID: "376bc5a2-a062-4c32-ae34-ea2d16f3e1c8"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 08:03:33 crc kubenswrapper[4826]: I0129 08:03:33.175058 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "376bc5a2-a062-4c32-ae34-ea2d16f3e1c8" (UID: "376bc5a2-a062-4c32-ae34-ea2d16f3e1c8"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:03:33 crc kubenswrapper[4826]: I0129 08:03:33.175703 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "376bc5a2-a062-4c32-ae34-ea2d16f3e1c8" (UID: "376bc5a2-a062-4c32-ae34-ea2d16f3e1c8"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:03:33 crc kubenswrapper[4826]: I0129 08:03:33.180732 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-pod-info" (OuterVolumeSpecName: "pod-info") pod "376bc5a2-a062-4c32-ae34-ea2d16f3e1c8" (UID: "376bc5a2-a062-4c32-ae34-ea2d16f3e1c8"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 29 08:03:33 crc kubenswrapper[4826]: I0129 08:03:33.183620 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-kube-api-access-qp6zv" (OuterVolumeSpecName: "kube-api-access-qp6zv") pod "376bc5a2-a062-4c32-ae34-ea2d16f3e1c8" (UID: "376bc5a2-a062-4c32-ae34-ea2d16f3e1c8"). InnerVolumeSpecName "kube-api-access-qp6zv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:03:33 crc kubenswrapper[4826]: I0129 08:03:33.187024 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-009411f1-5778-43a5-ae31-5b0d483ea442" (OuterVolumeSpecName: "persistence") pod "376bc5a2-a062-4c32-ae34-ea2d16f3e1c8" (UID: "376bc5a2-a062-4c32-ae34-ea2d16f3e1c8"). InnerVolumeSpecName "pvc-009411f1-5778-43a5-ae31-5b0d483ea442". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 29 08:03:33 crc kubenswrapper[4826]: I0129 08:03:33.195673 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-config-data" (OuterVolumeSpecName: "config-data") pod "376bc5a2-a062-4c32-ae34-ea2d16f3e1c8" (UID: "376bc5a2-a062-4c32-ae34-ea2d16f3e1c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 08:03:33 crc kubenswrapper[4826]: I0129 08:03:33.230760 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-server-conf" (OuterVolumeSpecName: "server-conf") pod "376bc5a2-a062-4c32-ae34-ea2d16f3e1c8" (UID: "376bc5a2-a062-4c32-ae34-ea2d16f3e1c8"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 08:03:33 crc kubenswrapper[4826]: I0129 08:03:33.271097 4826 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 29 08:03:33 crc kubenswrapper[4826]: I0129 08:03:33.271137 4826 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-pod-info\") on node \"crc\" DevicePath \"\""
Jan 29 08:03:33 crc kubenswrapper[4826]: I0129 08:03:33.271151 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qp6zv\" (UniqueName: \"kubernetes.io/projected/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-kube-api-access-qp6zv\") on node \"crc\" DevicePath \"\""
Jan 29 08:03:33 crc kubenswrapper[4826]: I0129 08:03:33.271167 4826 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-server-conf\") on node \"crc\" DevicePath \"\""
Jan 29 08:03:33 crc kubenswrapper[4826]: I0129 08:03:33.271178 4826 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Jan 29 08:03:33 crc kubenswrapper[4826]: I0129 08:03:33.271188 4826 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 29 08:03:33 crc kubenswrapper[4826]: I0129 08:03:33.271198 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 08:03:33 crc kubenswrapper[4826]: I0129 08:03:33.271210 4826 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 29 08:03:33 crc kubenswrapper[4826]: I0129 08:03:33.271222 4826 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 29 08:03:33 crc kubenswrapper[4826]: I0129 08:03:33.271257 4826 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-009411f1-5778-43a5-ae31-5b0d483ea442\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-009411f1-5778-43a5-ae31-5b0d483ea442\") on node \"crc\" "
Jan 29 08:03:33 crc kubenswrapper[4826]: I0129 08:03:33.272642 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "376bc5a2-a062-4c32-ae34-ea2d16f3e1c8" (UID: "376bc5a2-a062-4c32-ae34-ea2d16f3e1c8"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.291151 4826 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.291426 4826 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-009411f1-5778-43a5-ae31-5b0d483ea442" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-009411f1-5778-43a5-ae31-5b0d483ea442") on node "crc"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.313345 4826 generic.go:334] "Generic (PLEG): container finished" podID="376bc5a2-a062-4c32-ae34-ea2d16f3e1c8" containerID="0ddf6daeff3e08ebcdd0225bdadf4e120ac88a2257b2489823451505b1e9312d" exitCode=0
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.313402 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.313441 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8","Type":"ContainerDied","Data":"0ddf6daeff3e08ebcdd0225bdadf4e120ac88a2257b2489823451505b1e9312d"}
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.313509 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"376bc5a2-a062-4c32-ae34-ea2d16f3e1c8","Type":"ContainerDied","Data":"3c16f685f11f15760a340d53f9596f954f158f59de0eb4ef15ccff0f416632b2"}
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.313535 4826 scope.go:117] "RemoveContainer" containerID="0ddf6daeff3e08ebcdd0225bdadf4e120ac88a2257b2489823451505b1e9312d"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.341754 4826 scope.go:117] "RemoveContainer" containerID="2cae4c388ac46da0d0758a2c53e1c6f81d8e9f1b0ea830e0a281452453dd4f20"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.366735 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.367480 4826 scope.go:117] "RemoveContainer" containerID="0ddf6daeff3e08ebcdd0225bdadf4e120ac88a2257b2489823451505b1e9312d"
Jan 29 08:03:34 crc kubenswrapper[4826]: E0129 08:03:33.368304 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ddf6daeff3e08ebcdd0225bdadf4e120ac88a2257b2489823451505b1e9312d\": container with ID starting with 0ddf6daeff3e08ebcdd0225bdadf4e120ac88a2257b2489823451505b1e9312d not found: ID does not exist" containerID="0ddf6daeff3e08ebcdd0225bdadf4e120ac88a2257b2489823451505b1e9312d"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.368342 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ddf6daeff3e08ebcdd0225bdadf4e120ac88a2257b2489823451505b1e9312d"} err="failed to get container status \"0ddf6daeff3e08ebcdd0225bdadf4e120ac88a2257b2489823451505b1e9312d\": rpc error: code = NotFound desc = could not find container \"0ddf6daeff3e08ebcdd0225bdadf4e120ac88a2257b2489823451505b1e9312d\": container with ID starting with 0ddf6daeff3e08ebcdd0225bdadf4e120ac88a2257b2489823451505b1e9312d not found: ID does not exist"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.368368 4826 scope.go:117] "RemoveContainer" containerID="2cae4c388ac46da0d0758a2c53e1c6f81d8e9f1b0ea830e0a281452453dd4f20"
Jan 29 08:03:34 crc kubenswrapper[4826]: E0129 08:03:33.368688 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cae4c388ac46da0d0758a2c53e1c6f81d8e9f1b0ea830e0a281452453dd4f20\": container with ID starting with 2cae4c388ac46da0d0758a2c53e1c6f81d8e9f1b0ea830e0a281452453dd4f20 not found: ID does not exist" containerID="2cae4c388ac46da0d0758a2c53e1c6f81d8e9f1b0ea830e0a281452453dd4f20"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.368704 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cae4c388ac46da0d0758a2c53e1c6f81d8e9f1b0ea830e0a281452453dd4f20"} err="failed to get container status \"2cae4c388ac46da0d0758a2c53e1c6f81d8e9f1b0ea830e0a281452453dd4f20\": rpc error: code = NotFound desc = could not find container \"2cae4c388ac46da0d0758a2c53e1c6f81d8e9f1b0ea830e0a281452453dd4f20\": container with ID starting with 2cae4c388ac46da0d0758a2c53e1c6f81d8e9f1b0ea830e0a281452453dd4f20 not found: ID does not exist"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.378374 4826 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.378405 4826 reconciler_common.go:293] "Volume detached for volume \"pvc-009411f1-5778-43a5-ae31-5b0d483ea442\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-009411f1-5778-43a5-ae31-5b0d483ea442\") on node \"crc\" DevicePath \"\""
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.378436 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.384048 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 29 08:03:34 crc kubenswrapper[4826]: E0129 08:03:33.384677 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="376bc5a2-a062-4c32-ae34-ea2d16f3e1c8" containerName="rabbitmq"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.384701 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="376bc5a2-a062-4c32-ae34-ea2d16f3e1c8" containerName="rabbitmq"
Jan 29 08:03:34 crc kubenswrapper[4826]: E0129 08:03:33.384754 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="376bc5a2-a062-4c32-ae34-ea2d16f3e1c8" containerName="setup-container"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.384767 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="376bc5a2-a062-4c32-ae34-ea2d16f3e1c8" containerName="setup-container"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.385038 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="376bc5a2-a062-4c32-ae34-ea2d16f3e1c8" containerName="rabbitmq"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.386817 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.390709 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.390783 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.391245 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-tkp8v"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.391443 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.391751 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.392543 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.392748 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.393790 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.480132 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/07711939-da14-4da1-8017-25d0f0719763-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"07711939-da14-4da1-8017-25d0f0719763\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.480178 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/07711939-da14-4da1-8017-25d0f0719763-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"07711939-da14-4da1-8017-25d0f0719763\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.480214 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5h4g\" (UniqueName: \"kubernetes.io/projected/07711939-da14-4da1-8017-25d0f0719763-kube-api-access-w5h4g\") pod \"rabbitmq-cell1-server-0\" (UID: \"07711939-da14-4da1-8017-25d0f0719763\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.480240 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/07711939-da14-4da1-8017-25d0f0719763-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"07711939-da14-4da1-8017-25d0f0719763\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.480279 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/07711939-da14-4da1-8017-25d0f0719763-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"07711939-da14-4da1-8017-25d0f0719763\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.480353 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/07711939-da14-4da1-8017-25d0f0719763-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"07711939-da14-4da1-8017-25d0f0719763\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.480393 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-009411f1-5778-43a5-ae31-5b0d483ea442\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-009411f1-5778-43a5-ae31-5b0d483ea442\") pod \"rabbitmq-cell1-server-0\" (UID: \"07711939-da14-4da1-8017-25d0f0719763\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.480435 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/07711939-da14-4da1-8017-25d0f0719763-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"07711939-da14-4da1-8017-25d0f0719763\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.480460 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/07711939-da14-4da1-8017-25d0f0719763-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"07711939-da14-4da1-8017-25d0f0719763\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.480483 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/07711939-da14-4da1-8017-25d0f0719763-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"07711939-da14-4da1-8017-25d0f0719763\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.480510 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/07711939-da14-4da1-8017-25d0f0719763-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"07711939-da14-4da1-8017-25d0f0719763\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.536920 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.582132 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/07711939-da14-4da1-8017-25d0f0719763-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"07711939-da14-4da1-8017-25d0f0719763\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.582213 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/07711939-da14-4da1-8017-25d0f0719763-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"07711939-da14-4da1-8017-25d0f0719763\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.582259 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-009411f1-5778-43a5-ae31-5b0d483ea442\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-009411f1-5778-43a5-ae31-5b0d483ea442\") pod \"rabbitmq-cell1-server-0\" (UID: \"07711939-da14-4da1-8017-25d0f0719763\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.582288 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/07711939-da14-4da1-8017-25d0f0719763-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"07711939-da14-4da1-8017-25d0f0719763\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.583938 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/07711939-da14-4da1-8017-25d0f0719763-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"07711939-da14-4da1-8017-25d0f0719763\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.584060 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/07711939-da14-4da1-8017-25d0f0719763-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"07711939-da14-4da1-8017-25d0f0719763\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.584529 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/07711939-da14-4da1-8017-25d0f0719763-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"07711939-da14-4da1-8017-25d0f0719763\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.582333 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/07711939-da14-4da1-8017-25d0f0719763-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"07711939-da14-4da1-8017-25d0f0719763\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.584620 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/07711939-da14-4da1-8017-25d0f0719763-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"07711939-da14-4da1-8017-25d0f0719763\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.584662 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/07711939-da14-4da1-8017-25d0f0719763-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"07711939-da14-4da1-8017-25d0f0719763\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.585064 4826 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.585104 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/07711939-da14-4da1-8017-25d0f0719763-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"07711939-da14-4da1-8017-25d0f0719763\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.585109 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-009411f1-5778-43a5-ae31-5b0d483ea442\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-009411f1-5778-43a5-ae31-5b0d483ea442\") pod \"rabbitmq-cell1-server-0\" (UID: \"07711939-da14-4da1-8017-25d0f0719763\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/dec26431f86c0a2fa5fbacb3eb57039fa841352388ca49c72cadb71db320a17a/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.584733 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/07711939-da14-4da1-8017-25d0f0719763-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"07711939-da14-4da1-8017-25d0f0719763\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.585182 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/07711939-da14-4da1-8017-25d0f0719763-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"07711939-da14-4da1-8017-25d0f0719763\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.585252 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5h4g\" (UniqueName: \"kubernetes.io/projected/07711939-da14-4da1-8017-25d0f0719763-kube-api-access-w5h4g\") pod \"rabbitmq-cell1-server-0\" (UID: \"07711939-da14-4da1-8017-25d0f0719763\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.586127 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/07711939-da14-4da1-8017-25d0f0719763-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"07711939-da14-4da1-8017-25d0f0719763\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.586988 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/07711939-da14-4da1-8017-25d0f0719763-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"07711939-da14-4da1-8017-25d0f0719763\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.590139 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/07711939-da14-4da1-8017-25d0f0719763-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"07711939-da14-4da1-8017-25d0f0719763\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.590538 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/07711939-da14-4da1-8017-25d0f0719763-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"07711939-da14-4da1-8017-25d0f0719763\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.591771 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/07711939-da14-4da1-8017-25d0f0719763-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID:
\"07711939-da14-4da1-8017-25d0f0719763\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.602372 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/07711939-da14-4da1-8017-25d0f0719763-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"07711939-da14-4da1-8017-25d0f0719763\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.609390 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5h4g\" (UniqueName: \"kubernetes.io/projected/07711939-da14-4da1-8017-25d0f0719763-kube-api-access-w5h4g\") pod \"rabbitmq-cell1-server-0\" (UID: \"07711939-da14-4da1-8017-25d0f0719763\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.625474 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-009411f1-5778-43a5-ae31-5b0d483ea442\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-009411f1-5778-43a5-ae31-5b0d483ea442\") pod \"rabbitmq-cell1-server-0\" (UID: \"07711939-da14-4da1-8017-25d0f0719763\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:33.718467 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:34.245668 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 08:03:34 crc kubenswrapper[4826]: W0129 08:03:34.255564 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07711939_da14_4da1_8017_25d0f0719763.slice/crio-0549bee57bdc48d439760f0d638fc14746d327007681942e26d129222876f3f6 WatchSource:0}: Error finding container 0549bee57bdc48d439760f0d638fc14746d327007681942e26d129222876f3f6: Status 404 returned error can't find the container with id 0549bee57bdc48d439760f0d638fc14746d327007681942e26d129222876f3f6 Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:34.326196 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7","Type":"ContainerStarted","Data":"fa43f87d7051352a11e2efa8524a04f2c93d228cbb7de0376e087ef397661856"} Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:34.330612 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"07711939-da14-4da1-8017-25d0f0719763","Type":"ContainerStarted","Data":"0549bee57bdc48d439760f0d638fc14746d327007681942e26d129222876f3f6"} Jan 29 08:03:34 crc kubenswrapper[4826]: I0129 08:03:34.826646 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="376bc5a2-a062-4c32-ae34-ea2d16f3e1c8" path="/var/lib/kubelet/pods/376bc5a2-a062-4c32-ae34-ea2d16f3e1c8/volumes" Jan 29 08:03:35 crc kubenswrapper[4826]: I0129 08:03:35.349818 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7","Type":"ContainerStarted","Data":"d80262284920f51c799074b8217fde1724be4dde4a7a27911dd386ec9d2a1ae6"} Jan 29 08:03:36 crc kubenswrapper[4826]: I0129 08:03:36.362882 4826 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"07711939-da14-4da1-8017-25d0f0719763","Type":"ContainerStarted","Data":"4273255b31d878ca38c4d13f59938a6ce84c9afc1ea193a53e0d51792560c542"} Jan 29 08:04:09 crc kubenswrapper[4826]: I0129 08:04:09.683089 4826 generic.go:334] "Generic (PLEG): container finished" podID="cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7" containerID="d80262284920f51c799074b8217fde1724be4dde4a7a27911dd386ec9d2a1ae6" exitCode=0 Jan 29 08:04:09 crc kubenswrapper[4826]: I0129 08:04:09.683215 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7","Type":"ContainerDied","Data":"d80262284920f51c799074b8217fde1724be4dde4a7a27911dd386ec9d2a1ae6"} Jan 29 08:04:09 crc kubenswrapper[4826]: I0129 08:04:09.685065 4826 generic.go:334] "Generic (PLEG): container finished" podID="07711939-da14-4da1-8017-25d0f0719763" containerID="4273255b31d878ca38c4d13f59938a6ce84c9afc1ea193a53e0d51792560c542" exitCode=0 Jan 29 08:04:09 crc kubenswrapper[4826]: I0129 08:04:09.685084 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"07711939-da14-4da1-8017-25d0f0719763","Type":"ContainerDied","Data":"4273255b31d878ca38c4d13f59938a6ce84c9afc1ea193a53e0d51792560c542"} Jan 29 08:04:10 crc kubenswrapper[4826]: I0129 08:04:10.694460 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7","Type":"ContainerStarted","Data":"cd5fea527f9097377b93a2bdf3e489f49aa028789ffb083cdb751cc8f8005199"} Jan 29 08:04:10 crc kubenswrapper[4826]: I0129 08:04:10.694974 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 29 08:04:10 crc kubenswrapper[4826]: I0129 08:04:10.697129 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"07711939-da14-4da1-8017-25d0f0719763","Type":"ContainerStarted","Data":"e5225fe98a7010f954627ca79dc29fa2e137df499d966c84289847fb36898a61"} Jan 29 08:04:10 crc kubenswrapper[4826]: I0129 08:04:10.697492 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:04:10 crc kubenswrapper[4826]: I0129 08:04:10.772984 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.772966838 podStartE2EDuration="38.772966838s" podCreationTimestamp="2026-01-29 08:03:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:04:10.771289304 +0000 UTC m=+4834.633082403" watchObservedRunningTime="2026-01-29 08:04:10.772966838 +0000 UTC m=+4834.634759917" Jan 29 08:04:10 crc kubenswrapper[4826]: I0129 08:04:10.804464 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.804439645 podStartE2EDuration="37.804439645s" podCreationTimestamp="2026-01-29 08:03:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:04:10.801631381 +0000 UTC m=+4834.663424460" watchObservedRunningTime="2026-01-29 08:04:10.804439645 +0000 UTC m=+4834.666232714" Jan 29 08:04:23 crc kubenswrapper[4826]: I0129 08:04:23.041654 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 29 08:04:23 crc kubenswrapper[4826]: I0129 08:04:23.721616 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 29 08:04:27 crc kubenswrapper[4826]: I0129 08:04:27.621436 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 29 08:04:27 crc kubenswrapper[4826]: I0129 
08:04:27.623693 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 29 08:04:27 crc kubenswrapper[4826]: I0129 08:04:27.628414 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-qlrsf" Jan 29 08:04:27 crc kubenswrapper[4826]: I0129 08:04:27.635750 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 29 08:04:27 crc kubenswrapper[4826]: I0129 08:04:27.762607 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fplth\" (UniqueName: \"kubernetes.io/projected/a3614f89-f14d-4b21-81f9-b9b67e5b343b-kube-api-access-fplth\") pod \"mariadb-client\" (UID: \"a3614f89-f14d-4b21-81f9-b9b67e5b343b\") " pod="openstack/mariadb-client" Jan 29 08:04:27 crc kubenswrapper[4826]: I0129 08:04:27.864818 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fplth\" (UniqueName: \"kubernetes.io/projected/a3614f89-f14d-4b21-81f9-b9b67e5b343b-kube-api-access-fplth\") pod \"mariadb-client\" (UID: \"a3614f89-f14d-4b21-81f9-b9b67e5b343b\") " pod="openstack/mariadb-client" Jan 29 08:04:28 crc kubenswrapper[4826]: I0129 08:04:28.395841 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fplth\" (UniqueName: \"kubernetes.io/projected/a3614f89-f14d-4b21-81f9-b9b67e5b343b-kube-api-access-fplth\") pod \"mariadb-client\" (UID: \"a3614f89-f14d-4b21-81f9-b9b67e5b343b\") " pod="openstack/mariadb-client" Jan 29 08:04:28 crc kubenswrapper[4826]: I0129 08:04:28.552724 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 29 08:04:28 crc kubenswrapper[4826]: I0129 08:04:28.919736 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 29 08:04:28 crc kubenswrapper[4826]: I0129 08:04:28.925531 4826 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 08:04:29 crc kubenswrapper[4826]: I0129 08:04:29.865593 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"a3614f89-f14d-4b21-81f9-b9b67e5b343b","Type":"ContainerStarted","Data":"5977218996c9012350b1ae6393b259ae9f476d108013fc457e518b7d32d6bfaa"} Jan 29 08:04:29 crc kubenswrapper[4826]: I0129 08:04:29.865917 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"a3614f89-f14d-4b21-81f9-b9b67e5b343b","Type":"ContainerStarted","Data":"9c3f0292979a8c225394a9451286d9dcde4bc321cfb6793f8b244d4436857641"} Jan 29 08:04:29 crc kubenswrapper[4826]: I0129 08:04:29.887906 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client" podStartSLOduration=2.336559236 podStartE2EDuration="2.887882823s" podCreationTimestamp="2026-01-29 08:04:27 +0000 UTC" firstStartedPulling="2026-01-29 08:04:28.924080397 +0000 UTC m=+4852.785873466" lastFinishedPulling="2026-01-29 08:04:29.475403974 +0000 UTC m=+4853.337197053" observedRunningTime="2026-01-29 08:04:29.884287478 +0000 UTC m=+4853.746080557" watchObservedRunningTime="2026-01-29 08:04:29.887882823 +0000 UTC m=+4853.749675892" Jan 29 08:04:41 crc kubenswrapper[4826]: I0129 08:04:41.585091 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 29 08:04:41 crc kubenswrapper[4826]: I0129 08:04:41.585994 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-client" podUID="a3614f89-f14d-4b21-81f9-b9b67e5b343b" containerName="mariadb-client" 
containerID="cri-o://5977218996c9012350b1ae6393b259ae9f476d108013fc457e518b7d32d6bfaa" gracePeriod=30 Jan 29 08:04:41 crc kubenswrapper[4826]: I0129 08:04:41.976896 4826 generic.go:334] "Generic (PLEG): container finished" podID="a3614f89-f14d-4b21-81f9-b9b67e5b343b" containerID="5977218996c9012350b1ae6393b259ae9f476d108013fc457e518b7d32d6bfaa" exitCode=143 Jan 29 08:04:41 crc kubenswrapper[4826]: I0129 08:04:41.977063 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"a3614f89-f14d-4b21-81f9-b9b67e5b343b","Type":"ContainerDied","Data":"5977218996c9012350b1ae6393b259ae9f476d108013fc457e518b7d32d6bfaa"} Jan 29 08:04:42 crc kubenswrapper[4826]: I0129 08:04:42.103137 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 29 08:04:42 crc kubenswrapper[4826]: I0129 08:04:42.214683 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fplth\" (UniqueName: \"kubernetes.io/projected/a3614f89-f14d-4b21-81f9-b9b67e5b343b-kube-api-access-fplth\") pod \"a3614f89-f14d-4b21-81f9-b9b67e5b343b\" (UID: \"a3614f89-f14d-4b21-81f9-b9b67e5b343b\") " Jan 29 08:04:42 crc kubenswrapper[4826]: I0129 08:04:42.233497 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3614f89-f14d-4b21-81f9-b9b67e5b343b-kube-api-access-fplth" (OuterVolumeSpecName: "kube-api-access-fplth") pod "a3614f89-f14d-4b21-81f9-b9b67e5b343b" (UID: "a3614f89-f14d-4b21-81f9-b9b67e5b343b"). InnerVolumeSpecName "kube-api-access-fplth". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:04:42 crc kubenswrapper[4826]: I0129 08:04:42.316591 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fplth\" (UniqueName: \"kubernetes.io/projected/a3614f89-f14d-4b21-81f9-b9b67e5b343b-kube-api-access-fplth\") on node \"crc\" DevicePath \"\"" Jan 29 08:04:42 crc kubenswrapper[4826]: I0129 08:04:42.991526 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"a3614f89-f14d-4b21-81f9-b9b67e5b343b","Type":"ContainerDied","Data":"9c3f0292979a8c225394a9451286d9dcde4bc321cfb6793f8b244d4436857641"} Jan 29 08:04:42 crc kubenswrapper[4826]: I0129 08:04:42.991899 4826 scope.go:117] "RemoveContainer" containerID="5977218996c9012350b1ae6393b259ae9f476d108013fc457e518b7d32d6bfaa" Jan 29 08:04:42 crc kubenswrapper[4826]: I0129 08:04:42.991632 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 29 08:04:43 crc kubenswrapper[4826]: I0129 08:04:43.028292 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 29 08:04:43 crc kubenswrapper[4826]: I0129 08:04:43.040106 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 29 08:04:44 crc kubenswrapper[4826]: I0129 08:04:44.825953 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3614f89-f14d-4b21-81f9-b9b67e5b343b" path="/var/lib/kubelet/pods/a3614f89-f14d-4b21-81f9-b9b67e5b343b/volumes" Jan 29 08:05:05 crc kubenswrapper[4826]: I0129 08:05:05.656577 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:05:05 crc kubenswrapper[4826]: I0129 08:05:05.657394 4826 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:05:35 crc kubenswrapper[4826]: I0129 08:05:35.656125 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:05:35 crc kubenswrapper[4826]: I0129 08:05:35.657703 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:06:05 crc kubenswrapper[4826]: I0129 08:06:05.656849 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:06:05 crc kubenswrapper[4826]: I0129 08:06:05.658628 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:06:05 crc kubenswrapper[4826]: I0129 08:06:05.658731 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" 
Jan 29 08:06:05 crc kubenswrapper[4826]: I0129 08:06:05.659797 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1fff344b8a55b0b4a3914b487742a26e3d0886958427c810117738efef01b20e"} pod="openshift-machine-config-operator/machine-config-daemon-llzmh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 08:06:05 crc kubenswrapper[4826]: I0129 08:06:05.659940 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" containerID="cri-o://1fff344b8a55b0b4a3914b487742a26e3d0886958427c810117738efef01b20e" gracePeriod=600 Jan 29 08:06:06 crc kubenswrapper[4826]: I0129 08:06:06.796358 4826 generic.go:334] "Generic (PLEG): container finished" podID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerID="1fff344b8a55b0b4a3914b487742a26e3d0886958427c810117738efef01b20e" exitCode=0 Jan 29 08:06:06 crc kubenswrapper[4826]: I0129 08:06:06.796460 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerDied","Data":"1fff344b8a55b0b4a3914b487742a26e3d0886958427c810117738efef01b20e"} Jan 29 08:06:06 crc kubenswrapper[4826]: I0129 08:06:06.796730 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerStarted","Data":"491d2214652be539c5a02abd82d2f7f7b125c3f1a64568b35d69e37bd575365a"} Jan 29 08:06:06 crc kubenswrapper[4826]: I0129 08:06:06.796769 4826 scope.go:117] "RemoveContainer" containerID="91d43ff4726de7f88ea18b0c649f8fb814edd6a2376ec59b98b6b3d03af51e74" Jan 29 08:06:09 crc kubenswrapper[4826]: I0129 08:06:09.542637 
4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5jvcb"] Jan 29 08:06:09 crc kubenswrapper[4826]: E0129 08:06:09.543501 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3614f89-f14d-4b21-81f9-b9b67e5b343b" containerName="mariadb-client" Jan 29 08:06:09 crc kubenswrapper[4826]: I0129 08:06:09.543524 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3614f89-f14d-4b21-81f9-b9b67e5b343b" containerName="mariadb-client" Jan 29 08:06:09 crc kubenswrapper[4826]: I0129 08:06:09.543789 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3614f89-f14d-4b21-81f9-b9b67e5b343b" containerName="mariadb-client" Jan 29 08:06:09 crc kubenswrapper[4826]: I0129 08:06:09.545697 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5jvcb" Jan 29 08:06:09 crc kubenswrapper[4826]: I0129 08:06:09.573008 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5jvcb"] Jan 29 08:06:09 crc kubenswrapper[4826]: I0129 08:06:09.618566 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6ec2c16-325c-4cf2-8d75-708fa085d899-utilities\") pod \"community-operators-5jvcb\" (UID: \"a6ec2c16-325c-4cf2-8d75-708fa085d899\") " pod="openshift-marketplace/community-operators-5jvcb" Jan 29 08:06:09 crc kubenswrapper[4826]: I0129 08:06:09.618906 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6ec2c16-325c-4cf2-8d75-708fa085d899-catalog-content\") pod \"community-operators-5jvcb\" (UID: \"a6ec2c16-325c-4cf2-8d75-708fa085d899\") " pod="openshift-marketplace/community-operators-5jvcb" Jan 29 08:06:09 crc kubenswrapper[4826]: I0129 08:06:09.619032 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zqwm\" (UniqueName: \"kubernetes.io/projected/a6ec2c16-325c-4cf2-8d75-708fa085d899-kube-api-access-9zqwm\") pod \"community-operators-5jvcb\" (UID: \"a6ec2c16-325c-4cf2-8d75-708fa085d899\") " pod="openshift-marketplace/community-operators-5jvcb" Jan 29 08:06:09 crc kubenswrapper[4826]: I0129 08:06:09.720822 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6ec2c16-325c-4cf2-8d75-708fa085d899-catalog-content\") pod \"community-operators-5jvcb\" (UID: \"a6ec2c16-325c-4cf2-8d75-708fa085d899\") " pod="openshift-marketplace/community-operators-5jvcb" Jan 29 08:06:09 crc kubenswrapper[4826]: I0129 08:06:09.721126 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zqwm\" (UniqueName: \"kubernetes.io/projected/a6ec2c16-325c-4cf2-8d75-708fa085d899-kube-api-access-9zqwm\") pod \"community-operators-5jvcb\" (UID: \"a6ec2c16-325c-4cf2-8d75-708fa085d899\") " pod="openshift-marketplace/community-operators-5jvcb" Jan 29 08:06:09 crc kubenswrapper[4826]: I0129 08:06:09.721275 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6ec2c16-325c-4cf2-8d75-708fa085d899-utilities\") pod \"community-operators-5jvcb\" (UID: \"a6ec2c16-325c-4cf2-8d75-708fa085d899\") " pod="openshift-marketplace/community-operators-5jvcb" Jan 29 08:06:09 crc kubenswrapper[4826]: I0129 08:06:09.721827 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6ec2c16-325c-4cf2-8d75-708fa085d899-utilities\") pod \"community-operators-5jvcb\" (UID: \"a6ec2c16-325c-4cf2-8d75-708fa085d899\") " pod="openshift-marketplace/community-operators-5jvcb" Jan 29 08:06:09 crc kubenswrapper[4826]: I0129 08:06:09.721939 4826 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6ec2c16-325c-4cf2-8d75-708fa085d899-catalog-content\") pod \"community-operators-5jvcb\" (UID: \"a6ec2c16-325c-4cf2-8d75-708fa085d899\") " pod="openshift-marketplace/community-operators-5jvcb" Jan 29 08:06:09 crc kubenswrapper[4826]: I0129 08:06:09.754723 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zqwm\" (UniqueName: \"kubernetes.io/projected/a6ec2c16-325c-4cf2-8d75-708fa085d899-kube-api-access-9zqwm\") pod \"community-operators-5jvcb\" (UID: \"a6ec2c16-325c-4cf2-8d75-708fa085d899\") " pod="openshift-marketplace/community-operators-5jvcb" Jan 29 08:06:09 crc kubenswrapper[4826]: I0129 08:06:09.887541 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5jvcb" Jan 29 08:06:10 crc kubenswrapper[4826]: I0129 08:06:10.247782 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5jvcb"] Jan 29 08:06:10 crc kubenswrapper[4826]: I0129 08:06:10.841033 4826 generic.go:334] "Generic (PLEG): container finished" podID="a6ec2c16-325c-4cf2-8d75-708fa085d899" containerID="92c64f577520ba75456eccf023c4b3770fb32870ea78e91f3bb1e37804d50e2f" exitCode=0 Jan 29 08:06:10 crc kubenswrapper[4826]: I0129 08:06:10.841118 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jvcb" event={"ID":"a6ec2c16-325c-4cf2-8d75-708fa085d899","Type":"ContainerDied","Data":"92c64f577520ba75456eccf023c4b3770fb32870ea78e91f3bb1e37804d50e2f"} Jan 29 08:06:10 crc kubenswrapper[4826]: I0129 08:06:10.841648 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jvcb" event={"ID":"a6ec2c16-325c-4cf2-8d75-708fa085d899","Type":"ContainerStarted","Data":"3f6dc7162ed4f5bb74a802eb9c6868796185318683ba7640fcc521839572e18e"} Jan 29 08:06:12 crc 
kubenswrapper[4826]: I0129 08:06:12.865174 4826 generic.go:334] "Generic (PLEG): container finished" podID="a6ec2c16-325c-4cf2-8d75-708fa085d899" containerID="e018ce10cafd5f04851755508f4fe27657984dac6c0f541f1eeeff46e37b0961" exitCode=0 Jan 29 08:06:12 crc kubenswrapper[4826]: I0129 08:06:12.865287 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jvcb" event={"ID":"a6ec2c16-325c-4cf2-8d75-708fa085d899","Type":"ContainerDied","Data":"e018ce10cafd5f04851755508f4fe27657984dac6c0f541f1eeeff46e37b0961"} Jan 29 08:06:13 crc kubenswrapper[4826]: I0129 08:06:13.877991 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jvcb" event={"ID":"a6ec2c16-325c-4cf2-8d75-708fa085d899","Type":"ContainerStarted","Data":"b1760bb21c783d192feea7fbbbf1b8ba8c8ead663926db553a8e1697c2d3975d"} Jan 29 08:06:13 crc kubenswrapper[4826]: I0129 08:06:13.915073 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5jvcb" podStartSLOduration=2.361151005 podStartE2EDuration="4.915050963s" podCreationTimestamp="2026-01-29 08:06:09 +0000 UTC" firstStartedPulling="2026-01-29 08:06:10.844366595 +0000 UTC m=+4954.706159694" lastFinishedPulling="2026-01-29 08:06:13.398266553 +0000 UTC m=+4957.260059652" observedRunningTime="2026-01-29 08:06:13.907527555 +0000 UTC m=+4957.769320714" watchObservedRunningTime="2026-01-29 08:06:13.915050963 +0000 UTC m=+4957.776844042" Jan 29 08:06:19 crc kubenswrapper[4826]: I0129 08:06:19.580459 4826 scope.go:117] "RemoveContainer" containerID="f07f00c73960c8df45c1ade2a0c7eed70c15e5d7f67b649fa70a8788406a466c" Jan 29 08:06:19 crc kubenswrapper[4826]: I0129 08:06:19.888917 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5jvcb" Jan 29 08:06:19 crc kubenswrapper[4826]: I0129 08:06:19.889583 4826 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-5jvcb" Jan 29 08:06:19 crc kubenswrapper[4826]: I0129 08:06:19.960772 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5jvcb" Jan 29 08:06:21 crc kubenswrapper[4826]: I0129 08:06:21.013945 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5jvcb" Jan 29 08:06:21 crc kubenswrapper[4826]: I0129 08:06:21.085180 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5jvcb"] Jan 29 08:06:22 crc kubenswrapper[4826]: I0129 08:06:22.960964 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5jvcb" podUID="a6ec2c16-325c-4cf2-8d75-708fa085d899" containerName="registry-server" containerID="cri-o://b1760bb21c783d192feea7fbbbf1b8ba8c8ead663926db553a8e1697c2d3975d" gracePeriod=2 Jan 29 08:06:23 crc kubenswrapper[4826]: I0129 08:06:23.486205 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5jvcb" Jan 29 08:06:23 crc kubenswrapper[4826]: I0129 08:06:23.568236 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6ec2c16-325c-4cf2-8d75-708fa085d899-utilities\") pod \"a6ec2c16-325c-4cf2-8d75-708fa085d899\" (UID: \"a6ec2c16-325c-4cf2-8d75-708fa085d899\") " Jan 29 08:06:23 crc kubenswrapper[4826]: I0129 08:06:23.568415 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6ec2c16-325c-4cf2-8d75-708fa085d899-catalog-content\") pod \"a6ec2c16-325c-4cf2-8d75-708fa085d899\" (UID: \"a6ec2c16-325c-4cf2-8d75-708fa085d899\") " Jan 29 08:06:23 crc kubenswrapper[4826]: I0129 08:06:23.568463 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zqwm\" (UniqueName: \"kubernetes.io/projected/a6ec2c16-325c-4cf2-8d75-708fa085d899-kube-api-access-9zqwm\") pod \"a6ec2c16-325c-4cf2-8d75-708fa085d899\" (UID: \"a6ec2c16-325c-4cf2-8d75-708fa085d899\") " Jan 29 08:06:23 crc kubenswrapper[4826]: I0129 08:06:23.569715 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6ec2c16-325c-4cf2-8d75-708fa085d899-utilities" (OuterVolumeSpecName: "utilities") pod "a6ec2c16-325c-4cf2-8d75-708fa085d899" (UID: "a6ec2c16-325c-4cf2-8d75-708fa085d899"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:06:23 crc kubenswrapper[4826]: I0129 08:06:23.578372 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6ec2c16-325c-4cf2-8d75-708fa085d899-kube-api-access-9zqwm" (OuterVolumeSpecName: "kube-api-access-9zqwm") pod "a6ec2c16-325c-4cf2-8d75-708fa085d899" (UID: "a6ec2c16-325c-4cf2-8d75-708fa085d899"). InnerVolumeSpecName "kube-api-access-9zqwm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:06:23 crc kubenswrapper[4826]: I0129 08:06:23.648644 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6ec2c16-325c-4cf2-8d75-708fa085d899-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a6ec2c16-325c-4cf2-8d75-708fa085d899" (UID: "a6ec2c16-325c-4cf2-8d75-708fa085d899"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:06:23 crc kubenswrapper[4826]: I0129 08:06:23.671029 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6ec2c16-325c-4cf2-8d75-708fa085d899-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:06:23 crc kubenswrapper[4826]: I0129 08:06:23.671071 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zqwm\" (UniqueName: \"kubernetes.io/projected/a6ec2c16-325c-4cf2-8d75-708fa085d899-kube-api-access-9zqwm\") on node \"crc\" DevicePath \"\"" Jan 29 08:06:23 crc kubenswrapper[4826]: I0129 08:06:23.671086 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6ec2c16-325c-4cf2-8d75-708fa085d899-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:06:23 crc kubenswrapper[4826]: I0129 08:06:23.972941 4826 generic.go:334] "Generic (PLEG): container finished" podID="a6ec2c16-325c-4cf2-8d75-708fa085d899" containerID="b1760bb21c783d192feea7fbbbf1b8ba8c8ead663926db553a8e1697c2d3975d" exitCode=0 Jan 29 08:06:23 crc kubenswrapper[4826]: I0129 08:06:23.972995 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jvcb" event={"ID":"a6ec2c16-325c-4cf2-8d75-708fa085d899","Type":"ContainerDied","Data":"b1760bb21c783d192feea7fbbbf1b8ba8c8ead663926db553a8e1697c2d3975d"} Jan 29 08:06:23 crc kubenswrapper[4826]: I0129 08:06:23.973036 4826 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-5jvcb" event={"ID":"a6ec2c16-325c-4cf2-8d75-708fa085d899","Type":"ContainerDied","Data":"3f6dc7162ed4f5bb74a802eb9c6868796185318683ba7640fcc521839572e18e"} Jan 29 08:06:23 crc kubenswrapper[4826]: I0129 08:06:23.973065 4826 scope.go:117] "RemoveContainer" containerID="b1760bb21c783d192feea7fbbbf1b8ba8c8ead663926db553a8e1697c2d3975d" Jan 29 08:06:23 crc kubenswrapper[4826]: I0129 08:06:23.973092 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5jvcb" Jan 29 08:06:23 crc kubenswrapper[4826]: I0129 08:06:23.994454 4826 scope.go:117] "RemoveContainer" containerID="e018ce10cafd5f04851755508f4fe27657984dac6c0f541f1eeeff46e37b0961" Jan 29 08:06:24 crc kubenswrapper[4826]: I0129 08:06:24.035661 4826 scope.go:117] "RemoveContainer" containerID="92c64f577520ba75456eccf023c4b3770fb32870ea78e91f3bb1e37804d50e2f" Jan 29 08:06:24 crc kubenswrapper[4826]: I0129 08:06:24.038240 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5jvcb"] Jan 29 08:06:24 crc kubenswrapper[4826]: I0129 08:06:24.050817 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5jvcb"] Jan 29 08:06:24 crc kubenswrapper[4826]: I0129 08:06:24.073522 4826 scope.go:117] "RemoveContainer" containerID="b1760bb21c783d192feea7fbbbf1b8ba8c8ead663926db553a8e1697c2d3975d" Jan 29 08:06:24 crc kubenswrapper[4826]: E0129 08:06:24.074077 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1760bb21c783d192feea7fbbbf1b8ba8c8ead663926db553a8e1697c2d3975d\": container with ID starting with b1760bb21c783d192feea7fbbbf1b8ba8c8ead663926db553a8e1697c2d3975d not found: ID does not exist" containerID="b1760bb21c783d192feea7fbbbf1b8ba8c8ead663926db553a8e1697c2d3975d" Jan 29 08:06:24 crc kubenswrapper[4826]: I0129 
08:06:24.074123 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1760bb21c783d192feea7fbbbf1b8ba8c8ead663926db553a8e1697c2d3975d"} err="failed to get container status \"b1760bb21c783d192feea7fbbbf1b8ba8c8ead663926db553a8e1697c2d3975d\": rpc error: code = NotFound desc = could not find container \"b1760bb21c783d192feea7fbbbf1b8ba8c8ead663926db553a8e1697c2d3975d\": container with ID starting with b1760bb21c783d192feea7fbbbf1b8ba8c8ead663926db553a8e1697c2d3975d not found: ID does not exist" Jan 29 08:06:24 crc kubenswrapper[4826]: I0129 08:06:24.074156 4826 scope.go:117] "RemoveContainer" containerID="e018ce10cafd5f04851755508f4fe27657984dac6c0f541f1eeeff46e37b0961" Jan 29 08:06:24 crc kubenswrapper[4826]: E0129 08:06:24.074613 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e018ce10cafd5f04851755508f4fe27657984dac6c0f541f1eeeff46e37b0961\": container with ID starting with e018ce10cafd5f04851755508f4fe27657984dac6c0f541f1eeeff46e37b0961 not found: ID does not exist" containerID="e018ce10cafd5f04851755508f4fe27657984dac6c0f541f1eeeff46e37b0961" Jan 29 08:06:24 crc kubenswrapper[4826]: I0129 08:06:24.074657 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e018ce10cafd5f04851755508f4fe27657984dac6c0f541f1eeeff46e37b0961"} err="failed to get container status \"e018ce10cafd5f04851755508f4fe27657984dac6c0f541f1eeeff46e37b0961\": rpc error: code = NotFound desc = could not find container \"e018ce10cafd5f04851755508f4fe27657984dac6c0f541f1eeeff46e37b0961\": container with ID starting with e018ce10cafd5f04851755508f4fe27657984dac6c0f541f1eeeff46e37b0961 not found: ID does not exist" Jan 29 08:06:24 crc kubenswrapper[4826]: I0129 08:06:24.074682 4826 scope.go:117] "RemoveContainer" containerID="92c64f577520ba75456eccf023c4b3770fb32870ea78e91f3bb1e37804d50e2f" Jan 29 08:06:24 crc 
kubenswrapper[4826]: E0129 08:06:24.075131 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92c64f577520ba75456eccf023c4b3770fb32870ea78e91f3bb1e37804d50e2f\": container with ID starting with 92c64f577520ba75456eccf023c4b3770fb32870ea78e91f3bb1e37804d50e2f not found: ID does not exist" containerID="92c64f577520ba75456eccf023c4b3770fb32870ea78e91f3bb1e37804d50e2f" Jan 29 08:06:24 crc kubenswrapper[4826]: I0129 08:06:24.075160 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92c64f577520ba75456eccf023c4b3770fb32870ea78e91f3bb1e37804d50e2f"} err="failed to get container status \"92c64f577520ba75456eccf023c4b3770fb32870ea78e91f3bb1e37804d50e2f\": rpc error: code = NotFound desc = could not find container \"92c64f577520ba75456eccf023c4b3770fb32870ea78e91f3bb1e37804d50e2f\": container with ID starting with 92c64f577520ba75456eccf023c4b3770fb32870ea78e91f3bb1e37804d50e2f not found: ID does not exist" Jan 29 08:06:24 crc kubenswrapper[4826]: I0129 08:06:24.825528 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6ec2c16-325c-4cf2-8d75-708fa085d899" path="/var/lib/kubelet/pods/a6ec2c16-325c-4cf2-8d75-708fa085d899/volumes" Jan 29 08:07:24 crc kubenswrapper[4826]: I0129 08:07:24.623718 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Jan 29 08:07:24 crc kubenswrapper[4826]: E0129 08:07:24.625180 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6ec2c16-325c-4cf2-8d75-708fa085d899" containerName="extract-content" Jan 29 08:07:24 crc kubenswrapper[4826]: I0129 08:07:24.625207 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6ec2c16-325c-4cf2-8d75-708fa085d899" containerName="extract-content" Jan 29 08:07:24 crc kubenswrapper[4826]: E0129 08:07:24.625243 4826 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a6ec2c16-325c-4cf2-8d75-708fa085d899" containerName="extract-utilities" Jan 29 08:07:24 crc kubenswrapper[4826]: I0129 08:07:24.625259 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6ec2c16-325c-4cf2-8d75-708fa085d899" containerName="extract-utilities" Jan 29 08:07:24 crc kubenswrapper[4826]: E0129 08:07:24.625333 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6ec2c16-325c-4cf2-8d75-708fa085d899" containerName="registry-server" Jan 29 08:07:24 crc kubenswrapper[4826]: I0129 08:07:24.625351 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6ec2c16-325c-4cf2-8d75-708fa085d899" containerName="registry-server" Jan 29 08:07:24 crc kubenswrapper[4826]: I0129 08:07:24.625698 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6ec2c16-325c-4cf2-8d75-708fa085d899" containerName="registry-server" Jan 29 08:07:24 crc kubenswrapper[4826]: I0129 08:07:24.626893 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Jan 29 08:07:24 crc kubenswrapper[4826]: I0129 08:07:24.634783 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Jan 29 08:07:24 crc kubenswrapper[4826]: I0129 08:07:24.671545 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-qlrsf" Jan 29 08:07:24 crc kubenswrapper[4826]: I0129 08:07:24.771777 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fbf5e6ad-7b00-417e-b058-e0146e11439e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fbf5e6ad-7b00-417e-b058-e0146e11439e\") pod \"mariadb-copy-data\" (UID: \"3481317b-b919-4828-8880-1b5446b88adb\") " pod="openstack/mariadb-copy-data" Jan 29 08:07:24 crc kubenswrapper[4826]: I0129 08:07:24.771854 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8q76\" (UniqueName: 
\"kubernetes.io/projected/3481317b-b919-4828-8880-1b5446b88adb-kube-api-access-n8q76\") pod \"mariadb-copy-data\" (UID: \"3481317b-b919-4828-8880-1b5446b88adb\") " pod="openstack/mariadb-copy-data" Jan 29 08:07:24 crc kubenswrapper[4826]: I0129 08:07:24.874012 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-fbf5e6ad-7b00-417e-b058-e0146e11439e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fbf5e6ad-7b00-417e-b058-e0146e11439e\") pod \"mariadb-copy-data\" (UID: \"3481317b-b919-4828-8880-1b5446b88adb\") " pod="openstack/mariadb-copy-data" Jan 29 08:07:24 crc kubenswrapper[4826]: I0129 08:07:24.874136 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8q76\" (UniqueName: \"kubernetes.io/projected/3481317b-b919-4828-8880-1b5446b88adb-kube-api-access-n8q76\") pod \"mariadb-copy-data\" (UID: \"3481317b-b919-4828-8880-1b5446b88adb\") " pod="openstack/mariadb-copy-data" Jan 29 08:07:24 crc kubenswrapper[4826]: I0129 08:07:24.878268 4826 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 29 08:07:24 crc kubenswrapper[4826]: I0129 08:07:24.878366 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fbf5e6ad-7b00-417e-b058-e0146e11439e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fbf5e6ad-7b00-417e-b058-e0146e11439e\") pod \"mariadb-copy-data\" (UID: \"3481317b-b919-4828-8880-1b5446b88adb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6f95687b49d2db035222f155fd0cf00c8d44b90fda0572eb673a533ee9156b2a/globalmount\"" pod="openstack/mariadb-copy-data" Jan 29 08:07:24 crc kubenswrapper[4826]: I0129 08:07:24.899967 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8q76\" (UniqueName: \"kubernetes.io/projected/3481317b-b919-4828-8880-1b5446b88adb-kube-api-access-n8q76\") pod \"mariadb-copy-data\" (UID: \"3481317b-b919-4828-8880-1b5446b88adb\") " pod="openstack/mariadb-copy-data" Jan 29 08:07:24 crc kubenswrapper[4826]: I0129 08:07:24.936550 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-fbf5e6ad-7b00-417e-b058-e0146e11439e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fbf5e6ad-7b00-417e-b058-e0146e11439e\") pod \"mariadb-copy-data\" (UID: \"3481317b-b919-4828-8880-1b5446b88adb\") " pod="openstack/mariadb-copy-data" Jan 29 08:07:24 crc kubenswrapper[4826]: I0129 08:07:24.993005 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Jan 29 08:07:25 crc kubenswrapper[4826]: I0129 08:07:25.565519 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Jan 29 08:07:26 crc kubenswrapper[4826]: I0129 08:07:26.584206 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"3481317b-b919-4828-8880-1b5446b88adb","Type":"ContainerStarted","Data":"1ffb4254a5ed7663a653716ce73b15da9075226ab195f9b2d15f6c724fcd86d1"} Jan 29 08:07:26 crc kubenswrapper[4826]: I0129 08:07:26.584500 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"3481317b-b919-4828-8880-1b5446b88adb","Type":"ContainerStarted","Data":"16ff19e51cd5430c96e77fb2877d7572df08d4b87026c81689d565ed6abcbd2d"} Jan 29 08:07:29 crc kubenswrapper[4826]: I0129 08:07:29.500457 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=6.500425696 podStartE2EDuration="6.500425696s" podCreationTimestamp="2026-01-29 08:07:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:07:26.603831154 +0000 UTC m=+5030.465624263" watchObservedRunningTime="2026-01-29 08:07:29.500425696 +0000 UTC m=+5033.362218785" Jan 29 08:07:29 crc kubenswrapper[4826]: I0129 08:07:29.506093 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 29 08:07:29 crc kubenswrapper[4826]: I0129 08:07:29.508084 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 29 08:07:29 crc kubenswrapper[4826]: I0129 08:07:29.514471 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 29 08:07:29 crc kubenswrapper[4826]: I0129 08:07:29.671083 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jn9m\" (UniqueName: \"kubernetes.io/projected/28da450c-c4df-41ed-b5b8-bcc952c15b9a-kube-api-access-5jn9m\") pod \"mariadb-client\" (UID: \"28da450c-c4df-41ed-b5b8-bcc952c15b9a\") " pod="openstack/mariadb-client" Jan 29 08:07:29 crc kubenswrapper[4826]: I0129 08:07:29.774451 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jn9m\" (UniqueName: \"kubernetes.io/projected/28da450c-c4df-41ed-b5b8-bcc952c15b9a-kube-api-access-5jn9m\") pod \"mariadb-client\" (UID: \"28da450c-c4df-41ed-b5b8-bcc952c15b9a\") " pod="openstack/mariadb-client" Jan 29 08:07:29 crc kubenswrapper[4826]: I0129 08:07:29.813760 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jn9m\" (UniqueName: \"kubernetes.io/projected/28da450c-c4df-41ed-b5b8-bcc952c15b9a-kube-api-access-5jn9m\") pod \"mariadb-client\" (UID: \"28da450c-c4df-41ed-b5b8-bcc952c15b9a\") " pod="openstack/mariadb-client" Jan 29 08:07:29 crc kubenswrapper[4826]: I0129 08:07:29.842678 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 29 08:07:30 crc kubenswrapper[4826]: I0129 08:07:30.381266 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 29 08:07:30 crc kubenswrapper[4826]: I0129 08:07:30.613947 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"28da450c-c4df-41ed-b5b8-bcc952c15b9a","Type":"ContainerStarted","Data":"2bc8c6e0a31ee7f1e103e5b9144e4f2fac7a5fc45b89547ed9b58d0bdde92f0d"} Jan 29 08:07:31 crc kubenswrapper[4826]: I0129 08:07:31.624910 4826 generic.go:334] "Generic (PLEG): container finished" podID="28da450c-c4df-41ed-b5b8-bcc952c15b9a" containerID="0502a74beb4fb58419f4c87f0c60c4a997163bf3aed17009285ddadcfeb7955d" exitCode=0 Jan 29 08:07:31 crc kubenswrapper[4826]: I0129 08:07:31.624966 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"28da450c-c4df-41ed-b5b8-bcc952c15b9a","Type":"ContainerDied","Data":"0502a74beb4fb58419f4c87f0c60c4a997163bf3aed17009285ddadcfeb7955d"} Jan 29 08:07:33 crc kubenswrapper[4826]: I0129 08:07:33.088053 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 29 08:07:33 crc kubenswrapper[4826]: I0129 08:07:33.119974 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_28da450c-c4df-41ed-b5b8-bcc952c15b9a/mariadb-client/0.log" Jan 29 08:07:33 crc kubenswrapper[4826]: I0129 08:07:33.153727 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 29 08:07:33 crc kubenswrapper[4826]: I0129 08:07:33.167121 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 29 08:07:33 crc kubenswrapper[4826]: I0129 08:07:33.231993 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jn9m\" (UniqueName: \"kubernetes.io/projected/28da450c-c4df-41ed-b5b8-bcc952c15b9a-kube-api-access-5jn9m\") pod \"28da450c-c4df-41ed-b5b8-bcc952c15b9a\" (UID: \"28da450c-c4df-41ed-b5b8-bcc952c15b9a\") " Jan 29 08:07:33 crc kubenswrapper[4826]: I0129 08:07:33.238367 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28da450c-c4df-41ed-b5b8-bcc952c15b9a-kube-api-access-5jn9m" (OuterVolumeSpecName: "kube-api-access-5jn9m") pod "28da450c-c4df-41ed-b5b8-bcc952c15b9a" (UID: "28da450c-c4df-41ed-b5b8-bcc952c15b9a"). InnerVolumeSpecName "kube-api-access-5jn9m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:07:33 crc kubenswrapper[4826]: I0129 08:07:33.310200 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 29 08:07:33 crc kubenswrapper[4826]: E0129 08:07:33.310639 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28da450c-c4df-41ed-b5b8-bcc952c15b9a" containerName="mariadb-client" Jan 29 08:07:33 crc kubenswrapper[4826]: I0129 08:07:33.310655 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="28da450c-c4df-41ed-b5b8-bcc952c15b9a" containerName="mariadb-client" Jan 29 08:07:33 crc kubenswrapper[4826]: I0129 08:07:33.310827 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="28da450c-c4df-41ed-b5b8-bcc952c15b9a" containerName="mariadb-client" Jan 29 08:07:33 crc kubenswrapper[4826]: I0129 08:07:33.311479 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 29 08:07:33 crc kubenswrapper[4826]: I0129 08:07:33.323360 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 29 08:07:33 crc kubenswrapper[4826]: I0129 08:07:33.334664 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jn9m\" (UniqueName: \"kubernetes.io/projected/28da450c-c4df-41ed-b5b8-bcc952c15b9a-kube-api-access-5jn9m\") on node \"crc\" DevicePath \"\"" Jan 29 08:07:33 crc kubenswrapper[4826]: I0129 08:07:33.437053 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxmnp\" (UniqueName: \"kubernetes.io/projected/b3fa555a-9034-493a-8c52-fb26c7e33182-kube-api-access-sxmnp\") pod \"mariadb-client\" (UID: \"b3fa555a-9034-493a-8c52-fb26c7e33182\") " pod="openstack/mariadb-client" Jan 29 08:07:33 crc kubenswrapper[4826]: I0129 08:07:33.539072 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxmnp\" (UniqueName: 
\"kubernetes.io/projected/b3fa555a-9034-493a-8c52-fb26c7e33182-kube-api-access-sxmnp\") pod \"mariadb-client\" (UID: \"b3fa555a-9034-493a-8c52-fb26c7e33182\") " pod="openstack/mariadb-client" Jan 29 08:07:33 crc kubenswrapper[4826]: I0129 08:07:33.569814 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxmnp\" (UniqueName: \"kubernetes.io/projected/b3fa555a-9034-493a-8c52-fb26c7e33182-kube-api-access-sxmnp\") pod \"mariadb-client\" (UID: \"b3fa555a-9034-493a-8c52-fb26c7e33182\") " pod="openstack/mariadb-client" Jan 29 08:07:33 crc kubenswrapper[4826]: I0129 08:07:33.636407 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 29 08:07:33 crc kubenswrapper[4826]: I0129 08:07:33.665260 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bc8c6e0a31ee7f1e103e5b9144e4f2fac7a5fc45b89547ed9b58d0bdde92f0d" Jan 29 08:07:33 crc kubenswrapper[4826]: I0129 08:07:33.665364 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 29 08:07:33 crc kubenswrapper[4826]: I0129 08:07:33.729785 4826 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="28da450c-c4df-41ed-b5b8-bcc952c15b9a" podUID="b3fa555a-9034-493a-8c52-fb26c7e33182" Jan 29 08:07:34 crc kubenswrapper[4826]: I0129 08:07:34.159080 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 29 08:07:34 crc kubenswrapper[4826]: W0129 08:07:34.164664 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3fa555a_9034_493a_8c52_fb26c7e33182.slice/crio-5bd886c5e4d582c0ea60e6b40ab8f6b4313220ccefbe19fff3036001443e95ef WatchSource:0}: Error finding container 5bd886c5e4d582c0ea60e6b40ab8f6b4313220ccefbe19fff3036001443e95ef: Status 404 returned error can't find the container with id 5bd886c5e4d582c0ea60e6b40ab8f6b4313220ccefbe19fff3036001443e95ef Jan 29 08:07:34 crc kubenswrapper[4826]: I0129 08:07:34.674011 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"b3fa555a-9034-493a-8c52-fb26c7e33182","Type":"ContainerStarted","Data":"5fbcb597f025be2341218b893491ab3d466e5dfaa0e751c7ab941414bc19b8e4"} Jan 29 08:07:34 crc kubenswrapper[4826]: I0129 08:07:34.674058 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"b3fa555a-9034-493a-8c52-fb26c7e33182","Type":"ContainerStarted","Data":"5bd886c5e4d582c0ea60e6b40ab8f6b4313220ccefbe19fff3036001443e95ef"} Jan 29 08:07:34 crc kubenswrapper[4826]: I0129 08:07:34.741121 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_b3fa555a-9034-493a-8c52-fb26c7e33182/mariadb-client/0.log" Jan 29 08:07:34 crc kubenswrapper[4826]: I0129 08:07:34.818203 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="28da450c-c4df-41ed-b5b8-bcc952c15b9a" path="/var/lib/kubelet/pods/28da450c-c4df-41ed-b5b8-bcc952c15b9a/volumes" Jan 29 08:07:35 crc kubenswrapper[4826]: I0129 08:07:35.688045 4826 generic.go:334] "Generic (PLEG): container finished" podID="b3fa555a-9034-493a-8c52-fb26c7e33182" containerID="5fbcb597f025be2341218b893491ab3d466e5dfaa0e751c7ab941414bc19b8e4" exitCode=0 Jan 29 08:07:35 crc kubenswrapper[4826]: I0129 08:07:35.688120 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"b3fa555a-9034-493a-8c52-fb26c7e33182","Type":"ContainerDied","Data":"5fbcb597f025be2341218b893491ab3d466e5dfaa0e751c7ab941414bc19b8e4"} Jan 29 08:07:37 crc kubenswrapper[4826]: I0129 08:07:37.068974 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 29 08:07:37 crc kubenswrapper[4826]: I0129 08:07:37.123008 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 29 08:07:37 crc kubenswrapper[4826]: I0129 08:07:37.132587 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 29 08:07:37 crc kubenswrapper[4826]: I0129 08:07:37.202279 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxmnp\" (UniqueName: \"kubernetes.io/projected/b3fa555a-9034-493a-8c52-fb26c7e33182-kube-api-access-sxmnp\") pod \"b3fa555a-9034-493a-8c52-fb26c7e33182\" (UID: \"b3fa555a-9034-493a-8c52-fb26c7e33182\") " Jan 29 08:07:37 crc kubenswrapper[4826]: I0129 08:07:37.211556 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3fa555a-9034-493a-8c52-fb26c7e33182-kube-api-access-sxmnp" (OuterVolumeSpecName: "kube-api-access-sxmnp") pod "b3fa555a-9034-493a-8c52-fb26c7e33182" (UID: "b3fa555a-9034-493a-8c52-fb26c7e33182"). InnerVolumeSpecName "kube-api-access-sxmnp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:07:37 crc kubenswrapper[4826]: I0129 08:07:37.304100 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxmnp\" (UniqueName: \"kubernetes.io/projected/b3fa555a-9034-493a-8c52-fb26c7e33182-kube-api-access-sxmnp\") on node \"crc\" DevicePath \"\"" Jan 29 08:07:37 crc kubenswrapper[4826]: I0129 08:07:37.707594 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bd886c5e4d582c0ea60e6b40ab8f6b4313220ccefbe19fff3036001443e95ef" Jan 29 08:07:37 crc kubenswrapper[4826]: I0129 08:07:37.707682 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 29 08:07:38 crc kubenswrapper[4826]: I0129 08:07:38.818279 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3fa555a-9034-493a-8c52-fb26c7e33182" path="/var/lib/kubelet/pods/b3fa555a-9034-493a-8c52-fb26c7e33182/volumes" Jan 29 08:08:13 crc kubenswrapper[4826]: I0129 08:08:13.833093 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 08:08:13 crc kubenswrapper[4826]: E0129 08:08:13.834949 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3fa555a-9034-493a-8c52-fb26c7e33182" containerName="mariadb-client" Jan 29 08:08:13 crc kubenswrapper[4826]: I0129 08:08:13.835049 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3fa555a-9034-493a-8c52-fb26c7e33182" containerName="mariadb-client" Jan 29 08:08:13 crc kubenswrapper[4826]: I0129 08:08:13.835321 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3fa555a-9034-493a-8c52-fb26c7e33182" containerName="mariadb-client" Jan 29 08:08:13 crc kubenswrapper[4826]: I0129 08:08:13.836283 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 29 08:08:13 crc kubenswrapper[4826]: W0129 08:08:13.838446 4826 reflector.go:561] object-"openstack"/"ovndbcluster-nb-scripts": failed to list *v1.ConfigMap: configmaps "ovndbcluster-nb-scripts" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 29 08:08:13 crc kubenswrapper[4826]: E0129 08:08:13.838501 4826 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovndbcluster-nb-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovndbcluster-nb-scripts\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 08:08:13 crc kubenswrapper[4826]: W0129 08:08:13.838550 4826 reflector.go:561] object-"openstack"/"cert-ovn-metrics": failed to list *v1.Secret: secrets "cert-ovn-metrics" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 29 08:08:13 crc kubenswrapper[4826]: E0129 08:08:13.838565 4826 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-ovn-metrics\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cert-ovn-metrics\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 08:08:13 crc kubenswrapper[4826]: W0129 08:08:13.839355 4826 reflector.go:561] object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-cdz4j": failed to list *v1.Secret: secrets "ovncluster-ovndbcluster-nb-dockercfg-cdz4j" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group 
"" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 29 08:08:13 crc kubenswrapper[4826]: E0129 08:08:13.839481 4826 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovncluster-ovndbcluster-nb-dockercfg-cdz4j\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovncluster-ovndbcluster-nb-dockercfg-cdz4j\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 08:08:13 crc kubenswrapper[4826]: W0129 08:08:13.839434 4826 reflector.go:561] object-"openstack"/"cert-ovndbcluster-nb-ovndbs": failed to list *v1.Secret: secrets "cert-ovndbcluster-nb-ovndbs" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 29 08:08:13 crc kubenswrapper[4826]: E0129 08:08:13.839669 4826 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-ovndbcluster-nb-ovndbs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cert-ovndbcluster-nb-ovndbs\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 08:08:13 crc kubenswrapper[4826]: I0129 08:08:13.840978 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 29 08:08:13 crc kubenswrapper[4826]: I0129 08:08:13.855681 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 08:08:13 crc kubenswrapper[4826]: I0129 08:08:13.862680 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 29 08:08:13 crc kubenswrapper[4826]: I0129 08:08:13.864328 4826 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Jan 29 08:08:13 crc kubenswrapper[4826]: I0129 08:08:13.879639 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 29 08:08:13 crc kubenswrapper[4826]: I0129 08:08:13.881119 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Jan 29 08:08:13 crc kubenswrapper[4826]: I0129 08:08:13.903171 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 29 08:08:13 crc kubenswrapper[4826]: I0129 08:08:13.925053 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/948956a4-cb1c-4cb0-bb88-a749d1ab5990-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"948956a4-cb1c-4cb0-bb88-a749d1ab5990\") " pod="openstack/ovsdbserver-nb-2" Jan 29 08:08:13 crc kubenswrapper[4826]: I0129 08:08:13.925090 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/948956a4-cb1c-4cb0-bb88-a749d1ab5990-config\") pod \"ovsdbserver-nb-2\" (UID: \"948956a4-cb1c-4cb0-bb88-a749d1ab5990\") " pod="openstack/ovsdbserver-nb-2" Jan 29 08:08:13 crc kubenswrapper[4826]: I0129 08:08:13.925130 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/948956a4-cb1c-4cb0-bb88-a749d1ab5990-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"948956a4-cb1c-4cb0-bb88-a749d1ab5990\") " pod="openstack/ovsdbserver-nb-2" Jan 29 08:08:13 crc kubenswrapper[4826]: I0129 08:08:13.925154 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948956a4-cb1c-4cb0-bb88-a749d1ab5990-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: 
\"948956a4-cb1c-4cb0-bb88-a749d1ab5990\") " pod="openstack/ovsdbserver-nb-2" Jan 29 08:08:13 crc kubenswrapper[4826]: I0129 08:08:13.925335 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-afd16f2c-75ba-42ba-853d-e5f20c9aa8e6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-afd16f2c-75ba-42ba-853d-e5f20c9aa8e6\") pod \"ovsdbserver-nb-2\" (UID: \"948956a4-cb1c-4cb0-bb88-a749d1ab5990\") " pod="openstack/ovsdbserver-nb-2" Jan 29 08:08:13 crc kubenswrapper[4826]: I0129 08:08:13.925407 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/948956a4-cb1c-4cb0-bb88-a749d1ab5990-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"948956a4-cb1c-4cb0-bb88-a749d1ab5990\") " pod="openstack/ovsdbserver-nb-2" Jan 29 08:08:13 crc kubenswrapper[4826]: I0129 08:08:13.925447 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/948956a4-cb1c-4cb0-bb88-a749d1ab5990-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"948956a4-cb1c-4cb0-bb88-a749d1ab5990\") " pod="openstack/ovsdbserver-nb-2" Jan 29 08:08:13 crc kubenswrapper[4826]: I0129 08:08:13.925641 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlk2p\" (UniqueName: \"kubernetes.io/projected/948956a4-cb1c-4cb0-bb88-a749d1ab5990-kube-api-access-zlk2p\") pod \"ovsdbserver-nb-2\" (UID: \"948956a4-cb1c-4cb0-bb88-a749d1ab5990\") " pod="openstack/ovsdbserver-nb-2" Jan 29 08:08:13 crc kubenswrapper[4826]: I0129 08:08:13.930945 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.027146 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/26b5252c-28c5-44b9-a17f-6afb24926978-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"26b5252c-28c5-44b9-a17f-6afb24926978\") " pod="openstack/ovsdbserver-nb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.027196 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlk2p\" (UniqueName: \"kubernetes.io/projected/948956a4-cb1c-4cb0-bb88-a749d1ab5990-kube-api-access-zlk2p\") pod \"ovsdbserver-nb-2\" (UID: \"948956a4-cb1c-4cb0-bb88-a749d1ab5990\") " pod="openstack/ovsdbserver-nb-2" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.027225 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg9l2\" (UniqueName: \"kubernetes.io/projected/0277bfc7-7497-454e-a4d0-efd51c1c50a4-kube-api-access-cg9l2\") pod \"ovsdbserver-nb-0\" (UID: \"0277bfc7-7497-454e-a4d0-efd51c1c50a4\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.027243 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/948956a4-cb1c-4cb0-bb88-a749d1ab5990-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"948956a4-cb1c-4cb0-bb88-a749d1ab5990\") " pod="openstack/ovsdbserver-nb-2" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.027262 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0277bfc7-7497-454e-a4d0-efd51c1c50a4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0277bfc7-7497-454e-a4d0-efd51c1c50a4\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.027288 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/948956a4-cb1c-4cb0-bb88-a749d1ab5990-config\") pod \"ovsdbserver-nb-2\" 
(UID: \"948956a4-cb1c-4cb0-bb88-a749d1ab5990\") " pod="openstack/ovsdbserver-nb-2" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.027315 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/26b5252c-28c5-44b9-a17f-6afb24926978-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"26b5252c-28c5-44b9-a17f-6afb24926978\") " pod="openstack/ovsdbserver-nb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.027342 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vblfq\" (UniqueName: \"kubernetes.io/projected/26b5252c-28c5-44b9-a17f-6afb24926978-kube-api-access-vblfq\") pod \"ovsdbserver-nb-1\" (UID: \"26b5252c-28c5-44b9-a17f-6afb24926978\") " pod="openstack/ovsdbserver-nb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.027366 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26b5252c-28c5-44b9-a17f-6afb24926978-config\") pod \"ovsdbserver-nb-1\" (UID: \"26b5252c-28c5-44b9-a17f-6afb24926978\") " pod="openstack/ovsdbserver-nb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.027388 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/948956a4-cb1c-4cb0-bb88-a749d1ab5990-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"948956a4-cb1c-4cb0-bb88-a749d1ab5990\") " pod="openstack/ovsdbserver-nb-2" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.027404 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26b5252c-28c5-44b9-a17f-6afb24926978-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"26b5252c-28c5-44b9-a17f-6afb24926978\") " pod="openstack/ovsdbserver-nb-1" 
Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.027421 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0277bfc7-7497-454e-a4d0-efd51c1c50a4-config\") pod \"ovsdbserver-nb-0\" (UID: \"0277bfc7-7497-454e-a4d0-efd51c1c50a4\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.027443 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948956a4-cb1c-4cb0-bb88-a749d1ab5990-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"948956a4-cb1c-4cb0-bb88-a749d1ab5990\") " pod="openstack/ovsdbserver-nb-2" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.027459 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0277bfc7-7497-454e-a4d0-efd51c1c50a4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0277bfc7-7497-454e-a4d0-efd51c1c50a4\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.027477 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0277bfc7-7497-454e-a4d0-efd51c1c50a4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0277bfc7-7497-454e-a4d0-efd51c1c50a4\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.027506 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/26b5252c-28c5-44b9-a17f-6afb24926978-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"26b5252c-28c5-44b9-a17f-6afb24926978\") " pod="openstack/ovsdbserver-nb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.027525 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0277bfc7-7497-454e-a4d0-efd51c1c50a4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0277bfc7-7497-454e-a4d0-efd51c1c50a4\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.027551 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-afd16f2c-75ba-42ba-853d-e5f20c9aa8e6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-afd16f2c-75ba-42ba-853d-e5f20c9aa8e6\") pod \"ovsdbserver-nb-2\" (UID: \"948956a4-cb1c-4cb0-bb88-a749d1ab5990\") " pod="openstack/ovsdbserver-nb-2" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.027569 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0277bfc7-7497-454e-a4d0-efd51c1c50a4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0277bfc7-7497-454e-a4d0-efd51c1c50a4\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.027584 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/26b5252c-28c5-44b9-a17f-6afb24926978-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"26b5252c-28c5-44b9-a17f-6afb24926978\") " pod="openstack/ovsdbserver-nb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.027605 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-23e4a609-87a5-48a3-a92a-06949321873d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23e4a609-87a5-48a3-a92a-06949321873d\") pod \"ovsdbserver-nb-0\" (UID: \"0277bfc7-7497-454e-a4d0-efd51c1c50a4\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.027621 4826 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/948956a4-cb1c-4cb0-bb88-a749d1ab5990-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"948956a4-cb1c-4cb0-bb88-a749d1ab5990\") " pod="openstack/ovsdbserver-nb-2" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.027637 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/948956a4-cb1c-4cb0-bb88-a749d1ab5990-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"948956a4-cb1c-4cb0-bb88-a749d1ab5990\") " pod="openstack/ovsdbserver-nb-2" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.027658 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-665b5dc2-c5bd-49ab-b7ab-da76b496ac43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-665b5dc2-c5bd-49ab-b7ab-da76b496ac43\") pod \"ovsdbserver-nb-1\" (UID: \"26b5252c-28c5-44b9-a17f-6afb24926978\") " pod="openstack/ovsdbserver-nb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.028491 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/948956a4-cb1c-4cb0-bb88-a749d1ab5990-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"948956a4-cb1c-4cb0-bb88-a749d1ab5990\") " pod="openstack/ovsdbserver-nb-2" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.030384 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/948956a4-cb1c-4cb0-bb88-a749d1ab5990-config\") pod \"ovsdbserver-nb-2\" (UID: \"948956a4-cb1c-4cb0-bb88-a749d1ab5990\") " pod="openstack/ovsdbserver-nb-2" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.031323 4826 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.031422 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-afd16f2c-75ba-42ba-853d-e5f20c9aa8e6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-afd16f2c-75ba-42ba-853d-e5f20c9aa8e6\") pod \"ovsdbserver-nb-2\" (UID: \"948956a4-cb1c-4cb0-bb88-a749d1ab5990\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/36bb33959b7c337ebbfae378359b68543f0c048008d92302e4af61c9fa33b92b/globalmount\"" pod="openstack/ovsdbserver-nb-2" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.041459 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948956a4-cb1c-4cb0-bb88-a749d1ab5990-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"948956a4-cb1c-4cb0-bb88-a749d1ab5990\") " pod="openstack/ovsdbserver-nb-2" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.054628 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlk2p\" (UniqueName: \"kubernetes.io/projected/948956a4-cb1c-4cb0-bb88-a749d1ab5990-kube-api-access-zlk2p\") pod \"ovsdbserver-nb-2\" (UID: \"948956a4-cb1c-4cb0-bb88-a749d1ab5990\") " pod="openstack/ovsdbserver-nb-2" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.060649 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-afd16f2c-75ba-42ba-853d-e5f20c9aa8e6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-afd16f2c-75ba-42ba-853d-e5f20c9aa8e6\") pod \"ovsdbserver-nb-2\" (UID: \"948956a4-cb1c-4cb0-bb88-a749d1ab5990\") " pod="openstack/ovsdbserver-nb-2" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.129638 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26b5252c-28c5-44b9-a17f-6afb24926978-config\") pod \"ovsdbserver-nb-1\" (UID: 
\"26b5252c-28c5-44b9-a17f-6afb24926978\") " pod="openstack/ovsdbserver-nb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.129715 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26b5252c-28c5-44b9-a17f-6afb24926978-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"26b5252c-28c5-44b9-a17f-6afb24926978\") " pod="openstack/ovsdbserver-nb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.129770 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0277bfc7-7497-454e-a4d0-efd51c1c50a4-config\") pod \"ovsdbserver-nb-0\" (UID: \"0277bfc7-7497-454e-a4d0-efd51c1c50a4\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.129817 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0277bfc7-7497-454e-a4d0-efd51c1c50a4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0277bfc7-7497-454e-a4d0-efd51c1c50a4\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.129848 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0277bfc7-7497-454e-a4d0-efd51c1c50a4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0277bfc7-7497-454e-a4d0-efd51c1c50a4\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.129880 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/26b5252c-28c5-44b9-a17f-6afb24926978-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"26b5252c-28c5-44b9-a17f-6afb24926978\") " pod="openstack/ovsdbserver-nb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.129926 4826 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0277bfc7-7497-454e-a4d0-efd51c1c50a4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0277bfc7-7497-454e-a4d0-efd51c1c50a4\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.129966 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0277bfc7-7497-454e-a4d0-efd51c1c50a4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0277bfc7-7497-454e-a4d0-efd51c1c50a4\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.129994 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/26b5252c-28c5-44b9-a17f-6afb24926978-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"26b5252c-28c5-44b9-a17f-6afb24926978\") " pod="openstack/ovsdbserver-nb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.130036 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-23e4a609-87a5-48a3-a92a-06949321873d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23e4a609-87a5-48a3-a92a-06949321873d\") pod \"ovsdbserver-nb-0\" (UID: \"0277bfc7-7497-454e-a4d0-efd51c1c50a4\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.130120 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-665b5dc2-c5bd-49ab-b7ab-da76b496ac43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-665b5dc2-c5bd-49ab-b7ab-da76b496ac43\") pod \"ovsdbserver-nb-1\" (UID: \"26b5252c-28c5-44b9-a17f-6afb24926978\") " pod="openstack/ovsdbserver-nb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.130216 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/26b5252c-28c5-44b9-a17f-6afb24926978-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"26b5252c-28c5-44b9-a17f-6afb24926978\") " pod="openstack/ovsdbserver-nb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.130282 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg9l2\" (UniqueName: \"kubernetes.io/projected/0277bfc7-7497-454e-a4d0-efd51c1c50a4-kube-api-access-cg9l2\") pod \"ovsdbserver-nb-0\" (UID: \"0277bfc7-7497-454e-a4d0-efd51c1c50a4\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.130380 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0277bfc7-7497-454e-a4d0-efd51c1c50a4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0277bfc7-7497-454e-a4d0-efd51c1c50a4\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.130449 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/26b5252c-28c5-44b9-a17f-6afb24926978-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"26b5252c-28c5-44b9-a17f-6afb24926978\") " pod="openstack/ovsdbserver-nb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.130510 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vblfq\" (UniqueName: \"kubernetes.io/projected/26b5252c-28c5-44b9-a17f-6afb24926978-kube-api-access-vblfq\") pod \"ovsdbserver-nb-1\" (UID: \"26b5252c-28c5-44b9-a17f-6afb24926978\") " pod="openstack/ovsdbserver-nb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.131094 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26b5252c-28c5-44b9-a17f-6afb24926978-config\") pod \"ovsdbserver-nb-1\" (UID: 
\"26b5252c-28c5-44b9-a17f-6afb24926978\") " pod="openstack/ovsdbserver-nb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.132584 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0277bfc7-7497-454e-a4d0-efd51c1c50a4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0277bfc7-7497-454e-a4d0-efd51c1c50a4\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.132970 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/26b5252c-28c5-44b9-a17f-6afb24926978-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"26b5252c-28c5-44b9-a17f-6afb24926978\") " pod="openstack/ovsdbserver-nb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.133655 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0277bfc7-7497-454e-a4d0-efd51c1c50a4-config\") pod \"ovsdbserver-nb-0\" (UID: \"0277bfc7-7497-454e-a4d0-efd51c1c50a4\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.133781 4826 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.133844 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-665b5dc2-c5bd-49ab-b7ab-da76b496ac43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-665b5dc2-c5bd-49ab-b7ab-da76b496ac43\") pod \"ovsdbserver-nb-1\" (UID: \"26b5252c-28c5-44b9-a17f-6afb24926978\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4e5ec6e75d4e56719c220c91621f36ab98580d5d3241a0a95a34e1581a727df6/globalmount\"" pod="openstack/ovsdbserver-nb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.134020 4826 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.134095 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-23e4a609-87a5-48a3-a92a-06949321873d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23e4a609-87a5-48a3-a92a-06949321873d\") pod \"ovsdbserver-nb-0\" (UID: \"0277bfc7-7497-454e-a4d0-efd51c1c50a4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2357e0f84338d557e3fbda0a4c73ba0b55eecdca7a070639390894f89149df61/globalmount\"" pod="openstack/ovsdbserver-nb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.135001 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26b5252c-28c5-44b9-a17f-6afb24926978-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"26b5252c-28c5-44b9-a17f-6afb24926978\") " pod="openstack/ovsdbserver-nb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.136586 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0277bfc7-7497-454e-a4d0-efd51c1c50a4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: 
\"0277bfc7-7497-454e-a4d0-efd51c1c50a4\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.168286 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vblfq\" (UniqueName: \"kubernetes.io/projected/26b5252c-28c5-44b9-a17f-6afb24926978-kube-api-access-vblfq\") pod \"ovsdbserver-nb-1\" (UID: \"26b5252c-28c5-44b9-a17f-6afb24926978\") " pod="openstack/ovsdbserver-nb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.182317 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg9l2\" (UniqueName: \"kubernetes.io/projected/0277bfc7-7497-454e-a4d0-efd51c1c50a4-kube-api-access-cg9l2\") pod \"ovsdbserver-nb-0\" (UID: \"0277bfc7-7497-454e-a4d0-efd51c1c50a4\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.212252 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-23e4a609-87a5-48a3-a92a-06949321873d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23e4a609-87a5-48a3-a92a-06949321873d\") pod \"ovsdbserver-nb-0\" (UID: \"0277bfc7-7497-454e-a4d0-efd51c1c50a4\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.294833 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-665b5dc2-c5bd-49ab-b7ab-da76b496ac43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-665b5dc2-c5bd-49ab-b7ab-da76b496ac43\") pod \"ovsdbserver-nb-1\" (UID: \"26b5252c-28c5-44b9-a17f-6afb24926978\") " pod="openstack/ovsdbserver-nb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.391883 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.393107 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.394664 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.394955 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.395328 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.395945 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-xdpmq" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.426613 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.456667 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.458360 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.469916 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.471507 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.477001 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.483108 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.540924 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/eac3a17b-f262-4a99-a017-bdb7c57da317-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"eac3a17b-f262-4a99-a017-bdb7c57da317\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.541185 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/eac3a17b-f262-4a99-a017-bdb7c57da317-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"eac3a17b-f262-4a99-a017-bdb7c57da317\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.541291 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eac3a17b-f262-4a99-a017-bdb7c57da317-config\") pod \"ovsdbserver-sb-0\" (UID: \"eac3a17b-f262-4a99-a017-bdb7c57da317\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.541405 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eac3a17b-f262-4a99-a017-bdb7c57da317-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"eac3a17b-f262-4a99-a017-bdb7c57da317\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.541538 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eac3a17b-f262-4a99-a017-bdb7c57da317-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"eac3a17b-f262-4a99-a017-bdb7c57da317\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.541608 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8gq7\" (UniqueName: \"kubernetes.io/projected/eac3a17b-f262-4a99-a017-bdb7c57da317-kube-api-access-f8gq7\") pod \"ovsdbserver-sb-0\" (UID: \"eac3a17b-f262-4a99-a017-bdb7c57da317\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.541660 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b41fdf01-b7ca-408e-bdac-724bf462a15b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b41fdf01-b7ca-408e-bdac-724bf462a15b\") pod \"ovsdbserver-sb-0\" (UID: \"eac3a17b-f262-4a99-a017-bdb7c57da317\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.541719 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eac3a17b-f262-4a99-a017-bdb7c57da317-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"eac3a17b-f262-4a99-a017-bdb7c57da317\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.642834 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bb5c8e6-d9f0-45e3-9023-665dc8b3b323-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"9bb5c8e6-d9f0-45e3-9023-665dc8b3b323\") " pod="openstack/ovsdbserver-sb-2" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.642900 4826 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6a63471-debe-4dc8-8eeb-8e9b115aef32-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"f6a63471-debe-4dc8-8eeb-8e9b115aef32\") " pod="openstack/ovsdbserver-sb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.642939 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fb27\" (UniqueName: \"kubernetes.io/projected/9bb5c8e6-d9f0-45e3-9023-665dc8b3b323-kube-api-access-4fb27\") pod \"ovsdbserver-sb-2\" (UID: \"9bb5c8e6-d9f0-45e3-9023-665dc8b3b323\") " pod="openstack/ovsdbserver-sb-2" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.642974 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6a63471-debe-4dc8-8eeb-8e9b115aef32-config\") pod \"ovsdbserver-sb-1\" (UID: \"f6a63471-debe-4dc8-8eeb-8e9b115aef32\") " pod="openstack/ovsdbserver-sb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.643020 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6a63471-debe-4dc8-8eeb-8e9b115aef32-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"f6a63471-debe-4dc8-8eeb-8e9b115aef32\") " pod="openstack/ovsdbserver-sb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.643065 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6a63471-debe-4dc8-8eeb-8e9b115aef32-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"f6a63471-debe-4dc8-8eeb-8e9b115aef32\") " pod="openstack/ovsdbserver-sb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.643126 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bb5c8e6-d9f0-45e3-9023-665dc8b3b323-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"9bb5c8e6-d9f0-45e3-9023-665dc8b3b323\") " pod="openstack/ovsdbserver-sb-2" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.643161 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/eac3a17b-f262-4a99-a017-bdb7c57da317-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"eac3a17b-f262-4a99-a017-bdb7c57da317\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.643247 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eac3a17b-f262-4a99-a017-bdb7c57da317-config\") pod \"ovsdbserver-sb-0\" (UID: \"eac3a17b-f262-4a99-a017-bdb7c57da317\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.643327 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eac3a17b-f262-4a99-a017-bdb7c57da317-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"eac3a17b-f262-4a99-a017-bdb7c57da317\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.643430 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f6a63471-debe-4dc8-8eeb-8e9b115aef32-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"f6a63471-debe-4dc8-8eeb-8e9b115aef32\") " pod="openstack/ovsdbserver-sb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.643485 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/9bb5c8e6-d9f0-45e3-9023-665dc8b3b323-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"9bb5c8e6-d9f0-45e3-9023-665dc8b3b323\") " pod="openstack/ovsdbserver-sb-2" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.643535 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eac3a17b-f262-4a99-a017-bdb7c57da317-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"eac3a17b-f262-4a99-a017-bdb7c57da317\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.643646 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb5c8e6-d9f0-45e3-9023-665dc8b3b323-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"9bb5c8e6-d9f0-45e3-9023-665dc8b3b323\") " pod="openstack/ovsdbserver-sb-2" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.643718 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8gq7\" (UniqueName: \"kubernetes.io/projected/eac3a17b-f262-4a99-a017-bdb7c57da317-kube-api-access-f8gq7\") pod \"ovsdbserver-sb-0\" (UID: \"eac3a17b-f262-4a99-a017-bdb7c57da317\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.643774 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9bb5c8e6-d9f0-45e3-9023-665dc8b3b323-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"9bb5c8e6-d9f0-45e3-9023-665dc8b3b323\") " pod="openstack/ovsdbserver-sb-2" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.643823 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b41fdf01-b7ca-408e-bdac-724bf462a15b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b41fdf01-b7ca-408e-bdac-724bf462a15b\") 
pod \"ovsdbserver-sb-0\" (UID: \"eac3a17b-f262-4a99-a017-bdb7c57da317\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.643880 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwhcv\" (UniqueName: \"kubernetes.io/projected/f6a63471-debe-4dc8-8eeb-8e9b115aef32-kube-api-access-dwhcv\") pod \"ovsdbserver-sb-1\" (UID: \"f6a63471-debe-4dc8-8eeb-8e9b115aef32\") " pod="openstack/ovsdbserver-sb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.643912 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f6a63471-debe-4dc8-8eeb-8e9b115aef32-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"f6a63471-debe-4dc8-8eeb-8e9b115aef32\") " pod="openstack/ovsdbserver-sb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.643964 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1f144c1b-5880-4039-8e41-48139f0d06fe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1f144c1b-5880-4039-8e41-48139f0d06fe\") pod \"ovsdbserver-sb-1\" (UID: \"f6a63471-debe-4dc8-8eeb-8e9b115aef32\") " pod="openstack/ovsdbserver-sb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.644001 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c64e0890-dec5-4db0-a545-8e4426849446\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c64e0890-dec5-4db0-a545-8e4426849446\") pod \"ovsdbserver-sb-2\" (UID: \"9bb5c8e6-d9f0-45e3-9023-665dc8b3b323\") " pod="openstack/ovsdbserver-sb-2" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.644035 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/eac3a17b-f262-4a99-a017-bdb7c57da317-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"eac3a17b-f262-4a99-a017-bdb7c57da317\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.644079 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bb5c8e6-d9f0-45e3-9023-665dc8b3b323-config\") pod \"ovsdbserver-sb-2\" (UID: \"9bb5c8e6-d9f0-45e3-9023-665dc8b3b323\") " pod="openstack/ovsdbserver-sb-2" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.644092 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eac3a17b-f262-4a99-a017-bdb7c57da317-config\") pod \"ovsdbserver-sb-0\" (UID: \"eac3a17b-f262-4a99-a017-bdb7c57da317\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.644113 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/eac3a17b-f262-4a99-a017-bdb7c57da317-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"eac3a17b-f262-4a99-a017-bdb7c57da317\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.645068 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/eac3a17b-f262-4a99-a017-bdb7c57da317-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"eac3a17b-f262-4a99-a017-bdb7c57da317\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.645488 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eac3a17b-f262-4a99-a017-bdb7c57da317-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"eac3a17b-f262-4a99-a017-bdb7c57da317\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:08:14 crc 
kubenswrapper[4826]: I0129 08:08:14.649742 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eac3a17b-f262-4a99-a017-bdb7c57da317-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"eac3a17b-f262-4a99-a017-bdb7c57da317\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.649985 4826 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.650021 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b41fdf01-b7ca-408e-bdac-724bf462a15b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b41fdf01-b7ca-408e-bdac-724bf462a15b\") pod \"ovsdbserver-sb-0\" (UID: \"eac3a17b-f262-4a99-a017-bdb7c57da317\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3ae1cb50022545557e90b935952cb83e64e42d22bfda6bb583f579717129d2a5/globalmount\"" pod="openstack/ovsdbserver-sb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.650392 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eac3a17b-f262-4a99-a017-bdb7c57da317-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"eac3a17b-f262-4a99-a017-bdb7c57da317\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.664482 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8gq7\" (UniqueName: \"kubernetes.io/projected/eac3a17b-f262-4a99-a017-bdb7c57da317-kube-api-access-f8gq7\") pod \"ovsdbserver-sb-0\" (UID: \"eac3a17b-f262-4a99-a017-bdb7c57da317\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.683361 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-b41fdf01-b7ca-408e-bdac-724bf462a15b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b41fdf01-b7ca-408e-bdac-724bf462a15b\") pod \"ovsdbserver-sb-0\" (UID: \"eac3a17b-f262-4a99-a017-bdb7c57da317\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.745334 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6a63471-debe-4dc8-8eeb-8e9b115aef32-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"f6a63471-debe-4dc8-8eeb-8e9b115aef32\") " pod="openstack/ovsdbserver-sb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.745421 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6a63471-debe-4dc8-8eeb-8e9b115aef32-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"f6a63471-debe-4dc8-8eeb-8e9b115aef32\") " pod="openstack/ovsdbserver-sb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.745473 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bb5c8e6-d9f0-45e3-9023-665dc8b3b323-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"9bb5c8e6-d9f0-45e3-9023-665dc8b3b323\") " pod="openstack/ovsdbserver-sb-2" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.745588 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f6a63471-debe-4dc8-8eeb-8e9b115aef32-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"f6a63471-debe-4dc8-8eeb-8e9b115aef32\") " pod="openstack/ovsdbserver-sb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.745624 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/9bb5c8e6-d9f0-45e3-9023-665dc8b3b323-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"9bb5c8e6-d9f0-45e3-9023-665dc8b3b323\") " pod="openstack/ovsdbserver-sb-2" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.745686 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb5c8e6-d9f0-45e3-9023-665dc8b3b323-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"9bb5c8e6-d9f0-45e3-9023-665dc8b3b323\") " pod="openstack/ovsdbserver-sb-2" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.745753 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9bb5c8e6-d9f0-45e3-9023-665dc8b3b323-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"9bb5c8e6-d9f0-45e3-9023-665dc8b3b323\") " pod="openstack/ovsdbserver-sb-2" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.745813 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwhcv\" (UniqueName: \"kubernetes.io/projected/f6a63471-debe-4dc8-8eeb-8e9b115aef32-kube-api-access-dwhcv\") pod \"ovsdbserver-sb-1\" (UID: \"f6a63471-debe-4dc8-8eeb-8e9b115aef32\") " pod="openstack/ovsdbserver-sb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.745845 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f6a63471-debe-4dc8-8eeb-8e9b115aef32-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"f6a63471-debe-4dc8-8eeb-8e9b115aef32\") " pod="openstack/ovsdbserver-sb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.745892 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1f144c1b-5880-4039-8e41-48139f0d06fe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1f144c1b-5880-4039-8e41-48139f0d06fe\") pod \"ovsdbserver-sb-1\" (UID: 
\"f6a63471-debe-4dc8-8eeb-8e9b115aef32\") " pod="openstack/ovsdbserver-sb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.745932 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c64e0890-dec5-4db0-a545-8e4426849446\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c64e0890-dec5-4db0-a545-8e4426849446\") pod \"ovsdbserver-sb-2\" (UID: \"9bb5c8e6-d9f0-45e3-9023-665dc8b3b323\") " pod="openstack/ovsdbserver-sb-2" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.745975 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bb5c8e6-d9f0-45e3-9023-665dc8b3b323-config\") pod \"ovsdbserver-sb-2\" (UID: \"9bb5c8e6-d9f0-45e3-9023-665dc8b3b323\") " pod="openstack/ovsdbserver-sb-2" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.746027 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bb5c8e6-d9f0-45e3-9023-665dc8b3b323-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"9bb5c8e6-d9f0-45e3-9023-665dc8b3b323\") " pod="openstack/ovsdbserver-sb-2" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.746063 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6a63471-debe-4dc8-8eeb-8e9b115aef32-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"f6a63471-debe-4dc8-8eeb-8e9b115aef32\") " pod="openstack/ovsdbserver-sb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.746098 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6a63471-debe-4dc8-8eeb-8e9b115aef32-config\") pod \"ovsdbserver-sb-1\" (UID: \"f6a63471-debe-4dc8-8eeb-8e9b115aef32\") " pod="openstack/ovsdbserver-sb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.746136 4826 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fb27\" (UniqueName: \"kubernetes.io/projected/9bb5c8e6-d9f0-45e3-9023-665dc8b3b323-kube-api-access-4fb27\") pod \"ovsdbserver-sb-2\" (UID: \"9bb5c8e6-d9f0-45e3-9023-665dc8b3b323\") " pod="openstack/ovsdbserver-sb-2" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.747980 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bb5c8e6-d9f0-45e3-9023-665dc8b3b323-config\") pod \"ovsdbserver-sb-2\" (UID: \"9bb5c8e6-d9f0-45e3-9023-665dc8b3b323\") " pod="openstack/ovsdbserver-sb-2" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.748665 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f6a63471-debe-4dc8-8eeb-8e9b115aef32-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"f6a63471-debe-4dc8-8eeb-8e9b115aef32\") " pod="openstack/ovsdbserver-sb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.749329 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9bb5c8e6-d9f0-45e3-9023-665dc8b3b323-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"9bb5c8e6-d9f0-45e3-9023-665dc8b3b323\") " pod="openstack/ovsdbserver-sb-2" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.749465 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9bb5c8e6-d9f0-45e3-9023-665dc8b3b323-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"9bb5c8e6-d9f0-45e3-9023-665dc8b3b323\") " pod="openstack/ovsdbserver-sb-2" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.749537 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6a63471-debe-4dc8-8eeb-8e9b115aef32-config\") pod \"ovsdbserver-sb-1\" (UID: \"f6a63471-debe-4dc8-8eeb-8e9b115aef32\") " 
pod="openstack/ovsdbserver-sb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.749574 4826 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.749614 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1f144c1b-5880-4039-8e41-48139f0d06fe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1f144c1b-5880-4039-8e41-48139f0d06fe\") pod \"ovsdbserver-sb-1\" (UID: \"f6a63471-debe-4dc8-8eeb-8e9b115aef32\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/59d7b86348735a25e4db78d34f5634088471f4c77de02838be9de1b06de1f533/globalmount\"" pod="openstack/ovsdbserver-sb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.749788 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f6a63471-debe-4dc8-8eeb-8e9b115aef32-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"f6a63471-debe-4dc8-8eeb-8e9b115aef32\") " pod="openstack/ovsdbserver-sb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.751904 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.752909 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/26b5252c-28c5-44b9-a17f-6afb24926978-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"26b5252c-28c5-44b9-a17f-6afb24926978\") " pod="openstack/ovsdbserver-nb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.753474 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6a63471-debe-4dc8-8eeb-8e9b115aef32-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: 
\"f6a63471-debe-4dc8-8eeb-8e9b115aef32\") " pod="openstack/ovsdbserver-sb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.754241 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb5c8e6-d9f0-45e3-9023-665dc8b3b323-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"9bb5c8e6-d9f0-45e3-9023-665dc8b3b323\") " pod="openstack/ovsdbserver-sb-2" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.754738 4826 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.754785 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c64e0890-dec5-4db0-a545-8e4426849446\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c64e0890-dec5-4db0-a545-8e4426849446\") pod \"ovsdbserver-sb-2\" (UID: \"9bb5c8e6-d9f0-45e3-9023-665dc8b3b323\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/edd5777b3ead85ba0f9da767dad63dbcd0cfc6da3ca005cf5bb9fbe09a8ccc39/globalmount\"" pod="openstack/ovsdbserver-sb-2" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.755342 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bb5c8e6-d9f0-45e3-9023-665dc8b3b323-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"9bb5c8e6-d9f0-45e3-9023-665dc8b3b323\") " pod="openstack/ovsdbserver-sb-2" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.758186 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0277bfc7-7497-454e-a4d0-efd51c1c50a4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0277bfc7-7497-454e-a4d0-efd51c1c50a4\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.760051 4826 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/948956a4-cb1c-4cb0-bb88-a749d1ab5990-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"948956a4-cb1c-4cb0-bb88-a749d1ab5990\") " pod="openstack/ovsdbserver-nb-2" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.760747 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6a63471-debe-4dc8-8eeb-8e9b115aef32-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"f6a63471-debe-4dc8-8eeb-8e9b115aef32\") " pod="openstack/ovsdbserver-sb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.767591 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fb27\" (UniqueName: \"kubernetes.io/projected/9bb5c8e6-d9f0-45e3-9023-665dc8b3b323-kube-api-access-4fb27\") pod \"ovsdbserver-sb-2\" (UID: \"9bb5c8e6-d9f0-45e3-9023-665dc8b3b323\") " pod="openstack/ovsdbserver-sb-2" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.781592 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwhcv\" (UniqueName: \"kubernetes.io/projected/f6a63471-debe-4dc8-8eeb-8e9b115aef32-kube-api-access-dwhcv\") pod \"ovsdbserver-sb-1\" (UID: \"f6a63471-debe-4dc8-8eeb-8e9b115aef32\") " pod="openstack/ovsdbserver-sb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.799323 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c64e0890-dec5-4db0-a545-8e4426849446\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c64e0890-dec5-4db0-a545-8e4426849446\") pod \"ovsdbserver-sb-2\" (UID: \"9bb5c8e6-d9f0-45e3-9023-665dc8b3b323\") " pod="openstack/ovsdbserver-sb-2" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.807900 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1f144c1b-5880-4039-8e41-48139f0d06fe\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1f144c1b-5880-4039-8e41-48139f0d06fe\") pod \"ovsdbserver-sb-1\" (UID: \"f6a63471-debe-4dc8-8eeb-8e9b115aef32\") " pod="openstack/ovsdbserver-sb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.829242 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-cdz4j" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.849208 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.858608 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/26b5252c-28c5-44b9-a17f-6afb24926978-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"26b5252c-28c5-44b9-a17f-6afb24926978\") " pod="openstack/ovsdbserver-nb-1" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.859757 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0277bfc7-7497-454e-a4d0-efd51c1c50a4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0277bfc7-7497-454e-a4d0-efd51c1c50a4\") " pod="openstack/ovsdbserver-nb-0" Jan 29 08:08:14 crc kubenswrapper[4826]: I0129 08:08:14.862078 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/948956a4-cb1c-4cb0-bb88-a749d1ab5990-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"948956a4-cb1c-4cb0-bb88-a749d1ab5990\") " pod="openstack/ovsdbserver-nb-2" Jan 29 08:08:15 crc kubenswrapper[4826]: E0129 08:08:15.028613 4826 secret.go:188] Couldn't get secret openstack/cert-ovn-metrics: failed to sync secret cache: timed out waiting for the condition Jan 29 08:08:15 crc kubenswrapper[4826]: E0129 08:08:15.028723 4826 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/948956a4-cb1c-4cb0-bb88-a749d1ab5990-metrics-certs-tls-certs podName:948956a4-cb1c-4cb0-bb88-a749d1ab5990 nodeName:}" failed. No retries permitted until 2026-01-29 08:08:15.528695825 +0000 UTC m=+5079.390488924 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs-tls-certs" (UniqueName: "kubernetes.io/secret/948956a4-cb1c-4cb0-bb88-a749d1ab5990-metrics-certs-tls-certs") pod "ovsdbserver-nb-2" (UID: "948956a4-cb1c-4cb0-bb88-a749d1ab5990") : failed to sync secret cache: timed out waiting for the condition Jan 29 08:08:15 crc kubenswrapper[4826]: E0129 08:08:15.133420 4826 secret.go:188] Couldn't get secret openstack/cert-ovn-metrics: failed to sync secret cache: timed out waiting for the condition Jan 29 08:08:15 crc kubenswrapper[4826]: E0129 08:08:15.133738 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/26b5252c-28c5-44b9-a17f-6afb24926978-metrics-certs-tls-certs podName:26b5252c-28c5-44b9-a17f-6afb24926978 nodeName:}" failed. No retries permitted until 2026-01-29 08:08:15.633719525 +0000 UTC m=+5079.495512594 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs-tls-certs" (UniqueName: "kubernetes.io/secret/26b5252c-28c5-44b9-a17f-6afb24926978-metrics-certs-tls-certs") pod "ovsdbserver-nb-1" (UID: "26b5252c-28c5-44b9-a17f-6afb24926978") : failed to sync secret cache: timed out waiting for the condition Jan 29 08:08:15 crc kubenswrapper[4826]: E0129 08:08:15.133447 4826 secret.go:188] Couldn't get secret openstack/cert-ovn-metrics: failed to sync secret cache: timed out waiting for the condition Jan 29 08:08:15 crc kubenswrapper[4826]: E0129 08:08:15.133958 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0277bfc7-7497-454e-a4d0-efd51c1c50a4-metrics-certs-tls-certs podName:0277bfc7-7497-454e-a4d0-efd51c1c50a4 nodeName:}" failed. 
No retries permitted until 2026-01-29 08:08:15.633950721 +0000 UTC m=+5079.495743790 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs-tls-certs" (UniqueName: "kubernetes.io/secret/0277bfc7-7497-454e-a4d0-efd51c1c50a4-metrics-certs-tls-certs") pod "ovsdbserver-nb-0" (UID: "0277bfc7-7497-454e-a4d0-efd51c1c50a4") : failed to sync secret cache: timed out waiting for the condition Jan 29 08:08:15 crc kubenswrapper[4826]: I0129 08:08:15.332555 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 29 08:08:15 crc kubenswrapper[4826]: I0129 08:08:15.340247 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/eac3a17b-f262-4a99-a017-bdb7c57da317-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"eac3a17b-f262-4a99-a017-bdb7c57da317\") " pod="openstack/ovsdbserver-sb-0" Jan 29 08:08:15 crc kubenswrapper[4826]: I0129 08:08:15.344221 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bb5c8e6-d9f0-45e3-9023-665dc8b3b323-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"9bb5c8e6-d9f0-45e3-9023-665dc8b3b323\") " pod="openstack/ovsdbserver-sb-2" Jan 29 08:08:15 crc kubenswrapper[4826]: I0129 08:08:15.345410 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6a63471-debe-4dc8-8eeb-8e9b115aef32-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"f6a63471-debe-4dc8-8eeb-8e9b115aef32\") " pod="openstack/ovsdbserver-sb-1" Jan 29 08:08:15 crc kubenswrapper[4826]: I0129 08:08:15.370498 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Jan 29 08:08:15 crc kubenswrapper[4826]: I0129 08:08:15.391231 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-1"
Jan 29 08:08:15 crc kubenswrapper[4826]: I0129 08:08:15.569804 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/948956a4-cb1c-4cb0-bb88-a749d1ab5990-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"948956a4-cb1c-4cb0-bb88-a749d1ab5990\") " pod="openstack/ovsdbserver-nb-2"
Jan 29 08:08:15 crc kubenswrapper[4826]: I0129 08:08:15.576194 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/948956a4-cb1c-4cb0-bb88-a749d1ab5990-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"948956a4-cb1c-4cb0-bb88-a749d1ab5990\") " pod="openstack/ovsdbserver-nb-2"
Jan 29 08:08:15 crc kubenswrapper[4826]: I0129 08:08:15.607164 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 29 08:08:15 crc kubenswrapper[4826]: I0129 08:08:15.671434 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0277bfc7-7497-454e-a4d0-efd51c1c50a4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0277bfc7-7497-454e-a4d0-efd51c1c50a4\") " pod="openstack/ovsdbserver-nb-0"
Jan 29 08:08:15 crc kubenswrapper[4826]: I0129 08:08:15.671577 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/26b5252c-28c5-44b9-a17f-6afb24926978-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"26b5252c-28c5-44b9-a17f-6afb24926978\") " pod="openstack/ovsdbserver-nb-1"
Jan 29 08:08:15 crc kubenswrapper[4826]: I0129 08:08:15.675964 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0277bfc7-7497-454e-a4d0-efd51c1c50a4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0277bfc7-7497-454e-a4d0-efd51c1c50a4\") " pod="openstack/ovsdbserver-nb-0"
Jan 29 08:08:15 crc kubenswrapper[4826]: I0129 08:08:15.676111 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/26b5252c-28c5-44b9-a17f-6afb24926978-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"26b5252c-28c5-44b9-a17f-6afb24926978\") " pod="openstack/ovsdbserver-nb-1"
Jan 29 08:08:15 crc kubenswrapper[4826]: I0129 08:08:15.705506 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2"
Jan 29 08:08:15 crc kubenswrapper[4826]: I0129 08:08:15.722191 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1"
Jan 29 08:08:15 crc kubenswrapper[4826]: I0129 08:08:15.865372 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"]
Jan 29 08:08:15 crc kubenswrapper[4826]: I0129 08:08:15.959205 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 29 08:08:15 crc kubenswrapper[4826]: I0129 08:08:15.988109 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"]
Jan 29 08:08:16 crc kubenswrapper[4826]: W0129 08:08:16.011226 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb5c8e6_d9f0_45e3_9023_665dc8b3b323.slice/crio-92f087ecfeacea8f9e1e16287d790b6b28cf3a3dbdc04e331821d79aa157cd30 WatchSource:0}: Error finding container 92f087ecfeacea8f9e1e16287d790b6b28cf3a3dbdc04e331821d79aa157cd30: Status 404 returned error can't find the container with id 92f087ecfeacea8f9e1e16287d790b6b28cf3a3dbdc04e331821d79aa157cd30
Jan 29 08:08:16 crc kubenswrapper[4826]: I0129 08:08:16.056408 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"9bb5c8e6-d9f0-45e3-9023-665dc8b3b323","Type":"ContainerStarted","Data":"92f087ecfeacea8f9e1e16287d790b6b28cf3a3dbdc04e331821d79aa157cd30"}
Jan 29 08:08:16 crc kubenswrapper[4826]: I0129 08:08:16.065807 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"f6a63471-debe-4dc8-8eeb-8e9b115aef32","Type":"ContainerStarted","Data":"e25d7e18a679896d10992e62f3ba76002e058018f949e40a400a107979097c1f"}
Jan 29 08:08:16 crc kubenswrapper[4826]: I0129 08:08:16.135380 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 29 08:08:16 crc kubenswrapper[4826]: W0129 08:08:16.147171 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeac3a17b_f262_4a99_a017_bdb7c57da317.slice/crio-864faeda604af8802118b5fcd3e19ecb86045a64e5d0123bd4fb89e94502cdca WatchSource:0}: Error finding container 864faeda604af8802118b5fcd3e19ecb86045a64e5d0123bd4fb89e94502cdca: Status 404 returned error can't find the container with id 864faeda604af8802118b5fcd3e19ecb86045a64e5d0123bd4fb89e94502cdca
Jan 29 08:08:16 crc kubenswrapper[4826]: I0129 08:08:16.234205 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"]
Jan 29 08:08:16 crc kubenswrapper[4826]: W0129 08:08:16.274834 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod948956a4_cb1c_4cb0_bb88_a749d1ab5990.slice/crio-112602415e14654e065e49e9a95786ab59b58f8002bc237435086130c57ad6a5 WatchSource:0}: Error finding container 112602415e14654e065e49e9a95786ab59b58f8002bc237435086130c57ad6a5: Status 404 returned error can't find the container with id 112602415e14654e065e49e9a95786ab59b58f8002bc237435086130c57ad6a5
Jan 29 08:08:16 crc kubenswrapper[4826]: I0129 08:08:16.332069 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"]
Jan 29 08:08:16 crc kubenswrapper[4826]: W0129 08:08:16.339475 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26b5252c_28c5_44b9_a17f_6afb24926978.slice/crio-c954469b87b99e598e0957c25db5b3ae351b735d7c62a4b44537a035f0607f19 WatchSource:0}: Error finding container c954469b87b99e598e0957c25db5b3ae351b735d7c62a4b44537a035f0607f19: Status 404 returned error can't find the container with id c954469b87b99e598e0957c25db5b3ae351b735d7c62a4b44537a035f0607f19
Jan 29 08:08:16 crc kubenswrapper[4826]: I0129 08:08:16.519645 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 29 08:08:17 crc kubenswrapper[4826]: I0129 08:08:17.073669 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"26b5252c-28c5-44b9-a17f-6afb24926978","Type":"ContainerStarted","Data":"c954469b87b99e598e0957c25db5b3ae351b735d7c62a4b44537a035f0607f19"}
Jan 29 08:08:17 crc kubenswrapper[4826]: I0129 08:08:17.074780 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0277bfc7-7497-454e-a4d0-efd51c1c50a4","Type":"ContainerStarted","Data":"d8087e903fe8c789cf31ad41e28ae3b799cd9a0221ff6b65fa485c6c28b1f8b4"}
Jan 29 08:08:17 crc kubenswrapper[4826]: I0129 08:08:17.076002 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"eac3a17b-f262-4a99-a017-bdb7c57da317","Type":"ContainerStarted","Data":"864faeda604af8802118b5fcd3e19ecb86045a64e5d0123bd4fb89e94502cdca"}
Jan 29 08:08:17 crc kubenswrapper[4826]: I0129 08:08:17.077166 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"948956a4-cb1c-4cb0-bb88-a749d1ab5990","Type":"ContainerStarted","Data":"112602415e14654e065e49e9a95786ab59b58f8002bc237435086130c57ad6a5"}
Jan 29 08:08:22 crc kubenswrapper[4826]: I0129 08:08:22.125623 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"9bb5c8e6-d9f0-45e3-9023-665dc8b3b323","Type":"ContainerStarted","Data":"f3931ec4b1f32cf781ba03292a098e2e0c0d53c24672b8c095e6c90f9c9083db"}
Jan 29 08:08:22 crc kubenswrapper[4826]: I0129 08:08:22.133640 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"26b5252c-28c5-44b9-a17f-6afb24926978","Type":"ContainerStarted","Data":"478566c539210f07eca53f5469410bff69ec2f98e9957243856cceab4d3e4122"}
Jan 29 08:08:22 crc kubenswrapper[4826]: I0129 08:08:22.140127 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0277bfc7-7497-454e-a4d0-efd51c1c50a4","Type":"ContainerStarted","Data":"5ed5381d08d95e11a80cd87063b172be0fc09437a7a1e494d11f4f14504d482c"}
Jan 29 08:08:22 crc kubenswrapper[4826]: I0129 08:08:22.144225 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"eac3a17b-f262-4a99-a017-bdb7c57da317","Type":"ContainerStarted","Data":"5a3a4a1b3cb60f61c037592d074c2fa975b3a99a27bd55c0d7ea3343b35a5a17"}
Jan 29 08:08:22 crc kubenswrapper[4826]: I0129 08:08:22.150820 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"948956a4-cb1c-4cb0-bb88-a749d1ab5990","Type":"ContainerStarted","Data":"8aa8b7d12769bf296957f6052695eb3526e8191653f4018a9eec03627a1374bd"}
Jan 29 08:08:22 crc kubenswrapper[4826]: I0129 08:08:22.162419 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"f6a63471-debe-4dc8-8eeb-8e9b115aef32","Type":"ContainerStarted","Data":"37ba6e1745b62e208d5db5623831bf3884049d694cccbe86f18c09734497608d"}
Jan 29 08:08:23 crc kubenswrapper[4826]: I0129 08:08:23.176464 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"eac3a17b-f262-4a99-a017-bdb7c57da317","Type":"ContainerStarted","Data":"39aa3f5114fb50842a582f52240812b3eb0e789d5ad0b30991c6a39280809d28"}
Jan 29 08:08:23 crc kubenswrapper[4826]: I0129 08:08:23.181387 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"948956a4-cb1c-4cb0-bb88-a749d1ab5990","Type":"ContainerStarted","Data":"77319863792d046f7d12efd10d4d94e7e753d5dbc6cfff11a66005b5d02de94d"}
Jan 29 08:08:23 crc kubenswrapper[4826]: I0129 08:08:23.186378 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"9bb5c8e6-d9f0-45e3-9023-665dc8b3b323","Type":"ContainerStarted","Data":"39984500d2e4c3f7da67cb75c4f36c0e858d89f79a91fb7b1a4770f5f551834a"}
Jan 29 08:08:23 crc kubenswrapper[4826]: I0129 08:08:23.188939 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"f6a63471-debe-4dc8-8eeb-8e9b115aef32","Type":"ContainerStarted","Data":"16c4b997611b03c5e10c44da2796b43c06d1dc7196cc42cadf7392b74cc60dac"}
Jan 29 08:08:23 crc kubenswrapper[4826]: I0129 08:08:23.192088 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"26b5252c-28c5-44b9-a17f-6afb24926978","Type":"ContainerStarted","Data":"283bfb3aa33711ea5f7e05bd32b790a93d327eb03e87d71af44a3f792a93d9e6"}
Jan 29 08:08:23 crc kubenswrapper[4826]: I0129 08:08:23.197711 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0277bfc7-7497-454e-a4d0-efd51c1c50a4","Type":"ContainerStarted","Data":"de351249ea09e40c200cb5e590c6bd0e92dce1dc217a38638d5ba61bfb65398c"}
Jan 29 08:08:23 crc kubenswrapper[4826]: I0129 08:08:23.221697 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=4.727705784 podStartE2EDuration="10.221669538s" podCreationTimestamp="2026-01-29 08:08:13 +0000 UTC" firstStartedPulling="2026-01-29 08:08:16.153150751 +0000 UTC m=+5080.014943820" lastFinishedPulling="2026-01-29 08:08:21.647114515 +0000 UTC m=+5085.508907574" observedRunningTime="2026-01-29 08:08:23.20843982 +0000 UTC m=+5087.070232939" watchObservedRunningTime="2026-01-29 08:08:23.221669538 +0000 UTC m=+5087.083462647"
Jan 29 08:08:23 crc kubenswrapper[4826]: I0129 08:08:23.249406 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=4.507436568 podStartE2EDuration="10.249365846s" podCreationTimestamp="2026-01-29 08:08:13 +0000 UTC" firstStartedPulling="2026-01-29 08:08:15.866582862 +0000 UTC m=+5079.728375921" lastFinishedPulling="2026-01-29 08:08:21.60851209 +0000 UTC m=+5085.470305199" observedRunningTime="2026-01-29 08:08:23.243773999 +0000 UTC m=+5087.105567098" watchObservedRunningTime="2026-01-29 08:08:23.249365846 +0000 UTC m=+5087.111158995"
Jan 29 08:08:23 crc kubenswrapper[4826]: I0129 08:08:23.286727 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=5.956541918 podStartE2EDuration="11.286696807s" podCreationTimestamp="2026-01-29 08:08:12 +0000 UTC" firstStartedPulling="2026-01-29 08:08:16.281489034 +0000 UTC m=+5080.143282103" lastFinishedPulling="2026-01-29 08:08:21.611643903 +0000 UTC m=+5085.473436992" observedRunningTime="2026-01-29 08:08:23.273918061 +0000 UTC m=+5087.135711200" watchObservedRunningTime="2026-01-29 08:08:23.286696807 +0000 UTC m=+5087.148489916"
Jan 29 08:08:23 crc kubenswrapper[4826]: I0129 08:08:23.361181 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=6.089425289 podStartE2EDuration="11.361154843s" podCreationTimestamp="2026-01-29 08:08:12 +0000 UTC" firstStartedPulling="2026-01-29 08:08:16.341130751 +0000 UTC m=+5080.202923820" lastFinishedPulling="2026-01-29 08:08:21.612860305 +0000 UTC m=+5085.474653374" observedRunningTime="2026-01-29 08:08:23.299901494 +0000 UTC m=+5087.161694573" watchObservedRunningTime="2026-01-29 08:08:23.361154843 +0000 UTC m=+5087.222947932"
Jan 29 08:08:23 crc kubenswrapper[4826]: I0129 08:08:23.367372 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=4.773445688 podStartE2EDuration="10.367346546s" podCreationTimestamp="2026-01-29 08:08:13 +0000 UTC" firstStartedPulling="2026-01-29 08:08:16.014856779 +0000 UTC m=+5079.876649858" lastFinishedPulling="2026-01-29 08:08:21.608757607 +0000 UTC m=+5085.470550716" observedRunningTime="2026-01-29 08:08:23.365700203 +0000 UTC m=+5087.227493282" watchObservedRunningTime="2026-01-29 08:08:23.367346546 +0000 UTC m=+5087.229139645"
Jan 29 08:08:23 crc kubenswrapper[4826]: I0129 08:08:23.396937 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=6.317228535 podStartE2EDuration="11.396912933s" podCreationTimestamp="2026-01-29 08:08:12 +0000 UTC" firstStartedPulling="2026-01-29 08:08:16.529526401 +0000 UTC m=+5080.391319480" lastFinishedPulling="2026-01-29 08:08:21.609210799 +0000 UTC m=+5085.471003878" observedRunningTime="2026-01-29 08:08:23.391632304 +0000 UTC m=+5087.253425373" watchObservedRunningTime="2026-01-29 08:08:23.396912933 +0000 UTC m=+5087.258706002"
Jan 29 08:08:24 crc kubenswrapper[4826]: I0129 08:08:24.376792 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2"
Jan 29 08:08:24 crc kubenswrapper[4826]: I0129 08:08:24.392257 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1"
Jan 29 08:08:24 crc kubenswrapper[4826]: I0129 08:08:24.608135 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Jan 29 08:08:24 crc kubenswrapper[4826]: I0129 08:08:24.705864 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2"
Jan 29 08:08:24 crc kubenswrapper[4826]: I0129 08:08:24.722931 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1"
Jan 29 08:08:24 crc kubenswrapper[4826]: I0129 08:08:24.959967 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Jan 29 08:08:25 crc kubenswrapper[4826]: I0129 08:08:25.371360 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2"
Jan 29 08:08:25 crc kubenswrapper[4826]: I0129 08:08:25.391718 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1"
Jan 29 08:08:25 crc kubenswrapper[4826]: I0129 08:08:25.607517 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Jan 29 08:08:25 crc kubenswrapper[4826]: I0129 08:08:25.706337 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2"
Jan 29 08:08:25 crc kubenswrapper[4826]: I0129 08:08:25.723778 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1"
Jan 29 08:08:25 crc kubenswrapper[4826]: I0129 08:08:25.959592 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Jan 29 08:08:27 crc kubenswrapper[4826]: I0129 08:08:27.428062 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2"
Jan 29 08:08:27 crc kubenswrapper[4826]: I0129 08:08:27.475401 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1"
Jan 29 08:08:27 crc kubenswrapper[4826]: I0129 08:08:27.485626 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2"
Jan 29 08:08:27 crc kubenswrapper[4826]: I0129 08:08:27.543721 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1"
Jan 29 08:08:27 crc kubenswrapper[4826]: I0129 08:08:27.667029 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Jan 29 08:08:27 crc kubenswrapper[4826]: I0129 08:08:27.719087 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Jan 29 08:08:27 crc kubenswrapper[4826]: I0129 08:08:27.759370 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7757559cb9-kz7j6"]
Jan 29 08:08:27 crc kubenswrapper[4826]: I0129 08:08:27.761033 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7757559cb9-kz7j6"
Jan 29 08:08:27 crc kubenswrapper[4826]: I0129 08:08:27.773725 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Jan 29 08:08:27 crc kubenswrapper[4826]: I0129 08:08:27.795632 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1"
Jan 29 08:08:27 crc kubenswrapper[4826]: I0129 08:08:27.799525 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7757559cb9-kz7j6"]
Jan 29 08:08:27 crc kubenswrapper[4826]: I0129 08:08:27.900946 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2780b25-582a-4f5e-b8b5-a3418ed9d558-config\") pod \"dnsmasq-dns-7757559cb9-kz7j6\" (UID: \"f2780b25-582a-4f5e-b8b5-a3418ed9d558\") " pod="openstack/dnsmasq-dns-7757559cb9-kz7j6"
Jan 29 08:08:27 crc kubenswrapper[4826]: I0129 08:08:27.900987 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mlr4\" (UniqueName: \"kubernetes.io/projected/f2780b25-582a-4f5e-b8b5-a3418ed9d558-kube-api-access-7mlr4\") pod \"dnsmasq-dns-7757559cb9-kz7j6\" (UID: \"f2780b25-582a-4f5e-b8b5-a3418ed9d558\") " pod="openstack/dnsmasq-dns-7757559cb9-kz7j6"
Jan 29 08:08:27 crc kubenswrapper[4826]: I0129 08:08:27.901005 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2780b25-582a-4f5e-b8b5-a3418ed9d558-dns-svc\") pod \"dnsmasq-dns-7757559cb9-kz7j6\" (UID: \"f2780b25-582a-4f5e-b8b5-a3418ed9d558\") " pod="openstack/dnsmasq-dns-7757559cb9-kz7j6"
Jan 29 08:08:27 crc kubenswrapper[4826]: I0129 08:08:27.901106 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2780b25-582a-4f5e-b8b5-a3418ed9d558-ovsdbserver-sb\") pod \"dnsmasq-dns-7757559cb9-kz7j6\" (UID: \"f2780b25-582a-4f5e-b8b5-a3418ed9d558\") " pod="openstack/dnsmasq-dns-7757559cb9-kz7j6"
Jan 29 08:08:27 crc kubenswrapper[4826]: I0129 08:08:27.978942 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2"
Jan 29 08:08:28 crc kubenswrapper[4826]: I0129 08:08:28.002329 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2780b25-582a-4f5e-b8b5-a3418ed9d558-config\") pod \"dnsmasq-dns-7757559cb9-kz7j6\" (UID: \"f2780b25-582a-4f5e-b8b5-a3418ed9d558\") " pod="openstack/dnsmasq-dns-7757559cb9-kz7j6"
Jan 29 08:08:28 crc kubenswrapper[4826]: I0129 08:08:28.002366 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mlr4\" (UniqueName: \"kubernetes.io/projected/f2780b25-582a-4f5e-b8b5-a3418ed9d558-kube-api-access-7mlr4\") pod \"dnsmasq-dns-7757559cb9-kz7j6\" (UID: \"f2780b25-582a-4f5e-b8b5-a3418ed9d558\") " pod="openstack/dnsmasq-dns-7757559cb9-kz7j6"
Jan 29 08:08:28 crc kubenswrapper[4826]: I0129 08:08:28.002389 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2780b25-582a-4f5e-b8b5-a3418ed9d558-dns-svc\") pod \"dnsmasq-dns-7757559cb9-kz7j6\" (UID: \"f2780b25-582a-4f5e-b8b5-a3418ed9d558\") " pod="openstack/dnsmasq-dns-7757559cb9-kz7j6"
Jan 29 08:08:28 crc kubenswrapper[4826]: I0129 08:08:28.002451 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2780b25-582a-4f5e-b8b5-a3418ed9d558-ovsdbserver-sb\") pod \"dnsmasq-dns-7757559cb9-kz7j6\" (UID: \"f2780b25-582a-4f5e-b8b5-a3418ed9d558\") " pod="openstack/dnsmasq-dns-7757559cb9-kz7j6"
Jan 29 08:08:28 crc kubenswrapper[4826]: I0129 08:08:28.003306 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2780b25-582a-4f5e-b8b5-a3418ed9d558-ovsdbserver-sb\") pod \"dnsmasq-dns-7757559cb9-kz7j6\" (UID: \"f2780b25-582a-4f5e-b8b5-a3418ed9d558\") " pod="openstack/dnsmasq-dns-7757559cb9-kz7j6"
Jan 29 08:08:28 crc kubenswrapper[4826]: I0129 08:08:28.003859 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2780b25-582a-4f5e-b8b5-a3418ed9d558-config\") pod \"dnsmasq-dns-7757559cb9-kz7j6\" (UID: \"f2780b25-582a-4f5e-b8b5-a3418ed9d558\") " pod="openstack/dnsmasq-dns-7757559cb9-kz7j6"
Jan 29 08:08:28 crc kubenswrapper[4826]: I0129 08:08:28.004623 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2780b25-582a-4f5e-b8b5-a3418ed9d558-dns-svc\") pod \"dnsmasq-dns-7757559cb9-kz7j6\" (UID: \"f2780b25-582a-4f5e-b8b5-a3418ed9d558\") " pod="openstack/dnsmasq-dns-7757559cb9-kz7j6"
Jan 29 08:08:28 crc kubenswrapper[4826]: I0129 08:08:28.017315 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Jan 29 08:08:28 crc kubenswrapper[4826]: I0129 08:08:28.030196 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mlr4\" (UniqueName: \"kubernetes.io/projected/f2780b25-582a-4f5e-b8b5-a3418ed9d558-kube-api-access-7mlr4\") pod \"dnsmasq-dns-7757559cb9-kz7j6\" (UID: \"f2780b25-582a-4f5e-b8b5-a3418ed9d558\") " pod="openstack/dnsmasq-dns-7757559cb9-kz7j6"
Jan 29 08:08:28 crc kubenswrapper[4826]: I0129 08:08:28.044979 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2"
Jan 29 08:08:28 crc kubenswrapper[4826]: I0129 08:08:28.056710 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1"
Jan 29 08:08:28 crc kubenswrapper[4826]: I0129 08:08:28.096499 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Jan 29 08:08:28 crc kubenswrapper[4826]: I0129 08:08:28.151040 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7757559cb9-kz7j6"
Jan 29 08:08:28 crc kubenswrapper[4826]: I0129 08:08:28.457325 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7757559cb9-kz7j6"]
Jan 29 08:08:28 crc kubenswrapper[4826]: I0129 08:08:28.481556 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f6f6589-rnvnq"]
Jan 29 08:08:28 crc kubenswrapper[4826]: I0129 08:08:28.482711 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f6f6589-rnvnq"
Jan 29 08:08:28 crc kubenswrapper[4826]: I0129 08:08:28.485865 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Jan 29 08:08:28 crc kubenswrapper[4826]: I0129 08:08:28.497544 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f6f6589-rnvnq"]
Jan 29 08:08:28 crc kubenswrapper[4826]: I0129 08:08:28.613830 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17a94dd0-3150-4b2b-9511-edac7a033212-ovsdbserver-nb\") pod \"dnsmasq-dns-55f6f6589-rnvnq\" (UID: \"17a94dd0-3150-4b2b-9511-edac7a033212\") " pod="openstack/dnsmasq-dns-55f6f6589-rnvnq"
Jan 29 08:08:28 crc kubenswrapper[4826]: I0129 08:08:28.613885 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17a94dd0-3150-4b2b-9511-edac7a033212-dns-svc\") pod \"dnsmasq-dns-55f6f6589-rnvnq\" (UID: \"17a94dd0-3150-4b2b-9511-edac7a033212\") " pod="openstack/dnsmasq-dns-55f6f6589-rnvnq"
Jan 29 08:08:28 crc kubenswrapper[4826]: I0129 08:08:28.613987 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6xq7\" (UniqueName: \"kubernetes.io/projected/17a94dd0-3150-4b2b-9511-edac7a033212-kube-api-access-s6xq7\") pod \"dnsmasq-dns-55f6f6589-rnvnq\" (UID: \"17a94dd0-3150-4b2b-9511-edac7a033212\") " pod="openstack/dnsmasq-dns-55f6f6589-rnvnq"
Jan 29 08:08:28 crc kubenswrapper[4826]: I0129 08:08:28.614260 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17a94dd0-3150-4b2b-9511-edac7a033212-config\") pod \"dnsmasq-dns-55f6f6589-rnvnq\" (UID: \"17a94dd0-3150-4b2b-9511-edac7a033212\") " pod="openstack/dnsmasq-dns-55f6f6589-rnvnq"
Jan 29 08:08:28 crc kubenswrapper[4826]: I0129 08:08:28.614288 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17a94dd0-3150-4b2b-9511-edac7a033212-ovsdbserver-sb\") pod \"dnsmasq-dns-55f6f6589-rnvnq\" (UID: \"17a94dd0-3150-4b2b-9511-edac7a033212\") " pod="openstack/dnsmasq-dns-55f6f6589-rnvnq"
Jan 29 08:08:28 crc kubenswrapper[4826]: I0129 08:08:28.618193 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7757559cb9-kz7j6"]
Jan 29 08:08:28 crc kubenswrapper[4826]: W0129 08:08:28.623668 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2780b25_582a_4f5e_b8b5_a3418ed9d558.slice/crio-19120d3287a3167fd35c6535d03a76be2d369a759c39efba41250fdc2f0aa322 WatchSource:0}: Error finding container 19120d3287a3167fd35c6535d03a76be2d369a759c39efba41250fdc2f0aa322: Status 404 returned error can't find the container with id 19120d3287a3167fd35c6535d03a76be2d369a759c39efba41250fdc2f0aa322
Jan 29 08:08:28 crc kubenswrapper[4826]: I0129 08:08:28.716053 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6xq7\" (UniqueName: \"kubernetes.io/projected/17a94dd0-3150-4b2b-9511-edac7a033212-kube-api-access-s6xq7\") pod \"dnsmasq-dns-55f6f6589-rnvnq\" (UID: \"17a94dd0-3150-4b2b-9511-edac7a033212\") " pod="openstack/dnsmasq-dns-55f6f6589-rnvnq"
Jan 29 08:08:28 crc kubenswrapper[4826]: I0129 08:08:28.716413 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17a94dd0-3150-4b2b-9511-edac7a033212-config\") pod \"dnsmasq-dns-55f6f6589-rnvnq\" (UID: \"17a94dd0-3150-4b2b-9511-edac7a033212\") " pod="openstack/dnsmasq-dns-55f6f6589-rnvnq"
Jan 29 08:08:28 crc kubenswrapper[4826]: I0129 08:08:28.716442 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17a94dd0-3150-4b2b-9511-edac7a033212-ovsdbserver-sb\") pod \"dnsmasq-dns-55f6f6589-rnvnq\" (UID: \"17a94dd0-3150-4b2b-9511-edac7a033212\") " pod="openstack/dnsmasq-dns-55f6f6589-rnvnq"
Jan 29 08:08:28 crc kubenswrapper[4826]: I0129 08:08:28.716508 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17a94dd0-3150-4b2b-9511-edac7a033212-ovsdbserver-nb\") pod \"dnsmasq-dns-55f6f6589-rnvnq\" (UID: \"17a94dd0-3150-4b2b-9511-edac7a033212\") " pod="openstack/dnsmasq-dns-55f6f6589-rnvnq"
Jan 29 08:08:28 crc kubenswrapper[4826]: I0129 08:08:28.716537 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17a94dd0-3150-4b2b-9511-edac7a033212-dns-svc\") pod \"dnsmasq-dns-55f6f6589-rnvnq\" (UID: \"17a94dd0-3150-4b2b-9511-edac7a033212\") " pod="openstack/dnsmasq-dns-55f6f6589-rnvnq"
Jan 29 08:08:28 crc kubenswrapper[4826]: I0129 08:08:28.717374 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17a94dd0-3150-4b2b-9511-edac7a033212-ovsdbserver-sb\") pod \"dnsmasq-dns-55f6f6589-rnvnq\" (UID: \"17a94dd0-3150-4b2b-9511-edac7a033212\") " pod="openstack/dnsmasq-dns-55f6f6589-rnvnq"
Jan 29 08:08:28 crc kubenswrapper[4826]: I0129 08:08:28.717425 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17a94dd0-3150-4b2b-9511-edac7a033212-dns-svc\") pod \"dnsmasq-dns-55f6f6589-rnvnq\" (UID: \"17a94dd0-3150-4b2b-9511-edac7a033212\") " pod="openstack/dnsmasq-dns-55f6f6589-rnvnq"
Jan 29 08:08:28 crc kubenswrapper[4826]: I0129 08:08:28.718185 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17a94dd0-3150-4b2b-9511-edac7a033212-config\") pod \"dnsmasq-dns-55f6f6589-rnvnq\" (UID: \"17a94dd0-3150-4b2b-9511-edac7a033212\") " pod="openstack/dnsmasq-dns-55f6f6589-rnvnq"
Jan 29 08:08:28 crc kubenswrapper[4826]: I0129 08:08:28.718249 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17a94dd0-3150-4b2b-9511-edac7a033212-ovsdbserver-nb\") pod \"dnsmasq-dns-55f6f6589-rnvnq\" (UID: \"17a94dd0-3150-4b2b-9511-edac7a033212\") " pod="openstack/dnsmasq-dns-55f6f6589-rnvnq"
Jan 29 08:08:28 crc kubenswrapper[4826]: I0129 08:08:28.733710 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6xq7\" (UniqueName: \"kubernetes.io/projected/17a94dd0-3150-4b2b-9511-edac7a033212-kube-api-access-s6xq7\") pod \"dnsmasq-dns-55f6f6589-rnvnq\" (UID: \"17a94dd0-3150-4b2b-9511-edac7a033212\") " pod="openstack/dnsmasq-dns-55f6f6589-rnvnq"
Jan 29 08:08:28 crc kubenswrapper[4826]: I0129 08:08:28.812930 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f6f6589-rnvnq"
Jan 29 08:08:29 crc kubenswrapper[4826]: I0129 08:08:29.264902 4826 generic.go:334] "Generic (PLEG): container finished" podID="f2780b25-582a-4f5e-b8b5-a3418ed9d558" containerID="f50f78bc75d00c31d68ea7d626d615f8cce9a535924273aa5a97d9758e6ec660" exitCode=0
Jan 29 08:08:29 crc kubenswrapper[4826]: I0129 08:08:29.265008 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7757559cb9-kz7j6" event={"ID":"f2780b25-582a-4f5e-b8b5-a3418ed9d558","Type":"ContainerDied","Data":"f50f78bc75d00c31d68ea7d626d615f8cce9a535924273aa5a97d9758e6ec660"}
Jan 29 08:08:29 crc kubenswrapper[4826]: I0129 08:08:29.265376 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7757559cb9-kz7j6" event={"ID":"f2780b25-582a-4f5e-b8b5-a3418ed9d558","Type":"ContainerStarted","Data":"19120d3287a3167fd35c6535d03a76be2d369a759c39efba41250fdc2f0aa322"}
Jan 29 08:08:29 crc kubenswrapper[4826]: I0129 08:08:29.282241 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f6f6589-rnvnq"]
Jan 29 08:08:29 crc kubenswrapper[4826]: W0129 08:08:29.296244 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17a94dd0_3150_4b2b_9511_edac7a033212.slice/crio-bbebf64875500f69a8f56ec7766a89a3deea687e227e043a9e315bc4fc5ec4ec WatchSource:0}: Error finding container bbebf64875500f69a8f56ec7766a89a3deea687e227e043a9e315bc4fc5ec4ec: Status 404 returned error can't find the container with id bbebf64875500f69a8f56ec7766a89a3deea687e227e043a9e315bc4fc5ec4ec
Jan 29 08:08:29 crc kubenswrapper[4826]: I0129 08:08:29.688920 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7757559cb9-kz7j6"
Jan 29 08:08:29 crc kubenswrapper[4826]: I0129 08:08:29.834391 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2780b25-582a-4f5e-b8b5-a3418ed9d558-ovsdbserver-sb\") pod \"f2780b25-582a-4f5e-b8b5-a3418ed9d558\" (UID: \"f2780b25-582a-4f5e-b8b5-a3418ed9d558\") "
Jan 29 08:08:29 crc kubenswrapper[4826]: I0129 08:08:29.834537 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2780b25-582a-4f5e-b8b5-a3418ed9d558-dns-svc\") pod \"f2780b25-582a-4f5e-b8b5-a3418ed9d558\" (UID: \"f2780b25-582a-4f5e-b8b5-a3418ed9d558\") "
Jan 29 08:08:29 crc kubenswrapper[4826]: I0129 08:08:29.834572 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mlr4\" (UniqueName: \"kubernetes.io/projected/f2780b25-582a-4f5e-b8b5-a3418ed9d558-kube-api-access-7mlr4\") pod \"f2780b25-582a-4f5e-b8b5-a3418ed9d558\" (UID: \"f2780b25-582a-4f5e-b8b5-a3418ed9d558\") "
Jan 29 08:08:29 crc kubenswrapper[4826]: I0129 08:08:29.834608 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2780b25-582a-4f5e-b8b5-a3418ed9d558-config\") pod \"f2780b25-582a-4f5e-b8b5-a3418ed9d558\" (UID: \"f2780b25-582a-4f5e-b8b5-a3418ed9d558\") "
Jan 29 08:08:29 crc kubenswrapper[4826]: I0129 08:08:29.841526 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2780b25-582a-4f5e-b8b5-a3418ed9d558-kube-api-access-7mlr4" (OuterVolumeSpecName: "kube-api-access-7mlr4") pod "f2780b25-582a-4f5e-b8b5-a3418ed9d558" (UID: "f2780b25-582a-4f5e-b8b5-a3418ed9d558"). InnerVolumeSpecName "kube-api-access-7mlr4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:08:29 crc kubenswrapper[4826]: I0129 08:08:29.861054 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2780b25-582a-4f5e-b8b5-a3418ed9d558-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f2780b25-582a-4f5e-b8b5-a3418ed9d558" (UID: "f2780b25-582a-4f5e-b8b5-a3418ed9d558"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 08:08:29 crc kubenswrapper[4826]: I0129 08:08:29.872317 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2780b25-582a-4f5e-b8b5-a3418ed9d558-config" (OuterVolumeSpecName: "config") pod "f2780b25-582a-4f5e-b8b5-a3418ed9d558" (UID: "f2780b25-582a-4f5e-b8b5-a3418ed9d558"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 08:08:29 crc kubenswrapper[4826]: I0129 08:08:29.874620 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2780b25-582a-4f5e-b8b5-a3418ed9d558-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f2780b25-582a-4f5e-b8b5-a3418ed9d558" (UID: "f2780b25-582a-4f5e-b8b5-a3418ed9d558"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 08:08:29 crc kubenswrapper[4826]: I0129 08:08:29.937416 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2780b25-582a-4f5e-b8b5-a3418ed9d558-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 29 08:08:29 crc kubenswrapper[4826]: I0129 08:08:29.937466 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mlr4\" (UniqueName: \"kubernetes.io/projected/f2780b25-582a-4f5e-b8b5-a3418ed9d558-kube-api-access-7mlr4\") on node \"crc\" DevicePath \"\""
Jan 29 08:08:29 crc kubenswrapper[4826]: I0129 08:08:29.937488 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2780b25-582a-4f5e-b8b5-a3418ed9d558-config\") on node \"crc\" DevicePath \"\""
Jan 29 08:08:29 crc kubenswrapper[4826]: I0129 08:08:29.937504 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2780b25-582a-4f5e-b8b5-a3418ed9d558-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 29 08:08:30 crc kubenswrapper[4826]: I0129 08:08:30.277469 4826 generic.go:334] "Generic (PLEG): container finished" podID="17a94dd0-3150-4b2b-9511-edac7a033212" containerID="4c5112a45c83cea2a6e958a4b92428138c3bfb749a18590ad0bd6fdfdf165c4a" exitCode=0
Jan 29 08:08:30 crc kubenswrapper[4826]: I0129 08:08:30.278490 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f6f6589-rnvnq" event={"ID":"17a94dd0-3150-4b2b-9511-edac7a033212","Type":"ContainerDied","Data":"4c5112a45c83cea2a6e958a4b92428138c3bfb749a18590ad0bd6fdfdf165c4a"}
Jan 29 08:08:30 crc kubenswrapper[4826]: I0129 08:08:30.278517 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f6f6589-rnvnq" event={"ID":"17a94dd0-3150-4b2b-9511-edac7a033212","Type":"ContainerStarted","Data":"bbebf64875500f69a8f56ec7766a89a3deea687e227e043a9e315bc4fc5ec4ec"}
Jan 29
08:08:30 crc kubenswrapper[4826]: I0129 08:08:30.282313 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7757559cb9-kz7j6" event={"ID":"f2780b25-582a-4f5e-b8b5-a3418ed9d558","Type":"ContainerDied","Data":"19120d3287a3167fd35c6535d03a76be2d369a759c39efba41250fdc2f0aa322"} Jan 29 08:08:30 crc kubenswrapper[4826]: I0129 08:08:30.282367 4826 scope.go:117] "RemoveContainer" containerID="f50f78bc75d00c31d68ea7d626d615f8cce9a535924273aa5a97d9758e6ec660" Jan 29 08:08:30 crc kubenswrapper[4826]: I0129 08:08:30.282378 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7757559cb9-kz7j6" Jan 29 08:08:30 crc kubenswrapper[4826]: I0129 08:08:30.526666 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7757559cb9-kz7j6"] Jan 29 08:08:30 crc kubenswrapper[4826]: I0129 08:08:30.534782 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7757559cb9-kz7j6"] Jan 29 08:08:30 crc kubenswrapper[4826]: I0129 08:08:30.826498 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2780b25-582a-4f5e-b8b5-a3418ed9d558" path="/var/lib/kubelet/pods/f2780b25-582a-4f5e-b8b5-a3418ed9d558/volumes" Jan 29 08:08:31 crc kubenswrapper[4826]: I0129 08:08:31.235471 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Jan 29 08:08:31 crc kubenswrapper[4826]: E0129 08:08:31.236340 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2780b25-582a-4f5e-b8b5-a3418ed9d558" containerName="init" Jan 29 08:08:31 crc kubenswrapper[4826]: I0129 08:08:31.236411 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2780b25-582a-4f5e-b8b5-a3418ed9d558" containerName="init" Jan 29 08:08:31 crc kubenswrapper[4826]: I0129 08:08:31.236629 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2780b25-582a-4f5e-b8b5-a3418ed9d558" containerName="init" Jan 29 08:08:31 crc 
kubenswrapper[4826]: I0129 08:08:31.237237 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Jan 29 08:08:31 crc kubenswrapper[4826]: I0129 08:08:31.239954 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Jan 29 08:08:31 crc kubenswrapper[4826]: I0129 08:08:31.242411 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Jan 29 08:08:31 crc kubenswrapper[4826]: I0129 08:08:31.292353 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f6f6589-rnvnq" event={"ID":"17a94dd0-3150-4b2b-9511-edac7a033212","Type":"ContainerStarted","Data":"aa5733609cc4d54bbe7bfd6da21a24af57cd2ba5916bbac9294318e7dc3d1b5e"} Jan 29 08:08:31 crc kubenswrapper[4826]: I0129 08:08:31.292400 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f6f6589-rnvnq" Jan 29 08:08:31 crc kubenswrapper[4826]: I0129 08:08:31.313827 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f6f6589-rnvnq" podStartSLOduration=3.313806993 podStartE2EDuration="3.313806993s" podCreationTimestamp="2026-01-29 08:08:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:08:31.307242881 +0000 UTC m=+5095.169035970" watchObservedRunningTime="2026-01-29 08:08:31.313806993 +0000 UTC m=+5095.175600072" Jan 29 08:08:31 crc kubenswrapper[4826]: I0129 08:08:31.367942 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4lbl\" (UniqueName: \"kubernetes.io/projected/05cde62b-3d8f-402e-993b-244facc60c7f-kube-api-access-z4lbl\") pod \"ovn-copy-data\" (UID: \"05cde62b-3d8f-402e-993b-244facc60c7f\") " pod="openstack/ovn-copy-data" Jan 29 08:08:31 crc kubenswrapper[4826]: I0129 08:08:31.368091 4826 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-32315dc2-dce8-4946-a51f-2775a0aab875\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-32315dc2-dce8-4946-a51f-2775a0aab875\") pod \"ovn-copy-data\" (UID: \"05cde62b-3d8f-402e-993b-244facc60c7f\") " pod="openstack/ovn-copy-data" Jan 29 08:08:31 crc kubenswrapper[4826]: I0129 08:08:31.368130 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/05cde62b-3d8f-402e-993b-244facc60c7f-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"05cde62b-3d8f-402e-993b-244facc60c7f\") " pod="openstack/ovn-copy-data" Jan 29 08:08:31 crc kubenswrapper[4826]: I0129 08:08:31.469881 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4lbl\" (UniqueName: \"kubernetes.io/projected/05cde62b-3d8f-402e-993b-244facc60c7f-kube-api-access-z4lbl\") pod \"ovn-copy-data\" (UID: \"05cde62b-3d8f-402e-993b-244facc60c7f\") " pod="openstack/ovn-copy-data" Jan 29 08:08:31 crc kubenswrapper[4826]: I0129 08:08:31.470055 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-32315dc2-dce8-4946-a51f-2775a0aab875\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-32315dc2-dce8-4946-a51f-2775a0aab875\") pod \"ovn-copy-data\" (UID: \"05cde62b-3d8f-402e-993b-244facc60c7f\") " pod="openstack/ovn-copy-data" Jan 29 08:08:31 crc kubenswrapper[4826]: I0129 08:08:31.470099 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/05cde62b-3d8f-402e-993b-244facc60c7f-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"05cde62b-3d8f-402e-993b-244facc60c7f\") " pod="openstack/ovn-copy-data" Jan 29 08:08:31 crc kubenswrapper[4826]: I0129 08:08:31.473212 4826 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice 
STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 29 08:08:31 crc kubenswrapper[4826]: I0129 08:08:31.473338 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-32315dc2-dce8-4946-a51f-2775a0aab875\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-32315dc2-dce8-4946-a51f-2775a0aab875\") pod \"ovn-copy-data\" (UID: \"05cde62b-3d8f-402e-993b-244facc60c7f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/cdfc5888789623068de13721a2c5f13a58f86b6d155acfe0aca3bfc3dd097c03/globalmount\"" pod="openstack/ovn-copy-data" Jan 29 08:08:31 crc kubenswrapper[4826]: I0129 08:08:31.489032 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/05cde62b-3d8f-402e-993b-244facc60c7f-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"05cde62b-3d8f-402e-993b-244facc60c7f\") " pod="openstack/ovn-copy-data" Jan 29 08:08:31 crc kubenswrapper[4826]: I0129 08:08:31.498023 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4lbl\" (UniqueName: \"kubernetes.io/projected/05cde62b-3d8f-402e-993b-244facc60c7f-kube-api-access-z4lbl\") pod \"ovn-copy-data\" (UID: \"05cde62b-3d8f-402e-993b-244facc60c7f\") " pod="openstack/ovn-copy-data" Jan 29 08:08:31 crc kubenswrapper[4826]: I0129 08:08:31.513205 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-32315dc2-dce8-4946-a51f-2775a0aab875\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-32315dc2-dce8-4946-a51f-2775a0aab875\") pod \"ovn-copy-data\" (UID: \"05cde62b-3d8f-402e-993b-244facc60c7f\") " pod="openstack/ovn-copy-data" Jan 29 08:08:31 crc kubenswrapper[4826]: I0129 08:08:31.550650 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Jan 29 08:08:32 crc kubenswrapper[4826]: I0129 08:08:32.184161 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Jan 29 08:08:32 crc kubenswrapper[4826]: I0129 08:08:32.305739 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"05cde62b-3d8f-402e-993b-244facc60c7f","Type":"ContainerStarted","Data":"d68e39ce2a9b2359f9af1de0a2725358fbbb0c4a0ab48d586c862c68d6e8c985"} Jan 29 08:08:33 crc kubenswrapper[4826]: I0129 08:08:33.317161 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"05cde62b-3d8f-402e-993b-244facc60c7f","Type":"ContainerStarted","Data":"7fbe0117a117180fec7ede910f9a96340719f29886cb5d0282e444ad134255a4"} Jan 29 08:08:33 crc kubenswrapper[4826]: I0129 08:08:33.353355 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=3.175709519 podStartE2EDuration="3.353336936s" podCreationTimestamp="2026-01-29 08:08:30 +0000 UTC" firstStartedPulling="2026-01-29 08:08:32.192748259 +0000 UTC m=+5096.054541338" lastFinishedPulling="2026-01-29 08:08:32.370375686 +0000 UTC m=+5096.232168755" observedRunningTime="2026-01-29 08:08:33.345675994 +0000 UTC m=+5097.207469073" watchObservedRunningTime="2026-01-29 08:08:33.353336936 +0000 UTC m=+5097.215130015" Jan 29 08:08:35 crc kubenswrapper[4826]: I0129 08:08:35.656107 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:08:35 crc kubenswrapper[4826]: I0129 08:08:35.656478 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" 
podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:08:38 crc kubenswrapper[4826]: I0129 08:08:38.825496 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f6f6589-rnvnq" Jan 29 08:08:38 crc kubenswrapper[4826]: I0129 08:08:38.941512 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cf4cf88bf-259rh"] Jan 29 08:08:38 crc kubenswrapper[4826]: I0129 08:08:38.941799 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6cf4cf88bf-259rh" podUID="55efbdae-46d2-40b3-8ce0-482112cdeb2b" containerName="dnsmasq-dns" containerID="cri-o://0e8fd8d21e89cab4935a601407d27a2b474d066c7e9aac3f9518cba0d516abd1" gracePeriod=10 Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.095698 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.102421 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.105829 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.106168 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-nn7g2" Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.106223 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.106366 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.132277 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.208235 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f28c19ea-7404-4f03-b020-068bbd66be87-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f28c19ea-7404-4f03-b020-068bbd66be87\") " pod="openstack/ovn-northd-0" Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.208315 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f28c19ea-7404-4f03-b020-068bbd66be87-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f28c19ea-7404-4f03-b020-068bbd66be87\") " pod="openstack/ovn-northd-0" Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.208366 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gps4r\" (UniqueName: \"kubernetes.io/projected/f28c19ea-7404-4f03-b020-068bbd66be87-kube-api-access-gps4r\") pod \"ovn-northd-0\" (UID: \"f28c19ea-7404-4f03-b020-068bbd66be87\") " 
pod="openstack/ovn-northd-0" Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.208388 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f28c19ea-7404-4f03-b020-068bbd66be87-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f28c19ea-7404-4f03-b020-068bbd66be87\") " pod="openstack/ovn-northd-0" Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.208428 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f28c19ea-7404-4f03-b020-068bbd66be87-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f28c19ea-7404-4f03-b020-068bbd66be87\") " pod="openstack/ovn-northd-0" Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.208462 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f28c19ea-7404-4f03-b020-068bbd66be87-config\") pod \"ovn-northd-0\" (UID: \"f28c19ea-7404-4f03-b020-068bbd66be87\") " pod="openstack/ovn-northd-0" Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.208565 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f28c19ea-7404-4f03-b020-068bbd66be87-scripts\") pod \"ovn-northd-0\" (UID: \"f28c19ea-7404-4f03-b020-068bbd66be87\") " pod="openstack/ovn-northd-0" Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.309967 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f28c19ea-7404-4f03-b020-068bbd66be87-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f28c19ea-7404-4f03-b020-068bbd66be87\") " pod="openstack/ovn-northd-0" Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.310265 4826 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f28c19ea-7404-4f03-b020-068bbd66be87-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f28c19ea-7404-4f03-b020-068bbd66be87\") " pod="openstack/ovn-northd-0" Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.310325 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gps4r\" (UniqueName: \"kubernetes.io/projected/f28c19ea-7404-4f03-b020-068bbd66be87-kube-api-access-gps4r\") pod \"ovn-northd-0\" (UID: \"f28c19ea-7404-4f03-b020-068bbd66be87\") " pod="openstack/ovn-northd-0" Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.310351 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f28c19ea-7404-4f03-b020-068bbd66be87-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f28c19ea-7404-4f03-b020-068bbd66be87\") " pod="openstack/ovn-northd-0" Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.310388 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f28c19ea-7404-4f03-b020-068bbd66be87-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f28c19ea-7404-4f03-b020-068bbd66be87\") " pod="openstack/ovn-northd-0" Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.310433 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f28c19ea-7404-4f03-b020-068bbd66be87-config\") pod \"ovn-northd-0\" (UID: \"f28c19ea-7404-4f03-b020-068bbd66be87\") " pod="openstack/ovn-northd-0" Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.310454 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f28c19ea-7404-4f03-b020-068bbd66be87-scripts\") pod \"ovn-northd-0\" (UID: 
\"f28c19ea-7404-4f03-b020-068bbd66be87\") " pod="openstack/ovn-northd-0" Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.310814 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f28c19ea-7404-4f03-b020-068bbd66be87-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f28c19ea-7404-4f03-b020-068bbd66be87\") " pod="openstack/ovn-northd-0" Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.311538 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f28c19ea-7404-4f03-b020-068bbd66be87-config\") pod \"ovn-northd-0\" (UID: \"f28c19ea-7404-4f03-b020-068bbd66be87\") " pod="openstack/ovn-northd-0" Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.311550 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f28c19ea-7404-4f03-b020-068bbd66be87-scripts\") pod \"ovn-northd-0\" (UID: \"f28c19ea-7404-4f03-b020-068bbd66be87\") " pod="openstack/ovn-northd-0" Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.318145 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f28c19ea-7404-4f03-b020-068bbd66be87-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f28c19ea-7404-4f03-b020-068bbd66be87\") " pod="openstack/ovn-northd-0" Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.322892 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f28c19ea-7404-4f03-b020-068bbd66be87-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f28c19ea-7404-4f03-b020-068bbd66be87\") " pod="openstack/ovn-northd-0" Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.327183 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f28c19ea-7404-4f03-b020-068bbd66be87-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f28c19ea-7404-4f03-b020-068bbd66be87\") " pod="openstack/ovn-northd-0" Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.337084 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gps4r\" (UniqueName: \"kubernetes.io/projected/f28c19ea-7404-4f03-b020-068bbd66be87-kube-api-access-gps4r\") pod \"ovn-northd-0\" (UID: \"f28c19ea-7404-4f03-b020-068bbd66be87\") " pod="openstack/ovn-northd-0" Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.373076 4826 generic.go:334] "Generic (PLEG): container finished" podID="55efbdae-46d2-40b3-8ce0-482112cdeb2b" containerID="0e8fd8d21e89cab4935a601407d27a2b474d066c7e9aac3f9518cba0d516abd1" exitCode=0 Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.373373 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cf4cf88bf-259rh" event={"ID":"55efbdae-46d2-40b3-8ce0-482112cdeb2b","Type":"ContainerDied","Data":"0e8fd8d21e89cab4935a601407d27a2b474d066c7e9aac3f9518cba0d516abd1"} Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.435657 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.480752 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cf4cf88bf-259rh" Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.617547 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9lx2\" (UniqueName: \"kubernetes.io/projected/55efbdae-46d2-40b3-8ce0-482112cdeb2b-kube-api-access-l9lx2\") pod \"55efbdae-46d2-40b3-8ce0-482112cdeb2b\" (UID: \"55efbdae-46d2-40b3-8ce0-482112cdeb2b\") " Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.618161 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55efbdae-46d2-40b3-8ce0-482112cdeb2b-dns-svc\") pod \"55efbdae-46d2-40b3-8ce0-482112cdeb2b\" (UID: \"55efbdae-46d2-40b3-8ce0-482112cdeb2b\") " Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.618282 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55efbdae-46d2-40b3-8ce0-482112cdeb2b-config\") pod \"55efbdae-46d2-40b3-8ce0-482112cdeb2b\" (UID: \"55efbdae-46d2-40b3-8ce0-482112cdeb2b\") " Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.621939 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55efbdae-46d2-40b3-8ce0-482112cdeb2b-kube-api-access-l9lx2" (OuterVolumeSpecName: "kube-api-access-l9lx2") pod "55efbdae-46d2-40b3-8ce0-482112cdeb2b" (UID: "55efbdae-46d2-40b3-8ce0-482112cdeb2b"). InnerVolumeSpecName "kube-api-access-l9lx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.657804 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55efbdae-46d2-40b3-8ce0-482112cdeb2b-config" (OuterVolumeSpecName: "config") pod "55efbdae-46d2-40b3-8ce0-482112cdeb2b" (UID: "55efbdae-46d2-40b3-8ce0-482112cdeb2b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.674233 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55efbdae-46d2-40b3-8ce0-482112cdeb2b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "55efbdae-46d2-40b3-8ce0-482112cdeb2b" (UID: "55efbdae-46d2-40b3-8ce0-482112cdeb2b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.722137 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9lx2\" (UniqueName: \"kubernetes.io/projected/55efbdae-46d2-40b3-8ce0-482112cdeb2b-kube-api-access-l9lx2\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.722204 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/55efbdae-46d2-40b3-8ce0-482112cdeb2b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.722233 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55efbdae-46d2-40b3-8ce0-482112cdeb2b-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:39 crc kubenswrapper[4826]: I0129 08:08:39.899819 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 08:08:39 crc kubenswrapper[4826]: W0129 08:08:39.905334 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf28c19ea_7404_4f03_b020_068bbd66be87.slice/crio-f13c7d67d30a8cba3668ce1329c253313eea3456ee43a9a08aef0ad99f034f7d WatchSource:0}: Error finding container f13c7d67d30a8cba3668ce1329c253313eea3456ee43a9a08aef0ad99f034f7d: Status 404 returned error can't find the container with id f13c7d67d30a8cba3668ce1329c253313eea3456ee43a9a08aef0ad99f034f7d Jan 29 08:08:40 crc kubenswrapper[4826]: 
I0129 08:08:40.381935 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f28c19ea-7404-4f03-b020-068bbd66be87","Type":"ContainerStarted","Data":"f13c7d67d30a8cba3668ce1329c253313eea3456ee43a9a08aef0ad99f034f7d"} Jan 29 08:08:40 crc kubenswrapper[4826]: I0129 08:08:40.384392 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cf4cf88bf-259rh" event={"ID":"55efbdae-46d2-40b3-8ce0-482112cdeb2b","Type":"ContainerDied","Data":"4a874d3acb5fc495893a6c3e2830e68235c1f84ee012d7ef02eb2cee4a71ddc5"} Jan 29 08:08:40 crc kubenswrapper[4826]: I0129 08:08:40.384435 4826 scope.go:117] "RemoveContainer" containerID="0e8fd8d21e89cab4935a601407d27a2b474d066c7e9aac3f9518cba0d516abd1" Jan 29 08:08:40 crc kubenswrapper[4826]: I0129 08:08:40.384475 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cf4cf88bf-259rh" Jan 29 08:08:40 crc kubenswrapper[4826]: I0129 08:08:40.426563 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cf4cf88bf-259rh"] Jan 29 08:08:40 crc kubenswrapper[4826]: I0129 08:08:40.433248 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6cf4cf88bf-259rh"] Jan 29 08:08:40 crc kubenswrapper[4826]: I0129 08:08:40.466436 4826 scope.go:117] "RemoveContainer" containerID="4dcbd755b79c5cedcd58ad8f78cbe022c654d62d55e9995554ee26fb59f6a266" Jan 29 08:08:40 crc kubenswrapper[4826]: I0129 08:08:40.850655 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55efbdae-46d2-40b3-8ce0-482112cdeb2b" path="/var/lib/kubelet/pods/55efbdae-46d2-40b3-8ce0-482112cdeb2b/volumes" Jan 29 08:08:41 crc kubenswrapper[4826]: I0129 08:08:41.397744 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f28c19ea-7404-4f03-b020-068bbd66be87","Type":"ContainerStarted","Data":"bd60e0e49fee0b7805aee9f543cb1304a8e92fd9300038ebd3cdb3f722393039"} Jan 
29 08:08:41 crc kubenswrapper[4826]: I0129 08:08:41.397791 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f28c19ea-7404-4f03-b020-068bbd66be87","Type":"ContainerStarted","Data":"0603e41666552c452dace7aff0b19d9c56f40b6047f1bf3461b72f70cceb3720"} Jan 29 08:08:41 crc kubenswrapper[4826]: I0129 08:08:41.398007 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 29 08:08:41 crc kubenswrapper[4826]: I0129 08:08:41.434265 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.820757983 podStartE2EDuration="2.434233413s" podCreationTimestamp="2026-01-29 08:08:39 +0000 UTC" firstStartedPulling="2026-01-29 08:08:39.908770689 +0000 UTC m=+5103.770563758" lastFinishedPulling="2026-01-29 08:08:40.522246119 +0000 UTC m=+5104.384039188" observedRunningTime="2026-01-29 08:08:41.422574886 +0000 UTC m=+5105.284367995" watchObservedRunningTime="2026-01-29 08:08:41.434233413 +0000 UTC m=+5105.296026522" Jan 29 08:08:44 crc kubenswrapper[4826]: I0129 08:08:44.463211 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-k84pd"] Jan 29 08:08:44 crc kubenswrapper[4826]: E0129 08:08:44.465917 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55efbdae-46d2-40b3-8ce0-482112cdeb2b" containerName="init" Jan 29 08:08:44 crc kubenswrapper[4826]: I0129 08:08:44.465946 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="55efbdae-46d2-40b3-8ce0-482112cdeb2b" containerName="init" Jan 29 08:08:44 crc kubenswrapper[4826]: E0129 08:08:44.465968 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55efbdae-46d2-40b3-8ce0-482112cdeb2b" containerName="dnsmasq-dns" Jan 29 08:08:44 crc kubenswrapper[4826]: I0129 08:08:44.466004 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="55efbdae-46d2-40b3-8ce0-482112cdeb2b" containerName="dnsmasq-dns" Jan 29 
08:08:44 crc kubenswrapper[4826]: I0129 08:08:44.466387 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="55efbdae-46d2-40b3-8ce0-482112cdeb2b" containerName="dnsmasq-dns" Jan 29 08:08:44 crc kubenswrapper[4826]: I0129 08:08:44.467512 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-k84pd" Jan 29 08:08:44 crc kubenswrapper[4826]: I0129 08:08:44.474333 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-0d26-account-create-update-p7z5r"] Jan 29 08:08:44 crc kubenswrapper[4826]: I0129 08:08:44.475659 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0d26-account-create-update-p7z5r" Jan 29 08:08:44 crc kubenswrapper[4826]: I0129 08:08:44.477784 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 29 08:08:44 crc kubenswrapper[4826]: I0129 08:08:44.501574 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-k84pd"] Jan 29 08:08:44 crc kubenswrapper[4826]: I0129 08:08:44.509618 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0d26-account-create-update-p7z5r"] Jan 29 08:08:44 crc kubenswrapper[4826]: I0129 08:08:44.644424 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a71cd30-8e95-4f1e-855a-53f82a51a03e-operator-scripts\") pod \"keystone-0d26-account-create-update-p7z5r\" (UID: \"4a71cd30-8e95-4f1e-855a-53f82a51a03e\") " pod="openstack/keystone-0d26-account-create-update-p7z5r" Jan 29 08:08:44 crc kubenswrapper[4826]: I0129 08:08:44.644498 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ee5ef9b-b0e3-4ccf-b1cf-f06608bf006a-operator-scripts\") pod \"keystone-db-create-k84pd\" (UID: 
\"1ee5ef9b-b0e3-4ccf-b1cf-f06608bf006a\") " pod="openstack/keystone-db-create-k84pd" Jan 29 08:08:44 crc kubenswrapper[4826]: I0129 08:08:44.644668 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8779l\" (UniqueName: \"kubernetes.io/projected/4a71cd30-8e95-4f1e-855a-53f82a51a03e-kube-api-access-8779l\") pod \"keystone-0d26-account-create-update-p7z5r\" (UID: \"4a71cd30-8e95-4f1e-855a-53f82a51a03e\") " pod="openstack/keystone-0d26-account-create-update-p7z5r" Jan 29 08:08:44 crc kubenswrapper[4826]: I0129 08:08:44.644721 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzxld\" (UniqueName: \"kubernetes.io/projected/1ee5ef9b-b0e3-4ccf-b1cf-f06608bf006a-kube-api-access-pzxld\") pod \"keystone-db-create-k84pd\" (UID: \"1ee5ef9b-b0e3-4ccf-b1cf-f06608bf006a\") " pod="openstack/keystone-db-create-k84pd" Jan 29 08:08:44 crc kubenswrapper[4826]: I0129 08:08:44.746244 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8779l\" (UniqueName: \"kubernetes.io/projected/4a71cd30-8e95-4f1e-855a-53f82a51a03e-kube-api-access-8779l\") pod \"keystone-0d26-account-create-update-p7z5r\" (UID: \"4a71cd30-8e95-4f1e-855a-53f82a51a03e\") " pod="openstack/keystone-0d26-account-create-update-p7z5r" Jan 29 08:08:44 crc kubenswrapper[4826]: I0129 08:08:44.746760 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzxld\" (UniqueName: \"kubernetes.io/projected/1ee5ef9b-b0e3-4ccf-b1cf-f06608bf006a-kube-api-access-pzxld\") pod \"keystone-db-create-k84pd\" (UID: \"1ee5ef9b-b0e3-4ccf-b1cf-f06608bf006a\") " pod="openstack/keystone-db-create-k84pd" Jan 29 08:08:44 crc kubenswrapper[4826]: I0129 08:08:44.747097 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/4a71cd30-8e95-4f1e-855a-53f82a51a03e-operator-scripts\") pod \"keystone-0d26-account-create-update-p7z5r\" (UID: \"4a71cd30-8e95-4f1e-855a-53f82a51a03e\") " pod="openstack/keystone-0d26-account-create-update-p7z5r" Jan 29 08:08:44 crc kubenswrapper[4826]: I0129 08:08:44.747346 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ee5ef9b-b0e3-4ccf-b1cf-f06608bf006a-operator-scripts\") pod \"keystone-db-create-k84pd\" (UID: \"1ee5ef9b-b0e3-4ccf-b1cf-f06608bf006a\") " pod="openstack/keystone-db-create-k84pd" Jan 29 08:08:44 crc kubenswrapper[4826]: I0129 08:08:44.748054 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a71cd30-8e95-4f1e-855a-53f82a51a03e-operator-scripts\") pod \"keystone-0d26-account-create-update-p7z5r\" (UID: \"4a71cd30-8e95-4f1e-855a-53f82a51a03e\") " pod="openstack/keystone-0d26-account-create-update-p7z5r" Jan 29 08:08:44 crc kubenswrapper[4826]: I0129 08:08:44.748487 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ee5ef9b-b0e3-4ccf-b1cf-f06608bf006a-operator-scripts\") pod \"keystone-db-create-k84pd\" (UID: \"1ee5ef9b-b0e3-4ccf-b1cf-f06608bf006a\") " pod="openstack/keystone-db-create-k84pd" Jan 29 08:08:44 crc kubenswrapper[4826]: I0129 08:08:44.768021 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8779l\" (UniqueName: \"kubernetes.io/projected/4a71cd30-8e95-4f1e-855a-53f82a51a03e-kube-api-access-8779l\") pod \"keystone-0d26-account-create-update-p7z5r\" (UID: \"4a71cd30-8e95-4f1e-855a-53f82a51a03e\") " pod="openstack/keystone-0d26-account-create-update-p7z5r" Jan 29 08:08:44 crc kubenswrapper[4826]: I0129 08:08:44.772456 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzxld\" 
(UniqueName: \"kubernetes.io/projected/1ee5ef9b-b0e3-4ccf-b1cf-f06608bf006a-kube-api-access-pzxld\") pod \"keystone-db-create-k84pd\" (UID: \"1ee5ef9b-b0e3-4ccf-b1cf-f06608bf006a\") " pod="openstack/keystone-db-create-k84pd" Jan 29 08:08:44 crc kubenswrapper[4826]: I0129 08:08:44.793320 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-k84pd" Jan 29 08:08:44 crc kubenswrapper[4826]: I0129 08:08:44.804978 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0d26-account-create-update-p7z5r" Jan 29 08:08:45 crc kubenswrapper[4826]: W0129 08:08:45.277037 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a71cd30_8e95_4f1e_855a_53f82a51a03e.slice/crio-74ef06c0687aa4cd23d036b48e70bbefc3f44ebd0db7ab3aaa75614b6fe78ac8 WatchSource:0}: Error finding container 74ef06c0687aa4cd23d036b48e70bbefc3f44ebd0db7ab3aaa75614b6fe78ac8: Status 404 returned error can't find the container with id 74ef06c0687aa4cd23d036b48e70bbefc3f44ebd0db7ab3aaa75614b6fe78ac8 Jan 29 08:08:45 crc kubenswrapper[4826]: I0129 08:08:45.283030 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0d26-account-create-update-p7z5r"] Jan 29 08:08:45 crc kubenswrapper[4826]: I0129 08:08:45.346159 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-k84pd"] Jan 29 08:08:45 crc kubenswrapper[4826]: W0129 08:08:45.369182 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ee5ef9b_b0e3_4ccf_b1cf_f06608bf006a.slice/crio-ee38f950c236fac31018ada9d3a02d9d91e88c4953301b057ad86b143af4573c WatchSource:0}: Error finding container ee38f950c236fac31018ada9d3a02d9d91e88c4953301b057ad86b143af4573c: Status 404 returned error can't find the container with id 
ee38f950c236fac31018ada9d3a02d9d91e88c4953301b057ad86b143af4573c Jan 29 08:08:45 crc kubenswrapper[4826]: I0129 08:08:45.463875 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-k84pd" event={"ID":"1ee5ef9b-b0e3-4ccf-b1cf-f06608bf006a","Type":"ContainerStarted","Data":"ee38f950c236fac31018ada9d3a02d9d91e88c4953301b057ad86b143af4573c"} Jan 29 08:08:45 crc kubenswrapper[4826]: I0129 08:08:45.469061 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0d26-account-create-update-p7z5r" event={"ID":"4a71cd30-8e95-4f1e-855a-53f82a51a03e","Type":"ContainerStarted","Data":"74ef06c0687aa4cd23d036b48e70bbefc3f44ebd0db7ab3aaa75614b6fe78ac8"} Jan 29 08:08:46 crc kubenswrapper[4826]: I0129 08:08:46.483083 4826 generic.go:334] "Generic (PLEG): container finished" podID="4a71cd30-8e95-4f1e-855a-53f82a51a03e" containerID="fb795335729c52f24cf4bb223d62fdfb32d743b67f9f61d228e47503000b66e7" exitCode=0 Jan 29 08:08:46 crc kubenswrapper[4826]: I0129 08:08:46.483531 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0d26-account-create-update-p7z5r" event={"ID":"4a71cd30-8e95-4f1e-855a-53f82a51a03e","Type":"ContainerDied","Data":"fb795335729c52f24cf4bb223d62fdfb32d743b67f9f61d228e47503000b66e7"} Jan 29 08:08:46 crc kubenswrapper[4826]: I0129 08:08:46.486121 4826 generic.go:334] "Generic (PLEG): container finished" podID="1ee5ef9b-b0e3-4ccf-b1cf-f06608bf006a" containerID="a24ae2fc987ff1fe9cd383552f2e623120b1c144e2c348d01f7ffa3cdf59db5c" exitCode=0 Jan 29 08:08:46 crc kubenswrapper[4826]: I0129 08:08:46.486183 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-k84pd" event={"ID":"1ee5ef9b-b0e3-4ccf-b1cf-f06608bf006a","Type":"ContainerDied","Data":"a24ae2fc987ff1fe9cd383552f2e623120b1c144e2c348d01f7ffa3cdf59db5c"} Jan 29 08:08:47 crc kubenswrapper[4826]: I0129 08:08:47.961425 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-k84pd" Jan 29 08:08:47 crc kubenswrapper[4826]: I0129 08:08:47.965334 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0d26-account-create-update-p7z5r" Jan 29 08:08:48 crc kubenswrapper[4826]: I0129 08:08:48.112499 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzxld\" (UniqueName: \"kubernetes.io/projected/1ee5ef9b-b0e3-4ccf-b1cf-f06608bf006a-kube-api-access-pzxld\") pod \"1ee5ef9b-b0e3-4ccf-b1cf-f06608bf006a\" (UID: \"1ee5ef9b-b0e3-4ccf-b1cf-f06608bf006a\") " Jan 29 08:08:48 crc kubenswrapper[4826]: I0129 08:08:48.112605 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8779l\" (UniqueName: \"kubernetes.io/projected/4a71cd30-8e95-4f1e-855a-53f82a51a03e-kube-api-access-8779l\") pod \"4a71cd30-8e95-4f1e-855a-53f82a51a03e\" (UID: \"4a71cd30-8e95-4f1e-855a-53f82a51a03e\") " Jan 29 08:08:48 crc kubenswrapper[4826]: I0129 08:08:48.112722 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ee5ef9b-b0e3-4ccf-b1cf-f06608bf006a-operator-scripts\") pod \"1ee5ef9b-b0e3-4ccf-b1cf-f06608bf006a\" (UID: \"1ee5ef9b-b0e3-4ccf-b1cf-f06608bf006a\") " Jan 29 08:08:48 crc kubenswrapper[4826]: I0129 08:08:48.112757 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a71cd30-8e95-4f1e-855a-53f82a51a03e-operator-scripts\") pod \"4a71cd30-8e95-4f1e-855a-53f82a51a03e\" (UID: \"4a71cd30-8e95-4f1e-855a-53f82a51a03e\") " Jan 29 08:08:48 crc kubenswrapper[4826]: I0129 08:08:48.114621 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a71cd30-8e95-4f1e-855a-53f82a51a03e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"4a71cd30-8e95-4f1e-855a-53f82a51a03e" (UID: "4a71cd30-8e95-4f1e-855a-53f82a51a03e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:08:48 crc kubenswrapper[4826]: I0129 08:08:48.115054 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ee5ef9b-b0e3-4ccf-b1cf-f06608bf006a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1ee5ef9b-b0e3-4ccf-b1cf-f06608bf006a" (UID: "1ee5ef9b-b0e3-4ccf-b1cf-f06608bf006a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:08:48 crc kubenswrapper[4826]: I0129 08:08:48.121667 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a71cd30-8e95-4f1e-855a-53f82a51a03e-kube-api-access-8779l" (OuterVolumeSpecName: "kube-api-access-8779l") pod "4a71cd30-8e95-4f1e-855a-53f82a51a03e" (UID: "4a71cd30-8e95-4f1e-855a-53f82a51a03e"). InnerVolumeSpecName "kube-api-access-8779l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:08:48 crc kubenswrapper[4826]: I0129 08:08:48.121972 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ee5ef9b-b0e3-4ccf-b1cf-f06608bf006a-kube-api-access-pzxld" (OuterVolumeSpecName: "kube-api-access-pzxld") pod "1ee5ef9b-b0e3-4ccf-b1cf-f06608bf006a" (UID: "1ee5ef9b-b0e3-4ccf-b1cf-f06608bf006a"). InnerVolumeSpecName "kube-api-access-pzxld". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:08:48 crc kubenswrapper[4826]: I0129 08:08:48.215234 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8779l\" (UniqueName: \"kubernetes.io/projected/4a71cd30-8e95-4f1e-855a-53f82a51a03e-kube-api-access-8779l\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:48 crc kubenswrapper[4826]: I0129 08:08:48.215266 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ee5ef9b-b0e3-4ccf-b1cf-f06608bf006a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:48 crc kubenswrapper[4826]: I0129 08:08:48.215277 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a71cd30-8e95-4f1e-855a-53f82a51a03e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:48 crc kubenswrapper[4826]: I0129 08:08:48.215286 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzxld\" (UniqueName: \"kubernetes.io/projected/1ee5ef9b-b0e3-4ccf-b1cf-f06608bf006a-kube-api-access-pzxld\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:48 crc kubenswrapper[4826]: I0129 08:08:48.507137 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-k84pd" Jan 29 08:08:48 crc kubenswrapper[4826]: I0129 08:08:48.507163 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-k84pd" event={"ID":"1ee5ef9b-b0e3-4ccf-b1cf-f06608bf006a","Type":"ContainerDied","Data":"ee38f950c236fac31018ada9d3a02d9d91e88c4953301b057ad86b143af4573c"} Jan 29 08:08:48 crc kubenswrapper[4826]: I0129 08:08:48.507528 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee38f950c236fac31018ada9d3a02d9d91e88c4953301b057ad86b143af4573c" Jan 29 08:08:48 crc kubenswrapper[4826]: I0129 08:08:48.509079 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0d26-account-create-update-p7z5r" event={"ID":"4a71cd30-8e95-4f1e-855a-53f82a51a03e","Type":"ContainerDied","Data":"74ef06c0687aa4cd23d036b48e70bbefc3f44ebd0db7ab3aaa75614b6fe78ac8"} Jan 29 08:08:48 crc kubenswrapper[4826]: I0129 08:08:48.509157 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74ef06c0687aa4cd23d036b48e70bbefc3f44ebd0db7ab3aaa75614b6fe78ac8" Jan 29 08:08:48 crc kubenswrapper[4826]: I0129 08:08:48.509183 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-0d26-account-create-update-p7z5r" Jan 29 08:08:49 crc kubenswrapper[4826]: I0129 08:08:49.948240 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-vxbg9"] Jan 29 08:08:49 crc kubenswrapper[4826]: E0129 08:08:49.949031 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a71cd30-8e95-4f1e-855a-53f82a51a03e" containerName="mariadb-account-create-update" Jan 29 08:08:49 crc kubenswrapper[4826]: I0129 08:08:49.949243 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a71cd30-8e95-4f1e-855a-53f82a51a03e" containerName="mariadb-account-create-update" Jan 29 08:08:49 crc kubenswrapper[4826]: E0129 08:08:49.949276 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ee5ef9b-b0e3-4ccf-b1cf-f06608bf006a" containerName="mariadb-database-create" Jan 29 08:08:49 crc kubenswrapper[4826]: I0129 08:08:49.949284 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ee5ef9b-b0e3-4ccf-b1cf-f06608bf006a" containerName="mariadb-database-create" Jan 29 08:08:49 crc kubenswrapper[4826]: I0129 08:08:49.949511 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a71cd30-8e95-4f1e-855a-53f82a51a03e" containerName="mariadb-account-create-update" Jan 29 08:08:49 crc kubenswrapper[4826]: I0129 08:08:49.949524 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ee5ef9b-b0e3-4ccf-b1cf-f06608bf006a" containerName="mariadb-database-create" Jan 29 08:08:49 crc kubenswrapper[4826]: I0129 08:08:49.950378 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-vxbg9" Jan 29 08:08:49 crc kubenswrapper[4826]: I0129 08:08:49.954786 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 08:08:49 crc kubenswrapper[4826]: I0129 08:08:49.954825 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 08:08:49 crc kubenswrapper[4826]: I0129 08:08:49.954826 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7qtgw" Jan 29 08:08:49 crc kubenswrapper[4826]: I0129 08:08:49.954827 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 08:08:49 crc kubenswrapper[4826]: I0129 08:08:49.971025 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-vxbg9"] Jan 29 08:08:50 crc kubenswrapper[4826]: I0129 08:08:50.047874 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d-combined-ca-bundle\") pod \"keystone-db-sync-vxbg9\" (UID: \"6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d\") " pod="openstack/keystone-db-sync-vxbg9" Jan 29 08:08:50 crc kubenswrapper[4826]: I0129 08:08:50.047935 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgskd\" (UniqueName: \"kubernetes.io/projected/6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d-kube-api-access-wgskd\") pod \"keystone-db-sync-vxbg9\" (UID: \"6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d\") " pod="openstack/keystone-db-sync-vxbg9" Jan 29 08:08:50 crc kubenswrapper[4826]: I0129 08:08:50.048023 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d-config-data\") pod \"keystone-db-sync-vxbg9\" (UID: 
\"6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d\") " pod="openstack/keystone-db-sync-vxbg9" Jan 29 08:08:50 crc kubenswrapper[4826]: I0129 08:08:50.149915 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgskd\" (UniqueName: \"kubernetes.io/projected/6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d-kube-api-access-wgskd\") pod \"keystone-db-sync-vxbg9\" (UID: \"6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d\") " pod="openstack/keystone-db-sync-vxbg9" Jan 29 08:08:50 crc kubenswrapper[4826]: I0129 08:08:50.150398 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d-config-data\") pod \"keystone-db-sync-vxbg9\" (UID: \"6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d\") " pod="openstack/keystone-db-sync-vxbg9" Jan 29 08:08:50 crc kubenswrapper[4826]: I0129 08:08:50.151556 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d-combined-ca-bundle\") pod \"keystone-db-sync-vxbg9\" (UID: \"6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d\") " pod="openstack/keystone-db-sync-vxbg9" Jan 29 08:08:50 crc kubenswrapper[4826]: I0129 08:08:50.158901 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d-config-data\") pod \"keystone-db-sync-vxbg9\" (UID: \"6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d\") " pod="openstack/keystone-db-sync-vxbg9" Jan 29 08:08:50 crc kubenswrapper[4826]: I0129 08:08:50.160435 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d-combined-ca-bundle\") pod \"keystone-db-sync-vxbg9\" (UID: \"6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d\") " pod="openstack/keystone-db-sync-vxbg9" Jan 29 08:08:50 crc kubenswrapper[4826]: 
I0129 08:08:50.174058 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgskd\" (UniqueName: \"kubernetes.io/projected/6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d-kube-api-access-wgskd\") pod \"keystone-db-sync-vxbg9\" (UID: \"6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d\") " pod="openstack/keystone-db-sync-vxbg9" Jan 29 08:08:50 crc kubenswrapper[4826]: I0129 08:08:50.288864 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-vxbg9" Jan 29 08:08:50 crc kubenswrapper[4826]: I0129 08:08:50.794577 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-vxbg9"] Jan 29 08:08:50 crc kubenswrapper[4826]: W0129 08:08:50.799517 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f15e2ae_56b9_40ec_9be5_0a3cb8d0389d.slice/crio-25b3b466a3c2332a6ba5c0cb846bdcde3799d31103e204ac2d386df421fac577 WatchSource:0}: Error finding container 25b3b466a3c2332a6ba5c0cb846bdcde3799d31103e204ac2d386df421fac577: Status 404 returned error can't find the container with id 25b3b466a3c2332a6ba5c0cb846bdcde3799d31103e204ac2d386df421fac577 Jan 29 08:08:51 crc kubenswrapper[4826]: I0129 08:08:51.546407 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vxbg9" event={"ID":"6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d","Type":"ContainerStarted","Data":"25b3b466a3c2332a6ba5c0cb846bdcde3799d31103e204ac2d386df421fac577"} Jan 29 08:08:56 crc kubenswrapper[4826]: I0129 08:08:56.598798 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vxbg9" event={"ID":"6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d","Type":"ContainerStarted","Data":"5bf889f117ff85de0f49ccce3f0b96a1728b7ac34206806117135b6d018e1826"} Jan 29 08:08:56 crc kubenswrapper[4826]: I0129 08:08:56.636442 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/keystone-db-sync-vxbg9" podStartSLOduration=2.761759865 podStartE2EDuration="7.636385423s" podCreationTimestamp="2026-01-29 08:08:49 +0000 UTC" firstStartedPulling="2026-01-29 08:08:50.801852442 +0000 UTC m=+5114.663645511" lastFinishedPulling="2026-01-29 08:08:55.67647799 +0000 UTC m=+5119.538271069" observedRunningTime="2026-01-29 08:08:56.63282641 +0000 UTC m=+5120.494619509" watchObservedRunningTime="2026-01-29 08:08:56.636385423 +0000 UTC m=+5120.498178522" Jan 29 08:08:57 crc kubenswrapper[4826]: I0129 08:08:57.611400 4826 generic.go:334] "Generic (PLEG): container finished" podID="6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d" containerID="5bf889f117ff85de0f49ccce3f0b96a1728b7ac34206806117135b6d018e1826" exitCode=0 Jan 29 08:08:57 crc kubenswrapper[4826]: I0129 08:08:57.611587 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vxbg9" event={"ID":"6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d","Type":"ContainerDied","Data":"5bf889f117ff85de0f49ccce3f0b96a1728b7ac34206806117135b6d018e1826"} Jan 29 08:08:59 crc kubenswrapper[4826]: I0129 08:08:59.034046 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-vxbg9" Jan 29 08:08:59 crc kubenswrapper[4826]: I0129 08:08:59.132776 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d-combined-ca-bundle\") pod \"6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d\" (UID: \"6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d\") " Jan 29 08:08:59 crc kubenswrapper[4826]: I0129 08:08:59.132929 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d-config-data\") pod \"6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d\" (UID: \"6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d\") " Jan 29 08:08:59 crc kubenswrapper[4826]: I0129 08:08:59.133157 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgskd\" (UniqueName: \"kubernetes.io/projected/6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d-kube-api-access-wgskd\") pod \"6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d\" (UID: \"6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d\") " Jan 29 08:08:59 crc kubenswrapper[4826]: I0129 08:08:59.138015 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d-kube-api-access-wgskd" (OuterVolumeSpecName: "kube-api-access-wgskd") pod "6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d" (UID: "6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d"). InnerVolumeSpecName "kube-api-access-wgskd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:08:59 crc kubenswrapper[4826]: I0129 08:08:59.177367 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d" (UID: "6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:08:59 crc kubenswrapper[4826]: I0129 08:08:59.185229 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d-config-data" (OuterVolumeSpecName: "config-data") pod "6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d" (UID: "6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:08:59 crc kubenswrapper[4826]: I0129 08:08:59.236181 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:59 crc kubenswrapper[4826]: I0129 08:08:59.236359 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:59 crc kubenswrapper[4826]: I0129 08:08:59.236386 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgskd\" (UniqueName: \"kubernetes.io/projected/6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d-kube-api-access-wgskd\") on node \"crc\" DevicePath \"\"" Jan 29 08:08:59 crc kubenswrapper[4826]: I0129 08:08:59.540897 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 29 08:08:59 crc kubenswrapper[4826]: I0129 08:08:59.632099 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vxbg9" event={"ID":"6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d","Type":"ContainerDied","Data":"25b3b466a3c2332a6ba5c0cb846bdcde3799d31103e204ac2d386df421fac577"} Jan 29 08:08:59 crc kubenswrapper[4826]: I0129 08:08:59.632140 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25b3b466a3c2332a6ba5c0cb846bdcde3799d31103e204ac2d386df421fac577" Jan 29 08:08:59 
crc kubenswrapper[4826]: I0129 08:08:59.632207 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-vxbg9" Jan 29 08:08:59 crc kubenswrapper[4826]: I0129 08:08:59.852647 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fb84b94d5-6m96b"] Jan 29 08:08:59 crc kubenswrapper[4826]: E0129 08:08:59.852993 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d" containerName="keystone-db-sync" Jan 29 08:08:59 crc kubenswrapper[4826]: I0129 08:08:59.853010 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d" containerName="keystone-db-sync" Jan 29 08:08:59 crc kubenswrapper[4826]: I0129 08:08:59.853744 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d" containerName="keystone-db-sync" Jan 29 08:08:59 crc kubenswrapper[4826]: I0129 08:08:59.856200 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fb84b94d5-6m96b" Jan 29 08:08:59 crc kubenswrapper[4826]: I0129 08:08:59.866655 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fb84b94d5-6m96b"] Jan 29 08:08:59 crc kubenswrapper[4826]: I0129 08:08:59.908890 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-jj4gn"] Jan 29 08:08:59 crc kubenswrapper[4826]: I0129 08:08:59.910116 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jj4gn" Jan 29 08:08:59 crc kubenswrapper[4826]: I0129 08:08:59.914965 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 08:08:59 crc kubenswrapper[4826]: I0129 08:08:59.915616 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 08:08:59 crc kubenswrapper[4826]: I0129 08:08:59.915828 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 29 08:08:59 crc kubenswrapper[4826]: I0129 08:08:59.916071 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 08:08:59 crc kubenswrapper[4826]: I0129 08:08:59.916292 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7qtgw" Jan 29 08:08:59 crc kubenswrapper[4826]: I0129 08:08:59.929629 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jj4gn"] Jan 29 08:08:59 crc kubenswrapper[4826]: I0129 08:08:59.951948 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f7bfb3f3-701c-4b2a-a145-67de08df6dba-ovsdbserver-nb\") pod \"dnsmasq-dns-7fb84b94d5-6m96b\" (UID: \"f7bfb3f3-701c-4b2a-a145-67de08df6dba\") " pod="openstack/dnsmasq-dns-7fb84b94d5-6m96b" Jan 29 08:08:59 crc kubenswrapper[4826]: I0129 08:08:59.951994 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f7bfb3f3-701c-4b2a-a145-67de08df6dba-ovsdbserver-sb\") pod \"dnsmasq-dns-7fb84b94d5-6m96b\" (UID: \"f7bfb3f3-701c-4b2a-a145-67de08df6dba\") " pod="openstack/dnsmasq-dns-7fb84b94d5-6m96b" Jan 29 08:08:59 crc kubenswrapper[4826]: I0129 08:08:59.952044 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7bfb3f3-701c-4b2a-a145-67de08df6dba-dns-svc\") pod \"dnsmasq-dns-7fb84b94d5-6m96b\" (UID: \"f7bfb3f3-701c-4b2a-a145-67de08df6dba\") " pod="openstack/dnsmasq-dns-7fb84b94d5-6m96b" Jan 29 08:08:59 crc kubenswrapper[4826]: I0129 08:08:59.952136 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fhjb\" (UniqueName: \"kubernetes.io/projected/f7bfb3f3-701c-4b2a-a145-67de08df6dba-kube-api-access-8fhjb\") pod \"dnsmasq-dns-7fb84b94d5-6m96b\" (UID: \"f7bfb3f3-701c-4b2a-a145-67de08df6dba\") " pod="openstack/dnsmasq-dns-7fb84b94d5-6m96b" Jan 29 08:08:59 crc kubenswrapper[4826]: I0129 08:08:59.952194 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7bfb3f3-701c-4b2a-a145-67de08df6dba-config\") pod \"dnsmasq-dns-7fb84b94d5-6m96b\" (UID: \"f7bfb3f3-701c-4b2a-a145-67de08df6dba\") " pod="openstack/dnsmasq-dns-7fb84b94d5-6m96b" Jan 29 08:09:00 crc kubenswrapper[4826]: I0129 08:09:00.053567 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56n77\" (UniqueName: \"kubernetes.io/projected/db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c-kube-api-access-56n77\") pod \"keystone-bootstrap-jj4gn\" (UID: \"db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c\") " pod="openstack/keystone-bootstrap-jj4gn" Jan 29 08:09:00 crc kubenswrapper[4826]: I0129 08:09:00.053634 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fhjb\" (UniqueName: \"kubernetes.io/projected/f7bfb3f3-701c-4b2a-a145-67de08df6dba-kube-api-access-8fhjb\") pod \"dnsmasq-dns-7fb84b94d5-6m96b\" (UID: \"f7bfb3f3-701c-4b2a-a145-67de08df6dba\") " pod="openstack/dnsmasq-dns-7fb84b94d5-6m96b" Jan 29 08:09:00 crc kubenswrapper[4826]: I0129 08:09:00.053676 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c-config-data\") pod \"keystone-bootstrap-jj4gn\" (UID: \"db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c\") " pod="openstack/keystone-bootstrap-jj4gn" Jan 29 08:09:00 crc kubenswrapper[4826]: I0129 08:09:00.053722 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7bfb3f3-701c-4b2a-a145-67de08df6dba-config\") pod \"dnsmasq-dns-7fb84b94d5-6m96b\" (UID: \"f7bfb3f3-701c-4b2a-a145-67de08df6dba\") " pod="openstack/dnsmasq-dns-7fb84b94d5-6m96b" Jan 29 08:09:00 crc kubenswrapper[4826]: I0129 08:09:00.053747 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c-combined-ca-bundle\") pod \"keystone-bootstrap-jj4gn\" (UID: \"db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c\") " pod="openstack/keystone-bootstrap-jj4gn" Jan 29 08:09:00 crc kubenswrapper[4826]: I0129 08:09:00.053766 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f7bfb3f3-701c-4b2a-a145-67de08df6dba-ovsdbserver-nb\") pod \"dnsmasq-dns-7fb84b94d5-6m96b\" (UID: \"f7bfb3f3-701c-4b2a-a145-67de08df6dba\") " pod="openstack/dnsmasq-dns-7fb84b94d5-6m96b" Jan 29 08:09:00 crc kubenswrapper[4826]: I0129 08:09:00.053784 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f7bfb3f3-701c-4b2a-a145-67de08df6dba-ovsdbserver-sb\") pod \"dnsmasq-dns-7fb84b94d5-6m96b\" (UID: \"f7bfb3f3-701c-4b2a-a145-67de08df6dba\") " pod="openstack/dnsmasq-dns-7fb84b94d5-6m96b" Jan 29 08:09:00 crc kubenswrapper[4826]: I0129 08:09:00.053928 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c-scripts\") pod \"keystone-bootstrap-jj4gn\" (UID: \"db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c\") " pod="openstack/keystone-bootstrap-jj4gn" Jan 29 08:09:00 crc kubenswrapper[4826]: I0129 08:09:00.054003 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c-fernet-keys\") pod \"keystone-bootstrap-jj4gn\" (UID: \"db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c\") " pod="openstack/keystone-bootstrap-jj4gn" Jan 29 08:09:00 crc kubenswrapper[4826]: I0129 08:09:00.054035 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7bfb3f3-701c-4b2a-a145-67de08df6dba-dns-svc\") pod \"dnsmasq-dns-7fb84b94d5-6m96b\" (UID: \"f7bfb3f3-701c-4b2a-a145-67de08df6dba\") " pod="openstack/dnsmasq-dns-7fb84b94d5-6m96b" Jan 29 08:09:00 crc kubenswrapper[4826]: I0129 08:09:00.054084 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c-credential-keys\") pod \"keystone-bootstrap-jj4gn\" (UID: \"db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c\") " pod="openstack/keystone-bootstrap-jj4gn" Jan 29 08:09:00 crc kubenswrapper[4826]: I0129 08:09:00.054569 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7bfb3f3-701c-4b2a-a145-67de08df6dba-config\") pod \"dnsmasq-dns-7fb84b94d5-6m96b\" (UID: \"f7bfb3f3-701c-4b2a-a145-67de08df6dba\") " pod="openstack/dnsmasq-dns-7fb84b94d5-6m96b" Jan 29 08:09:00 crc kubenswrapper[4826]: I0129 08:09:00.054605 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/f7bfb3f3-701c-4b2a-a145-67de08df6dba-ovsdbserver-sb\") pod \"dnsmasq-dns-7fb84b94d5-6m96b\" (UID: \"f7bfb3f3-701c-4b2a-a145-67de08df6dba\") " pod="openstack/dnsmasq-dns-7fb84b94d5-6m96b" Jan 29 08:09:00 crc kubenswrapper[4826]: I0129 08:09:00.054928 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7bfb3f3-701c-4b2a-a145-67de08df6dba-dns-svc\") pod \"dnsmasq-dns-7fb84b94d5-6m96b\" (UID: \"f7bfb3f3-701c-4b2a-a145-67de08df6dba\") " pod="openstack/dnsmasq-dns-7fb84b94d5-6m96b" Jan 29 08:09:00 crc kubenswrapper[4826]: I0129 08:09:00.055357 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f7bfb3f3-701c-4b2a-a145-67de08df6dba-ovsdbserver-nb\") pod \"dnsmasq-dns-7fb84b94d5-6m96b\" (UID: \"f7bfb3f3-701c-4b2a-a145-67de08df6dba\") " pod="openstack/dnsmasq-dns-7fb84b94d5-6m96b" Jan 29 08:09:00 crc kubenswrapper[4826]: I0129 08:09:00.093592 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fhjb\" (UniqueName: \"kubernetes.io/projected/f7bfb3f3-701c-4b2a-a145-67de08df6dba-kube-api-access-8fhjb\") pod \"dnsmasq-dns-7fb84b94d5-6m96b\" (UID: \"f7bfb3f3-701c-4b2a-a145-67de08df6dba\") " pod="openstack/dnsmasq-dns-7fb84b94d5-6m96b" Jan 29 08:09:00 crc kubenswrapper[4826]: I0129 08:09:00.156446 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c-config-data\") pod \"keystone-bootstrap-jj4gn\" (UID: \"db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c\") " pod="openstack/keystone-bootstrap-jj4gn" Jan 29 08:09:00 crc kubenswrapper[4826]: I0129 08:09:00.156940 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c-combined-ca-bundle\") pod 
\"keystone-bootstrap-jj4gn\" (UID: \"db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c\") " pod="openstack/keystone-bootstrap-jj4gn" Jan 29 08:09:00 crc kubenswrapper[4826]: I0129 08:09:00.157024 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c-scripts\") pod \"keystone-bootstrap-jj4gn\" (UID: \"db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c\") " pod="openstack/keystone-bootstrap-jj4gn" Jan 29 08:09:00 crc kubenswrapper[4826]: I0129 08:09:00.157075 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c-fernet-keys\") pod \"keystone-bootstrap-jj4gn\" (UID: \"db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c\") " pod="openstack/keystone-bootstrap-jj4gn" Jan 29 08:09:00 crc kubenswrapper[4826]: I0129 08:09:00.157133 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c-credential-keys\") pod \"keystone-bootstrap-jj4gn\" (UID: \"db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c\") " pod="openstack/keystone-bootstrap-jj4gn" Jan 29 08:09:00 crc kubenswrapper[4826]: I0129 08:09:00.157272 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56n77\" (UniqueName: \"kubernetes.io/projected/db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c-kube-api-access-56n77\") pod \"keystone-bootstrap-jj4gn\" (UID: \"db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c\") " pod="openstack/keystone-bootstrap-jj4gn" Jan 29 08:09:00 crc kubenswrapper[4826]: I0129 08:09:00.162924 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c-credential-keys\") pod \"keystone-bootstrap-jj4gn\" (UID: \"db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c\") " 
pod="openstack/keystone-bootstrap-jj4gn" Jan 29 08:09:00 crc kubenswrapper[4826]: I0129 08:09:00.163129 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c-config-data\") pod \"keystone-bootstrap-jj4gn\" (UID: \"db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c\") " pod="openstack/keystone-bootstrap-jj4gn" Jan 29 08:09:00 crc kubenswrapper[4826]: I0129 08:09:00.163390 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c-scripts\") pod \"keystone-bootstrap-jj4gn\" (UID: \"db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c\") " pod="openstack/keystone-bootstrap-jj4gn" Jan 29 08:09:00 crc kubenswrapper[4826]: I0129 08:09:00.164521 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c-fernet-keys\") pod \"keystone-bootstrap-jj4gn\" (UID: \"db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c\") " pod="openstack/keystone-bootstrap-jj4gn" Jan 29 08:09:00 crc kubenswrapper[4826]: I0129 08:09:00.167878 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c-combined-ca-bundle\") pod \"keystone-bootstrap-jj4gn\" (UID: \"db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c\") " pod="openstack/keystone-bootstrap-jj4gn" Jan 29 08:09:00 crc kubenswrapper[4826]: I0129 08:09:00.179861 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56n77\" (UniqueName: \"kubernetes.io/projected/db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c-kube-api-access-56n77\") pod \"keystone-bootstrap-jj4gn\" (UID: \"db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c\") " pod="openstack/keystone-bootstrap-jj4gn" Jan 29 08:09:00 crc kubenswrapper[4826]: I0129 08:09:00.185780 4826 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-7fb84b94d5-6m96b" Jan 29 08:09:00 crc kubenswrapper[4826]: I0129 08:09:00.230457 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jj4gn" Jan 29 08:09:00 crc kubenswrapper[4826]: I0129 08:09:00.763642 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fb84b94d5-6m96b"] Jan 29 08:09:00 crc kubenswrapper[4826]: W0129 08:09:00.779877 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7bfb3f3_701c_4b2a_a145_67de08df6dba.slice/crio-0a8f66b49b15983cad68867079f080cebc28630f71c4a71c3dc936259c8121bf WatchSource:0}: Error finding container 0a8f66b49b15983cad68867079f080cebc28630f71c4a71c3dc936259c8121bf: Status 404 returned error can't find the container with id 0a8f66b49b15983cad68867079f080cebc28630f71c4a71c3dc936259c8121bf Jan 29 08:09:00 crc kubenswrapper[4826]: I0129 08:09:00.886805 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jj4gn"] Jan 29 08:09:00 crc kubenswrapper[4826]: W0129 08:09:00.892186 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb2675d6_8c0b_42d1_b8a6_3d8ca15d4d1c.slice/crio-f74e3a8809aba0a269728d61e04378152721d51461242c5a71e7bb29a8bb1c9d WatchSource:0}: Error finding container f74e3a8809aba0a269728d61e04378152721d51461242c5a71e7bb29a8bb1c9d: Status 404 returned error can't find the container with id f74e3a8809aba0a269728d61e04378152721d51461242c5a71e7bb29a8bb1c9d Jan 29 08:09:01 crc kubenswrapper[4826]: I0129 08:09:01.650042 4826 generic.go:334] "Generic (PLEG): container finished" podID="f7bfb3f3-701c-4b2a-a145-67de08df6dba" containerID="e9d01d82a1f14d183252dcbb715ff6a344d08d92f037c1578fbbcceb138cb332" exitCode=0 Jan 29 08:09:01 crc kubenswrapper[4826]: I0129 08:09:01.650103 4826 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fb84b94d5-6m96b" event={"ID":"f7bfb3f3-701c-4b2a-a145-67de08df6dba","Type":"ContainerDied","Data":"e9d01d82a1f14d183252dcbb715ff6a344d08d92f037c1578fbbcceb138cb332"} Jan 29 08:09:01 crc kubenswrapper[4826]: I0129 08:09:01.650413 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fb84b94d5-6m96b" event={"ID":"f7bfb3f3-701c-4b2a-a145-67de08df6dba","Type":"ContainerStarted","Data":"0a8f66b49b15983cad68867079f080cebc28630f71c4a71c3dc936259c8121bf"} Jan 29 08:09:01 crc kubenswrapper[4826]: I0129 08:09:01.659501 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jj4gn" event={"ID":"db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c","Type":"ContainerStarted","Data":"b46d35339a20cc1923f1e8ae7dbc8c2546b07b7a89a8cab68e5f6a9c94110af4"} Jan 29 08:09:01 crc kubenswrapper[4826]: I0129 08:09:01.659552 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jj4gn" event={"ID":"db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c","Type":"ContainerStarted","Data":"f74e3a8809aba0a269728d61e04378152721d51461242c5a71e7bb29a8bb1c9d"} Jan 29 08:09:01 crc kubenswrapper[4826]: I0129 08:09:01.714623 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-jj4gn" podStartSLOduration=2.714605342 podStartE2EDuration="2.714605342s" podCreationTimestamp="2026-01-29 08:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:09:01.70692063 +0000 UTC m=+5125.568713709" watchObservedRunningTime="2026-01-29 08:09:01.714605342 +0000 UTC m=+5125.576398411" Jan 29 08:09:02 crc kubenswrapper[4826]: I0129 08:09:02.670381 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fb84b94d5-6m96b" 
event={"ID":"f7bfb3f3-701c-4b2a-a145-67de08df6dba","Type":"ContainerStarted","Data":"22082cb5616c57d5ccd12c69b1679e2429b542d7709c60a130e16e3f9f322e7b"} Jan 29 08:09:02 crc kubenswrapper[4826]: I0129 08:09:02.670734 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fb84b94d5-6m96b" Jan 29 08:09:02 crc kubenswrapper[4826]: I0129 08:09:02.706327 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fb84b94d5-6m96b" podStartSLOduration=3.706284139 podStartE2EDuration="3.706284139s" podCreationTimestamp="2026-01-29 08:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:09:02.697501438 +0000 UTC m=+5126.559294517" watchObservedRunningTime="2026-01-29 08:09:02.706284139 +0000 UTC m=+5126.568077208" Jan 29 08:09:04 crc kubenswrapper[4826]: I0129 08:09:04.694716 4826 generic.go:334] "Generic (PLEG): container finished" podID="db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c" containerID="b46d35339a20cc1923f1e8ae7dbc8c2546b07b7a89a8cab68e5f6a9c94110af4" exitCode=0 Jan 29 08:09:04 crc kubenswrapper[4826]: I0129 08:09:04.694807 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jj4gn" event={"ID":"db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c","Type":"ContainerDied","Data":"b46d35339a20cc1923f1e8ae7dbc8c2546b07b7a89a8cab68e5f6a9c94110af4"} Jan 29 08:09:05 crc kubenswrapper[4826]: I0129 08:09:05.656189 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:09:05 crc kubenswrapper[4826]: I0129 08:09:05.656293 4826 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:09:06 crc kubenswrapper[4826]: I0129 08:09:06.173192 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jj4gn" Jan 29 08:09:06 crc kubenswrapper[4826]: I0129 08:09:06.287444 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c-combined-ca-bundle\") pod \"db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c\" (UID: \"db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c\") " Jan 29 08:09:06 crc kubenswrapper[4826]: I0129 08:09:06.287571 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56n77\" (UniqueName: \"kubernetes.io/projected/db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c-kube-api-access-56n77\") pod \"db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c\" (UID: \"db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c\") " Jan 29 08:09:06 crc kubenswrapper[4826]: I0129 08:09:06.287646 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c-scripts\") pod \"db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c\" (UID: \"db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c\") " Jan 29 08:09:06 crc kubenswrapper[4826]: I0129 08:09:06.287730 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c-credential-keys\") pod \"db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c\" (UID: \"db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c\") " Jan 29 08:09:06 crc kubenswrapper[4826]: I0129 08:09:06.287764 4826 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c-fernet-keys\") pod \"db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c\" (UID: \"db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c\") " Jan 29 08:09:06 crc kubenswrapper[4826]: I0129 08:09:06.287840 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c-config-data\") pod \"db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c\" (UID: \"db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c\") " Jan 29 08:09:06 crc kubenswrapper[4826]: I0129 08:09:06.299500 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c" (UID: "db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:09:06 crc kubenswrapper[4826]: I0129 08:09:06.299552 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c-kube-api-access-56n77" (OuterVolumeSpecName: "kube-api-access-56n77") pod "db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c" (UID: "db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c"). InnerVolumeSpecName "kube-api-access-56n77". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:09:06 crc kubenswrapper[4826]: I0129 08:09:06.299559 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c-scripts" (OuterVolumeSpecName: "scripts") pod "db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c" (UID: "db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:09:06 crc kubenswrapper[4826]: I0129 08:09:06.308253 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c" (UID: "db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:09:06 crc kubenswrapper[4826]: I0129 08:09:06.314037 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c-config-data" (OuterVolumeSpecName: "config-data") pod "db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c" (UID: "db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:09:06 crc kubenswrapper[4826]: I0129 08:09:06.331779 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c" (UID: "db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:09:06 crc kubenswrapper[4826]: I0129 08:09:06.390190 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:09:06 crc kubenswrapper[4826]: I0129 08:09:06.390231 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56n77\" (UniqueName: \"kubernetes.io/projected/db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c-kube-api-access-56n77\") on node \"crc\" DevicePath \"\"" Jan 29 08:09:06 crc kubenswrapper[4826]: I0129 08:09:06.390246 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:09:06 crc kubenswrapper[4826]: I0129 08:09:06.390258 4826 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 29 08:09:06 crc kubenswrapper[4826]: I0129 08:09:06.390269 4826 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 08:09:06 crc kubenswrapper[4826]: I0129 08:09:06.390280 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:09:06 crc kubenswrapper[4826]: I0129 08:09:06.716370 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jj4gn" event={"ID":"db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c","Type":"ContainerDied","Data":"f74e3a8809aba0a269728d61e04378152721d51461242c5a71e7bb29a8bb1c9d"} Jan 29 08:09:06 crc kubenswrapper[4826]: I0129 
08:09:06.716432 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f74e3a8809aba0a269728d61e04378152721d51461242c5a71e7bb29a8bb1c9d" Jan 29 08:09:06 crc kubenswrapper[4826]: I0129 08:09:06.716908 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jj4gn" Jan 29 08:09:06 crc kubenswrapper[4826]: I0129 08:09:06.857814 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-jj4gn"] Jan 29 08:09:06 crc kubenswrapper[4826]: I0129 08:09:06.873635 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-jj4gn"] Jan 29 08:09:06 crc kubenswrapper[4826]: I0129 08:09:06.897962 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-c9d8z"] Jan 29 08:09:06 crc kubenswrapper[4826]: E0129 08:09:06.898452 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c" containerName="keystone-bootstrap" Jan 29 08:09:06 crc kubenswrapper[4826]: I0129 08:09:06.898468 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c" containerName="keystone-bootstrap" Jan 29 08:09:06 crc kubenswrapper[4826]: I0129 08:09:06.898670 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c" containerName="keystone-bootstrap" Jan 29 08:09:06 crc kubenswrapper[4826]: I0129 08:09:06.899344 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-c9d8z" Jan 29 08:09:06 crc kubenswrapper[4826]: I0129 08:09:06.902710 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7qtgw" Jan 29 08:09:06 crc kubenswrapper[4826]: I0129 08:09:06.902863 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 08:09:06 crc kubenswrapper[4826]: I0129 08:09:06.902910 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 29 08:09:06 crc kubenswrapper[4826]: I0129 08:09:06.903329 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 08:09:06 crc kubenswrapper[4826]: I0129 08:09:06.903521 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 08:09:06 crc kubenswrapper[4826]: I0129 08:09:06.906965 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-c9d8z"] Jan 29 08:09:07 crc kubenswrapper[4826]: I0129 08:09:07.002735 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/78e6de92-9f55-4628-97aa-cb6c36a92332-credential-keys\") pod \"keystone-bootstrap-c9d8z\" (UID: \"78e6de92-9f55-4628-97aa-cb6c36a92332\") " pod="openstack/keystone-bootstrap-c9d8z" Jan 29 08:09:07 crc kubenswrapper[4826]: I0129 08:09:07.002827 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqphl\" (UniqueName: \"kubernetes.io/projected/78e6de92-9f55-4628-97aa-cb6c36a92332-kube-api-access-qqphl\") pod \"keystone-bootstrap-c9d8z\" (UID: \"78e6de92-9f55-4628-97aa-cb6c36a92332\") " pod="openstack/keystone-bootstrap-c9d8z" Jan 29 08:09:07 crc kubenswrapper[4826]: I0129 08:09:07.002870 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/78e6de92-9f55-4628-97aa-cb6c36a92332-fernet-keys\") pod \"keystone-bootstrap-c9d8z\" (UID: \"78e6de92-9f55-4628-97aa-cb6c36a92332\") " pod="openstack/keystone-bootstrap-c9d8z" Jan 29 08:09:07 crc kubenswrapper[4826]: I0129 08:09:07.002912 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78e6de92-9f55-4628-97aa-cb6c36a92332-config-data\") pod \"keystone-bootstrap-c9d8z\" (UID: \"78e6de92-9f55-4628-97aa-cb6c36a92332\") " pod="openstack/keystone-bootstrap-c9d8z" Jan 29 08:09:07 crc kubenswrapper[4826]: I0129 08:09:07.002935 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e6de92-9f55-4628-97aa-cb6c36a92332-combined-ca-bundle\") pod \"keystone-bootstrap-c9d8z\" (UID: \"78e6de92-9f55-4628-97aa-cb6c36a92332\") " pod="openstack/keystone-bootstrap-c9d8z" Jan 29 08:09:07 crc kubenswrapper[4826]: I0129 08:09:07.002972 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78e6de92-9f55-4628-97aa-cb6c36a92332-scripts\") pod \"keystone-bootstrap-c9d8z\" (UID: \"78e6de92-9f55-4628-97aa-cb6c36a92332\") " pod="openstack/keystone-bootstrap-c9d8z" Jan 29 08:09:07 crc kubenswrapper[4826]: I0129 08:09:07.104230 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78e6de92-9f55-4628-97aa-cb6c36a92332-scripts\") pod \"keystone-bootstrap-c9d8z\" (UID: \"78e6de92-9f55-4628-97aa-cb6c36a92332\") " pod="openstack/keystone-bootstrap-c9d8z" Jan 29 08:09:07 crc kubenswrapper[4826]: I0129 08:09:07.104454 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/78e6de92-9f55-4628-97aa-cb6c36a92332-credential-keys\") pod \"keystone-bootstrap-c9d8z\" (UID: \"78e6de92-9f55-4628-97aa-cb6c36a92332\") " pod="openstack/keystone-bootstrap-c9d8z"
Jan 29 08:09:07 crc kubenswrapper[4826]: I0129 08:09:07.104581 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqphl\" (UniqueName: \"kubernetes.io/projected/78e6de92-9f55-4628-97aa-cb6c36a92332-kube-api-access-qqphl\") pod \"keystone-bootstrap-c9d8z\" (UID: \"78e6de92-9f55-4628-97aa-cb6c36a92332\") " pod="openstack/keystone-bootstrap-c9d8z"
Jan 29 08:09:07 crc kubenswrapper[4826]: I0129 08:09:07.104666 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/78e6de92-9f55-4628-97aa-cb6c36a92332-fernet-keys\") pod \"keystone-bootstrap-c9d8z\" (UID: \"78e6de92-9f55-4628-97aa-cb6c36a92332\") " pod="openstack/keystone-bootstrap-c9d8z"
Jan 29 08:09:07 crc kubenswrapper[4826]: I0129 08:09:07.104747 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78e6de92-9f55-4628-97aa-cb6c36a92332-config-data\") pod \"keystone-bootstrap-c9d8z\" (UID: \"78e6de92-9f55-4628-97aa-cb6c36a92332\") " pod="openstack/keystone-bootstrap-c9d8z"
Jan 29 08:09:07 crc kubenswrapper[4826]: I0129 08:09:07.104788 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e6de92-9f55-4628-97aa-cb6c36a92332-combined-ca-bundle\") pod \"keystone-bootstrap-c9d8z\" (UID: \"78e6de92-9f55-4628-97aa-cb6c36a92332\") " pod="openstack/keystone-bootstrap-c9d8z"
Jan 29 08:09:07 crc kubenswrapper[4826]: I0129 08:09:07.110092 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78e6de92-9f55-4628-97aa-cb6c36a92332-config-data\") pod \"keystone-bootstrap-c9d8z\" (UID: \"78e6de92-9f55-4628-97aa-cb6c36a92332\") " pod="openstack/keystone-bootstrap-c9d8z"
Jan 29 08:09:07 crc kubenswrapper[4826]: I0129 08:09:07.114655 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/78e6de92-9f55-4628-97aa-cb6c36a92332-credential-keys\") pod \"keystone-bootstrap-c9d8z\" (UID: \"78e6de92-9f55-4628-97aa-cb6c36a92332\") " pod="openstack/keystone-bootstrap-c9d8z"
Jan 29 08:09:07 crc kubenswrapper[4826]: I0129 08:09:07.114863 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/78e6de92-9f55-4628-97aa-cb6c36a92332-fernet-keys\") pod \"keystone-bootstrap-c9d8z\" (UID: \"78e6de92-9f55-4628-97aa-cb6c36a92332\") " pod="openstack/keystone-bootstrap-c9d8z"
Jan 29 08:09:07 crc kubenswrapper[4826]: I0129 08:09:07.116029 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e6de92-9f55-4628-97aa-cb6c36a92332-combined-ca-bundle\") pod \"keystone-bootstrap-c9d8z\" (UID: \"78e6de92-9f55-4628-97aa-cb6c36a92332\") " pod="openstack/keystone-bootstrap-c9d8z"
Jan 29 08:09:07 crc kubenswrapper[4826]: I0129 08:09:07.122528 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78e6de92-9f55-4628-97aa-cb6c36a92332-scripts\") pod \"keystone-bootstrap-c9d8z\" (UID: \"78e6de92-9f55-4628-97aa-cb6c36a92332\") " pod="openstack/keystone-bootstrap-c9d8z"
Jan 29 08:09:07 crc kubenswrapper[4826]: I0129 08:09:07.133044 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqphl\" (UniqueName: \"kubernetes.io/projected/78e6de92-9f55-4628-97aa-cb6c36a92332-kube-api-access-qqphl\") pod \"keystone-bootstrap-c9d8z\" (UID: \"78e6de92-9f55-4628-97aa-cb6c36a92332\") " pod="openstack/keystone-bootstrap-c9d8z"
Jan 29 08:09:07 crc kubenswrapper[4826]: I0129 08:09:07.226488 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-c9d8z"
Jan 29 08:09:07 crc kubenswrapper[4826]: I0129 08:09:07.715313 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-c9d8z"]
Jan 29 08:09:07 crc kubenswrapper[4826]: W0129 08:09:07.719618 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78e6de92_9f55_4628_97aa_cb6c36a92332.slice/crio-9a594754f240a57e4bfce0ca5a6bb074c19cf02854db128da0506eccaa1855bf WatchSource:0}: Error finding container 9a594754f240a57e4bfce0ca5a6bb074c19cf02854db128da0506eccaa1855bf: Status 404 returned error can't find the container with id 9a594754f240a57e4bfce0ca5a6bb074c19cf02854db128da0506eccaa1855bf
Jan 29 08:09:08 crc kubenswrapper[4826]: I0129 08:09:08.736987 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-c9d8z" event={"ID":"78e6de92-9f55-4628-97aa-cb6c36a92332","Type":"ContainerStarted","Data":"f8f0e5efba82aa7a5c9f1d86b0db42fb7a60d14c1424c55901acf2deb7c3bcd5"}
Jan 29 08:09:08 crc kubenswrapper[4826]: I0129 08:09:08.737472 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-c9d8z" event={"ID":"78e6de92-9f55-4628-97aa-cb6c36a92332","Type":"ContainerStarted","Data":"9a594754f240a57e4bfce0ca5a6bb074c19cf02854db128da0506eccaa1855bf"}
Jan 29 08:09:08 crc kubenswrapper[4826]: I0129 08:09:08.832565 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c" path="/var/lib/kubelet/pods/db2675d6-8c0b-42d1-b8a6-3d8ca15d4d1c/volumes"
Jan 29 08:09:10 crc kubenswrapper[4826]: I0129 08:09:10.188451 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fb84b94d5-6m96b"
Jan 29 08:09:10 crc kubenswrapper[4826]: I0129 08:09:10.224238 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-c9d8z" podStartSLOduration=4.224221946 podStartE2EDuration="4.224221946s" podCreationTimestamp="2026-01-29 08:09:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:09:08.768750382 +0000 UTC m=+5132.630543461" watchObservedRunningTime="2026-01-29 08:09:10.224221946 +0000 UTC m=+5134.086015015"
Jan 29 08:09:10 crc kubenswrapper[4826]: I0129 08:09:10.270420 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f6f6589-rnvnq"]
Jan 29 08:09:10 crc kubenswrapper[4826]: I0129 08:09:10.270841 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f6f6589-rnvnq" podUID="17a94dd0-3150-4b2b-9511-edac7a033212" containerName="dnsmasq-dns" containerID="cri-o://aa5733609cc4d54bbe7bfd6da21a24af57cd2ba5916bbac9294318e7dc3d1b5e" gracePeriod=10
Jan 29 08:09:10 crc kubenswrapper[4826]: I0129 08:09:10.759792 4826 generic.go:334] "Generic (PLEG): container finished" podID="78e6de92-9f55-4628-97aa-cb6c36a92332" containerID="f8f0e5efba82aa7a5c9f1d86b0db42fb7a60d14c1424c55901acf2deb7c3bcd5" exitCode=0
Jan 29 08:09:10 crc kubenswrapper[4826]: I0129 08:09:10.759887 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-c9d8z" event={"ID":"78e6de92-9f55-4628-97aa-cb6c36a92332","Type":"ContainerDied","Data":"f8f0e5efba82aa7a5c9f1d86b0db42fb7a60d14c1424c55901acf2deb7c3bcd5"}
Jan 29 08:09:10 crc kubenswrapper[4826]: I0129 08:09:10.764035 4826 generic.go:334] "Generic (PLEG): container finished" podID="17a94dd0-3150-4b2b-9511-edac7a033212" containerID="aa5733609cc4d54bbe7bfd6da21a24af57cd2ba5916bbac9294318e7dc3d1b5e" exitCode=0
Jan 29 08:09:10 crc kubenswrapper[4826]: I0129 08:09:10.764098 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f6f6589-rnvnq" event={"ID":"17a94dd0-3150-4b2b-9511-edac7a033212","Type":"ContainerDied","Data":"aa5733609cc4d54bbe7bfd6da21a24af57cd2ba5916bbac9294318e7dc3d1b5e"}
Jan 29 08:09:10 crc kubenswrapper[4826]: I0129 08:09:10.764126 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f6f6589-rnvnq" event={"ID":"17a94dd0-3150-4b2b-9511-edac7a033212","Type":"ContainerDied","Data":"bbebf64875500f69a8f56ec7766a89a3deea687e227e043a9e315bc4fc5ec4ec"}
Jan 29 08:09:10 crc kubenswrapper[4826]: I0129 08:09:10.764138 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbebf64875500f69a8f56ec7766a89a3deea687e227e043a9e315bc4fc5ec4ec"
Jan 29 08:09:10 crc kubenswrapper[4826]: I0129 08:09:10.800747 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f6f6589-rnvnq"
Jan 29 08:09:10 crc kubenswrapper[4826]: I0129 08:09:10.916291 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17a94dd0-3150-4b2b-9511-edac7a033212-ovsdbserver-sb\") pod \"17a94dd0-3150-4b2b-9511-edac7a033212\" (UID: \"17a94dd0-3150-4b2b-9511-edac7a033212\") "
Jan 29 08:09:10 crc kubenswrapper[4826]: I0129 08:09:10.916436 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17a94dd0-3150-4b2b-9511-edac7a033212-dns-svc\") pod \"17a94dd0-3150-4b2b-9511-edac7a033212\" (UID: \"17a94dd0-3150-4b2b-9511-edac7a033212\") "
Jan 29 08:09:10 crc kubenswrapper[4826]: I0129 08:09:10.916525 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6xq7\" (UniqueName: \"kubernetes.io/projected/17a94dd0-3150-4b2b-9511-edac7a033212-kube-api-access-s6xq7\") pod \"17a94dd0-3150-4b2b-9511-edac7a033212\" (UID: \"17a94dd0-3150-4b2b-9511-edac7a033212\") "
Jan 29 08:09:10 crc kubenswrapper[4826]: I0129 08:09:10.916568 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17a94dd0-3150-4b2b-9511-edac7a033212-ovsdbserver-nb\") pod \"17a94dd0-3150-4b2b-9511-edac7a033212\" (UID: \"17a94dd0-3150-4b2b-9511-edac7a033212\") "
Jan 29 08:09:10 crc kubenswrapper[4826]: I0129 08:09:10.916607 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17a94dd0-3150-4b2b-9511-edac7a033212-config\") pod \"17a94dd0-3150-4b2b-9511-edac7a033212\" (UID: \"17a94dd0-3150-4b2b-9511-edac7a033212\") "
Jan 29 08:09:10 crc kubenswrapper[4826]: I0129 08:09:10.926716 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17a94dd0-3150-4b2b-9511-edac7a033212-kube-api-access-s6xq7" (OuterVolumeSpecName: "kube-api-access-s6xq7") pod "17a94dd0-3150-4b2b-9511-edac7a033212" (UID: "17a94dd0-3150-4b2b-9511-edac7a033212"). InnerVolumeSpecName "kube-api-access-s6xq7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:09:10 crc kubenswrapper[4826]: I0129 08:09:10.960283 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17a94dd0-3150-4b2b-9511-edac7a033212-config" (OuterVolumeSpecName: "config") pod "17a94dd0-3150-4b2b-9511-edac7a033212" (UID: "17a94dd0-3150-4b2b-9511-edac7a033212"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 08:09:10 crc kubenswrapper[4826]: I0129 08:09:10.961054 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17a94dd0-3150-4b2b-9511-edac7a033212-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "17a94dd0-3150-4b2b-9511-edac7a033212" (UID: "17a94dd0-3150-4b2b-9511-edac7a033212"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 08:09:10 crc kubenswrapper[4826]: I0129 08:09:10.971008 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17a94dd0-3150-4b2b-9511-edac7a033212-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "17a94dd0-3150-4b2b-9511-edac7a033212" (UID: "17a94dd0-3150-4b2b-9511-edac7a033212"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 08:09:10 crc kubenswrapper[4826]: I0129 08:09:10.978717 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17a94dd0-3150-4b2b-9511-edac7a033212-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "17a94dd0-3150-4b2b-9511-edac7a033212" (UID: "17a94dd0-3150-4b2b-9511-edac7a033212"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 08:09:11 crc kubenswrapper[4826]: I0129 08:09:11.018731 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17a94dd0-3150-4b2b-9511-edac7a033212-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 29 08:09:11 crc kubenswrapper[4826]: I0129 08:09:11.018894 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17a94dd0-3150-4b2b-9511-edac7a033212-config\") on node \"crc\" DevicePath \"\""
Jan 29 08:09:11 crc kubenswrapper[4826]: I0129 08:09:11.018960 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17a94dd0-3150-4b2b-9511-edac7a033212-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 29 08:09:11 crc kubenswrapper[4826]: I0129 08:09:11.019016 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17a94dd0-3150-4b2b-9511-edac7a033212-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 29 08:09:11 crc kubenswrapper[4826]: I0129 08:09:11.019066 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6xq7\" (UniqueName: \"kubernetes.io/projected/17a94dd0-3150-4b2b-9511-edac7a033212-kube-api-access-s6xq7\") on node \"crc\" DevicePath \"\""
Jan 29 08:09:11 crc kubenswrapper[4826]: I0129 08:09:11.777776 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f6f6589-rnvnq"
Jan 29 08:09:11 crc kubenswrapper[4826]: I0129 08:09:11.843540 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f6f6589-rnvnq"]
Jan 29 08:09:11 crc kubenswrapper[4826]: I0129 08:09:11.856771 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f6f6589-rnvnq"]
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.197660 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-c9d8z"
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.239478 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e6de92-9f55-4628-97aa-cb6c36a92332-combined-ca-bundle\") pod \"78e6de92-9f55-4628-97aa-cb6c36a92332\" (UID: \"78e6de92-9f55-4628-97aa-cb6c36a92332\") "
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.239758 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/78e6de92-9f55-4628-97aa-cb6c36a92332-credential-keys\") pod \"78e6de92-9f55-4628-97aa-cb6c36a92332\" (UID: \"78e6de92-9f55-4628-97aa-cb6c36a92332\") "
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.239928 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/78e6de92-9f55-4628-97aa-cb6c36a92332-fernet-keys\") pod \"78e6de92-9f55-4628-97aa-cb6c36a92332\" (UID: \"78e6de92-9f55-4628-97aa-cb6c36a92332\") "
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.240058 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78e6de92-9f55-4628-97aa-cb6c36a92332-config-data\") pod \"78e6de92-9f55-4628-97aa-cb6c36a92332\" (UID: \"78e6de92-9f55-4628-97aa-cb6c36a92332\") "
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.240146 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqphl\" (UniqueName: \"kubernetes.io/projected/78e6de92-9f55-4628-97aa-cb6c36a92332-kube-api-access-qqphl\") pod \"78e6de92-9f55-4628-97aa-cb6c36a92332\" (UID: \"78e6de92-9f55-4628-97aa-cb6c36a92332\") "
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.240217 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78e6de92-9f55-4628-97aa-cb6c36a92332-scripts\") pod \"78e6de92-9f55-4628-97aa-cb6c36a92332\" (UID: \"78e6de92-9f55-4628-97aa-cb6c36a92332\") "
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.247063 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78e6de92-9f55-4628-97aa-cb6c36a92332-kube-api-access-qqphl" (OuterVolumeSpecName: "kube-api-access-qqphl") pod "78e6de92-9f55-4628-97aa-cb6c36a92332" (UID: "78e6de92-9f55-4628-97aa-cb6c36a92332"). InnerVolumeSpecName "kube-api-access-qqphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.247119 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78e6de92-9f55-4628-97aa-cb6c36a92332-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "78e6de92-9f55-4628-97aa-cb6c36a92332" (UID: "78e6de92-9f55-4628-97aa-cb6c36a92332"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.248748 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78e6de92-9f55-4628-97aa-cb6c36a92332-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "78e6de92-9f55-4628-97aa-cb6c36a92332" (UID: "78e6de92-9f55-4628-97aa-cb6c36a92332"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.250815 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78e6de92-9f55-4628-97aa-cb6c36a92332-scripts" (OuterVolumeSpecName: "scripts") pod "78e6de92-9f55-4628-97aa-cb6c36a92332" (UID: "78e6de92-9f55-4628-97aa-cb6c36a92332"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.261095 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78e6de92-9f55-4628-97aa-cb6c36a92332-config-data" (OuterVolumeSpecName: "config-data") pod "78e6de92-9f55-4628-97aa-cb6c36a92332" (UID: "78e6de92-9f55-4628-97aa-cb6c36a92332"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.283485 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78e6de92-9f55-4628-97aa-cb6c36a92332-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78e6de92-9f55-4628-97aa-cb6c36a92332" (UID: "78e6de92-9f55-4628-97aa-cb6c36a92332"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.341780 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqphl\" (UniqueName: \"kubernetes.io/projected/78e6de92-9f55-4628-97aa-cb6c36a92332-kube-api-access-qqphl\") on node \"crc\" DevicePath \"\""
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.341846 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78e6de92-9f55-4628-97aa-cb6c36a92332-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.341862 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e6de92-9f55-4628-97aa-cb6c36a92332-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.341877 4826 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/78e6de92-9f55-4628-97aa-cb6c36a92332-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.341894 4826 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/78e6de92-9f55-4628-97aa-cb6c36a92332-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.341910 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78e6de92-9f55-4628-97aa-cb6c36a92332-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.791358 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-c9d8z" event={"ID":"78e6de92-9f55-4628-97aa-cb6c36a92332","Type":"ContainerDied","Data":"9a594754f240a57e4bfce0ca5a6bb074c19cf02854db128da0506eccaa1855bf"}
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.791803 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a594754f240a57e4bfce0ca5a6bb074c19cf02854db128da0506eccaa1855bf"
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.791494 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-c9d8z"
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.831443 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17a94dd0-3150-4b2b-9511-edac7a033212" path="/var/lib/kubelet/pods/17a94dd0-3150-4b2b-9511-edac7a033212/volumes"
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.904446 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bf9bfd559-x7h62"]
Jan 29 08:09:12 crc kubenswrapper[4826]: E0129 08:09:12.905106 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17a94dd0-3150-4b2b-9511-edac7a033212" containerName="init"
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.905228 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="17a94dd0-3150-4b2b-9511-edac7a033212" containerName="init"
Jan 29 08:09:12 crc kubenswrapper[4826]: E0129 08:09:12.905378 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17a94dd0-3150-4b2b-9511-edac7a033212" containerName="dnsmasq-dns"
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.905654 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="17a94dd0-3150-4b2b-9511-edac7a033212" containerName="dnsmasq-dns"
Jan 29 08:09:12 crc kubenswrapper[4826]: E0129 08:09:12.905785 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78e6de92-9f55-4628-97aa-cb6c36a92332" containerName="keystone-bootstrap"
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.905898 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="78e6de92-9f55-4628-97aa-cb6c36a92332" containerName="keystone-bootstrap"
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.906224 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="17a94dd0-3150-4b2b-9511-edac7a033212" containerName="dnsmasq-dns"
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.909492 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="78e6de92-9f55-4628-97aa-cb6c36a92332" containerName="keystone-bootstrap"
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.910317 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bf9bfd559-x7h62"
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.912491 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.912988 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.913073 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7qtgw"
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.914160 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.914189 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.916921 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.952927 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f31b43ea-4f64-437d-9bfe-7c16eced7589-public-tls-certs\") pod \"keystone-bf9bfd559-x7h62\" (UID: \"f31b43ea-4f64-437d-9bfe-7c16eced7589\") " pod="openstack/keystone-bf9bfd559-x7h62"
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.952979 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f31b43ea-4f64-437d-9bfe-7c16eced7589-config-data\") pod \"keystone-bf9bfd559-x7h62\" (UID: \"f31b43ea-4f64-437d-9bfe-7c16eced7589\") " pod="openstack/keystone-bf9bfd559-x7h62"
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.953019 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f31b43ea-4f64-437d-9bfe-7c16eced7589-fernet-keys\") pod \"keystone-bf9bfd559-x7h62\" (UID: \"f31b43ea-4f64-437d-9bfe-7c16eced7589\") " pod="openstack/keystone-bf9bfd559-x7h62"
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.953042 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f31b43ea-4f64-437d-9bfe-7c16eced7589-scripts\") pod \"keystone-bf9bfd559-x7h62\" (UID: \"f31b43ea-4f64-437d-9bfe-7c16eced7589\") " pod="openstack/keystone-bf9bfd559-x7h62"
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.953069 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f31b43ea-4f64-437d-9bfe-7c16eced7589-internal-tls-certs\") pod \"keystone-bf9bfd559-x7h62\" (UID: \"f31b43ea-4f64-437d-9bfe-7c16eced7589\") " pod="openstack/keystone-bf9bfd559-x7h62"
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.953336 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nc2h\" (UniqueName: \"kubernetes.io/projected/f31b43ea-4f64-437d-9bfe-7c16eced7589-kube-api-access-7nc2h\") pod \"keystone-bf9bfd559-x7h62\" (UID: \"f31b43ea-4f64-437d-9bfe-7c16eced7589\") " pod="openstack/keystone-bf9bfd559-x7h62"
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.953426 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f31b43ea-4f64-437d-9bfe-7c16eced7589-combined-ca-bundle\") pod \"keystone-bf9bfd559-x7h62\" (UID: \"f31b43ea-4f64-437d-9bfe-7c16eced7589\") " pod="openstack/keystone-bf9bfd559-x7h62"
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.953702 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f31b43ea-4f64-437d-9bfe-7c16eced7589-credential-keys\") pod \"keystone-bf9bfd559-x7h62\" (UID: \"f31b43ea-4f64-437d-9bfe-7c16eced7589\") " pod="openstack/keystone-bf9bfd559-x7h62"
Jan 29 08:09:12 crc kubenswrapper[4826]: I0129 08:09:12.969012 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bf9bfd559-x7h62"]
Jan 29 08:09:13 crc kubenswrapper[4826]: I0129 08:09:13.056423 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f31b43ea-4f64-437d-9bfe-7c16eced7589-fernet-keys\") pod \"keystone-bf9bfd559-x7h62\" (UID: \"f31b43ea-4f64-437d-9bfe-7c16eced7589\") " pod="openstack/keystone-bf9bfd559-x7h62"
Jan 29 08:09:13 crc kubenswrapper[4826]: I0129 08:09:13.062452 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f31b43ea-4f64-437d-9bfe-7c16eced7589-scripts\") pod \"keystone-bf9bfd559-x7h62\" (UID: \"f31b43ea-4f64-437d-9bfe-7c16eced7589\") " pod="openstack/keystone-bf9bfd559-x7h62"
Jan 29 08:09:13 crc kubenswrapper[4826]: I0129 08:09:13.062680 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f31b43ea-4f64-437d-9bfe-7c16eced7589-internal-tls-certs\") pod \"keystone-bf9bfd559-x7h62\" (UID: \"f31b43ea-4f64-437d-9bfe-7c16eced7589\") " pod="openstack/keystone-bf9bfd559-x7h62"
Jan 29 08:09:13 crc kubenswrapper[4826]: I0129 08:09:13.062876 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nc2h\" (UniqueName: \"kubernetes.io/projected/f31b43ea-4f64-437d-9bfe-7c16eced7589-kube-api-access-7nc2h\") pod \"keystone-bf9bfd559-x7h62\" (UID: \"f31b43ea-4f64-437d-9bfe-7c16eced7589\") " pod="openstack/keystone-bf9bfd559-x7h62"
Jan 29 08:09:13 crc kubenswrapper[4826]: I0129 08:09:13.063001 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f31b43ea-4f64-437d-9bfe-7c16eced7589-combined-ca-bundle\") pod \"keystone-bf9bfd559-x7h62\" (UID: \"f31b43ea-4f64-437d-9bfe-7c16eced7589\") " pod="openstack/keystone-bf9bfd559-x7h62"
Jan 29 08:09:13 crc kubenswrapper[4826]: I0129 08:09:13.063217 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f31b43ea-4f64-437d-9bfe-7c16eced7589-credential-keys\") pod \"keystone-bf9bfd559-x7h62\" (UID: \"f31b43ea-4f64-437d-9bfe-7c16eced7589\") " pod="openstack/keystone-bf9bfd559-x7h62"
Jan 29 08:09:13 crc kubenswrapper[4826]: I0129 08:09:13.063336 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f31b43ea-4f64-437d-9bfe-7c16eced7589-public-tls-certs\") pod \"keystone-bf9bfd559-x7h62\" (UID: \"f31b43ea-4f64-437d-9bfe-7c16eced7589\") " pod="openstack/keystone-bf9bfd559-x7h62"
Jan 29 08:09:13 crc kubenswrapper[4826]: I0129 08:09:13.063419 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f31b43ea-4f64-437d-9bfe-7c16eced7589-config-data\") pod \"keystone-bf9bfd559-x7h62\" (UID: \"f31b43ea-4f64-437d-9bfe-7c16eced7589\") " pod="openstack/keystone-bf9bfd559-x7h62"
Jan 29 08:09:13 crc kubenswrapper[4826]: I0129 08:09:13.068175 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f31b43ea-4f64-437d-9bfe-7c16eced7589-fernet-keys\") pod \"keystone-bf9bfd559-x7h62\" (UID: \"f31b43ea-4f64-437d-9bfe-7c16eced7589\") " pod="openstack/keystone-bf9bfd559-x7h62"
Jan 29 08:09:13 crc kubenswrapper[4826]: I0129 08:09:13.070357 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f31b43ea-4f64-437d-9bfe-7c16eced7589-config-data\") pod \"keystone-bf9bfd559-x7h62\" (UID: \"f31b43ea-4f64-437d-9bfe-7c16eced7589\") " pod="openstack/keystone-bf9bfd559-x7h62"
Jan 29 08:09:13 crc kubenswrapper[4826]: I0129 08:09:13.070703 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f31b43ea-4f64-437d-9bfe-7c16eced7589-credential-keys\") pod \"keystone-bf9bfd559-x7h62\" (UID: \"f31b43ea-4f64-437d-9bfe-7c16eced7589\") " pod="openstack/keystone-bf9bfd559-x7h62"
Jan 29 08:09:13 crc kubenswrapper[4826]: I0129 08:09:13.071856 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f31b43ea-4f64-437d-9bfe-7c16eced7589-scripts\") pod \"keystone-bf9bfd559-x7h62\" (UID: \"f31b43ea-4f64-437d-9bfe-7c16eced7589\") " pod="openstack/keystone-bf9bfd559-x7h62"
Jan 29 08:09:13 crc kubenswrapper[4826]: I0129 08:09:13.072746 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f31b43ea-4f64-437d-9bfe-7c16eced7589-internal-tls-certs\") pod \"keystone-bf9bfd559-x7h62\" (UID: \"f31b43ea-4f64-437d-9bfe-7c16eced7589\") " pod="openstack/keystone-bf9bfd559-x7h62"
Jan 29 08:09:13 crc kubenswrapper[4826]: I0129 08:09:13.088157 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f31b43ea-4f64-437d-9bfe-7c16eced7589-public-tls-certs\") pod \"keystone-bf9bfd559-x7h62\" (UID: \"f31b43ea-4f64-437d-9bfe-7c16eced7589\") " pod="openstack/keystone-bf9bfd559-x7h62"
Jan 29 08:09:13 crc kubenswrapper[4826]: I0129 08:09:13.088621 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nc2h\" (UniqueName: \"kubernetes.io/projected/f31b43ea-4f64-437d-9bfe-7c16eced7589-kube-api-access-7nc2h\") pod \"keystone-bf9bfd559-x7h62\" (UID: \"f31b43ea-4f64-437d-9bfe-7c16eced7589\") " pod="openstack/keystone-bf9bfd559-x7h62"
Jan 29 08:09:13 crc kubenswrapper[4826]: I0129 08:09:13.098650 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f31b43ea-4f64-437d-9bfe-7c16eced7589-combined-ca-bundle\") pod \"keystone-bf9bfd559-x7h62\" (UID: \"f31b43ea-4f64-437d-9bfe-7c16eced7589\") " pod="openstack/keystone-bf9bfd559-x7h62"
Jan 29 08:09:13 crc kubenswrapper[4826]: I0129 08:09:13.229061 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bf9bfd559-x7h62"
Jan 29 08:09:13 crc kubenswrapper[4826]: I0129 08:09:13.704643 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bf9bfd559-x7h62"]
Jan 29 08:09:13 crc kubenswrapper[4826]: W0129 08:09:13.713719 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf31b43ea_4f64_437d_9bfe_7c16eced7589.slice/crio-069ad86873b28eb7a49f71c65237f093e33fe8eb722a88ec571e4f0f0f82c619 WatchSource:0}: Error finding container 069ad86873b28eb7a49f71c65237f093e33fe8eb722a88ec571e4f0f0f82c619: Status 404 returned error can't find the container with id 069ad86873b28eb7a49f71c65237f093e33fe8eb722a88ec571e4f0f0f82c619
Jan 29 08:09:13 crc kubenswrapper[4826]: I0129 08:09:13.805881 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bf9bfd559-x7h62" event={"ID":"f31b43ea-4f64-437d-9bfe-7c16eced7589","Type":"ContainerStarted","Data":"069ad86873b28eb7a49f71c65237f093e33fe8eb722a88ec571e4f0f0f82c619"}
Jan 29 08:09:14 crc kubenswrapper[4826]: I0129 08:09:14.819739 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-bf9bfd559-x7h62"
Jan 29 08:09:14 crc kubenswrapper[4826]: I0129 08:09:14.819809 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bf9bfd559-x7h62" event={"ID":"f31b43ea-4f64-437d-9bfe-7c16eced7589","Type":"ContainerStarted","Data":"a78eebe591c45e26e47ab57092cb68bd2adf9d997569ea2b4e0190b148d225cf"}
Jan 29 08:09:14 crc kubenswrapper[4826]: I0129 08:09:14.856402 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bf9bfd559-x7h62" podStartSLOduration=2.856373994 podStartE2EDuration="2.856373994s" podCreationTimestamp="2026-01-29 08:09:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:09:14.84555837 +0000 UTC m=+5138.707351439" watchObservedRunningTime="2026-01-29 08:09:14.856373994 +0000 UTC m=+5138.718167093"
Jan 29 08:09:19 crc kubenswrapper[4826]: I0129 08:09:19.721003 4826 scope.go:117] "RemoveContainer" containerID="817a5f95f307240cd2a67ca35028c75fefb430c3777588c3c7c0002ffcf45078"
Jan 29 08:09:35 crc kubenswrapper[4826]: I0129 08:09:35.657098 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 08:09:35 crc kubenswrapper[4826]: I0129 08:09:35.657755 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 08:09:35 crc kubenswrapper[4826]: I0129 08:09:35.657817 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-llzmh"
Jan 29 08:09:35 crc kubenswrapper[4826]: I0129 08:09:35.658809 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"491d2214652be539c5a02abd82d2f7f7b125c3f1a64568b35d69e37bd575365a"} pod="openshift-machine-config-operator/machine-config-daemon-llzmh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 29 08:09:35 crc kubenswrapper[4826]: I0129 08:09:35.658889 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" containerID="cri-o://491d2214652be539c5a02abd82d2f7f7b125c3f1a64568b35d69e37bd575365a" gracePeriod=600
Jan 29 08:09:35 crc kubenswrapper[4826]: E0129 08:09:35.785394 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 08:09:36 crc kubenswrapper[4826]: I0129 08:09:36.038040 4826 generic.go:334] "Generic (PLEG): container finished" podID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerID="491d2214652be539c5a02abd82d2f7f7b125c3f1a64568b35d69e37bd575365a" exitCode=0
Jan 29 08:09:36 crc kubenswrapper[4826]: I0129 08:09:36.038089 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerDied","Data":"491d2214652be539c5a02abd82d2f7f7b125c3f1a64568b35d69e37bd575365a"}
Jan 29 08:09:36 crc kubenswrapper[4826]: I0129 08:09:36.038163 4826 scope.go:117] "RemoveContainer" containerID="1fff344b8a55b0b4a3914b487742a26e3d0886958427c810117738efef01b20e"
Jan 29 08:09:36 crc kubenswrapper[4826]: I0129 08:09:36.038869 4826 scope.go:117] "RemoveContainer" containerID="491d2214652be539c5a02abd82d2f7f7b125c3f1a64568b35d69e37bd575365a"
Jan 29 08:09:36 crc kubenswrapper[4826]: E0129 08:09:36.039345 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh"
podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:09:44 crc kubenswrapper[4826]: I0129 08:09:44.769650 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-bf9bfd559-x7h62" Jan 29 08:09:46 crc kubenswrapper[4826]: I0129 08:09:46.823418 4826 scope.go:117] "RemoveContainer" containerID="491d2214652be539c5a02abd82d2f7f7b125c3f1a64568b35d69e37bd575365a" Jan 29 08:09:46 crc kubenswrapper[4826]: E0129 08:09:46.826865 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:09:49 crc kubenswrapper[4826]: I0129 08:09:49.450000 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 29 08:09:49 crc kubenswrapper[4826]: I0129 08:09:49.451473 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 29 08:09:49 crc kubenswrapper[4826]: I0129 08:09:49.455902 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 29 08:09:49 crc kubenswrapper[4826]: I0129 08:09:49.455981 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 29 08:09:49 crc kubenswrapper[4826]: I0129 08:09:49.456287 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-wgpdz" Jan 29 08:09:49 crc kubenswrapper[4826]: I0129 08:09:49.492779 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 29 08:09:49 crc kubenswrapper[4826]: I0129 08:09:49.604527 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dffeb5ae-2969-47db-841f-cfe964097f10-combined-ca-bundle\") pod \"openstackclient\" (UID: \"dffeb5ae-2969-47db-841f-cfe964097f10\") " pod="openstack/openstackclient" Jan 29 08:09:49 crc kubenswrapper[4826]: I0129 08:09:49.604862 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dffeb5ae-2969-47db-841f-cfe964097f10-openstack-config-secret\") pod \"openstackclient\" (UID: \"dffeb5ae-2969-47db-841f-cfe964097f10\") " pod="openstack/openstackclient" Jan 29 08:09:49 crc kubenswrapper[4826]: I0129 08:09:49.604930 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpvvd\" (UniqueName: \"kubernetes.io/projected/dffeb5ae-2969-47db-841f-cfe964097f10-kube-api-access-xpvvd\") pod \"openstackclient\" (UID: \"dffeb5ae-2969-47db-841f-cfe964097f10\") " pod="openstack/openstackclient" Jan 29 08:09:49 crc kubenswrapper[4826]: I0129 08:09:49.605027 4826 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/dffeb5ae-2969-47db-841f-cfe964097f10-openstack-config\") pod \"openstackclient\" (UID: \"dffeb5ae-2969-47db-841f-cfe964097f10\") " pod="openstack/openstackclient" Jan 29 08:09:49 crc kubenswrapper[4826]: I0129 08:09:49.706832 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dffeb5ae-2969-47db-841f-cfe964097f10-openstack-config-secret\") pod \"openstackclient\" (UID: \"dffeb5ae-2969-47db-841f-cfe964097f10\") " pod="openstack/openstackclient" Jan 29 08:09:49 crc kubenswrapper[4826]: I0129 08:09:49.706925 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpvvd\" (UniqueName: \"kubernetes.io/projected/dffeb5ae-2969-47db-841f-cfe964097f10-kube-api-access-xpvvd\") pod \"openstackclient\" (UID: \"dffeb5ae-2969-47db-841f-cfe964097f10\") " pod="openstack/openstackclient" Jan 29 08:09:49 crc kubenswrapper[4826]: I0129 08:09:49.706998 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/dffeb5ae-2969-47db-841f-cfe964097f10-openstack-config\") pod \"openstackclient\" (UID: \"dffeb5ae-2969-47db-841f-cfe964097f10\") " pod="openstack/openstackclient" Jan 29 08:09:49 crc kubenswrapper[4826]: I0129 08:09:49.707083 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dffeb5ae-2969-47db-841f-cfe964097f10-combined-ca-bundle\") pod \"openstackclient\" (UID: \"dffeb5ae-2969-47db-841f-cfe964097f10\") " pod="openstack/openstackclient" Jan 29 08:09:49 crc kubenswrapper[4826]: I0129 08:09:49.708457 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/dffeb5ae-2969-47db-841f-cfe964097f10-openstack-config\") pod \"openstackclient\" (UID: \"dffeb5ae-2969-47db-841f-cfe964097f10\") " pod="openstack/openstackclient" Jan 29 08:09:49 crc kubenswrapper[4826]: I0129 08:09:49.715678 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dffeb5ae-2969-47db-841f-cfe964097f10-openstack-config-secret\") pod \"openstackclient\" (UID: \"dffeb5ae-2969-47db-841f-cfe964097f10\") " pod="openstack/openstackclient" Jan 29 08:09:49 crc kubenswrapper[4826]: I0129 08:09:49.717356 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dffeb5ae-2969-47db-841f-cfe964097f10-combined-ca-bundle\") pod \"openstackclient\" (UID: \"dffeb5ae-2969-47db-841f-cfe964097f10\") " pod="openstack/openstackclient" Jan 29 08:09:49 crc kubenswrapper[4826]: I0129 08:09:49.731149 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpvvd\" (UniqueName: \"kubernetes.io/projected/dffeb5ae-2969-47db-841f-cfe964097f10-kube-api-access-xpvvd\") pod \"openstackclient\" (UID: \"dffeb5ae-2969-47db-841f-cfe964097f10\") " pod="openstack/openstackclient" Jan 29 08:09:49 crc kubenswrapper[4826]: I0129 08:09:49.779434 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 29 08:09:50 crc kubenswrapper[4826]: I0129 08:09:50.373139 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 29 08:09:50 crc kubenswrapper[4826]: W0129 08:09:50.382549 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddffeb5ae_2969_47db_841f_cfe964097f10.slice/crio-053431087e5524c4cbc90cf41670a43c25912b1b33943ed3a7fa8bf228c87e45 WatchSource:0}: Error finding container 053431087e5524c4cbc90cf41670a43c25912b1b33943ed3a7fa8bf228c87e45: Status 404 returned error can't find the container with id 053431087e5524c4cbc90cf41670a43c25912b1b33943ed3a7fa8bf228c87e45 Jan 29 08:09:50 crc kubenswrapper[4826]: I0129 08:09:50.386099 4826 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 08:09:51 crc kubenswrapper[4826]: I0129 08:09:51.172889 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"dffeb5ae-2969-47db-841f-cfe964097f10","Type":"ContainerStarted","Data":"053431087e5524c4cbc90cf41670a43c25912b1b33943ed3a7fa8bf228c87e45"} Jan 29 08:09:59 crc kubenswrapper[4826]: I0129 08:09:59.809956 4826 scope.go:117] "RemoveContainer" containerID="491d2214652be539c5a02abd82d2f7f7b125c3f1a64568b35d69e37bd575365a" Jan 29 08:09:59 crc kubenswrapper[4826]: E0129 08:09:59.811349 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:10:02 crc kubenswrapper[4826]: I0129 08:10:02.283248 4826 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/openstackclient" event={"ID":"dffeb5ae-2969-47db-841f-cfe964097f10","Type":"ContainerStarted","Data":"d0aaa1bc60096e5255d8993f7e980e990f1ac40aa3b8edb801d49c1c630fdf08"} Jan 29 08:10:02 crc kubenswrapper[4826]: I0129 08:10:02.313107 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.295970083 podStartE2EDuration="13.313072945s" podCreationTimestamp="2026-01-29 08:09:49 +0000 UTC" firstStartedPulling="2026-01-29 08:09:50.385511229 +0000 UTC m=+5174.247304298" lastFinishedPulling="2026-01-29 08:10:01.402614051 +0000 UTC m=+5185.264407160" observedRunningTime="2026-01-29 08:10:02.310540018 +0000 UTC m=+5186.172333117" watchObservedRunningTime="2026-01-29 08:10:02.313072945 +0000 UTC m=+5186.174866054" Jan 29 08:10:14 crc kubenswrapper[4826]: I0129 08:10:14.808403 4826 scope.go:117] "RemoveContainer" containerID="491d2214652be539c5a02abd82d2f7f7b125c3f1a64568b35d69e37bd575365a" Jan 29 08:10:14 crc kubenswrapper[4826]: E0129 08:10:14.809277 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:10:28 crc kubenswrapper[4826]: I0129 08:10:28.808902 4826 scope.go:117] "RemoveContainer" containerID="491d2214652be539c5a02abd82d2f7f7b125c3f1a64568b35d69e37bd575365a" Jan 29 08:10:28 crc kubenswrapper[4826]: E0129 08:10:28.811773 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:10:42 crc kubenswrapper[4826]: I0129 08:10:42.810739 4826 scope.go:117] "RemoveContainer" containerID="491d2214652be539c5a02abd82d2f7f7b125c3f1a64568b35d69e37bd575365a" Jan 29 08:10:42 crc kubenswrapper[4826]: E0129 08:10:42.811650 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:10:54 crc kubenswrapper[4826]: I0129 08:10:54.809204 4826 scope.go:117] "RemoveContainer" containerID="491d2214652be539c5a02abd82d2f7f7b125c3f1a64568b35d69e37bd575365a" Jan 29 08:10:54 crc kubenswrapper[4826]: E0129 08:10:54.810284 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:11:04 crc kubenswrapper[4826]: E0129 08:11:04.088201 4826 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.173:51946->38.102.83.173:41327: write tcp 38.102.83.173:51946->38.102.83.173:41327: write: broken pipe Jan 29 08:11:08 crc kubenswrapper[4826]: I0129 08:11:08.809681 4826 scope.go:117] "RemoveContainer" 
containerID="491d2214652be539c5a02abd82d2f7f7b125c3f1a64568b35d69e37bd575365a" Jan 29 08:11:08 crc kubenswrapper[4826]: E0129 08:11:08.811174 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:11:21 crc kubenswrapper[4826]: I0129 08:11:21.902450 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-m9xx7"] Jan 29 08:11:21 crc kubenswrapper[4826]: I0129 08:11:21.904905 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-m9xx7" Jan 29 08:11:21 crc kubenswrapper[4826]: I0129 08:11:21.921204 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-275b-account-create-update-r6hnm"] Jan 29 08:11:21 crc kubenswrapper[4826]: I0129 08:11:21.922477 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-275b-account-create-update-r6hnm" Jan 29 08:11:21 crc kubenswrapper[4826]: I0129 08:11:21.970767 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 29 08:11:21 crc kubenswrapper[4826]: I0129 08:11:21.971718 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvln8\" (UniqueName: \"kubernetes.io/projected/415b6ea2-bd07-49eb-adc3-832640e78058-kube-api-access-gvln8\") pod \"barbican-db-create-m9xx7\" (UID: \"415b6ea2-bd07-49eb-adc3-832640e78058\") " pod="openstack/barbican-db-create-m9xx7" Jan 29 08:11:21 crc kubenswrapper[4826]: I0129 08:11:21.971766 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/415b6ea2-bd07-49eb-adc3-832640e78058-operator-scripts\") pod \"barbican-db-create-m9xx7\" (UID: \"415b6ea2-bd07-49eb-adc3-832640e78058\") " pod="openstack/barbican-db-create-m9xx7" Jan 29 08:11:21 crc kubenswrapper[4826]: I0129 08:11:21.980161 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-m9xx7"] Jan 29 08:11:21 crc kubenswrapper[4826]: I0129 08:11:21.992515 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-275b-account-create-update-r6hnm"] Jan 29 08:11:22 crc kubenswrapper[4826]: I0129 08:11:22.074008 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6785afcf-73dd-42b9-9984-0c73d4883b53-operator-scripts\") pod \"barbican-275b-account-create-update-r6hnm\" (UID: \"6785afcf-73dd-42b9-9984-0c73d4883b53\") " pod="openstack/barbican-275b-account-create-update-r6hnm" Jan 29 08:11:22 crc kubenswrapper[4826]: I0129 08:11:22.074082 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-gvln8\" (UniqueName: \"kubernetes.io/projected/415b6ea2-bd07-49eb-adc3-832640e78058-kube-api-access-gvln8\") pod \"barbican-db-create-m9xx7\" (UID: \"415b6ea2-bd07-49eb-adc3-832640e78058\") " pod="openstack/barbican-db-create-m9xx7" Jan 29 08:11:22 crc kubenswrapper[4826]: I0129 08:11:22.074127 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/415b6ea2-bd07-49eb-adc3-832640e78058-operator-scripts\") pod \"barbican-db-create-m9xx7\" (UID: \"415b6ea2-bd07-49eb-adc3-832640e78058\") " pod="openstack/barbican-db-create-m9xx7" Jan 29 08:11:22 crc kubenswrapper[4826]: I0129 08:11:22.074400 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhwr2\" (UniqueName: \"kubernetes.io/projected/6785afcf-73dd-42b9-9984-0c73d4883b53-kube-api-access-jhwr2\") pod \"barbican-275b-account-create-update-r6hnm\" (UID: \"6785afcf-73dd-42b9-9984-0c73d4883b53\") " pod="openstack/barbican-275b-account-create-update-r6hnm" Jan 29 08:11:22 crc kubenswrapper[4826]: I0129 08:11:22.075246 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/415b6ea2-bd07-49eb-adc3-832640e78058-operator-scripts\") pod \"barbican-db-create-m9xx7\" (UID: \"415b6ea2-bd07-49eb-adc3-832640e78058\") " pod="openstack/barbican-db-create-m9xx7" Jan 29 08:11:22 crc kubenswrapper[4826]: I0129 08:11:22.101520 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvln8\" (UniqueName: \"kubernetes.io/projected/415b6ea2-bd07-49eb-adc3-832640e78058-kube-api-access-gvln8\") pod \"barbican-db-create-m9xx7\" (UID: \"415b6ea2-bd07-49eb-adc3-832640e78058\") " pod="openstack/barbican-db-create-m9xx7" Jan 29 08:11:22 crc kubenswrapper[4826]: I0129 08:11:22.175956 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6785afcf-73dd-42b9-9984-0c73d4883b53-operator-scripts\") pod \"barbican-275b-account-create-update-r6hnm\" (UID: \"6785afcf-73dd-42b9-9984-0c73d4883b53\") " pod="openstack/barbican-275b-account-create-update-r6hnm" Jan 29 08:11:22 crc kubenswrapper[4826]: I0129 08:11:22.176613 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhwr2\" (UniqueName: \"kubernetes.io/projected/6785afcf-73dd-42b9-9984-0c73d4883b53-kube-api-access-jhwr2\") pod \"barbican-275b-account-create-update-r6hnm\" (UID: \"6785afcf-73dd-42b9-9984-0c73d4883b53\") " pod="openstack/barbican-275b-account-create-update-r6hnm" Jan 29 08:11:22 crc kubenswrapper[4826]: I0129 08:11:22.177363 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6785afcf-73dd-42b9-9984-0c73d4883b53-operator-scripts\") pod \"barbican-275b-account-create-update-r6hnm\" (UID: \"6785afcf-73dd-42b9-9984-0c73d4883b53\") " pod="openstack/barbican-275b-account-create-update-r6hnm" Jan 29 08:11:22 crc kubenswrapper[4826]: I0129 08:11:22.203017 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhwr2\" (UniqueName: \"kubernetes.io/projected/6785afcf-73dd-42b9-9984-0c73d4883b53-kube-api-access-jhwr2\") pod \"barbican-275b-account-create-update-r6hnm\" (UID: \"6785afcf-73dd-42b9-9984-0c73d4883b53\") " pod="openstack/barbican-275b-account-create-update-r6hnm" Jan 29 08:11:22 crc kubenswrapper[4826]: I0129 08:11:22.280489 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-m9xx7" Jan 29 08:11:22 crc kubenswrapper[4826]: I0129 08:11:22.292039 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-275b-account-create-update-r6hnm" Jan 29 08:11:22 crc kubenswrapper[4826]: I0129 08:11:22.770453 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-m9xx7"] Jan 29 08:11:22 crc kubenswrapper[4826]: I0129 08:11:22.809592 4826 scope.go:117] "RemoveContainer" containerID="491d2214652be539c5a02abd82d2f7f7b125c3f1a64568b35d69e37bd575365a" Jan 29 08:11:22 crc kubenswrapper[4826]: E0129 08:11:22.809869 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:11:22 crc kubenswrapper[4826]: I0129 08:11:22.849210 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-275b-account-create-update-r6hnm"] Jan 29 08:11:22 crc kubenswrapper[4826]: W0129 08:11:22.860317 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6785afcf_73dd_42b9_9984_0c73d4883b53.slice/crio-27800c50fbc99289cd5c7cf0b75e5e0870fe90280d5ce9b1fda7fc6fdffaa321 WatchSource:0}: Error finding container 27800c50fbc99289cd5c7cf0b75e5e0870fe90280d5ce9b1fda7fc6fdffaa321: Status 404 returned error can't find the container with id 27800c50fbc99289cd5c7cf0b75e5e0870fe90280d5ce9b1fda7fc6fdffaa321 Jan 29 08:11:23 crc kubenswrapper[4826]: I0129 08:11:23.112836 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-m9xx7" event={"ID":"415b6ea2-bd07-49eb-adc3-832640e78058","Type":"ContainerStarted","Data":"6a7463cdc73974e7ead8520ecca2579f50d1ada3808b30335d7f16bfb17ac155"} Jan 29 08:11:23 crc kubenswrapper[4826]: I0129 
08:11:23.112936 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-m9xx7" event={"ID":"415b6ea2-bd07-49eb-adc3-832640e78058","Type":"ContainerStarted","Data":"e9e25294e6fea9352e7230e16ce99fb9cc917c4da9e1fd94b7d77992f69125d4"} Jan 29 08:11:23 crc kubenswrapper[4826]: I0129 08:11:23.119375 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-275b-account-create-update-r6hnm" event={"ID":"6785afcf-73dd-42b9-9984-0c73d4883b53","Type":"ContainerStarted","Data":"44cf12dd79ca811ab09dbcfc6aaa2a06a2bad41207f7297e6a40ae2ce208af38"} Jan 29 08:11:23 crc kubenswrapper[4826]: I0129 08:11:23.119444 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-275b-account-create-update-r6hnm" event={"ID":"6785afcf-73dd-42b9-9984-0c73d4883b53","Type":"ContainerStarted","Data":"27800c50fbc99289cd5c7cf0b75e5e0870fe90280d5ce9b1fda7fc6fdffaa321"} Jan 29 08:11:23 crc kubenswrapper[4826]: I0129 08:11:23.167012 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-m9xx7" podStartSLOduration=2.16698828 podStartE2EDuration="2.16698828s" podCreationTimestamp="2026-01-29 08:11:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:11:23.1384538 +0000 UTC m=+5267.000246879" watchObservedRunningTime="2026-01-29 08:11:23.16698828 +0000 UTC m=+5267.028781389" Jan 29 08:11:23 crc kubenswrapper[4826]: I0129 08:11:23.170346 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-275b-account-create-update-r6hnm" podStartSLOduration=2.170337448 podStartE2EDuration="2.170337448s" podCreationTimestamp="2026-01-29 08:11:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:11:23.16089277 +0000 UTC m=+5267.022685869" 
watchObservedRunningTime="2026-01-29 08:11:23.170337448 +0000 UTC m=+5267.032130547" Jan 29 08:11:24 crc kubenswrapper[4826]: I0129 08:11:24.131577 4826 generic.go:334] "Generic (PLEG): container finished" podID="415b6ea2-bd07-49eb-adc3-832640e78058" containerID="6a7463cdc73974e7ead8520ecca2579f50d1ada3808b30335d7f16bfb17ac155" exitCode=0 Jan 29 08:11:24 crc kubenswrapper[4826]: I0129 08:11:24.131642 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-m9xx7" event={"ID":"415b6ea2-bd07-49eb-adc3-832640e78058","Type":"ContainerDied","Data":"6a7463cdc73974e7ead8520ecca2579f50d1ada3808b30335d7f16bfb17ac155"} Jan 29 08:11:24 crc kubenswrapper[4826]: I0129 08:11:24.135202 4826 generic.go:334] "Generic (PLEG): container finished" podID="6785afcf-73dd-42b9-9984-0c73d4883b53" containerID="44cf12dd79ca811ab09dbcfc6aaa2a06a2bad41207f7297e6a40ae2ce208af38" exitCode=0 Jan 29 08:11:24 crc kubenswrapper[4826]: I0129 08:11:24.135285 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-275b-account-create-update-r6hnm" event={"ID":"6785afcf-73dd-42b9-9984-0c73d4883b53","Type":"ContainerDied","Data":"44cf12dd79ca811ab09dbcfc6aaa2a06a2bad41207f7297e6a40ae2ce208af38"} Jan 29 08:11:25 crc kubenswrapper[4826]: I0129 08:11:25.456391 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-275b-account-create-update-r6hnm" Jan 29 08:11:25 crc kubenswrapper[4826]: I0129 08:11:25.540932 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-m9xx7"
Jan 29 08:11:25 crc kubenswrapper[4826]: I0129 08:11:25.559433 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6785afcf-73dd-42b9-9984-0c73d4883b53-operator-scripts\") pod \"6785afcf-73dd-42b9-9984-0c73d4883b53\" (UID: \"6785afcf-73dd-42b9-9984-0c73d4883b53\") "
Jan 29 08:11:25 crc kubenswrapper[4826]: I0129 08:11:25.559512 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhwr2\" (UniqueName: \"kubernetes.io/projected/6785afcf-73dd-42b9-9984-0c73d4883b53-kube-api-access-jhwr2\") pod \"6785afcf-73dd-42b9-9984-0c73d4883b53\" (UID: \"6785afcf-73dd-42b9-9984-0c73d4883b53\") "
Jan 29 08:11:25 crc kubenswrapper[4826]: I0129 08:11:25.560344 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6785afcf-73dd-42b9-9984-0c73d4883b53-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6785afcf-73dd-42b9-9984-0c73d4883b53" (UID: "6785afcf-73dd-42b9-9984-0c73d4883b53"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 08:11:25 crc kubenswrapper[4826]: I0129 08:11:25.566491 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6785afcf-73dd-42b9-9984-0c73d4883b53-kube-api-access-jhwr2" (OuterVolumeSpecName: "kube-api-access-jhwr2") pod "6785afcf-73dd-42b9-9984-0c73d4883b53" (UID: "6785afcf-73dd-42b9-9984-0c73d4883b53"). InnerVolumeSpecName "kube-api-access-jhwr2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:11:25 crc kubenswrapper[4826]: I0129 08:11:25.661637 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/415b6ea2-bd07-49eb-adc3-832640e78058-operator-scripts\") pod \"415b6ea2-bd07-49eb-adc3-832640e78058\" (UID: \"415b6ea2-bd07-49eb-adc3-832640e78058\") "
Jan 29 08:11:25 crc kubenswrapper[4826]: I0129 08:11:25.661715 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvln8\" (UniqueName: \"kubernetes.io/projected/415b6ea2-bd07-49eb-adc3-832640e78058-kube-api-access-gvln8\") pod \"415b6ea2-bd07-49eb-adc3-832640e78058\" (UID: \"415b6ea2-bd07-49eb-adc3-832640e78058\") "
Jan 29 08:11:25 crc kubenswrapper[4826]: I0129 08:11:25.662027 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6785afcf-73dd-42b9-9984-0c73d4883b53-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 08:11:25 crc kubenswrapper[4826]: I0129 08:11:25.662046 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhwr2\" (UniqueName: \"kubernetes.io/projected/6785afcf-73dd-42b9-9984-0c73d4883b53-kube-api-access-jhwr2\") on node \"crc\" DevicePath \"\""
Jan 29 08:11:25 crc kubenswrapper[4826]: I0129 08:11:25.662439 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/415b6ea2-bd07-49eb-adc3-832640e78058-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "415b6ea2-bd07-49eb-adc3-832640e78058" (UID: "415b6ea2-bd07-49eb-adc3-832640e78058"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 08:11:25 crc kubenswrapper[4826]: I0129 08:11:25.664367 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/415b6ea2-bd07-49eb-adc3-832640e78058-kube-api-access-gvln8" (OuterVolumeSpecName: "kube-api-access-gvln8") pod "415b6ea2-bd07-49eb-adc3-832640e78058" (UID: "415b6ea2-bd07-49eb-adc3-832640e78058"). InnerVolumeSpecName "kube-api-access-gvln8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:11:25 crc kubenswrapper[4826]: I0129 08:11:25.763950 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/415b6ea2-bd07-49eb-adc3-832640e78058-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 08:11:25 crc kubenswrapper[4826]: I0129 08:11:25.763982 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvln8\" (UniqueName: \"kubernetes.io/projected/415b6ea2-bd07-49eb-adc3-832640e78058-kube-api-access-gvln8\") on node \"crc\" DevicePath \"\""
Jan 29 08:11:26 crc kubenswrapper[4826]: I0129 08:11:26.158506 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-m9xx7" event={"ID":"415b6ea2-bd07-49eb-adc3-832640e78058","Type":"ContainerDied","Data":"e9e25294e6fea9352e7230e16ce99fb9cc917c4da9e1fd94b7d77992f69125d4"}
Jan 29 08:11:26 crc kubenswrapper[4826]: I0129 08:11:26.158966 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-m9xx7"
Jan 29 08:11:26 crc kubenswrapper[4826]: I0129 08:11:26.159653 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9e25294e6fea9352e7230e16ce99fb9cc917c4da9e1fd94b7d77992f69125d4"
Jan 29 08:11:26 crc kubenswrapper[4826]: I0129 08:11:26.161860 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-275b-account-create-update-r6hnm" event={"ID":"6785afcf-73dd-42b9-9984-0c73d4883b53","Type":"ContainerDied","Data":"27800c50fbc99289cd5c7cf0b75e5e0870fe90280d5ce9b1fda7fc6fdffaa321"}
Jan 29 08:11:26 crc kubenswrapper[4826]: I0129 08:11:26.161907 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27800c50fbc99289cd5c7cf0b75e5e0870fe90280d5ce9b1fda7fc6fdffaa321"
Jan 29 08:11:26 crc kubenswrapper[4826]: I0129 08:11:26.161968 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-275b-account-create-update-r6hnm"
Jan 29 08:11:27 crc kubenswrapper[4826]: I0129 08:11:27.233194 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-v87n9"]
Jan 29 08:11:27 crc kubenswrapper[4826]: E0129 08:11:27.235536 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6785afcf-73dd-42b9-9984-0c73d4883b53" containerName="mariadb-account-create-update"
Jan 29 08:11:27 crc kubenswrapper[4826]: I0129 08:11:27.235728 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="6785afcf-73dd-42b9-9984-0c73d4883b53" containerName="mariadb-account-create-update"
Jan 29 08:11:27 crc kubenswrapper[4826]: E0129 08:11:27.235893 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="415b6ea2-bd07-49eb-adc3-832640e78058" containerName="mariadb-database-create"
Jan 29 08:11:27 crc kubenswrapper[4826]: I0129 08:11:27.236047 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="415b6ea2-bd07-49eb-adc3-832640e78058" containerName="mariadb-database-create"
Jan 29 08:11:27 crc kubenswrapper[4826]: I0129 08:11:27.236528 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="415b6ea2-bd07-49eb-adc3-832640e78058" containerName="mariadb-database-create"
Jan 29 08:11:27 crc kubenswrapper[4826]: I0129 08:11:27.236734 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="6785afcf-73dd-42b9-9984-0c73d4883b53" containerName="mariadb-account-create-update"
Jan 29 08:11:27 crc kubenswrapper[4826]: I0129 08:11:27.237856 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-v87n9"
Jan 29 08:11:27 crc kubenswrapper[4826]: I0129 08:11:27.240991 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Jan 29 08:11:27 crc kubenswrapper[4826]: I0129 08:11:27.241166 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-rd4cg"
Jan 29 08:11:27 crc kubenswrapper[4826]: I0129 08:11:27.273147 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-v87n9"]
Jan 29 08:11:27 crc kubenswrapper[4826]: I0129 08:11:27.396782 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6023eba8-a02a-4d6d-8975-41e2a8c3e771-db-sync-config-data\") pod \"barbican-db-sync-v87n9\" (UID: \"6023eba8-a02a-4d6d-8975-41e2a8c3e771\") " pod="openstack/barbican-db-sync-v87n9"
Jan 29 08:11:27 crc kubenswrapper[4826]: I0129 08:11:27.396955 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6023eba8-a02a-4d6d-8975-41e2a8c3e771-combined-ca-bundle\") pod \"barbican-db-sync-v87n9\" (UID: \"6023eba8-a02a-4d6d-8975-41e2a8c3e771\") " pod="openstack/barbican-db-sync-v87n9"
Jan 29 08:11:27 crc kubenswrapper[4826]: I0129 08:11:27.397159 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf5kf\" (UniqueName: \"kubernetes.io/projected/6023eba8-a02a-4d6d-8975-41e2a8c3e771-kube-api-access-nf5kf\") pod \"barbican-db-sync-v87n9\" (UID: \"6023eba8-a02a-4d6d-8975-41e2a8c3e771\") " pod="openstack/barbican-db-sync-v87n9"
Jan 29 08:11:27 crc kubenswrapper[4826]: I0129 08:11:27.499794 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6023eba8-a02a-4d6d-8975-41e2a8c3e771-db-sync-config-data\") pod \"barbican-db-sync-v87n9\" (UID: \"6023eba8-a02a-4d6d-8975-41e2a8c3e771\") " pod="openstack/barbican-db-sync-v87n9"
Jan 29 08:11:27 crc kubenswrapper[4826]: I0129 08:11:27.499876 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6023eba8-a02a-4d6d-8975-41e2a8c3e771-combined-ca-bundle\") pod \"barbican-db-sync-v87n9\" (UID: \"6023eba8-a02a-4d6d-8975-41e2a8c3e771\") " pod="openstack/barbican-db-sync-v87n9"
Jan 29 08:11:27 crc kubenswrapper[4826]: I0129 08:11:27.499941 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf5kf\" (UniqueName: \"kubernetes.io/projected/6023eba8-a02a-4d6d-8975-41e2a8c3e771-kube-api-access-nf5kf\") pod \"barbican-db-sync-v87n9\" (UID: \"6023eba8-a02a-4d6d-8975-41e2a8c3e771\") " pod="openstack/barbican-db-sync-v87n9"
Jan 29 08:11:27 crc kubenswrapper[4826]: I0129 08:11:27.514418 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6023eba8-a02a-4d6d-8975-41e2a8c3e771-db-sync-config-data\") pod \"barbican-db-sync-v87n9\" (UID: \"6023eba8-a02a-4d6d-8975-41e2a8c3e771\") " pod="openstack/barbican-db-sync-v87n9"
Jan 29 08:11:27 crc kubenswrapper[4826]: I0129 08:11:27.514462 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6023eba8-a02a-4d6d-8975-41e2a8c3e771-combined-ca-bundle\") pod \"barbican-db-sync-v87n9\" (UID: \"6023eba8-a02a-4d6d-8975-41e2a8c3e771\") " pod="openstack/barbican-db-sync-v87n9"
Jan 29 08:11:27 crc kubenswrapper[4826]: I0129 08:11:27.523153 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf5kf\" (UniqueName: \"kubernetes.io/projected/6023eba8-a02a-4d6d-8975-41e2a8c3e771-kube-api-access-nf5kf\") pod \"barbican-db-sync-v87n9\" (UID: \"6023eba8-a02a-4d6d-8975-41e2a8c3e771\") " pod="openstack/barbican-db-sync-v87n9"
Jan 29 08:11:27 crc kubenswrapper[4826]: I0129 08:11:27.582088 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-v87n9"
Jan 29 08:11:27 crc kubenswrapper[4826]: W0129 08:11:27.844170 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6023eba8_a02a_4d6d_8975_41e2a8c3e771.slice/crio-56c4b21ffcb1aaa9d93ca53ea2b49530ad1b5024f4d4d5cec9af11acea225d07 WatchSource:0}: Error finding container 56c4b21ffcb1aaa9d93ca53ea2b49530ad1b5024f4d4d5cec9af11acea225d07: Status 404 returned error can't find the container with id 56c4b21ffcb1aaa9d93ca53ea2b49530ad1b5024f4d4d5cec9af11acea225d07
Jan 29 08:11:27 crc kubenswrapper[4826]: I0129 08:11:27.844804 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-v87n9"]
Jan 29 08:11:28 crc kubenswrapper[4826]: I0129 08:11:28.183688 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-v87n9" event={"ID":"6023eba8-a02a-4d6d-8975-41e2a8c3e771","Type":"ContainerStarted","Data":"56c4b21ffcb1aaa9d93ca53ea2b49530ad1b5024f4d4d5cec9af11acea225d07"}
Jan 29 08:11:33 crc kubenswrapper[4826]: I0129 08:11:33.251369 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-v87n9" event={"ID":"6023eba8-a02a-4d6d-8975-41e2a8c3e771","Type":"ContainerStarted","Data":"721c203f56bc7b697ba97fa4ded2f30d628d034c75837ea8f22e5af5b24fb9a2"}
Jan 29 08:11:33 crc kubenswrapper[4826]: I0129 08:11:33.294236 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-v87n9" podStartSLOduration=1.526520191 podStartE2EDuration="6.294209859s" podCreationTimestamp="2026-01-29 08:11:27 +0000 UTC" firstStartedPulling="2026-01-29 08:11:27.847213641 +0000 UTC m=+5271.709006710" lastFinishedPulling="2026-01-29 08:11:32.614903309 +0000 UTC m=+5276.476696378" observedRunningTime="2026-01-29 08:11:33.276514314 +0000 UTC m=+5277.138307413" watchObservedRunningTime="2026-01-29 08:11:33.294209859 +0000 UTC m=+5277.156002968"
Jan 29 08:11:34 crc kubenswrapper[4826]: I0129 08:11:34.267263 4826 generic.go:334] "Generic (PLEG): container finished" podID="6023eba8-a02a-4d6d-8975-41e2a8c3e771" containerID="721c203f56bc7b697ba97fa4ded2f30d628d034c75837ea8f22e5af5b24fb9a2" exitCode=0
Jan 29 08:11:34 crc kubenswrapper[4826]: I0129 08:11:34.267397 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-v87n9" event={"ID":"6023eba8-a02a-4d6d-8975-41e2a8c3e771","Type":"ContainerDied","Data":"721c203f56bc7b697ba97fa4ded2f30d628d034c75837ea8f22e5af5b24fb9a2"}
Jan 29 08:11:35 crc kubenswrapper[4826]: I0129 08:11:35.680458 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-v87n9"
Jan 29 08:11:35 crc kubenswrapper[4826]: I0129 08:11:35.739254 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6023eba8-a02a-4d6d-8975-41e2a8c3e771-db-sync-config-data\") pod \"6023eba8-a02a-4d6d-8975-41e2a8c3e771\" (UID: \"6023eba8-a02a-4d6d-8975-41e2a8c3e771\") "
Jan 29 08:11:35 crc kubenswrapper[4826]: I0129 08:11:35.739554 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6023eba8-a02a-4d6d-8975-41e2a8c3e771-combined-ca-bundle\") pod \"6023eba8-a02a-4d6d-8975-41e2a8c3e771\" (UID: \"6023eba8-a02a-4d6d-8975-41e2a8c3e771\") "
Jan 29 08:11:35 crc kubenswrapper[4826]: I0129 08:11:35.739756 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nf5kf\" (UniqueName: \"kubernetes.io/projected/6023eba8-a02a-4d6d-8975-41e2a8c3e771-kube-api-access-nf5kf\") pod \"6023eba8-a02a-4d6d-8975-41e2a8c3e771\" (UID: \"6023eba8-a02a-4d6d-8975-41e2a8c3e771\") "
Jan 29 08:11:35 crc kubenswrapper[4826]: I0129 08:11:35.751619 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6023eba8-a02a-4d6d-8975-41e2a8c3e771-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "6023eba8-a02a-4d6d-8975-41e2a8c3e771" (UID: "6023eba8-a02a-4d6d-8975-41e2a8c3e771"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:11:35 crc kubenswrapper[4826]: I0129 08:11:35.752787 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6023eba8-a02a-4d6d-8975-41e2a8c3e771-kube-api-access-nf5kf" (OuterVolumeSpecName: "kube-api-access-nf5kf") pod "6023eba8-a02a-4d6d-8975-41e2a8c3e771" (UID: "6023eba8-a02a-4d6d-8975-41e2a8c3e771"). InnerVolumeSpecName "kube-api-access-nf5kf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:11:35 crc kubenswrapper[4826]: I0129 08:11:35.782576 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6023eba8-a02a-4d6d-8975-41e2a8c3e771-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6023eba8-a02a-4d6d-8975-41e2a8c3e771" (UID: "6023eba8-a02a-4d6d-8975-41e2a8c3e771"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:11:35 crc kubenswrapper[4826]: I0129 08:11:35.808647 4826 scope.go:117] "RemoveContainer" containerID="491d2214652be539c5a02abd82d2f7f7b125c3f1a64568b35d69e37bd575365a"
Jan 29 08:11:35 crc kubenswrapper[4826]: E0129 08:11:35.809033 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 08:11:35 crc kubenswrapper[4826]: I0129 08:11:35.842461 4826 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6023eba8-a02a-4d6d-8975-41e2a8c3e771-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 08:11:35 crc kubenswrapper[4826]: I0129 08:11:35.842493 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6023eba8-a02a-4d6d-8975-41e2a8c3e771-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 08:11:35 crc kubenswrapper[4826]: I0129 08:11:35.842503 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nf5kf\" (UniqueName: \"kubernetes.io/projected/6023eba8-a02a-4d6d-8975-41e2a8c3e771-kube-api-access-nf5kf\") on node \"crc\" DevicePath \"\""
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.287992 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-v87n9" event={"ID":"6023eba8-a02a-4d6d-8975-41e2a8c3e771","Type":"ContainerDied","Data":"56c4b21ffcb1aaa9d93ca53ea2b49530ad1b5024f4d4d5cec9af11acea225d07"}
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.288050 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56c4b21ffcb1aaa9d93ca53ea2b49530ad1b5024f4d4d5cec9af11acea225d07"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.288059 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-v87n9"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.520384 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7757bff99c-cgbsc"]
Jan 29 08:11:36 crc kubenswrapper[4826]: E0129 08:11:36.520764 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6023eba8-a02a-4d6d-8975-41e2a8c3e771" containerName="barbican-db-sync"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.520786 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="6023eba8-a02a-4d6d-8975-41e2a8c3e771" containerName="barbican-db-sync"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.520984 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="6023eba8-a02a-4d6d-8975-41e2a8c3e771" containerName="barbican-db-sync"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.522023 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7757bff99c-cgbsc"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.525529 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.525735 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-rd4cg"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.530332 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.533046 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7757bff99c-cgbsc"]
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.551381 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-59b95dcdb6-fwmqw"]
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.555718 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a714542-5a9f-4e52-a120-aa5340d5c21e-config-data-custom\") pod \"barbican-worker-7757bff99c-cgbsc\" (UID: \"0a714542-5a9f-4e52-a120-aa5340d5c21e\") " pod="openstack/barbican-worker-7757bff99c-cgbsc"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.556082 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a714542-5a9f-4e52-a120-aa5340d5c21e-logs\") pod \"barbican-worker-7757bff99c-cgbsc\" (UID: \"0a714542-5a9f-4e52-a120-aa5340d5c21e\") " pod="openstack/barbican-worker-7757bff99c-cgbsc"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.556238 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a714542-5a9f-4e52-a120-aa5340d5c21e-config-data\") pod \"barbican-worker-7757bff99c-cgbsc\" (UID: \"0a714542-5a9f-4e52-a120-aa5340d5c21e\") " pod="openstack/barbican-worker-7757bff99c-cgbsc"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.555759 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-59b95dcdb6-fwmqw"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.556392 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmqkf\" (UniqueName: \"kubernetes.io/projected/0a714542-5a9f-4e52-a120-aa5340d5c21e-kube-api-access-fmqkf\") pod \"barbican-worker-7757bff99c-cgbsc\" (UID: \"0a714542-5a9f-4e52-a120-aa5340d5c21e\") " pod="openstack/barbican-worker-7757bff99c-cgbsc"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.556612 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a714542-5a9f-4e52-a120-aa5340d5c21e-combined-ca-bundle\") pod \"barbican-worker-7757bff99c-cgbsc\" (UID: \"0a714542-5a9f-4e52-a120-aa5340d5c21e\") " pod="openstack/barbican-worker-7757bff99c-cgbsc"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.559187 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.585708 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-59b95dcdb6-fwmqw"]
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.658180 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0b58a6f-0e9e-4bab-9492-50efc2437486-config-data-custom\") pod \"barbican-keystone-listener-59b95dcdb6-fwmqw\" (UID: \"b0b58a6f-0e9e-4bab-9492-50efc2437486\") " pod="openstack/barbican-keystone-listener-59b95dcdb6-fwmqw"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.658241 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a714542-5a9f-4e52-a120-aa5340d5c21e-config-data-custom\") pod \"barbican-worker-7757bff99c-cgbsc\" (UID: \"0a714542-5a9f-4e52-a120-aa5340d5c21e\") " pod="openstack/barbican-worker-7757bff99c-cgbsc"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.658269 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0b58a6f-0e9e-4bab-9492-50efc2437486-config-data\") pod \"barbican-keystone-listener-59b95dcdb6-fwmqw\" (UID: \"b0b58a6f-0e9e-4bab-9492-50efc2437486\") " pod="openstack/barbican-keystone-listener-59b95dcdb6-fwmqw"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.658287 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0b58a6f-0e9e-4bab-9492-50efc2437486-combined-ca-bundle\") pod \"barbican-keystone-listener-59b95dcdb6-fwmqw\" (UID: \"b0b58a6f-0e9e-4bab-9492-50efc2437486\") " pod="openstack/barbican-keystone-listener-59b95dcdb6-fwmqw"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.658333 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xszlf\" (UniqueName: \"kubernetes.io/projected/b0b58a6f-0e9e-4bab-9492-50efc2437486-kube-api-access-xszlf\") pod \"barbican-keystone-listener-59b95dcdb6-fwmqw\" (UID: \"b0b58a6f-0e9e-4bab-9492-50efc2437486\") " pod="openstack/barbican-keystone-listener-59b95dcdb6-fwmqw"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.658368 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a714542-5a9f-4e52-a120-aa5340d5c21e-logs\") pod \"barbican-worker-7757bff99c-cgbsc\" (UID: \"0a714542-5a9f-4e52-a120-aa5340d5c21e\") " pod="openstack/barbican-worker-7757bff99c-cgbsc"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.658384 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a714542-5a9f-4e52-a120-aa5340d5c21e-config-data\") pod \"barbican-worker-7757bff99c-cgbsc\" (UID: \"0a714542-5a9f-4e52-a120-aa5340d5c21e\") " pod="openstack/barbican-worker-7757bff99c-cgbsc"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.658407 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmqkf\" (UniqueName: \"kubernetes.io/projected/0a714542-5a9f-4e52-a120-aa5340d5c21e-kube-api-access-fmqkf\") pod \"barbican-worker-7757bff99c-cgbsc\" (UID: \"0a714542-5a9f-4e52-a120-aa5340d5c21e\") " pod="openstack/barbican-worker-7757bff99c-cgbsc"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.658453 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a714542-5a9f-4e52-a120-aa5340d5c21e-combined-ca-bundle\") pod \"barbican-worker-7757bff99c-cgbsc\" (UID: \"0a714542-5a9f-4e52-a120-aa5340d5c21e\") " pod="openstack/barbican-worker-7757bff99c-cgbsc"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.658476 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0b58a6f-0e9e-4bab-9492-50efc2437486-logs\") pod \"barbican-keystone-listener-59b95dcdb6-fwmqw\" (UID: \"b0b58a6f-0e9e-4bab-9492-50efc2437486\") " pod="openstack/barbican-keystone-listener-59b95dcdb6-fwmqw"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.664656 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a714542-5a9f-4e52-a120-aa5340d5c21e-logs\") pod \"barbican-worker-7757bff99c-cgbsc\" (UID: \"0a714542-5a9f-4e52-a120-aa5340d5c21e\") " pod="openstack/barbican-worker-7757bff99c-cgbsc"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.667436 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-756bb75757-mdq4x"]
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.668719 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-756bb75757-mdq4x"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.676063 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a714542-5a9f-4e52-a120-aa5340d5c21e-combined-ca-bundle\") pod \"barbican-worker-7757bff99c-cgbsc\" (UID: \"0a714542-5a9f-4e52-a120-aa5340d5c21e\") " pod="openstack/barbican-worker-7757bff99c-cgbsc"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.683830 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a714542-5a9f-4e52-a120-aa5340d5c21e-config-data-custom\") pod \"barbican-worker-7757bff99c-cgbsc\" (UID: \"0a714542-5a9f-4e52-a120-aa5340d5c21e\") " pod="openstack/barbican-worker-7757bff99c-cgbsc"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.686726 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a714542-5a9f-4e52-a120-aa5340d5c21e-config-data\") pod \"barbican-worker-7757bff99c-cgbsc\" (UID: \"0a714542-5a9f-4e52-a120-aa5340d5c21e\") " pod="openstack/barbican-worker-7757bff99c-cgbsc"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.700911 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmqkf\" (UniqueName: \"kubernetes.io/projected/0a714542-5a9f-4e52-a120-aa5340d5c21e-kube-api-access-fmqkf\") pod \"barbican-worker-7757bff99c-cgbsc\" (UID: \"0a714542-5a9f-4e52-a120-aa5340d5c21e\") " pod="openstack/barbican-worker-7757bff99c-cgbsc"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.721093 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-756bb75757-mdq4x"]
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.762257 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0b58a6f-0e9e-4bab-9492-50efc2437486-config-data-custom\") pod \"barbican-keystone-listener-59b95dcdb6-fwmqw\" (UID: \"b0b58a6f-0e9e-4bab-9492-50efc2437486\") " pod="openstack/barbican-keystone-listener-59b95dcdb6-fwmqw"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.762339 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0b58a6f-0e9e-4bab-9492-50efc2437486-config-data\") pod \"barbican-keystone-listener-59b95dcdb6-fwmqw\" (UID: \"b0b58a6f-0e9e-4bab-9492-50efc2437486\") " pod="openstack/barbican-keystone-listener-59b95dcdb6-fwmqw"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.762366 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n88t9\" (UniqueName: \"kubernetes.io/projected/f2c964f6-3323-4fe8-949e-066624fc29d7-kube-api-access-n88t9\") pod \"dnsmasq-dns-756bb75757-mdq4x\" (UID: \"f2c964f6-3323-4fe8-949e-066624fc29d7\") " pod="openstack/dnsmasq-dns-756bb75757-mdq4x"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.762388 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0b58a6f-0e9e-4bab-9492-50efc2437486-combined-ca-bundle\") pod \"barbican-keystone-listener-59b95dcdb6-fwmqw\" (UID: \"b0b58a6f-0e9e-4bab-9492-50efc2437486\") " pod="openstack/barbican-keystone-listener-59b95dcdb6-fwmqw"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.762421 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xszlf\" (UniqueName: \"kubernetes.io/projected/b0b58a6f-0e9e-4bab-9492-50efc2437486-kube-api-access-xszlf\") pod \"barbican-keystone-listener-59b95dcdb6-fwmqw\" (UID: \"b0b58a6f-0e9e-4bab-9492-50efc2437486\") " pod="openstack/barbican-keystone-listener-59b95dcdb6-fwmqw"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.762455 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2c964f6-3323-4fe8-949e-066624fc29d7-dns-svc\") pod \"dnsmasq-dns-756bb75757-mdq4x\" (UID: \"f2c964f6-3323-4fe8-949e-066624fc29d7\") " pod="openstack/dnsmasq-dns-756bb75757-mdq4x"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.762473 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2c964f6-3323-4fe8-949e-066624fc29d7-ovsdbserver-sb\") pod \"dnsmasq-dns-756bb75757-mdq4x\" (UID: \"f2c964f6-3323-4fe8-949e-066624fc29d7\") " pod="openstack/dnsmasq-dns-756bb75757-mdq4x"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.762508 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2c964f6-3323-4fe8-949e-066624fc29d7-ovsdbserver-nb\") pod \"dnsmasq-dns-756bb75757-mdq4x\" (UID: \"f2c964f6-3323-4fe8-949e-066624fc29d7\") " pod="openstack/dnsmasq-dns-756bb75757-mdq4x"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.762530 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0b58a6f-0e9e-4bab-9492-50efc2437486-logs\") pod \"barbican-keystone-listener-59b95dcdb6-fwmqw\" (UID: \"b0b58a6f-0e9e-4bab-9492-50efc2437486\") " pod="openstack/barbican-keystone-listener-59b95dcdb6-fwmqw"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.762555 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2c964f6-3323-4fe8-949e-066624fc29d7-config\") pod \"dnsmasq-dns-756bb75757-mdq4x\" (UID: \"f2c964f6-3323-4fe8-949e-066624fc29d7\") " pod="openstack/dnsmasq-dns-756bb75757-mdq4x"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.794104 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0b58a6f-0e9e-4bab-9492-50efc2437486-logs\") pod \"barbican-keystone-listener-59b95dcdb6-fwmqw\" (UID: \"b0b58a6f-0e9e-4bab-9492-50efc2437486\") " pod="openstack/barbican-keystone-listener-59b95dcdb6-fwmqw"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.794890 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0b58a6f-0e9e-4bab-9492-50efc2437486-combined-ca-bundle\") pod \"barbican-keystone-listener-59b95dcdb6-fwmqw\" (UID: \"b0b58a6f-0e9e-4bab-9492-50efc2437486\") " pod="openstack/barbican-keystone-listener-59b95dcdb6-fwmqw"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.795202 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0b58a6f-0e9e-4bab-9492-50efc2437486-config-data\") pod \"barbican-keystone-listener-59b95dcdb6-fwmqw\" (UID: \"b0b58a6f-0e9e-4bab-9492-50efc2437486\") " pod="openstack/barbican-keystone-listener-59b95dcdb6-fwmqw"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.797361 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.870206 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xszlf\" (UniqueName: \"kubernetes.io/projected/b0b58a6f-0e9e-4bab-9492-50efc2437486-kube-api-access-xszlf\") pod \"barbican-keystone-listener-59b95dcdb6-fwmqw\" (UID: \"b0b58a6f-0e9e-4bab-9492-50efc2437486\") " pod="openstack/barbican-keystone-listener-59b95dcdb6-fwmqw"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.885525 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0b58a6f-0e9e-4bab-9492-50efc2437486-config-data-custom\") pod \"barbican-keystone-listener-59b95dcdb6-fwmqw\" (UID: \"b0b58a6f-0e9e-4bab-9492-50efc2437486\") " pod="openstack/barbican-keystone-listener-59b95dcdb6-fwmqw"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.912551 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2c964f6-3323-4fe8-949e-066624fc29d7-dns-svc\") pod \"dnsmasq-dns-756bb75757-mdq4x\" (UID: \"f2c964f6-3323-4fe8-949e-066624fc29d7\") " pod="openstack/dnsmasq-dns-756bb75757-mdq4x"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.920846 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2c964f6-3323-4fe8-949e-066624fc29d7-ovsdbserver-sb\") pod \"dnsmasq-dns-756bb75757-mdq4x\" (UID: \"f2c964f6-3323-4fe8-949e-066624fc29d7\") " pod="openstack/dnsmasq-dns-756bb75757-mdq4x"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.921020 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2c964f6-3323-4fe8-949e-066624fc29d7-ovsdbserver-nb\") pod \"dnsmasq-dns-756bb75757-mdq4x\" (UID: \"f2c964f6-3323-4fe8-949e-066624fc29d7\") " pod="openstack/dnsmasq-dns-756bb75757-mdq4x"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.921124 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2c964f6-3323-4fe8-949e-066624fc29d7-config\") pod \"dnsmasq-dns-756bb75757-mdq4x\" (UID: \"f2c964f6-3323-4fe8-949e-066624fc29d7\") " pod="openstack/dnsmasq-dns-756bb75757-mdq4x"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.921309 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n88t9\" (UniqueName: \"kubernetes.io/projected/f2c964f6-3323-4fe8-949e-066624fc29d7-kube-api-access-n88t9\") pod \"dnsmasq-dns-756bb75757-mdq4x\" (UID: \"f2c964f6-3323-4fe8-949e-066624fc29d7\") " pod="openstack/dnsmasq-dns-756bb75757-mdq4x"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.922562 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2c964f6-3323-4fe8-949e-066624fc29d7-ovsdbserver-nb\") pod \"dnsmasq-dns-756bb75757-mdq4x\" (UID: \"f2c964f6-3323-4fe8-949e-066624fc29d7\") " pod="openstack/dnsmasq-dns-756bb75757-mdq4x"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.923059 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-rd4cg"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.923496 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2c964f6-3323-4fe8-949e-066624fc29d7-dns-svc\") pod \"dnsmasq-dns-756bb75757-mdq4x\" (UID: \"f2c964f6-3323-4fe8-949e-066624fc29d7\") " pod="openstack/dnsmasq-dns-756bb75757-mdq4x"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.924286 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2c964f6-3323-4fe8-949e-066624fc29d7-config\") pod \"dnsmasq-dns-756bb75757-mdq4x\" (UID: \"f2c964f6-3323-4fe8-949e-066624fc29d7\") " pod="openstack/dnsmasq-dns-756bb75757-mdq4x"
Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.929684 4826
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-85bf844b76-54kvs"] Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.931466 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7757bff99c-cgbsc" Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.932167 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-59b95dcdb6-fwmqw" Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.932635 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-85bf844b76-54kvs"] Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.932711 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-85bf844b76-54kvs" Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.940546 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n88t9\" (UniqueName: \"kubernetes.io/projected/f2c964f6-3323-4fe8-949e-066624fc29d7-kube-api-access-n88t9\") pod \"dnsmasq-dns-756bb75757-mdq4x\" (UID: \"f2c964f6-3323-4fe8-949e-066624fc29d7\") " pod="openstack/dnsmasq-dns-756bb75757-mdq4x" Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.940867 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 29 08:11:36 crc kubenswrapper[4826]: I0129 08:11:36.946009 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2c964f6-3323-4fe8-949e-066624fc29d7-ovsdbserver-sb\") pod \"dnsmasq-dns-756bb75757-mdq4x\" (UID: \"f2c964f6-3323-4fe8-949e-066624fc29d7\") " pod="openstack/dnsmasq-dns-756bb75757-mdq4x" Jan 29 08:11:37 crc kubenswrapper[4826]: I0129 08:11:37.131340 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/dc166452-75bf-4ed5-b0c1-e5a78d366b31-config-data\") pod \"barbican-api-85bf844b76-54kvs\" (UID: \"dc166452-75bf-4ed5-b0c1-e5a78d366b31\") " pod="openstack/barbican-api-85bf844b76-54kvs" Jan 29 08:11:37 crc kubenswrapper[4826]: I0129 08:11:37.131624 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc166452-75bf-4ed5-b0c1-e5a78d366b31-combined-ca-bundle\") pod \"barbican-api-85bf844b76-54kvs\" (UID: \"dc166452-75bf-4ed5-b0c1-e5a78d366b31\") " pod="openstack/barbican-api-85bf844b76-54kvs" Jan 29 08:11:37 crc kubenswrapper[4826]: I0129 08:11:37.131643 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjl4f\" (UniqueName: \"kubernetes.io/projected/dc166452-75bf-4ed5-b0c1-e5a78d366b31-kube-api-access-wjl4f\") pod \"barbican-api-85bf844b76-54kvs\" (UID: \"dc166452-75bf-4ed5-b0c1-e5a78d366b31\") " pod="openstack/barbican-api-85bf844b76-54kvs" Jan 29 08:11:37 crc kubenswrapper[4826]: I0129 08:11:37.131679 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc166452-75bf-4ed5-b0c1-e5a78d366b31-logs\") pod \"barbican-api-85bf844b76-54kvs\" (UID: \"dc166452-75bf-4ed5-b0c1-e5a78d366b31\") " pod="openstack/barbican-api-85bf844b76-54kvs" Jan 29 08:11:37 crc kubenswrapper[4826]: I0129 08:11:37.131733 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc166452-75bf-4ed5-b0c1-e5a78d366b31-config-data-custom\") pod \"barbican-api-85bf844b76-54kvs\" (UID: \"dc166452-75bf-4ed5-b0c1-e5a78d366b31\") " pod="openstack/barbican-api-85bf844b76-54kvs" Jan 29 08:11:37 crc kubenswrapper[4826]: I0129 08:11:37.232811 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-756bb75757-mdq4x" Jan 29 08:11:37 crc kubenswrapper[4826]: I0129 08:11:37.233273 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc166452-75bf-4ed5-b0c1-e5a78d366b31-logs\") pod \"barbican-api-85bf844b76-54kvs\" (UID: \"dc166452-75bf-4ed5-b0c1-e5a78d366b31\") " pod="openstack/barbican-api-85bf844b76-54kvs" Jan 29 08:11:37 crc kubenswrapper[4826]: I0129 08:11:37.233398 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc166452-75bf-4ed5-b0c1-e5a78d366b31-config-data-custom\") pod \"barbican-api-85bf844b76-54kvs\" (UID: \"dc166452-75bf-4ed5-b0c1-e5a78d366b31\") " pod="openstack/barbican-api-85bf844b76-54kvs" Jan 29 08:11:37 crc kubenswrapper[4826]: I0129 08:11:37.233466 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc166452-75bf-4ed5-b0c1-e5a78d366b31-config-data\") pod \"barbican-api-85bf844b76-54kvs\" (UID: \"dc166452-75bf-4ed5-b0c1-e5a78d366b31\") " pod="openstack/barbican-api-85bf844b76-54kvs" Jan 29 08:11:37 crc kubenswrapper[4826]: I0129 08:11:37.233527 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc166452-75bf-4ed5-b0c1-e5a78d366b31-combined-ca-bundle\") pod \"barbican-api-85bf844b76-54kvs\" (UID: \"dc166452-75bf-4ed5-b0c1-e5a78d366b31\") " pod="openstack/barbican-api-85bf844b76-54kvs" Jan 29 08:11:37 crc kubenswrapper[4826]: I0129 08:11:37.233546 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjl4f\" (UniqueName: \"kubernetes.io/projected/dc166452-75bf-4ed5-b0c1-e5a78d366b31-kube-api-access-wjl4f\") pod \"barbican-api-85bf844b76-54kvs\" (UID: \"dc166452-75bf-4ed5-b0c1-e5a78d366b31\") " 
pod="openstack/barbican-api-85bf844b76-54kvs" Jan 29 08:11:37 crc kubenswrapper[4826]: I0129 08:11:37.234150 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc166452-75bf-4ed5-b0c1-e5a78d366b31-logs\") pod \"barbican-api-85bf844b76-54kvs\" (UID: \"dc166452-75bf-4ed5-b0c1-e5a78d366b31\") " pod="openstack/barbican-api-85bf844b76-54kvs" Jan 29 08:11:37 crc kubenswrapper[4826]: I0129 08:11:37.238620 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc166452-75bf-4ed5-b0c1-e5a78d366b31-combined-ca-bundle\") pod \"barbican-api-85bf844b76-54kvs\" (UID: \"dc166452-75bf-4ed5-b0c1-e5a78d366b31\") " pod="openstack/barbican-api-85bf844b76-54kvs" Jan 29 08:11:37 crc kubenswrapper[4826]: I0129 08:11:37.239499 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc166452-75bf-4ed5-b0c1-e5a78d366b31-config-data\") pod \"barbican-api-85bf844b76-54kvs\" (UID: \"dc166452-75bf-4ed5-b0c1-e5a78d366b31\") " pod="openstack/barbican-api-85bf844b76-54kvs" Jan 29 08:11:37 crc kubenswrapper[4826]: I0129 08:11:37.244055 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc166452-75bf-4ed5-b0c1-e5a78d366b31-config-data-custom\") pod \"barbican-api-85bf844b76-54kvs\" (UID: \"dc166452-75bf-4ed5-b0c1-e5a78d366b31\") " pod="openstack/barbican-api-85bf844b76-54kvs" Jan 29 08:11:37 crc kubenswrapper[4826]: I0129 08:11:37.252076 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjl4f\" (UniqueName: \"kubernetes.io/projected/dc166452-75bf-4ed5-b0c1-e5a78d366b31-kube-api-access-wjl4f\") pod \"barbican-api-85bf844b76-54kvs\" (UID: \"dc166452-75bf-4ed5-b0c1-e5a78d366b31\") " pod="openstack/barbican-api-85bf844b76-54kvs" Jan 29 08:11:37 crc kubenswrapper[4826]: 
I0129 08:11:37.264692 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-85bf844b76-54kvs" Jan 29 08:11:37 crc kubenswrapper[4826]: I0129 08:11:37.406717 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-59b95dcdb6-fwmqw"] Jan 29 08:11:37 crc kubenswrapper[4826]: I0129 08:11:37.466632 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7757bff99c-cgbsc"] Jan 29 08:11:37 crc kubenswrapper[4826]: W0129 08:11:37.474988 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a714542_5a9f_4e52_a120_aa5340d5c21e.slice/crio-f7c67bb0c13e17ede037a4f20f1032cecca6e3bf37c79aaef5003fa340cb2b26 WatchSource:0}: Error finding container f7c67bb0c13e17ede037a4f20f1032cecca6e3bf37c79aaef5003fa340cb2b26: Status 404 returned error can't find the container with id f7c67bb0c13e17ede037a4f20f1032cecca6e3bf37c79aaef5003fa340cb2b26 Jan 29 08:11:37 crc kubenswrapper[4826]: I0129 08:11:37.735870 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-756bb75757-mdq4x"] Jan 29 08:11:37 crc kubenswrapper[4826]: W0129 08:11:37.751382 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2c964f6_3323_4fe8_949e_066624fc29d7.slice/crio-5f9b5b78f6a98edcdc066ae7459151d6e715be1257c33dd4e8b227389b4e0ee2 WatchSource:0}: Error finding container 5f9b5b78f6a98edcdc066ae7459151d6e715be1257c33dd4e8b227389b4e0ee2: Status 404 returned error can't find the container with id 5f9b5b78f6a98edcdc066ae7459151d6e715be1257c33dd4e8b227389b4e0ee2 Jan 29 08:11:37 crc kubenswrapper[4826]: I0129 08:11:37.838264 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-85bf844b76-54kvs"] Jan 29 08:11:38 crc kubenswrapper[4826]: I0129 08:11:38.316626 4826 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/barbican-worker-7757bff99c-cgbsc" event={"ID":"0a714542-5a9f-4e52-a120-aa5340d5c21e","Type":"ContainerStarted","Data":"f7c67bb0c13e17ede037a4f20f1032cecca6e3bf37c79aaef5003fa340cb2b26"} Jan 29 08:11:38 crc kubenswrapper[4826]: I0129 08:11:38.318672 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-59b95dcdb6-fwmqw" event={"ID":"b0b58a6f-0e9e-4bab-9492-50efc2437486","Type":"ContainerStarted","Data":"56402e6676a6d0b592a6646b6c8e87d24a009cac5a8a30a94317eae6a732f0ae"} Jan 29 08:11:38 crc kubenswrapper[4826]: I0129 08:11:38.320046 4826 generic.go:334] "Generic (PLEG): container finished" podID="f2c964f6-3323-4fe8-949e-066624fc29d7" containerID="bb02234ad800074f69ecf93dd410c55eb3fea27843e1d7c41e8ec580b90760e5" exitCode=0 Jan 29 08:11:38 crc kubenswrapper[4826]: I0129 08:11:38.320114 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-756bb75757-mdq4x" event={"ID":"f2c964f6-3323-4fe8-949e-066624fc29d7","Type":"ContainerDied","Data":"bb02234ad800074f69ecf93dd410c55eb3fea27843e1d7c41e8ec580b90760e5"} Jan 29 08:11:38 crc kubenswrapper[4826]: I0129 08:11:38.320170 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-756bb75757-mdq4x" event={"ID":"f2c964f6-3323-4fe8-949e-066624fc29d7","Type":"ContainerStarted","Data":"5f9b5b78f6a98edcdc066ae7459151d6e715be1257c33dd4e8b227389b4e0ee2"} Jan 29 08:11:38 crc kubenswrapper[4826]: I0129 08:11:38.322702 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85bf844b76-54kvs" event={"ID":"dc166452-75bf-4ed5-b0c1-e5a78d366b31","Type":"ContainerStarted","Data":"3a160638879c093b8b41f4408941314fb4c22bf89c8093d75d067549cb847bdb"} Jan 29 08:11:38 crc kubenswrapper[4826]: I0129 08:11:38.322730 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85bf844b76-54kvs" 
event={"ID":"dc166452-75bf-4ed5-b0c1-e5a78d366b31","Type":"ContainerStarted","Data":"af81f51797ed66c16eebf0e8c54cfd3e9e2db7c7e97f35bc102c826cd8787145"} Jan 29 08:11:38 crc kubenswrapper[4826]: I0129 08:11:38.322740 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85bf844b76-54kvs" event={"ID":"dc166452-75bf-4ed5-b0c1-e5a78d366b31","Type":"ContainerStarted","Data":"ed68b47d1b534ec1a1c525432f8677385937538aa0eea557510ef7d477b3692e"} Jan 29 08:11:38 crc kubenswrapper[4826]: I0129 08:11:38.322881 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-85bf844b76-54kvs" Jan 29 08:11:38 crc kubenswrapper[4826]: I0129 08:11:38.322989 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-85bf844b76-54kvs" Jan 29 08:11:38 crc kubenswrapper[4826]: I0129 08:11:38.361947 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-85bf844b76-54kvs" podStartSLOduration=2.3619194820000002 podStartE2EDuration="2.361919482s" podCreationTimestamp="2026-01-29 08:11:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:11:38.356237902 +0000 UTC m=+5282.218030971" watchObservedRunningTime="2026-01-29 08:11:38.361919482 +0000 UTC m=+5282.223712551" Jan 29 08:11:38 crc kubenswrapper[4826]: I0129 08:11:38.806201 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7745d4877d-gkrhq"] Jan 29 08:11:38 crc kubenswrapper[4826]: I0129 08:11:38.809248 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7745d4877d-gkrhq" Jan 29 08:11:38 crc kubenswrapper[4826]: I0129 08:11:38.811068 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 29 08:11:38 crc kubenswrapper[4826]: I0129 08:11:38.812464 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 29 08:11:38 crc kubenswrapper[4826]: I0129 08:11:38.868360 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08790220-3de0-47f4-a6b2-87fcf15ecdfa-combined-ca-bundle\") pod \"barbican-api-7745d4877d-gkrhq\" (UID: \"08790220-3de0-47f4-a6b2-87fcf15ecdfa\") " pod="openstack/barbican-api-7745d4877d-gkrhq" Jan 29 08:11:38 crc kubenswrapper[4826]: I0129 08:11:38.868431 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08790220-3de0-47f4-a6b2-87fcf15ecdfa-config-data-custom\") pod \"barbican-api-7745d4877d-gkrhq\" (UID: \"08790220-3de0-47f4-a6b2-87fcf15ecdfa\") " pod="openstack/barbican-api-7745d4877d-gkrhq" Jan 29 08:11:38 crc kubenswrapper[4826]: I0129 08:11:38.868702 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08790220-3de0-47f4-a6b2-87fcf15ecdfa-internal-tls-certs\") pod \"barbican-api-7745d4877d-gkrhq\" (UID: \"08790220-3de0-47f4-a6b2-87fcf15ecdfa\") " pod="openstack/barbican-api-7745d4877d-gkrhq" Jan 29 08:11:38 crc kubenswrapper[4826]: I0129 08:11:38.868742 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tczrq\" (UniqueName: \"kubernetes.io/projected/08790220-3de0-47f4-a6b2-87fcf15ecdfa-kube-api-access-tczrq\") pod \"barbican-api-7745d4877d-gkrhq\" (UID: 
\"08790220-3de0-47f4-a6b2-87fcf15ecdfa\") " pod="openstack/barbican-api-7745d4877d-gkrhq" Jan 29 08:11:38 crc kubenswrapper[4826]: I0129 08:11:38.868788 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08790220-3de0-47f4-a6b2-87fcf15ecdfa-logs\") pod \"barbican-api-7745d4877d-gkrhq\" (UID: \"08790220-3de0-47f4-a6b2-87fcf15ecdfa\") " pod="openstack/barbican-api-7745d4877d-gkrhq" Jan 29 08:11:38 crc kubenswrapper[4826]: I0129 08:11:38.868836 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08790220-3de0-47f4-a6b2-87fcf15ecdfa-public-tls-certs\") pod \"barbican-api-7745d4877d-gkrhq\" (UID: \"08790220-3de0-47f4-a6b2-87fcf15ecdfa\") " pod="openstack/barbican-api-7745d4877d-gkrhq" Jan 29 08:11:38 crc kubenswrapper[4826]: I0129 08:11:38.868873 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08790220-3de0-47f4-a6b2-87fcf15ecdfa-config-data\") pod \"barbican-api-7745d4877d-gkrhq\" (UID: \"08790220-3de0-47f4-a6b2-87fcf15ecdfa\") " pod="openstack/barbican-api-7745d4877d-gkrhq" Jan 29 08:11:38 crc kubenswrapper[4826]: I0129 08:11:38.873186 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7745d4877d-gkrhq"] Jan 29 08:11:38 crc kubenswrapper[4826]: I0129 08:11:38.970642 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08790220-3de0-47f4-a6b2-87fcf15ecdfa-public-tls-certs\") pod \"barbican-api-7745d4877d-gkrhq\" (UID: \"08790220-3de0-47f4-a6b2-87fcf15ecdfa\") " pod="openstack/barbican-api-7745d4877d-gkrhq" Jan 29 08:11:38 crc kubenswrapper[4826]: I0129 08:11:38.970711 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/08790220-3de0-47f4-a6b2-87fcf15ecdfa-config-data\") pod \"barbican-api-7745d4877d-gkrhq\" (UID: \"08790220-3de0-47f4-a6b2-87fcf15ecdfa\") " pod="openstack/barbican-api-7745d4877d-gkrhq" Jan 29 08:11:38 crc kubenswrapper[4826]: I0129 08:11:38.970852 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08790220-3de0-47f4-a6b2-87fcf15ecdfa-combined-ca-bundle\") pod \"barbican-api-7745d4877d-gkrhq\" (UID: \"08790220-3de0-47f4-a6b2-87fcf15ecdfa\") " pod="openstack/barbican-api-7745d4877d-gkrhq" Jan 29 08:11:38 crc kubenswrapper[4826]: I0129 08:11:38.970898 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08790220-3de0-47f4-a6b2-87fcf15ecdfa-config-data-custom\") pod \"barbican-api-7745d4877d-gkrhq\" (UID: \"08790220-3de0-47f4-a6b2-87fcf15ecdfa\") " pod="openstack/barbican-api-7745d4877d-gkrhq" Jan 29 08:11:38 crc kubenswrapper[4826]: I0129 08:11:38.971004 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08790220-3de0-47f4-a6b2-87fcf15ecdfa-internal-tls-certs\") pod \"barbican-api-7745d4877d-gkrhq\" (UID: \"08790220-3de0-47f4-a6b2-87fcf15ecdfa\") " pod="openstack/barbican-api-7745d4877d-gkrhq" Jan 29 08:11:38 crc kubenswrapper[4826]: I0129 08:11:38.971030 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tczrq\" (UniqueName: \"kubernetes.io/projected/08790220-3de0-47f4-a6b2-87fcf15ecdfa-kube-api-access-tczrq\") pod \"barbican-api-7745d4877d-gkrhq\" (UID: \"08790220-3de0-47f4-a6b2-87fcf15ecdfa\") " pod="openstack/barbican-api-7745d4877d-gkrhq" Jan 29 08:11:38 crc kubenswrapper[4826]: I0129 08:11:38.971077 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/08790220-3de0-47f4-a6b2-87fcf15ecdfa-logs\") pod \"barbican-api-7745d4877d-gkrhq\" (UID: \"08790220-3de0-47f4-a6b2-87fcf15ecdfa\") " pod="openstack/barbican-api-7745d4877d-gkrhq" Jan 29 08:11:38 crc kubenswrapper[4826]: I0129 08:11:38.972981 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08790220-3de0-47f4-a6b2-87fcf15ecdfa-logs\") pod \"barbican-api-7745d4877d-gkrhq\" (UID: \"08790220-3de0-47f4-a6b2-87fcf15ecdfa\") " pod="openstack/barbican-api-7745d4877d-gkrhq" Jan 29 08:11:38 crc kubenswrapper[4826]: I0129 08:11:38.977037 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08790220-3de0-47f4-a6b2-87fcf15ecdfa-config-data-custom\") pod \"barbican-api-7745d4877d-gkrhq\" (UID: \"08790220-3de0-47f4-a6b2-87fcf15ecdfa\") " pod="openstack/barbican-api-7745d4877d-gkrhq" Jan 29 08:11:38 crc kubenswrapper[4826]: I0129 08:11:38.977465 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08790220-3de0-47f4-a6b2-87fcf15ecdfa-internal-tls-certs\") pod \"barbican-api-7745d4877d-gkrhq\" (UID: \"08790220-3de0-47f4-a6b2-87fcf15ecdfa\") " pod="openstack/barbican-api-7745d4877d-gkrhq" Jan 29 08:11:38 crc kubenswrapper[4826]: I0129 08:11:38.978397 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08790220-3de0-47f4-a6b2-87fcf15ecdfa-config-data\") pod \"barbican-api-7745d4877d-gkrhq\" (UID: \"08790220-3de0-47f4-a6b2-87fcf15ecdfa\") " pod="openstack/barbican-api-7745d4877d-gkrhq" Jan 29 08:11:38 crc kubenswrapper[4826]: I0129 08:11:38.978567 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08790220-3de0-47f4-a6b2-87fcf15ecdfa-combined-ca-bundle\") pod 
\"barbican-api-7745d4877d-gkrhq\" (UID: \"08790220-3de0-47f4-a6b2-87fcf15ecdfa\") " pod="openstack/barbican-api-7745d4877d-gkrhq" Jan 29 08:11:38 crc kubenswrapper[4826]: I0129 08:11:38.983880 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08790220-3de0-47f4-a6b2-87fcf15ecdfa-public-tls-certs\") pod \"barbican-api-7745d4877d-gkrhq\" (UID: \"08790220-3de0-47f4-a6b2-87fcf15ecdfa\") " pod="openstack/barbican-api-7745d4877d-gkrhq" Jan 29 08:11:38 crc kubenswrapper[4826]: I0129 08:11:38.988300 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tczrq\" (UniqueName: \"kubernetes.io/projected/08790220-3de0-47f4-a6b2-87fcf15ecdfa-kube-api-access-tczrq\") pod \"barbican-api-7745d4877d-gkrhq\" (UID: \"08790220-3de0-47f4-a6b2-87fcf15ecdfa\") " pod="openstack/barbican-api-7745d4877d-gkrhq" Jan 29 08:11:39 crc kubenswrapper[4826]: I0129 08:11:39.206167 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7745d4877d-gkrhq" Jan 29 08:11:39 crc kubenswrapper[4826]: I0129 08:11:39.669280 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7745d4877d-gkrhq"] Jan 29 08:11:39 crc kubenswrapper[4826]: W0129 08:11:39.673904 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08790220_3de0_47f4_a6b2_87fcf15ecdfa.slice/crio-5ff23fbdace16a26d8e75a2a03f07df55c8bd177b16aaa3f9f56e3c5504f5be9 WatchSource:0}: Error finding container 5ff23fbdace16a26d8e75a2a03f07df55c8bd177b16aaa3f9f56e3c5504f5be9: Status 404 returned error can't find the container with id 5ff23fbdace16a26d8e75a2a03f07df55c8bd177b16aaa3f9f56e3c5504f5be9 Jan 29 08:11:40 crc kubenswrapper[4826]: I0129 08:11:40.347856 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-59b95dcdb6-fwmqw" event={"ID":"b0b58a6f-0e9e-4bab-9492-50efc2437486","Type":"ContainerStarted","Data":"7a508e49e0db9dec917b6ca0c51eb10236350236d86e4d995b0a5c888164b12a"} Jan 29 08:11:40 crc kubenswrapper[4826]: I0129 08:11:40.348183 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-59b95dcdb6-fwmqw" event={"ID":"b0b58a6f-0e9e-4bab-9492-50efc2437486","Type":"ContainerStarted","Data":"ee7868cfdb9920a566e735ef8afbce9b3128ae4fd3e0ab972a79e0da93f0c8d0"} Jan 29 08:11:40 crc kubenswrapper[4826]: I0129 08:11:40.354952 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-756bb75757-mdq4x" event={"ID":"f2c964f6-3323-4fe8-949e-066624fc29d7","Type":"ContainerStarted","Data":"ec279a703333da719630f2e0c6d148b5384ea39751419c0374a4b84c8c570303"} Jan 29 08:11:40 crc kubenswrapper[4826]: I0129 08:11:40.355149 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-756bb75757-mdq4x" Jan 29 08:11:40 crc kubenswrapper[4826]: I0129 08:11:40.359940 4826 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7745d4877d-gkrhq" event={"ID":"08790220-3de0-47f4-a6b2-87fcf15ecdfa","Type":"ContainerStarted","Data":"7da5d01d0778a53ce6f4845c834f942f82348cc1743c266281f0e834c51db4ef"} Jan 29 08:11:40 crc kubenswrapper[4826]: I0129 08:11:40.359978 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7745d4877d-gkrhq" event={"ID":"08790220-3de0-47f4-a6b2-87fcf15ecdfa","Type":"ContainerStarted","Data":"07894d9fc881fabc8c42cab99cb1c993edf96869bb85364185eba5f82558f464"} Jan 29 08:11:40 crc kubenswrapper[4826]: I0129 08:11:40.359992 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7745d4877d-gkrhq" event={"ID":"08790220-3de0-47f4-a6b2-87fcf15ecdfa","Type":"ContainerStarted","Data":"5ff23fbdace16a26d8e75a2a03f07df55c8bd177b16aaa3f9f56e3c5504f5be9"} Jan 29 08:11:40 crc kubenswrapper[4826]: I0129 08:11:40.360117 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7745d4877d-gkrhq" Jan 29 08:11:40 crc kubenswrapper[4826]: I0129 08:11:40.363375 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7757bff99c-cgbsc" event={"ID":"0a714542-5a9f-4e52-a120-aa5340d5c21e","Type":"ContainerStarted","Data":"cc372672de966cc04d77434840824fc2b8ea21598489518acaf2e6756e261eda"} Jan 29 08:11:40 crc kubenswrapper[4826]: I0129 08:11:40.363441 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7757bff99c-cgbsc" event={"ID":"0a714542-5a9f-4e52-a120-aa5340d5c21e","Type":"ContainerStarted","Data":"62153691152cd5e9ac7fd298873b365cb4a1b8b90e20a591fac16d08ce9a0646"} Jan 29 08:11:40 crc kubenswrapper[4826]: I0129 08:11:40.373497 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-59b95dcdb6-fwmqw" podStartSLOduration=2.628989799 podStartE2EDuration="4.373477508s" podCreationTimestamp="2026-01-29 
08:11:36 +0000 UTC" firstStartedPulling="2026-01-29 08:11:37.448381877 +0000 UTC m=+5281.310174946" lastFinishedPulling="2026-01-29 08:11:39.192869586 +0000 UTC m=+5283.054662655" observedRunningTime="2026-01-29 08:11:40.366092704 +0000 UTC m=+5284.227885783" watchObservedRunningTime="2026-01-29 08:11:40.373477508 +0000 UTC m=+5284.235270597" Jan 29 08:11:40 crc kubenswrapper[4826]: I0129 08:11:40.395489 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7745d4877d-gkrhq" podStartSLOduration=2.395473696 podStartE2EDuration="2.395473696s" podCreationTimestamp="2026-01-29 08:11:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:11:40.39143515 +0000 UTC m=+5284.253228239" watchObservedRunningTime="2026-01-29 08:11:40.395473696 +0000 UTC m=+5284.257266765" Jan 29 08:11:40 crc kubenswrapper[4826]: I0129 08:11:40.420352 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-756bb75757-mdq4x" podStartSLOduration=4.420330339 podStartE2EDuration="4.420330339s" podCreationTimestamp="2026-01-29 08:11:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:11:40.413999773 +0000 UTC m=+5284.275792842" watchObservedRunningTime="2026-01-29 08:11:40.420330339 +0000 UTC m=+5284.282123408" Jan 29 08:11:40 crc kubenswrapper[4826]: I0129 08:11:40.438794 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7757bff99c-cgbsc" podStartSLOduration=2.7234252100000003 podStartE2EDuration="4.438777764s" podCreationTimestamp="2026-01-29 08:11:36 +0000 UTC" firstStartedPulling="2026-01-29 08:11:37.477907943 +0000 UTC m=+5281.339701012" lastFinishedPulling="2026-01-29 08:11:39.193260497 +0000 UTC m=+5283.055053566" observedRunningTime="2026-01-29 
08:11:40.435223721 +0000 UTC m=+5284.297016790" watchObservedRunningTime="2026-01-29 08:11:40.438777764 +0000 UTC m=+5284.300570833" Jan 29 08:11:41 crc kubenswrapper[4826]: I0129 08:11:41.372985 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7745d4877d-gkrhq" Jan 29 08:11:43 crc kubenswrapper[4826]: I0129 08:11:43.700260 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-85bf844b76-54kvs" Jan 29 08:11:44 crc kubenswrapper[4826]: I0129 08:11:44.928487 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-85bf844b76-54kvs" Jan 29 08:11:45 crc kubenswrapper[4826]: I0129 08:11:45.628742 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7745d4877d-gkrhq" Jan 29 08:11:46 crc kubenswrapper[4826]: I0129 08:11:46.829532 4826 scope.go:117] "RemoveContainer" containerID="491d2214652be539c5a02abd82d2f7f7b125c3f1a64568b35d69e37bd575365a" Jan 29 08:11:46 crc kubenswrapper[4826]: E0129 08:11:46.830019 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:11:46 crc kubenswrapper[4826]: I0129 08:11:46.941345 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7745d4877d-gkrhq" Jan 29 08:11:47 crc kubenswrapper[4826]: I0129 08:11:47.026697 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-85bf844b76-54kvs"] Jan 29 08:11:47 crc kubenswrapper[4826]: I0129 08:11:47.029250 4826 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/barbican-api-85bf844b76-54kvs" podUID="dc166452-75bf-4ed5-b0c1-e5a78d366b31" containerName="barbican-api-log" containerID="cri-o://af81f51797ed66c16eebf0e8c54cfd3e9e2db7c7e97f35bc102c826cd8787145" gracePeriod=30 Jan 29 08:11:47 crc kubenswrapper[4826]: I0129 08:11:47.029481 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-85bf844b76-54kvs" podUID="dc166452-75bf-4ed5-b0c1-e5a78d366b31" containerName="barbican-api" containerID="cri-o://3a160638879c093b8b41f4408941314fb4c22bf89c8093d75d067549cb847bdb" gracePeriod=30 Jan 29 08:11:47 crc kubenswrapper[4826]: I0129 08:11:47.040625 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-85bf844b76-54kvs" podUID="dc166452-75bf-4ed5-b0c1-e5a78d366b31" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.1.32:9311/healthcheck\": EOF" Jan 29 08:11:47 crc kubenswrapper[4826]: I0129 08:11:47.040773 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-85bf844b76-54kvs" podUID="dc166452-75bf-4ed5-b0c1-e5a78d366b31" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.1.32:9311/healthcheck\": EOF" Jan 29 08:11:47 crc kubenswrapper[4826]: I0129 08:11:47.235769 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-756bb75757-mdq4x" Jan 29 08:11:47 crc kubenswrapper[4826]: I0129 08:11:47.318208 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fb84b94d5-6m96b"] Jan 29 08:11:47 crc kubenswrapper[4826]: I0129 08:11:47.318514 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fb84b94d5-6m96b" podUID="f7bfb3f3-701c-4b2a-a145-67de08df6dba" containerName="dnsmasq-dns" containerID="cri-o://22082cb5616c57d5ccd12c69b1679e2429b542d7709c60a130e16e3f9f322e7b" gracePeriod=10 Jan 29 08:11:47 crc kubenswrapper[4826]: 
I0129 08:11:47.448922 4826 generic.go:334] "Generic (PLEG): container finished" podID="dc166452-75bf-4ed5-b0c1-e5a78d366b31" containerID="af81f51797ed66c16eebf0e8c54cfd3e9e2db7c7e97f35bc102c826cd8787145" exitCode=143 Jan 29 08:11:47 crc kubenswrapper[4826]: I0129 08:11:47.448985 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85bf844b76-54kvs" event={"ID":"dc166452-75bf-4ed5-b0c1-e5a78d366b31","Type":"ContainerDied","Data":"af81f51797ed66c16eebf0e8c54cfd3e9e2db7c7e97f35bc102c826cd8787145"} Jan 29 08:11:47 crc kubenswrapper[4826]: I0129 08:11:47.765787 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fb84b94d5-6m96b" Jan 29 08:11:47 crc kubenswrapper[4826]: I0129 08:11:47.813958 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f7bfb3f3-701c-4b2a-a145-67de08df6dba-ovsdbserver-nb\") pod \"f7bfb3f3-701c-4b2a-a145-67de08df6dba\" (UID: \"f7bfb3f3-701c-4b2a-a145-67de08df6dba\") " Jan 29 08:11:47 crc kubenswrapper[4826]: I0129 08:11:47.814108 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f7bfb3f3-701c-4b2a-a145-67de08df6dba-ovsdbserver-sb\") pod \"f7bfb3f3-701c-4b2a-a145-67de08df6dba\" (UID: \"f7bfb3f3-701c-4b2a-a145-67de08df6dba\") " Jan 29 08:11:47 crc kubenswrapper[4826]: I0129 08:11:47.814155 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7bfb3f3-701c-4b2a-a145-67de08df6dba-config\") pod \"f7bfb3f3-701c-4b2a-a145-67de08df6dba\" (UID: \"f7bfb3f3-701c-4b2a-a145-67de08df6dba\") " Jan 29 08:11:47 crc kubenswrapper[4826]: I0129 08:11:47.814193 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fhjb\" (UniqueName: 
\"kubernetes.io/projected/f7bfb3f3-701c-4b2a-a145-67de08df6dba-kube-api-access-8fhjb\") pod \"f7bfb3f3-701c-4b2a-a145-67de08df6dba\" (UID: \"f7bfb3f3-701c-4b2a-a145-67de08df6dba\") " Jan 29 08:11:47 crc kubenswrapper[4826]: I0129 08:11:47.814231 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7bfb3f3-701c-4b2a-a145-67de08df6dba-dns-svc\") pod \"f7bfb3f3-701c-4b2a-a145-67de08df6dba\" (UID: \"f7bfb3f3-701c-4b2a-a145-67de08df6dba\") " Jan 29 08:11:47 crc kubenswrapper[4826]: I0129 08:11:47.821008 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7bfb3f3-701c-4b2a-a145-67de08df6dba-kube-api-access-8fhjb" (OuterVolumeSpecName: "kube-api-access-8fhjb") pod "f7bfb3f3-701c-4b2a-a145-67de08df6dba" (UID: "f7bfb3f3-701c-4b2a-a145-67de08df6dba"). InnerVolumeSpecName "kube-api-access-8fhjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:11:47 crc kubenswrapper[4826]: I0129 08:11:47.857414 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7bfb3f3-701c-4b2a-a145-67de08df6dba-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f7bfb3f3-701c-4b2a-a145-67de08df6dba" (UID: "f7bfb3f3-701c-4b2a-a145-67de08df6dba"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:11:47 crc kubenswrapper[4826]: I0129 08:11:47.858975 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7bfb3f3-701c-4b2a-a145-67de08df6dba-config" (OuterVolumeSpecName: "config") pod "f7bfb3f3-701c-4b2a-a145-67de08df6dba" (UID: "f7bfb3f3-701c-4b2a-a145-67de08df6dba"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:11:47 crc kubenswrapper[4826]: I0129 08:11:47.866142 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7bfb3f3-701c-4b2a-a145-67de08df6dba-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f7bfb3f3-701c-4b2a-a145-67de08df6dba" (UID: "f7bfb3f3-701c-4b2a-a145-67de08df6dba"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:11:47 crc kubenswrapper[4826]: I0129 08:11:47.875450 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7bfb3f3-701c-4b2a-a145-67de08df6dba-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f7bfb3f3-701c-4b2a-a145-67de08df6dba" (UID: "f7bfb3f3-701c-4b2a-a145-67de08df6dba"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:11:47 crc kubenswrapper[4826]: I0129 08:11:47.918230 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f7bfb3f3-701c-4b2a-a145-67de08df6dba-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 08:11:47 crc kubenswrapper[4826]: I0129 08:11:47.918268 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f7bfb3f3-701c-4b2a-a145-67de08df6dba-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 08:11:47 crc kubenswrapper[4826]: I0129 08:11:47.918279 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7bfb3f3-701c-4b2a-a145-67de08df6dba-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:11:47 crc kubenswrapper[4826]: I0129 08:11:47.918289 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fhjb\" (UniqueName: \"kubernetes.io/projected/f7bfb3f3-701c-4b2a-a145-67de08df6dba-kube-api-access-8fhjb\") on node \"crc\" DevicePath \"\"" Jan 29 
08:11:47 crc kubenswrapper[4826]: I0129 08:11:47.918311 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7bfb3f3-701c-4b2a-a145-67de08df6dba-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 08:11:48 crc kubenswrapper[4826]: I0129 08:11:48.461154 4826 generic.go:334] "Generic (PLEG): container finished" podID="f7bfb3f3-701c-4b2a-a145-67de08df6dba" containerID="22082cb5616c57d5ccd12c69b1679e2429b542d7709c60a130e16e3f9f322e7b" exitCode=0 Jan 29 08:11:48 crc kubenswrapper[4826]: I0129 08:11:48.461207 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fb84b94d5-6m96b" event={"ID":"f7bfb3f3-701c-4b2a-a145-67de08df6dba","Type":"ContainerDied","Data":"22082cb5616c57d5ccd12c69b1679e2429b542d7709c60a130e16e3f9f322e7b"} Jan 29 08:11:48 crc kubenswrapper[4826]: I0129 08:11:48.461235 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fb84b94d5-6m96b" Jan 29 08:11:48 crc kubenswrapper[4826]: I0129 08:11:48.462110 4826 scope.go:117] "RemoveContainer" containerID="22082cb5616c57d5ccd12c69b1679e2429b542d7709c60a130e16e3f9f322e7b" Jan 29 08:11:48 crc kubenswrapper[4826]: I0129 08:11:48.461990 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fb84b94d5-6m96b" event={"ID":"f7bfb3f3-701c-4b2a-a145-67de08df6dba","Type":"ContainerDied","Data":"0a8f66b49b15983cad68867079f080cebc28630f71c4a71c3dc936259c8121bf"} Jan 29 08:11:48 crc kubenswrapper[4826]: I0129 08:11:48.494120 4826 scope.go:117] "RemoveContainer" containerID="e9d01d82a1f14d183252dcbb715ff6a344d08d92f037c1578fbbcceb138cb332" Jan 29 08:11:48 crc kubenswrapper[4826]: I0129 08:11:48.507259 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fb84b94d5-6m96b"] Jan 29 08:11:48 crc kubenswrapper[4826]: I0129 08:11:48.522006 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-7fb84b94d5-6m96b"] Jan 29 08:11:48 crc kubenswrapper[4826]: I0129 08:11:48.522286 4826 scope.go:117] "RemoveContainer" containerID="22082cb5616c57d5ccd12c69b1679e2429b542d7709c60a130e16e3f9f322e7b" Jan 29 08:11:48 crc kubenswrapper[4826]: E0129 08:11:48.522727 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22082cb5616c57d5ccd12c69b1679e2429b542d7709c60a130e16e3f9f322e7b\": container with ID starting with 22082cb5616c57d5ccd12c69b1679e2429b542d7709c60a130e16e3f9f322e7b not found: ID does not exist" containerID="22082cb5616c57d5ccd12c69b1679e2429b542d7709c60a130e16e3f9f322e7b" Jan 29 08:11:48 crc kubenswrapper[4826]: I0129 08:11:48.522771 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22082cb5616c57d5ccd12c69b1679e2429b542d7709c60a130e16e3f9f322e7b"} err="failed to get container status \"22082cb5616c57d5ccd12c69b1679e2429b542d7709c60a130e16e3f9f322e7b\": rpc error: code = NotFound desc = could not find container \"22082cb5616c57d5ccd12c69b1679e2429b542d7709c60a130e16e3f9f322e7b\": container with ID starting with 22082cb5616c57d5ccd12c69b1679e2429b542d7709c60a130e16e3f9f322e7b not found: ID does not exist" Jan 29 08:11:48 crc kubenswrapper[4826]: I0129 08:11:48.522798 4826 scope.go:117] "RemoveContainer" containerID="e9d01d82a1f14d183252dcbb715ff6a344d08d92f037c1578fbbcceb138cb332" Jan 29 08:11:48 crc kubenswrapper[4826]: E0129 08:11:48.523123 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9d01d82a1f14d183252dcbb715ff6a344d08d92f037c1578fbbcceb138cb332\": container with ID starting with e9d01d82a1f14d183252dcbb715ff6a344d08d92f037c1578fbbcceb138cb332 not found: ID does not exist" containerID="e9d01d82a1f14d183252dcbb715ff6a344d08d92f037c1578fbbcceb138cb332" Jan 29 08:11:48 crc kubenswrapper[4826]: I0129 08:11:48.523162 4826 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9d01d82a1f14d183252dcbb715ff6a344d08d92f037c1578fbbcceb138cb332"} err="failed to get container status \"e9d01d82a1f14d183252dcbb715ff6a344d08d92f037c1578fbbcceb138cb332\": rpc error: code = NotFound desc = could not find container \"e9d01d82a1f14d183252dcbb715ff6a344d08d92f037c1578fbbcceb138cb332\": container with ID starting with e9d01d82a1f14d183252dcbb715ff6a344d08d92f037c1578fbbcceb138cb332 not found: ID does not exist" Jan 29 08:11:48 crc kubenswrapper[4826]: I0129 08:11:48.827045 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7bfb3f3-701c-4b2a-a145-67de08df6dba" path="/var/lib/kubelet/pods/f7bfb3f3-701c-4b2a-a145-67de08df6dba/volumes" Jan 29 08:11:51 crc kubenswrapper[4826]: I0129 08:11:51.480528 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-85bf844b76-54kvs" podUID="dc166452-75bf-4ed5-b0c1-e5a78d366b31" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.1.32:9311/healthcheck\": read tcp 10.217.0.2:48150->10.217.1.32:9311: read: connection reset by peer" Jan 29 08:11:51 crc kubenswrapper[4826]: I0129 08:11:51.480566 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-85bf844b76-54kvs" podUID="dc166452-75bf-4ed5-b0c1-e5a78d366b31" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.1.32:9311/healthcheck\": read tcp 10.217.0.2:48162->10.217.1.32:9311: read: connection reset by peer" Jan 29 08:11:51 crc kubenswrapper[4826]: I0129 08:11:51.963697 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-85bf844b76-54kvs" Jan 29 08:11:52 crc kubenswrapper[4826]: I0129 08:11:52.090919 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc166452-75bf-4ed5-b0c1-e5a78d366b31-logs\") pod \"dc166452-75bf-4ed5-b0c1-e5a78d366b31\" (UID: \"dc166452-75bf-4ed5-b0c1-e5a78d366b31\") " Jan 29 08:11:52 crc kubenswrapper[4826]: I0129 08:11:52.091084 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc166452-75bf-4ed5-b0c1-e5a78d366b31-config-data-custom\") pod \"dc166452-75bf-4ed5-b0c1-e5a78d366b31\" (UID: \"dc166452-75bf-4ed5-b0c1-e5a78d366b31\") " Jan 29 08:11:52 crc kubenswrapper[4826]: I0129 08:11:52.091282 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjl4f\" (UniqueName: \"kubernetes.io/projected/dc166452-75bf-4ed5-b0c1-e5a78d366b31-kube-api-access-wjl4f\") pod \"dc166452-75bf-4ed5-b0c1-e5a78d366b31\" (UID: \"dc166452-75bf-4ed5-b0c1-e5a78d366b31\") " Jan 29 08:11:52 crc kubenswrapper[4826]: I0129 08:11:52.091413 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc166452-75bf-4ed5-b0c1-e5a78d366b31-combined-ca-bundle\") pod \"dc166452-75bf-4ed5-b0c1-e5a78d366b31\" (UID: \"dc166452-75bf-4ed5-b0c1-e5a78d366b31\") " Jan 29 08:11:52 crc kubenswrapper[4826]: I0129 08:11:52.091484 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc166452-75bf-4ed5-b0c1-e5a78d366b31-config-data\") pod \"dc166452-75bf-4ed5-b0c1-e5a78d366b31\" (UID: \"dc166452-75bf-4ed5-b0c1-e5a78d366b31\") " Jan 29 08:11:52 crc kubenswrapper[4826]: I0129 08:11:52.091891 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/dc166452-75bf-4ed5-b0c1-e5a78d366b31-logs" (OuterVolumeSpecName: "logs") pod "dc166452-75bf-4ed5-b0c1-e5a78d366b31" (UID: "dc166452-75bf-4ed5-b0c1-e5a78d366b31"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:11:52 crc kubenswrapper[4826]: I0129 08:11:52.092065 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc166452-75bf-4ed5-b0c1-e5a78d366b31-logs\") on node \"crc\" DevicePath \"\"" Jan 29 08:11:52 crc kubenswrapper[4826]: I0129 08:11:52.100588 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc166452-75bf-4ed5-b0c1-e5a78d366b31-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "dc166452-75bf-4ed5-b0c1-e5a78d366b31" (UID: "dc166452-75bf-4ed5-b0c1-e5a78d366b31"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:11:52 crc kubenswrapper[4826]: I0129 08:11:52.100589 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc166452-75bf-4ed5-b0c1-e5a78d366b31-kube-api-access-wjl4f" (OuterVolumeSpecName: "kube-api-access-wjl4f") pod "dc166452-75bf-4ed5-b0c1-e5a78d366b31" (UID: "dc166452-75bf-4ed5-b0c1-e5a78d366b31"). InnerVolumeSpecName "kube-api-access-wjl4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:11:52 crc kubenswrapper[4826]: I0129 08:11:52.120644 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc166452-75bf-4ed5-b0c1-e5a78d366b31-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc166452-75bf-4ed5-b0c1-e5a78d366b31" (UID: "dc166452-75bf-4ed5-b0c1-e5a78d366b31"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:11:52 crc kubenswrapper[4826]: I0129 08:11:52.138340 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc166452-75bf-4ed5-b0c1-e5a78d366b31-config-data" (OuterVolumeSpecName: "config-data") pod "dc166452-75bf-4ed5-b0c1-e5a78d366b31" (UID: "dc166452-75bf-4ed5-b0c1-e5a78d366b31"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:11:52 crc kubenswrapper[4826]: I0129 08:11:52.196170 4826 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc166452-75bf-4ed5-b0c1-e5a78d366b31-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 08:11:52 crc kubenswrapper[4826]: I0129 08:11:52.196213 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjl4f\" (UniqueName: \"kubernetes.io/projected/dc166452-75bf-4ed5-b0c1-e5a78d366b31-kube-api-access-wjl4f\") on node \"crc\" DevicePath \"\"" Jan 29 08:11:52 crc kubenswrapper[4826]: I0129 08:11:52.196224 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc166452-75bf-4ed5-b0c1-e5a78d366b31-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:11:52 crc kubenswrapper[4826]: I0129 08:11:52.196238 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc166452-75bf-4ed5-b0c1-e5a78d366b31-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:11:52 crc kubenswrapper[4826]: I0129 08:11:52.515096 4826 generic.go:334] "Generic (PLEG): container finished" podID="dc166452-75bf-4ed5-b0c1-e5a78d366b31" containerID="3a160638879c093b8b41f4408941314fb4c22bf89c8093d75d067549cb847bdb" exitCode=0 Jan 29 08:11:52 crc kubenswrapper[4826]: I0129 08:11:52.515141 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-85bf844b76-54kvs" Jan 29 08:11:52 crc kubenswrapper[4826]: I0129 08:11:52.515140 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85bf844b76-54kvs" event={"ID":"dc166452-75bf-4ed5-b0c1-e5a78d366b31","Type":"ContainerDied","Data":"3a160638879c093b8b41f4408941314fb4c22bf89c8093d75d067549cb847bdb"} Jan 29 08:11:52 crc kubenswrapper[4826]: I0129 08:11:52.517131 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85bf844b76-54kvs" event={"ID":"dc166452-75bf-4ed5-b0c1-e5a78d366b31","Type":"ContainerDied","Data":"ed68b47d1b534ec1a1c525432f8677385937538aa0eea557510ef7d477b3692e"} Jan 29 08:11:52 crc kubenswrapper[4826]: I0129 08:11:52.517209 4826 scope.go:117] "RemoveContainer" containerID="3a160638879c093b8b41f4408941314fb4c22bf89c8093d75d067549cb847bdb" Jan 29 08:11:52 crc kubenswrapper[4826]: I0129 08:11:52.556152 4826 scope.go:117] "RemoveContainer" containerID="af81f51797ed66c16eebf0e8c54cfd3e9e2db7c7e97f35bc102c826cd8787145" Jan 29 08:11:52 crc kubenswrapper[4826]: I0129 08:11:52.563312 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-85bf844b76-54kvs"] Jan 29 08:11:52 crc kubenswrapper[4826]: I0129 08:11:52.576313 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-85bf844b76-54kvs"] Jan 29 08:11:52 crc kubenswrapper[4826]: I0129 08:11:52.612655 4826 scope.go:117] "RemoveContainer" containerID="3a160638879c093b8b41f4408941314fb4c22bf89c8093d75d067549cb847bdb" Jan 29 08:11:52 crc kubenswrapper[4826]: E0129 08:11:52.613343 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a160638879c093b8b41f4408941314fb4c22bf89c8093d75d067549cb847bdb\": container with ID starting with 3a160638879c093b8b41f4408941314fb4c22bf89c8093d75d067549cb847bdb not found: ID does not exist" 
containerID="3a160638879c093b8b41f4408941314fb4c22bf89c8093d75d067549cb847bdb" Jan 29 08:11:52 crc kubenswrapper[4826]: I0129 08:11:52.613601 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a160638879c093b8b41f4408941314fb4c22bf89c8093d75d067549cb847bdb"} err="failed to get container status \"3a160638879c093b8b41f4408941314fb4c22bf89c8093d75d067549cb847bdb\": rpc error: code = NotFound desc = could not find container \"3a160638879c093b8b41f4408941314fb4c22bf89c8093d75d067549cb847bdb\": container with ID starting with 3a160638879c093b8b41f4408941314fb4c22bf89c8093d75d067549cb847bdb not found: ID does not exist" Jan 29 08:11:52 crc kubenswrapper[4826]: I0129 08:11:52.613638 4826 scope.go:117] "RemoveContainer" containerID="af81f51797ed66c16eebf0e8c54cfd3e9e2db7c7e97f35bc102c826cd8787145" Jan 29 08:11:52 crc kubenswrapper[4826]: E0129 08:11:52.614050 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af81f51797ed66c16eebf0e8c54cfd3e9e2db7c7e97f35bc102c826cd8787145\": container with ID starting with af81f51797ed66c16eebf0e8c54cfd3e9e2db7c7e97f35bc102c826cd8787145 not found: ID does not exist" containerID="af81f51797ed66c16eebf0e8c54cfd3e9e2db7c7e97f35bc102c826cd8787145" Jan 29 08:11:52 crc kubenswrapper[4826]: I0129 08:11:52.614075 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af81f51797ed66c16eebf0e8c54cfd3e9e2db7c7e97f35bc102c826cd8787145"} err="failed to get container status \"af81f51797ed66c16eebf0e8c54cfd3e9e2db7c7e97f35bc102c826cd8787145\": rpc error: code = NotFound desc = could not find container \"af81f51797ed66c16eebf0e8c54cfd3e9e2db7c7e97f35bc102c826cd8787145\": container with ID starting with af81f51797ed66c16eebf0e8c54cfd3e9e2db7c7e97f35bc102c826cd8787145 not found: ID does not exist" Jan 29 08:11:52 crc kubenswrapper[4826]: I0129 08:11:52.831084 4826 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc166452-75bf-4ed5-b0c1-e5a78d366b31" path="/var/lib/kubelet/pods/dc166452-75bf-4ed5-b0c1-e5a78d366b31/volumes" Jan 29 08:11:57 crc kubenswrapper[4826]: I0129 08:11:57.863753 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dbd0-account-create-update-qbplg"] Jan 29 08:11:57 crc kubenswrapper[4826]: E0129 08:11:57.864689 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7bfb3f3-701c-4b2a-a145-67de08df6dba" containerName="dnsmasq-dns" Jan 29 08:11:57 crc kubenswrapper[4826]: I0129 08:11:57.864705 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7bfb3f3-701c-4b2a-a145-67de08df6dba" containerName="dnsmasq-dns" Jan 29 08:11:57 crc kubenswrapper[4826]: E0129 08:11:57.864735 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7bfb3f3-701c-4b2a-a145-67de08df6dba" containerName="init" Jan 29 08:11:57 crc kubenswrapper[4826]: I0129 08:11:57.864743 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7bfb3f3-701c-4b2a-a145-67de08df6dba" containerName="init" Jan 29 08:11:57 crc kubenswrapper[4826]: E0129 08:11:57.864754 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc166452-75bf-4ed5-b0c1-e5a78d366b31" containerName="barbican-api" Jan 29 08:11:57 crc kubenswrapper[4826]: I0129 08:11:57.864764 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc166452-75bf-4ed5-b0c1-e5a78d366b31" containerName="barbican-api" Jan 29 08:11:57 crc kubenswrapper[4826]: E0129 08:11:57.864788 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc166452-75bf-4ed5-b0c1-e5a78d366b31" containerName="barbican-api-log" Jan 29 08:11:57 crc kubenswrapper[4826]: I0129 08:11:57.864797 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc166452-75bf-4ed5-b0c1-e5a78d366b31" containerName="barbican-api-log" Jan 29 08:11:57 crc kubenswrapper[4826]: I0129 08:11:57.865009 4826 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="dc166452-75bf-4ed5-b0c1-e5a78d366b31" containerName="barbican-api-log" Jan 29 08:11:57 crc kubenswrapper[4826]: I0129 08:11:57.865026 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7bfb3f3-701c-4b2a-a145-67de08df6dba" containerName="dnsmasq-dns" Jan 29 08:11:57 crc kubenswrapper[4826]: I0129 08:11:57.865047 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc166452-75bf-4ed5-b0c1-e5a78d366b31" containerName="barbican-api" Jan 29 08:11:57 crc kubenswrapper[4826]: I0129 08:11:57.865772 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dbd0-account-create-update-qbplg" Jan 29 08:11:57 crc kubenswrapper[4826]: I0129 08:11:57.867982 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 29 08:11:57 crc kubenswrapper[4826]: I0129 08:11:57.869941 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-krk5b"] Jan 29 08:11:57 crc kubenswrapper[4826]: I0129 08:11:57.871079 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-krk5b" Jan 29 08:11:57 crc kubenswrapper[4826]: I0129 08:11:57.880365 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-krk5b"] Jan 29 08:11:57 crc kubenswrapper[4826]: I0129 08:11:57.887411 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dbd0-account-create-update-qbplg"] Jan 29 08:11:57 crc kubenswrapper[4826]: I0129 08:11:57.927582 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86b70399-c3a4-4543-b284-fad06b2dff50-operator-scripts\") pod \"neutron-dbd0-account-create-update-qbplg\" (UID: \"86b70399-c3a4-4543-b284-fad06b2dff50\") " pod="openstack/neutron-dbd0-account-create-update-qbplg" Jan 29 08:11:57 crc kubenswrapper[4826]: I0129 08:11:57.927650 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12acfe9d-3c59-4b09-bc73-511763da0f97-operator-scripts\") pod \"neutron-db-create-krk5b\" (UID: \"12acfe9d-3c59-4b09-bc73-511763da0f97\") " pod="openstack/neutron-db-create-krk5b" Jan 29 08:11:57 crc kubenswrapper[4826]: I0129 08:11:57.927681 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcjl7\" (UniqueName: \"kubernetes.io/projected/12acfe9d-3c59-4b09-bc73-511763da0f97-kube-api-access-kcjl7\") pod \"neutron-db-create-krk5b\" (UID: \"12acfe9d-3c59-4b09-bc73-511763da0f97\") " pod="openstack/neutron-db-create-krk5b" Jan 29 08:11:57 crc kubenswrapper[4826]: I0129 08:11:57.927765 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdrbs\" (UniqueName: \"kubernetes.io/projected/86b70399-c3a4-4543-b284-fad06b2dff50-kube-api-access-bdrbs\") pod \"neutron-dbd0-account-create-update-qbplg\" (UID: 
\"86b70399-c3a4-4543-b284-fad06b2dff50\") " pod="openstack/neutron-dbd0-account-create-update-qbplg" Jan 29 08:11:58 crc kubenswrapper[4826]: I0129 08:11:58.029724 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86b70399-c3a4-4543-b284-fad06b2dff50-operator-scripts\") pod \"neutron-dbd0-account-create-update-qbplg\" (UID: \"86b70399-c3a4-4543-b284-fad06b2dff50\") " pod="openstack/neutron-dbd0-account-create-update-qbplg" Jan 29 08:11:58 crc kubenswrapper[4826]: I0129 08:11:58.029803 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12acfe9d-3c59-4b09-bc73-511763da0f97-operator-scripts\") pod \"neutron-db-create-krk5b\" (UID: \"12acfe9d-3c59-4b09-bc73-511763da0f97\") " pod="openstack/neutron-db-create-krk5b" Jan 29 08:11:58 crc kubenswrapper[4826]: I0129 08:11:58.029834 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcjl7\" (UniqueName: \"kubernetes.io/projected/12acfe9d-3c59-4b09-bc73-511763da0f97-kube-api-access-kcjl7\") pod \"neutron-db-create-krk5b\" (UID: \"12acfe9d-3c59-4b09-bc73-511763da0f97\") " pod="openstack/neutron-db-create-krk5b" Jan 29 08:11:58 crc kubenswrapper[4826]: I0129 08:11:58.029894 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdrbs\" (UniqueName: \"kubernetes.io/projected/86b70399-c3a4-4543-b284-fad06b2dff50-kube-api-access-bdrbs\") pod \"neutron-dbd0-account-create-update-qbplg\" (UID: \"86b70399-c3a4-4543-b284-fad06b2dff50\") " pod="openstack/neutron-dbd0-account-create-update-qbplg" Jan 29 08:11:58 crc kubenswrapper[4826]: I0129 08:11:58.030923 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86b70399-c3a4-4543-b284-fad06b2dff50-operator-scripts\") pod 
\"neutron-dbd0-account-create-update-qbplg\" (UID: \"86b70399-c3a4-4543-b284-fad06b2dff50\") " pod="openstack/neutron-dbd0-account-create-update-qbplg" Jan 29 08:11:58 crc kubenswrapper[4826]: I0129 08:11:58.030987 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12acfe9d-3c59-4b09-bc73-511763da0f97-operator-scripts\") pod \"neutron-db-create-krk5b\" (UID: \"12acfe9d-3c59-4b09-bc73-511763da0f97\") " pod="openstack/neutron-db-create-krk5b" Jan 29 08:11:58 crc kubenswrapper[4826]: I0129 08:11:58.051081 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdrbs\" (UniqueName: \"kubernetes.io/projected/86b70399-c3a4-4543-b284-fad06b2dff50-kube-api-access-bdrbs\") pod \"neutron-dbd0-account-create-update-qbplg\" (UID: \"86b70399-c3a4-4543-b284-fad06b2dff50\") " pod="openstack/neutron-dbd0-account-create-update-qbplg" Jan 29 08:11:58 crc kubenswrapper[4826]: I0129 08:11:58.051291 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcjl7\" (UniqueName: \"kubernetes.io/projected/12acfe9d-3c59-4b09-bc73-511763da0f97-kube-api-access-kcjl7\") pod \"neutron-db-create-krk5b\" (UID: \"12acfe9d-3c59-4b09-bc73-511763da0f97\") " pod="openstack/neutron-db-create-krk5b" Jan 29 08:11:58 crc kubenswrapper[4826]: I0129 08:11:58.188100 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dbd0-account-create-update-qbplg" Jan 29 08:11:58 crc kubenswrapper[4826]: I0129 08:11:58.199681 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-krk5b" Jan 29 08:11:58 crc kubenswrapper[4826]: I0129 08:11:58.672954 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dbd0-account-create-update-qbplg"] Jan 29 08:11:58 crc kubenswrapper[4826]: I0129 08:11:58.755162 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-krk5b"] Jan 29 08:11:58 crc kubenswrapper[4826]: W0129 08:11:58.755717 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12acfe9d_3c59_4b09_bc73_511763da0f97.slice/crio-637f29aafeedd4f95a5103b4d57ecf09f9373656f555903bd68f0b0e60fe8045 WatchSource:0}: Error finding container 637f29aafeedd4f95a5103b4d57ecf09f9373656f555903bd68f0b0e60fe8045: Status 404 returned error can't find the container with id 637f29aafeedd4f95a5103b4d57ecf09f9373656f555903bd68f0b0e60fe8045 Jan 29 08:11:59 crc kubenswrapper[4826]: I0129 08:11:59.591529 4826 generic.go:334] "Generic (PLEG): container finished" podID="86b70399-c3a4-4543-b284-fad06b2dff50" containerID="0726c8bc8e1c1165698bc5a921b4b7c82a090dc6f3bb6842f46a017255eb995e" exitCode=0 Jan 29 08:11:59 crc kubenswrapper[4826]: I0129 08:11:59.591941 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dbd0-account-create-update-qbplg" event={"ID":"86b70399-c3a4-4543-b284-fad06b2dff50","Type":"ContainerDied","Data":"0726c8bc8e1c1165698bc5a921b4b7c82a090dc6f3bb6842f46a017255eb995e"} Jan 29 08:11:59 crc kubenswrapper[4826]: I0129 08:11:59.591975 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dbd0-account-create-update-qbplg" event={"ID":"86b70399-c3a4-4543-b284-fad06b2dff50","Type":"ContainerStarted","Data":"32d5e007046a7a4db4e48d053bdff4c7994e82aaf9f95a5744839100ad0b1710"} Jan 29 08:11:59 crc kubenswrapper[4826]: I0129 08:11:59.597418 4826 generic.go:334] "Generic (PLEG): container finished" 
podID="12acfe9d-3c59-4b09-bc73-511763da0f97" containerID="d5a6de88ebe70994f81bb40c776b10407595642ef804a4c446bafefa7b6bef49" exitCode=0 Jan 29 08:11:59 crc kubenswrapper[4826]: I0129 08:11:59.597465 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-krk5b" event={"ID":"12acfe9d-3c59-4b09-bc73-511763da0f97","Type":"ContainerDied","Data":"d5a6de88ebe70994f81bb40c776b10407595642ef804a4c446bafefa7b6bef49"} Jan 29 08:11:59 crc kubenswrapper[4826]: I0129 08:11:59.597491 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-krk5b" event={"ID":"12acfe9d-3c59-4b09-bc73-511763da0f97","Type":"ContainerStarted","Data":"637f29aafeedd4f95a5103b4d57ecf09f9373656f555903bd68f0b0e60fe8045"} Jan 29 08:12:01 crc kubenswrapper[4826]: I0129 08:12:01.068332 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dbd0-account-create-update-qbplg" Jan 29 08:12:01 crc kubenswrapper[4826]: I0129 08:12:01.072880 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-krk5b" Jan 29 08:12:01 crc kubenswrapper[4826]: I0129 08:12:01.081733 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdrbs\" (UniqueName: \"kubernetes.io/projected/86b70399-c3a4-4543-b284-fad06b2dff50-kube-api-access-bdrbs\") pod \"86b70399-c3a4-4543-b284-fad06b2dff50\" (UID: \"86b70399-c3a4-4543-b284-fad06b2dff50\") " Jan 29 08:12:01 crc kubenswrapper[4826]: I0129 08:12:01.081791 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12acfe9d-3c59-4b09-bc73-511763da0f97-operator-scripts\") pod \"12acfe9d-3c59-4b09-bc73-511763da0f97\" (UID: \"12acfe9d-3c59-4b09-bc73-511763da0f97\") " Jan 29 08:12:01 crc kubenswrapper[4826]: I0129 08:12:01.081830 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86b70399-c3a4-4543-b284-fad06b2dff50-operator-scripts\") pod \"86b70399-c3a4-4543-b284-fad06b2dff50\" (UID: \"86b70399-c3a4-4543-b284-fad06b2dff50\") " Jan 29 08:12:01 crc kubenswrapper[4826]: I0129 08:12:01.081895 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcjl7\" (UniqueName: \"kubernetes.io/projected/12acfe9d-3c59-4b09-bc73-511763da0f97-kube-api-access-kcjl7\") pod \"12acfe9d-3c59-4b09-bc73-511763da0f97\" (UID: \"12acfe9d-3c59-4b09-bc73-511763da0f97\") " Jan 29 08:12:01 crc kubenswrapper[4826]: I0129 08:12:01.083712 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12acfe9d-3c59-4b09-bc73-511763da0f97-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "12acfe9d-3c59-4b09-bc73-511763da0f97" (UID: "12acfe9d-3c59-4b09-bc73-511763da0f97"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:12:01 crc kubenswrapper[4826]: I0129 08:12:01.084182 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86b70399-c3a4-4543-b284-fad06b2dff50-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "86b70399-c3a4-4543-b284-fad06b2dff50" (UID: "86b70399-c3a4-4543-b284-fad06b2dff50"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:12:01 crc kubenswrapper[4826]: I0129 08:12:01.089712 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12acfe9d-3c59-4b09-bc73-511763da0f97-kube-api-access-kcjl7" (OuterVolumeSpecName: "kube-api-access-kcjl7") pod "12acfe9d-3c59-4b09-bc73-511763da0f97" (UID: "12acfe9d-3c59-4b09-bc73-511763da0f97"). InnerVolumeSpecName "kube-api-access-kcjl7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:12:01 crc kubenswrapper[4826]: I0129 08:12:01.092985 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86b70399-c3a4-4543-b284-fad06b2dff50-kube-api-access-bdrbs" (OuterVolumeSpecName: "kube-api-access-bdrbs") pod "86b70399-c3a4-4543-b284-fad06b2dff50" (UID: "86b70399-c3a4-4543-b284-fad06b2dff50"). InnerVolumeSpecName "kube-api-access-bdrbs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:12:01 crc kubenswrapper[4826]: I0129 08:12:01.183684 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86b70399-c3a4-4543-b284-fad06b2dff50-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:12:01 crc kubenswrapper[4826]: I0129 08:12:01.183722 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcjl7\" (UniqueName: \"kubernetes.io/projected/12acfe9d-3c59-4b09-bc73-511763da0f97-kube-api-access-kcjl7\") on node \"crc\" DevicePath \"\"" Jan 29 08:12:01 crc kubenswrapper[4826]: I0129 08:12:01.183736 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdrbs\" (UniqueName: \"kubernetes.io/projected/86b70399-c3a4-4543-b284-fad06b2dff50-kube-api-access-bdrbs\") on node \"crc\" DevicePath \"\"" Jan 29 08:12:01 crc kubenswrapper[4826]: I0129 08:12:01.183748 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12acfe9d-3c59-4b09-bc73-511763da0f97-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:12:01 crc kubenswrapper[4826]: I0129 08:12:01.621064 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-krk5b" event={"ID":"12acfe9d-3c59-4b09-bc73-511763da0f97","Type":"ContainerDied","Data":"637f29aafeedd4f95a5103b4d57ecf09f9373656f555903bd68f0b0e60fe8045"} Jan 29 08:12:01 crc kubenswrapper[4826]: I0129 08:12:01.621115 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-krk5b" Jan 29 08:12:01 crc kubenswrapper[4826]: I0129 08:12:01.621123 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="637f29aafeedd4f95a5103b4d57ecf09f9373656f555903bd68f0b0e60fe8045" Jan 29 08:12:01 crc kubenswrapper[4826]: I0129 08:12:01.622940 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dbd0-account-create-update-qbplg" event={"ID":"86b70399-c3a4-4543-b284-fad06b2dff50","Type":"ContainerDied","Data":"32d5e007046a7a4db4e48d053bdff4c7994e82aaf9f95a5744839100ad0b1710"} Jan 29 08:12:01 crc kubenswrapper[4826]: I0129 08:12:01.622968 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32d5e007046a7a4db4e48d053bdff4c7994e82aaf9f95a5744839100ad0b1710" Jan 29 08:12:01 crc kubenswrapper[4826]: I0129 08:12:01.623028 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dbd0-account-create-update-qbplg" Jan 29 08:12:01 crc kubenswrapper[4826]: I0129 08:12:01.809798 4826 scope.go:117] "RemoveContainer" containerID="491d2214652be539c5a02abd82d2f7f7b125c3f1a64568b35d69e37bd575365a" Jan 29 08:12:01 crc kubenswrapper[4826]: E0129 08:12:01.810034 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:12:03 crc kubenswrapper[4826]: I0129 08:12:03.106639 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-qhcwq"] Jan 29 08:12:03 crc kubenswrapper[4826]: E0129 08:12:03.107620 4826 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="86b70399-c3a4-4543-b284-fad06b2dff50" containerName="mariadb-account-create-update" Jan 29 08:12:03 crc kubenswrapper[4826]: I0129 08:12:03.107644 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="86b70399-c3a4-4543-b284-fad06b2dff50" containerName="mariadb-account-create-update" Jan 29 08:12:03 crc kubenswrapper[4826]: E0129 08:12:03.107695 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12acfe9d-3c59-4b09-bc73-511763da0f97" containerName="mariadb-database-create" Jan 29 08:12:03 crc kubenswrapper[4826]: I0129 08:12:03.107710 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="12acfe9d-3c59-4b09-bc73-511763da0f97" containerName="mariadb-database-create" Jan 29 08:12:03 crc kubenswrapper[4826]: I0129 08:12:03.107994 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="86b70399-c3a4-4543-b284-fad06b2dff50" containerName="mariadb-account-create-update" Jan 29 08:12:03 crc kubenswrapper[4826]: I0129 08:12:03.108023 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="12acfe9d-3c59-4b09-bc73-511763da0f97" containerName="mariadb-database-create" Jan 29 08:12:03 crc kubenswrapper[4826]: I0129 08:12:03.108854 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-qhcwq" Jan 29 08:12:03 crc kubenswrapper[4826]: I0129 08:12:03.112348 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-sh8rq" Jan 29 08:12:03 crc kubenswrapper[4826]: I0129 08:12:03.113648 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 29 08:12:03 crc kubenswrapper[4826]: I0129 08:12:03.124254 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 29 08:12:03 crc kubenswrapper[4826]: I0129 08:12:03.129539 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-qhcwq"] Jan 29 08:12:03 crc kubenswrapper[4826]: I0129 08:12:03.226903 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ed258084-deca-4148-bc0a-5a56182f4a7e-config\") pod \"neutron-db-sync-qhcwq\" (UID: \"ed258084-deca-4148-bc0a-5a56182f4a7e\") " pod="openstack/neutron-db-sync-qhcwq" Jan 29 08:12:03 crc kubenswrapper[4826]: I0129 08:12:03.226983 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zbnc\" (UniqueName: \"kubernetes.io/projected/ed258084-deca-4148-bc0a-5a56182f4a7e-kube-api-access-5zbnc\") pod \"neutron-db-sync-qhcwq\" (UID: \"ed258084-deca-4148-bc0a-5a56182f4a7e\") " pod="openstack/neutron-db-sync-qhcwq" Jan 29 08:12:03 crc kubenswrapper[4826]: I0129 08:12:03.227034 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed258084-deca-4148-bc0a-5a56182f4a7e-combined-ca-bundle\") pod \"neutron-db-sync-qhcwq\" (UID: \"ed258084-deca-4148-bc0a-5a56182f4a7e\") " pod="openstack/neutron-db-sync-qhcwq" Jan 29 08:12:03 crc kubenswrapper[4826]: I0129 08:12:03.329405 4826 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ed258084-deca-4148-bc0a-5a56182f4a7e-config\") pod \"neutron-db-sync-qhcwq\" (UID: \"ed258084-deca-4148-bc0a-5a56182f4a7e\") " pod="openstack/neutron-db-sync-qhcwq" Jan 29 08:12:03 crc kubenswrapper[4826]: I0129 08:12:03.329483 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zbnc\" (UniqueName: \"kubernetes.io/projected/ed258084-deca-4148-bc0a-5a56182f4a7e-kube-api-access-5zbnc\") pod \"neutron-db-sync-qhcwq\" (UID: \"ed258084-deca-4148-bc0a-5a56182f4a7e\") " pod="openstack/neutron-db-sync-qhcwq" Jan 29 08:12:03 crc kubenswrapper[4826]: I0129 08:12:03.329529 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed258084-deca-4148-bc0a-5a56182f4a7e-combined-ca-bundle\") pod \"neutron-db-sync-qhcwq\" (UID: \"ed258084-deca-4148-bc0a-5a56182f4a7e\") " pod="openstack/neutron-db-sync-qhcwq" Jan 29 08:12:03 crc kubenswrapper[4826]: I0129 08:12:03.336441 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed258084-deca-4148-bc0a-5a56182f4a7e-combined-ca-bundle\") pod \"neutron-db-sync-qhcwq\" (UID: \"ed258084-deca-4148-bc0a-5a56182f4a7e\") " pod="openstack/neutron-db-sync-qhcwq" Jan 29 08:12:03 crc kubenswrapper[4826]: I0129 08:12:03.342753 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ed258084-deca-4148-bc0a-5a56182f4a7e-config\") pod \"neutron-db-sync-qhcwq\" (UID: \"ed258084-deca-4148-bc0a-5a56182f4a7e\") " pod="openstack/neutron-db-sync-qhcwq" Jan 29 08:12:03 crc kubenswrapper[4826]: I0129 08:12:03.357555 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zbnc\" (UniqueName: 
\"kubernetes.io/projected/ed258084-deca-4148-bc0a-5a56182f4a7e-kube-api-access-5zbnc\") pod \"neutron-db-sync-qhcwq\" (UID: \"ed258084-deca-4148-bc0a-5a56182f4a7e\") " pod="openstack/neutron-db-sync-qhcwq" Jan 29 08:12:03 crc kubenswrapper[4826]: I0129 08:12:03.492655 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qhcwq" Jan 29 08:12:03 crc kubenswrapper[4826]: I0129 08:12:03.970979 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-qhcwq"] Jan 29 08:12:04 crc kubenswrapper[4826]: I0129 08:12:04.655320 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qhcwq" event={"ID":"ed258084-deca-4148-bc0a-5a56182f4a7e","Type":"ContainerStarted","Data":"bb0484b1cb95cf1207588598d902edf20d859084ec3bb2584317baf8e27a8c1a"} Jan 29 08:12:04 crc kubenswrapper[4826]: I0129 08:12:04.656734 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qhcwq" event={"ID":"ed258084-deca-4148-bc0a-5a56182f4a7e","Type":"ContainerStarted","Data":"b8b2426a0161066c49e1ce11178173b1254c6bf46d0478b70042ff11d00247ce"} Jan 29 08:12:04 crc kubenswrapper[4826]: I0129 08:12:04.680324 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-qhcwq" podStartSLOduration=1.6802756589999999 podStartE2EDuration="1.680275659s" podCreationTimestamp="2026-01-29 08:12:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:12:04.671238891 +0000 UTC m=+5308.533032000" watchObservedRunningTime="2026-01-29 08:12:04.680275659 +0000 UTC m=+5308.542068758" Jan 29 08:12:08 crc kubenswrapper[4826]: I0129 08:12:08.705590 4826 generic.go:334] "Generic (PLEG): container finished" podID="ed258084-deca-4148-bc0a-5a56182f4a7e" containerID="bb0484b1cb95cf1207588598d902edf20d859084ec3bb2584317baf8e27a8c1a" exitCode=0 Jan 29 08:12:08 
crc kubenswrapper[4826]: I0129 08:12:08.705703 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qhcwq" event={"ID":"ed258084-deca-4148-bc0a-5a56182f4a7e","Type":"ContainerDied","Data":"bb0484b1cb95cf1207588598d902edf20d859084ec3bb2584317baf8e27a8c1a"} Jan 29 08:12:10 crc kubenswrapper[4826]: I0129 08:12:10.151339 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qhcwq" Jan 29 08:12:10 crc kubenswrapper[4826]: I0129 08:12:10.275409 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed258084-deca-4148-bc0a-5a56182f4a7e-combined-ca-bundle\") pod \"ed258084-deca-4148-bc0a-5a56182f4a7e\" (UID: \"ed258084-deca-4148-bc0a-5a56182f4a7e\") " Jan 29 08:12:10 crc kubenswrapper[4826]: I0129 08:12:10.275517 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zbnc\" (UniqueName: \"kubernetes.io/projected/ed258084-deca-4148-bc0a-5a56182f4a7e-kube-api-access-5zbnc\") pod \"ed258084-deca-4148-bc0a-5a56182f4a7e\" (UID: \"ed258084-deca-4148-bc0a-5a56182f4a7e\") " Jan 29 08:12:10 crc kubenswrapper[4826]: I0129 08:12:10.275855 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ed258084-deca-4148-bc0a-5a56182f4a7e-config\") pod \"ed258084-deca-4148-bc0a-5a56182f4a7e\" (UID: \"ed258084-deca-4148-bc0a-5a56182f4a7e\") " Jan 29 08:12:10 crc kubenswrapper[4826]: I0129 08:12:10.284180 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed258084-deca-4148-bc0a-5a56182f4a7e-kube-api-access-5zbnc" (OuterVolumeSpecName: "kube-api-access-5zbnc") pod "ed258084-deca-4148-bc0a-5a56182f4a7e" (UID: "ed258084-deca-4148-bc0a-5a56182f4a7e"). InnerVolumeSpecName "kube-api-access-5zbnc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:12:10 crc kubenswrapper[4826]: I0129 08:12:10.320462 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed258084-deca-4148-bc0a-5a56182f4a7e-config" (OuterVolumeSpecName: "config") pod "ed258084-deca-4148-bc0a-5a56182f4a7e" (UID: "ed258084-deca-4148-bc0a-5a56182f4a7e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:12:10 crc kubenswrapper[4826]: I0129 08:12:10.329112 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed258084-deca-4148-bc0a-5a56182f4a7e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed258084-deca-4148-bc0a-5a56182f4a7e" (UID: "ed258084-deca-4148-bc0a-5a56182f4a7e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:12:10 crc kubenswrapper[4826]: I0129 08:12:10.378586 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed258084-deca-4148-bc0a-5a56182f4a7e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:12:10 crc kubenswrapper[4826]: I0129 08:12:10.378637 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zbnc\" (UniqueName: \"kubernetes.io/projected/ed258084-deca-4148-bc0a-5a56182f4a7e-kube-api-access-5zbnc\") on node \"crc\" DevicePath \"\"" Jan 29 08:12:10 crc kubenswrapper[4826]: I0129 08:12:10.378659 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ed258084-deca-4148-bc0a-5a56182f4a7e-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:12:10 crc kubenswrapper[4826]: I0129 08:12:10.735772 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qhcwq" 
event={"ID":"ed258084-deca-4148-bc0a-5a56182f4a7e","Type":"ContainerDied","Data":"b8b2426a0161066c49e1ce11178173b1254c6bf46d0478b70042ff11d00247ce"} Jan 29 08:12:10 crc kubenswrapper[4826]: I0129 08:12:10.735830 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8b2426a0161066c49e1ce11178173b1254c6bf46d0478b70042ff11d00247ce" Jan 29 08:12:10 crc kubenswrapper[4826]: I0129 08:12:10.735854 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qhcwq" Jan 29 08:12:10 crc kubenswrapper[4826]: I0129 08:12:10.912100 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6779c9d96f-pg8l2"] Jan 29 08:12:10 crc kubenswrapper[4826]: E0129 08:12:10.912532 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed258084-deca-4148-bc0a-5a56182f4a7e" containerName="neutron-db-sync" Jan 29 08:12:10 crc kubenswrapper[4826]: I0129 08:12:10.912553 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed258084-deca-4148-bc0a-5a56182f4a7e" containerName="neutron-db-sync" Jan 29 08:12:10 crc kubenswrapper[4826]: I0129 08:12:10.912753 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed258084-deca-4148-bc0a-5a56182f4a7e" containerName="neutron-db-sync" Jan 29 08:12:10 crc kubenswrapper[4826]: I0129 08:12:10.915340 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6779c9d96f-pg8l2" Jan 29 08:12:10 crc kubenswrapper[4826]: I0129 08:12:10.947347 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6779c9d96f-pg8l2"] Jan 29 08:12:10 crc kubenswrapper[4826]: I0129 08:12:10.990204 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f748bb0-1855-4f91-9fb3-b22c39e29eab-config\") pod \"dnsmasq-dns-6779c9d96f-pg8l2\" (UID: \"2f748bb0-1855-4f91-9fb3-b22c39e29eab\") " pod="openstack/dnsmasq-dns-6779c9d96f-pg8l2" Jan 29 08:12:10 crc kubenswrapper[4826]: I0129 08:12:10.990282 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4fg5\" (UniqueName: \"kubernetes.io/projected/2f748bb0-1855-4f91-9fb3-b22c39e29eab-kube-api-access-r4fg5\") pod \"dnsmasq-dns-6779c9d96f-pg8l2\" (UID: \"2f748bb0-1855-4f91-9fb3-b22c39e29eab\") " pod="openstack/dnsmasq-dns-6779c9d96f-pg8l2" Jan 29 08:12:10 crc kubenswrapper[4826]: I0129 08:12:10.990333 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f748bb0-1855-4f91-9fb3-b22c39e29eab-ovsdbserver-nb\") pod \"dnsmasq-dns-6779c9d96f-pg8l2\" (UID: \"2f748bb0-1855-4f91-9fb3-b22c39e29eab\") " pod="openstack/dnsmasq-dns-6779c9d96f-pg8l2" Jan 29 08:12:10 crc kubenswrapper[4826]: I0129 08:12:10.990370 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f748bb0-1855-4f91-9fb3-b22c39e29eab-ovsdbserver-sb\") pod \"dnsmasq-dns-6779c9d96f-pg8l2\" (UID: \"2f748bb0-1855-4f91-9fb3-b22c39e29eab\") " pod="openstack/dnsmasq-dns-6779c9d96f-pg8l2" Jan 29 08:12:10 crc kubenswrapper[4826]: I0129 08:12:10.990393 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f748bb0-1855-4f91-9fb3-b22c39e29eab-dns-svc\") pod \"dnsmasq-dns-6779c9d96f-pg8l2\" (UID: \"2f748bb0-1855-4f91-9fb3-b22c39e29eab\") " pod="openstack/dnsmasq-dns-6779c9d96f-pg8l2" Jan 29 08:12:11 crc kubenswrapper[4826]: I0129 08:12:11.031243 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6f446dc6b-46bwh"] Jan 29 08:12:11 crc kubenswrapper[4826]: I0129 08:12:11.034185 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6f446dc6b-46bwh" Jan 29 08:12:11 crc kubenswrapper[4826]: I0129 08:12:11.036952 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 29 08:12:11 crc kubenswrapper[4826]: I0129 08:12:11.037158 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-sh8rq" Jan 29 08:12:11 crc kubenswrapper[4826]: I0129 08:12:11.037467 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 29 08:12:11 crc kubenswrapper[4826]: I0129 08:12:11.038023 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 29 08:12:11 crc kubenswrapper[4826]: I0129 08:12:11.044072 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6f446dc6b-46bwh"] Jan 29 08:12:11 crc kubenswrapper[4826]: I0129 08:12:11.092011 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/db7ef629-be06-4915-bf03-b8c5106670a1-config\") pod \"neutron-6f446dc6b-46bwh\" (UID: \"db7ef629-be06-4915-bf03-b8c5106670a1\") " pod="openstack/neutron-6f446dc6b-46bwh" Jan 29 08:12:11 crc kubenswrapper[4826]: I0129 08:12:11.092131 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/2f748bb0-1855-4f91-9fb3-b22c39e29eab-ovsdbserver-sb\") pod \"dnsmasq-dns-6779c9d96f-pg8l2\" (UID: \"2f748bb0-1855-4f91-9fb3-b22c39e29eab\") " pod="openstack/dnsmasq-dns-6779c9d96f-pg8l2" Jan 29 08:12:11 crc kubenswrapper[4826]: I0129 08:12:11.092979 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f748bb0-1855-4f91-9fb3-b22c39e29eab-ovsdbserver-sb\") pod \"dnsmasq-dns-6779c9d96f-pg8l2\" (UID: \"2f748bb0-1855-4f91-9fb3-b22c39e29eab\") " pod="openstack/dnsmasq-dns-6779c9d96f-pg8l2" Jan 29 08:12:11 crc kubenswrapper[4826]: I0129 08:12:11.093042 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db7ef629-be06-4915-bf03-b8c5106670a1-combined-ca-bundle\") pod \"neutron-6f446dc6b-46bwh\" (UID: \"db7ef629-be06-4915-bf03-b8c5106670a1\") " pod="openstack/neutron-6f446dc6b-46bwh" Jan 29 08:12:11 crc kubenswrapper[4826]: I0129 08:12:11.093069 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f748bb0-1855-4f91-9fb3-b22c39e29eab-dns-svc\") pod \"dnsmasq-dns-6779c9d96f-pg8l2\" (UID: \"2f748bb0-1855-4f91-9fb3-b22c39e29eab\") " pod="openstack/dnsmasq-dns-6779c9d96f-pg8l2" Jan 29 08:12:11 crc kubenswrapper[4826]: I0129 08:12:11.093703 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f748bb0-1855-4f91-9fb3-b22c39e29eab-dns-svc\") pod \"dnsmasq-dns-6779c9d96f-pg8l2\" (UID: \"2f748bb0-1855-4f91-9fb3-b22c39e29eab\") " pod="openstack/dnsmasq-dns-6779c9d96f-pg8l2" Jan 29 08:12:11 crc kubenswrapper[4826]: I0129 08:12:11.093774 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f748bb0-1855-4f91-9fb3-b22c39e29eab-config\") pod 
\"dnsmasq-dns-6779c9d96f-pg8l2\" (UID: \"2f748bb0-1855-4f91-9fb3-b22c39e29eab\") " pod="openstack/dnsmasq-dns-6779c9d96f-pg8l2" Jan 29 08:12:11 crc kubenswrapper[4826]: I0129 08:12:11.093827 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvxcg\" (UniqueName: \"kubernetes.io/projected/db7ef629-be06-4915-bf03-b8c5106670a1-kube-api-access-cvxcg\") pod \"neutron-6f446dc6b-46bwh\" (UID: \"db7ef629-be06-4915-bf03-b8c5106670a1\") " pod="openstack/neutron-6f446dc6b-46bwh" Jan 29 08:12:11 crc kubenswrapper[4826]: I0129 08:12:11.093855 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/db7ef629-be06-4915-bf03-b8c5106670a1-ovndb-tls-certs\") pod \"neutron-6f446dc6b-46bwh\" (UID: \"db7ef629-be06-4915-bf03-b8c5106670a1\") " pod="openstack/neutron-6f446dc6b-46bwh" Jan 29 08:12:11 crc kubenswrapper[4826]: I0129 08:12:11.093895 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4fg5\" (UniqueName: \"kubernetes.io/projected/2f748bb0-1855-4f91-9fb3-b22c39e29eab-kube-api-access-r4fg5\") pod \"dnsmasq-dns-6779c9d96f-pg8l2\" (UID: \"2f748bb0-1855-4f91-9fb3-b22c39e29eab\") " pod="openstack/dnsmasq-dns-6779c9d96f-pg8l2" Jan 29 08:12:11 crc kubenswrapper[4826]: I0129 08:12:11.093918 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/db7ef629-be06-4915-bf03-b8c5106670a1-httpd-config\") pod \"neutron-6f446dc6b-46bwh\" (UID: \"db7ef629-be06-4915-bf03-b8c5106670a1\") " pod="openstack/neutron-6f446dc6b-46bwh" Jan 29 08:12:11 crc kubenswrapper[4826]: I0129 08:12:11.093948 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f748bb0-1855-4f91-9fb3-b22c39e29eab-ovsdbserver-nb\") pod 
\"dnsmasq-dns-6779c9d96f-pg8l2\" (UID: \"2f748bb0-1855-4f91-9fb3-b22c39e29eab\") " pod="openstack/dnsmasq-dns-6779c9d96f-pg8l2" Jan 29 08:12:11 crc kubenswrapper[4826]: I0129 08:12:11.094627 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f748bb0-1855-4f91-9fb3-b22c39e29eab-ovsdbserver-nb\") pod \"dnsmasq-dns-6779c9d96f-pg8l2\" (UID: \"2f748bb0-1855-4f91-9fb3-b22c39e29eab\") " pod="openstack/dnsmasq-dns-6779c9d96f-pg8l2" Jan 29 08:12:11 crc kubenswrapper[4826]: I0129 08:12:11.095515 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f748bb0-1855-4f91-9fb3-b22c39e29eab-config\") pod \"dnsmasq-dns-6779c9d96f-pg8l2\" (UID: \"2f748bb0-1855-4f91-9fb3-b22c39e29eab\") " pod="openstack/dnsmasq-dns-6779c9d96f-pg8l2" Jan 29 08:12:11 crc kubenswrapper[4826]: I0129 08:12:11.114702 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4fg5\" (UniqueName: \"kubernetes.io/projected/2f748bb0-1855-4f91-9fb3-b22c39e29eab-kube-api-access-r4fg5\") pod \"dnsmasq-dns-6779c9d96f-pg8l2\" (UID: \"2f748bb0-1855-4f91-9fb3-b22c39e29eab\") " pod="openstack/dnsmasq-dns-6779c9d96f-pg8l2" Jan 29 08:12:11 crc kubenswrapper[4826]: I0129 08:12:11.195537 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db7ef629-be06-4915-bf03-b8c5106670a1-combined-ca-bundle\") pod \"neutron-6f446dc6b-46bwh\" (UID: \"db7ef629-be06-4915-bf03-b8c5106670a1\") " pod="openstack/neutron-6f446dc6b-46bwh" Jan 29 08:12:11 crc kubenswrapper[4826]: I0129 08:12:11.195650 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvxcg\" (UniqueName: \"kubernetes.io/projected/db7ef629-be06-4915-bf03-b8c5106670a1-kube-api-access-cvxcg\") pod \"neutron-6f446dc6b-46bwh\" (UID: 
\"db7ef629-be06-4915-bf03-b8c5106670a1\") " pod="openstack/neutron-6f446dc6b-46bwh" Jan 29 08:12:11 crc kubenswrapper[4826]: I0129 08:12:11.195673 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/db7ef629-be06-4915-bf03-b8c5106670a1-ovndb-tls-certs\") pod \"neutron-6f446dc6b-46bwh\" (UID: \"db7ef629-be06-4915-bf03-b8c5106670a1\") " pod="openstack/neutron-6f446dc6b-46bwh" Jan 29 08:12:11 crc kubenswrapper[4826]: I0129 08:12:11.195701 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/db7ef629-be06-4915-bf03-b8c5106670a1-httpd-config\") pod \"neutron-6f446dc6b-46bwh\" (UID: \"db7ef629-be06-4915-bf03-b8c5106670a1\") " pod="openstack/neutron-6f446dc6b-46bwh" Jan 29 08:12:11 crc kubenswrapper[4826]: I0129 08:12:11.195743 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/db7ef629-be06-4915-bf03-b8c5106670a1-config\") pod \"neutron-6f446dc6b-46bwh\" (UID: \"db7ef629-be06-4915-bf03-b8c5106670a1\") " pod="openstack/neutron-6f446dc6b-46bwh" Jan 29 08:12:11 crc kubenswrapper[4826]: I0129 08:12:11.199245 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/db7ef629-be06-4915-bf03-b8c5106670a1-config\") pod \"neutron-6f446dc6b-46bwh\" (UID: \"db7ef629-be06-4915-bf03-b8c5106670a1\") " pod="openstack/neutron-6f446dc6b-46bwh" Jan 29 08:12:11 crc kubenswrapper[4826]: I0129 08:12:11.200074 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db7ef629-be06-4915-bf03-b8c5106670a1-combined-ca-bundle\") pod \"neutron-6f446dc6b-46bwh\" (UID: \"db7ef629-be06-4915-bf03-b8c5106670a1\") " pod="openstack/neutron-6f446dc6b-46bwh" Jan 29 08:12:11 crc kubenswrapper[4826]: I0129 08:12:11.203103 4826 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/db7ef629-be06-4915-bf03-b8c5106670a1-httpd-config\") pod \"neutron-6f446dc6b-46bwh\" (UID: \"db7ef629-be06-4915-bf03-b8c5106670a1\") " pod="openstack/neutron-6f446dc6b-46bwh" Jan 29 08:12:11 crc kubenswrapper[4826]: I0129 08:12:11.212123 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/db7ef629-be06-4915-bf03-b8c5106670a1-ovndb-tls-certs\") pod \"neutron-6f446dc6b-46bwh\" (UID: \"db7ef629-be06-4915-bf03-b8c5106670a1\") " pod="openstack/neutron-6f446dc6b-46bwh" Jan 29 08:12:11 crc kubenswrapper[4826]: I0129 08:12:11.217552 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvxcg\" (UniqueName: \"kubernetes.io/projected/db7ef629-be06-4915-bf03-b8c5106670a1-kube-api-access-cvxcg\") pod \"neutron-6f446dc6b-46bwh\" (UID: \"db7ef629-be06-4915-bf03-b8c5106670a1\") " pod="openstack/neutron-6f446dc6b-46bwh" Jan 29 08:12:11 crc kubenswrapper[4826]: I0129 08:12:11.245279 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6779c9d96f-pg8l2" Jan 29 08:12:11 crc kubenswrapper[4826]: I0129 08:12:11.350714 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6f446dc6b-46bwh" Jan 29 08:12:11 crc kubenswrapper[4826]: I0129 08:12:11.728428 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6779c9d96f-pg8l2"] Jan 29 08:12:11 crc kubenswrapper[4826]: I0129 08:12:11.747028 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6779c9d96f-pg8l2" event={"ID":"2f748bb0-1855-4f91-9fb3-b22c39e29eab","Type":"ContainerStarted","Data":"cb00235711f093c36ce471ad8bcbcc89b958045d3b3425d93f7ee8b8251dc104"} Jan 29 08:12:11 crc kubenswrapper[4826]: I0129 08:12:11.944418 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6f446dc6b-46bwh"] Jan 29 08:12:12 crc kubenswrapper[4826]: I0129 08:12:12.761237 4826 generic.go:334] "Generic (PLEG): container finished" podID="2f748bb0-1855-4f91-9fb3-b22c39e29eab" containerID="8d8da36ab850fd1f1e4e00829ccb5de9959c5f29f3945d744e48a4ea4b727388" exitCode=0 Jan 29 08:12:12 crc kubenswrapper[4826]: I0129 08:12:12.761380 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6779c9d96f-pg8l2" event={"ID":"2f748bb0-1855-4f91-9fb3-b22c39e29eab","Type":"ContainerDied","Data":"8d8da36ab850fd1f1e4e00829ccb5de9959c5f29f3945d744e48a4ea4b727388"} Jan 29 08:12:12 crc kubenswrapper[4826]: I0129 08:12:12.767081 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f446dc6b-46bwh" event={"ID":"db7ef629-be06-4915-bf03-b8c5106670a1","Type":"ContainerStarted","Data":"85ff00eea8a24d9b4087cdbea2f296645a54f8226ebfe9c87ade60efc4c77194"} Jan 29 08:12:12 crc kubenswrapper[4826]: I0129 08:12:12.767165 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f446dc6b-46bwh" event={"ID":"db7ef629-be06-4915-bf03-b8c5106670a1","Type":"ContainerStarted","Data":"5c60d1d69fd5d7c8fc28c2ab64cfe0b5e067de2241af32abd03eb7fd3de15ca2"} Jan 29 08:12:12 crc kubenswrapper[4826]: I0129 08:12:12.767185 4826 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/neutron-6f446dc6b-46bwh" event={"ID":"db7ef629-be06-4915-bf03-b8c5106670a1","Type":"ContainerStarted","Data":"c051aedcbddafc00d48799b8f19db9b9d44e2a06553553c819c2f57b9552b335"} Jan 29 08:12:12 crc kubenswrapper[4826]: I0129 08:12:12.768174 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6f446dc6b-46bwh" Jan 29 08:12:12 crc kubenswrapper[4826]: I0129 08:12:12.816632 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6f446dc6b-46bwh" podStartSLOduration=1.816617873 podStartE2EDuration="1.816617873s" podCreationTimestamp="2026-01-29 08:12:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:12:12.814488057 +0000 UTC m=+5316.676281136" watchObservedRunningTime="2026-01-29 08:12:12.816617873 +0000 UTC m=+5316.678410942" Jan 29 08:12:13 crc kubenswrapper[4826]: I0129 08:12:13.776705 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6779c9d96f-pg8l2" event={"ID":"2f748bb0-1855-4f91-9fb3-b22c39e29eab","Type":"ContainerStarted","Data":"c07ab096b05162127deea8a753f92287fc72f737e9c6dbea450ee46d48d6c0f9"} Jan 29 08:12:13 crc kubenswrapper[4826]: I0129 08:12:13.798231 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6779c9d96f-pg8l2" podStartSLOduration=3.798207766 podStartE2EDuration="3.798207766s" podCreationTimestamp="2026-01-29 08:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:12:13.795934236 +0000 UTC m=+5317.657727345" watchObservedRunningTime="2026-01-29 08:12:13.798207766 +0000 UTC m=+5317.660000875" Jan 29 08:12:14 crc kubenswrapper[4826]: I0129 08:12:14.105201 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5f8bfccb8f-fkvqn"] Jan 29 08:12:14 crc 
kubenswrapper[4826]: I0129 08:12:14.106697 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5f8bfccb8f-fkvqn" Jan 29 08:12:14 crc kubenswrapper[4826]: I0129 08:12:14.109718 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 29 08:12:14 crc kubenswrapper[4826]: I0129 08:12:14.109996 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 29 08:12:14 crc kubenswrapper[4826]: I0129 08:12:14.129464 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5f8bfccb8f-fkvqn"] Jan 29 08:12:14 crc kubenswrapper[4826]: I0129 08:12:14.152618 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/45765b70-e905-4d85-8d65-bbacf291190a-httpd-config\") pod \"neutron-5f8bfccb8f-fkvqn\" (UID: \"45765b70-e905-4d85-8d65-bbacf291190a\") " pod="openstack/neutron-5f8bfccb8f-fkvqn" Jan 29 08:12:14 crc kubenswrapper[4826]: I0129 08:12:14.152670 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/45765b70-e905-4d85-8d65-bbacf291190a-config\") pod \"neutron-5f8bfccb8f-fkvqn\" (UID: \"45765b70-e905-4d85-8d65-bbacf291190a\") " pod="openstack/neutron-5f8bfccb8f-fkvqn" Jan 29 08:12:14 crc kubenswrapper[4826]: I0129 08:12:14.152723 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jljsd\" (UniqueName: \"kubernetes.io/projected/45765b70-e905-4d85-8d65-bbacf291190a-kube-api-access-jljsd\") pod \"neutron-5f8bfccb8f-fkvqn\" (UID: \"45765b70-e905-4d85-8d65-bbacf291190a\") " pod="openstack/neutron-5f8bfccb8f-fkvqn" Jan 29 08:12:14 crc kubenswrapper[4826]: I0129 08:12:14.152973 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/45765b70-e905-4d85-8d65-bbacf291190a-ovndb-tls-certs\") pod \"neutron-5f8bfccb8f-fkvqn\" (UID: \"45765b70-e905-4d85-8d65-bbacf291190a\") " pod="openstack/neutron-5f8bfccb8f-fkvqn" Jan 29 08:12:14 crc kubenswrapper[4826]: I0129 08:12:14.153025 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45765b70-e905-4d85-8d65-bbacf291190a-internal-tls-certs\") pod \"neutron-5f8bfccb8f-fkvqn\" (UID: \"45765b70-e905-4d85-8d65-bbacf291190a\") " pod="openstack/neutron-5f8bfccb8f-fkvqn" Jan 29 08:12:14 crc kubenswrapper[4826]: I0129 08:12:14.153116 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45765b70-e905-4d85-8d65-bbacf291190a-public-tls-certs\") pod \"neutron-5f8bfccb8f-fkvqn\" (UID: \"45765b70-e905-4d85-8d65-bbacf291190a\") " pod="openstack/neutron-5f8bfccb8f-fkvqn" Jan 29 08:12:14 crc kubenswrapper[4826]: I0129 08:12:14.153173 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45765b70-e905-4d85-8d65-bbacf291190a-combined-ca-bundle\") pod \"neutron-5f8bfccb8f-fkvqn\" (UID: \"45765b70-e905-4d85-8d65-bbacf291190a\") " pod="openstack/neutron-5f8bfccb8f-fkvqn" Jan 29 08:12:14 crc kubenswrapper[4826]: I0129 08:12:14.254868 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/45765b70-e905-4d85-8d65-bbacf291190a-httpd-config\") pod \"neutron-5f8bfccb8f-fkvqn\" (UID: \"45765b70-e905-4d85-8d65-bbacf291190a\") " pod="openstack/neutron-5f8bfccb8f-fkvqn" Jan 29 08:12:14 crc kubenswrapper[4826]: I0129 08:12:14.254935 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/45765b70-e905-4d85-8d65-bbacf291190a-config\") pod \"neutron-5f8bfccb8f-fkvqn\" (UID: \"45765b70-e905-4d85-8d65-bbacf291190a\") " pod="openstack/neutron-5f8bfccb8f-fkvqn" Jan 29 08:12:14 crc kubenswrapper[4826]: I0129 08:12:14.254978 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jljsd\" (UniqueName: \"kubernetes.io/projected/45765b70-e905-4d85-8d65-bbacf291190a-kube-api-access-jljsd\") pod \"neutron-5f8bfccb8f-fkvqn\" (UID: \"45765b70-e905-4d85-8d65-bbacf291190a\") " pod="openstack/neutron-5f8bfccb8f-fkvqn" Jan 29 08:12:14 crc kubenswrapper[4826]: I0129 08:12:14.255046 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/45765b70-e905-4d85-8d65-bbacf291190a-ovndb-tls-certs\") pod \"neutron-5f8bfccb8f-fkvqn\" (UID: \"45765b70-e905-4d85-8d65-bbacf291190a\") " pod="openstack/neutron-5f8bfccb8f-fkvqn" Jan 29 08:12:14 crc kubenswrapper[4826]: I0129 08:12:14.255072 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45765b70-e905-4d85-8d65-bbacf291190a-internal-tls-certs\") pod \"neutron-5f8bfccb8f-fkvqn\" (UID: \"45765b70-e905-4d85-8d65-bbacf291190a\") " pod="openstack/neutron-5f8bfccb8f-fkvqn" Jan 29 08:12:14 crc kubenswrapper[4826]: I0129 08:12:14.255102 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45765b70-e905-4d85-8d65-bbacf291190a-public-tls-certs\") pod \"neutron-5f8bfccb8f-fkvqn\" (UID: \"45765b70-e905-4d85-8d65-bbacf291190a\") " pod="openstack/neutron-5f8bfccb8f-fkvqn" Jan 29 08:12:14 crc kubenswrapper[4826]: I0129 08:12:14.255130 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45765b70-e905-4d85-8d65-bbacf291190a-combined-ca-bundle\") 
pod \"neutron-5f8bfccb8f-fkvqn\" (UID: \"45765b70-e905-4d85-8d65-bbacf291190a\") " pod="openstack/neutron-5f8bfccb8f-fkvqn" Jan 29 08:12:14 crc kubenswrapper[4826]: I0129 08:12:14.262635 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45765b70-e905-4d85-8d65-bbacf291190a-public-tls-certs\") pod \"neutron-5f8bfccb8f-fkvqn\" (UID: \"45765b70-e905-4d85-8d65-bbacf291190a\") " pod="openstack/neutron-5f8bfccb8f-fkvqn" Jan 29 08:12:14 crc kubenswrapper[4826]: I0129 08:12:14.263599 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45765b70-e905-4d85-8d65-bbacf291190a-internal-tls-certs\") pod \"neutron-5f8bfccb8f-fkvqn\" (UID: \"45765b70-e905-4d85-8d65-bbacf291190a\") " pod="openstack/neutron-5f8bfccb8f-fkvqn" Jan 29 08:12:14 crc kubenswrapper[4826]: I0129 08:12:14.265171 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/45765b70-e905-4d85-8d65-bbacf291190a-httpd-config\") pod \"neutron-5f8bfccb8f-fkvqn\" (UID: \"45765b70-e905-4d85-8d65-bbacf291190a\") " pod="openstack/neutron-5f8bfccb8f-fkvqn" Jan 29 08:12:14 crc kubenswrapper[4826]: I0129 08:12:14.273340 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/45765b70-e905-4d85-8d65-bbacf291190a-ovndb-tls-certs\") pod \"neutron-5f8bfccb8f-fkvqn\" (UID: \"45765b70-e905-4d85-8d65-bbacf291190a\") " pod="openstack/neutron-5f8bfccb8f-fkvqn" Jan 29 08:12:14 crc kubenswrapper[4826]: I0129 08:12:14.274555 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/45765b70-e905-4d85-8d65-bbacf291190a-config\") pod \"neutron-5f8bfccb8f-fkvqn\" (UID: \"45765b70-e905-4d85-8d65-bbacf291190a\") " pod="openstack/neutron-5f8bfccb8f-fkvqn" Jan 29 08:12:14 crc 
kubenswrapper[4826]: I0129 08:12:14.275469 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45765b70-e905-4d85-8d65-bbacf291190a-combined-ca-bundle\") pod \"neutron-5f8bfccb8f-fkvqn\" (UID: \"45765b70-e905-4d85-8d65-bbacf291190a\") " pod="openstack/neutron-5f8bfccb8f-fkvqn" Jan 29 08:12:14 crc kubenswrapper[4826]: I0129 08:12:14.296247 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jljsd\" (UniqueName: \"kubernetes.io/projected/45765b70-e905-4d85-8d65-bbacf291190a-kube-api-access-jljsd\") pod \"neutron-5f8bfccb8f-fkvqn\" (UID: \"45765b70-e905-4d85-8d65-bbacf291190a\") " pod="openstack/neutron-5f8bfccb8f-fkvqn" Jan 29 08:12:14 crc kubenswrapper[4826]: I0129 08:12:14.427450 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5f8bfccb8f-fkvqn" Jan 29 08:12:14 crc kubenswrapper[4826]: I0129 08:12:14.785133 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6779c9d96f-pg8l2" Jan 29 08:12:15 crc kubenswrapper[4826]: I0129 08:12:15.026229 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5f8bfccb8f-fkvqn"] Jan 29 08:12:15 crc kubenswrapper[4826]: W0129 08:12:15.031458 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45765b70_e905_4d85_8d65_bbacf291190a.slice/crio-2239b2fc58746d125f55423121723bf1a015d89a161f37c79e311cf401d22bee WatchSource:0}: Error finding container 2239b2fc58746d125f55423121723bf1a015d89a161f37c79e311cf401d22bee: Status 404 returned error can't find the container with id 2239b2fc58746d125f55423121723bf1a015d89a161f37c79e311cf401d22bee Jan 29 08:12:15 crc kubenswrapper[4826]: I0129 08:12:15.794598 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f8bfccb8f-fkvqn" 
event={"ID":"45765b70-e905-4d85-8d65-bbacf291190a","Type":"ContainerStarted","Data":"ea3d11f9632ae8e5525db168f2674dfbe39234dc0c417dad69c8f463feee6f58"} Jan 29 08:12:15 crc kubenswrapper[4826]: I0129 08:12:15.794921 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f8bfccb8f-fkvqn" event={"ID":"45765b70-e905-4d85-8d65-bbacf291190a","Type":"ContainerStarted","Data":"89574f141dea88cdc4283319b0605298f7a7613054ab3305071b56f909578a33"} Jan 29 08:12:15 crc kubenswrapper[4826]: I0129 08:12:15.794933 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f8bfccb8f-fkvqn" event={"ID":"45765b70-e905-4d85-8d65-bbacf291190a","Type":"ContainerStarted","Data":"2239b2fc58746d125f55423121723bf1a015d89a161f37c79e311cf401d22bee"} Jan 29 08:12:15 crc kubenswrapper[4826]: I0129 08:12:15.818215 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5f8bfccb8f-fkvqn" podStartSLOduration=1.8181918750000001 podStartE2EDuration="1.818191875s" podCreationTimestamp="2026-01-29 08:12:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:12:15.814331834 +0000 UTC m=+5319.676124903" watchObservedRunningTime="2026-01-29 08:12:15.818191875 +0000 UTC m=+5319.679984944" Jan 29 08:12:16 crc kubenswrapper[4826]: I0129 08:12:16.804561 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5f8bfccb8f-fkvqn" Jan 29 08:12:16 crc kubenswrapper[4826]: I0129 08:12:16.814631 4826 scope.go:117] "RemoveContainer" containerID="491d2214652be539c5a02abd82d2f7f7b125c3f1a64568b35d69e37bd575365a" Jan 29 08:12:16 crc kubenswrapper[4826]: E0129 08:12:16.814940 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:12:21 crc kubenswrapper[4826]: I0129 08:12:21.246651 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6779c9d96f-pg8l2" Jan 29 08:12:21 crc kubenswrapper[4826]: I0129 08:12:21.342587 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-756bb75757-mdq4x"] Jan 29 08:12:21 crc kubenswrapper[4826]: I0129 08:12:21.343342 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-756bb75757-mdq4x" podUID="f2c964f6-3323-4fe8-949e-066624fc29d7" containerName="dnsmasq-dns" containerID="cri-o://ec279a703333da719630f2e0c6d148b5384ea39751419c0374a4b84c8c570303" gracePeriod=10 Jan 29 08:12:21 crc kubenswrapper[4826]: I0129 08:12:21.879563 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-756bb75757-mdq4x" Jan 29 08:12:21 crc kubenswrapper[4826]: I0129 08:12:21.921168 4826 generic.go:334] "Generic (PLEG): container finished" podID="f2c964f6-3323-4fe8-949e-066624fc29d7" containerID="ec279a703333da719630f2e0c6d148b5384ea39751419c0374a4b84c8c570303" exitCode=0 Jan 29 08:12:21 crc kubenswrapper[4826]: I0129 08:12:21.921211 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-756bb75757-mdq4x" event={"ID":"f2c964f6-3323-4fe8-949e-066624fc29d7","Type":"ContainerDied","Data":"ec279a703333da719630f2e0c6d148b5384ea39751419c0374a4b84c8c570303"} Jan 29 08:12:21 crc kubenswrapper[4826]: I0129 08:12:21.921251 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-756bb75757-mdq4x" event={"ID":"f2c964f6-3323-4fe8-949e-066624fc29d7","Type":"ContainerDied","Data":"5f9b5b78f6a98edcdc066ae7459151d6e715be1257c33dd4e8b227389b4e0ee2"} Jan 29 08:12:21 crc kubenswrapper[4826]: I0129 08:12:21.921268 4826 scope.go:117] "RemoveContainer" containerID="ec279a703333da719630f2e0c6d148b5384ea39751419c0374a4b84c8c570303" Jan 29 08:12:21 crc kubenswrapper[4826]: I0129 08:12:21.921421 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-756bb75757-mdq4x" Jan 29 08:12:21 crc kubenswrapper[4826]: I0129 08:12:21.993826 4826 scope.go:117] "RemoveContainer" containerID="bb02234ad800074f69ecf93dd410c55eb3fea27843e1d7c41e8ec580b90760e5" Jan 29 08:12:22 crc kubenswrapper[4826]: I0129 08:12:22.007872 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2c964f6-3323-4fe8-949e-066624fc29d7-dns-svc\") pod \"f2c964f6-3323-4fe8-949e-066624fc29d7\" (UID: \"f2c964f6-3323-4fe8-949e-066624fc29d7\") " Jan 29 08:12:22 crc kubenswrapper[4826]: I0129 08:12:22.007939 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2c964f6-3323-4fe8-949e-066624fc29d7-ovsdbserver-nb\") pod \"f2c964f6-3323-4fe8-949e-066624fc29d7\" (UID: \"f2c964f6-3323-4fe8-949e-066624fc29d7\") " Jan 29 08:12:22 crc kubenswrapper[4826]: I0129 08:12:22.007990 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2c964f6-3323-4fe8-949e-066624fc29d7-config\") pod \"f2c964f6-3323-4fe8-949e-066624fc29d7\" (UID: \"f2c964f6-3323-4fe8-949e-066624fc29d7\") " Jan 29 08:12:22 crc kubenswrapper[4826]: I0129 08:12:22.008018 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2c964f6-3323-4fe8-949e-066624fc29d7-ovsdbserver-sb\") pod \"f2c964f6-3323-4fe8-949e-066624fc29d7\" (UID: \"f2c964f6-3323-4fe8-949e-066624fc29d7\") " Jan 29 08:12:22 crc kubenswrapper[4826]: I0129 08:12:22.008321 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n88t9\" (UniqueName: \"kubernetes.io/projected/f2c964f6-3323-4fe8-949e-066624fc29d7-kube-api-access-n88t9\") pod \"f2c964f6-3323-4fe8-949e-066624fc29d7\" (UID: 
\"f2c964f6-3323-4fe8-949e-066624fc29d7\") " Jan 29 08:12:22 crc kubenswrapper[4826]: I0129 08:12:22.016687 4826 scope.go:117] "RemoveContainer" containerID="ec279a703333da719630f2e0c6d148b5384ea39751419c0374a4b84c8c570303" Jan 29 08:12:22 crc kubenswrapper[4826]: I0129 08:12:22.016701 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2c964f6-3323-4fe8-949e-066624fc29d7-kube-api-access-n88t9" (OuterVolumeSpecName: "kube-api-access-n88t9") pod "f2c964f6-3323-4fe8-949e-066624fc29d7" (UID: "f2c964f6-3323-4fe8-949e-066624fc29d7"). InnerVolumeSpecName "kube-api-access-n88t9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:12:22 crc kubenswrapper[4826]: E0129 08:12:22.017219 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec279a703333da719630f2e0c6d148b5384ea39751419c0374a4b84c8c570303\": container with ID starting with ec279a703333da719630f2e0c6d148b5384ea39751419c0374a4b84c8c570303 not found: ID does not exist" containerID="ec279a703333da719630f2e0c6d148b5384ea39751419c0374a4b84c8c570303" Jan 29 08:12:22 crc kubenswrapper[4826]: I0129 08:12:22.017264 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec279a703333da719630f2e0c6d148b5384ea39751419c0374a4b84c8c570303"} err="failed to get container status \"ec279a703333da719630f2e0c6d148b5384ea39751419c0374a4b84c8c570303\": rpc error: code = NotFound desc = could not find container \"ec279a703333da719630f2e0c6d148b5384ea39751419c0374a4b84c8c570303\": container with ID starting with ec279a703333da719630f2e0c6d148b5384ea39751419c0374a4b84c8c570303 not found: ID does not exist" Jan 29 08:12:22 crc kubenswrapper[4826]: I0129 08:12:22.017289 4826 scope.go:117] "RemoveContainer" containerID="bb02234ad800074f69ecf93dd410c55eb3fea27843e1d7c41e8ec580b90760e5" Jan 29 08:12:22 crc kubenswrapper[4826]: E0129 08:12:22.017626 4826 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb02234ad800074f69ecf93dd410c55eb3fea27843e1d7c41e8ec580b90760e5\": container with ID starting with bb02234ad800074f69ecf93dd410c55eb3fea27843e1d7c41e8ec580b90760e5 not found: ID does not exist" containerID="bb02234ad800074f69ecf93dd410c55eb3fea27843e1d7c41e8ec580b90760e5" Jan 29 08:12:22 crc kubenswrapper[4826]: I0129 08:12:22.017658 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb02234ad800074f69ecf93dd410c55eb3fea27843e1d7c41e8ec580b90760e5"} err="failed to get container status \"bb02234ad800074f69ecf93dd410c55eb3fea27843e1d7c41e8ec580b90760e5\": rpc error: code = NotFound desc = could not find container \"bb02234ad800074f69ecf93dd410c55eb3fea27843e1d7c41e8ec580b90760e5\": container with ID starting with bb02234ad800074f69ecf93dd410c55eb3fea27843e1d7c41e8ec580b90760e5 not found: ID does not exist" Jan 29 08:12:22 crc kubenswrapper[4826]: I0129 08:12:22.052368 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2c964f6-3323-4fe8-949e-066624fc29d7-config" (OuterVolumeSpecName: "config") pod "f2c964f6-3323-4fe8-949e-066624fc29d7" (UID: "f2c964f6-3323-4fe8-949e-066624fc29d7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:12:22 crc kubenswrapper[4826]: I0129 08:12:22.053383 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2c964f6-3323-4fe8-949e-066624fc29d7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f2c964f6-3323-4fe8-949e-066624fc29d7" (UID: "f2c964f6-3323-4fe8-949e-066624fc29d7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:12:22 crc kubenswrapper[4826]: I0129 08:12:22.063196 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2c964f6-3323-4fe8-949e-066624fc29d7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f2c964f6-3323-4fe8-949e-066624fc29d7" (UID: "f2c964f6-3323-4fe8-949e-066624fc29d7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:12:22 crc kubenswrapper[4826]: I0129 08:12:22.068781 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2c964f6-3323-4fe8-949e-066624fc29d7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f2c964f6-3323-4fe8-949e-066624fc29d7" (UID: "f2c964f6-3323-4fe8-949e-066624fc29d7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:12:22 crc kubenswrapper[4826]: I0129 08:12:22.110820 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n88t9\" (UniqueName: \"kubernetes.io/projected/f2c964f6-3323-4fe8-949e-066624fc29d7-kube-api-access-n88t9\") on node \"crc\" DevicePath \"\"" Jan 29 08:12:22 crc kubenswrapper[4826]: I0129 08:12:22.110854 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2c964f6-3323-4fe8-949e-066624fc29d7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 08:12:22 crc kubenswrapper[4826]: I0129 08:12:22.110864 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2c964f6-3323-4fe8-949e-066624fc29d7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 08:12:22 crc kubenswrapper[4826]: I0129 08:12:22.110874 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2c964f6-3323-4fe8-949e-066624fc29d7-config\") on node \"crc\" DevicePath \"\"" Jan 
29 08:12:22 crc kubenswrapper[4826]: I0129 08:12:22.110882 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2c964f6-3323-4fe8-949e-066624fc29d7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 29 08:12:22 crc kubenswrapper[4826]: I0129 08:12:22.251575 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-756bb75757-mdq4x"]
Jan 29 08:12:22 crc kubenswrapper[4826]: I0129 08:12:22.260268 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-756bb75757-mdq4x"]
Jan 29 08:12:22 crc kubenswrapper[4826]: I0129 08:12:22.823249 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2c964f6-3323-4fe8-949e-066624fc29d7" path="/var/lib/kubelet/pods/f2c964f6-3323-4fe8-949e-066624fc29d7/volumes"
Jan 29 08:12:27 crc kubenswrapper[4826]: I0129 08:12:27.808706 4826 scope.go:117] "RemoveContainer" containerID="491d2214652be539c5a02abd82d2f7f7b125c3f1a64568b35d69e37bd575365a"
Jan 29 08:12:27 crc kubenswrapper[4826]: E0129 08:12:27.810898 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 08:12:36 crc kubenswrapper[4826]: I0129 08:12:36.281750 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kmlqs"]
Jan 29 08:12:36 crc kubenswrapper[4826]: E0129 08:12:36.282743 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2c964f6-3323-4fe8-949e-066624fc29d7" containerName="dnsmasq-dns"
Jan 29 08:12:36 crc kubenswrapper[4826]: I0129 08:12:36.282760 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2c964f6-3323-4fe8-949e-066624fc29d7" containerName="dnsmasq-dns"
Jan 29 08:12:36 crc kubenswrapper[4826]: E0129 08:12:36.282787 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2c964f6-3323-4fe8-949e-066624fc29d7" containerName="init"
Jan 29 08:12:36 crc kubenswrapper[4826]: I0129 08:12:36.282796 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2c964f6-3323-4fe8-949e-066624fc29d7" containerName="init"
Jan 29 08:12:36 crc kubenswrapper[4826]: I0129 08:12:36.283078 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2c964f6-3323-4fe8-949e-066624fc29d7" containerName="dnsmasq-dns"
Jan 29 08:12:36 crc kubenswrapper[4826]: I0129 08:12:36.285185 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kmlqs"
Jan 29 08:12:36 crc kubenswrapper[4826]: I0129 08:12:36.303338 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptz8x\" (UniqueName: \"kubernetes.io/projected/b1f4100d-366d-4a0f-95e7-92bf4fefea66-kube-api-access-ptz8x\") pod \"certified-operators-kmlqs\" (UID: \"b1f4100d-366d-4a0f-95e7-92bf4fefea66\") " pod="openshift-marketplace/certified-operators-kmlqs"
Jan 29 08:12:36 crc kubenswrapper[4826]: I0129 08:12:36.303413 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1f4100d-366d-4a0f-95e7-92bf4fefea66-utilities\") pod \"certified-operators-kmlqs\" (UID: \"b1f4100d-366d-4a0f-95e7-92bf4fefea66\") " pod="openshift-marketplace/certified-operators-kmlqs"
Jan 29 08:12:36 crc kubenswrapper[4826]: I0129 08:12:36.303551 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1f4100d-366d-4a0f-95e7-92bf4fefea66-catalog-content\") pod \"certified-operators-kmlqs\" (UID: \"b1f4100d-366d-4a0f-95e7-92bf4fefea66\") " pod="openshift-marketplace/certified-operators-kmlqs"
Jan 29 08:12:36 crc kubenswrapper[4826]: I0129 08:12:36.304503 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kmlqs"]
Jan 29 08:12:36 crc kubenswrapper[4826]: I0129 08:12:36.406207 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1f4100d-366d-4a0f-95e7-92bf4fefea66-utilities\") pod \"certified-operators-kmlqs\" (UID: \"b1f4100d-366d-4a0f-95e7-92bf4fefea66\") " pod="openshift-marketplace/certified-operators-kmlqs"
Jan 29 08:12:36 crc kubenswrapper[4826]: I0129 08:12:36.406292 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1f4100d-366d-4a0f-95e7-92bf4fefea66-catalog-content\") pod \"certified-operators-kmlqs\" (UID: \"b1f4100d-366d-4a0f-95e7-92bf4fefea66\") " pod="openshift-marketplace/certified-operators-kmlqs"
Jan 29 08:12:36 crc kubenswrapper[4826]: I0129 08:12:36.406381 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptz8x\" (UniqueName: \"kubernetes.io/projected/b1f4100d-366d-4a0f-95e7-92bf4fefea66-kube-api-access-ptz8x\") pod \"certified-operators-kmlqs\" (UID: \"b1f4100d-366d-4a0f-95e7-92bf4fefea66\") " pod="openshift-marketplace/certified-operators-kmlqs"
Jan 29 08:12:36 crc kubenswrapper[4826]: I0129 08:12:36.406778 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1f4100d-366d-4a0f-95e7-92bf4fefea66-utilities\") pod \"certified-operators-kmlqs\" (UID: \"b1f4100d-366d-4a0f-95e7-92bf4fefea66\") " pod="openshift-marketplace/certified-operators-kmlqs"
Jan 29 08:12:36 crc kubenswrapper[4826]: I0129 08:12:36.406947 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1f4100d-366d-4a0f-95e7-92bf4fefea66-catalog-content\") pod \"certified-operators-kmlqs\" (UID: \"b1f4100d-366d-4a0f-95e7-92bf4fefea66\") " pod="openshift-marketplace/certified-operators-kmlqs"
Jan 29 08:12:36 crc kubenswrapper[4826]: I0129 08:12:36.426055 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptz8x\" (UniqueName: \"kubernetes.io/projected/b1f4100d-366d-4a0f-95e7-92bf4fefea66-kube-api-access-ptz8x\") pod \"certified-operators-kmlqs\" (UID: \"b1f4100d-366d-4a0f-95e7-92bf4fefea66\") " pod="openshift-marketplace/certified-operators-kmlqs"
Jan 29 08:12:36 crc kubenswrapper[4826]: I0129 08:12:36.608323 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kmlqs"
Jan 29 08:12:37 crc kubenswrapper[4826]: I0129 08:12:37.149443 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kmlqs"]
Jan 29 08:12:37 crc kubenswrapper[4826]: W0129 08:12:37.156785 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1f4100d_366d_4a0f_95e7_92bf4fefea66.slice/crio-f47ffd5aadb0aa0292212334be2f58aa20fa63ec766400119b3b92ab108febdb WatchSource:0}: Error finding container f47ffd5aadb0aa0292212334be2f58aa20fa63ec766400119b3b92ab108febdb: Status 404 returned error can't find the container with id f47ffd5aadb0aa0292212334be2f58aa20fa63ec766400119b3b92ab108febdb
Jan 29 08:12:38 crc kubenswrapper[4826]: I0129 08:12:38.103621 4826 generic.go:334] "Generic (PLEG): container finished" podID="b1f4100d-366d-4a0f-95e7-92bf4fefea66" containerID="767da31f4a974b2dc037002b2d10462f19735371445bacb89b4e47401c3b1239" exitCode=0
Jan 29 08:12:38 crc kubenswrapper[4826]: I0129 08:12:38.103876 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kmlqs" event={"ID":"b1f4100d-366d-4a0f-95e7-92bf4fefea66","Type":"ContainerDied","Data":"767da31f4a974b2dc037002b2d10462f19735371445bacb89b4e47401c3b1239"}
Jan 29 08:12:38 crc kubenswrapper[4826]: I0129 08:12:38.103990 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kmlqs" event={"ID":"b1f4100d-366d-4a0f-95e7-92bf4fefea66","Type":"ContainerStarted","Data":"f47ffd5aadb0aa0292212334be2f58aa20fa63ec766400119b3b92ab108febdb"}
Jan 29 08:12:40 crc kubenswrapper[4826]: I0129 08:12:40.126547 4826 generic.go:334] "Generic (PLEG): container finished" podID="b1f4100d-366d-4a0f-95e7-92bf4fefea66" containerID="ef1158fc493bc65d7bf41ee79903991dedbb6a47b544cbc672228695c6f3e289" exitCode=0
Jan 29 08:12:40 crc kubenswrapper[4826]: I0129 08:12:40.126691 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kmlqs" event={"ID":"b1f4100d-366d-4a0f-95e7-92bf4fefea66","Type":"ContainerDied","Data":"ef1158fc493bc65d7bf41ee79903991dedbb6a47b544cbc672228695c6f3e289"}
Jan 29 08:12:41 crc kubenswrapper[4826]: I0129 08:12:41.135599 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kmlqs" event={"ID":"b1f4100d-366d-4a0f-95e7-92bf4fefea66","Type":"ContainerStarted","Data":"d936677df47266db379a11351c2f32c0d1ac78f2fd9b8f43f39aa9d613b15c3c"}
Jan 29 08:12:41 crc kubenswrapper[4826]: I0129 08:12:41.157335 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kmlqs" podStartSLOduration=2.65211077 podStartE2EDuration="5.157315979s" podCreationTimestamp="2026-01-29 08:12:36 +0000 UTC" firstStartedPulling="2026-01-29 08:12:38.107978733 +0000 UTC m=+5341.969771822" lastFinishedPulling="2026-01-29 08:12:40.613183962 +0000 UTC m=+5344.474977031" observedRunningTime="2026-01-29 08:12:41.154385842 +0000 UTC m=+5345.016178931" watchObservedRunningTime="2026-01-29 08:12:41.157315979 +0000 UTC m=+5345.019109048"
Jan 29 08:12:41 crc kubenswrapper[4826]: I0129 08:12:41.366538 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6f446dc6b-46bwh"
Jan 29 08:12:42 crc kubenswrapper[4826]: I0129 08:12:42.820290 4826 scope.go:117] "RemoveContainer" containerID="491d2214652be539c5a02abd82d2f7f7b125c3f1a64568b35d69e37bd575365a"
Jan 29 08:12:42 crc kubenswrapper[4826]: E0129 08:12:42.821425 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 08:12:44 crc kubenswrapper[4826]: I0129 08:12:44.442376 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5f8bfccb8f-fkvqn"
Jan 29 08:12:44 crc kubenswrapper[4826]: I0129 08:12:44.510471 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6f446dc6b-46bwh"]
Jan 29 08:12:44 crc kubenswrapper[4826]: I0129 08:12:44.510819 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6f446dc6b-46bwh" podUID="db7ef629-be06-4915-bf03-b8c5106670a1" containerName="neutron-api" containerID="cri-o://5c60d1d69fd5d7c8fc28c2ab64cfe0b5e067de2241af32abd03eb7fd3de15ca2" gracePeriod=30
Jan 29 08:12:44 crc kubenswrapper[4826]: I0129 08:12:44.510884 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6f446dc6b-46bwh" podUID="db7ef629-be06-4915-bf03-b8c5106670a1" containerName="neutron-httpd" containerID="cri-o://85ff00eea8a24d9b4087cdbea2f296645a54f8226ebfe9c87ade60efc4c77194" gracePeriod=30
Jan 29 08:12:45 crc kubenswrapper[4826]: I0129 08:12:45.172351 4826 generic.go:334] "Generic (PLEG): container finished" podID="db7ef629-be06-4915-bf03-b8c5106670a1" containerID="85ff00eea8a24d9b4087cdbea2f296645a54f8226ebfe9c87ade60efc4c77194" exitCode=0
Jan 29 08:12:45 crc kubenswrapper[4826]: I0129 08:12:45.172388 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f446dc6b-46bwh" event={"ID":"db7ef629-be06-4915-bf03-b8c5106670a1","Type":"ContainerDied","Data":"85ff00eea8a24d9b4087cdbea2f296645a54f8226ebfe9c87ade60efc4c77194"}
Jan 29 08:12:46 crc kubenswrapper[4826]: I0129 08:12:46.609322 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kmlqs"
Jan 29 08:12:46 crc kubenswrapper[4826]: I0129 08:12:46.610690 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kmlqs"
Jan 29 08:12:46 crc kubenswrapper[4826]: I0129 08:12:46.666770 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kmlqs"
Jan 29 08:12:47 crc kubenswrapper[4826]: I0129 08:12:47.258559 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kmlqs"
Jan 29 08:12:47 crc kubenswrapper[4826]: I0129 08:12:47.315710 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kmlqs"]
Jan 29 08:12:47 crc kubenswrapper[4826]: I0129 08:12:47.824133 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6f446dc6b-46bwh"
Jan 29 08:12:47 crc kubenswrapper[4826]: I0129 08:12:47.937326 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/db7ef629-be06-4915-bf03-b8c5106670a1-config\") pod \"db7ef629-be06-4915-bf03-b8c5106670a1\" (UID: \"db7ef629-be06-4915-bf03-b8c5106670a1\") "
Jan 29 08:12:47 crc kubenswrapper[4826]: I0129 08:12:47.937429 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/db7ef629-be06-4915-bf03-b8c5106670a1-ovndb-tls-certs\") pod \"db7ef629-be06-4915-bf03-b8c5106670a1\" (UID: \"db7ef629-be06-4915-bf03-b8c5106670a1\") "
Jan 29 08:12:47 crc kubenswrapper[4826]: I0129 08:12:47.937449 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db7ef629-be06-4915-bf03-b8c5106670a1-combined-ca-bundle\") pod \"db7ef629-be06-4915-bf03-b8c5106670a1\" (UID: \"db7ef629-be06-4915-bf03-b8c5106670a1\") "
Jan 29 08:12:47 crc kubenswrapper[4826]: I0129 08:12:47.937587 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/db7ef629-be06-4915-bf03-b8c5106670a1-httpd-config\") pod \"db7ef629-be06-4915-bf03-b8c5106670a1\" (UID: \"db7ef629-be06-4915-bf03-b8c5106670a1\") "
Jan 29 08:12:47 crc kubenswrapper[4826]: I0129 08:12:47.937607 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvxcg\" (UniqueName: \"kubernetes.io/projected/db7ef629-be06-4915-bf03-b8c5106670a1-kube-api-access-cvxcg\") pod \"db7ef629-be06-4915-bf03-b8c5106670a1\" (UID: \"db7ef629-be06-4915-bf03-b8c5106670a1\") "
Jan 29 08:12:47 crc kubenswrapper[4826]: I0129 08:12:47.942044 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db7ef629-be06-4915-bf03-b8c5106670a1-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "db7ef629-be06-4915-bf03-b8c5106670a1" (UID: "db7ef629-be06-4915-bf03-b8c5106670a1"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:12:47 crc kubenswrapper[4826]: I0129 08:12:47.942204 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db7ef629-be06-4915-bf03-b8c5106670a1-kube-api-access-cvxcg" (OuterVolumeSpecName: "kube-api-access-cvxcg") pod "db7ef629-be06-4915-bf03-b8c5106670a1" (UID: "db7ef629-be06-4915-bf03-b8c5106670a1"). InnerVolumeSpecName "kube-api-access-cvxcg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:12:47 crc kubenswrapper[4826]: I0129 08:12:47.976131 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db7ef629-be06-4915-bf03-b8c5106670a1-config" (OuterVolumeSpecName: "config") pod "db7ef629-be06-4915-bf03-b8c5106670a1" (UID: "db7ef629-be06-4915-bf03-b8c5106670a1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:12:47 crc kubenswrapper[4826]: I0129 08:12:47.987017 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db7ef629-be06-4915-bf03-b8c5106670a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db7ef629-be06-4915-bf03-b8c5106670a1" (UID: "db7ef629-be06-4915-bf03-b8c5106670a1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:12:48 crc kubenswrapper[4826]: I0129 08:12:48.000441 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db7ef629-be06-4915-bf03-b8c5106670a1-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "db7ef629-be06-4915-bf03-b8c5106670a1" (UID: "db7ef629-be06-4915-bf03-b8c5106670a1"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:12:48 crc kubenswrapper[4826]: I0129 08:12:48.040734 4826 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/db7ef629-be06-4915-bf03-b8c5106670a1-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 29 08:12:48 crc kubenswrapper[4826]: I0129 08:12:48.040785 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvxcg\" (UniqueName: \"kubernetes.io/projected/db7ef629-be06-4915-bf03-b8c5106670a1-kube-api-access-cvxcg\") on node \"crc\" DevicePath \"\""
Jan 29 08:12:48 crc kubenswrapper[4826]: I0129 08:12:48.040800 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/db7ef629-be06-4915-bf03-b8c5106670a1-config\") on node \"crc\" DevicePath \"\""
Jan 29 08:12:48 crc kubenswrapper[4826]: I0129 08:12:48.040808 4826 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/db7ef629-be06-4915-bf03-b8c5106670a1-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 08:12:48 crc kubenswrapper[4826]: I0129 08:12:48.040817 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db7ef629-be06-4915-bf03-b8c5106670a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 08:12:48 crc kubenswrapper[4826]: I0129 08:12:48.058707 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-v488d"]
Jan 29 08:12:48 crc kubenswrapper[4826]: I0129 08:12:48.065605 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-v488d"]
Jan 29 08:12:48 crc kubenswrapper[4826]: I0129 08:12:48.202401 4826 generic.go:334] "Generic (PLEG): container finished" podID="db7ef629-be06-4915-bf03-b8c5106670a1" containerID="5c60d1d69fd5d7c8fc28c2ab64cfe0b5e067de2241af32abd03eb7fd3de15ca2" exitCode=0
Jan 29 08:12:48 crc kubenswrapper[4826]: I0129 08:12:48.202556 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f446dc6b-46bwh" event={"ID":"db7ef629-be06-4915-bf03-b8c5106670a1","Type":"ContainerDied","Data":"5c60d1d69fd5d7c8fc28c2ab64cfe0b5e067de2241af32abd03eb7fd3de15ca2"}
Jan 29 08:12:48 crc kubenswrapper[4826]: I0129 08:12:48.202725 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f446dc6b-46bwh" event={"ID":"db7ef629-be06-4915-bf03-b8c5106670a1","Type":"ContainerDied","Data":"c051aedcbddafc00d48799b8f19db9b9d44e2a06553553c819c2f57b9552b335"}
Jan 29 08:12:48 crc kubenswrapper[4826]: I0129 08:12:48.202750 4826 scope.go:117] "RemoveContainer" containerID="85ff00eea8a24d9b4087cdbea2f296645a54f8226ebfe9c87ade60efc4c77194"
Jan 29 08:12:48 crc kubenswrapper[4826]: I0129 08:12:48.202640 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6f446dc6b-46bwh"
Jan 29 08:12:48 crc kubenswrapper[4826]: I0129 08:12:48.226459 4826 scope.go:117] "RemoveContainer" containerID="5c60d1d69fd5d7c8fc28c2ab64cfe0b5e067de2241af32abd03eb7fd3de15ca2"
Jan 29 08:12:48 crc kubenswrapper[4826]: I0129 08:12:48.238040 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6f446dc6b-46bwh"]
Jan 29 08:12:48 crc kubenswrapper[4826]: I0129 08:12:48.246417 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6f446dc6b-46bwh"]
Jan 29 08:12:48 crc kubenswrapper[4826]: I0129 08:12:48.257860 4826 scope.go:117] "RemoveContainer" containerID="85ff00eea8a24d9b4087cdbea2f296645a54f8226ebfe9c87ade60efc4c77194"
Jan 29 08:12:48 crc kubenswrapper[4826]: E0129 08:12:48.258281 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85ff00eea8a24d9b4087cdbea2f296645a54f8226ebfe9c87ade60efc4c77194\": container with ID starting with 85ff00eea8a24d9b4087cdbea2f296645a54f8226ebfe9c87ade60efc4c77194 not found: ID does not exist" containerID="85ff00eea8a24d9b4087cdbea2f296645a54f8226ebfe9c87ade60efc4c77194"
Jan 29 08:12:48 crc kubenswrapper[4826]: I0129 08:12:48.258328 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85ff00eea8a24d9b4087cdbea2f296645a54f8226ebfe9c87ade60efc4c77194"} err="failed to get container status \"85ff00eea8a24d9b4087cdbea2f296645a54f8226ebfe9c87ade60efc4c77194\": rpc error: code = NotFound desc = could not find container \"85ff00eea8a24d9b4087cdbea2f296645a54f8226ebfe9c87ade60efc4c77194\": container with ID starting with 85ff00eea8a24d9b4087cdbea2f296645a54f8226ebfe9c87ade60efc4c77194 not found: ID does not exist"
Jan 29 08:12:48 crc kubenswrapper[4826]: I0129 08:12:48.258352 4826 scope.go:117] "RemoveContainer" containerID="5c60d1d69fd5d7c8fc28c2ab64cfe0b5e067de2241af32abd03eb7fd3de15ca2"
Jan 29 08:12:48 crc kubenswrapper[4826]: E0129 08:12:48.258620 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c60d1d69fd5d7c8fc28c2ab64cfe0b5e067de2241af32abd03eb7fd3de15ca2\": container with ID starting with 5c60d1d69fd5d7c8fc28c2ab64cfe0b5e067de2241af32abd03eb7fd3de15ca2 not found: ID does not exist" containerID="5c60d1d69fd5d7c8fc28c2ab64cfe0b5e067de2241af32abd03eb7fd3de15ca2"
Jan 29 08:12:48 crc kubenswrapper[4826]: I0129 08:12:48.258664 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c60d1d69fd5d7c8fc28c2ab64cfe0b5e067de2241af32abd03eb7fd3de15ca2"} err="failed to get container status \"5c60d1d69fd5d7c8fc28c2ab64cfe0b5e067de2241af32abd03eb7fd3de15ca2\": rpc error: code = NotFound desc = could not find container \"5c60d1d69fd5d7c8fc28c2ab64cfe0b5e067de2241af32abd03eb7fd3de15ca2\": container with ID starting with 5c60d1d69fd5d7c8fc28c2ab64cfe0b5e067de2241af32abd03eb7fd3de15ca2 not found: ID does not exist"
Jan 29 08:12:48 crc kubenswrapper[4826]: I0129 08:12:48.821171 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b11520f-13c9-4211-8543-5fccafb422ad" path="/var/lib/kubelet/pods/9b11520f-13c9-4211-8543-5fccafb422ad/volumes"
Jan 29 08:12:48 crc kubenswrapper[4826]: I0129 08:12:48.821911 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db7ef629-be06-4915-bf03-b8c5106670a1" path="/var/lib/kubelet/pods/db7ef629-be06-4915-bf03-b8c5106670a1/volumes"
Jan 29 08:12:49 crc kubenswrapper[4826]: I0129 08:12:49.217752 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kmlqs" podUID="b1f4100d-366d-4a0f-95e7-92bf4fefea66" containerName="registry-server" containerID="cri-o://d936677df47266db379a11351c2f32c0d1ac78f2fd9b8f43f39aa9d613b15c3c" gracePeriod=2
Jan 29 08:12:49 crc kubenswrapper[4826]: I0129 08:12:49.767366 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kmlqs"
Jan 29 08:12:49 crc kubenswrapper[4826]: I0129 08:12:49.881240 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptz8x\" (UniqueName: \"kubernetes.io/projected/b1f4100d-366d-4a0f-95e7-92bf4fefea66-kube-api-access-ptz8x\") pod \"b1f4100d-366d-4a0f-95e7-92bf4fefea66\" (UID: \"b1f4100d-366d-4a0f-95e7-92bf4fefea66\") "
Jan 29 08:12:49 crc kubenswrapper[4826]: I0129 08:12:49.881438 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1f4100d-366d-4a0f-95e7-92bf4fefea66-catalog-content\") pod \"b1f4100d-366d-4a0f-95e7-92bf4fefea66\" (UID: \"b1f4100d-366d-4a0f-95e7-92bf4fefea66\") "
Jan 29 08:12:49 crc kubenswrapper[4826]: I0129 08:12:49.881494 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1f4100d-366d-4a0f-95e7-92bf4fefea66-utilities\") pod \"b1f4100d-366d-4a0f-95e7-92bf4fefea66\" (UID: \"b1f4100d-366d-4a0f-95e7-92bf4fefea66\") "
Jan 29 08:12:49 crc kubenswrapper[4826]: I0129 08:12:49.882789 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1f4100d-366d-4a0f-95e7-92bf4fefea66-utilities" (OuterVolumeSpecName: "utilities") pod "b1f4100d-366d-4a0f-95e7-92bf4fefea66" (UID: "b1f4100d-366d-4a0f-95e7-92bf4fefea66"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 08:12:49 crc kubenswrapper[4826]: I0129 08:12:49.888975 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1f4100d-366d-4a0f-95e7-92bf4fefea66-kube-api-access-ptz8x" (OuterVolumeSpecName: "kube-api-access-ptz8x") pod "b1f4100d-366d-4a0f-95e7-92bf4fefea66" (UID: "b1f4100d-366d-4a0f-95e7-92bf4fefea66"). InnerVolumeSpecName "kube-api-access-ptz8x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:12:49 crc kubenswrapper[4826]: I0129 08:12:49.941344 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1f4100d-366d-4a0f-95e7-92bf4fefea66-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1f4100d-366d-4a0f-95e7-92bf4fefea66" (UID: "b1f4100d-366d-4a0f-95e7-92bf4fefea66"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 08:12:49 crc kubenswrapper[4826]: I0129 08:12:49.983913 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1f4100d-366d-4a0f-95e7-92bf4fefea66-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 08:12:49 crc kubenswrapper[4826]: I0129 08:12:49.984224 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1f4100d-366d-4a0f-95e7-92bf4fefea66-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 08:12:49 crc kubenswrapper[4826]: I0129 08:12:49.984503 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptz8x\" (UniqueName: \"kubernetes.io/projected/b1f4100d-366d-4a0f-95e7-92bf4fefea66-kube-api-access-ptz8x\") on node \"crc\" DevicePath \"\""
Jan 29 08:12:50 crc kubenswrapper[4826]: I0129 08:12:50.234800 4826 generic.go:334] "Generic (PLEG): container finished" podID="b1f4100d-366d-4a0f-95e7-92bf4fefea66" containerID="d936677df47266db379a11351c2f32c0d1ac78f2fd9b8f43f39aa9d613b15c3c" exitCode=0
Jan 29 08:12:50 crc kubenswrapper[4826]: I0129 08:12:50.234892 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kmlqs"
Jan 29 08:12:50 crc kubenswrapper[4826]: I0129 08:12:50.234859 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kmlqs" event={"ID":"b1f4100d-366d-4a0f-95e7-92bf4fefea66","Type":"ContainerDied","Data":"d936677df47266db379a11351c2f32c0d1ac78f2fd9b8f43f39aa9d613b15c3c"}
Jan 29 08:12:50 crc kubenswrapper[4826]: I0129 08:12:50.235054 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kmlqs" event={"ID":"b1f4100d-366d-4a0f-95e7-92bf4fefea66","Type":"ContainerDied","Data":"f47ffd5aadb0aa0292212334be2f58aa20fa63ec766400119b3b92ab108febdb"}
Jan 29 08:12:50 crc kubenswrapper[4826]: I0129 08:12:50.235085 4826 scope.go:117] "RemoveContainer" containerID="d936677df47266db379a11351c2f32c0d1ac78f2fd9b8f43f39aa9d613b15c3c"
Jan 29 08:12:50 crc kubenswrapper[4826]: I0129 08:12:50.267211 4826 scope.go:117] "RemoveContainer" containerID="ef1158fc493bc65d7bf41ee79903991dedbb6a47b544cbc672228695c6f3e289"
Jan 29 08:12:50 crc kubenswrapper[4826]: I0129 08:12:50.298941 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kmlqs"]
Jan 29 08:12:50 crc kubenswrapper[4826]: I0129 08:12:50.310090 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kmlqs"]
Jan 29 08:12:50 crc kubenswrapper[4826]: I0129 08:12:50.315743 4826 scope.go:117] "RemoveContainer" containerID="767da31f4a974b2dc037002b2d10462f19735371445bacb89b4e47401c3b1239"
Jan 29 08:12:50 crc kubenswrapper[4826]: I0129 08:12:50.366494 4826 scope.go:117] "RemoveContainer" containerID="d936677df47266db379a11351c2f32c0d1ac78f2fd9b8f43f39aa9d613b15c3c"
Jan 29 08:12:50 crc kubenswrapper[4826]: E0129 08:12:50.367125 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d936677df47266db379a11351c2f32c0d1ac78f2fd9b8f43f39aa9d613b15c3c\": container with ID starting with d936677df47266db379a11351c2f32c0d1ac78f2fd9b8f43f39aa9d613b15c3c not found: ID does not exist" containerID="d936677df47266db379a11351c2f32c0d1ac78f2fd9b8f43f39aa9d613b15c3c"
Jan 29 08:12:50 crc kubenswrapper[4826]: I0129 08:12:50.367199 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d936677df47266db379a11351c2f32c0d1ac78f2fd9b8f43f39aa9d613b15c3c"} err="failed to get container status \"d936677df47266db379a11351c2f32c0d1ac78f2fd9b8f43f39aa9d613b15c3c\": rpc error: code = NotFound desc = could not find container \"d936677df47266db379a11351c2f32c0d1ac78f2fd9b8f43f39aa9d613b15c3c\": container with ID starting with d936677df47266db379a11351c2f32c0d1ac78f2fd9b8f43f39aa9d613b15c3c not found: ID does not exist"
Jan 29 08:12:50 crc kubenswrapper[4826]: I0129 08:12:50.367248 4826 scope.go:117] "RemoveContainer" containerID="ef1158fc493bc65d7bf41ee79903991dedbb6a47b544cbc672228695c6f3e289"
Jan 29 08:12:50 crc kubenswrapper[4826]: E0129 08:12:50.373168 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef1158fc493bc65d7bf41ee79903991dedbb6a47b544cbc672228695c6f3e289\": container with ID starting with ef1158fc493bc65d7bf41ee79903991dedbb6a47b544cbc672228695c6f3e289 not found: ID does not exist" containerID="ef1158fc493bc65d7bf41ee79903991dedbb6a47b544cbc672228695c6f3e289"
Jan 29 08:12:50 crc kubenswrapper[4826]: I0129 08:12:50.373244 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef1158fc493bc65d7bf41ee79903991dedbb6a47b544cbc672228695c6f3e289"} err="failed to get container status \"ef1158fc493bc65d7bf41ee79903991dedbb6a47b544cbc672228695c6f3e289\": rpc error: code = NotFound desc = could not find container \"ef1158fc493bc65d7bf41ee79903991dedbb6a47b544cbc672228695c6f3e289\": container with ID starting with ef1158fc493bc65d7bf41ee79903991dedbb6a47b544cbc672228695c6f3e289 not found: ID does not exist"
Jan 29 08:12:50 crc kubenswrapper[4826]: I0129 08:12:50.373291 4826 scope.go:117] "RemoveContainer" containerID="767da31f4a974b2dc037002b2d10462f19735371445bacb89b4e47401c3b1239"
Jan 29 08:12:50 crc kubenswrapper[4826]: E0129 08:12:50.373750 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"767da31f4a974b2dc037002b2d10462f19735371445bacb89b4e47401c3b1239\": container with ID starting with 767da31f4a974b2dc037002b2d10462f19735371445bacb89b4e47401c3b1239 not found: ID does not exist" containerID="767da31f4a974b2dc037002b2d10462f19735371445bacb89b4e47401c3b1239"
Jan 29 08:12:50 crc kubenswrapper[4826]: I0129 08:12:50.373812 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"767da31f4a974b2dc037002b2d10462f19735371445bacb89b4e47401c3b1239"} err="failed to get container status \"767da31f4a974b2dc037002b2d10462f19735371445bacb89b4e47401c3b1239\": rpc error: code = NotFound desc = could not find container \"767da31f4a974b2dc037002b2d10462f19735371445bacb89b4e47401c3b1239\": container with ID starting with 767da31f4a974b2dc037002b2d10462f19735371445bacb89b4e47401c3b1239 not found: ID does not exist"
Jan 29 08:12:50 crc kubenswrapper[4826]: I0129 08:12:50.878542 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1f4100d-366d-4a0f-95e7-92bf4fefea66" path="/var/lib/kubelet/pods/b1f4100d-366d-4a0f-95e7-92bf4fefea66/volumes"
Jan 29 08:12:56 crc kubenswrapper[4826]: I0129 08:12:56.817809 4826 scope.go:117] "RemoveContainer" containerID="491d2214652be539c5a02abd82d2f7f7b125c3f1a64568b35d69e37bd575365a"
Jan 29 08:12:56 crc kubenswrapper[4826]: E0129 08:12:56.818787 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.368384 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-jpvpj"]
Jan 29 08:13:07 crc kubenswrapper[4826]: E0129 08:13:07.369166 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db7ef629-be06-4915-bf03-b8c5106670a1" containerName="neutron-api"
Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.369182 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="db7ef629-be06-4915-bf03-b8c5106670a1" containerName="neutron-api"
Jan 29 08:13:07 crc kubenswrapper[4826]: E0129 08:13:07.369208 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1f4100d-366d-4a0f-95e7-92bf4fefea66" containerName="extract-utilities"
Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.369216 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1f4100d-366d-4a0f-95e7-92bf4fefea66" containerName="extract-utilities"
Jan 29 08:13:07 crc kubenswrapper[4826]: E0129 08:13:07.369224 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1f4100d-366d-4a0f-95e7-92bf4fefea66" containerName="registry-server"
Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.369232 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1f4100d-366d-4a0f-95e7-92bf4fefea66" containerName="registry-server"
Jan 29 08:13:07 crc kubenswrapper[4826]: E0129 08:13:07.369252 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db7ef629-be06-4915-bf03-b8c5106670a1" containerName="neutron-httpd"
Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.369259 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="db7ef629-be06-4915-bf03-b8c5106670a1" containerName="neutron-httpd"
Jan 29 08:13:07 crc kubenswrapper[4826]: E0129 08:13:07.369278 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1f4100d-366d-4a0f-95e7-92bf4fefea66" containerName="extract-content"
Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.369286 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1f4100d-366d-4a0f-95e7-92bf4fefea66" containerName="extract-content"
Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.369491 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="db7ef629-be06-4915-bf03-b8c5106670a1" containerName="neutron-httpd"
Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.369504 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1f4100d-366d-4a0f-95e7-92bf4fefea66" containerName="registry-server"
Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.369513 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="db7ef629-be06-4915-bf03-b8c5106670a1" containerName="neutron-api"
Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.370181 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-jpvpj"
Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.377887 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.378160 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.378340 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.378495 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-4vwk7"
Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.379126 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.394321 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-jpvpj"]
Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.456518 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57cdc85b77-pp9cd"]
Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.457625 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nz75\" (UniqueName: \"kubernetes.io/projected/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-kube-api-access-4nz75\") pod \"swift-ring-rebalance-jpvpj\" (UID: \"e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae\") " pod="openstack/swift-ring-rebalance-jpvpj"
Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.457685 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-scripts\") pod \"swift-ring-rebalance-jpvpj\" (UID: \"e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae\") " pod="openstack/swift-ring-rebalance-jpvpj"
Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.457712 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-combined-ca-bundle\") pod \"swift-ring-rebalance-jpvpj\" (UID: \"e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae\") " pod="openstack/swift-ring-rebalance-jpvpj"
Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.457749 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-ring-data-devices\") pod \"swift-ring-rebalance-jpvpj\" (UID: \"e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae\") " pod="openstack/swift-ring-rebalance-jpvpj"
Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.457806 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-swiftconf\") pod \"swift-ring-rebalance-jpvpj\" (UID: \"e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae\") " pod="openstack/swift-ring-rebalance-jpvpj"
Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.457838 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57cdc85b77-pp9cd" Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.457839 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-dispersionconf\") pod \"swift-ring-rebalance-jpvpj\" (UID: \"e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae\") " pod="openstack/swift-ring-rebalance-jpvpj" Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.457933 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-etc-swift\") pod \"swift-ring-rebalance-jpvpj\" (UID: \"e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae\") " pod="openstack/swift-ring-rebalance-jpvpj" Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.465529 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57cdc85b77-pp9cd"] Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.559446 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nz75\" (UniqueName: \"kubernetes.io/projected/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-kube-api-access-4nz75\") pod \"swift-ring-rebalance-jpvpj\" (UID: \"e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae\") " pod="openstack/swift-ring-rebalance-jpvpj" Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.559496 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acb1638f-8ad4-476a-a018-261e585bd72a-config\") pod \"dnsmasq-dns-57cdc85b77-pp9cd\" (UID: \"acb1638f-8ad4-476a-a018-261e585bd72a\") " pod="openstack/dnsmasq-dns-57cdc85b77-pp9cd" Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.559514 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/acb1638f-8ad4-476a-a018-261e585bd72a-ovsdbserver-sb\") pod \"dnsmasq-dns-57cdc85b77-pp9cd\" (UID: \"acb1638f-8ad4-476a-a018-261e585bd72a\") " pod="openstack/dnsmasq-dns-57cdc85b77-pp9cd" Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.559538 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/acb1638f-8ad4-476a-a018-261e585bd72a-ovsdbserver-nb\") pod \"dnsmasq-dns-57cdc85b77-pp9cd\" (UID: \"acb1638f-8ad4-476a-a018-261e585bd72a\") " pod="openstack/dnsmasq-dns-57cdc85b77-pp9cd" Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.559558 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-scripts\") pod \"swift-ring-rebalance-jpvpj\" (UID: \"e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae\") " pod="openstack/swift-ring-rebalance-jpvpj" Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.559573 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-combined-ca-bundle\") pod \"swift-ring-rebalance-jpvpj\" (UID: \"e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae\") " pod="openstack/swift-ring-rebalance-jpvpj" Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.559601 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-ring-data-devices\") pod \"swift-ring-rebalance-jpvpj\" (UID: \"e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae\") " pod="openstack/swift-ring-rebalance-jpvpj" Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.559618 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkc6m\" 
(UniqueName: \"kubernetes.io/projected/acb1638f-8ad4-476a-a018-261e585bd72a-kube-api-access-wkc6m\") pod \"dnsmasq-dns-57cdc85b77-pp9cd\" (UID: \"acb1638f-8ad4-476a-a018-261e585bd72a\") " pod="openstack/dnsmasq-dns-57cdc85b77-pp9cd" Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.559662 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/acb1638f-8ad4-476a-a018-261e585bd72a-dns-svc\") pod \"dnsmasq-dns-57cdc85b77-pp9cd\" (UID: \"acb1638f-8ad4-476a-a018-261e585bd72a\") " pod="openstack/dnsmasq-dns-57cdc85b77-pp9cd" Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.559678 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-swiftconf\") pod \"swift-ring-rebalance-jpvpj\" (UID: \"e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae\") " pod="openstack/swift-ring-rebalance-jpvpj" Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.559700 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-dispersionconf\") pod \"swift-ring-rebalance-jpvpj\" (UID: \"e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae\") " pod="openstack/swift-ring-rebalance-jpvpj" Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.559741 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-etc-swift\") pod \"swift-ring-rebalance-jpvpj\" (UID: \"e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae\") " pod="openstack/swift-ring-rebalance-jpvpj" Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.560816 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-etc-swift\") pod 
\"swift-ring-rebalance-jpvpj\" (UID: \"e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae\") " pod="openstack/swift-ring-rebalance-jpvpj" Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.563701 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-scripts\") pod \"swift-ring-rebalance-jpvpj\" (UID: \"e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae\") " pod="openstack/swift-ring-rebalance-jpvpj" Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.563754 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-ring-data-devices\") pod \"swift-ring-rebalance-jpvpj\" (UID: \"e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae\") " pod="openstack/swift-ring-rebalance-jpvpj" Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.565444 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-swiftconf\") pod \"swift-ring-rebalance-jpvpj\" (UID: \"e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae\") " pod="openstack/swift-ring-rebalance-jpvpj" Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.569135 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-combined-ca-bundle\") pod \"swift-ring-rebalance-jpvpj\" (UID: \"e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae\") " pod="openstack/swift-ring-rebalance-jpvpj" Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.590122 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nz75\" (UniqueName: \"kubernetes.io/projected/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-kube-api-access-4nz75\") pod \"swift-ring-rebalance-jpvpj\" (UID: \"e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae\") " 
pod="openstack/swift-ring-rebalance-jpvpj" Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.591354 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-dispersionconf\") pod \"swift-ring-rebalance-jpvpj\" (UID: \"e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae\") " pod="openstack/swift-ring-rebalance-jpvpj" Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.664161 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acb1638f-8ad4-476a-a018-261e585bd72a-config\") pod \"dnsmasq-dns-57cdc85b77-pp9cd\" (UID: \"acb1638f-8ad4-476a-a018-261e585bd72a\") " pod="openstack/dnsmasq-dns-57cdc85b77-pp9cd" Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.664685 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/acb1638f-8ad4-476a-a018-261e585bd72a-ovsdbserver-sb\") pod \"dnsmasq-dns-57cdc85b77-pp9cd\" (UID: \"acb1638f-8ad4-476a-a018-261e585bd72a\") " pod="openstack/dnsmasq-dns-57cdc85b77-pp9cd" Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.664772 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/acb1638f-8ad4-476a-a018-261e585bd72a-ovsdbserver-nb\") pod \"dnsmasq-dns-57cdc85b77-pp9cd\" (UID: \"acb1638f-8ad4-476a-a018-261e585bd72a\") " pod="openstack/dnsmasq-dns-57cdc85b77-pp9cd" Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.664890 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkc6m\" (UniqueName: \"kubernetes.io/projected/acb1638f-8ad4-476a-a018-261e585bd72a-kube-api-access-wkc6m\") pod \"dnsmasq-dns-57cdc85b77-pp9cd\" (UID: \"acb1638f-8ad4-476a-a018-261e585bd72a\") " pod="openstack/dnsmasq-dns-57cdc85b77-pp9cd" Jan 29 08:13:07 crc 
kubenswrapper[4826]: I0129 08:13:07.665031 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acb1638f-8ad4-476a-a018-261e585bd72a-config\") pod \"dnsmasq-dns-57cdc85b77-pp9cd\" (UID: \"acb1638f-8ad4-476a-a018-261e585bd72a\") " pod="openstack/dnsmasq-dns-57cdc85b77-pp9cd" Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.665365 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/acb1638f-8ad4-476a-a018-261e585bd72a-ovsdbserver-sb\") pod \"dnsmasq-dns-57cdc85b77-pp9cd\" (UID: \"acb1638f-8ad4-476a-a018-261e585bd72a\") " pod="openstack/dnsmasq-dns-57cdc85b77-pp9cd" Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.665451 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/acb1638f-8ad4-476a-a018-261e585bd72a-dns-svc\") pod \"dnsmasq-dns-57cdc85b77-pp9cd\" (UID: \"acb1638f-8ad4-476a-a018-261e585bd72a\") " pod="openstack/dnsmasq-dns-57cdc85b77-pp9cd" Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.665777 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/acb1638f-8ad4-476a-a018-261e585bd72a-ovsdbserver-nb\") pod \"dnsmasq-dns-57cdc85b77-pp9cd\" (UID: \"acb1638f-8ad4-476a-a018-261e585bd72a\") " pod="openstack/dnsmasq-dns-57cdc85b77-pp9cd" Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.665909 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/acb1638f-8ad4-476a-a018-261e585bd72a-dns-svc\") pod \"dnsmasq-dns-57cdc85b77-pp9cd\" (UID: \"acb1638f-8ad4-476a-a018-261e585bd72a\") " pod="openstack/dnsmasq-dns-57cdc85b77-pp9cd" Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.684427 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wkc6m\" (UniqueName: \"kubernetes.io/projected/acb1638f-8ad4-476a-a018-261e585bd72a-kube-api-access-wkc6m\") pod \"dnsmasq-dns-57cdc85b77-pp9cd\" (UID: \"acb1638f-8ad4-476a-a018-261e585bd72a\") " pod="openstack/dnsmasq-dns-57cdc85b77-pp9cd" Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.695031 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-jpvpj" Jan 29 08:13:07 crc kubenswrapper[4826]: I0129 08:13:07.777014 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57cdc85b77-pp9cd" Jan 29 08:13:08 crc kubenswrapper[4826]: I0129 08:13:08.028074 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57cdc85b77-pp9cd"] Jan 29 08:13:08 crc kubenswrapper[4826]: I0129 08:13:08.125126 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-jpvpj"] Jan 29 08:13:08 crc kubenswrapper[4826]: I0129 08:13:08.444204 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jpvpj" event={"ID":"e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae","Type":"ContainerStarted","Data":"3d693007bd9ba611f1a02c3cca59fc307e955cc33158d5881dc69d251da38e9a"} Jan 29 08:13:08 crc kubenswrapper[4826]: I0129 08:13:08.447422 4826 generic.go:334] "Generic (PLEG): container finished" podID="acb1638f-8ad4-476a-a018-261e585bd72a" containerID="f1c82d2694c98ac8d45099501e27dfdc29d7aa79a6a1dc8040699300e4c60a72" exitCode=0 Jan 29 08:13:08 crc kubenswrapper[4826]: I0129 08:13:08.447453 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57cdc85b77-pp9cd" event={"ID":"acb1638f-8ad4-476a-a018-261e585bd72a","Type":"ContainerDied","Data":"f1c82d2694c98ac8d45099501e27dfdc29d7aa79a6a1dc8040699300e4c60a72"} Jan 29 08:13:08 crc kubenswrapper[4826]: I0129 08:13:08.447472 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57cdc85b77-pp9cd" 
event={"ID":"acb1638f-8ad4-476a-a018-261e585bd72a","Type":"ContainerStarted","Data":"3f5262e360c800d9083bce5b04284c911ac7a30c40831a00121e7734df884999"} Jan 29 08:13:08 crc kubenswrapper[4826]: I0129 08:13:08.809574 4826 scope.go:117] "RemoveContainer" containerID="491d2214652be539c5a02abd82d2f7f7b125c3f1a64568b35d69e37bd575365a" Jan 29 08:13:08 crc kubenswrapper[4826]: E0129 08:13:08.810170 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:13:09 crc kubenswrapper[4826]: I0129 08:13:09.397130 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-dd7969d78-84sjq"] Jan 29 08:13:09 crc kubenswrapper[4826]: I0129 08:13:09.404823 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-dd7969d78-84sjq" Jan 29 08:13:09 crc kubenswrapper[4826]: I0129 08:13:09.406709 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 29 08:13:09 crc kubenswrapper[4826]: I0129 08:13:09.407783 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-dd7969d78-84sjq"] Jan 29 08:13:09 crc kubenswrapper[4826]: I0129 08:13:09.468250 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57cdc85b77-pp9cd" event={"ID":"acb1638f-8ad4-476a-a018-261e585bd72a","Type":"ContainerStarted","Data":"7ec6c8ec9b273742035c567d9a6ebb7fee02685c7f909041c18c30c4db780e1f"} Jan 29 08:13:09 crc kubenswrapper[4826]: I0129 08:13:09.469223 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57cdc85b77-pp9cd" Jan 29 08:13:09 crc kubenswrapper[4826]: I0129 08:13:09.490004 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57cdc85b77-pp9cd" podStartSLOduration=2.489987026 podStartE2EDuration="2.489987026s" podCreationTimestamp="2026-01-29 08:13:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:13:09.484975564 +0000 UTC m=+5373.346768633" watchObservedRunningTime="2026-01-29 08:13:09.489987026 +0000 UTC m=+5373.351780095" Jan 29 08:13:09 crc kubenswrapper[4826]: I0129 08:13:09.495821 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/068a687f-8397-4d97-a2bd-4bfb2220825b-combined-ca-bundle\") pod \"swift-proxy-dd7969d78-84sjq\" (UID: \"068a687f-8397-4d97-a2bd-4bfb2220825b\") " pod="openstack/swift-proxy-dd7969d78-84sjq" Jan 29 08:13:09 crc kubenswrapper[4826]: I0129 08:13:09.495900 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/068a687f-8397-4d97-a2bd-4bfb2220825b-config-data\") pod \"swift-proxy-dd7969d78-84sjq\" (UID: \"068a687f-8397-4d97-a2bd-4bfb2220825b\") " pod="openstack/swift-proxy-dd7969d78-84sjq" Jan 29 08:13:09 crc kubenswrapper[4826]: I0129 08:13:09.496027 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/068a687f-8397-4d97-a2bd-4bfb2220825b-etc-swift\") pod \"swift-proxy-dd7969d78-84sjq\" (UID: \"068a687f-8397-4d97-a2bd-4bfb2220825b\") " pod="openstack/swift-proxy-dd7969d78-84sjq" Jan 29 08:13:09 crc kubenswrapper[4826]: I0129 08:13:09.496120 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqtzv\" (UniqueName: \"kubernetes.io/projected/068a687f-8397-4d97-a2bd-4bfb2220825b-kube-api-access-bqtzv\") pod \"swift-proxy-dd7969d78-84sjq\" (UID: \"068a687f-8397-4d97-a2bd-4bfb2220825b\") " pod="openstack/swift-proxy-dd7969d78-84sjq" Jan 29 08:13:09 crc kubenswrapper[4826]: I0129 08:13:09.496189 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/068a687f-8397-4d97-a2bd-4bfb2220825b-run-httpd\") pod \"swift-proxy-dd7969d78-84sjq\" (UID: \"068a687f-8397-4d97-a2bd-4bfb2220825b\") " pod="openstack/swift-proxy-dd7969d78-84sjq" Jan 29 08:13:09 crc kubenswrapper[4826]: I0129 08:13:09.496235 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/068a687f-8397-4d97-a2bd-4bfb2220825b-log-httpd\") pod \"swift-proxy-dd7969d78-84sjq\" (UID: \"068a687f-8397-4d97-a2bd-4bfb2220825b\") " pod="openstack/swift-proxy-dd7969d78-84sjq" Jan 29 08:13:09 crc kubenswrapper[4826]: I0129 08:13:09.597930 4826 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/068a687f-8397-4d97-a2bd-4bfb2220825b-combined-ca-bundle\") pod \"swift-proxy-dd7969d78-84sjq\" (UID: \"068a687f-8397-4d97-a2bd-4bfb2220825b\") " pod="openstack/swift-proxy-dd7969d78-84sjq" Jan 29 08:13:09 crc kubenswrapper[4826]: I0129 08:13:09.597986 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/068a687f-8397-4d97-a2bd-4bfb2220825b-config-data\") pod \"swift-proxy-dd7969d78-84sjq\" (UID: \"068a687f-8397-4d97-a2bd-4bfb2220825b\") " pod="openstack/swift-proxy-dd7969d78-84sjq" Jan 29 08:13:09 crc kubenswrapper[4826]: I0129 08:13:09.598019 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/068a687f-8397-4d97-a2bd-4bfb2220825b-etc-swift\") pod \"swift-proxy-dd7969d78-84sjq\" (UID: \"068a687f-8397-4d97-a2bd-4bfb2220825b\") " pod="openstack/swift-proxy-dd7969d78-84sjq" Jan 29 08:13:09 crc kubenswrapper[4826]: I0129 08:13:09.598063 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqtzv\" (UniqueName: \"kubernetes.io/projected/068a687f-8397-4d97-a2bd-4bfb2220825b-kube-api-access-bqtzv\") pod \"swift-proxy-dd7969d78-84sjq\" (UID: \"068a687f-8397-4d97-a2bd-4bfb2220825b\") " pod="openstack/swift-proxy-dd7969d78-84sjq" Jan 29 08:13:09 crc kubenswrapper[4826]: I0129 08:13:09.598091 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/068a687f-8397-4d97-a2bd-4bfb2220825b-run-httpd\") pod \"swift-proxy-dd7969d78-84sjq\" (UID: \"068a687f-8397-4d97-a2bd-4bfb2220825b\") " pod="openstack/swift-proxy-dd7969d78-84sjq" Jan 29 08:13:09 crc kubenswrapper[4826]: I0129 08:13:09.598116 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/068a687f-8397-4d97-a2bd-4bfb2220825b-log-httpd\") pod \"swift-proxy-dd7969d78-84sjq\" (UID: \"068a687f-8397-4d97-a2bd-4bfb2220825b\") " pod="openstack/swift-proxy-dd7969d78-84sjq" Jan 29 08:13:09 crc kubenswrapper[4826]: I0129 08:13:09.604389 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/068a687f-8397-4d97-a2bd-4bfb2220825b-log-httpd\") pod \"swift-proxy-dd7969d78-84sjq\" (UID: \"068a687f-8397-4d97-a2bd-4bfb2220825b\") " pod="openstack/swift-proxy-dd7969d78-84sjq" Jan 29 08:13:09 crc kubenswrapper[4826]: I0129 08:13:09.605024 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/068a687f-8397-4d97-a2bd-4bfb2220825b-combined-ca-bundle\") pod \"swift-proxy-dd7969d78-84sjq\" (UID: \"068a687f-8397-4d97-a2bd-4bfb2220825b\") " pod="openstack/swift-proxy-dd7969d78-84sjq" Jan 29 08:13:09 crc kubenswrapper[4826]: I0129 08:13:09.607834 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/068a687f-8397-4d97-a2bd-4bfb2220825b-run-httpd\") pod \"swift-proxy-dd7969d78-84sjq\" (UID: \"068a687f-8397-4d97-a2bd-4bfb2220825b\") " pod="openstack/swift-proxy-dd7969d78-84sjq" Jan 29 08:13:09 crc kubenswrapper[4826]: I0129 08:13:09.629086 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/068a687f-8397-4d97-a2bd-4bfb2220825b-etc-swift\") pod \"swift-proxy-dd7969d78-84sjq\" (UID: \"068a687f-8397-4d97-a2bd-4bfb2220825b\") " pod="openstack/swift-proxy-dd7969d78-84sjq" Jan 29 08:13:09 crc kubenswrapper[4826]: I0129 08:13:09.629198 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqtzv\" (UniqueName: \"kubernetes.io/projected/068a687f-8397-4d97-a2bd-4bfb2220825b-kube-api-access-bqtzv\") pod 
\"swift-proxy-dd7969d78-84sjq\" (UID: \"068a687f-8397-4d97-a2bd-4bfb2220825b\") " pod="openstack/swift-proxy-dd7969d78-84sjq" Jan 29 08:13:09 crc kubenswrapper[4826]: I0129 08:13:09.631140 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/068a687f-8397-4d97-a2bd-4bfb2220825b-config-data\") pod \"swift-proxy-dd7969d78-84sjq\" (UID: \"068a687f-8397-4d97-a2bd-4bfb2220825b\") " pod="openstack/swift-proxy-dd7969d78-84sjq" Jan 29 08:13:09 crc kubenswrapper[4826]: I0129 08:13:09.722184 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-dd7969d78-84sjq" Jan 29 08:13:11 crc kubenswrapper[4826]: I0129 08:13:11.374671 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6979dcc99b-fdxq4"] Jan 29 08:13:11 crc kubenswrapper[4826]: I0129 08:13:11.376223 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6979dcc99b-fdxq4" Jan 29 08:13:11 crc kubenswrapper[4826]: I0129 08:13:11.378503 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 29 08:13:11 crc kubenswrapper[4826]: I0129 08:13:11.379511 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 29 08:13:11 crc kubenswrapper[4826]: I0129 08:13:11.387242 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6979dcc99b-fdxq4"] Jan 29 08:13:11 crc kubenswrapper[4826]: I0129 08:13:11.442234 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/441b644f-ce75-4397-a90e-f1d02c40da19-combined-ca-bundle\") pod \"swift-proxy-6979dcc99b-fdxq4\" (UID: \"441b644f-ce75-4397-a90e-f1d02c40da19\") " pod="openstack/swift-proxy-6979dcc99b-fdxq4" Jan 29 08:13:11 crc kubenswrapper[4826]: I0129 08:13:11.442625 
4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/441b644f-ce75-4397-a90e-f1d02c40da19-internal-tls-certs\") pod \"swift-proxy-6979dcc99b-fdxq4\" (UID: \"441b644f-ce75-4397-a90e-f1d02c40da19\") " pod="openstack/swift-proxy-6979dcc99b-fdxq4" Jan 29 08:13:11 crc kubenswrapper[4826]: I0129 08:13:11.442647 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/441b644f-ce75-4397-a90e-f1d02c40da19-etc-swift\") pod \"swift-proxy-6979dcc99b-fdxq4\" (UID: \"441b644f-ce75-4397-a90e-f1d02c40da19\") " pod="openstack/swift-proxy-6979dcc99b-fdxq4" Jan 29 08:13:11 crc kubenswrapper[4826]: I0129 08:13:11.442670 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/441b644f-ce75-4397-a90e-f1d02c40da19-run-httpd\") pod \"swift-proxy-6979dcc99b-fdxq4\" (UID: \"441b644f-ce75-4397-a90e-f1d02c40da19\") " pod="openstack/swift-proxy-6979dcc99b-fdxq4" Jan 29 08:13:11 crc kubenswrapper[4826]: I0129 08:13:11.442686 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/441b644f-ce75-4397-a90e-f1d02c40da19-log-httpd\") pod \"swift-proxy-6979dcc99b-fdxq4\" (UID: \"441b644f-ce75-4397-a90e-f1d02c40da19\") " pod="openstack/swift-proxy-6979dcc99b-fdxq4" Jan 29 08:13:11 crc kubenswrapper[4826]: I0129 08:13:11.442730 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/441b644f-ce75-4397-a90e-f1d02c40da19-config-data\") pod \"swift-proxy-6979dcc99b-fdxq4\" (UID: \"441b644f-ce75-4397-a90e-f1d02c40da19\") " pod="openstack/swift-proxy-6979dcc99b-fdxq4" Jan 29 08:13:11 crc kubenswrapper[4826]: I0129 
08:13:11.442759 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjmnt\" (UniqueName: \"kubernetes.io/projected/441b644f-ce75-4397-a90e-f1d02c40da19-kube-api-access-mjmnt\") pod \"swift-proxy-6979dcc99b-fdxq4\" (UID: \"441b644f-ce75-4397-a90e-f1d02c40da19\") " pod="openstack/swift-proxy-6979dcc99b-fdxq4" Jan 29 08:13:11 crc kubenswrapper[4826]: I0129 08:13:11.442801 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/441b644f-ce75-4397-a90e-f1d02c40da19-public-tls-certs\") pod \"swift-proxy-6979dcc99b-fdxq4\" (UID: \"441b644f-ce75-4397-a90e-f1d02c40da19\") " pod="openstack/swift-proxy-6979dcc99b-fdxq4" Jan 29 08:13:11 crc kubenswrapper[4826]: I0129 08:13:11.544913 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/441b644f-ce75-4397-a90e-f1d02c40da19-internal-tls-certs\") pod \"swift-proxy-6979dcc99b-fdxq4\" (UID: \"441b644f-ce75-4397-a90e-f1d02c40da19\") " pod="openstack/swift-proxy-6979dcc99b-fdxq4" Jan 29 08:13:11 crc kubenswrapper[4826]: I0129 08:13:11.544957 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/441b644f-ce75-4397-a90e-f1d02c40da19-etc-swift\") pod \"swift-proxy-6979dcc99b-fdxq4\" (UID: \"441b644f-ce75-4397-a90e-f1d02c40da19\") " pod="openstack/swift-proxy-6979dcc99b-fdxq4" Jan 29 08:13:11 crc kubenswrapper[4826]: I0129 08:13:11.544987 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/441b644f-ce75-4397-a90e-f1d02c40da19-run-httpd\") pod \"swift-proxy-6979dcc99b-fdxq4\" (UID: \"441b644f-ce75-4397-a90e-f1d02c40da19\") " pod="openstack/swift-proxy-6979dcc99b-fdxq4" Jan 29 08:13:11 crc kubenswrapper[4826]: I0129 08:13:11.545009 
4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/441b644f-ce75-4397-a90e-f1d02c40da19-log-httpd\") pod \"swift-proxy-6979dcc99b-fdxq4\" (UID: \"441b644f-ce75-4397-a90e-f1d02c40da19\") " pod="openstack/swift-proxy-6979dcc99b-fdxq4" Jan 29 08:13:11 crc kubenswrapper[4826]: I0129 08:13:11.545068 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/441b644f-ce75-4397-a90e-f1d02c40da19-config-data\") pod \"swift-proxy-6979dcc99b-fdxq4\" (UID: \"441b644f-ce75-4397-a90e-f1d02c40da19\") " pod="openstack/swift-proxy-6979dcc99b-fdxq4" Jan 29 08:13:11 crc kubenswrapper[4826]: I0129 08:13:11.545103 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjmnt\" (UniqueName: \"kubernetes.io/projected/441b644f-ce75-4397-a90e-f1d02c40da19-kube-api-access-mjmnt\") pod \"swift-proxy-6979dcc99b-fdxq4\" (UID: \"441b644f-ce75-4397-a90e-f1d02c40da19\") " pod="openstack/swift-proxy-6979dcc99b-fdxq4" Jan 29 08:13:11 crc kubenswrapper[4826]: I0129 08:13:11.545154 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/441b644f-ce75-4397-a90e-f1d02c40da19-public-tls-certs\") pod \"swift-proxy-6979dcc99b-fdxq4\" (UID: \"441b644f-ce75-4397-a90e-f1d02c40da19\") " pod="openstack/swift-proxy-6979dcc99b-fdxq4" Jan 29 08:13:11 crc kubenswrapper[4826]: I0129 08:13:11.545625 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/441b644f-ce75-4397-a90e-f1d02c40da19-combined-ca-bundle\") pod \"swift-proxy-6979dcc99b-fdxq4\" (UID: \"441b644f-ce75-4397-a90e-f1d02c40da19\") " pod="openstack/swift-proxy-6979dcc99b-fdxq4" Jan 29 08:13:11 crc kubenswrapper[4826]: I0129 08:13:11.547068 4826 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/441b644f-ce75-4397-a90e-f1d02c40da19-log-httpd\") pod \"swift-proxy-6979dcc99b-fdxq4\" (UID: \"441b644f-ce75-4397-a90e-f1d02c40da19\") " pod="openstack/swift-proxy-6979dcc99b-fdxq4" Jan 29 08:13:11 crc kubenswrapper[4826]: I0129 08:13:11.551402 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/441b644f-ce75-4397-a90e-f1d02c40da19-combined-ca-bundle\") pod \"swift-proxy-6979dcc99b-fdxq4\" (UID: \"441b644f-ce75-4397-a90e-f1d02c40da19\") " pod="openstack/swift-proxy-6979dcc99b-fdxq4" Jan 29 08:13:11 crc kubenswrapper[4826]: I0129 08:13:11.551795 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/441b644f-ce75-4397-a90e-f1d02c40da19-run-httpd\") pod \"swift-proxy-6979dcc99b-fdxq4\" (UID: \"441b644f-ce75-4397-a90e-f1d02c40da19\") " pod="openstack/swift-proxy-6979dcc99b-fdxq4" Jan 29 08:13:11 crc kubenswrapper[4826]: I0129 08:13:11.553825 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/441b644f-ce75-4397-a90e-f1d02c40da19-public-tls-certs\") pod \"swift-proxy-6979dcc99b-fdxq4\" (UID: \"441b644f-ce75-4397-a90e-f1d02c40da19\") " pod="openstack/swift-proxy-6979dcc99b-fdxq4" Jan 29 08:13:11 crc kubenswrapper[4826]: I0129 08:13:11.554939 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/441b644f-ce75-4397-a90e-f1d02c40da19-etc-swift\") pod \"swift-proxy-6979dcc99b-fdxq4\" (UID: \"441b644f-ce75-4397-a90e-f1d02c40da19\") " pod="openstack/swift-proxy-6979dcc99b-fdxq4" Jan 29 08:13:11 crc kubenswrapper[4826]: I0129 08:13:11.558065 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/441b644f-ce75-4397-a90e-f1d02c40da19-config-data\") pod \"swift-proxy-6979dcc99b-fdxq4\" (UID: \"441b644f-ce75-4397-a90e-f1d02c40da19\") " pod="openstack/swift-proxy-6979dcc99b-fdxq4" Jan 29 08:13:11 crc kubenswrapper[4826]: I0129 08:13:11.566052 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/441b644f-ce75-4397-a90e-f1d02c40da19-internal-tls-certs\") pod \"swift-proxy-6979dcc99b-fdxq4\" (UID: \"441b644f-ce75-4397-a90e-f1d02c40da19\") " pod="openstack/swift-proxy-6979dcc99b-fdxq4" Jan 29 08:13:11 crc kubenswrapper[4826]: I0129 08:13:11.576018 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjmnt\" (UniqueName: \"kubernetes.io/projected/441b644f-ce75-4397-a90e-f1d02c40da19-kube-api-access-mjmnt\") pod \"swift-proxy-6979dcc99b-fdxq4\" (UID: \"441b644f-ce75-4397-a90e-f1d02c40da19\") " pod="openstack/swift-proxy-6979dcc99b-fdxq4" Jan 29 08:13:11 crc kubenswrapper[4826]: I0129 08:13:11.699256 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6979dcc99b-fdxq4" Jan 29 08:13:12 crc kubenswrapper[4826]: I0129 08:13:12.136752 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-dd7969d78-84sjq"] Jan 29 08:13:12 crc kubenswrapper[4826]: W0129 08:13:12.138113 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod068a687f_8397_4d97_a2bd_4bfb2220825b.slice/crio-29ded569448757ca6bc9034413c35726111ce2bd42d9b252512b4b3e1e387744 WatchSource:0}: Error finding container 29ded569448757ca6bc9034413c35726111ce2bd42d9b252512b4b3e1e387744: Status 404 returned error can't find the container with id 29ded569448757ca6bc9034413c35726111ce2bd42d9b252512b4b3e1e387744 Jan 29 08:13:12 crc kubenswrapper[4826]: I0129 08:13:12.332884 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6979dcc99b-fdxq4"] Jan 29 08:13:12 crc kubenswrapper[4826]: W0129 08:13:12.343507 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod441b644f_ce75_4397_a90e_f1d02c40da19.slice/crio-fbb5e634a210c56616d93ff018a5aaab619880b3f3dd5a5f0a18df20166d6a65 WatchSource:0}: Error finding container fbb5e634a210c56616d93ff018a5aaab619880b3f3dd5a5f0a18df20166d6a65: Status 404 returned error can't find the container with id fbb5e634a210c56616d93ff018a5aaab619880b3f3dd5a5f0a18df20166d6a65 Jan 29 08:13:12 crc kubenswrapper[4826]: I0129 08:13:12.509052 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jpvpj" event={"ID":"e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae","Type":"ContainerStarted","Data":"2dea8f3f6d85e962f088431ceeede53d89cf50472cea129a817cda89ed00a241"} Jan 29 08:13:12 crc kubenswrapper[4826]: I0129 08:13:12.510565 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6979dcc99b-fdxq4" 
event={"ID":"441b644f-ce75-4397-a90e-f1d02c40da19","Type":"ContainerStarted","Data":"fbb5e634a210c56616d93ff018a5aaab619880b3f3dd5a5f0a18df20166d6a65"} Jan 29 08:13:12 crc kubenswrapper[4826]: I0129 08:13:12.512431 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-dd7969d78-84sjq" event={"ID":"068a687f-8397-4d97-a2bd-4bfb2220825b","Type":"ContainerStarted","Data":"8432178450a0b0fe2f28ef9c04f1e2001af69ca06cabb47f0280fb0b546b8f39"} Jan 29 08:13:12 crc kubenswrapper[4826]: I0129 08:13:12.512458 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-dd7969d78-84sjq" event={"ID":"068a687f-8397-4d97-a2bd-4bfb2220825b","Type":"ContainerStarted","Data":"fd8f9f88699fe8a3d0815614a0c58de3ae05b04a24a145a4a23f3b17351d5733"} Jan 29 08:13:12 crc kubenswrapper[4826]: I0129 08:13:12.512471 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-dd7969d78-84sjq" event={"ID":"068a687f-8397-4d97-a2bd-4bfb2220825b","Type":"ContainerStarted","Data":"29ded569448757ca6bc9034413c35726111ce2bd42d9b252512b4b3e1e387744"} Jan 29 08:13:12 crc kubenswrapper[4826]: I0129 08:13:12.512677 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-dd7969d78-84sjq" Jan 29 08:13:12 crc kubenswrapper[4826]: I0129 08:13:12.512719 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-dd7969d78-84sjq" Jan 29 08:13:12 crc kubenswrapper[4826]: I0129 08:13:12.529406 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-jpvpj" podStartSLOduration=2.201444334 podStartE2EDuration="5.529388341s" podCreationTimestamp="2026-01-29 08:13:07 +0000 UTC" firstStartedPulling="2026-01-29 08:13:08.135774581 +0000 UTC m=+5371.997567650" lastFinishedPulling="2026-01-29 08:13:11.463718568 +0000 UTC m=+5375.325511657" observedRunningTime="2026-01-29 08:13:12.523214999 +0000 UTC m=+5376.385008078" 
watchObservedRunningTime="2026-01-29 08:13:12.529388341 +0000 UTC m=+5376.391181420" Jan 29 08:13:12 crc kubenswrapper[4826]: I0129 08:13:12.543865 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-dd7969d78-84sjq" podStartSLOduration=3.543843791 podStartE2EDuration="3.543843791s" podCreationTimestamp="2026-01-29 08:13:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:13:12.537664508 +0000 UTC m=+5376.399457577" watchObservedRunningTime="2026-01-29 08:13:12.543843791 +0000 UTC m=+5376.405636880" Jan 29 08:13:13 crc kubenswrapper[4826]: I0129 08:13:13.521354 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6979dcc99b-fdxq4" event={"ID":"441b644f-ce75-4397-a90e-f1d02c40da19","Type":"ContainerStarted","Data":"35038a82b6044c556bc47b95c7ed69924a02087fcd9106d452fa9c5f0f2683a7"} Jan 29 08:13:13 crc kubenswrapper[4826]: I0129 08:13:13.521400 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6979dcc99b-fdxq4" event={"ID":"441b644f-ce75-4397-a90e-f1d02c40da19","Type":"ContainerStarted","Data":"cdc0c5695b289f0dcb49bad27d1f9044f60589c76375bc0e25c056fe92cf627d"} Jan 29 08:13:13 crc kubenswrapper[4826]: I0129 08:13:13.542467 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6979dcc99b-fdxq4" podStartSLOduration=2.54244882 podStartE2EDuration="2.54244882s" podCreationTimestamp="2026-01-29 08:13:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:13:13.537096559 +0000 UTC m=+5377.398889628" watchObservedRunningTime="2026-01-29 08:13:13.54244882 +0000 UTC m=+5377.404241879" Jan 29 08:13:14 crc kubenswrapper[4826]: I0129 08:13:14.532331 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/swift-proxy-6979dcc99b-fdxq4" Jan 29 08:13:14 crc kubenswrapper[4826]: I0129 08:13:14.532762 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6979dcc99b-fdxq4" Jan 29 08:13:15 crc kubenswrapper[4826]: I0129 08:13:15.549841 4826 generic.go:334] "Generic (PLEG): container finished" podID="e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae" containerID="2dea8f3f6d85e962f088431ceeede53d89cf50472cea129a817cda89ed00a241" exitCode=0 Jan 29 08:13:15 crc kubenswrapper[4826]: I0129 08:13:15.549933 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jpvpj" event={"ID":"e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae","Type":"ContainerDied","Data":"2dea8f3f6d85e962f088431ceeede53d89cf50472cea129a817cda89ed00a241"} Jan 29 08:13:16 crc kubenswrapper[4826]: I0129 08:13:16.983234 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-jpvpj" Jan 29 08:13:17 crc kubenswrapper[4826]: I0129 08:13:17.084061 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-scripts\") pod \"e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae\" (UID: \"e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae\") " Jan 29 08:13:17 crc kubenswrapper[4826]: I0129 08:13:17.084189 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nz75\" (UniqueName: \"kubernetes.io/projected/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-kube-api-access-4nz75\") pod \"e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae\" (UID: \"e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae\") " Jan 29 08:13:17 crc kubenswrapper[4826]: I0129 08:13:17.084332 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-dispersionconf\") pod \"e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae\" (UID: 
\"e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae\") " Jan 29 08:13:17 crc kubenswrapper[4826]: I0129 08:13:17.084362 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-ring-data-devices\") pod \"e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae\" (UID: \"e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae\") " Jan 29 08:13:17 crc kubenswrapper[4826]: I0129 08:13:17.084451 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-combined-ca-bundle\") pod \"e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae\" (UID: \"e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae\") " Jan 29 08:13:17 crc kubenswrapper[4826]: I0129 08:13:17.084489 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-etc-swift\") pod \"e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae\" (UID: \"e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae\") " Jan 29 08:13:17 crc kubenswrapper[4826]: I0129 08:13:17.084516 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-swiftconf\") pod \"e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae\" (UID: \"e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae\") " Jan 29 08:13:17 crc kubenswrapper[4826]: I0129 08:13:17.085167 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae" (UID: "e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:13:17 crc kubenswrapper[4826]: I0129 08:13:17.085812 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae" (UID: "e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:13:17 crc kubenswrapper[4826]: I0129 08:13:17.093479 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae" (UID: "e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:13:17 crc kubenswrapper[4826]: I0129 08:13:17.103476 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-kube-api-access-4nz75" (OuterVolumeSpecName: "kube-api-access-4nz75") pod "e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae" (UID: "e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae"). InnerVolumeSpecName "kube-api-access-4nz75". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:13:17 crc kubenswrapper[4826]: I0129 08:13:17.115459 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae" (UID: "e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:13:17 crc kubenswrapper[4826]: I0129 08:13:17.116317 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae" (UID: "e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:13:17 crc kubenswrapper[4826]: I0129 08:13:17.121971 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-scripts" (OuterVolumeSpecName: "scripts") pod "e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae" (UID: "e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:13:17 crc kubenswrapper[4826]: I0129 08:13:17.186223 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nz75\" (UniqueName: \"kubernetes.io/projected/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-kube-api-access-4nz75\") on node \"crc\" DevicePath \"\"" Jan 29 08:13:17 crc kubenswrapper[4826]: I0129 08:13:17.186258 4826 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 29 08:13:17 crc kubenswrapper[4826]: I0129 08:13:17.186271 4826 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 29 08:13:17 crc kubenswrapper[4826]: I0129 08:13:17.186280 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Jan 29 08:13:17 crc kubenswrapper[4826]: I0129 08:13:17.186291 4826 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 29 08:13:17 crc kubenswrapper[4826]: I0129 08:13:17.186311 4826 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 29 08:13:17 crc kubenswrapper[4826]: I0129 08:13:17.186321 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:13:17 crc kubenswrapper[4826]: I0129 08:13:17.572262 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jpvpj" event={"ID":"e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae","Type":"ContainerDied","Data":"3d693007bd9ba611f1a02c3cca59fc307e955cc33158d5881dc69d251da38e9a"} Jan 29 08:13:17 crc kubenswrapper[4826]: I0129 08:13:17.572363 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d693007bd9ba611f1a02c3cca59fc307e955cc33158d5881dc69d251da38e9a" Jan 29 08:13:17 crc kubenswrapper[4826]: I0129 08:13:17.572401 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-jpvpj" Jan 29 08:13:17 crc kubenswrapper[4826]: I0129 08:13:17.779516 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57cdc85b77-pp9cd" Jan 29 08:13:17 crc kubenswrapper[4826]: I0129 08:13:17.867111 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6779c9d96f-pg8l2"] Jan 29 08:13:17 crc kubenswrapper[4826]: I0129 08:13:17.867632 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6779c9d96f-pg8l2" podUID="2f748bb0-1855-4f91-9fb3-b22c39e29eab" containerName="dnsmasq-dns" containerID="cri-o://c07ab096b05162127deea8a753f92287fc72f737e9c6dbea450ee46d48d6c0f9" gracePeriod=10 Jan 29 08:13:18 crc kubenswrapper[4826]: I0129 08:13:18.391953 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6779c9d96f-pg8l2" Jan 29 08:13:18 crc kubenswrapper[4826]: I0129 08:13:18.512962 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f748bb0-1855-4f91-9fb3-b22c39e29eab-ovsdbserver-sb\") pod \"2f748bb0-1855-4f91-9fb3-b22c39e29eab\" (UID: \"2f748bb0-1855-4f91-9fb3-b22c39e29eab\") " Jan 29 08:13:18 crc kubenswrapper[4826]: I0129 08:13:18.513328 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f748bb0-1855-4f91-9fb3-b22c39e29eab-config\") pod \"2f748bb0-1855-4f91-9fb3-b22c39e29eab\" (UID: \"2f748bb0-1855-4f91-9fb3-b22c39e29eab\") " Jan 29 08:13:18 crc kubenswrapper[4826]: I0129 08:13:18.513499 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f748bb0-1855-4f91-9fb3-b22c39e29eab-ovsdbserver-nb\") pod \"2f748bb0-1855-4f91-9fb3-b22c39e29eab\" (UID: 
\"2f748bb0-1855-4f91-9fb3-b22c39e29eab\") " Jan 29 08:13:18 crc kubenswrapper[4826]: I0129 08:13:18.513592 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4fg5\" (UniqueName: \"kubernetes.io/projected/2f748bb0-1855-4f91-9fb3-b22c39e29eab-kube-api-access-r4fg5\") pod \"2f748bb0-1855-4f91-9fb3-b22c39e29eab\" (UID: \"2f748bb0-1855-4f91-9fb3-b22c39e29eab\") " Jan 29 08:13:18 crc kubenswrapper[4826]: I0129 08:13:18.513829 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f748bb0-1855-4f91-9fb3-b22c39e29eab-dns-svc\") pod \"2f748bb0-1855-4f91-9fb3-b22c39e29eab\" (UID: \"2f748bb0-1855-4f91-9fb3-b22c39e29eab\") " Jan 29 08:13:18 crc kubenswrapper[4826]: I0129 08:13:18.518689 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f748bb0-1855-4f91-9fb3-b22c39e29eab-kube-api-access-r4fg5" (OuterVolumeSpecName: "kube-api-access-r4fg5") pod "2f748bb0-1855-4f91-9fb3-b22c39e29eab" (UID: "2f748bb0-1855-4f91-9fb3-b22c39e29eab"). InnerVolumeSpecName "kube-api-access-r4fg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:13:18 crc kubenswrapper[4826]: I0129 08:13:18.559561 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f748bb0-1855-4f91-9fb3-b22c39e29eab-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2f748bb0-1855-4f91-9fb3-b22c39e29eab" (UID: "2f748bb0-1855-4f91-9fb3-b22c39e29eab"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:13:18 crc kubenswrapper[4826]: I0129 08:13:18.562853 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f748bb0-1855-4f91-9fb3-b22c39e29eab-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2f748bb0-1855-4f91-9fb3-b22c39e29eab" (UID: "2f748bb0-1855-4f91-9fb3-b22c39e29eab"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:13:18 crc kubenswrapper[4826]: I0129 08:13:18.568412 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f748bb0-1855-4f91-9fb3-b22c39e29eab-config" (OuterVolumeSpecName: "config") pod "2f748bb0-1855-4f91-9fb3-b22c39e29eab" (UID: "2f748bb0-1855-4f91-9fb3-b22c39e29eab"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:13:18 crc kubenswrapper[4826]: I0129 08:13:18.574086 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f748bb0-1855-4f91-9fb3-b22c39e29eab-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2f748bb0-1855-4f91-9fb3-b22c39e29eab" (UID: "2f748bb0-1855-4f91-9fb3-b22c39e29eab"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:13:18 crc kubenswrapper[4826]: I0129 08:13:18.583426 4826 generic.go:334] "Generic (PLEG): container finished" podID="2f748bb0-1855-4f91-9fb3-b22c39e29eab" containerID="c07ab096b05162127deea8a753f92287fc72f737e9c6dbea450ee46d48d6c0f9" exitCode=0 Jan 29 08:13:18 crc kubenswrapper[4826]: I0129 08:13:18.583472 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6779c9d96f-pg8l2" event={"ID":"2f748bb0-1855-4f91-9fb3-b22c39e29eab","Type":"ContainerDied","Data":"c07ab096b05162127deea8a753f92287fc72f737e9c6dbea450ee46d48d6c0f9"} Jan 29 08:13:18 crc kubenswrapper[4826]: I0129 08:13:18.583504 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6779c9d96f-pg8l2" event={"ID":"2f748bb0-1855-4f91-9fb3-b22c39e29eab","Type":"ContainerDied","Data":"cb00235711f093c36ce471ad8bcbcc89b958045d3b3425d93f7ee8b8251dc104"} Jan 29 08:13:18 crc kubenswrapper[4826]: I0129 08:13:18.583508 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6779c9d96f-pg8l2" Jan 29 08:13:18 crc kubenswrapper[4826]: I0129 08:13:18.583523 4826 scope.go:117] "RemoveContainer" containerID="c07ab096b05162127deea8a753f92287fc72f737e9c6dbea450ee46d48d6c0f9" Jan 29 08:13:18 crc kubenswrapper[4826]: I0129 08:13:18.616837 4826 scope.go:117] "RemoveContainer" containerID="8d8da36ab850fd1f1e4e00829ccb5de9959c5f29f3945d744e48a4ea4b727388" Jan 29 08:13:18 crc kubenswrapper[4826]: I0129 08:13:18.617826 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f748bb0-1855-4f91-9fb3-b22c39e29eab-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 08:13:18 crc kubenswrapper[4826]: I0129 08:13:18.617862 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f748bb0-1855-4f91-9fb3-b22c39e29eab-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 08:13:18 crc kubenswrapper[4826]: I0129 08:13:18.617878 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f748bb0-1855-4f91-9fb3-b22c39e29eab-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:13:18 crc kubenswrapper[4826]: I0129 08:13:18.617891 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f748bb0-1855-4f91-9fb3-b22c39e29eab-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 08:13:18 crc kubenswrapper[4826]: I0129 08:13:18.617905 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4fg5\" (UniqueName: \"kubernetes.io/projected/2f748bb0-1855-4f91-9fb3-b22c39e29eab-kube-api-access-r4fg5\") on node \"crc\" DevicePath \"\"" Jan 29 08:13:18 crc kubenswrapper[4826]: I0129 08:13:18.639711 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6779c9d96f-pg8l2"] Jan 29 08:13:18 crc kubenswrapper[4826]: I0129 08:13:18.641003 4826 scope.go:117] 
"RemoveContainer" containerID="c07ab096b05162127deea8a753f92287fc72f737e9c6dbea450ee46d48d6c0f9" Jan 29 08:13:18 crc kubenswrapper[4826]: E0129 08:13:18.641456 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c07ab096b05162127deea8a753f92287fc72f737e9c6dbea450ee46d48d6c0f9\": container with ID starting with c07ab096b05162127deea8a753f92287fc72f737e9c6dbea450ee46d48d6c0f9 not found: ID does not exist" containerID="c07ab096b05162127deea8a753f92287fc72f737e9c6dbea450ee46d48d6c0f9" Jan 29 08:13:18 crc kubenswrapper[4826]: I0129 08:13:18.641513 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c07ab096b05162127deea8a753f92287fc72f737e9c6dbea450ee46d48d6c0f9"} err="failed to get container status \"c07ab096b05162127deea8a753f92287fc72f737e9c6dbea450ee46d48d6c0f9\": rpc error: code = NotFound desc = could not find container \"c07ab096b05162127deea8a753f92287fc72f737e9c6dbea450ee46d48d6c0f9\": container with ID starting with c07ab096b05162127deea8a753f92287fc72f737e9c6dbea450ee46d48d6c0f9 not found: ID does not exist" Jan 29 08:13:18 crc kubenswrapper[4826]: I0129 08:13:18.641538 4826 scope.go:117] "RemoveContainer" containerID="8d8da36ab850fd1f1e4e00829ccb5de9959c5f29f3945d744e48a4ea4b727388" Jan 29 08:13:18 crc kubenswrapper[4826]: E0129 08:13:18.643790 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d8da36ab850fd1f1e4e00829ccb5de9959c5f29f3945d744e48a4ea4b727388\": container with ID starting with 8d8da36ab850fd1f1e4e00829ccb5de9959c5f29f3945d744e48a4ea4b727388 not found: ID does not exist" containerID="8d8da36ab850fd1f1e4e00829ccb5de9959c5f29f3945d744e48a4ea4b727388" Jan 29 08:13:18 crc kubenswrapper[4826]: I0129 08:13:18.643824 4826 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8d8da36ab850fd1f1e4e00829ccb5de9959c5f29f3945d744e48a4ea4b727388"} err="failed to get container status \"8d8da36ab850fd1f1e4e00829ccb5de9959c5f29f3945d744e48a4ea4b727388\": rpc error: code = NotFound desc = could not find container \"8d8da36ab850fd1f1e4e00829ccb5de9959c5f29f3945d744e48a4ea4b727388\": container with ID starting with 8d8da36ab850fd1f1e4e00829ccb5de9959c5f29f3945d744e48a4ea4b727388 not found: ID does not exist" Jan 29 08:13:18 crc kubenswrapper[4826]: I0129 08:13:18.649039 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6779c9d96f-pg8l2"] Jan 29 08:13:18 crc kubenswrapper[4826]: I0129 08:13:18.821015 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f748bb0-1855-4f91-9fb3-b22c39e29eab" path="/var/lib/kubelet/pods/2f748bb0-1855-4f91-9fb3-b22c39e29eab/volumes" Jan 29 08:13:19 crc kubenswrapper[4826]: I0129 08:13:19.725090 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-dd7969d78-84sjq" Jan 29 08:13:19 crc kubenswrapper[4826]: I0129 08:13:19.728446 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-dd7969d78-84sjq" Jan 29 08:13:19 crc kubenswrapper[4826]: I0129 08:13:19.943543 4826 scope.go:117] "RemoveContainer" containerID="c1ad3845712b39a0059997603369da605485bee6656e374de156ba79ac3b693a" Jan 29 08:13:20 crc kubenswrapper[4826]: I0129 08:13:20.809090 4826 scope.go:117] "RemoveContainer" containerID="491d2214652be539c5a02abd82d2f7f7b125c3f1a64568b35d69e37bd575365a" Jan 29 08:13:20 crc kubenswrapper[4826]: E0129 08:13:20.809587 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:13:21 crc kubenswrapper[4826]: I0129 08:13:21.710585 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6979dcc99b-fdxq4" Jan 29 08:13:21 crc kubenswrapper[4826]: I0129 08:13:21.711267 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6979dcc99b-fdxq4" Jan 29 08:13:21 crc kubenswrapper[4826]: I0129 08:13:21.857619 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-dd7969d78-84sjq"] Jan 29 08:13:21 crc kubenswrapper[4826]: I0129 08:13:21.858088 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-dd7969d78-84sjq" podUID="068a687f-8397-4d97-a2bd-4bfb2220825b" containerName="proxy-server" containerID="cri-o://8432178450a0b0fe2f28ef9c04f1e2001af69ca06cabb47f0280fb0b546b8f39" gracePeriod=30 Jan 29 08:13:21 crc kubenswrapper[4826]: I0129 08:13:21.857905 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-dd7969d78-84sjq" podUID="068a687f-8397-4d97-a2bd-4bfb2220825b" containerName="proxy-httpd" containerID="cri-o://fd8f9f88699fe8a3d0815614a0c58de3ae05b04a24a145a4a23f3b17351d5733" gracePeriod=30 Jan 29 08:13:22 crc kubenswrapper[4826]: I0129 08:13:22.630700 4826 generic.go:334] "Generic (PLEG): container finished" podID="068a687f-8397-4d97-a2bd-4bfb2220825b" containerID="8432178450a0b0fe2f28ef9c04f1e2001af69ca06cabb47f0280fb0b546b8f39" exitCode=0 Jan 29 08:13:22 crc kubenswrapper[4826]: I0129 08:13:22.630963 4826 generic.go:334] "Generic (PLEG): container finished" podID="068a687f-8397-4d97-a2bd-4bfb2220825b" containerID="fd8f9f88699fe8a3d0815614a0c58de3ae05b04a24a145a4a23f3b17351d5733" exitCode=0 Jan 29 08:13:22 crc kubenswrapper[4826]: I0129 08:13:22.630788 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-proxy-dd7969d78-84sjq" event={"ID":"068a687f-8397-4d97-a2bd-4bfb2220825b","Type":"ContainerDied","Data":"8432178450a0b0fe2f28ef9c04f1e2001af69ca06cabb47f0280fb0b546b8f39"} Jan 29 08:13:22 crc kubenswrapper[4826]: I0129 08:13:22.631071 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-dd7969d78-84sjq" event={"ID":"068a687f-8397-4d97-a2bd-4bfb2220825b","Type":"ContainerDied","Data":"fd8f9f88699fe8a3d0815614a0c58de3ae05b04a24a145a4a23f3b17351d5733"} Jan 29 08:13:22 crc kubenswrapper[4826]: I0129 08:13:22.898576 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-dd7969d78-84sjq" Jan 29 08:13:22 crc kubenswrapper[4826]: I0129 08:13:22.959601 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/068a687f-8397-4d97-a2bd-4bfb2220825b-log-httpd\") pod \"068a687f-8397-4d97-a2bd-4bfb2220825b\" (UID: \"068a687f-8397-4d97-a2bd-4bfb2220825b\") " Jan 29 08:13:22 crc kubenswrapper[4826]: I0129 08:13:22.959663 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/068a687f-8397-4d97-a2bd-4bfb2220825b-etc-swift\") pod \"068a687f-8397-4d97-a2bd-4bfb2220825b\" (UID: \"068a687f-8397-4d97-a2bd-4bfb2220825b\") " Jan 29 08:13:22 crc kubenswrapper[4826]: I0129 08:13:22.959815 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqtzv\" (UniqueName: \"kubernetes.io/projected/068a687f-8397-4d97-a2bd-4bfb2220825b-kube-api-access-bqtzv\") pod \"068a687f-8397-4d97-a2bd-4bfb2220825b\" (UID: \"068a687f-8397-4d97-a2bd-4bfb2220825b\") " Jan 29 08:13:22 crc kubenswrapper[4826]: I0129 08:13:22.959913 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/068a687f-8397-4d97-a2bd-4bfb2220825b-config-data\") 
pod \"068a687f-8397-4d97-a2bd-4bfb2220825b\" (UID: \"068a687f-8397-4d97-a2bd-4bfb2220825b\") " Jan 29 08:13:22 crc kubenswrapper[4826]: I0129 08:13:22.959998 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/068a687f-8397-4d97-a2bd-4bfb2220825b-run-httpd\") pod \"068a687f-8397-4d97-a2bd-4bfb2220825b\" (UID: \"068a687f-8397-4d97-a2bd-4bfb2220825b\") " Jan 29 08:13:22 crc kubenswrapper[4826]: I0129 08:13:22.960034 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/068a687f-8397-4d97-a2bd-4bfb2220825b-combined-ca-bundle\") pod \"068a687f-8397-4d97-a2bd-4bfb2220825b\" (UID: \"068a687f-8397-4d97-a2bd-4bfb2220825b\") " Jan 29 08:13:22 crc kubenswrapper[4826]: I0129 08:13:22.960551 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/068a687f-8397-4d97-a2bd-4bfb2220825b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "068a687f-8397-4d97-a2bd-4bfb2220825b" (UID: "068a687f-8397-4d97-a2bd-4bfb2220825b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:13:22 crc kubenswrapper[4826]: I0129 08:13:22.960707 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/068a687f-8397-4d97-a2bd-4bfb2220825b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "068a687f-8397-4d97-a2bd-4bfb2220825b" (UID: "068a687f-8397-4d97-a2bd-4bfb2220825b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:13:22 crc kubenswrapper[4826]: I0129 08:13:22.986522 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/068a687f-8397-4d97-a2bd-4bfb2220825b-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "068a687f-8397-4d97-a2bd-4bfb2220825b" (UID: "068a687f-8397-4d97-a2bd-4bfb2220825b"). 
InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:13:22 crc kubenswrapper[4826]: I0129 08:13:22.986648 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/068a687f-8397-4d97-a2bd-4bfb2220825b-kube-api-access-bqtzv" (OuterVolumeSpecName: "kube-api-access-bqtzv") pod "068a687f-8397-4d97-a2bd-4bfb2220825b" (UID: "068a687f-8397-4d97-a2bd-4bfb2220825b"). InnerVolumeSpecName "kube-api-access-bqtzv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:13:23 crc kubenswrapper[4826]: I0129 08:13:23.022853 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/068a687f-8397-4d97-a2bd-4bfb2220825b-config-data" (OuterVolumeSpecName: "config-data") pod "068a687f-8397-4d97-a2bd-4bfb2220825b" (UID: "068a687f-8397-4d97-a2bd-4bfb2220825b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:13:23 crc kubenswrapper[4826]: I0129 08:13:23.047388 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/068a687f-8397-4d97-a2bd-4bfb2220825b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "068a687f-8397-4d97-a2bd-4bfb2220825b" (UID: "068a687f-8397-4d97-a2bd-4bfb2220825b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:13:23 crc kubenswrapper[4826]: I0129 08:13:23.061270 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/068a687f-8397-4d97-a2bd-4bfb2220825b-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:13:23 crc kubenswrapper[4826]: I0129 08:13:23.061330 4826 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/068a687f-8397-4d97-a2bd-4bfb2220825b-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 08:13:23 crc kubenswrapper[4826]: I0129 08:13:23.061340 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/068a687f-8397-4d97-a2bd-4bfb2220825b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:13:23 crc kubenswrapper[4826]: I0129 08:13:23.061350 4826 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/068a687f-8397-4d97-a2bd-4bfb2220825b-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 08:13:23 crc kubenswrapper[4826]: I0129 08:13:23.061360 4826 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/068a687f-8397-4d97-a2bd-4bfb2220825b-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 29 08:13:23 crc kubenswrapper[4826]: I0129 08:13:23.061368 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqtzv\" (UniqueName: \"kubernetes.io/projected/068a687f-8397-4d97-a2bd-4bfb2220825b-kube-api-access-bqtzv\") on node \"crc\" DevicePath \"\"" Jan 29 08:13:23 crc kubenswrapper[4826]: I0129 08:13:23.644496 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-dd7969d78-84sjq" event={"ID":"068a687f-8397-4d97-a2bd-4bfb2220825b","Type":"ContainerDied","Data":"29ded569448757ca6bc9034413c35726111ce2bd42d9b252512b4b3e1e387744"} Jan 29 08:13:23 crc kubenswrapper[4826]: I0129 
08:13:23.645155 4826 scope.go:117] "RemoveContainer" containerID="8432178450a0b0fe2f28ef9c04f1e2001af69ca06cabb47f0280fb0b546b8f39" Jan 29 08:13:23 crc kubenswrapper[4826]: I0129 08:13:23.644594 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-dd7969d78-84sjq" Jan 29 08:13:23 crc kubenswrapper[4826]: I0129 08:13:23.690471 4826 scope.go:117] "RemoveContainer" containerID="fd8f9f88699fe8a3d0815614a0c58de3ae05b04a24a145a4a23f3b17351d5733" Jan 29 08:13:23 crc kubenswrapper[4826]: I0129 08:13:23.703387 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-dd7969d78-84sjq"] Jan 29 08:13:23 crc kubenswrapper[4826]: I0129 08:13:23.715197 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-dd7969d78-84sjq"] Jan 29 08:13:24 crc kubenswrapper[4826]: E0129 08:13:24.403907 4826 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.173:47318->38.102.83.173:41327: write tcp 38.102.83.173:47318->38.102.83.173:41327: write: connection reset by peer Jan 29 08:13:24 crc kubenswrapper[4826]: I0129 08:13:24.823402 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="068a687f-8397-4d97-a2bd-4bfb2220825b" path="/var/lib/kubelet/pods/068a687f-8397-4d97-a2bd-4bfb2220825b/volumes" Jan 29 08:13:25 crc kubenswrapper[4826]: I0129 08:13:25.652688 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cjxhd"] Jan 29 08:13:25 crc kubenswrapper[4826]: E0129 08:13:25.653401 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="068a687f-8397-4d97-a2bd-4bfb2220825b" containerName="proxy-server" Jan 29 08:13:25 crc kubenswrapper[4826]: I0129 08:13:25.653431 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="068a687f-8397-4d97-a2bd-4bfb2220825b" containerName="proxy-server" Jan 29 08:13:25 crc kubenswrapper[4826]: E0129 08:13:25.653454 4826 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="2f748bb0-1855-4f91-9fb3-b22c39e29eab" containerName="init" Jan 29 08:13:25 crc kubenswrapper[4826]: I0129 08:13:25.653465 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f748bb0-1855-4f91-9fb3-b22c39e29eab" containerName="init" Jan 29 08:13:25 crc kubenswrapper[4826]: E0129 08:13:25.653487 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f748bb0-1855-4f91-9fb3-b22c39e29eab" containerName="dnsmasq-dns" Jan 29 08:13:25 crc kubenswrapper[4826]: I0129 08:13:25.653496 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f748bb0-1855-4f91-9fb3-b22c39e29eab" containerName="dnsmasq-dns" Jan 29 08:13:25 crc kubenswrapper[4826]: E0129 08:13:25.653513 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae" containerName="swift-ring-rebalance" Jan 29 08:13:25 crc kubenswrapper[4826]: I0129 08:13:25.653521 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae" containerName="swift-ring-rebalance" Jan 29 08:13:25 crc kubenswrapper[4826]: E0129 08:13:25.653537 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="068a687f-8397-4d97-a2bd-4bfb2220825b" containerName="proxy-httpd" Jan 29 08:13:25 crc kubenswrapper[4826]: I0129 08:13:25.653544 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="068a687f-8397-4d97-a2bd-4bfb2220825b" containerName="proxy-httpd" Jan 29 08:13:25 crc kubenswrapper[4826]: I0129 08:13:25.653753 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae" containerName="swift-ring-rebalance" Jan 29 08:13:25 crc kubenswrapper[4826]: I0129 08:13:25.653779 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="068a687f-8397-4d97-a2bd-4bfb2220825b" containerName="proxy-server" Jan 29 08:13:25 crc kubenswrapper[4826]: I0129 08:13:25.653795 4826 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="068a687f-8397-4d97-a2bd-4bfb2220825b" containerName="proxy-httpd" Jan 29 08:13:25 crc kubenswrapper[4826]: I0129 08:13:25.653807 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f748bb0-1855-4f91-9fb3-b22c39e29eab" containerName="dnsmasq-dns" Jan 29 08:13:25 crc kubenswrapper[4826]: I0129 08:13:25.655287 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cjxhd" Jan 29 08:13:25 crc kubenswrapper[4826]: I0129 08:13:25.666144 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cjxhd"] Jan 29 08:13:25 crc kubenswrapper[4826]: I0129 08:13:25.714539 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a39943b-48a9-4d27-96b2-a22fd3c29e5f-catalog-content\") pod \"redhat-operators-cjxhd\" (UID: \"5a39943b-48a9-4d27-96b2-a22fd3c29e5f\") " pod="openshift-marketplace/redhat-operators-cjxhd" Jan 29 08:13:25 crc kubenswrapper[4826]: I0129 08:13:25.714647 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a39943b-48a9-4d27-96b2-a22fd3c29e5f-utilities\") pod \"redhat-operators-cjxhd\" (UID: \"5a39943b-48a9-4d27-96b2-a22fd3c29e5f\") " pod="openshift-marketplace/redhat-operators-cjxhd" Jan 29 08:13:25 crc kubenswrapper[4826]: I0129 08:13:25.715149 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj5p9\" (UniqueName: \"kubernetes.io/projected/5a39943b-48a9-4d27-96b2-a22fd3c29e5f-kube-api-access-qj5p9\") pod \"redhat-operators-cjxhd\" (UID: \"5a39943b-48a9-4d27-96b2-a22fd3c29e5f\") " pod="openshift-marketplace/redhat-operators-cjxhd" Jan 29 08:13:25 crc kubenswrapper[4826]: I0129 08:13:25.817480 4826 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-qj5p9\" (UniqueName: \"kubernetes.io/projected/5a39943b-48a9-4d27-96b2-a22fd3c29e5f-kube-api-access-qj5p9\") pod \"redhat-operators-cjxhd\" (UID: \"5a39943b-48a9-4d27-96b2-a22fd3c29e5f\") " pod="openshift-marketplace/redhat-operators-cjxhd" Jan 29 08:13:25 crc kubenswrapper[4826]: I0129 08:13:25.817548 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a39943b-48a9-4d27-96b2-a22fd3c29e5f-catalog-content\") pod \"redhat-operators-cjxhd\" (UID: \"5a39943b-48a9-4d27-96b2-a22fd3c29e5f\") " pod="openshift-marketplace/redhat-operators-cjxhd" Jan 29 08:13:25 crc kubenswrapper[4826]: I0129 08:13:25.817571 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a39943b-48a9-4d27-96b2-a22fd3c29e5f-utilities\") pod \"redhat-operators-cjxhd\" (UID: \"5a39943b-48a9-4d27-96b2-a22fd3c29e5f\") " pod="openshift-marketplace/redhat-operators-cjxhd" Jan 29 08:13:25 crc kubenswrapper[4826]: I0129 08:13:25.818088 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a39943b-48a9-4d27-96b2-a22fd3c29e5f-utilities\") pod \"redhat-operators-cjxhd\" (UID: \"5a39943b-48a9-4d27-96b2-a22fd3c29e5f\") " pod="openshift-marketplace/redhat-operators-cjxhd" Jan 29 08:13:25 crc kubenswrapper[4826]: I0129 08:13:25.818178 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a39943b-48a9-4d27-96b2-a22fd3c29e5f-catalog-content\") pod \"redhat-operators-cjxhd\" (UID: \"5a39943b-48a9-4d27-96b2-a22fd3c29e5f\") " pod="openshift-marketplace/redhat-operators-cjxhd" Jan 29 08:13:25 crc kubenswrapper[4826]: I0129 08:13:25.839784 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj5p9\" (UniqueName: 
\"kubernetes.io/projected/5a39943b-48a9-4d27-96b2-a22fd3c29e5f-kube-api-access-qj5p9\") pod \"redhat-operators-cjxhd\" (UID: \"5a39943b-48a9-4d27-96b2-a22fd3c29e5f\") " pod="openshift-marketplace/redhat-operators-cjxhd" Jan 29 08:13:25 crc kubenswrapper[4826]: I0129 08:13:25.989357 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cjxhd" Jan 29 08:13:26 crc kubenswrapper[4826]: I0129 08:13:26.466111 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cjxhd"] Jan 29 08:13:26 crc kubenswrapper[4826]: I0129 08:13:26.703987 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cjxhd" event={"ID":"5a39943b-48a9-4d27-96b2-a22fd3c29e5f","Type":"ContainerStarted","Data":"dbf3c8d5b2831a9b7498341b24066fce99e156430d18a3d622e2c7a484ef26d5"} Jan 29 08:13:26 crc kubenswrapper[4826]: I0129 08:13:26.706201 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cjxhd" event={"ID":"5a39943b-48a9-4d27-96b2-a22fd3c29e5f","Type":"ContainerStarted","Data":"b876d9b628b23205a79fe57fe7bda46e322604458d03fa5f165e0e4057bd6669"} Jan 29 08:13:26 crc kubenswrapper[4826]: E0129 08:13:26.805151 4826 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a39943b_48a9_4d27_96b2_a22fd3c29e5f.slice/crio-conmon-dbf3c8d5b2831a9b7498341b24066fce99e156430d18a3d622e2c7a484ef26d5.scope\": RecentStats: unable to find data in memory cache]" Jan 29 08:13:27 crc kubenswrapper[4826]: I0129 08:13:27.717258 4826 generic.go:334] "Generic (PLEG): container finished" podID="5a39943b-48a9-4d27-96b2-a22fd3c29e5f" containerID="dbf3c8d5b2831a9b7498341b24066fce99e156430d18a3d622e2c7a484ef26d5" exitCode=0 Jan 29 08:13:27 crc kubenswrapper[4826]: I0129 08:13:27.717452 4826 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cjxhd" event={"ID":"5a39943b-48a9-4d27-96b2-a22fd3c29e5f","Type":"ContainerDied","Data":"dbf3c8d5b2831a9b7498341b24066fce99e156430d18a3d622e2c7a484ef26d5"} Jan 29 08:13:28 crc kubenswrapper[4826]: I0129 08:13:28.244127 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-f8lj8"] Jan 29 08:13:28 crc kubenswrapper[4826]: I0129 08:13:28.245372 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-f8lj8" Jan 29 08:13:28 crc kubenswrapper[4826]: I0129 08:13:28.257983 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-f8lj8"] Jan 29 08:13:28 crc kubenswrapper[4826]: I0129 08:13:28.347386 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-64b9-account-create-update-vlrvd"] Jan 29 08:13:28 crc kubenswrapper[4826]: I0129 08:13:28.348623 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-64b9-account-create-update-vlrvd" Jan 29 08:13:28 crc kubenswrapper[4826]: I0129 08:13:28.351006 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 29 08:13:28 crc kubenswrapper[4826]: I0129 08:13:28.365690 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6msq2\" (UniqueName: \"kubernetes.io/projected/f6b74f71-c8e0-4b71-91f3-84a55f48a8e6-kube-api-access-6msq2\") pod \"cinder-db-create-f8lj8\" (UID: \"f6b74f71-c8e0-4b71-91f3-84a55f48a8e6\") " pod="openstack/cinder-db-create-f8lj8" Jan 29 08:13:28 crc kubenswrapper[4826]: I0129 08:13:28.365732 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6b74f71-c8e0-4b71-91f3-84a55f48a8e6-operator-scripts\") pod \"cinder-db-create-f8lj8\" (UID: 
\"f6b74f71-c8e0-4b71-91f3-84a55f48a8e6\") " pod="openstack/cinder-db-create-f8lj8" Jan 29 08:13:28 crc kubenswrapper[4826]: I0129 08:13:28.381441 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-64b9-account-create-update-vlrvd"] Jan 29 08:13:28 crc kubenswrapper[4826]: I0129 08:13:28.467897 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a4e318f-fe31-41b8-91d3-2b595c03914a-operator-scripts\") pod \"cinder-64b9-account-create-update-vlrvd\" (UID: \"6a4e318f-fe31-41b8-91d3-2b595c03914a\") " pod="openstack/cinder-64b9-account-create-update-vlrvd" Jan 29 08:13:28 crc kubenswrapper[4826]: I0129 08:13:28.468033 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6msq2\" (UniqueName: \"kubernetes.io/projected/f6b74f71-c8e0-4b71-91f3-84a55f48a8e6-kube-api-access-6msq2\") pod \"cinder-db-create-f8lj8\" (UID: \"f6b74f71-c8e0-4b71-91f3-84a55f48a8e6\") " pod="openstack/cinder-db-create-f8lj8" Jan 29 08:13:28 crc kubenswrapper[4826]: I0129 08:13:28.468061 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6b74f71-c8e0-4b71-91f3-84a55f48a8e6-operator-scripts\") pod \"cinder-db-create-f8lj8\" (UID: \"f6b74f71-c8e0-4b71-91f3-84a55f48a8e6\") " pod="openstack/cinder-db-create-f8lj8" Jan 29 08:13:28 crc kubenswrapper[4826]: I0129 08:13:28.468146 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl8k7\" (UniqueName: \"kubernetes.io/projected/6a4e318f-fe31-41b8-91d3-2b595c03914a-kube-api-access-vl8k7\") pod \"cinder-64b9-account-create-update-vlrvd\" (UID: \"6a4e318f-fe31-41b8-91d3-2b595c03914a\") " pod="openstack/cinder-64b9-account-create-update-vlrvd" Jan 29 08:13:28 crc kubenswrapper[4826]: I0129 08:13:28.469280 4826 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6b74f71-c8e0-4b71-91f3-84a55f48a8e6-operator-scripts\") pod \"cinder-db-create-f8lj8\" (UID: \"f6b74f71-c8e0-4b71-91f3-84a55f48a8e6\") " pod="openstack/cinder-db-create-f8lj8" Jan 29 08:13:28 crc kubenswrapper[4826]: I0129 08:13:28.506964 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6msq2\" (UniqueName: \"kubernetes.io/projected/f6b74f71-c8e0-4b71-91f3-84a55f48a8e6-kube-api-access-6msq2\") pod \"cinder-db-create-f8lj8\" (UID: \"f6b74f71-c8e0-4b71-91f3-84a55f48a8e6\") " pod="openstack/cinder-db-create-f8lj8" Jan 29 08:13:28 crc kubenswrapper[4826]: I0129 08:13:28.569197 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl8k7\" (UniqueName: \"kubernetes.io/projected/6a4e318f-fe31-41b8-91d3-2b595c03914a-kube-api-access-vl8k7\") pod \"cinder-64b9-account-create-update-vlrvd\" (UID: \"6a4e318f-fe31-41b8-91d3-2b595c03914a\") " pod="openstack/cinder-64b9-account-create-update-vlrvd" Jan 29 08:13:28 crc kubenswrapper[4826]: I0129 08:13:28.569260 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a4e318f-fe31-41b8-91d3-2b595c03914a-operator-scripts\") pod \"cinder-64b9-account-create-update-vlrvd\" (UID: \"6a4e318f-fe31-41b8-91d3-2b595c03914a\") " pod="openstack/cinder-64b9-account-create-update-vlrvd" Jan 29 08:13:28 crc kubenswrapper[4826]: I0129 08:13:28.570233 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a4e318f-fe31-41b8-91d3-2b595c03914a-operator-scripts\") pod \"cinder-64b9-account-create-update-vlrvd\" (UID: \"6a4e318f-fe31-41b8-91d3-2b595c03914a\") " pod="openstack/cinder-64b9-account-create-update-vlrvd" Jan 29 08:13:28 crc kubenswrapper[4826]: I0129 08:13:28.579645 
4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-f8lj8" Jan 29 08:13:28 crc kubenswrapper[4826]: I0129 08:13:28.585510 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl8k7\" (UniqueName: \"kubernetes.io/projected/6a4e318f-fe31-41b8-91d3-2b595c03914a-kube-api-access-vl8k7\") pod \"cinder-64b9-account-create-update-vlrvd\" (UID: \"6a4e318f-fe31-41b8-91d3-2b595c03914a\") " pod="openstack/cinder-64b9-account-create-update-vlrvd" Jan 29 08:13:28 crc kubenswrapper[4826]: I0129 08:13:28.669521 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-64b9-account-create-update-vlrvd" Jan 29 08:13:29 crc kubenswrapper[4826]: I0129 08:13:29.101191 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-f8lj8"] Jan 29 08:13:29 crc kubenswrapper[4826]: W0129 08:13:29.111627 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6b74f71_c8e0_4b71_91f3_84a55f48a8e6.slice/crio-878bfffa8d24b886e14417b12723a5fbedfb33243854e4d3febae464d7c3ef4d WatchSource:0}: Error finding container 878bfffa8d24b886e14417b12723a5fbedfb33243854e4d3febae464d7c3ef4d: Status 404 returned error can't find the container with id 878bfffa8d24b886e14417b12723a5fbedfb33243854e4d3febae464d7c3ef4d Jan 29 08:13:29 crc kubenswrapper[4826]: I0129 08:13:29.188115 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-64b9-account-create-update-vlrvd"] Jan 29 08:13:29 crc kubenswrapper[4826]: W0129 08:13:29.203448 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a4e318f_fe31_41b8_91d3_2b595c03914a.slice/crio-3fda5f4fd1a39940f1df521a19adc6701edd6b132d436d8698ec9e1e9b8e0808 WatchSource:0}: Error finding container 
3fda5f4fd1a39940f1df521a19adc6701edd6b132d436d8698ec9e1e9b8e0808: Status 404 returned error can't find the container with id 3fda5f4fd1a39940f1df521a19adc6701edd6b132d436d8698ec9e1e9b8e0808 Jan 29 08:13:29 crc kubenswrapper[4826]: I0129 08:13:29.752775 4826 generic.go:334] "Generic (PLEG): container finished" podID="f6b74f71-c8e0-4b71-91f3-84a55f48a8e6" containerID="208de6e5593814423dfcb7997eb95aeb19dc937cb05fed570adf40052d492353" exitCode=0 Jan 29 08:13:29 crc kubenswrapper[4826]: I0129 08:13:29.753348 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-f8lj8" event={"ID":"f6b74f71-c8e0-4b71-91f3-84a55f48a8e6","Type":"ContainerDied","Data":"208de6e5593814423dfcb7997eb95aeb19dc937cb05fed570adf40052d492353"} Jan 29 08:13:29 crc kubenswrapper[4826]: I0129 08:13:29.753408 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-f8lj8" event={"ID":"f6b74f71-c8e0-4b71-91f3-84a55f48a8e6","Type":"ContainerStarted","Data":"878bfffa8d24b886e14417b12723a5fbedfb33243854e4d3febae464d7c3ef4d"} Jan 29 08:13:29 crc kubenswrapper[4826]: I0129 08:13:29.757255 4826 generic.go:334] "Generic (PLEG): container finished" podID="6a4e318f-fe31-41b8-91d3-2b595c03914a" containerID="cbc371368313da977ac4c53783b4d6c96b6f9227f1a38ce4dc3e21176828a3f1" exitCode=0 Jan 29 08:13:29 crc kubenswrapper[4826]: I0129 08:13:29.757348 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-64b9-account-create-update-vlrvd" event={"ID":"6a4e318f-fe31-41b8-91d3-2b595c03914a","Type":"ContainerDied","Data":"cbc371368313da977ac4c53783b4d6c96b6f9227f1a38ce4dc3e21176828a3f1"} Jan 29 08:13:29 crc kubenswrapper[4826]: I0129 08:13:29.757372 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-64b9-account-create-update-vlrvd" event={"ID":"6a4e318f-fe31-41b8-91d3-2b595c03914a","Type":"ContainerStarted","Data":"3fda5f4fd1a39940f1df521a19adc6701edd6b132d436d8698ec9e1e9b8e0808"} Jan 29 08:13:29 crc 
kubenswrapper[4826]: I0129 08:13:29.760915 4826 generic.go:334] "Generic (PLEG): container finished" podID="5a39943b-48a9-4d27-96b2-a22fd3c29e5f" containerID="142ca1786f2190dfe2ac3f2f6fa81375822cdcd8e32ea92635b1af7915b9109d" exitCode=0 Jan 29 08:13:29 crc kubenswrapper[4826]: I0129 08:13:29.760948 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cjxhd" event={"ID":"5a39943b-48a9-4d27-96b2-a22fd3c29e5f","Type":"ContainerDied","Data":"142ca1786f2190dfe2ac3f2f6fa81375822cdcd8e32ea92635b1af7915b9109d"} Jan 29 08:13:30 crc kubenswrapper[4826]: I0129 08:13:30.774948 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cjxhd" event={"ID":"5a39943b-48a9-4d27-96b2-a22fd3c29e5f","Type":"ContainerStarted","Data":"a23b11b5195ecb2c16d1a75f0e0331c18390b977388899c99a5794552e1c33ce"} Jan 29 08:13:30 crc kubenswrapper[4826]: I0129 08:13:30.798934 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cjxhd" podStartSLOduration=3.3666049080000002 podStartE2EDuration="5.798915701s" podCreationTimestamp="2026-01-29 08:13:25 +0000 UTC" firstStartedPulling="2026-01-29 08:13:27.720587433 +0000 UTC m=+5391.582380532" lastFinishedPulling="2026-01-29 08:13:30.152898256 +0000 UTC m=+5394.014691325" observedRunningTime="2026-01-29 08:13:30.79202364 +0000 UTC m=+5394.653816709" watchObservedRunningTime="2026-01-29 08:13:30.798915701 +0000 UTC m=+5394.660708770" Jan 29 08:13:31 crc kubenswrapper[4826]: I0129 08:13:31.219836 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-f8lj8" Jan 29 08:13:31 crc kubenswrapper[4826]: I0129 08:13:31.229902 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-64b9-account-create-update-vlrvd"
Jan 29 08:13:31 crc kubenswrapper[4826]: I0129 08:13:31.317549 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6msq2\" (UniqueName: \"kubernetes.io/projected/f6b74f71-c8e0-4b71-91f3-84a55f48a8e6-kube-api-access-6msq2\") pod \"f6b74f71-c8e0-4b71-91f3-84a55f48a8e6\" (UID: \"f6b74f71-c8e0-4b71-91f3-84a55f48a8e6\") "
Jan 29 08:13:31 crc kubenswrapper[4826]: I0129 08:13:31.317665 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vl8k7\" (UniqueName: \"kubernetes.io/projected/6a4e318f-fe31-41b8-91d3-2b595c03914a-kube-api-access-vl8k7\") pod \"6a4e318f-fe31-41b8-91d3-2b595c03914a\" (UID: \"6a4e318f-fe31-41b8-91d3-2b595c03914a\") "
Jan 29 08:13:31 crc kubenswrapper[4826]: I0129 08:13:31.326628 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6b74f71-c8e0-4b71-91f3-84a55f48a8e6-kube-api-access-6msq2" (OuterVolumeSpecName: "kube-api-access-6msq2") pod "f6b74f71-c8e0-4b71-91f3-84a55f48a8e6" (UID: "f6b74f71-c8e0-4b71-91f3-84a55f48a8e6"). InnerVolumeSpecName "kube-api-access-6msq2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:13:31 crc kubenswrapper[4826]: I0129 08:13:31.328511 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a4e318f-fe31-41b8-91d3-2b595c03914a-kube-api-access-vl8k7" (OuterVolumeSpecName: "kube-api-access-vl8k7") pod "6a4e318f-fe31-41b8-91d3-2b595c03914a" (UID: "6a4e318f-fe31-41b8-91d3-2b595c03914a"). InnerVolumeSpecName "kube-api-access-vl8k7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:13:31 crc kubenswrapper[4826]: I0129 08:13:31.419011 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a4e318f-fe31-41b8-91d3-2b595c03914a-operator-scripts\") pod \"6a4e318f-fe31-41b8-91d3-2b595c03914a\" (UID: \"6a4e318f-fe31-41b8-91d3-2b595c03914a\") "
Jan 29 08:13:31 crc kubenswrapper[4826]: I0129 08:13:31.419127 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6b74f71-c8e0-4b71-91f3-84a55f48a8e6-operator-scripts\") pod \"f6b74f71-c8e0-4b71-91f3-84a55f48a8e6\" (UID: \"f6b74f71-c8e0-4b71-91f3-84a55f48a8e6\") "
Jan 29 08:13:31 crc kubenswrapper[4826]: I0129 08:13:31.419566 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6msq2\" (UniqueName: \"kubernetes.io/projected/f6b74f71-c8e0-4b71-91f3-84a55f48a8e6-kube-api-access-6msq2\") on node \"crc\" DevicePath \"\""
Jan 29 08:13:31 crc kubenswrapper[4826]: I0129 08:13:31.419585 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vl8k7\" (UniqueName: \"kubernetes.io/projected/6a4e318f-fe31-41b8-91d3-2b595c03914a-kube-api-access-vl8k7\") on node \"crc\" DevicePath \"\""
Jan 29 08:13:31 crc kubenswrapper[4826]: I0129 08:13:31.419767 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6b74f71-c8e0-4b71-91f3-84a55f48a8e6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f6b74f71-c8e0-4b71-91f3-84a55f48a8e6" (UID: "f6b74f71-c8e0-4b71-91f3-84a55f48a8e6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 08:13:31 crc kubenswrapper[4826]: I0129 08:13:31.419859 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a4e318f-fe31-41b8-91d3-2b595c03914a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6a4e318f-fe31-41b8-91d3-2b595c03914a" (UID: "6a4e318f-fe31-41b8-91d3-2b595c03914a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 08:13:31 crc kubenswrapper[4826]: I0129 08:13:31.521900 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a4e318f-fe31-41b8-91d3-2b595c03914a-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 08:13:31 crc kubenswrapper[4826]: I0129 08:13:31.522199 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6b74f71-c8e0-4b71-91f3-84a55f48a8e6-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 08:13:31 crc kubenswrapper[4826]: I0129 08:13:31.783982 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-f8lj8" event={"ID":"f6b74f71-c8e0-4b71-91f3-84a55f48a8e6","Type":"ContainerDied","Data":"878bfffa8d24b886e14417b12723a5fbedfb33243854e4d3febae464d7c3ef4d"}
Jan 29 08:13:31 crc kubenswrapper[4826]: I0129 08:13:31.784026 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="878bfffa8d24b886e14417b12723a5fbedfb33243854e4d3febae464d7c3ef4d"
Jan 29 08:13:31 crc kubenswrapper[4826]: I0129 08:13:31.784096 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-f8lj8"
Jan 29 08:13:31 crc kubenswrapper[4826]: I0129 08:13:31.787349 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-64b9-account-create-update-vlrvd" event={"ID":"6a4e318f-fe31-41b8-91d3-2b595c03914a","Type":"ContainerDied","Data":"3fda5f4fd1a39940f1df521a19adc6701edd6b132d436d8698ec9e1e9b8e0808"}
Jan 29 08:13:31 crc kubenswrapper[4826]: I0129 08:13:31.787385 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fda5f4fd1a39940f1df521a19adc6701edd6b132d436d8698ec9e1e9b8e0808"
Jan 29 08:13:31 crc kubenswrapper[4826]: I0129 08:13:31.787415 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-64b9-account-create-update-vlrvd"
Jan 29 08:13:33 crc kubenswrapper[4826]: I0129 08:13:33.496124 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-wjkh5"]
Jan 29 08:13:33 crc kubenswrapper[4826]: E0129 08:13:33.496783 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a4e318f-fe31-41b8-91d3-2b595c03914a" containerName="mariadb-account-create-update"
Jan 29 08:13:33 crc kubenswrapper[4826]: I0129 08:13:33.496799 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a4e318f-fe31-41b8-91d3-2b595c03914a" containerName="mariadb-account-create-update"
Jan 29 08:13:33 crc kubenswrapper[4826]: E0129 08:13:33.496816 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6b74f71-c8e0-4b71-91f3-84a55f48a8e6" containerName="mariadb-database-create"
Jan 29 08:13:33 crc kubenswrapper[4826]: I0129 08:13:33.496825 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6b74f71-c8e0-4b71-91f3-84a55f48a8e6" containerName="mariadb-database-create"
Jan 29 08:13:33 crc kubenswrapper[4826]: I0129 08:13:33.497025 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6b74f71-c8e0-4b71-91f3-84a55f48a8e6" containerName="mariadb-database-create"
Jan 29 08:13:33 crc kubenswrapper[4826]: I0129 08:13:33.497048 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a4e318f-fe31-41b8-91d3-2b595c03914a" containerName="mariadb-account-create-update"
Jan 29 08:13:33 crc kubenswrapper[4826]: I0129 08:13:33.497793 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-wjkh5"
Jan 29 08:13:33 crc kubenswrapper[4826]: I0129 08:13:33.499638 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 29 08:13:33 crc kubenswrapper[4826]: I0129 08:13:33.508492 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-lbrqj"
Jan 29 08:13:33 crc kubenswrapper[4826]: I0129 08:13:33.509046 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-wjkh5"]
Jan 29 08:13:33 crc kubenswrapper[4826]: I0129 08:13:33.537193 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 29 08:13:33 crc kubenswrapper[4826]: I0129 08:13:33.558271 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db1c0417-3f99-43a6-b55c-8639e55cb922-scripts\") pod \"cinder-db-sync-wjkh5\" (UID: \"db1c0417-3f99-43a6-b55c-8639e55cb922\") " pod="openstack/cinder-db-sync-wjkh5"
Jan 29 08:13:33 crc kubenswrapper[4826]: I0129 08:13:33.558417 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s28pf\" (UniqueName: \"kubernetes.io/projected/db1c0417-3f99-43a6-b55c-8639e55cb922-kube-api-access-s28pf\") pod \"cinder-db-sync-wjkh5\" (UID: \"db1c0417-3f99-43a6-b55c-8639e55cb922\") " pod="openstack/cinder-db-sync-wjkh5"
Jan 29 08:13:33 crc kubenswrapper[4826]: I0129 08:13:33.558487 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db1c0417-3f99-43a6-b55c-8639e55cb922-config-data\") pod \"cinder-db-sync-wjkh5\" (UID: \"db1c0417-3f99-43a6-b55c-8639e55cb922\") " pod="openstack/cinder-db-sync-wjkh5"
Jan 29 08:13:33 crc kubenswrapper[4826]: I0129 08:13:33.558557 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db1c0417-3f99-43a6-b55c-8639e55cb922-combined-ca-bundle\") pod \"cinder-db-sync-wjkh5\" (UID: \"db1c0417-3f99-43a6-b55c-8639e55cb922\") " pod="openstack/cinder-db-sync-wjkh5"
Jan 29 08:13:33 crc kubenswrapper[4826]: I0129 08:13:33.558615 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db1c0417-3f99-43a6-b55c-8639e55cb922-etc-machine-id\") pod \"cinder-db-sync-wjkh5\" (UID: \"db1c0417-3f99-43a6-b55c-8639e55cb922\") " pod="openstack/cinder-db-sync-wjkh5"
Jan 29 08:13:33 crc kubenswrapper[4826]: I0129 08:13:33.558649 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/db1c0417-3f99-43a6-b55c-8639e55cb922-db-sync-config-data\") pod \"cinder-db-sync-wjkh5\" (UID: \"db1c0417-3f99-43a6-b55c-8639e55cb922\") " pod="openstack/cinder-db-sync-wjkh5"
Jan 29 08:13:33 crc kubenswrapper[4826]: I0129 08:13:33.660402 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db1c0417-3f99-43a6-b55c-8639e55cb922-combined-ca-bundle\") pod \"cinder-db-sync-wjkh5\" (UID: \"db1c0417-3f99-43a6-b55c-8639e55cb922\") " pod="openstack/cinder-db-sync-wjkh5"
Jan 29 08:13:33 crc kubenswrapper[4826]: I0129 08:13:33.660492 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db1c0417-3f99-43a6-b55c-8639e55cb922-etc-machine-id\") pod \"cinder-db-sync-wjkh5\" (UID: \"db1c0417-3f99-43a6-b55c-8639e55cb922\") " pod="openstack/cinder-db-sync-wjkh5"
Jan 29 08:13:33 crc kubenswrapper[4826]: I0129 08:13:33.660534 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/db1c0417-3f99-43a6-b55c-8639e55cb922-db-sync-config-data\") pod \"cinder-db-sync-wjkh5\" (UID: \"db1c0417-3f99-43a6-b55c-8639e55cb922\") " pod="openstack/cinder-db-sync-wjkh5"
Jan 29 08:13:33 crc kubenswrapper[4826]: I0129 08:13:33.660576 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db1c0417-3f99-43a6-b55c-8639e55cb922-scripts\") pod \"cinder-db-sync-wjkh5\" (UID: \"db1c0417-3f99-43a6-b55c-8639e55cb922\") " pod="openstack/cinder-db-sync-wjkh5"
Jan 29 08:13:33 crc kubenswrapper[4826]: I0129 08:13:33.660611 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s28pf\" (UniqueName: \"kubernetes.io/projected/db1c0417-3f99-43a6-b55c-8639e55cb922-kube-api-access-s28pf\") pod \"cinder-db-sync-wjkh5\" (UID: \"db1c0417-3f99-43a6-b55c-8639e55cb922\") " pod="openstack/cinder-db-sync-wjkh5"
Jan 29 08:13:33 crc kubenswrapper[4826]: I0129 08:13:33.660666 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db1c0417-3f99-43a6-b55c-8639e55cb922-config-data\") pod \"cinder-db-sync-wjkh5\" (UID: \"db1c0417-3f99-43a6-b55c-8639e55cb922\") " pod="openstack/cinder-db-sync-wjkh5"
Jan 29 08:13:33 crc kubenswrapper[4826]: I0129 08:13:33.660696 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db1c0417-3f99-43a6-b55c-8639e55cb922-etc-machine-id\") pod \"cinder-db-sync-wjkh5\" (UID: \"db1c0417-3f99-43a6-b55c-8639e55cb922\") " pod="openstack/cinder-db-sync-wjkh5"
Jan 29 08:13:33 crc kubenswrapper[4826]: I0129 08:13:33.664540 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/db1c0417-3f99-43a6-b55c-8639e55cb922-db-sync-config-data\") pod \"cinder-db-sync-wjkh5\" (UID: \"db1c0417-3f99-43a6-b55c-8639e55cb922\") " pod="openstack/cinder-db-sync-wjkh5"
Jan 29 08:13:33 crc kubenswrapper[4826]: I0129 08:13:33.664761 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db1c0417-3f99-43a6-b55c-8639e55cb922-config-data\") pod \"cinder-db-sync-wjkh5\" (UID: \"db1c0417-3f99-43a6-b55c-8639e55cb922\") " pod="openstack/cinder-db-sync-wjkh5"
Jan 29 08:13:33 crc kubenswrapper[4826]: I0129 08:13:33.666783 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db1c0417-3f99-43a6-b55c-8639e55cb922-scripts\") pod \"cinder-db-sync-wjkh5\" (UID: \"db1c0417-3f99-43a6-b55c-8639e55cb922\") " pod="openstack/cinder-db-sync-wjkh5"
Jan 29 08:13:33 crc kubenswrapper[4826]: I0129 08:13:33.677073 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db1c0417-3f99-43a6-b55c-8639e55cb922-combined-ca-bundle\") pod \"cinder-db-sync-wjkh5\" (UID: \"db1c0417-3f99-43a6-b55c-8639e55cb922\") " pod="openstack/cinder-db-sync-wjkh5"
Jan 29 08:13:33 crc kubenswrapper[4826]: I0129 08:13:33.681894 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s28pf\" (UniqueName: \"kubernetes.io/projected/db1c0417-3f99-43a6-b55c-8639e55cb922-kube-api-access-s28pf\") pod \"cinder-db-sync-wjkh5\" (UID: \"db1c0417-3f99-43a6-b55c-8639e55cb922\") " pod="openstack/cinder-db-sync-wjkh5"
Jan 29 08:13:33 crc kubenswrapper[4826]: I0129 08:13:33.849007 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-wjkh5"
Jan 29 08:13:34 crc kubenswrapper[4826]: I0129 08:13:34.305965 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-wjkh5"]
Jan 29 08:13:34 crc kubenswrapper[4826]: W0129 08:13:34.307784 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb1c0417_3f99_43a6_b55c_8639e55cb922.slice/crio-e0b8a39acea1b23edbd150e641e93def9c264ecad03e15e3d45b63fa5dd19893 WatchSource:0}: Error finding container e0b8a39acea1b23edbd150e641e93def9c264ecad03e15e3d45b63fa5dd19893: Status 404 returned error can't find the container with id e0b8a39acea1b23edbd150e641e93def9c264ecad03e15e3d45b63fa5dd19893
Jan 29 08:13:34 crc kubenswrapper[4826]: I0129 08:13:34.809445 4826 scope.go:117] "RemoveContainer" containerID="491d2214652be539c5a02abd82d2f7f7b125c3f1a64568b35d69e37bd575365a"
Jan 29 08:13:34 crc kubenswrapper[4826]: E0129 08:13:34.809728 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 08:13:34 crc kubenswrapper[4826]: I0129 08:13:34.819484 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wjkh5" event={"ID":"db1c0417-3f99-43a6-b55c-8639e55cb922","Type":"ContainerStarted","Data":"e0b8a39acea1b23edbd150e641e93def9c264ecad03e15e3d45b63fa5dd19893"}
Jan 29 08:13:35 crc kubenswrapper[4826]: I0129 08:13:35.990212 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cjxhd"
Jan 29 08:13:35 crc kubenswrapper[4826]: I0129 08:13:35.990269 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cjxhd"
Jan 29 08:13:37 crc kubenswrapper[4826]: I0129 08:13:37.040907 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cjxhd" podUID="5a39943b-48a9-4d27-96b2-a22fd3c29e5f" containerName="registry-server" probeResult="failure" output=<
Jan 29 08:13:37 crc kubenswrapper[4826]: timeout: failed to connect service ":50051" within 1s
Jan 29 08:13:37 crc kubenswrapper[4826]: >
Jan 29 08:13:46 crc kubenswrapper[4826]: I0129 08:13:46.057626 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cjxhd"
Jan 29 08:13:46 crc kubenswrapper[4826]: I0129 08:13:46.148128 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cjxhd"
Jan 29 08:13:46 crc kubenswrapper[4826]: I0129 08:13:46.293965 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cjxhd"]
Jan 29 08:13:46 crc kubenswrapper[4826]: I0129 08:13:46.813809 4826 scope.go:117] "RemoveContainer" containerID="491d2214652be539c5a02abd82d2f7f7b125c3f1a64568b35d69e37bd575365a"
Jan 29 08:13:46 crc kubenswrapper[4826]: E0129 08:13:46.814146 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 08:13:47 crc kubenswrapper[4826]: I0129 08:13:47.954159 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cjxhd" podUID="5a39943b-48a9-4d27-96b2-a22fd3c29e5f" containerName="registry-server" containerID="cri-o://a23b11b5195ecb2c16d1a75f0e0331c18390b977388899c99a5794552e1c33ce" gracePeriod=2
Jan 29 08:13:48 crc kubenswrapper[4826]: I0129 08:13:48.965401 4826 generic.go:334] "Generic (PLEG): container finished" podID="5a39943b-48a9-4d27-96b2-a22fd3c29e5f" containerID="a23b11b5195ecb2c16d1a75f0e0331c18390b977388899c99a5794552e1c33ce" exitCode=0
Jan 29 08:13:48 crc kubenswrapper[4826]: I0129 08:13:48.965659 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cjxhd" event={"ID":"5a39943b-48a9-4d27-96b2-a22fd3c29e5f","Type":"ContainerDied","Data":"a23b11b5195ecb2c16d1a75f0e0331c18390b977388899c99a5794552e1c33ce"}
Jan 29 08:13:54 crc kubenswrapper[4826]: E0129 08:13:54.655370 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-cinder-api:b130bed8e4e0ff029dd29fba80441dc6"
Jan 29 08:13:54 crc kubenswrapper[4826]: E0129 08:13:54.655630 4826 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-cinder-api:b130bed8e4e0ff029dd29fba80441dc6"
Jan 29 08:13:54 crc kubenswrapper[4826]: E0129 08:13:54.655748 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-cinder-api:b130bed8e4e0ff029dd29fba80441dc6,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s28pf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-wjkh5_openstack(db1c0417-3f99-43a6-b55c-8639e55cb922): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 29 08:13:54 crc kubenswrapper[4826]: E0129 08:13:54.656917 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-wjkh5" podUID="db1c0417-3f99-43a6-b55c-8639e55cb922"
Jan 29 08:13:55 crc kubenswrapper[4826]: I0129 08:13:55.020758 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cjxhd" event={"ID":"5a39943b-48a9-4d27-96b2-a22fd3c29e5f","Type":"ContainerDied","Data":"b876d9b628b23205a79fe57fe7bda46e322604458d03fa5f165e0e4057bd6669"}
Jan 29 08:13:55 crc kubenswrapper[4826]: I0129 08:13:55.021245 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b876d9b628b23205a79fe57fe7bda46e322604458d03fa5f165e0e4057bd6669"
Jan 29 08:13:55 crc kubenswrapper[4826]: E0129 08:13:55.022478 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-cinder-api:b130bed8e4e0ff029dd29fba80441dc6\\\"\"" pod="openstack/cinder-db-sync-wjkh5" podUID="db1c0417-3f99-43a6-b55c-8639e55cb922"
Jan 29 08:13:55 crc kubenswrapper[4826]: I0129 08:13:55.034564 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cjxhd"
Jan 29 08:13:55 crc kubenswrapper[4826]: I0129 08:13:55.100684 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qj5p9\" (UniqueName: \"kubernetes.io/projected/5a39943b-48a9-4d27-96b2-a22fd3c29e5f-kube-api-access-qj5p9\") pod \"5a39943b-48a9-4d27-96b2-a22fd3c29e5f\" (UID: \"5a39943b-48a9-4d27-96b2-a22fd3c29e5f\") "
Jan 29 08:13:55 crc kubenswrapper[4826]: I0129 08:13:55.100808 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a39943b-48a9-4d27-96b2-a22fd3c29e5f-catalog-content\") pod \"5a39943b-48a9-4d27-96b2-a22fd3c29e5f\" (UID: \"5a39943b-48a9-4d27-96b2-a22fd3c29e5f\") "
Jan 29 08:13:55 crc kubenswrapper[4826]: I0129 08:13:55.101079 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a39943b-48a9-4d27-96b2-a22fd3c29e5f-utilities\") pod \"5a39943b-48a9-4d27-96b2-a22fd3c29e5f\" (UID: \"5a39943b-48a9-4d27-96b2-a22fd3c29e5f\") "
Jan 29 08:13:55 crc kubenswrapper[4826]: I0129 08:13:55.102732 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a39943b-48a9-4d27-96b2-a22fd3c29e5f-utilities" (OuterVolumeSpecName: "utilities") pod "5a39943b-48a9-4d27-96b2-a22fd3c29e5f" (UID: "5a39943b-48a9-4d27-96b2-a22fd3c29e5f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 08:13:55 crc kubenswrapper[4826]: I0129 08:13:55.120582 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a39943b-48a9-4d27-96b2-a22fd3c29e5f-kube-api-access-qj5p9" (OuterVolumeSpecName: "kube-api-access-qj5p9") pod "5a39943b-48a9-4d27-96b2-a22fd3c29e5f" (UID: "5a39943b-48a9-4d27-96b2-a22fd3c29e5f"). InnerVolumeSpecName "kube-api-access-qj5p9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:13:55 crc kubenswrapper[4826]: I0129 08:13:55.203984 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qj5p9\" (UniqueName: \"kubernetes.io/projected/5a39943b-48a9-4d27-96b2-a22fd3c29e5f-kube-api-access-qj5p9\") on node \"crc\" DevicePath \"\""
Jan 29 08:13:55 crc kubenswrapper[4826]: I0129 08:13:55.204027 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a39943b-48a9-4d27-96b2-a22fd3c29e5f-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 08:13:55 crc kubenswrapper[4826]: I0129 08:13:55.217700 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a39943b-48a9-4d27-96b2-a22fd3c29e5f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a39943b-48a9-4d27-96b2-a22fd3c29e5f" (UID: "5a39943b-48a9-4d27-96b2-a22fd3c29e5f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 08:13:55 crc kubenswrapper[4826]: I0129 08:13:55.306409 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a39943b-48a9-4d27-96b2-a22fd3c29e5f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 08:13:56 crc kubenswrapper[4826]: I0129 08:13:56.030825 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cjxhd"
Jan 29 08:13:56 crc kubenswrapper[4826]: I0129 08:13:56.083638 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cjxhd"]
Jan 29 08:13:56 crc kubenswrapper[4826]: I0129 08:13:56.091448 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cjxhd"]
Jan 29 08:13:56 crc kubenswrapper[4826]: I0129 08:13:56.826024 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a39943b-48a9-4d27-96b2-a22fd3c29e5f" path="/var/lib/kubelet/pods/5a39943b-48a9-4d27-96b2-a22fd3c29e5f/volumes"
Jan 29 08:14:01 crc kubenswrapper[4826]: I0129 08:14:01.808851 4826 scope.go:117] "RemoveContainer" containerID="491d2214652be539c5a02abd82d2f7f7b125c3f1a64568b35d69e37bd575365a"
Jan 29 08:14:01 crc kubenswrapper[4826]: E0129 08:14:01.810088 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 08:14:10 crc kubenswrapper[4826]: I0129 08:14:10.169783 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wjkh5" event={"ID":"db1c0417-3f99-43a6-b55c-8639e55cb922","Type":"ContainerStarted","Data":"ee46d04244e736678a3b64e10294d4119a68321ffba82764eee8ea1a70adcea2"}
Jan 29 08:14:10 crc kubenswrapper[4826]: I0129 08:14:10.194232 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-wjkh5" podStartSLOduration=2.304285667 podStartE2EDuration="37.194212185s" podCreationTimestamp="2026-01-29 08:13:33 +0000 UTC" firstStartedPulling="2026-01-29 08:13:34.310671848 +0000 UTC m=+5398.172464917" lastFinishedPulling="2026-01-29 08:14:09.200598356 +0000 UTC m=+5433.062391435" observedRunningTime="2026-01-29 08:14:10.192247843 +0000 UTC m=+5434.054040942" watchObservedRunningTime="2026-01-29 08:14:10.194212185 +0000 UTC m=+5434.056005264"
Jan 29 08:14:12 crc kubenswrapper[4826]: I0129 08:14:12.193788 4826 generic.go:334] "Generic (PLEG): container finished" podID="db1c0417-3f99-43a6-b55c-8639e55cb922" containerID="ee46d04244e736678a3b64e10294d4119a68321ffba82764eee8ea1a70adcea2" exitCode=0
Jan 29 08:14:12 crc kubenswrapper[4826]: I0129 08:14:12.193878 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wjkh5" event={"ID":"db1c0417-3f99-43a6-b55c-8639e55cb922","Type":"ContainerDied","Data":"ee46d04244e736678a3b64e10294d4119a68321ffba82764eee8ea1a70adcea2"}
Jan 29 08:14:13 crc kubenswrapper[4826]: I0129 08:14:13.621743 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-wjkh5"
Jan 29 08:14:13 crc kubenswrapper[4826]: I0129 08:14:13.695108 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db1c0417-3f99-43a6-b55c-8639e55cb922-etc-machine-id\") pod \"db1c0417-3f99-43a6-b55c-8639e55cb922\" (UID: \"db1c0417-3f99-43a6-b55c-8639e55cb922\") "
Jan 29 08:14:13 crc kubenswrapper[4826]: I0129 08:14:13.695213 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db1c0417-3f99-43a6-b55c-8639e55cb922-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "db1c0417-3f99-43a6-b55c-8639e55cb922" (UID: "db1c0417-3f99-43a6-b55c-8639e55cb922"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 08:14:13 crc kubenswrapper[4826]: I0129 08:14:13.695238 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db1c0417-3f99-43a6-b55c-8639e55cb922-combined-ca-bundle\") pod \"db1c0417-3f99-43a6-b55c-8639e55cb922\" (UID: \"db1c0417-3f99-43a6-b55c-8639e55cb922\") "
Jan 29 08:14:13 crc kubenswrapper[4826]: I0129 08:14:13.695335 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db1c0417-3f99-43a6-b55c-8639e55cb922-scripts\") pod \"db1c0417-3f99-43a6-b55c-8639e55cb922\" (UID: \"db1c0417-3f99-43a6-b55c-8639e55cb922\") "
Jan 29 08:14:13 crc kubenswrapper[4826]: I0129 08:14:13.695374 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s28pf\" (UniqueName: \"kubernetes.io/projected/db1c0417-3f99-43a6-b55c-8639e55cb922-kube-api-access-s28pf\") pod \"db1c0417-3f99-43a6-b55c-8639e55cb922\" (UID: \"db1c0417-3f99-43a6-b55c-8639e55cb922\") "
Jan 29 08:14:13 crc kubenswrapper[4826]: I0129 08:14:13.695428 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db1c0417-3f99-43a6-b55c-8639e55cb922-config-data\") pod \"db1c0417-3f99-43a6-b55c-8639e55cb922\" (UID: \"db1c0417-3f99-43a6-b55c-8639e55cb922\") "
Jan 29 08:14:13 crc kubenswrapper[4826]: I0129 08:14:13.695529 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/db1c0417-3f99-43a6-b55c-8639e55cb922-db-sync-config-data\") pod \"db1c0417-3f99-43a6-b55c-8639e55cb922\" (UID: \"db1c0417-3f99-43a6-b55c-8639e55cb922\") "
Jan 29 08:14:13 crc kubenswrapper[4826]: I0129 08:14:13.696040 4826 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db1c0417-3f99-43a6-b55c-8639e55cb922-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 29 08:14:13 crc kubenswrapper[4826]: I0129 08:14:13.701164 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db1c0417-3f99-43a6-b55c-8639e55cb922-scripts" (OuterVolumeSpecName: "scripts") pod "db1c0417-3f99-43a6-b55c-8639e55cb922" (UID: "db1c0417-3f99-43a6-b55c-8639e55cb922"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:14:13 crc kubenswrapper[4826]: I0129 08:14:13.701199 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db1c0417-3f99-43a6-b55c-8639e55cb922-kube-api-access-s28pf" (OuterVolumeSpecName: "kube-api-access-s28pf") pod "db1c0417-3f99-43a6-b55c-8639e55cb922" (UID: "db1c0417-3f99-43a6-b55c-8639e55cb922"). InnerVolumeSpecName "kube-api-access-s28pf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:14:13 crc kubenswrapper[4826]: I0129 08:14:13.703024 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db1c0417-3f99-43a6-b55c-8639e55cb922-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "db1c0417-3f99-43a6-b55c-8639e55cb922" (UID: "db1c0417-3f99-43a6-b55c-8639e55cb922"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:14:13 crc kubenswrapper[4826]: I0129 08:14:13.722370 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db1c0417-3f99-43a6-b55c-8639e55cb922-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db1c0417-3f99-43a6-b55c-8639e55cb922" (UID: "db1c0417-3f99-43a6-b55c-8639e55cb922"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:14:13 crc kubenswrapper[4826]: I0129 08:14:13.763521 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db1c0417-3f99-43a6-b55c-8639e55cb922-config-data" (OuterVolumeSpecName: "config-data") pod "db1c0417-3f99-43a6-b55c-8639e55cb922" (UID: "db1c0417-3f99-43a6-b55c-8639e55cb922"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:14:13 crc kubenswrapper[4826]: I0129 08:14:13.797835 4826 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/db1c0417-3f99-43a6-b55c-8639e55cb922-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 08:14:13 crc kubenswrapper[4826]: I0129 08:14:13.797886 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db1c0417-3f99-43a6-b55c-8639e55cb922-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 08:14:13 crc kubenswrapper[4826]: I0129 08:14:13.797897 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db1c0417-3f99-43a6-b55c-8639e55cb922-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 08:14:13 crc kubenswrapper[4826]: I0129 08:14:13.797909 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s28pf\" (UniqueName: \"kubernetes.io/projected/db1c0417-3f99-43a6-b55c-8639e55cb922-kube-api-access-s28pf\") on node \"crc\" DevicePath \"\""
Jan 29 08:14:13 crc kubenswrapper[4826]: I0129 08:14:13.797921 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db1c0417-3f99-43a6-b55c-8639e55cb922-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 08:14:13 crc kubenswrapper[4826]: I0129 08:14:13.809029 4826 scope.go:117] "RemoveContainer" containerID="491d2214652be539c5a02abd82d2f7f7b125c3f1a64568b35d69e37bd575365a"
Jan 29 08:14:13 crc kubenswrapper[4826]: E0129 08:14:13.809352 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:14:14 crc kubenswrapper[4826]: I0129 08:14:14.220550 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wjkh5" event={"ID":"db1c0417-3f99-43a6-b55c-8639e55cb922","Type":"ContainerDied","Data":"e0b8a39acea1b23edbd150e641e93def9c264ecad03e15e3d45b63fa5dd19893"} Jan 29 08:14:14 crc kubenswrapper[4826]: I0129 08:14:14.220609 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0b8a39acea1b23edbd150e641e93def9c264ecad03e15e3d45b63fa5dd19893" Jan 29 08:14:14 crc kubenswrapper[4826]: I0129 08:14:14.220623 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-wjkh5" Jan 29 08:14:14 crc kubenswrapper[4826]: I0129 08:14:14.583227 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79fc76598f-tc976"] Jan 29 08:14:14 crc kubenswrapper[4826]: E0129 08:14:14.584200 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a39943b-48a9-4d27-96b2-a22fd3c29e5f" containerName="registry-server" Jan 29 08:14:14 crc kubenswrapper[4826]: I0129 08:14:14.584279 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a39943b-48a9-4d27-96b2-a22fd3c29e5f" containerName="registry-server" Jan 29 08:14:14 crc kubenswrapper[4826]: E0129 08:14:14.584367 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a39943b-48a9-4d27-96b2-a22fd3c29e5f" containerName="extract-utilities" Jan 29 08:14:14 crc kubenswrapper[4826]: I0129 08:14:14.584436 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a39943b-48a9-4d27-96b2-a22fd3c29e5f" containerName="extract-utilities" Jan 29 08:14:14 crc kubenswrapper[4826]: E0129 08:14:14.584506 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db1c0417-3f99-43a6-b55c-8639e55cb922" containerName="cinder-db-sync" Jan 29 08:14:14 crc kubenswrapper[4826]: I0129 08:14:14.584558 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="db1c0417-3f99-43a6-b55c-8639e55cb922" containerName="cinder-db-sync" Jan 29 08:14:14 crc kubenswrapper[4826]: E0129 08:14:14.584622 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a39943b-48a9-4d27-96b2-a22fd3c29e5f" containerName="extract-content" Jan 29 08:14:14 crc kubenswrapper[4826]: I0129 08:14:14.584680 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a39943b-48a9-4d27-96b2-a22fd3c29e5f" containerName="extract-content" Jan 29 08:14:14 crc kubenswrapper[4826]: I0129 08:14:14.584886 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a39943b-48a9-4d27-96b2-a22fd3c29e5f" 
containerName="registry-server" Jan 29 08:14:14 crc kubenswrapper[4826]: I0129 08:14:14.584957 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="db1c0417-3f99-43a6-b55c-8639e55cb922" containerName="cinder-db-sync" Jan 29 08:14:14 crc kubenswrapper[4826]: I0129 08:14:14.585852 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79fc76598f-tc976" Jan 29 08:14:14 crc kubenswrapper[4826]: I0129 08:14:14.615277 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa3ada56-74ec-4f31-a22a-84e52dfba998-ovsdbserver-sb\") pod \"dnsmasq-dns-79fc76598f-tc976\" (UID: \"aa3ada56-74ec-4f31-a22a-84e52dfba998\") " pod="openstack/dnsmasq-dns-79fc76598f-tc976" Jan 29 08:14:14 crc kubenswrapper[4826]: I0129 08:14:14.615434 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa3ada56-74ec-4f31-a22a-84e52dfba998-config\") pod \"dnsmasq-dns-79fc76598f-tc976\" (UID: \"aa3ada56-74ec-4f31-a22a-84e52dfba998\") " pod="openstack/dnsmasq-dns-79fc76598f-tc976" Jan 29 08:14:14 crc kubenswrapper[4826]: I0129 08:14:14.615458 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pclct\" (UniqueName: \"kubernetes.io/projected/aa3ada56-74ec-4f31-a22a-84e52dfba998-kube-api-access-pclct\") pod \"dnsmasq-dns-79fc76598f-tc976\" (UID: \"aa3ada56-74ec-4f31-a22a-84e52dfba998\") " pod="openstack/dnsmasq-dns-79fc76598f-tc976" Jan 29 08:14:14 crc kubenswrapper[4826]: I0129 08:14:14.615474 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa3ada56-74ec-4f31-a22a-84e52dfba998-ovsdbserver-nb\") pod \"dnsmasq-dns-79fc76598f-tc976\" (UID: 
\"aa3ada56-74ec-4f31-a22a-84e52dfba998\") " pod="openstack/dnsmasq-dns-79fc76598f-tc976" Jan 29 08:14:14 crc kubenswrapper[4826]: I0129 08:14:14.615513 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa3ada56-74ec-4f31-a22a-84e52dfba998-dns-svc\") pod \"dnsmasq-dns-79fc76598f-tc976\" (UID: \"aa3ada56-74ec-4f31-a22a-84e52dfba998\") " pod="openstack/dnsmasq-dns-79fc76598f-tc976" Jan 29 08:14:14 crc kubenswrapper[4826]: I0129 08:14:14.717488 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa3ada56-74ec-4f31-a22a-84e52dfba998-config\") pod \"dnsmasq-dns-79fc76598f-tc976\" (UID: \"aa3ada56-74ec-4f31-a22a-84e52dfba998\") " pod="openstack/dnsmasq-dns-79fc76598f-tc976" Jan 29 08:14:14 crc kubenswrapper[4826]: I0129 08:14:14.717582 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa3ada56-74ec-4f31-a22a-84e52dfba998-ovsdbserver-nb\") pod \"dnsmasq-dns-79fc76598f-tc976\" (UID: \"aa3ada56-74ec-4f31-a22a-84e52dfba998\") " pod="openstack/dnsmasq-dns-79fc76598f-tc976" Jan 29 08:14:14 crc kubenswrapper[4826]: I0129 08:14:14.717608 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pclct\" (UniqueName: \"kubernetes.io/projected/aa3ada56-74ec-4f31-a22a-84e52dfba998-kube-api-access-pclct\") pod \"dnsmasq-dns-79fc76598f-tc976\" (UID: \"aa3ada56-74ec-4f31-a22a-84e52dfba998\") " pod="openstack/dnsmasq-dns-79fc76598f-tc976" Jan 29 08:14:14 crc kubenswrapper[4826]: I0129 08:14:14.717682 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa3ada56-74ec-4f31-a22a-84e52dfba998-dns-svc\") pod \"dnsmasq-dns-79fc76598f-tc976\" (UID: \"aa3ada56-74ec-4f31-a22a-84e52dfba998\") " 
pod="openstack/dnsmasq-dns-79fc76598f-tc976" Jan 29 08:14:14 crc kubenswrapper[4826]: I0129 08:14:14.717741 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa3ada56-74ec-4f31-a22a-84e52dfba998-ovsdbserver-sb\") pod \"dnsmasq-dns-79fc76598f-tc976\" (UID: \"aa3ada56-74ec-4f31-a22a-84e52dfba998\") " pod="openstack/dnsmasq-dns-79fc76598f-tc976" Jan 29 08:14:14 crc kubenswrapper[4826]: I0129 08:14:14.718881 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa3ada56-74ec-4f31-a22a-84e52dfba998-ovsdbserver-sb\") pod \"dnsmasq-dns-79fc76598f-tc976\" (UID: \"aa3ada56-74ec-4f31-a22a-84e52dfba998\") " pod="openstack/dnsmasq-dns-79fc76598f-tc976" Jan 29 08:14:14 crc kubenswrapper[4826]: I0129 08:14:14.719104 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa3ada56-74ec-4f31-a22a-84e52dfba998-dns-svc\") pod \"dnsmasq-dns-79fc76598f-tc976\" (UID: \"aa3ada56-74ec-4f31-a22a-84e52dfba998\") " pod="openstack/dnsmasq-dns-79fc76598f-tc976" Jan 29 08:14:14 crc kubenswrapper[4826]: I0129 08:14:14.719174 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa3ada56-74ec-4f31-a22a-84e52dfba998-config\") pod \"dnsmasq-dns-79fc76598f-tc976\" (UID: \"aa3ada56-74ec-4f31-a22a-84e52dfba998\") " pod="openstack/dnsmasq-dns-79fc76598f-tc976" Jan 29 08:14:14 crc kubenswrapper[4826]: I0129 08:14:14.719335 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa3ada56-74ec-4f31-a22a-84e52dfba998-ovsdbserver-nb\") pod \"dnsmasq-dns-79fc76598f-tc976\" (UID: \"aa3ada56-74ec-4f31-a22a-84e52dfba998\") " pod="openstack/dnsmasq-dns-79fc76598f-tc976" Jan 29 08:14:14 crc kubenswrapper[4826]: I0129 08:14:14.879025 4826 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pclct\" (UniqueName: \"kubernetes.io/projected/aa3ada56-74ec-4f31-a22a-84e52dfba998-kube-api-access-pclct\") pod \"dnsmasq-dns-79fc76598f-tc976\" (UID: \"aa3ada56-74ec-4f31-a22a-84e52dfba998\") " pod="openstack/dnsmasq-dns-79fc76598f-tc976" Jan 29 08:14:14 crc kubenswrapper[4826]: I0129 08:14:14.880138 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79fc76598f-tc976"] Jan 29 08:14:14 crc kubenswrapper[4826]: I0129 08:14:14.903742 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79fc76598f-tc976" Jan 29 08:14:15 crc kubenswrapper[4826]: I0129 08:14:15.026392 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 29 08:14:15 crc kubenswrapper[4826]: I0129 08:14:15.029243 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 08:14:15 crc kubenswrapper[4826]: I0129 08:14:15.038233 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 29 08:14:15 crc kubenswrapper[4826]: I0129 08:14:15.038649 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-lbrqj" Jan 29 08:14:15 crc kubenswrapper[4826]: I0129 08:14:15.038996 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 29 08:14:15 crc kubenswrapper[4826]: I0129 08:14:15.039336 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 29 08:14:15 crc kubenswrapper[4826]: I0129 08:14:15.053777 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 08:14:15 crc kubenswrapper[4826]: I0129 08:14:15.127535 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/f31d1c3a-39f3-441c-86bf-50f8687631e6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f31d1c3a-39f3-441c-86bf-50f8687631e6\") " pod="openstack/cinder-api-0" Jan 29 08:14:15 crc kubenswrapper[4826]: I0129 08:14:15.127575 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f31d1c3a-39f3-441c-86bf-50f8687631e6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f31d1c3a-39f3-441c-86bf-50f8687631e6\") " pod="openstack/cinder-api-0" Jan 29 08:14:15 crc kubenswrapper[4826]: I0129 08:14:15.127597 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f31d1c3a-39f3-441c-86bf-50f8687631e6-scripts\") pod \"cinder-api-0\" (UID: \"f31d1c3a-39f3-441c-86bf-50f8687631e6\") " pod="openstack/cinder-api-0" Jan 29 08:14:15 crc kubenswrapper[4826]: I0129 08:14:15.127634 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f31d1c3a-39f3-441c-86bf-50f8687631e6-config-data-custom\") pod \"cinder-api-0\" (UID: \"f31d1c3a-39f3-441c-86bf-50f8687631e6\") " pod="openstack/cinder-api-0" Jan 29 08:14:15 crc kubenswrapper[4826]: I0129 08:14:15.127659 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f31d1c3a-39f3-441c-86bf-50f8687631e6-logs\") pod \"cinder-api-0\" (UID: \"f31d1c3a-39f3-441c-86bf-50f8687631e6\") " pod="openstack/cinder-api-0" Jan 29 08:14:15 crc kubenswrapper[4826]: I0129 08:14:15.127701 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48n5z\" (UniqueName: \"kubernetes.io/projected/f31d1c3a-39f3-441c-86bf-50f8687631e6-kube-api-access-48n5z\") pod \"cinder-api-0\" (UID: 
\"f31d1c3a-39f3-441c-86bf-50f8687631e6\") " pod="openstack/cinder-api-0" Jan 29 08:14:15 crc kubenswrapper[4826]: I0129 08:14:15.127724 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f31d1c3a-39f3-441c-86bf-50f8687631e6-config-data\") pod \"cinder-api-0\" (UID: \"f31d1c3a-39f3-441c-86bf-50f8687631e6\") " pod="openstack/cinder-api-0" Jan 29 08:14:15 crc kubenswrapper[4826]: I0129 08:14:15.230219 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f31d1c3a-39f3-441c-86bf-50f8687631e6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f31d1c3a-39f3-441c-86bf-50f8687631e6\") " pod="openstack/cinder-api-0" Jan 29 08:14:15 crc kubenswrapper[4826]: I0129 08:14:15.230277 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f31d1c3a-39f3-441c-86bf-50f8687631e6-scripts\") pod \"cinder-api-0\" (UID: \"f31d1c3a-39f3-441c-86bf-50f8687631e6\") " pod="openstack/cinder-api-0" Jan 29 08:14:15 crc kubenswrapper[4826]: I0129 08:14:15.230355 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f31d1c3a-39f3-441c-86bf-50f8687631e6-config-data-custom\") pod \"cinder-api-0\" (UID: \"f31d1c3a-39f3-441c-86bf-50f8687631e6\") " pod="openstack/cinder-api-0" Jan 29 08:14:15 crc kubenswrapper[4826]: I0129 08:14:15.230409 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f31d1c3a-39f3-441c-86bf-50f8687631e6-logs\") pod \"cinder-api-0\" (UID: \"f31d1c3a-39f3-441c-86bf-50f8687631e6\") " pod="openstack/cinder-api-0" Jan 29 08:14:15 crc kubenswrapper[4826]: I0129 08:14:15.230469 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-48n5z\" (UniqueName: \"kubernetes.io/projected/f31d1c3a-39f3-441c-86bf-50f8687631e6-kube-api-access-48n5z\") pod \"cinder-api-0\" (UID: \"f31d1c3a-39f3-441c-86bf-50f8687631e6\") " pod="openstack/cinder-api-0" Jan 29 08:14:15 crc kubenswrapper[4826]: I0129 08:14:15.230504 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f31d1c3a-39f3-441c-86bf-50f8687631e6-config-data\") pod \"cinder-api-0\" (UID: \"f31d1c3a-39f3-441c-86bf-50f8687631e6\") " pod="openstack/cinder-api-0" Jan 29 08:14:15 crc kubenswrapper[4826]: I0129 08:14:15.230595 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f31d1c3a-39f3-441c-86bf-50f8687631e6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f31d1c3a-39f3-441c-86bf-50f8687631e6\") " pod="openstack/cinder-api-0" Jan 29 08:14:15 crc kubenswrapper[4826]: I0129 08:14:15.230690 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f31d1c3a-39f3-441c-86bf-50f8687631e6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f31d1c3a-39f3-441c-86bf-50f8687631e6\") " pod="openstack/cinder-api-0" Jan 29 08:14:15 crc kubenswrapper[4826]: I0129 08:14:15.231518 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f31d1c3a-39f3-441c-86bf-50f8687631e6-logs\") pod \"cinder-api-0\" (UID: \"f31d1c3a-39f3-441c-86bf-50f8687631e6\") " pod="openstack/cinder-api-0" Jan 29 08:14:15 crc kubenswrapper[4826]: I0129 08:14:15.236016 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f31d1c3a-39f3-441c-86bf-50f8687631e6-scripts\") pod \"cinder-api-0\" (UID: \"f31d1c3a-39f3-441c-86bf-50f8687631e6\") " pod="openstack/cinder-api-0" Jan 29 08:14:15 crc kubenswrapper[4826]: I0129 
08:14:15.236072 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f31d1c3a-39f3-441c-86bf-50f8687631e6-config-data-custom\") pod \"cinder-api-0\" (UID: \"f31d1c3a-39f3-441c-86bf-50f8687631e6\") " pod="openstack/cinder-api-0" Jan 29 08:14:15 crc kubenswrapper[4826]: I0129 08:14:15.240021 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f31d1c3a-39f3-441c-86bf-50f8687631e6-config-data\") pod \"cinder-api-0\" (UID: \"f31d1c3a-39f3-441c-86bf-50f8687631e6\") " pod="openstack/cinder-api-0" Jan 29 08:14:15 crc kubenswrapper[4826]: I0129 08:14:15.247757 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f31d1c3a-39f3-441c-86bf-50f8687631e6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f31d1c3a-39f3-441c-86bf-50f8687631e6\") " pod="openstack/cinder-api-0" Jan 29 08:14:15 crc kubenswrapper[4826]: I0129 08:14:15.248636 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48n5z\" (UniqueName: \"kubernetes.io/projected/f31d1c3a-39f3-441c-86bf-50f8687631e6-kube-api-access-48n5z\") pod \"cinder-api-0\" (UID: \"f31d1c3a-39f3-441c-86bf-50f8687631e6\") " pod="openstack/cinder-api-0" Jan 29 08:14:15 crc kubenswrapper[4826]: I0129 08:14:15.362039 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 08:14:15 crc kubenswrapper[4826]: I0129 08:14:15.439202 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79fc76598f-tc976"] Jan 29 08:14:15 crc kubenswrapper[4826]: I0129 08:14:15.782378 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 08:14:15 crc kubenswrapper[4826]: W0129 08:14:15.787482 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf31d1c3a_39f3_441c_86bf_50f8687631e6.slice/crio-97d0a5f5e487f37723a55d21f39d568978d802789962716716a8d9129d36729c WatchSource:0}: Error finding container 97d0a5f5e487f37723a55d21f39d568978d802789962716716a8d9129d36729c: Status 404 returned error can't find the container with id 97d0a5f5e487f37723a55d21f39d568978d802789962716716a8d9129d36729c Jan 29 08:14:16 crc kubenswrapper[4826]: I0129 08:14:16.240378 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f31d1c3a-39f3-441c-86bf-50f8687631e6","Type":"ContainerStarted","Data":"97d0a5f5e487f37723a55d21f39d568978d802789962716716a8d9129d36729c"} Jan 29 08:14:16 crc kubenswrapper[4826]: I0129 08:14:16.242370 4826 generic.go:334] "Generic (PLEG): container finished" podID="aa3ada56-74ec-4f31-a22a-84e52dfba998" containerID="dee3da880cc7e612ddc9f3f2ec64e9580bdd7f0497de820cbc1ab243bb8e4805" exitCode=0 Jan 29 08:14:16 crc kubenswrapper[4826]: I0129 08:14:16.242412 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79fc76598f-tc976" event={"ID":"aa3ada56-74ec-4f31-a22a-84e52dfba998","Type":"ContainerDied","Data":"dee3da880cc7e612ddc9f3f2ec64e9580bdd7f0497de820cbc1ab243bb8e4805"} Jan 29 08:14:16 crc kubenswrapper[4826]: I0129 08:14:16.242439 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79fc76598f-tc976" 
event={"ID":"aa3ada56-74ec-4f31-a22a-84e52dfba998","Type":"ContainerStarted","Data":"38d1b907656a0484bdbb4e4f5aff7e1af05685fc813f4bee0ded1e19f9a5c5ef"} Jan 29 08:14:17 crc kubenswrapper[4826]: I0129 08:14:17.236902 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 08:14:17 crc kubenswrapper[4826]: I0129 08:14:17.252431 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79fc76598f-tc976" event={"ID":"aa3ada56-74ec-4f31-a22a-84e52dfba998","Type":"ContainerStarted","Data":"a62fc4d2af9fa05b4974e54dc32f0df8d5cd505c9ab779712b08e01036bf033f"} Jan 29 08:14:17 crc kubenswrapper[4826]: I0129 08:14:17.253508 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79fc76598f-tc976" Jan 29 08:14:17 crc kubenswrapper[4826]: I0129 08:14:17.255384 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f31d1c3a-39f3-441c-86bf-50f8687631e6","Type":"ContainerStarted","Data":"5e23c08a6460d0b646c832d732b2b50ef39869061a7cab9cc75b9bc998fe1fc1"} Jan 29 08:14:17 crc kubenswrapper[4826]: I0129 08:14:17.255411 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f31d1c3a-39f3-441c-86bf-50f8687631e6","Type":"ContainerStarted","Data":"69b85d6a3119d0ab016eb97197b527f88c22c6509e979028d94d50f77ebee97b"} Jan 29 08:14:17 crc kubenswrapper[4826]: I0129 08:14:17.255539 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 29 08:14:17 crc kubenswrapper[4826]: I0129 08:14:17.274810 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79fc76598f-tc976" podStartSLOduration=3.27479046 podStartE2EDuration="3.27479046s" podCreationTimestamp="2026-01-29 08:14:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 
08:14:17.273426115 +0000 UTC m=+5441.135219194" watchObservedRunningTime="2026-01-29 08:14:17.27479046 +0000 UTC m=+5441.136583529" Jan 29 08:14:17 crc kubenswrapper[4826]: I0129 08:14:17.304833 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.304811489 podStartE2EDuration="3.304811489s" podCreationTimestamp="2026-01-29 08:14:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:14:17.292567898 +0000 UTC m=+5441.154360967" watchObservedRunningTime="2026-01-29 08:14:17.304811489 +0000 UTC m=+5441.166604558" Jan 29 08:14:18 crc kubenswrapper[4826]: I0129 08:14:18.265332 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="f31d1c3a-39f3-441c-86bf-50f8687631e6" containerName="cinder-api-log" containerID="cri-o://69b85d6a3119d0ab016eb97197b527f88c22c6509e979028d94d50f77ebee97b" gracePeriod=30 Jan 29 08:14:18 crc kubenswrapper[4826]: I0129 08:14:18.265387 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="f31d1c3a-39f3-441c-86bf-50f8687631e6" containerName="cinder-api" containerID="cri-o://5e23c08a6460d0b646c832d732b2b50ef39869061a7cab9cc75b9bc998fe1fc1" gracePeriod=30 Jan 29 08:14:18 crc kubenswrapper[4826]: I0129 08:14:18.812744 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 08:14:18 crc kubenswrapper[4826]: I0129 08:14:18.995783 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f31d1c3a-39f3-441c-86bf-50f8687631e6-config-data-custom\") pod \"f31d1c3a-39f3-441c-86bf-50f8687631e6\" (UID: \"f31d1c3a-39f3-441c-86bf-50f8687631e6\") " Jan 29 08:14:18 crc kubenswrapper[4826]: I0129 08:14:18.995848 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f31d1c3a-39f3-441c-86bf-50f8687631e6-combined-ca-bundle\") pod \"f31d1c3a-39f3-441c-86bf-50f8687631e6\" (UID: \"f31d1c3a-39f3-441c-86bf-50f8687631e6\") " Jan 29 08:14:18 crc kubenswrapper[4826]: I0129 08:14:18.995894 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f31d1c3a-39f3-441c-86bf-50f8687631e6-scripts\") pod \"f31d1c3a-39f3-441c-86bf-50f8687631e6\" (UID: \"f31d1c3a-39f3-441c-86bf-50f8687631e6\") " Jan 29 08:14:18 crc kubenswrapper[4826]: I0129 08:14:18.995948 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f31d1c3a-39f3-441c-86bf-50f8687631e6-config-data\") pod \"f31d1c3a-39f3-441c-86bf-50f8687631e6\" (UID: \"f31d1c3a-39f3-441c-86bf-50f8687631e6\") " Jan 29 08:14:18 crc kubenswrapper[4826]: I0129 08:14:18.996001 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48n5z\" (UniqueName: \"kubernetes.io/projected/f31d1c3a-39f3-441c-86bf-50f8687631e6-kube-api-access-48n5z\") pod \"f31d1c3a-39f3-441c-86bf-50f8687631e6\" (UID: \"f31d1c3a-39f3-441c-86bf-50f8687631e6\") " Jan 29 08:14:18 crc kubenswrapper[4826]: I0129 08:14:18.996050 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/f31d1c3a-39f3-441c-86bf-50f8687631e6-etc-machine-id\") pod \"f31d1c3a-39f3-441c-86bf-50f8687631e6\" (UID: \"f31d1c3a-39f3-441c-86bf-50f8687631e6\") " Jan 29 08:14:18 crc kubenswrapper[4826]: I0129 08:14:18.996122 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f31d1c3a-39f3-441c-86bf-50f8687631e6-logs\") pod \"f31d1c3a-39f3-441c-86bf-50f8687631e6\" (UID: \"f31d1c3a-39f3-441c-86bf-50f8687631e6\") " Jan 29 08:14:18 crc kubenswrapper[4826]: I0129 08:14:18.996334 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f31d1c3a-39f3-441c-86bf-50f8687631e6-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f31d1c3a-39f3-441c-86bf-50f8687631e6" (UID: "f31d1c3a-39f3-441c-86bf-50f8687631e6"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:14:18 crc kubenswrapper[4826]: I0129 08:14:18.996680 4826 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f31d1c3a-39f3-441c-86bf-50f8687631e6-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 08:14:18 crc kubenswrapper[4826]: I0129 08:14:18.996765 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f31d1c3a-39f3-441c-86bf-50f8687631e6-logs" (OuterVolumeSpecName: "logs") pod "f31d1c3a-39f3-441c-86bf-50f8687631e6" (UID: "f31d1c3a-39f3-441c-86bf-50f8687631e6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.001799 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f31d1c3a-39f3-441c-86bf-50f8687631e6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f31d1c3a-39f3-441c-86bf-50f8687631e6" (UID: "f31d1c3a-39f3-441c-86bf-50f8687631e6"). 
InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.001886 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f31d1c3a-39f3-441c-86bf-50f8687631e6-scripts" (OuterVolumeSpecName: "scripts") pod "f31d1c3a-39f3-441c-86bf-50f8687631e6" (UID: "f31d1c3a-39f3-441c-86bf-50f8687631e6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.009641 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f31d1c3a-39f3-441c-86bf-50f8687631e6-kube-api-access-48n5z" (OuterVolumeSpecName: "kube-api-access-48n5z") pod "f31d1c3a-39f3-441c-86bf-50f8687631e6" (UID: "f31d1c3a-39f3-441c-86bf-50f8687631e6"). InnerVolumeSpecName "kube-api-access-48n5z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.031637 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f31d1c3a-39f3-441c-86bf-50f8687631e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f31d1c3a-39f3-441c-86bf-50f8687631e6" (UID: "f31d1c3a-39f3-441c-86bf-50f8687631e6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.068154 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f31d1c3a-39f3-441c-86bf-50f8687631e6-config-data" (OuterVolumeSpecName: "config-data") pod "f31d1c3a-39f3-441c-86bf-50f8687631e6" (UID: "f31d1c3a-39f3-441c-86bf-50f8687631e6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.098617 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f31d1c3a-39f3-441c-86bf-50f8687631e6-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.098647 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48n5z\" (UniqueName: \"kubernetes.io/projected/f31d1c3a-39f3-441c-86bf-50f8687631e6-kube-api-access-48n5z\") on node \"crc\" DevicePath \"\""
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.098659 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f31d1c3a-39f3-441c-86bf-50f8687631e6-logs\") on node \"crc\" DevicePath \"\""
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.098669 4826 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f31d1c3a-39f3-441c-86bf-50f8687631e6-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.098677 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f31d1c3a-39f3-441c-86bf-50f8687631e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.098684 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f31d1c3a-39f3-441c-86bf-50f8687631e6-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.273194 4826 generic.go:334] "Generic (PLEG): container finished" podID="f31d1c3a-39f3-441c-86bf-50f8687631e6" containerID="5e23c08a6460d0b646c832d732b2b50ef39869061a7cab9cc75b9bc998fe1fc1" exitCode=0
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.273482 4826 generic.go:334] "Generic (PLEG): container finished" podID="f31d1c3a-39f3-441c-86bf-50f8687631e6" containerID="69b85d6a3119d0ab016eb97197b527f88c22c6509e979028d94d50f77ebee97b" exitCode=143
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.273266 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.273286 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f31d1c3a-39f3-441c-86bf-50f8687631e6","Type":"ContainerDied","Data":"5e23c08a6460d0b646c832d732b2b50ef39869061a7cab9cc75b9bc998fe1fc1"}
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.273570 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f31d1c3a-39f3-441c-86bf-50f8687631e6","Type":"ContainerDied","Data":"69b85d6a3119d0ab016eb97197b527f88c22c6509e979028d94d50f77ebee97b"}
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.273590 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f31d1c3a-39f3-441c-86bf-50f8687631e6","Type":"ContainerDied","Data":"97d0a5f5e487f37723a55d21f39d568978d802789962716716a8d9129d36729c"}
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.273613 4826 scope.go:117] "RemoveContainer" containerID="5e23c08a6460d0b646c832d732b2b50ef39869061a7cab9cc75b9bc998fe1fc1"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.298219 4826 scope.go:117] "RemoveContainer" containerID="69b85d6a3119d0ab016eb97197b527f88c22c6509e979028d94d50f77ebee97b"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.321924 4826 scope.go:117] "RemoveContainer" containerID="5e23c08a6460d0b646c832d732b2b50ef39869061a7cab9cc75b9bc998fe1fc1"
Jan 29 08:14:19 crc kubenswrapper[4826]: E0129 08:14:19.322336 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e23c08a6460d0b646c832d732b2b50ef39869061a7cab9cc75b9bc998fe1fc1\": container with ID starting with 5e23c08a6460d0b646c832d732b2b50ef39869061a7cab9cc75b9bc998fe1fc1 not found: ID does not exist" containerID="5e23c08a6460d0b646c832d732b2b50ef39869061a7cab9cc75b9bc998fe1fc1"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.322370 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e23c08a6460d0b646c832d732b2b50ef39869061a7cab9cc75b9bc998fe1fc1"} err="failed to get container status \"5e23c08a6460d0b646c832d732b2b50ef39869061a7cab9cc75b9bc998fe1fc1\": rpc error: code = NotFound desc = could not find container \"5e23c08a6460d0b646c832d732b2b50ef39869061a7cab9cc75b9bc998fe1fc1\": container with ID starting with 5e23c08a6460d0b646c832d732b2b50ef39869061a7cab9cc75b9bc998fe1fc1 not found: ID does not exist"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.322387 4826 scope.go:117] "RemoveContainer" containerID="69b85d6a3119d0ab016eb97197b527f88c22c6509e979028d94d50f77ebee97b"
Jan 29 08:14:19 crc kubenswrapper[4826]: E0129 08:14:19.322600 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69b85d6a3119d0ab016eb97197b527f88c22c6509e979028d94d50f77ebee97b\": container with ID starting with 69b85d6a3119d0ab016eb97197b527f88c22c6509e979028d94d50f77ebee97b not found: ID does not exist" containerID="69b85d6a3119d0ab016eb97197b527f88c22c6509e979028d94d50f77ebee97b"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.322617 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69b85d6a3119d0ab016eb97197b527f88c22c6509e979028d94d50f77ebee97b"} err="failed to get container status \"69b85d6a3119d0ab016eb97197b527f88c22c6509e979028d94d50f77ebee97b\": rpc error: code = NotFound desc = could not find container \"69b85d6a3119d0ab016eb97197b527f88c22c6509e979028d94d50f77ebee97b\": container with ID starting with 69b85d6a3119d0ab016eb97197b527f88c22c6509e979028d94d50f77ebee97b not found: ID does not exist"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.322629 4826 scope.go:117] "RemoveContainer" containerID="5e23c08a6460d0b646c832d732b2b50ef39869061a7cab9cc75b9bc998fe1fc1"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.322840 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e23c08a6460d0b646c832d732b2b50ef39869061a7cab9cc75b9bc998fe1fc1"} err="failed to get container status \"5e23c08a6460d0b646c832d732b2b50ef39869061a7cab9cc75b9bc998fe1fc1\": rpc error: code = NotFound desc = could not find container \"5e23c08a6460d0b646c832d732b2b50ef39869061a7cab9cc75b9bc998fe1fc1\": container with ID starting with 5e23c08a6460d0b646c832d732b2b50ef39869061a7cab9cc75b9bc998fe1fc1 not found: ID does not exist"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.322854 4826 scope.go:117] "RemoveContainer" containerID="69b85d6a3119d0ab016eb97197b527f88c22c6509e979028d94d50f77ebee97b"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.323230 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69b85d6a3119d0ab016eb97197b527f88c22c6509e979028d94d50f77ebee97b"} err="failed to get container status \"69b85d6a3119d0ab016eb97197b527f88c22c6509e979028d94d50f77ebee97b\": rpc error: code = NotFound desc = could not find container \"69b85d6a3119d0ab016eb97197b527f88c22c6509e979028d94d50f77ebee97b\": container with ID starting with 69b85d6a3119d0ab016eb97197b527f88c22c6509e979028d94d50f77ebee97b not found: ID does not exist"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.332457 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.344715 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.370349 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Jan 29 08:14:19 crc kubenswrapper[4826]: E0129 08:14:19.373166 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f31d1c3a-39f3-441c-86bf-50f8687631e6" containerName="cinder-api-log"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.373217 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f31d1c3a-39f3-441c-86bf-50f8687631e6" containerName="cinder-api-log"
Jan 29 08:14:19 crc kubenswrapper[4826]: E0129 08:14:19.373254 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f31d1c3a-39f3-441c-86bf-50f8687631e6" containerName="cinder-api"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.373275 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f31d1c3a-39f3-441c-86bf-50f8687631e6" containerName="cinder-api"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.373725 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f31d1c3a-39f3-441c-86bf-50f8687631e6" containerName="cinder-api-log"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.373766 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f31d1c3a-39f3-441c-86bf-50f8687631e6" containerName="cinder-api"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.375950 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.379044 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.379363 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.379568 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.379731 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-lbrqj"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.383775 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.383882 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.395559 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.506383 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/355ef105-82c3-484b-8a1e-b54630b6e9a3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"355ef105-82c3-484b-8a1e-b54630b6e9a3\") " pod="openstack/cinder-api-0"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.506448 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/355ef105-82c3-484b-8a1e-b54630b6e9a3-config-data\") pod \"cinder-api-0\" (UID: \"355ef105-82c3-484b-8a1e-b54630b6e9a3\") " pod="openstack/cinder-api-0"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.506481 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/355ef105-82c3-484b-8a1e-b54630b6e9a3-public-tls-certs\") pod \"cinder-api-0\" (UID: \"355ef105-82c3-484b-8a1e-b54630b6e9a3\") " pod="openstack/cinder-api-0"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.506661 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/355ef105-82c3-484b-8a1e-b54630b6e9a3-scripts\") pod \"cinder-api-0\" (UID: \"355ef105-82c3-484b-8a1e-b54630b6e9a3\") " pod="openstack/cinder-api-0"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.506816 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/355ef105-82c3-484b-8a1e-b54630b6e9a3-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"355ef105-82c3-484b-8a1e-b54630b6e9a3\") " pod="openstack/cinder-api-0"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.506903 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftxnv\" (UniqueName: \"kubernetes.io/projected/355ef105-82c3-484b-8a1e-b54630b6e9a3-kube-api-access-ftxnv\") pod \"cinder-api-0\" (UID: \"355ef105-82c3-484b-8a1e-b54630b6e9a3\") " pod="openstack/cinder-api-0"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.506937 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/355ef105-82c3-484b-8a1e-b54630b6e9a3-logs\") pod \"cinder-api-0\" (UID: \"355ef105-82c3-484b-8a1e-b54630b6e9a3\") " pod="openstack/cinder-api-0"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.506989 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/355ef105-82c3-484b-8a1e-b54630b6e9a3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"355ef105-82c3-484b-8a1e-b54630b6e9a3\") " pod="openstack/cinder-api-0"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.507042 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/355ef105-82c3-484b-8a1e-b54630b6e9a3-config-data-custom\") pod \"cinder-api-0\" (UID: \"355ef105-82c3-484b-8a1e-b54630b6e9a3\") " pod="openstack/cinder-api-0"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.608949 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/355ef105-82c3-484b-8a1e-b54630b6e9a3-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"355ef105-82c3-484b-8a1e-b54630b6e9a3\") " pod="openstack/cinder-api-0"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.609073 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftxnv\" (UniqueName: \"kubernetes.io/projected/355ef105-82c3-484b-8a1e-b54630b6e9a3-kube-api-access-ftxnv\") pod \"cinder-api-0\" (UID: \"355ef105-82c3-484b-8a1e-b54630b6e9a3\") " pod="openstack/cinder-api-0"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.609112 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/355ef105-82c3-484b-8a1e-b54630b6e9a3-logs\") pod \"cinder-api-0\" (UID: \"355ef105-82c3-484b-8a1e-b54630b6e9a3\") " pod="openstack/cinder-api-0"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.609163 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/355ef105-82c3-484b-8a1e-b54630b6e9a3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"355ef105-82c3-484b-8a1e-b54630b6e9a3\") " pod="openstack/cinder-api-0"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.609223 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/355ef105-82c3-484b-8a1e-b54630b6e9a3-config-data-custom\") pod \"cinder-api-0\" (UID: \"355ef105-82c3-484b-8a1e-b54630b6e9a3\") " pod="openstack/cinder-api-0"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.609362 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/355ef105-82c3-484b-8a1e-b54630b6e9a3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"355ef105-82c3-484b-8a1e-b54630b6e9a3\") " pod="openstack/cinder-api-0"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.609417 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/355ef105-82c3-484b-8a1e-b54630b6e9a3-config-data\") pod \"cinder-api-0\" (UID: \"355ef105-82c3-484b-8a1e-b54630b6e9a3\") " pod="openstack/cinder-api-0"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.609469 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/355ef105-82c3-484b-8a1e-b54630b6e9a3-public-tls-certs\") pod \"cinder-api-0\" (UID: \"355ef105-82c3-484b-8a1e-b54630b6e9a3\") " pod="openstack/cinder-api-0"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.609578 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/355ef105-82c3-484b-8a1e-b54630b6e9a3-scripts\") pod \"cinder-api-0\" (UID: \"355ef105-82c3-484b-8a1e-b54630b6e9a3\") " pod="openstack/cinder-api-0"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.609887 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/355ef105-82c3-484b-8a1e-b54630b6e9a3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"355ef105-82c3-484b-8a1e-b54630b6e9a3\") " pod="openstack/cinder-api-0"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.610841 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/355ef105-82c3-484b-8a1e-b54630b6e9a3-logs\") pod \"cinder-api-0\" (UID: \"355ef105-82c3-484b-8a1e-b54630b6e9a3\") " pod="openstack/cinder-api-0"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.615617 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/355ef105-82c3-484b-8a1e-b54630b6e9a3-scripts\") pod \"cinder-api-0\" (UID: \"355ef105-82c3-484b-8a1e-b54630b6e9a3\") " pod="openstack/cinder-api-0"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.616083 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/355ef105-82c3-484b-8a1e-b54630b6e9a3-config-data-custom\") pod \"cinder-api-0\" (UID: \"355ef105-82c3-484b-8a1e-b54630b6e9a3\") " pod="openstack/cinder-api-0"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.618116 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/355ef105-82c3-484b-8a1e-b54630b6e9a3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"355ef105-82c3-484b-8a1e-b54630b6e9a3\") " pod="openstack/cinder-api-0"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.618563 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/355ef105-82c3-484b-8a1e-b54630b6e9a3-public-tls-certs\") pod \"cinder-api-0\" (UID: \"355ef105-82c3-484b-8a1e-b54630b6e9a3\") " pod="openstack/cinder-api-0"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.624080 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/355ef105-82c3-484b-8a1e-b54630b6e9a3-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"355ef105-82c3-484b-8a1e-b54630b6e9a3\") " pod="openstack/cinder-api-0"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.624490 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/355ef105-82c3-484b-8a1e-b54630b6e9a3-config-data\") pod \"cinder-api-0\" (UID: \"355ef105-82c3-484b-8a1e-b54630b6e9a3\") " pod="openstack/cinder-api-0"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.639779 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftxnv\" (UniqueName: \"kubernetes.io/projected/355ef105-82c3-484b-8a1e-b54630b6e9a3-kube-api-access-ftxnv\") pod \"cinder-api-0\" (UID: \"355ef105-82c3-484b-8a1e-b54630b6e9a3\") " pod="openstack/cinder-api-0"
Jan 29 08:14:19 crc kubenswrapper[4826]: I0129 08:14:19.746022 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 29 08:14:20 crc kubenswrapper[4826]: I0129 08:14:20.095884 4826 scope.go:117] "RemoveContainer" containerID="5fbcb597f025be2341218b893491ab3d466e5dfaa0e751c7ab941414bc19b8e4"
Jan 29 08:14:20 crc kubenswrapper[4826]: I0129 08:14:20.117676 4826 scope.go:117] "RemoveContainer" containerID="0502a74beb4fb58419f4c87f0c60c4a997163bf3aed17009285ddadcfeb7955d"
Jan 29 08:14:20 crc kubenswrapper[4826]: I0129 08:14:20.233744 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 29 08:14:20 crc kubenswrapper[4826]: W0129 08:14:20.245683 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod355ef105_82c3_484b_8a1e_b54630b6e9a3.slice/crio-b278b8f46138dadac72bba37b73b4a0dac34fc3b56e2dc8a703fee26808a926d WatchSource:0}: Error finding container b278b8f46138dadac72bba37b73b4a0dac34fc3b56e2dc8a703fee26808a926d: Status 404 returned error can't find the container with id b278b8f46138dadac72bba37b73b4a0dac34fc3b56e2dc8a703fee26808a926d
Jan 29 08:14:20 crc kubenswrapper[4826]: I0129 08:14:20.289324 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"355ef105-82c3-484b-8a1e-b54630b6e9a3","Type":"ContainerStarted","Data":"b278b8f46138dadac72bba37b73b4a0dac34fc3b56e2dc8a703fee26808a926d"}
Jan 29 08:14:20 crc kubenswrapper[4826]: I0129 08:14:20.820764 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f31d1c3a-39f3-441c-86bf-50f8687631e6" path="/var/lib/kubelet/pods/f31d1c3a-39f3-441c-86bf-50f8687631e6/volumes"
Jan 29 08:14:21 crc kubenswrapper[4826]: I0129 08:14:21.299855 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"355ef105-82c3-484b-8a1e-b54630b6e9a3","Type":"ContainerStarted","Data":"9b8769dca0e07aeb539264927940fe357540208a471f545bd519403eefbac321"}
Jan 29 08:14:22 crc kubenswrapper[4826]: I0129 08:14:22.312112 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"355ef105-82c3-484b-8a1e-b54630b6e9a3","Type":"ContainerStarted","Data":"53733a27456760851d6241253747ff6a5530bfb46d374eb81171edfb2004aead"}
Jan 29 08:14:22 crc kubenswrapper[4826]: I0129 08:14:22.312651 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Jan 29 08:14:22 crc kubenswrapper[4826]: I0129 08:14:22.360313 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.360261799 podStartE2EDuration="3.360261799s" podCreationTimestamp="2026-01-29 08:14:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:14:22.336066593 +0000 UTC m=+5446.197859692" watchObservedRunningTime="2026-01-29 08:14:22.360261799 +0000 UTC m=+5446.222054878"
Jan 29 08:14:24 crc kubenswrapper[4826]: I0129 08:14:24.906710 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79fc76598f-tc976"
Jan 29 08:14:24 crc kubenswrapper[4826]: I0129 08:14:24.987036 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57cdc85b77-pp9cd"]
Jan 29 08:14:24 crc kubenswrapper[4826]: I0129 08:14:24.987470 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57cdc85b77-pp9cd" podUID="acb1638f-8ad4-476a-a018-261e585bd72a" containerName="dnsmasq-dns" containerID="cri-o://7ec6c8ec9b273742035c567d9a6ebb7fee02685c7f909041c18c30c4db780e1f" gracePeriod=10
Jan 29 08:14:25 crc kubenswrapper[4826]: I0129 08:14:25.346575 4826 generic.go:334] "Generic (PLEG): container finished" podID="acb1638f-8ad4-476a-a018-261e585bd72a" containerID="7ec6c8ec9b273742035c567d9a6ebb7fee02685c7f909041c18c30c4db780e1f" exitCode=0
Jan 29 08:14:25 crc kubenswrapper[4826]: I0129 08:14:25.346630 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57cdc85b77-pp9cd" event={"ID":"acb1638f-8ad4-476a-a018-261e585bd72a","Type":"ContainerDied","Data":"7ec6c8ec9b273742035c567d9a6ebb7fee02685c7f909041c18c30c4db780e1f"}
Jan 29 08:14:25 crc kubenswrapper[4826]: I0129 08:14:25.462835 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57cdc85b77-pp9cd"
Jan 29 08:14:25 crc kubenswrapper[4826]: I0129 08:14:25.639139 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/acb1638f-8ad4-476a-a018-261e585bd72a-dns-svc\") pod \"acb1638f-8ad4-476a-a018-261e585bd72a\" (UID: \"acb1638f-8ad4-476a-a018-261e585bd72a\") "
Jan 29 08:14:25 crc kubenswrapper[4826]: I0129 08:14:25.639200 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/acb1638f-8ad4-476a-a018-261e585bd72a-ovsdbserver-sb\") pod \"acb1638f-8ad4-476a-a018-261e585bd72a\" (UID: \"acb1638f-8ad4-476a-a018-261e585bd72a\") "
Jan 29 08:14:25 crc kubenswrapper[4826]: I0129 08:14:25.639269 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acb1638f-8ad4-476a-a018-261e585bd72a-config\") pod \"acb1638f-8ad4-476a-a018-261e585bd72a\" (UID: \"acb1638f-8ad4-476a-a018-261e585bd72a\") "
Jan 29 08:14:25 crc kubenswrapper[4826]: I0129 08:14:25.639504 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/acb1638f-8ad4-476a-a018-261e585bd72a-ovsdbserver-nb\") pod \"acb1638f-8ad4-476a-a018-261e585bd72a\" (UID: \"acb1638f-8ad4-476a-a018-261e585bd72a\") "
Jan 29 08:14:25 crc kubenswrapper[4826]: I0129 08:14:25.639571 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkc6m\" (UniqueName: \"kubernetes.io/projected/acb1638f-8ad4-476a-a018-261e585bd72a-kube-api-access-wkc6m\") pod \"acb1638f-8ad4-476a-a018-261e585bd72a\" (UID: \"acb1638f-8ad4-476a-a018-261e585bd72a\") "
Jan 29 08:14:25 crc kubenswrapper[4826]: I0129 08:14:25.649843 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acb1638f-8ad4-476a-a018-261e585bd72a-kube-api-access-wkc6m" (OuterVolumeSpecName: "kube-api-access-wkc6m") pod "acb1638f-8ad4-476a-a018-261e585bd72a" (UID: "acb1638f-8ad4-476a-a018-261e585bd72a"). InnerVolumeSpecName "kube-api-access-wkc6m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:14:25 crc kubenswrapper[4826]: I0129 08:14:25.685528 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acb1638f-8ad4-476a-a018-261e585bd72a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "acb1638f-8ad4-476a-a018-261e585bd72a" (UID: "acb1638f-8ad4-476a-a018-261e585bd72a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 08:14:25 crc kubenswrapper[4826]: I0129 08:14:25.689867 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acb1638f-8ad4-476a-a018-261e585bd72a-config" (OuterVolumeSpecName: "config") pod "acb1638f-8ad4-476a-a018-261e585bd72a" (UID: "acb1638f-8ad4-476a-a018-261e585bd72a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 08:14:25 crc kubenswrapper[4826]: I0129 08:14:25.711869 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acb1638f-8ad4-476a-a018-261e585bd72a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "acb1638f-8ad4-476a-a018-261e585bd72a" (UID: "acb1638f-8ad4-476a-a018-261e585bd72a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 08:14:25 crc kubenswrapper[4826]: I0129 08:14:25.719288 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acb1638f-8ad4-476a-a018-261e585bd72a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "acb1638f-8ad4-476a-a018-261e585bd72a" (UID: "acb1638f-8ad4-476a-a018-261e585bd72a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 08:14:25 crc kubenswrapper[4826]: I0129 08:14:25.741836 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/acb1638f-8ad4-476a-a018-261e585bd72a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 29 08:14:25 crc kubenswrapper[4826]: I0129 08:14:25.741865 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkc6m\" (UniqueName: \"kubernetes.io/projected/acb1638f-8ad4-476a-a018-261e585bd72a-kube-api-access-wkc6m\") on node \"crc\" DevicePath \"\""
Jan 29 08:14:25 crc kubenswrapper[4826]: I0129 08:14:25.741877 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/acb1638f-8ad4-476a-a018-261e585bd72a-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 29 08:14:25 crc kubenswrapper[4826]: I0129 08:14:25.741885 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/acb1638f-8ad4-476a-a018-261e585bd72a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 29 08:14:25 crc kubenswrapper[4826]: I0129 08:14:25.741893 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acb1638f-8ad4-476a-a018-261e585bd72a-config\") on node \"crc\" DevicePath \"\""
Jan 29 08:14:25 crc kubenswrapper[4826]: I0129 08:14:25.809016 4826 scope.go:117] "RemoveContainer" containerID="491d2214652be539c5a02abd82d2f7f7b125c3f1a64568b35d69e37bd575365a"
Jan 29 08:14:25 crc kubenswrapper[4826]: E0129 08:14:25.809614 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 08:14:26 crc kubenswrapper[4826]: I0129 08:14:26.362832 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57cdc85b77-pp9cd" event={"ID":"acb1638f-8ad4-476a-a018-261e585bd72a","Type":"ContainerDied","Data":"3f5262e360c800d9083bce5b04284c911ac7a30c40831a00121e7734df884999"}
Jan 29 08:14:26 crc kubenswrapper[4826]: I0129 08:14:26.362895 4826 scope.go:117] "RemoveContainer" containerID="7ec6c8ec9b273742035c567d9a6ebb7fee02685c7f909041c18c30c4db780e1f"
Jan 29 08:14:26 crc kubenswrapper[4826]: I0129 08:14:26.363516 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57cdc85b77-pp9cd"
Jan 29 08:14:26 crc kubenswrapper[4826]: I0129 08:14:26.399285 4826 scope.go:117] "RemoveContainer" containerID="f1c82d2694c98ac8d45099501e27dfdc29d7aa79a6a1dc8040699300e4c60a72"
Jan 29 08:14:26 crc kubenswrapper[4826]: I0129 08:14:26.411785 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57cdc85b77-pp9cd"]
Jan 29 08:14:26 crc kubenswrapper[4826]: I0129 08:14:26.425737 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57cdc85b77-pp9cd"]
Jan 29 08:14:26 crc kubenswrapper[4826]: I0129 08:14:26.829491 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acb1638f-8ad4-476a-a018-261e585bd72a" path="/var/lib/kubelet/pods/acb1638f-8ad4-476a-a018-261e585bd72a/volumes"
Jan 29 08:14:31 crc kubenswrapper[4826]: I0129 08:14:31.414260 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Jan 29 08:14:38 crc kubenswrapper[4826]: I0129 08:14:38.809395 4826 scope.go:117] "RemoveContainer" containerID="491d2214652be539c5a02abd82d2f7f7b125c3f1a64568b35d69e37bd575365a"
Jan 29 08:14:39 crc kubenswrapper[4826]: I0129 08:14:39.499966 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerStarted","Data":"fa58273e324a815b3443a21ebdc2b51cbff96c021a9b9a81653f9dc2de446316"}
Jan 29 08:14:47 crc kubenswrapper[4826]: I0129 08:14:47.380634 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 29 08:14:47 crc kubenswrapper[4826]: E0129 08:14:47.381569 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acb1638f-8ad4-476a-a018-261e585bd72a" containerName="init"
Jan 29 08:14:47 crc kubenswrapper[4826]: I0129 08:14:47.381585 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="acb1638f-8ad4-476a-a018-261e585bd72a" containerName="init"
Jan 29 08:14:47 crc kubenswrapper[4826]: E0129 08:14:47.381602 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acb1638f-8ad4-476a-a018-261e585bd72a" containerName="dnsmasq-dns"
Jan 29 08:14:47 crc kubenswrapper[4826]: I0129 08:14:47.381610 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="acb1638f-8ad4-476a-a018-261e585bd72a" containerName="dnsmasq-dns"
Jan 29 08:14:47 crc kubenswrapper[4826]: I0129 08:14:47.381867 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="acb1638f-8ad4-476a-a018-261e585bd72a" containerName="dnsmasq-dns"
Jan 29 08:14:47 crc kubenswrapper[4826]: I0129 08:14:47.383107 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 29 08:14:47 crc kubenswrapper[4826]: I0129 08:14:47.385495 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 29 08:14:47 crc kubenswrapper[4826]: I0129 08:14:47.391599 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 29 08:14:47 crc kubenswrapper[4826]: I0129 08:14:47.429671 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84664c57-16ba-4758-ab2f-944dec09bb0c-scripts\") pod \"cinder-scheduler-0\" (UID: \"84664c57-16ba-4758-ab2f-944dec09bb0c\") " pod="openstack/cinder-scheduler-0"
Jan 29 08:14:47 crc kubenswrapper[4826]: I0129 08:14:47.429791 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84664c57-16ba-4758-ab2f-944dec09bb0c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"84664c57-16ba-4758-ab2f-944dec09bb0c\") " pod="openstack/cinder-scheduler-0"
Jan 29 08:14:47 crc kubenswrapper[4826]: I0129 08:14:47.429841 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84664c57-16ba-4758-ab2f-944dec09bb0c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"84664c57-16ba-4758-ab2f-944dec09bb0c\") " pod="openstack/cinder-scheduler-0"
Jan 29 08:14:47 crc kubenswrapper[4826]: I0129 08:14:47.429869 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84664c57-16ba-4758-ab2f-944dec09bb0c-config-data\") pod \"cinder-scheduler-0\" (UID: \"84664c57-16ba-4758-ab2f-944dec09bb0c\") " pod="openstack/cinder-scheduler-0"
Jan 29 08:14:47 crc kubenswrapper[4826]: I0129 08:14:47.429903 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/84664c57-16ba-4758-ab2f-944dec09bb0c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"84664c57-16ba-4758-ab2f-944dec09bb0c\") " pod="openstack/cinder-scheduler-0"
Jan 29 08:14:47 crc kubenswrapper[4826]: I0129 08:14:47.430021 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6skr\" (UniqueName: \"kubernetes.io/projected/84664c57-16ba-4758-ab2f-944dec09bb0c-kube-api-access-q6skr\") pod \"cinder-scheduler-0\" (UID: \"84664c57-16ba-4758-ab2f-944dec09bb0c\") " pod="openstack/cinder-scheduler-0"
Jan 29 08:14:47 crc kubenswrapper[4826]: I0129 08:14:47.531456 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/84664c57-16ba-4758-ab2f-944dec09bb0c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"84664c57-16ba-4758-ab2f-944dec09bb0c\") " pod="openstack/cinder-scheduler-0"
Jan 29 08:14:47 crc kubenswrapper[4826]: I0129 08:14:47.531525 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-q6skr\" (UniqueName: \"kubernetes.io/projected/84664c57-16ba-4758-ab2f-944dec09bb0c-kube-api-access-q6skr\") pod \"cinder-scheduler-0\" (UID: \"84664c57-16ba-4758-ab2f-944dec09bb0c\") " pod="openstack/cinder-scheduler-0" Jan 29 08:14:47 crc kubenswrapper[4826]: I0129 08:14:47.531570 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84664c57-16ba-4758-ab2f-944dec09bb0c-scripts\") pod \"cinder-scheduler-0\" (UID: \"84664c57-16ba-4758-ab2f-944dec09bb0c\") " pod="openstack/cinder-scheduler-0" Jan 29 08:14:47 crc kubenswrapper[4826]: I0129 08:14:47.531573 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/84664c57-16ba-4758-ab2f-944dec09bb0c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"84664c57-16ba-4758-ab2f-944dec09bb0c\") " pod="openstack/cinder-scheduler-0" Jan 29 08:14:47 crc kubenswrapper[4826]: I0129 08:14:47.531707 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84664c57-16ba-4758-ab2f-944dec09bb0c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"84664c57-16ba-4758-ab2f-944dec09bb0c\") " pod="openstack/cinder-scheduler-0" Jan 29 08:14:47 crc kubenswrapper[4826]: I0129 08:14:47.531763 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84664c57-16ba-4758-ab2f-944dec09bb0c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"84664c57-16ba-4758-ab2f-944dec09bb0c\") " pod="openstack/cinder-scheduler-0" Jan 29 08:14:47 crc kubenswrapper[4826]: I0129 08:14:47.531815 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84664c57-16ba-4758-ab2f-944dec09bb0c-config-data\") pod \"cinder-scheduler-0\" (UID: 
\"84664c57-16ba-4758-ab2f-944dec09bb0c\") " pod="openstack/cinder-scheduler-0" Jan 29 08:14:47 crc kubenswrapper[4826]: I0129 08:14:47.538232 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84664c57-16ba-4758-ab2f-944dec09bb0c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"84664c57-16ba-4758-ab2f-944dec09bb0c\") " pod="openstack/cinder-scheduler-0" Jan 29 08:14:47 crc kubenswrapper[4826]: I0129 08:14:47.538545 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84664c57-16ba-4758-ab2f-944dec09bb0c-config-data\") pod \"cinder-scheduler-0\" (UID: \"84664c57-16ba-4758-ab2f-944dec09bb0c\") " pod="openstack/cinder-scheduler-0" Jan 29 08:14:47 crc kubenswrapper[4826]: I0129 08:14:47.546191 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84664c57-16ba-4758-ab2f-944dec09bb0c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"84664c57-16ba-4758-ab2f-944dec09bb0c\") " pod="openstack/cinder-scheduler-0" Jan 29 08:14:47 crc kubenswrapper[4826]: I0129 08:14:47.546700 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84664c57-16ba-4758-ab2f-944dec09bb0c-scripts\") pod \"cinder-scheduler-0\" (UID: \"84664c57-16ba-4758-ab2f-944dec09bb0c\") " pod="openstack/cinder-scheduler-0" Jan 29 08:14:47 crc kubenswrapper[4826]: I0129 08:14:47.554431 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6skr\" (UniqueName: \"kubernetes.io/projected/84664c57-16ba-4758-ab2f-944dec09bb0c-kube-api-access-q6skr\") pod \"cinder-scheduler-0\" (UID: \"84664c57-16ba-4758-ab2f-944dec09bb0c\") " pod="openstack/cinder-scheduler-0" Jan 29 08:14:47 crc kubenswrapper[4826]: I0129 08:14:47.722048 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 08:14:48 crc kubenswrapper[4826]: I0129 08:14:48.196268 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 08:14:48 crc kubenswrapper[4826]: I0129 08:14:48.608717 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"84664c57-16ba-4758-ab2f-944dec09bb0c","Type":"ContainerStarted","Data":"c12d8e48ca635d96286f6f55bad4d64751582a3e3f88798fede55ce7e94db6b1"} Jan 29 08:14:48 crc kubenswrapper[4826]: I0129 08:14:48.847814 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 08:14:48 crc kubenswrapper[4826]: I0129 08:14:48.848643 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="355ef105-82c3-484b-8a1e-b54630b6e9a3" containerName="cinder-api-log" containerID="cri-o://9b8769dca0e07aeb539264927940fe357540208a471f545bd519403eefbac321" gracePeriod=30 Jan 29 08:14:48 crc kubenswrapper[4826]: I0129 08:14:48.849017 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="355ef105-82c3-484b-8a1e-b54630b6e9a3" containerName="cinder-api" containerID="cri-o://53733a27456760851d6241253747ff6a5530bfb46d374eb81171edfb2004aead" gracePeriod=30 Jan 29 08:14:49 crc kubenswrapper[4826]: I0129 08:14:49.619608 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"84664c57-16ba-4758-ab2f-944dec09bb0c","Type":"ContainerStarted","Data":"484c4da56119b7b06335664753043dc939982e45877b101d9c5834f03a99c5a9"} Jan 29 08:14:49 crc kubenswrapper[4826]: I0129 08:14:49.623442 4826 generic.go:334] "Generic (PLEG): container finished" podID="355ef105-82c3-484b-8a1e-b54630b6e9a3" containerID="9b8769dca0e07aeb539264927940fe357540208a471f545bd519403eefbac321" exitCode=143 Jan 29 08:14:49 crc kubenswrapper[4826]: I0129 08:14:49.623489 4826 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"355ef105-82c3-484b-8a1e-b54630b6e9a3","Type":"ContainerDied","Data":"9b8769dca0e07aeb539264927940fe357540208a471f545bd519403eefbac321"} Jan 29 08:14:50 crc kubenswrapper[4826]: I0129 08:14:50.659810 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"84664c57-16ba-4758-ab2f-944dec09bb0c","Type":"ContainerStarted","Data":"d35733ef0b5cd4f8fd81d380dae433d48b1fbfad39907374b2fc73b9b1db4ce8"} Jan 29 08:14:50 crc kubenswrapper[4826]: I0129 08:14:50.699917 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.387689894 podStartE2EDuration="3.699890267s" podCreationTimestamp="2026-01-29 08:14:47 +0000 UTC" firstStartedPulling="2026-01-29 08:14:48.21677342 +0000 UTC m=+5472.078566489" lastFinishedPulling="2026-01-29 08:14:48.528973783 +0000 UTC m=+5472.390766862" observedRunningTime="2026-01-29 08:14:50.684145813 +0000 UTC m=+5474.545938882" watchObservedRunningTime="2026-01-29 08:14:50.699890267 +0000 UTC m=+5474.561683366" Jan 29 08:14:51 crc kubenswrapper[4826]: I0129 08:14:51.997526 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="355ef105-82c3-484b-8a1e-b54630b6e9a3" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.1.51:8776/healthcheck\": read tcp 10.217.0.2:36094->10.217.1.51:8776: read: connection reset by peer" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.400260 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.530418 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftxnv\" (UniqueName: \"kubernetes.io/projected/355ef105-82c3-484b-8a1e-b54630b6e9a3-kube-api-access-ftxnv\") pod \"355ef105-82c3-484b-8a1e-b54630b6e9a3\" (UID: \"355ef105-82c3-484b-8a1e-b54630b6e9a3\") " Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.530774 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/355ef105-82c3-484b-8a1e-b54630b6e9a3-etc-machine-id\") pod \"355ef105-82c3-484b-8a1e-b54630b6e9a3\" (UID: \"355ef105-82c3-484b-8a1e-b54630b6e9a3\") " Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.530823 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/355ef105-82c3-484b-8a1e-b54630b6e9a3-public-tls-certs\") pod \"355ef105-82c3-484b-8a1e-b54630b6e9a3\" (UID: \"355ef105-82c3-484b-8a1e-b54630b6e9a3\") " Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.530861 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/355ef105-82c3-484b-8a1e-b54630b6e9a3-scripts\") pod \"355ef105-82c3-484b-8a1e-b54630b6e9a3\" (UID: \"355ef105-82c3-484b-8a1e-b54630b6e9a3\") " Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.530928 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/355ef105-82c3-484b-8a1e-b54630b6e9a3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "355ef105-82c3-484b-8a1e-b54630b6e9a3" (UID: "355ef105-82c3-484b-8a1e-b54630b6e9a3"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.531012 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/355ef105-82c3-484b-8a1e-b54630b6e9a3-logs\") pod \"355ef105-82c3-484b-8a1e-b54630b6e9a3\" (UID: \"355ef105-82c3-484b-8a1e-b54630b6e9a3\") " Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.531085 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/355ef105-82c3-484b-8a1e-b54630b6e9a3-config-data-custom\") pod \"355ef105-82c3-484b-8a1e-b54630b6e9a3\" (UID: \"355ef105-82c3-484b-8a1e-b54630b6e9a3\") " Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.531165 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/355ef105-82c3-484b-8a1e-b54630b6e9a3-config-data\") pod \"355ef105-82c3-484b-8a1e-b54630b6e9a3\" (UID: \"355ef105-82c3-484b-8a1e-b54630b6e9a3\") " Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.531199 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/355ef105-82c3-484b-8a1e-b54630b6e9a3-combined-ca-bundle\") pod \"355ef105-82c3-484b-8a1e-b54630b6e9a3\" (UID: \"355ef105-82c3-484b-8a1e-b54630b6e9a3\") " Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.531283 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/355ef105-82c3-484b-8a1e-b54630b6e9a3-internal-tls-certs\") pod \"355ef105-82c3-484b-8a1e-b54630b6e9a3\" (UID: \"355ef105-82c3-484b-8a1e-b54630b6e9a3\") " Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.531527 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/355ef105-82c3-484b-8a1e-b54630b6e9a3-logs" (OuterVolumeSpecName: "logs") pod "355ef105-82c3-484b-8a1e-b54630b6e9a3" (UID: "355ef105-82c3-484b-8a1e-b54630b6e9a3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.531931 4826 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/355ef105-82c3-484b-8a1e-b54630b6e9a3-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.531960 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/355ef105-82c3-484b-8a1e-b54630b6e9a3-logs\") on node \"crc\" DevicePath \"\"" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.535853 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/355ef105-82c3-484b-8a1e-b54630b6e9a3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "355ef105-82c3-484b-8a1e-b54630b6e9a3" (UID: "355ef105-82c3-484b-8a1e-b54630b6e9a3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.535932 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/355ef105-82c3-484b-8a1e-b54630b6e9a3-scripts" (OuterVolumeSpecName: "scripts") pod "355ef105-82c3-484b-8a1e-b54630b6e9a3" (UID: "355ef105-82c3-484b-8a1e-b54630b6e9a3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.538354 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/355ef105-82c3-484b-8a1e-b54630b6e9a3-kube-api-access-ftxnv" (OuterVolumeSpecName: "kube-api-access-ftxnv") pod "355ef105-82c3-484b-8a1e-b54630b6e9a3" (UID: "355ef105-82c3-484b-8a1e-b54630b6e9a3"). InnerVolumeSpecName "kube-api-access-ftxnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.585084 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/355ef105-82c3-484b-8a1e-b54630b6e9a3-config-data" (OuterVolumeSpecName: "config-data") pod "355ef105-82c3-484b-8a1e-b54630b6e9a3" (UID: "355ef105-82c3-484b-8a1e-b54630b6e9a3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.588598 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/355ef105-82c3-484b-8a1e-b54630b6e9a3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "355ef105-82c3-484b-8a1e-b54630b6e9a3" (UID: "355ef105-82c3-484b-8a1e-b54630b6e9a3"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.589341 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/355ef105-82c3-484b-8a1e-b54630b6e9a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "355ef105-82c3-484b-8a1e-b54630b6e9a3" (UID: "355ef105-82c3-484b-8a1e-b54630b6e9a3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.602199 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/355ef105-82c3-484b-8a1e-b54630b6e9a3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "355ef105-82c3-484b-8a1e-b54630b6e9a3" (UID: "355ef105-82c3-484b-8a1e-b54630b6e9a3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.634304 4826 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/355ef105-82c3-484b-8a1e-b54630b6e9a3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.634349 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftxnv\" (UniqueName: \"kubernetes.io/projected/355ef105-82c3-484b-8a1e-b54630b6e9a3-kube-api-access-ftxnv\") on node \"crc\" DevicePath \"\"" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.634363 4826 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/355ef105-82c3-484b-8a1e-b54630b6e9a3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.634372 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/355ef105-82c3-484b-8a1e-b54630b6e9a3-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.634381 4826 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/355ef105-82c3-484b-8a1e-b54630b6e9a3-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.634390 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/355ef105-82c3-484b-8a1e-b54630b6e9a3-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.634397 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/355ef105-82c3-484b-8a1e-b54630b6e9a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.695842 4826 generic.go:334] "Generic (PLEG): container finished" podID="355ef105-82c3-484b-8a1e-b54630b6e9a3" containerID="53733a27456760851d6241253747ff6a5530bfb46d374eb81171edfb2004aead" exitCode=0 Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.695889 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"355ef105-82c3-484b-8a1e-b54630b6e9a3","Type":"ContainerDied","Data":"53733a27456760851d6241253747ff6a5530bfb46d374eb81171edfb2004aead"} Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.695924 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"355ef105-82c3-484b-8a1e-b54630b6e9a3","Type":"ContainerDied","Data":"b278b8f46138dadac72bba37b73b4a0dac34fc3b56e2dc8a703fee26808a926d"} Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.695945 4826 scope.go:117] "RemoveContainer" containerID="53733a27456760851d6241253747ff6a5530bfb46d374eb81171edfb2004aead" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.695893 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.722830 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.734634 4826 scope.go:117] "RemoveContainer" containerID="9b8769dca0e07aeb539264927940fe357540208a471f545bd519403eefbac321" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.750363 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.757723 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.776210 4826 scope.go:117] "RemoveContainer" containerID="53733a27456760851d6241253747ff6a5530bfb46d374eb81171edfb2004aead" Jan 29 08:14:52 crc kubenswrapper[4826]: E0129 08:14:52.779495 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53733a27456760851d6241253747ff6a5530bfb46d374eb81171edfb2004aead\": container with ID starting with 53733a27456760851d6241253747ff6a5530bfb46d374eb81171edfb2004aead not found: ID does not exist" containerID="53733a27456760851d6241253747ff6a5530bfb46d374eb81171edfb2004aead" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.779557 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53733a27456760851d6241253747ff6a5530bfb46d374eb81171edfb2004aead"} err="failed to get container status \"53733a27456760851d6241253747ff6a5530bfb46d374eb81171edfb2004aead\": rpc error: code = NotFound desc = could not find container \"53733a27456760851d6241253747ff6a5530bfb46d374eb81171edfb2004aead\": container with ID starting with 53733a27456760851d6241253747ff6a5530bfb46d374eb81171edfb2004aead not found: ID does not exist" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 
08:14:52.779602 4826 scope.go:117] "RemoveContainer" containerID="9b8769dca0e07aeb539264927940fe357540208a471f545bd519403eefbac321" Jan 29 08:14:52 crc kubenswrapper[4826]: E0129 08:14:52.780124 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b8769dca0e07aeb539264927940fe357540208a471f545bd519403eefbac321\": container with ID starting with 9b8769dca0e07aeb539264927940fe357540208a471f545bd519403eefbac321 not found: ID does not exist" containerID="9b8769dca0e07aeb539264927940fe357540208a471f545bd519403eefbac321" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.780142 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b8769dca0e07aeb539264927940fe357540208a471f545bd519403eefbac321"} err="failed to get container status \"9b8769dca0e07aeb539264927940fe357540208a471f545bd519403eefbac321\": rpc error: code = NotFound desc = could not find container \"9b8769dca0e07aeb539264927940fe357540208a471f545bd519403eefbac321\": container with ID starting with 9b8769dca0e07aeb539264927940fe357540208a471f545bd519403eefbac321 not found: ID does not exist" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.788127 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 29 08:14:52 crc kubenswrapper[4826]: E0129 08:14:52.792864 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="355ef105-82c3-484b-8a1e-b54630b6e9a3" containerName="cinder-api-log" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.793044 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="355ef105-82c3-484b-8a1e-b54630b6e9a3" containerName="cinder-api-log" Jan 29 08:14:52 crc kubenswrapper[4826]: E0129 08:14:52.793156 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="355ef105-82c3-484b-8a1e-b54630b6e9a3" containerName="cinder-api" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.793229 4826 
state_mem.go:107] "Deleted CPUSet assignment" podUID="355ef105-82c3-484b-8a1e-b54630b6e9a3" containerName="cinder-api" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.793519 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="355ef105-82c3-484b-8a1e-b54630b6e9a3" containerName="cinder-api" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.793624 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="355ef105-82c3-484b-8a1e-b54630b6e9a3" containerName="cinder-api-log" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.794884 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.803072 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.853958 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.854036 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.854399 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.866285 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="355ef105-82c3-484b-8a1e-b54630b6e9a3" path="/var/lib/kubelet/pods/355ef105-82c3-484b-8a1e-b54630b6e9a3/volumes" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.956238 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/681f50ce-bf2e-46d0-bb2d-4b2c196a9749-public-tls-certs\") pod \"cinder-api-0\" (UID: \"681f50ce-bf2e-46d0-bb2d-4b2c196a9749\") " pod="openstack/cinder-api-0" Jan 29 08:14:52 crc 
kubenswrapper[4826]: I0129 08:14:52.956344 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/681f50ce-bf2e-46d0-bb2d-4b2c196a9749-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"681f50ce-bf2e-46d0-bb2d-4b2c196a9749\") " pod="openstack/cinder-api-0" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.956372 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w47lt\" (UniqueName: \"kubernetes.io/projected/681f50ce-bf2e-46d0-bb2d-4b2c196a9749-kube-api-access-w47lt\") pod \"cinder-api-0\" (UID: \"681f50ce-bf2e-46d0-bb2d-4b2c196a9749\") " pod="openstack/cinder-api-0" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.956973 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/681f50ce-bf2e-46d0-bb2d-4b2c196a9749-logs\") pod \"cinder-api-0\" (UID: \"681f50ce-bf2e-46d0-bb2d-4b2c196a9749\") " pod="openstack/cinder-api-0" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.957023 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/681f50ce-bf2e-46d0-bb2d-4b2c196a9749-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"681f50ce-bf2e-46d0-bb2d-4b2c196a9749\") " pod="openstack/cinder-api-0" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.957056 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/681f50ce-bf2e-46d0-bb2d-4b2c196a9749-config-data\") pod \"cinder-api-0\" (UID: \"681f50ce-bf2e-46d0-bb2d-4b2c196a9749\") " pod="openstack/cinder-api-0" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.957124 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/681f50ce-bf2e-46d0-bb2d-4b2c196a9749-scripts\") pod \"cinder-api-0\" (UID: \"681f50ce-bf2e-46d0-bb2d-4b2c196a9749\") " pod="openstack/cinder-api-0" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.957162 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/681f50ce-bf2e-46d0-bb2d-4b2c196a9749-config-data-custom\") pod \"cinder-api-0\" (UID: \"681f50ce-bf2e-46d0-bb2d-4b2c196a9749\") " pod="openstack/cinder-api-0" Jan 29 08:14:52 crc kubenswrapper[4826]: I0129 08:14:52.957210 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/681f50ce-bf2e-46d0-bb2d-4b2c196a9749-etc-machine-id\") pod \"cinder-api-0\" (UID: \"681f50ce-bf2e-46d0-bb2d-4b2c196a9749\") " pod="openstack/cinder-api-0" Jan 29 08:14:53 crc kubenswrapper[4826]: I0129 08:14:53.059026 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w47lt\" (UniqueName: \"kubernetes.io/projected/681f50ce-bf2e-46d0-bb2d-4b2c196a9749-kube-api-access-w47lt\") pod \"cinder-api-0\" (UID: \"681f50ce-bf2e-46d0-bb2d-4b2c196a9749\") " pod="openstack/cinder-api-0" Jan 29 08:14:53 crc kubenswrapper[4826]: I0129 08:14:53.059118 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/681f50ce-bf2e-46d0-bb2d-4b2c196a9749-logs\") pod \"cinder-api-0\" (UID: \"681f50ce-bf2e-46d0-bb2d-4b2c196a9749\") " pod="openstack/cinder-api-0" Jan 29 08:14:53 crc kubenswrapper[4826]: I0129 08:14:53.059176 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/681f50ce-bf2e-46d0-bb2d-4b2c196a9749-internal-tls-certs\") pod \"cinder-api-0\" (UID: 
\"681f50ce-bf2e-46d0-bb2d-4b2c196a9749\") " pod="openstack/cinder-api-0" Jan 29 08:14:53 crc kubenswrapper[4826]: I0129 08:14:53.059231 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/681f50ce-bf2e-46d0-bb2d-4b2c196a9749-config-data\") pod \"cinder-api-0\" (UID: \"681f50ce-bf2e-46d0-bb2d-4b2c196a9749\") " pod="openstack/cinder-api-0" Jan 29 08:14:53 crc kubenswrapper[4826]: I0129 08:14:53.059255 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/681f50ce-bf2e-46d0-bb2d-4b2c196a9749-scripts\") pod \"cinder-api-0\" (UID: \"681f50ce-bf2e-46d0-bb2d-4b2c196a9749\") " pod="openstack/cinder-api-0" Jan 29 08:14:53 crc kubenswrapper[4826]: I0129 08:14:53.059281 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/681f50ce-bf2e-46d0-bb2d-4b2c196a9749-config-data-custom\") pod \"cinder-api-0\" (UID: \"681f50ce-bf2e-46d0-bb2d-4b2c196a9749\") " pod="openstack/cinder-api-0" Jan 29 08:14:53 crc kubenswrapper[4826]: I0129 08:14:53.059350 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/681f50ce-bf2e-46d0-bb2d-4b2c196a9749-etc-machine-id\") pod \"cinder-api-0\" (UID: \"681f50ce-bf2e-46d0-bb2d-4b2c196a9749\") " pod="openstack/cinder-api-0" Jan 29 08:14:53 crc kubenswrapper[4826]: I0129 08:14:53.059377 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/681f50ce-bf2e-46d0-bb2d-4b2c196a9749-public-tls-certs\") pod \"cinder-api-0\" (UID: \"681f50ce-bf2e-46d0-bb2d-4b2c196a9749\") " pod="openstack/cinder-api-0" Jan 29 08:14:53 crc kubenswrapper[4826]: I0129 08:14:53.059421 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/681f50ce-bf2e-46d0-bb2d-4b2c196a9749-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"681f50ce-bf2e-46d0-bb2d-4b2c196a9749\") " pod="openstack/cinder-api-0" Jan 29 08:14:53 crc kubenswrapper[4826]: I0129 08:14:53.059592 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/681f50ce-bf2e-46d0-bb2d-4b2c196a9749-logs\") pod \"cinder-api-0\" (UID: \"681f50ce-bf2e-46d0-bb2d-4b2c196a9749\") " pod="openstack/cinder-api-0" Jan 29 08:14:53 crc kubenswrapper[4826]: I0129 08:14:53.059761 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/681f50ce-bf2e-46d0-bb2d-4b2c196a9749-etc-machine-id\") pod \"cinder-api-0\" (UID: \"681f50ce-bf2e-46d0-bb2d-4b2c196a9749\") " pod="openstack/cinder-api-0" Jan 29 08:14:53 crc kubenswrapper[4826]: I0129 08:14:53.062675 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/681f50ce-bf2e-46d0-bb2d-4b2c196a9749-scripts\") pod \"cinder-api-0\" (UID: \"681f50ce-bf2e-46d0-bb2d-4b2c196a9749\") " pod="openstack/cinder-api-0" Jan 29 08:14:53 crc kubenswrapper[4826]: I0129 08:14:53.063183 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/681f50ce-bf2e-46d0-bb2d-4b2c196a9749-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"681f50ce-bf2e-46d0-bb2d-4b2c196a9749\") " pod="openstack/cinder-api-0" Jan 29 08:14:53 crc kubenswrapper[4826]: I0129 08:14:53.063234 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/681f50ce-bf2e-46d0-bb2d-4b2c196a9749-config-data-custom\") pod \"cinder-api-0\" (UID: \"681f50ce-bf2e-46d0-bb2d-4b2c196a9749\") " pod="openstack/cinder-api-0" Jan 29 08:14:53 crc kubenswrapper[4826]: I0129 08:14:53.063369 
4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/681f50ce-bf2e-46d0-bb2d-4b2c196a9749-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"681f50ce-bf2e-46d0-bb2d-4b2c196a9749\") " pod="openstack/cinder-api-0" Jan 29 08:14:53 crc kubenswrapper[4826]: I0129 08:14:53.063494 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/681f50ce-bf2e-46d0-bb2d-4b2c196a9749-config-data\") pod \"cinder-api-0\" (UID: \"681f50ce-bf2e-46d0-bb2d-4b2c196a9749\") " pod="openstack/cinder-api-0" Jan 29 08:14:53 crc kubenswrapper[4826]: I0129 08:14:53.066450 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/681f50ce-bf2e-46d0-bb2d-4b2c196a9749-public-tls-certs\") pod \"cinder-api-0\" (UID: \"681f50ce-bf2e-46d0-bb2d-4b2c196a9749\") " pod="openstack/cinder-api-0" Jan 29 08:14:53 crc kubenswrapper[4826]: I0129 08:14:53.077285 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w47lt\" (UniqueName: \"kubernetes.io/projected/681f50ce-bf2e-46d0-bb2d-4b2c196a9749-kube-api-access-w47lt\") pod \"cinder-api-0\" (UID: \"681f50ce-bf2e-46d0-bb2d-4b2c196a9749\") " pod="openstack/cinder-api-0" Jan 29 08:14:53 crc kubenswrapper[4826]: I0129 08:14:53.176979 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 08:14:53 crc kubenswrapper[4826]: W0129 08:14:53.487265 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod681f50ce_bf2e_46d0_bb2d_4b2c196a9749.slice/crio-7358b96b63e84ccb5a08a9a7e498f3439137373ba3d12457136cc6ee77e82d01 WatchSource:0}: Error finding container 7358b96b63e84ccb5a08a9a7e498f3439137373ba3d12457136cc6ee77e82d01: Status 404 returned error can't find the container with id 7358b96b63e84ccb5a08a9a7e498f3439137373ba3d12457136cc6ee77e82d01 Jan 29 08:14:53 crc kubenswrapper[4826]: I0129 08:14:53.506809 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 08:14:53 crc kubenswrapper[4826]: I0129 08:14:53.710112 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"681f50ce-bf2e-46d0-bb2d-4b2c196a9749","Type":"ContainerStarted","Data":"7358b96b63e84ccb5a08a9a7e498f3439137373ba3d12457136cc6ee77e82d01"} Jan 29 08:14:54 crc kubenswrapper[4826]: I0129 08:14:54.723218 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"681f50ce-bf2e-46d0-bb2d-4b2c196a9749","Type":"ContainerStarted","Data":"38ae2a4360f04f779ce53b3348b9762e4139e1828e70e4daa3f12a260a21a6cc"} Jan 29 08:14:55 crc kubenswrapper[4826]: I0129 08:14:55.736222 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"681f50ce-bf2e-46d0-bb2d-4b2c196a9749","Type":"ContainerStarted","Data":"4104277106c026ad85360da315d80b247febdecfa010980ff8ead7205d3b27f1"} Jan 29 08:14:55 crc kubenswrapper[4826]: I0129 08:14:55.736702 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 29 08:14:57 crc kubenswrapper[4826]: I0129 08:14:57.957220 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 29 08:14:58 crc 
kubenswrapper[4826]: I0129 08:14:58.014218 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.014180304 podStartE2EDuration="6.014180304s" podCreationTimestamp="2026-01-29 08:14:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:14:55.770942389 +0000 UTC m=+5479.632735498" watchObservedRunningTime="2026-01-29 08:14:58.014180304 +0000 UTC m=+5481.875973413" Jan 29 08:14:58 crc kubenswrapper[4826]: I0129 08:14:58.067717 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 08:14:58 crc kubenswrapper[4826]: I0129 08:14:58.767875 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="84664c57-16ba-4758-ab2f-944dec09bb0c" containerName="probe" containerID="cri-o://d35733ef0b5cd4f8fd81d380dae433d48b1fbfad39907374b2fc73b9b1db4ce8" gracePeriod=30 Jan 29 08:14:58 crc kubenswrapper[4826]: I0129 08:14:58.768058 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="84664c57-16ba-4758-ab2f-944dec09bb0c" containerName="cinder-scheduler" containerID="cri-o://484c4da56119b7b06335664753043dc939982e45877b101d9c5834f03a99c5a9" gracePeriod=30 Jan 29 08:14:59 crc kubenswrapper[4826]: I0129 08:14:59.777138 4826 generic.go:334] "Generic (PLEG): container finished" podID="84664c57-16ba-4758-ab2f-944dec09bb0c" containerID="d35733ef0b5cd4f8fd81d380dae433d48b1fbfad39907374b2fc73b9b1db4ce8" exitCode=0 Jan 29 08:14:59 crc kubenswrapper[4826]: I0129 08:14:59.777228 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"84664c57-16ba-4758-ab2f-944dec09bb0c","Type":"ContainerDied","Data":"d35733ef0b5cd4f8fd81d380dae433d48b1fbfad39907374b2fc73b9b1db4ce8"} Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 
08:15:00.147802 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494575-bdsx9"] Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.148907 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494575-bdsx9" Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.150923 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.151198 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.163280 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494575-bdsx9"] Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.215722 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.326966 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84664c57-16ba-4758-ab2f-944dec09bb0c-config-data\") pod \"84664c57-16ba-4758-ab2f-944dec09bb0c\" (UID: \"84664c57-16ba-4758-ab2f-944dec09bb0c\") " Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.327060 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84664c57-16ba-4758-ab2f-944dec09bb0c-scripts\") pod \"84664c57-16ba-4758-ab2f-944dec09bb0c\" (UID: \"84664c57-16ba-4758-ab2f-944dec09bb0c\") " Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.327129 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84664c57-16ba-4758-ab2f-944dec09bb0c-config-data-custom\") pod \"84664c57-16ba-4758-ab2f-944dec09bb0c\" (UID: \"84664c57-16ba-4758-ab2f-944dec09bb0c\") " Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.327176 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/84664c57-16ba-4758-ab2f-944dec09bb0c-etc-machine-id\") pod \"84664c57-16ba-4758-ab2f-944dec09bb0c\" (UID: \"84664c57-16ba-4758-ab2f-944dec09bb0c\") " Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.327272 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6skr\" (UniqueName: \"kubernetes.io/projected/84664c57-16ba-4758-ab2f-944dec09bb0c-kube-api-access-q6skr\") pod \"84664c57-16ba-4758-ab2f-944dec09bb0c\" (UID: \"84664c57-16ba-4758-ab2f-944dec09bb0c\") " Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.327317 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/84664c57-16ba-4758-ab2f-944dec09bb0c-combined-ca-bundle\") pod \"84664c57-16ba-4758-ab2f-944dec09bb0c\" (UID: \"84664c57-16ba-4758-ab2f-944dec09bb0c\") " Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.327452 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84664c57-16ba-4758-ab2f-944dec09bb0c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "84664c57-16ba-4758-ab2f-944dec09bb0c" (UID: "84664c57-16ba-4758-ab2f-944dec09bb0c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.327582 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzjxb\" (UniqueName: \"kubernetes.io/projected/3bcc2de4-434f-468a-98d5-62d14b98be5e-kube-api-access-wzjxb\") pod \"collect-profiles-29494575-bdsx9\" (UID: \"3bcc2de4-434f-468a-98d5-62d14b98be5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494575-bdsx9" Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.327668 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3bcc2de4-434f-468a-98d5-62d14b98be5e-secret-volume\") pod \"collect-profiles-29494575-bdsx9\" (UID: \"3bcc2de4-434f-468a-98d5-62d14b98be5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494575-bdsx9" Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.327967 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bcc2de4-434f-468a-98d5-62d14b98be5e-config-volume\") pod \"collect-profiles-29494575-bdsx9\" (UID: \"3bcc2de4-434f-468a-98d5-62d14b98be5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494575-bdsx9" Jan 29 08:15:00 crc 
kubenswrapper[4826]: I0129 08:15:00.328163 4826 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/84664c57-16ba-4758-ab2f-944dec09bb0c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.332872 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84664c57-16ba-4758-ab2f-944dec09bb0c-scripts" (OuterVolumeSpecName: "scripts") pod "84664c57-16ba-4758-ab2f-944dec09bb0c" (UID: "84664c57-16ba-4758-ab2f-944dec09bb0c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.333473 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84664c57-16ba-4758-ab2f-944dec09bb0c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "84664c57-16ba-4758-ab2f-944dec09bb0c" (UID: "84664c57-16ba-4758-ab2f-944dec09bb0c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.337687 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84664c57-16ba-4758-ab2f-944dec09bb0c-kube-api-access-q6skr" (OuterVolumeSpecName: "kube-api-access-q6skr") pod "84664c57-16ba-4758-ab2f-944dec09bb0c" (UID: "84664c57-16ba-4758-ab2f-944dec09bb0c"). InnerVolumeSpecName "kube-api-access-q6skr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.388068 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84664c57-16ba-4758-ab2f-944dec09bb0c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84664c57-16ba-4758-ab2f-944dec09bb0c" (UID: "84664c57-16ba-4758-ab2f-944dec09bb0c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.429654 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzjxb\" (UniqueName: \"kubernetes.io/projected/3bcc2de4-434f-468a-98d5-62d14b98be5e-kube-api-access-wzjxb\") pod \"collect-profiles-29494575-bdsx9\" (UID: \"3bcc2de4-434f-468a-98d5-62d14b98be5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494575-bdsx9" Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.430011 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3bcc2de4-434f-468a-98d5-62d14b98be5e-secret-volume\") pod \"collect-profiles-29494575-bdsx9\" (UID: \"3bcc2de4-434f-468a-98d5-62d14b98be5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494575-bdsx9" Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.430080 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bcc2de4-434f-468a-98d5-62d14b98be5e-config-volume\") pod \"collect-profiles-29494575-bdsx9\" (UID: \"3bcc2de4-434f-468a-98d5-62d14b98be5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494575-bdsx9" Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.430166 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84664c57-16ba-4758-ab2f-944dec09bb0c-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.430178 4826 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84664c57-16ba-4758-ab2f-944dec09bb0c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.430191 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6skr\" 
(UniqueName: \"kubernetes.io/projected/84664c57-16ba-4758-ab2f-944dec09bb0c-kube-api-access-q6skr\") on node \"crc\" DevicePath \"\"" Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.430200 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84664c57-16ba-4758-ab2f-944dec09bb0c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.431086 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bcc2de4-434f-468a-98d5-62d14b98be5e-config-volume\") pod \"collect-profiles-29494575-bdsx9\" (UID: \"3bcc2de4-434f-468a-98d5-62d14b98be5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494575-bdsx9" Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.438811 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84664c57-16ba-4758-ab2f-944dec09bb0c-config-data" (OuterVolumeSpecName: "config-data") pod "84664c57-16ba-4758-ab2f-944dec09bb0c" (UID: "84664c57-16ba-4758-ab2f-944dec09bb0c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.439458 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3bcc2de4-434f-468a-98d5-62d14b98be5e-secret-volume\") pod \"collect-profiles-29494575-bdsx9\" (UID: \"3bcc2de4-434f-468a-98d5-62d14b98be5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494575-bdsx9" Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.449349 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzjxb\" (UniqueName: \"kubernetes.io/projected/3bcc2de4-434f-468a-98d5-62d14b98be5e-kube-api-access-wzjxb\") pod \"collect-profiles-29494575-bdsx9\" (UID: \"3bcc2de4-434f-468a-98d5-62d14b98be5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494575-bdsx9" Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.531392 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494575-bdsx9" Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.532175 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84664c57-16ba-4758-ab2f-944dec09bb0c-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.786798 4826 generic.go:334] "Generic (PLEG): container finished" podID="84664c57-16ba-4758-ab2f-944dec09bb0c" containerID="484c4da56119b7b06335664753043dc939982e45877b101d9c5834f03a99c5a9" exitCode=0 Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.786846 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"84664c57-16ba-4758-ab2f-944dec09bb0c","Type":"ContainerDied","Data":"484c4da56119b7b06335664753043dc939982e45877b101d9c5834f03a99c5a9"} Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.787065 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"84664c57-16ba-4758-ab2f-944dec09bb0c","Type":"ContainerDied","Data":"c12d8e48ca635d96286f6f55bad4d64751582a3e3f88798fede55ce7e94db6b1"} Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.787249 4826 scope.go:117] "RemoveContainer" containerID="d35733ef0b5cd4f8fd81d380dae433d48b1fbfad39907374b2fc73b9b1db4ce8" Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.786855 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.823930 4826 scope.go:117] "RemoveContainer" containerID="484c4da56119b7b06335664753043dc939982e45877b101d9c5834f03a99c5a9" Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.837662 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.853291 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.855197 4826 scope.go:117] "RemoveContainer" containerID="d35733ef0b5cd4f8fd81d380dae433d48b1fbfad39907374b2fc73b9b1db4ce8" Jan 29 08:15:00 crc kubenswrapper[4826]: E0129 08:15:00.858739 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d35733ef0b5cd4f8fd81d380dae433d48b1fbfad39907374b2fc73b9b1db4ce8\": container with ID starting with d35733ef0b5cd4f8fd81d380dae433d48b1fbfad39907374b2fc73b9b1db4ce8 not found: ID does not exist" containerID="d35733ef0b5cd4f8fd81d380dae433d48b1fbfad39907374b2fc73b9b1db4ce8" Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.858805 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d35733ef0b5cd4f8fd81d380dae433d48b1fbfad39907374b2fc73b9b1db4ce8"} err="failed to get container status \"d35733ef0b5cd4f8fd81d380dae433d48b1fbfad39907374b2fc73b9b1db4ce8\": rpc error: code = NotFound desc = could not find container \"d35733ef0b5cd4f8fd81d380dae433d48b1fbfad39907374b2fc73b9b1db4ce8\": container with ID starting with d35733ef0b5cd4f8fd81d380dae433d48b1fbfad39907374b2fc73b9b1db4ce8 not found: ID does not exist" Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.858834 4826 scope.go:117] "RemoveContainer" containerID="484c4da56119b7b06335664753043dc939982e45877b101d9c5834f03a99c5a9" Jan 29 08:15:00 crc 
kubenswrapper[4826]: E0129 08:15:00.860744 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"484c4da56119b7b06335664753043dc939982e45877b101d9c5834f03a99c5a9\": container with ID starting with 484c4da56119b7b06335664753043dc939982e45877b101d9c5834f03a99c5a9 not found: ID does not exist" containerID="484c4da56119b7b06335664753043dc939982e45877b101d9c5834f03a99c5a9" Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.860868 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"484c4da56119b7b06335664753043dc939982e45877b101d9c5834f03a99c5a9"} err="failed to get container status \"484c4da56119b7b06335664753043dc939982e45877b101d9c5834f03a99c5a9\": rpc error: code = NotFound desc = could not find container \"484c4da56119b7b06335664753043dc939982e45877b101d9c5834f03a99c5a9\": container with ID starting with 484c4da56119b7b06335664753043dc939982e45877b101d9c5834f03a99c5a9 not found: ID does not exist" Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.867876 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 08:15:00 crc kubenswrapper[4826]: E0129 08:15:00.868419 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84664c57-16ba-4758-ab2f-944dec09bb0c" containerName="probe" Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.868438 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="84664c57-16ba-4758-ab2f-944dec09bb0c" containerName="probe" Jan 29 08:15:00 crc kubenswrapper[4826]: E0129 08:15:00.868451 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84664c57-16ba-4758-ab2f-944dec09bb0c" containerName="cinder-scheduler" Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.868577 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="84664c57-16ba-4758-ab2f-944dec09bb0c" containerName="cinder-scheduler" Jan 29 08:15:00 crc 
kubenswrapper[4826]: I0129 08:15:00.868889 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="84664c57-16ba-4758-ab2f-944dec09bb0c" containerName="probe" Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.868934 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="84664c57-16ba-4758-ab2f-944dec09bb0c" containerName="cinder-scheduler" Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.869995 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.872443 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.879962 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 08:15:00 crc kubenswrapper[4826]: I0129 08:15:00.981042 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494575-bdsx9"] Jan 29 08:15:01 crc kubenswrapper[4826]: I0129 08:15:01.048023 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b8d2b24-361f-4bea-8c27-55a8e1b6bd5a-scripts\") pod \"cinder-scheduler-0\" (UID: \"5b8d2b24-361f-4bea-8c27-55a8e1b6bd5a\") " pod="openstack/cinder-scheduler-0" Jan 29 08:15:01 crc kubenswrapper[4826]: I0129 08:15:01.048081 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5b8d2b24-361f-4bea-8c27-55a8e1b6bd5a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5b8d2b24-361f-4bea-8c27-55a8e1b6bd5a\") " pod="openstack/cinder-scheduler-0" Jan 29 08:15:01 crc kubenswrapper[4826]: I0129 08:15:01.048138 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b8d2b24-361f-4bea-8c27-55a8e1b6bd5a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5b8d2b24-361f-4bea-8c27-55a8e1b6bd5a\") " pod="openstack/cinder-scheduler-0" Jan 29 08:15:01 crc kubenswrapper[4826]: I0129 08:15:01.048174 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b8d2b24-361f-4bea-8c27-55a8e1b6bd5a-config-data\") pod \"cinder-scheduler-0\" (UID: \"5b8d2b24-361f-4bea-8c27-55a8e1b6bd5a\") " pod="openstack/cinder-scheduler-0" Jan 29 08:15:01 crc kubenswrapper[4826]: I0129 08:15:01.048219 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rxmn\" (UniqueName: \"kubernetes.io/projected/5b8d2b24-361f-4bea-8c27-55a8e1b6bd5a-kube-api-access-6rxmn\") pod \"cinder-scheduler-0\" (UID: \"5b8d2b24-361f-4bea-8c27-55a8e1b6bd5a\") " pod="openstack/cinder-scheduler-0" Jan 29 08:15:01 crc kubenswrapper[4826]: I0129 08:15:01.048384 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5b8d2b24-361f-4bea-8c27-55a8e1b6bd5a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5b8d2b24-361f-4bea-8c27-55a8e1b6bd5a\") " pod="openstack/cinder-scheduler-0" Jan 29 08:15:01 crc kubenswrapper[4826]: I0129 08:15:01.152786 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b8d2b24-361f-4bea-8c27-55a8e1b6bd5a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5b8d2b24-361f-4bea-8c27-55a8e1b6bd5a\") " pod="openstack/cinder-scheduler-0" Jan 29 08:15:01 crc kubenswrapper[4826]: I0129 08:15:01.152835 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5b8d2b24-361f-4bea-8c27-55a8e1b6bd5a-config-data\") pod \"cinder-scheduler-0\" (UID: \"5b8d2b24-361f-4bea-8c27-55a8e1b6bd5a\") " pod="openstack/cinder-scheduler-0" Jan 29 08:15:01 crc kubenswrapper[4826]: I0129 08:15:01.152883 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rxmn\" (UniqueName: \"kubernetes.io/projected/5b8d2b24-361f-4bea-8c27-55a8e1b6bd5a-kube-api-access-6rxmn\") pod \"cinder-scheduler-0\" (UID: \"5b8d2b24-361f-4bea-8c27-55a8e1b6bd5a\") " pod="openstack/cinder-scheduler-0" Jan 29 08:15:01 crc kubenswrapper[4826]: I0129 08:15:01.153204 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5b8d2b24-361f-4bea-8c27-55a8e1b6bd5a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5b8d2b24-361f-4bea-8c27-55a8e1b6bd5a\") " pod="openstack/cinder-scheduler-0" Jan 29 08:15:01 crc kubenswrapper[4826]: I0129 08:15:01.152968 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5b8d2b24-361f-4bea-8c27-55a8e1b6bd5a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5b8d2b24-361f-4bea-8c27-55a8e1b6bd5a\") " pod="openstack/cinder-scheduler-0" Jan 29 08:15:01 crc kubenswrapper[4826]: I0129 08:15:01.153505 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b8d2b24-361f-4bea-8c27-55a8e1b6bd5a-scripts\") pod \"cinder-scheduler-0\" (UID: \"5b8d2b24-361f-4bea-8c27-55a8e1b6bd5a\") " pod="openstack/cinder-scheduler-0" Jan 29 08:15:01 crc kubenswrapper[4826]: I0129 08:15:01.153535 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5b8d2b24-361f-4bea-8c27-55a8e1b6bd5a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5b8d2b24-361f-4bea-8c27-55a8e1b6bd5a\") " 
pod="openstack/cinder-scheduler-0" Jan 29 08:15:01 crc kubenswrapper[4826]: I0129 08:15:01.161104 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b8d2b24-361f-4bea-8c27-55a8e1b6bd5a-scripts\") pod \"cinder-scheduler-0\" (UID: \"5b8d2b24-361f-4bea-8c27-55a8e1b6bd5a\") " pod="openstack/cinder-scheduler-0" Jan 29 08:15:01 crc kubenswrapper[4826]: I0129 08:15:01.163414 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b8d2b24-361f-4bea-8c27-55a8e1b6bd5a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5b8d2b24-361f-4bea-8c27-55a8e1b6bd5a\") " pod="openstack/cinder-scheduler-0" Jan 29 08:15:01 crc kubenswrapper[4826]: I0129 08:15:01.164026 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5b8d2b24-361f-4bea-8c27-55a8e1b6bd5a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5b8d2b24-361f-4bea-8c27-55a8e1b6bd5a\") " pod="openstack/cinder-scheduler-0" Jan 29 08:15:01 crc kubenswrapper[4826]: I0129 08:15:01.166075 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b8d2b24-361f-4bea-8c27-55a8e1b6bd5a-config-data\") pod \"cinder-scheduler-0\" (UID: \"5b8d2b24-361f-4bea-8c27-55a8e1b6bd5a\") " pod="openstack/cinder-scheduler-0" Jan 29 08:15:01 crc kubenswrapper[4826]: I0129 08:15:01.173096 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rxmn\" (UniqueName: \"kubernetes.io/projected/5b8d2b24-361f-4bea-8c27-55a8e1b6bd5a-kube-api-access-6rxmn\") pod \"cinder-scheduler-0\" (UID: \"5b8d2b24-361f-4bea-8c27-55a8e1b6bd5a\") " pod="openstack/cinder-scheduler-0" Jan 29 08:15:01 crc kubenswrapper[4826]: I0129 08:15:01.185349 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 08:15:01 crc kubenswrapper[4826]: I0129 08:15:01.671912 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 08:15:01 crc kubenswrapper[4826]: I0129 08:15:01.802857 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5b8d2b24-361f-4bea-8c27-55a8e1b6bd5a","Type":"ContainerStarted","Data":"ec43315d0a25392545f318e1f486352220d0d1746bc427427a5fbfa154c6f22e"} Jan 29 08:15:01 crc kubenswrapper[4826]: I0129 08:15:01.804776 4826 generic.go:334] "Generic (PLEG): container finished" podID="3bcc2de4-434f-468a-98d5-62d14b98be5e" containerID="41ec008557db7f5057b49861b2c0ae4bd5b559f8c4cb02b9903d4d163b4b03bf" exitCode=0 Jan 29 08:15:01 crc kubenswrapper[4826]: I0129 08:15:01.804819 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494575-bdsx9" event={"ID":"3bcc2de4-434f-468a-98d5-62d14b98be5e","Type":"ContainerDied","Data":"41ec008557db7f5057b49861b2c0ae4bd5b559f8c4cb02b9903d4d163b4b03bf"} Jan 29 08:15:01 crc kubenswrapper[4826]: I0129 08:15:01.804838 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494575-bdsx9" event={"ID":"3bcc2de4-434f-468a-98d5-62d14b98be5e","Type":"ContainerStarted","Data":"193e56eb89dac21d5d90c7665ca67ead5a03cbf005b5b8f2d9b8ff3c8bd3678b"} Jan 29 08:15:02 crc kubenswrapper[4826]: I0129 08:15:02.831073 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84664c57-16ba-4758-ab2f-944dec09bb0c" path="/var/lib/kubelet/pods/84664c57-16ba-4758-ab2f-944dec09bb0c/volumes" Jan 29 08:15:02 crc kubenswrapper[4826]: I0129 08:15:02.832362 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"5b8d2b24-361f-4bea-8c27-55a8e1b6bd5a","Type":"ContainerStarted","Data":"ef4604f5e3e61adce927e1c4210ec846763c2072da5ac323f9f7735ccfaf4fdc"} Jan 29 08:15:03 crc kubenswrapper[4826]: I0129 08:15:03.173504 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494575-bdsx9" Jan 29 08:15:03 crc kubenswrapper[4826]: I0129 08:15:03.315450 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bcc2de4-434f-468a-98d5-62d14b98be5e-config-volume\") pod \"3bcc2de4-434f-468a-98d5-62d14b98be5e\" (UID: \"3bcc2de4-434f-468a-98d5-62d14b98be5e\") " Jan 29 08:15:03 crc kubenswrapper[4826]: I0129 08:15:03.315891 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3bcc2de4-434f-468a-98d5-62d14b98be5e-secret-volume\") pod \"3bcc2de4-434f-468a-98d5-62d14b98be5e\" (UID: \"3bcc2de4-434f-468a-98d5-62d14b98be5e\") " Jan 29 08:15:03 crc kubenswrapper[4826]: I0129 08:15:03.315969 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzjxb\" (UniqueName: \"kubernetes.io/projected/3bcc2de4-434f-468a-98d5-62d14b98be5e-kube-api-access-wzjxb\") pod \"3bcc2de4-434f-468a-98d5-62d14b98be5e\" (UID: \"3bcc2de4-434f-468a-98d5-62d14b98be5e\") " Jan 29 08:15:03 crc kubenswrapper[4826]: I0129 08:15:03.316251 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bcc2de4-434f-468a-98d5-62d14b98be5e-config-volume" (OuterVolumeSpecName: "config-volume") pod "3bcc2de4-434f-468a-98d5-62d14b98be5e" (UID: "3bcc2de4-434f-468a-98d5-62d14b98be5e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:15:03 crc kubenswrapper[4826]: I0129 08:15:03.316601 4826 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bcc2de4-434f-468a-98d5-62d14b98be5e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 08:15:03 crc kubenswrapper[4826]: I0129 08:15:03.321472 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bcc2de4-434f-468a-98d5-62d14b98be5e-kube-api-access-wzjxb" (OuterVolumeSpecName: "kube-api-access-wzjxb") pod "3bcc2de4-434f-468a-98d5-62d14b98be5e" (UID: "3bcc2de4-434f-468a-98d5-62d14b98be5e"). InnerVolumeSpecName "kube-api-access-wzjxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:15:03 crc kubenswrapper[4826]: I0129 08:15:03.334501 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bcc2de4-434f-468a-98d5-62d14b98be5e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3bcc2de4-434f-468a-98d5-62d14b98be5e" (UID: "3bcc2de4-434f-468a-98d5-62d14b98be5e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:15:03 crc kubenswrapper[4826]: I0129 08:15:03.418903 4826 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3bcc2de4-434f-468a-98d5-62d14b98be5e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 08:15:03 crc kubenswrapper[4826]: I0129 08:15:03.418966 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzjxb\" (UniqueName: \"kubernetes.io/projected/3bcc2de4-434f-468a-98d5-62d14b98be5e-kube-api-access-wzjxb\") on node \"crc\" DevicePath \"\"" Jan 29 08:15:03 crc kubenswrapper[4826]: I0129 08:15:03.836615 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494575-bdsx9" event={"ID":"3bcc2de4-434f-468a-98d5-62d14b98be5e","Type":"ContainerDied","Data":"193e56eb89dac21d5d90c7665ca67ead5a03cbf005b5b8f2d9b8ff3c8bd3678b"} Jan 29 08:15:03 crc kubenswrapper[4826]: I0129 08:15:03.836671 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="193e56eb89dac21d5d90c7665ca67ead5a03cbf005b5b8f2d9b8ff3c8bd3678b" Jan 29 08:15:03 crc kubenswrapper[4826]: I0129 08:15:03.836669 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494575-bdsx9" Jan 29 08:15:03 crc kubenswrapper[4826]: I0129 08:15:03.838984 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5b8d2b24-361f-4bea-8c27-55a8e1b6bd5a","Type":"ContainerStarted","Data":"fd90d92991edaebeadc0f25f84111147a8b3204a062b2e60319f999dccbdc912"} Jan 29 08:15:03 crc kubenswrapper[4826]: I0129 08:15:03.878765 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.878744335 podStartE2EDuration="3.878744335s" podCreationTimestamp="2026-01-29 08:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:15:03.866960236 +0000 UTC m=+5487.728753315" watchObservedRunningTime="2026-01-29 08:15:03.878744335 +0000 UTC m=+5487.740537404" Jan 29 08:15:04 crc kubenswrapper[4826]: I0129 08:15:04.272112 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494530-zp45v"] Jan 29 08:15:04 crc kubenswrapper[4826]: I0129 08:15:04.282699 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494530-zp45v"] Jan 29 08:15:04 crc kubenswrapper[4826]: I0129 08:15:04.835590 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="927e0d81-7950-4dec-a31a-88b7fff7b462" path="/var/lib/kubelet/pods/927e0d81-7950-4dec-a31a-88b7fff7b462/volumes" Jan 29 08:15:04 crc kubenswrapper[4826]: I0129 08:15:04.948062 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 29 08:15:06 crc kubenswrapper[4826]: I0129 08:15:06.186012 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 29 08:15:11 crc kubenswrapper[4826]: I0129 
08:15:11.444026 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 29 08:15:11 crc kubenswrapper[4826]: I0129 08:15:11.896310 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-68dff"] Jan 29 08:15:11 crc kubenswrapper[4826]: E0129 08:15:11.896875 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bcc2de4-434f-468a-98d5-62d14b98be5e" containerName="collect-profiles" Jan 29 08:15:11 crc kubenswrapper[4826]: I0129 08:15:11.896894 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bcc2de4-434f-468a-98d5-62d14b98be5e" containerName="collect-profiles" Jan 29 08:15:11 crc kubenswrapper[4826]: I0129 08:15:11.897071 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bcc2de4-434f-468a-98d5-62d14b98be5e" containerName="collect-profiles" Jan 29 08:15:11 crc kubenswrapper[4826]: I0129 08:15:11.897686 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-68dff" Jan 29 08:15:11 crc kubenswrapper[4826]: I0129 08:15:11.906085 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-68dff"] Jan 29 08:15:11 crc kubenswrapper[4826]: I0129 08:15:11.972134 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b054-account-create-update-sjkrb"] Jan 29 08:15:11 crc kubenswrapper[4826]: I0129 08:15:11.973129 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b054-account-create-update-sjkrb" Jan 29 08:15:11 crc kubenswrapper[4826]: I0129 08:15:11.974962 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 29 08:15:11 crc kubenswrapper[4826]: I0129 08:15:11.982286 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4wbs\" (UniqueName: \"kubernetes.io/projected/372c37cb-4f61-42ba-8bd8-d744b414f501-kube-api-access-c4wbs\") pod \"glance-db-create-68dff\" (UID: \"372c37cb-4f61-42ba-8bd8-d744b414f501\") " pod="openstack/glance-db-create-68dff" Jan 29 08:15:11 crc kubenswrapper[4826]: I0129 08:15:11.982459 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/372c37cb-4f61-42ba-8bd8-d744b414f501-operator-scripts\") pod \"glance-db-create-68dff\" (UID: \"372c37cb-4f61-42ba-8bd8-d744b414f501\") " pod="openstack/glance-db-create-68dff" Jan 29 08:15:11 crc kubenswrapper[4826]: I0129 08:15:11.983826 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b054-account-create-update-sjkrb"] Jan 29 08:15:12 crc kubenswrapper[4826]: I0129 08:15:12.083658 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20e86956-b407-4607-823d-754830701852-operator-scripts\") pod \"glance-b054-account-create-update-sjkrb\" (UID: \"20e86956-b407-4607-823d-754830701852\") " pod="openstack/glance-b054-account-create-update-sjkrb" Jan 29 08:15:12 crc kubenswrapper[4826]: I0129 08:15:12.083930 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4wbs\" (UniqueName: \"kubernetes.io/projected/372c37cb-4f61-42ba-8bd8-d744b414f501-kube-api-access-c4wbs\") pod \"glance-db-create-68dff\" (UID: 
\"372c37cb-4f61-42ba-8bd8-d744b414f501\") " pod="openstack/glance-db-create-68dff" Jan 29 08:15:12 crc kubenswrapper[4826]: I0129 08:15:12.084059 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64t8b\" (UniqueName: \"kubernetes.io/projected/20e86956-b407-4607-823d-754830701852-kube-api-access-64t8b\") pod \"glance-b054-account-create-update-sjkrb\" (UID: \"20e86956-b407-4607-823d-754830701852\") " pod="openstack/glance-b054-account-create-update-sjkrb" Jan 29 08:15:12 crc kubenswrapper[4826]: I0129 08:15:12.084155 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/372c37cb-4f61-42ba-8bd8-d744b414f501-operator-scripts\") pod \"glance-db-create-68dff\" (UID: \"372c37cb-4f61-42ba-8bd8-d744b414f501\") " pod="openstack/glance-db-create-68dff" Jan 29 08:15:12 crc kubenswrapper[4826]: I0129 08:15:12.085035 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/372c37cb-4f61-42ba-8bd8-d744b414f501-operator-scripts\") pod \"glance-db-create-68dff\" (UID: \"372c37cb-4f61-42ba-8bd8-d744b414f501\") " pod="openstack/glance-db-create-68dff" Jan 29 08:15:12 crc kubenswrapper[4826]: I0129 08:15:12.101327 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4wbs\" (UniqueName: \"kubernetes.io/projected/372c37cb-4f61-42ba-8bd8-d744b414f501-kube-api-access-c4wbs\") pod \"glance-db-create-68dff\" (UID: \"372c37cb-4f61-42ba-8bd8-d744b414f501\") " pod="openstack/glance-db-create-68dff" Jan 29 08:15:12 crc kubenswrapper[4826]: I0129 08:15:12.186361 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20e86956-b407-4607-823d-754830701852-operator-scripts\") pod \"glance-b054-account-create-update-sjkrb\" (UID: 
\"20e86956-b407-4607-823d-754830701852\") " pod="openstack/glance-b054-account-create-update-sjkrb" Jan 29 08:15:12 crc kubenswrapper[4826]: I0129 08:15:12.186519 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64t8b\" (UniqueName: \"kubernetes.io/projected/20e86956-b407-4607-823d-754830701852-kube-api-access-64t8b\") pod \"glance-b054-account-create-update-sjkrb\" (UID: \"20e86956-b407-4607-823d-754830701852\") " pod="openstack/glance-b054-account-create-update-sjkrb" Jan 29 08:15:12 crc kubenswrapper[4826]: I0129 08:15:12.187649 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20e86956-b407-4607-823d-754830701852-operator-scripts\") pod \"glance-b054-account-create-update-sjkrb\" (UID: \"20e86956-b407-4607-823d-754830701852\") " pod="openstack/glance-b054-account-create-update-sjkrb" Jan 29 08:15:12 crc kubenswrapper[4826]: I0129 08:15:12.206650 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64t8b\" (UniqueName: \"kubernetes.io/projected/20e86956-b407-4607-823d-754830701852-kube-api-access-64t8b\") pod \"glance-b054-account-create-update-sjkrb\" (UID: \"20e86956-b407-4607-823d-754830701852\") " pod="openstack/glance-b054-account-create-update-sjkrb" Jan 29 08:15:12 crc kubenswrapper[4826]: I0129 08:15:12.251153 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-68dff" Jan 29 08:15:12 crc kubenswrapper[4826]: I0129 08:15:12.294506 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b054-account-create-update-sjkrb" Jan 29 08:15:12 crc kubenswrapper[4826]: I0129 08:15:12.798736 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-68dff"] Jan 29 08:15:12 crc kubenswrapper[4826]: I0129 08:15:12.872681 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b054-account-create-update-sjkrb"] Jan 29 08:15:12 crc kubenswrapper[4826]: W0129 08:15:12.875650 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20e86956_b407_4607_823d_754830701852.slice/crio-c47cdadcfab62bae8baf8bd1ad9225f846d70e81a57e3602e1c4ebfa225fa3bb WatchSource:0}: Error finding container c47cdadcfab62bae8baf8bd1ad9225f846d70e81a57e3602e1c4ebfa225fa3bb: Status 404 returned error can't find the container with id c47cdadcfab62bae8baf8bd1ad9225f846d70e81a57e3602e1c4ebfa225fa3bb Jan 29 08:15:12 crc kubenswrapper[4826]: I0129 08:15:12.954000 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b054-account-create-update-sjkrb" event={"ID":"20e86956-b407-4607-823d-754830701852","Type":"ContainerStarted","Data":"c47cdadcfab62bae8baf8bd1ad9225f846d70e81a57e3602e1c4ebfa225fa3bb"} Jan 29 08:15:12 crc kubenswrapper[4826]: I0129 08:15:12.956281 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-68dff" event={"ID":"372c37cb-4f61-42ba-8bd8-d744b414f501","Type":"ContainerStarted","Data":"586f0b6f5979b804d19efc9c987dba5825e6e00707e61358786c72c77c9beea0"} Jan 29 08:15:12 crc kubenswrapper[4826]: I0129 08:15:12.956331 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-68dff" event={"ID":"372c37cb-4f61-42ba-8bd8-d744b414f501","Type":"ContainerStarted","Data":"2a4d1fe8328391166f864888112c58fd206f24923e124455f8fa462c385fdd6c"} Jan 29 08:15:12 crc kubenswrapper[4826]: I0129 08:15:12.980282 4826 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-68dff" podStartSLOduration=1.980262292 podStartE2EDuration="1.980262292s" podCreationTimestamp="2026-01-29 08:15:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:15:12.972156749 +0000 UTC m=+5496.833949818" watchObservedRunningTime="2026-01-29 08:15:12.980262292 +0000 UTC m=+5496.842055361" Jan 29 08:15:13 crc kubenswrapper[4826]: I0129 08:15:13.969718 4826 generic.go:334] "Generic (PLEG): container finished" podID="20e86956-b407-4607-823d-754830701852" containerID="777bc48a98c5819b956035a3928bc4b93ab4fd15aee51a6053cc1caec1c06b3f" exitCode=0 Jan 29 08:15:13 crc kubenswrapper[4826]: I0129 08:15:13.970130 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b054-account-create-update-sjkrb" event={"ID":"20e86956-b407-4607-823d-754830701852","Type":"ContainerDied","Data":"777bc48a98c5819b956035a3928bc4b93ab4fd15aee51a6053cc1caec1c06b3f"} Jan 29 08:15:13 crc kubenswrapper[4826]: I0129 08:15:13.972389 4826 generic.go:334] "Generic (PLEG): container finished" podID="372c37cb-4f61-42ba-8bd8-d744b414f501" containerID="586f0b6f5979b804d19efc9c987dba5825e6e00707e61358786c72c77c9beea0" exitCode=0 Jan 29 08:15:13 crc kubenswrapper[4826]: I0129 08:15:13.972451 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-68dff" event={"ID":"372c37cb-4f61-42ba-8bd8-d744b414f501","Type":"ContainerDied","Data":"586f0b6f5979b804d19efc9c987dba5825e6e00707e61358786c72c77c9beea0"} Jan 29 08:15:15 crc kubenswrapper[4826]: I0129 08:15:15.434614 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-68dff" Jan 29 08:15:15 crc kubenswrapper[4826]: I0129 08:15:15.437221 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b054-account-create-update-sjkrb" Jan 29 08:15:15 crc kubenswrapper[4826]: I0129 08:15:15.557090 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64t8b\" (UniqueName: \"kubernetes.io/projected/20e86956-b407-4607-823d-754830701852-kube-api-access-64t8b\") pod \"20e86956-b407-4607-823d-754830701852\" (UID: \"20e86956-b407-4607-823d-754830701852\") " Jan 29 08:15:15 crc kubenswrapper[4826]: I0129 08:15:15.557810 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/372c37cb-4f61-42ba-8bd8-d744b414f501-operator-scripts\") pod \"372c37cb-4f61-42ba-8bd8-d744b414f501\" (UID: \"372c37cb-4f61-42ba-8bd8-d744b414f501\") " Jan 29 08:15:15 crc kubenswrapper[4826]: I0129 08:15:15.557981 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4wbs\" (UniqueName: \"kubernetes.io/projected/372c37cb-4f61-42ba-8bd8-d744b414f501-kube-api-access-c4wbs\") pod \"372c37cb-4f61-42ba-8bd8-d744b414f501\" (UID: \"372c37cb-4f61-42ba-8bd8-d744b414f501\") " Jan 29 08:15:15 crc kubenswrapper[4826]: I0129 08:15:15.558271 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/372c37cb-4f61-42ba-8bd8-d744b414f501-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "372c37cb-4f61-42ba-8bd8-d744b414f501" (UID: "372c37cb-4f61-42ba-8bd8-d744b414f501"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:15:15 crc kubenswrapper[4826]: I0129 08:15:15.558715 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20e86956-b407-4607-823d-754830701852-operator-scripts\") pod \"20e86956-b407-4607-823d-754830701852\" (UID: \"20e86956-b407-4607-823d-754830701852\") " Jan 29 08:15:15 crc kubenswrapper[4826]: I0129 08:15:15.559634 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20e86956-b407-4607-823d-754830701852-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "20e86956-b407-4607-823d-754830701852" (UID: "20e86956-b407-4607-823d-754830701852"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:15:15 crc kubenswrapper[4826]: I0129 08:15:15.560419 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/372c37cb-4f61-42ba-8bd8-d744b414f501-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:15:15 crc kubenswrapper[4826]: I0129 08:15:15.560458 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20e86956-b407-4607-823d-754830701852-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:15:15 crc kubenswrapper[4826]: I0129 08:15:15.566767 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20e86956-b407-4607-823d-754830701852-kube-api-access-64t8b" (OuterVolumeSpecName: "kube-api-access-64t8b") pod "20e86956-b407-4607-823d-754830701852" (UID: "20e86956-b407-4607-823d-754830701852"). InnerVolumeSpecName "kube-api-access-64t8b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:15:15 crc kubenswrapper[4826]: I0129 08:15:15.572538 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/372c37cb-4f61-42ba-8bd8-d744b414f501-kube-api-access-c4wbs" (OuterVolumeSpecName: "kube-api-access-c4wbs") pod "372c37cb-4f61-42ba-8bd8-d744b414f501" (UID: "372c37cb-4f61-42ba-8bd8-d744b414f501"). InnerVolumeSpecName "kube-api-access-c4wbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:15:15 crc kubenswrapper[4826]: I0129 08:15:15.662585 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64t8b\" (UniqueName: \"kubernetes.io/projected/20e86956-b407-4607-823d-754830701852-kube-api-access-64t8b\") on node \"crc\" DevicePath \"\"" Jan 29 08:15:15 crc kubenswrapper[4826]: I0129 08:15:15.662626 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4wbs\" (UniqueName: \"kubernetes.io/projected/372c37cb-4f61-42ba-8bd8-d744b414f501-kube-api-access-c4wbs\") on node \"crc\" DevicePath \"\"" Jan 29 08:15:16 crc kubenswrapper[4826]: I0129 08:15:16.007737 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b054-account-create-update-sjkrb" event={"ID":"20e86956-b407-4607-823d-754830701852","Type":"ContainerDied","Data":"c47cdadcfab62bae8baf8bd1ad9225f846d70e81a57e3602e1c4ebfa225fa3bb"} Jan 29 08:15:16 crc kubenswrapper[4826]: I0129 08:15:16.008359 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c47cdadcfab62bae8baf8bd1ad9225f846d70e81a57e3602e1c4ebfa225fa3bb" Jan 29 08:15:16 crc kubenswrapper[4826]: I0129 08:15:16.008463 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b054-account-create-update-sjkrb" Jan 29 08:15:16 crc kubenswrapper[4826]: I0129 08:15:16.017141 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-68dff" event={"ID":"372c37cb-4f61-42ba-8bd8-d744b414f501","Type":"ContainerDied","Data":"2a4d1fe8328391166f864888112c58fd206f24923e124455f8fa462c385fdd6c"} Jan 29 08:15:16 crc kubenswrapper[4826]: I0129 08:15:16.017181 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a4d1fe8328391166f864888112c58fd206f24923e124455f8fa462c385fdd6c" Jan 29 08:15:16 crc kubenswrapper[4826]: I0129 08:15:16.017238 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-68dff" Jan 29 08:15:17 crc kubenswrapper[4826]: I0129 08:15:17.085399 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-8pgv6"] Jan 29 08:15:17 crc kubenswrapper[4826]: E0129 08:15:17.085814 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="372c37cb-4f61-42ba-8bd8-d744b414f501" containerName="mariadb-database-create" Jan 29 08:15:17 crc kubenswrapper[4826]: I0129 08:15:17.085828 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="372c37cb-4f61-42ba-8bd8-d744b414f501" containerName="mariadb-database-create" Jan 29 08:15:17 crc kubenswrapper[4826]: E0129 08:15:17.085859 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20e86956-b407-4607-823d-754830701852" containerName="mariadb-account-create-update" Jan 29 08:15:17 crc kubenswrapper[4826]: I0129 08:15:17.085867 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="20e86956-b407-4607-823d-754830701852" containerName="mariadb-account-create-update" Jan 29 08:15:17 crc kubenswrapper[4826]: I0129 08:15:17.086080 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="20e86956-b407-4607-823d-754830701852" containerName="mariadb-account-create-update" 
Jan 29 08:15:17 crc kubenswrapper[4826]: I0129 08:15:17.086100 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="372c37cb-4f61-42ba-8bd8-d744b414f501" containerName="mariadb-database-create" Jan 29 08:15:17 crc kubenswrapper[4826]: I0129 08:15:17.086818 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-8pgv6" Jan 29 08:15:17 crc kubenswrapper[4826]: I0129 08:15:17.091257 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-s67bn" Jan 29 08:15:17 crc kubenswrapper[4826]: I0129 08:15:17.092606 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 29 08:15:17 crc kubenswrapper[4826]: I0129 08:15:17.101349 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-8pgv6"] Jan 29 08:15:17 crc kubenswrapper[4826]: I0129 08:15:17.192872 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ae68fcf-64c2-4107-b160-7466430edc38-config-data\") pod \"glance-db-sync-8pgv6\" (UID: \"4ae68fcf-64c2-4107-b160-7466430edc38\") " pod="openstack/glance-db-sync-8pgv6" Jan 29 08:15:17 crc kubenswrapper[4826]: I0129 08:15:17.192968 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4ae68fcf-64c2-4107-b160-7466430edc38-db-sync-config-data\") pod \"glance-db-sync-8pgv6\" (UID: \"4ae68fcf-64c2-4107-b160-7466430edc38\") " pod="openstack/glance-db-sync-8pgv6" Jan 29 08:15:17 crc kubenswrapper[4826]: I0129 08:15:17.193068 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn4gm\" (UniqueName: \"kubernetes.io/projected/4ae68fcf-64c2-4107-b160-7466430edc38-kube-api-access-cn4gm\") pod \"glance-db-sync-8pgv6\" (UID: 
\"4ae68fcf-64c2-4107-b160-7466430edc38\") " pod="openstack/glance-db-sync-8pgv6" Jan 29 08:15:17 crc kubenswrapper[4826]: I0129 08:15:17.193184 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ae68fcf-64c2-4107-b160-7466430edc38-combined-ca-bundle\") pod \"glance-db-sync-8pgv6\" (UID: \"4ae68fcf-64c2-4107-b160-7466430edc38\") " pod="openstack/glance-db-sync-8pgv6" Jan 29 08:15:17 crc kubenswrapper[4826]: I0129 08:15:17.295066 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn4gm\" (UniqueName: \"kubernetes.io/projected/4ae68fcf-64c2-4107-b160-7466430edc38-kube-api-access-cn4gm\") pod \"glance-db-sync-8pgv6\" (UID: \"4ae68fcf-64c2-4107-b160-7466430edc38\") " pod="openstack/glance-db-sync-8pgv6" Jan 29 08:15:17 crc kubenswrapper[4826]: I0129 08:15:17.295187 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ae68fcf-64c2-4107-b160-7466430edc38-combined-ca-bundle\") pod \"glance-db-sync-8pgv6\" (UID: \"4ae68fcf-64c2-4107-b160-7466430edc38\") " pod="openstack/glance-db-sync-8pgv6" Jan 29 08:15:17 crc kubenswrapper[4826]: I0129 08:15:17.295217 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ae68fcf-64c2-4107-b160-7466430edc38-config-data\") pod \"glance-db-sync-8pgv6\" (UID: \"4ae68fcf-64c2-4107-b160-7466430edc38\") " pod="openstack/glance-db-sync-8pgv6" Jan 29 08:15:17 crc kubenswrapper[4826]: I0129 08:15:17.295270 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4ae68fcf-64c2-4107-b160-7466430edc38-db-sync-config-data\") pod \"glance-db-sync-8pgv6\" (UID: \"4ae68fcf-64c2-4107-b160-7466430edc38\") " pod="openstack/glance-db-sync-8pgv6" Jan 29 
08:15:17 crc kubenswrapper[4826]: I0129 08:15:17.300856 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ae68fcf-64c2-4107-b160-7466430edc38-combined-ca-bundle\") pod \"glance-db-sync-8pgv6\" (UID: \"4ae68fcf-64c2-4107-b160-7466430edc38\") " pod="openstack/glance-db-sync-8pgv6" Jan 29 08:15:17 crc kubenswrapper[4826]: I0129 08:15:17.302584 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4ae68fcf-64c2-4107-b160-7466430edc38-db-sync-config-data\") pod \"glance-db-sync-8pgv6\" (UID: \"4ae68fcf-64c2-4107-b160-7466430edc38\") " pod="openstack/glance-db-sync-8pgv6" Jan 29 08:15:17 crc kubenswrapper[4826]: I0129 08:15:17.311967 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ae68fcf-64c2-4107-b160-7466430edc38-config-data\") pod \"glance-db-sync-8pgv6\" (UID: \"4ae68fcf-64c2-4107-b160-7466430edc38\") " pod="openstack/glance-db-sync-8pgv6" Jan 29 08:15:17 crc kubenswrapper[4826]: I0129 08:15:17.319050 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn4gm\" (UniqueName: \"kubernetes.io/projected/4ae68fcf-64c2-4107-b160-7466430edc38-kube-api-access-cn4gm\") pod \"glance-db-sync-8pgv6\" (UID: \"4ae68fcf-64c2-4107-b160-7466430edc38\") " pod="openstack/glance-db-sync-8pgv6" Jan 29 08:15:17 crc kubenswrapper[4826]: I0129 08:15:17.454211 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-8pgv6" Jan 29 08:15:17 crc kubenswrapper[4826]: I0129 08:15:17.995659 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-8pgv6"] Jan 29 08:15:17 crc kubenswrapper[4826]: W0129 08:15:17.998174 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ae68fcf_64c2_4107_b160_7466430edc38.slice/crio-a13f42e1c3ff4b817da8d55a163f2c8ff4253a08b1d859902e2575ce51947375 WatchSource:0}: Error finding container a13f42e1c3ff4b817da8d55a163f2c8ff4253a08b1d859902e2575ce51947375: Status 404 returned error can't find the container with id a13f42e1c3ff4b817da8d55a163f2c8ff4253a08b1d859902e2575ce51947375 Jan 29 08:15:18 crc kubenswrapper[4826]: I0129 08:15:18.001011 4826 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 08:15:18 crc kubenswrapper[4826]: I0129 08:15:18.036409 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8pgv6" event={"ID":"4ae68fcf-64c2-4107-b160-7466430edc38","Type":"ContainerStarted","Data":"a13f42e1c3ff4b817da8d55a163f2c8ff4253a08b1d859902e2575ce51947375"} Jan 29 08:15:20 crc kubenswrapper[4826]: I0129 08:15:20.234258 4826 scope.go:117] "RemoveContainer" containerID="b46d35339a20cc1923f1e8ae7dbc8c2546b07b7a89a8cab68e5f6a9c94110af4" Jan 29 08:15:20 crc kubenswrapper[4826]: I0129 08:15:20.279844 4826 scope.go:117] "RemoveContainer" containerID="7940dff32b057f62728bc628f3fadc620a74b1c0ebd2d239b3b128d23d0396f7" Jan 29 08:15:20 crc kubenswrapper[4826]: I0129 08:15:20.298046 4826 scope.go:117] "RemoveContainer" containerID="4c5112a45c83cea2a6e958a4b92428138c3bfb749a18590ad0bd6fdfdf165c4a" Jan 29 08:15:20 crc kubenswrapper[4826]: I0129 08:15:20.335519 4826 scope.go:117] "RemoveContainer" containerID="aa5733609cc4d54bbe7bfd6da21a24af57cd2ba5916bbac9294318e7dc3d1b5e" Jan 29 08:15:27 crc kubenswrapper[4826]: I0129 
08:15:27.462193 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jtwhw"] Jan 29 08:15:27 crc kubenswrapper[4826]: I0129 08:15:27.465932 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jtwhw" Jan 29 08:15:27 crc kubenswrapper[4826]: I0129 08:15:27.478401 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jtwhw"] Jan 29 08:15:27 crc kubenswrapper[4826]: I0129 08:15:27.535852 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdkrf\" (UniqueName: \"kubernetes.io/projected/64f2249b-7eec-4121-9b5b-3fd964858886-kube-api-access-jdkrf\") pod \"redhat-marketplace-jtwhw\" (UID: \"64f2249b-7eec-4121-9b5b-3fd964858886\") " pod="openshift-marketplace/redhat-marketplace-jtwhw" Jan 29 08:15:27 crc kubenswrapper[4826]: I0129 08:15:27.536196 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64f2249b-7eec-4121-9b5b-3fd964858886-utilities\") pod \"redhat-marketplace-jtwhw\" (UID: \"64f2249b-7eec-4121-9b5b-3fd964858886\") " pod="openshift-marketplace/redhat-marketplace-jtwhw" Jan 29 08:15:27 crc kubenswrapper[4826]: I0129 08:15:27.536480 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64f2249b-7eec-4121-9b5b-3fd964858886-catalog-content\") pod \"redhat-marketplace-jtwhw\" (UID: \"64f2249b-7eec-4121-9b5b-3fd964858886\") " pod="openshift-marketplace/redhat-marketplace-jtwhw" Jan 29 08:15:27 crc kubenswrapper[4826]: I0129 08:15:27.637281 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdkrf\" (UniqueName: \"kubernetes.io/projected/64f2249b-7eec-4121-9b5b-3fd964858886-kube-api-access-jdkrf\") 
pod \"redhat-marketplace-jtwhw\" (UID: \"64f2249b-7eec-4121-9b5b-3fd964858886\") " pod="openshift-marketplace/redhat-marketplace-jtwhw" Jan 29 08:15:27 crc kubenswrapper[4826]: I0129 08:15:27.637707 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64f2249b-7eec-4121-9b5b-3fd964858886-utilities\") pod \"redhat-marketplace-jtwhw\" (UID: \"64f2249b-7eec-4121-9b5b-3fd964858886\") " pod="openshift-marketplace/redhat-marketplace-jtwhw" Jan 29 08:15:27 crc kubenswrapper[4826]: I0129 08:15:27.637738 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64f2249b-7eec-4121-9b5b-3fd964858886-catalog-content\") pod \"redhat-marketplace-jtwhw\" (UID: \"64f2249b-7eec-4121-9b5b-3fd964858886\") " pod="openshift-marketplace/redhat-marketplace-jtwhw" Jan 29 08:15:27 crc kubenswrapper[4826]: I0129 08:15:27.638561 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64f2249b-7eec-4121-9b5b-3fd964858886-utilities\") pod \"redhat-marketplace-jtwhw\" (UID: \"64f2249b-7eec-4121-9b5b-3fd964858886\") " pod="openshift-marketplace/redhat-marketplace-jtwhw" Jan 29 08:15:27 crc kubenswrapper[4826]: I0129 08:15:27.639257 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64f2249b-7eec-4121-9b5b-3fd964858886-catalog-content\") pod \"redhat-marketplace-jtwhw\" (UID: \"64f2249b-7eec-4121-9b5b-3fd964858886\") " pod="openshift-marketplace/redhat-marketplace-jtwhw" Jan 29 08:15:27 crc kubenswrapper[4826]: I0129 08:15:27.676546 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdkrf\" (UniqueName: \"kubernetes.io/projected/64f2249b-7eec-4121-9b5b-3fd964858886-kube-api-access-jdkrf\") pod \"redhat-marketplace-jtwhw\" (UID: 
\"64f2249b-7eec-4121-9b5b-3fd964858886\") " pod="openshift-marketplace/redhat-marketplace-jtwhw" Jan 29 08:15:27 crc kubenswrapper[4826]: I0129 08:15:27.794488 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jtwhw" Jan 29 08:15:35 crc kubenswrapper[4826]: I0129 08:15:35.099891 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jtwhw"] Jan 29 08:15:35 crc kubenswrapper[4826]: W0129 08:15:35.108902 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64f2249b_7eec_4121_9b5b_3fd964858886.slice/crio-afcbb8d3d47ad968b5aa67602a48e00383515bf0776cd7dc0dbf3a6109f9bbb9 WatchSource:0}: Error finding container afcbb8d3d47ad968b5aa67602a48e00383515bf0776cd7dc0dbf3a6109f9bbb9: Status 404 returned error can't find the container with id afcbb8d3d47ad968b5aa67602a48e00383515bf0776cd7dc0dbf3a6109f9bbb9 Jan 29 08:15:35 crc kubenswrapper[4826]: I0129 08:15:35.216480 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jtwhw" event={"ID":"64f2249b-7eec-4121-9b5b-3fd964858886","Type":"ContainerStarted","Data":"afcbb8d3d47ad968b5aa67602a48e00383515bf0776cd7dc0dbf3a6109f9bbb9"} Jan 29 08:15:36 crc kubenswrapper[4826]: I0129 08:15:36.233847 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8pgv6" event={"ID":"4ae68fcf-64c2-4107-b160-7466430edc38","Type":"ContainerStarted","Data":"3bc120af49c3b3e7f315fce86c7844d79f69df43166c7937deb247993a4bcf47"} Jan 29 08:15:36 crc kubenswrapper[4826]: I0129 08:15:36.239660 4826 generic.go:334] "Generic (PLEG): container finished" podID="64f2249b-7eec-4121-9b5b-3fd964858886" containerID="5e0fee1a5797a527f379fc6ec18336e11d0b6b5c4041732931818ee964b7a7bb" exitCode=0 Jan 29 08:15:36 crc kubenswrapper[4826]: I0129 08:15:36.239722 4826 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-jtwhw" event={"ID":"64f2249b-7eec-4121-9b5b-3fd964858886","Type":"ContainerDied","Data":"5e0fee1a5797a527f379fc6ec18336e11d0b6b5c4041732931818ee964b7a7bb"} Jan 29 08:15:36 crc kubenswrapper[4826]: I0129 08:15:36.262927 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-8pgv6" podStartSLOduration=2.605308446 podStartE2EDuration="19.262897031s" podCreationTimestamp="2026-01-29 08:15:17 +0000 UTC" firstStartedPulling="2026-01-29 08:15:18.000809965 +0000 UTC m=+5501.862603034" lastFinishedPulling="2026-01-29 08:15:34.65839855 +0000 UTC m=+5518.520191619" observedRunningTime="2026-01-29 08:15:36.259867432 +0000 UTC m=+5520.121660511" watchObservedRunningTime="2026-01-29 08:15:36.262897031 +0000 UTC m=+5520.124690140" Jan 29 08:15:37 crc kubenswrapper[4826]: I0129 08:15:37.273784 4826 generic.go:334] "Generic (PLEG): container finished" podID="64f2249b-7eec-4121-9b5b-3fd964858886" containerID="578becf96f0517120d4250e306af3a1da57f708439157e779517edbec09bd1ba" exitCode=0 Jan 29 08:15:37 crc kubenswrapper[4826]: I0129 08:15:37.274185 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jtwhw" event={"ID":"64f2249b-7eec-4121-9b5b-3fd964858886","Type":"ContainerDied","Data":"578becf96f0517120d4250e306af3a1da57f708439157e779517edbec09bd1ba"} Jan 29 08:15:38 crc kubenswrapper[4826]: I0129 08:15:38.287366 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jtwhw" event={"ID":"64f2249b-7eec-4121-9b5b-3fd964858886","Type":"ContainerStarted","Data":"9c089617cbc0c14d6f5dce7d66ae08b2402b566c916423266698794f8ea3267b"} Jan 29 08:15:38 crc kubenswrapper[4826]: I0129 08:15:38.322826 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jtwhw" podStartSLOduration=9.875180381 podStartE2EDuration="11.322795028s" 
podCreationTimestamp="2026-01-29 08:15:27 +0000 UTC" firstStartedPulling="2026-01-29 08:15:36.242239499 +0000 UTC m=+5520.104032608" lastFinishedPulling="2026-01-29 08:15:37.689854186 +0000 UTC m=+5521.551647255" observedRunningTime="2026-01-29 08:15:38.320347744 +0000 UTC m=+5522.182140823" watchObservedRunningTime="2026-01-29 08:15:38.322795028 +0000 UTC m=+5522.184588107" Jan 29 08:15:39 crc kubenswrapper[4826]: I0129 08:15:39.299285 4826 generic.go:334] "Generic (PLEG): container finished" podID="4ae68fcf-64c2-4107-b160-7466430edc38" containerID="3bc120af49c3b3e7f315fce86c7844d79f69df43166c7937deb247993a4bcf47" exitCode=0 Jan 29 08:15:39 crc kubenswrapper[4826]: I0129 08:15:39.299373 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8pgv6" event={"ID":"4ae68fcf-64c2-4107-b160-7466430edc38","Type":"ContainerDied","Data":"3bc120af49c3b3e7f315fce86c7844d79f69df43166c7937deb247993a4bcf47"} Jan 29 08:15:40 crc kubenswrapper[4826]: I0129 08:15:40.800460 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-8pgv6" Jan 29 08:15:40 crc kubenswrapper[4826]: I0129 08:15:40.917647 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4ae68fcf-64c2-4107-b160-7466430edc38-db-sync-config-data\") pod \"4ae68fcf-64c2-4107-b160-7466430edc38\" (UID: \"4ae68fcf-64c2-4107-b160-7466430edc38\") " Jan 29 08:15:40 crc kubenswrapper[4826]: I0129 08:15:40.917762 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ae68fcf-64c2-4107-b160-7466430edc38-combined-ca-bundle\") pod \"4ae68fcf-64c2-4107-b160-7466430edc38\" (UID: \"4ae68fcf-64c2-4107-b160-7466430edc38\") " Jan 29 08:15:40 crc kubenswrapper[4826]: I0129 08:15:40.917878 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ae68fcf-64c2-4107-b160-7466430edc38-config-data\") pod \"4ae68fcf-64c2-4107-b160-7466430edc38\" (UID: \"4ae68fcf-64c2-4107-b160-7466430edc38\") " Jan 29 08:15:40 crc kubenswrapper[4826]: I0129 08:15:40.918005 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cn4gm\" (UniqueName: \"kubernetes.io/projected/4ae68fcf-64c2-4107-b160-7466430edc38-kube-api-access-cn4gm\") pod \"4ae68fcf-64c2-4107-b160-7466430edc38\" (UID: \"4ae68fcf-64c2-4107-b160-7466430edc38\") " Jan 29 08:15:40 crc kubenswrapper[4826]: I0129 08:15:40.925257 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ae68fcf-64c2-4107-b160-7466430edc38-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4ae68fcf-64c2-4107-b160-7466430edc38" (UID: "4ae68fcf-64c2-4107-b160-7466430edc38"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:15:40 crc kubenswrapper[4826]: I0129 08:15:40.931804 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ae68fcf-64c2-4107-b160-7466430edc38-kube-api-access-cn4gm" (OuterVolumeSpecName: "kube-api-access-cn4gm") pod "4ae68fcf-64c2-4107-b160-7466430edc38" (UID: "4ae68fcf-64c2-4107-b160-7466430edc38"). InnerVolumeSpecName "kube-api-access-cn4gm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:15:40 crc kubenswrapper[4826]: I0129 08:15:40.967737 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ae68fcf-64c2-4107-b160-7466430edc38-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ae68fcf-64c2-4107-b160-7466430edc38" (UID: "4ae68fcf-64c2-4107-b160-7466430edc38"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:15:40 crc kubenswrapper[4826]: I0129 08:15:40.975084 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ae68fcf-64c2-4107-b160-7466430edc38-config-data" (OuterVolumeSpecName: "config-data") pod "4ae68fcf-64c2-4107-b160-7466430edc38" (UID: "4ae68fcf-64c2-4107-b160-7466430edc38"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.021396 4826 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4ae68fcf-64c2-4107-b160-7466430edc38-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.021444 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ae68fcf-64c2-4107-b160-7466430edc38-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.021457 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ae68fcf-64c2-4107-b160-7466430edc38-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.021500 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cn4gm\" (UniqueName: \"kubernetes.io/projected/4ae68fcf-64c2-4107-b160-7466430edc38-kube-api-access-cn4gm\") on node \"crc\" DevicePath \"\"" Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.327809 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8pgv6" event={"ID":"4ae68fcf-64c2-4107-b160-7466430edc38","Type":"ContainerDied","Data":"a13f42e1c3ff4b817da8d55a163f2c8ff4253a08b1d859902e2575ce51947375"} Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.327876 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a13f42e1c3ff4b817da8d55a163f2c8ff4253a08b1d859902e2575ce51947375" Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.327926 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-8pgv6" Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.683975 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 08:15:41 crc kubenswrapper[4826]: E0129 08:15:41.684525 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ae68fcf-64c2-4107-b160-7466430edc38" containerName="glance-db-sync" Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.684558 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ae68fcf-64c2-4107-b160-7466430edc38" containerName="glance-db-sync" Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.684872 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ae68fcf-64c2-4107-b160-7466430edc38" containerName="glance-db-sync" Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.688034 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.691359 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.692092 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-s67bn" Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.692393 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.693419 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.837061 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85f559f45c-jcxmn"] Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.838336 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa-config-data\") pod \"glance-default-external-api-0\" (UID: \"c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa\") " pod="openstack/glance-default-external-api-0" Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.838389 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa-logs\") pod \"glance-default-external-api-0\" (UID: \"c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa\") " pod="openstack/glance-default-external-api-0" Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.838420 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa-scripts\") pod \"glance-default-external-api-0\" (UID: \"c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa\") " pod="openstack/glance-default-external-api-0" Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.838445 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa\") " pod="openstack/glance-default-external-api-0" Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.838489 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b568\" (UniqueName: \"kubernetes.io/projected/c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa-kube-api-access-5b568\") pod \"glance-default-external-api-0\" (UID: \"c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa\") " pod="openstack/glance-default-external-api-0" Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.838511 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa\") " pod="openstack/glance-default-external-api-0" Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.842739 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85f559f45c-jcxmn" Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.888507 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85f559f45c-jcxmn"] Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.940350 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4ljt\" (UniqueName: \"kubernetes.io/projected/ad84c848-d300-4e24-bcf0-c6f5ff046a87-kube-api-access-w4ljt\") pod \"dnsmasq-dns-85f559f45c-jcxmn\" (UID: \"ad84c848-d300-4e24-bcf0-c6f5ff046a87\") " pod="openstack/dnsmasq-dns-85f559f45c-jcxmn" Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.940406 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad84c848-d300-4e24-bcf0-c6f5ff046a87-dns-svc\") pod \"dnsmasq-dns-85f559f45c-jcxmn\" (UID: \"ad84c848-d300-4e24-bcf0-c6f5ff046a87\") " pod="openstack/dnsmasq-dns-85f559f45c-jcxmn" Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.940453 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa-config-data\") pod \"glance-default-external-api-0\" (UID: \"c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa\") " pod="openstack/glance-default-external-api-0" Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.940475 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa-logs\") pod \"glance-default-external-api-0\" (UID: \"c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa\") " pod="openstack/glance-default-external-api-0" Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.940503 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad84c848-d300-4e24-bcf0-c6f5ff046a87-config\") pod \"dnsmasq-dns-85f559f45c-jcxmn\" (UID: \"ad84c848-d300-4e24-bcf0-c6f5ff046a87\") " pod="openstack/dnsmasq-dns-85f559f45c-jcxmn" Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.940523 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa-scripts\") pod \"glance-default-external-api-0\" (UID: \"c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa\") " pod="openstack/glance-default-external-api-0" Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.940542 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa\") " pod="openstack/glance-default-external-api-0" Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.940580 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad84c848-d300-4e24-bcf0-c6f5ff046a87-ovsdbserver-nb\") pod \"dnsmasq-dns-85f559f45c-jcxmn\" (UID: \"ad84c848-d300-4e24-bcf0-c6f5ff046a87\") " pod="openstack/dnsmasq-dns-85f559f45c-jcxmn" Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.940598 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b568\" (UniqueName: 
\"kubernetes.io/projected/c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa-kube-api-access-5b568\") pod \"glance-default-external-api-0\" (UID: \"c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa\") " pod="openstack/glance-default-external-api-0" Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.940614 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa\") " pod="openstack/glance-default-external-api-0" Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.940654 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad84c848-d300-4e24-bcf0-c6f5ff046a87-ovsdbserver-sb\") pod \"dnsmasq-dns-85f559f45c-jcxmn\" (UID: \"ad84c848-d300-4e24-bcf0-c6f5ff046a87\") " pod="openstack/dnsmasq-dns-85f559f45c-jcxmn" Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.941097 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa-logs\") pod \"glance-default-external-api-0\" (UID: \"c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa\") " pod="openstack/glance-default-external-api-0" Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.941319 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa\") " pod="openstack/glance-default-external-api-0" Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.945318 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa-config-data\") pod \"glance-default-external-api-0\" (UID: \"c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa\") " pod="openstack/glance-default-external-api-0" Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.950226 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa-scripts\") pod \"glance-default-external-api-0\" (UID: \"c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa\") " pod="openstack/glance-default-external-api-0" Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.951000 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa\") " pod="openstack/glance-default-external-api-0" Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.965626 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b568\" (UniqueName: \"kubernetes.io/projected/c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa-kube-api-access-5b568\") pod \"glance-default-external-api-0\" (UID: \"c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa\") " pod="openstack/glance-default-external-api-0" Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.975340 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.976774 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.978752 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 08:15:41 crc kubenswrapper[4826]: I0129 08:15:41.981796 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 08:15:42 crc kubenswrapper[4826]: I0129 08:15:42.010433 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 08:15:42 crc kubenswrapper[4826]: I0129 08:15:42.042914 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad84c848-d300-4e24-bcf0-c6f5ff046a87-dns-svc\") pod \"dnsmasq-dns-85f559f45c-jcxmn\" (UID: \"ad84c848-d300-4e24-bcf0-c6f5ff046a87\") " pod="openstack/dnsmasq-dns-85f559f45c-jcxmn" Jan 29 08:15:42 crc kubenswrapper[4826]: I0129 08:15:42.043346 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad84c848-d300-4e24-bcf0-c6f5ff046a87-config\") pod \"dnsmasq-dns-85f559f45c-jcxmn\" (UID: \"ad84c848-d300-4e24-bcf0-c6f5ff046a87\") " pod="openstack/dnsmasq-dns-85f559f45c-jcxmn" Jan 29 08:15:42 crc kubenswrapper[4826]: I0129 08:15:42.043414 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad84c848-d300-4e24-bcf0-c6f5ff046a87-ovsdbserver-nb\") pod \"dnsmasq-dns-85f559f45c-jcxmn\" (UID: \"ad84c848-d300-4e24-bcf0-c6f5ff046a87\") " pod="openstack/dnsmasq-dns-85f559f45c-jcxmn" Jan 29 08:15:42 crc kubenswrapper[4826]: I0129 08:15:42.043474 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad84c848-d300-4e24-bcf0-c6f5ff046a87-ovsdbserver-sb\") pod 
\"dnsmasq-dns-85f559f45c-jcxmn\" (UID: \"ad84c848-d300-4e24-bcf0-c6f5ff046a87\") " pod="openstack/dnsmasq-dns-85f559f45c-jcxmn" Jan 29 08:15:42 crc kubenswrapper[4826]: I0129 08:15:42.043577 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4ljt\" (UniqueName: \"kubernetes.io/projected/ad84c848-d300-4e24-bcf0-c6f5ff046a87-kube-api-access-w4ljt\") pod \"dnsmasq-dns-85f559f45c-jcxmn\" (UID: \"ad84c848-d300-4e24-bcf0-c6f5ff046a87\") " pod="openstack/dnsmasq-dns-85f559f45c-jcxmn" Jan 29 08:15:42 crc kubenswrapper[4826]: I0129 08:15:42.043875 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad84c848-d300-4e24-bcf0-c6f5ff046a87-dns-svc\") pod \"dnsmasq-dns-85f559f45c-jcxmn\" (UID: \"ad84c848-d300-4e24-bcf0-c6f5ff046a87\") " pod="openstack/dnsmasq-dns-85f559f45c-jcxmn" Jan 29 08:15:42 crc kubenswrapper[4826]: I0129 08:15:42.044409 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad84c848-d300-4e24-bcf0-c6f5ff046a87-ovsdbserver-nb\") pod \"dnsmasq-dns-85f559f45c-jcxmn\" (UID: \"ad84c848-d300-4e24-bcf0-c6f5ff046a87\") " pod="openstack/dnsmasq-dns-85f559f45c-jcxmn" Jan 29 08:15:42 crc kubenswrapper[4826]: I0129 08:15:42.045037 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad84c848-d300-4e24-bcf0-c6f5ff046a87-config\") pod \"dnsmasq-dns-85f559f45c-jcxmn\" (UID: \"ad84c848-d300-4e24-bcf0-c6f5ff046a87\") " pod="openstack/dnsmasq-dns-85f559f45c-jcxmn" Jan 29 08:15:42 crc kubenswrapper[4826]: I0129 08:15:42.045929 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad84c848-d300-4e24-bcf0-c6f5ff046a87-ovsdbserver-sb\") pod \"dnsmasq-dns-85f559f45c-jcxmn\" (UID: \"ad84c848-d300-4e24-bcf0-c6f5ff046a87\") " 
pod="openstack/dnsmasq-dns-85f559f45c-jcxmn"
Jan 29 08:15:42 crc kubenswrapper[4826]: I0129 08:15:42.062167 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4ljt\" (UniqueName: \"kubernetes.io/projected/ad84c848-d300-4e24-bcf0-c6f5ff046a87-kube-api-access-w4ljt\") pod \"dnsmasq-dns-85f559f45c-jcxmn\" (UID: \"ad84c848-d300-4e24-bcf0-c6f5ff046a87\") " pod="openstack/dnsmasq-dns-85f559f45c-jcxmn"
Jan 29 08:15:42 crc kubenswrapper[4826]: I0129 08:15:42.147319 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b066caca-1cff-4a24-babc-e1ca371de7ee-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b066caca-1cff-4a24-babc-e1ca371de7ee\") " pod="openstack/glance-default-internal-api-0"
Jan 29 08:15:42 crc kubenswrapper[4826]: I0129 08:15:42.147364 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b066caca-1cff-4a24-babc-e1ca371de7ee-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b066caca-1cff-4a24-babc-e1ca371de7ee\") " pod="openstack/glance-default-internal-api-0"
Jan 29 08:15:42 crc kubenswrapper[4826]: I0129 08:15:42.147399 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrrvc\" (UniqueName: \"kubernetes.io/projected/b066caca-1cff-4a24-babc-e1ca371de7ee-kube-api-access-xrrvc\") pod \"glance-default-internal-api-0\" (UID: \"b066caca-1cff-4a24-babc-e1ca371de7ee\") " pod="openstack/glance-default-internal-api-0"
Jan 29 08:15:42 crc kubenswrapper[4826]: I0129 08:15:42.147476 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b066caca-1cff-4a24-babc-e1ca371de7ee-logs\") pod \"glance-default-internal-api-0\" (UID: \"b066caca-1cff-4a24-babc-e1ca371de7ee\") " pod="openstack/glance-default-internal-api-0"
Jan 29 08:15:42 crc kubenswrapper[4826]: I0129 08:15:42.147528 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b066caca-1cff-4a24-babc-e1ca371de7ee-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b066caca-1cff-4a24-babc-e1ca371de7ee\") " pod="openstack/glance-default-internal-api-0"
Jan 29 08:15:42 crc kubenswrapper[4826]: I0129 08:15:42.147624 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b066caca-1cff-4a24-babc-e1ca371de7ee-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b066caca-1cff-4a24-babc-e1ca371de7ee\") " pod="openstack/glance-default-internal-api-0"
Jan 29 08:15:42 crc kubenswrapper[4826]: I0129 08:15:42.178602 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85f559f45c-jcxmn"
Jan 29 08:15:42 crc kubenswrapper[4826]: I0129 08:15:42.249422 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b066caca-1cff-4a24-babc-e1ca371de7ee-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b066caca-1cff-4a24-babc-e1ca371de7ee\") " pod="openstack/glance-default-internal-api-0"
Jan 29 08:15:42 crc kubenswrapper[4826]: I0129 08:15:42.249479 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b066caca-1cff-4a24-babc-e1ca371de7ee-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b066caca-1cff-4a24-babc-e1ca371de7ee\") " pod="openstack/glance-default-internal-api-0"
Jan 29 08:15:42 crc kubenswrapper[4826]: I0129 08:15:42.249526 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrrvc\" (UniqueName: \"kubernetes.io/projected/b066caca-1cff-4a24-babc-e1ca371de7ee-kube-api-access-xrrvc\") pod \"glance-default-internal-api-0\" (UID: \"b066caca-1cff-4a24-babc-e1ca371de7ee\") " pod="openstack/glance-default-internal-api-0"
Jan 29 08:15:42 crc kubenswrapper[4826]: I0129 08:15:42.249580 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b066caca-1cff-4a24-babc-e1ca371de7ee-logs\") pod \"glance-default-internal-api-0\" (UID: \"b066caca-1cff-4a24-babc-e1ca371de7ee\") " pod="openstack/glance-default-internal-api-0"
Jan 29 08:15:42 crc kubenswrapper[4826]: I0129 08:15:42.249634 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b066caca-1cff-4a24-babc-e1ca371de7ee-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b066caca-1cff-4a24-babc-e1ca371de7ee\") " pod="openstack/glance-default-internal-api-0"
Jan 29 08:15:42 crc kubenswrapper[4826]: I0129 08:15:42.249675 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b066caca-1cff-4a24-babc-e1ca371de7ee-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b066caca-1cff-4a24-babc-e1ca371de7ee\") " pod="openstack/glance-default-internal-api-0"
Jan 29 08:15:42 crc kubenswrapper[4826]: I0129 08:15:42.252726 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b066caca-1cff-4a24-babc-e1ca371de7ee-logs\") pod \"glance-default-internal-api-0\" (UID: \"b066caca-1cff-4a24-babc-e1ca371de7ee\") " pod="openstack/glance-default-internal-api-0"
Jan 29 08:15:42 crc kubenswrapper[4826]: I0129 08:15:42.252995 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b066caca-1cff-4a24-babc-e1ca371de7ee-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b066caca-1cff-4a24-babc-e1ca371de7ee\") " pod="openstack/glance-default-internal-api-0"
Jan 29 08:15:42 crc kubenswrapper[4826]: I0129 08:15:42.254915 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b066caca-1cff-4a24-babc-e1ca371de7ee-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b066caca-1cff-4a24-babc-e1ca371de7ee\") " pod="openstack/glance-default-internal-api-0"
Jan 29 08:15:42 crc kubenswrapper[4826]: I0129 08:15:42.255013 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b066caca-1cff-4a24-babc-e1ca371de7ee-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b066caca-1cff-4a24-babc-e1ca371de7ee\") " pod="openstack/glance-default-internal-api-0"
Jan 29 08:15:42 crc kubenswrapper[4826]: I0129 08:15:42.260001 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b066caca-1cff-4a24-babc-e1ca371de7ee-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b066caca-1cff-4a24-babc-e1ca371de7ee\") " pod="openstack/glance-default-internal-api-0"
Jan 29 08:15:42 crc kubenswrapper[4826]: I0129 08:15:42.273235 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrrvc\" (UniqueName: \"kubernetes.io/projected/b066caca-1cff-4a24-babc-e1ca371de7ee-kube-api-access-xrrvc\") pod \"glance-default-internal-api-0\" (UID: \"b066caca-1cff-4a24-babc-e1ca371de7ee\") " pod="openstack/glance-default-internal-api-0"
Jan 29 08:15:42 crc kubenswrapper[4826]: I0129 08:15:42.432840 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 29 08:15:42 crc kubenswrapper[4826]: I0129 08:15:42.551987 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 08:15:42 crc kubenswrapper[4826]: W0129 08:15:42.562155 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8d89f79_d7a9_4da9_88e6_dcddd0f0ebfa.slice/crio-fcb2b3bd4b3ebff1002baaf5ed59c867993cc327fc0ef740071e1e290924b1bd WatchSource:0}: Error finding container fcb2b3bd4b3ebff1002baaf5ed59c867993cc327fc0ef740071e1e290924b1bd: Status 404 returned error can't find the container with id fcb2b3bd4b3ebff1002baaf5ed59c867993cc327fc0ef740071e1e290924b1bd
Jan 29 08:15:42 crc kubenswrapper[4826]: I0129 08:15:42.644830 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85f559f45c-jcxmn"]
Jan 29 08:15:42 crc kubenswrapper[4826]: W0129 08:15:42.648630 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad84c848_d300_4e24_bcf0_c6f5ff046a87.slice/crio-d2005472a79416f543323fa31cc752fb02113c2c66042cf8653111a2eadd19ca WatchSource:0}: Error finding container d2005472a79416f543323fa31cc752fb02113c2c66042cf8653111a2eadd19ca: Status 404 returned error can't find the container with id d2005472a79416f543323fa31cc752fb02113c2c66042cf8653111a2eadd19ca
Jan 29 08:15:43 crc kubenswrapper[4826]: I0129 08:15:43.035961 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 29 08:15:43 crc kubenswrapper[4826]: W0129 08:15:43.045869 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb066caca_1cff_4a24_babc_e1ca371de7ee.slice/crio-63b049f4b1bbded66229d6b81e2df45431b63ae3af8f3124fa7c21b083ee1166 WatchSource:0}: Error finding container 63b049f4b1bbded66229d6b81e2df45431b63ae3af8f3124fa7c21b083ee1166: Status 404 returned error can't find the container with id 63b049f4b1bbded66229d6b81e2df45431b63ae3af8f3124fa7c21b083ee1166
Jan 29 08:15:43 crc kubenswrapper[4826]: I0129 08:15:43.074012 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 08:15:43 crc kubenswrapper[4826]: I0129 08:15:43.346076 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b066caca-1cff-4a24-babc-e1ca371de7ee","Type":"ContainerStarted","Data":"63b049f4b1bbded66229d6b81e2df45431b63ae3af8f3124fa7c21b083ee1166"}
Jan 29 08:15:43 crc kubenswrapper[4826]: I0129 08:15:43.348519 4826 generic.go:334] "Generic (PLEG): container finished" podID="ad84c848-d300-4e24-bcf0-c6f5ff046a87" containerID="7a4523570412e58db500b5b4a18db5fc4f9b3c5156e6dfc98ec2e63057d9e15d" exitCode=0
Jan 29 08:15:43 crc kubenswrapper[4826]: I0129 08:15:43.348564 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f559f45c-jcxmn" event={"ID":"ad84c848-d300-4e24-bcf0-c6f5ff046a87","Type":"ContainerDied","Data":"7a4523570412e58db500b5b4a18db5fc4f9b3c5156e6dfc98ec2e63057d9e15d"}
Jan 29 08:15:43 crc kubenswrapper[4826]: I0129 08:15:43.348581 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f559f45c-jcxmn" event={"ID":"ad84c848-d300-4e24-bcf0-c6f5ff046a87","Type":"ContainerStarted","Data":"d2005472a79416f543323fa31cc752fb02113c2c66042cf8653111a2eadd19ca"}
Jan 29 08:15:43 crc kubenswrapper[4826]: I0129 08:15:43.357084 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa","Type":"ContainerStarted","Data":"ea89f3bd45ed66a77a665dd122468e109bc978210c89e46694e8dde0b89e858b"}
Jan 29 08:15:43 crc kubenswrapper[4826]: I0129 08:15:43.357131 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa","Type":"ContainerStarted","Data":"fcb2b3bd4b3ebff1002baaf5ed59c867993cc327fc0ef740071e1e290924b1bd"}
Jan 29 08:15:44 crc kubenswrapper[4826]: I0129 08:15:44.367288 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f559f45c-jcxmn" event={"ID":"ad84c848-d300-4e24-bcf0-c6f5ff046a87","Type":"ContainerStarted","Data":"98d355213920248f787fd4090c2ab02a4988299129c22853b1b19f36f0dba021"}
Jan 29 08:15:44 crc kubenswrapper[4826]: I0129 08:15:44.367679 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85f559f45c-jcxmn"
Jan 29 08:15:44 crc kubenswrapper[4826]: I0129 08:15:44.371733 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa","Type":"ContainerStarted","Data":"360d8beb98cbd3e0344cd77457b6821e62ad211229f097e2fb748a8eb65239c7"}
Jan 29 08:15:44 crc kubenswrapper[4826]: I0129 08:15:44.371851 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa" containerName="glance-log" containerID="cri-o://ea89f3bd45ed66a77a665dd122468e109bc978210c89e46694e8dde0b89e858b" gracePeriod=30
Jan 29 08:15:44 crc kubenswrapper[4826]: I0129 08:15:44.372106 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa" containerName="glance-httpd" containerID="cri-o://360d8beb98cbd3e0344cd77457b6821e62ad211229f097e2fb748a8eb65239c7" gracePeriod=30
Jan 29 08:15:44 crc kubenswrapper[4826]: I0129 08:15:44.374637 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b066caca-1cff-4a24-babc-e1ca371de7ee","Type":"ContainerStarted","Data":"c07d8f9162f3659aa9082bebf3c6baca0e735e0e078964e2e22ab56c158d0cbf"}
Jan 29 08:15:44 crc kubenswrapper[4826]: I0129 08:15:44.374659 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b066caca-1cff-4a24-babc-e1ca371de7ee","Type":"ContainerStarted","Data":"fa3776c9b951ca1dac7dca7de86529ed729f0b9d9b0c46cb94c8f87b718ba975"}
Jan 29 08:15:44 crc kubenswrapper[4826]: I0129 08:15:44.458197 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85f559f45c-jcxmn" podStartSLOduration=3.458180795 podStartE2EDuration="3.458180795s" podCreationTimestamp="2026-01-29 08:15:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:15:44.410932063 +0000 UTC m=+5528.272725132" watchObservedRunningTime="2026-01-29 08:15:44.458180795 +0000 UTC m=+5528.319973864"
Jan 29 08:15:44 crc kubenswrapper[4826]: I0129 08:15:44.464392 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.464374528 podStartE2EDuration="3.464374528s" podCreationTimestamp="2026-01-29 08:15:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:15:44.457121497 +0000 UTC m=+5528.318914566" watchObservedRunningTime="2026-01-29 08:15:44.464374528 +0000 UTC m=+5528.326167597"
Jan 29 08:15:44 crc kubenswrapper[4826]: I0129 08:15:44.505793 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.505776786 podStartE2EDuration="3.505776786s" podCreationTimestamp="2026-01-29 08:15:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:15:44.500039265 +0000 UTC m=+5528.361832334" watchObservedRunningTime="2026-01-29 08:15:44.505776786 +0000 UTC m=+5528.367569855"
Jan 29 08:15:44 crc kubenswrapper[4826]: I0129 08:15:44.743572 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 29 08:15:45 crc kubenswrapper[4826]: I0129 08:15:45.396089 4826 generic.go:334] "Generic (PLEG): container finished" podID="c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa" containerID="360d8beb98cbd3e0344cd77457b6821e62ad211229f097e2fb748a8eb65239c7" exitCode=0
Jan 29 08:15:45 crc kubenswrapper[4826]: I0129 08:15:45.396128 4826 generic.go:334] "Generic (PLEG): container finished" podID="c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa" containerID="ea89f3bd45ed66a77a665dd122468e109bc978210c89e46694e8dde0b89e858b" exitCode=143
Jan 29 08:15:45 crc kubenswrapper[4826]: I0129 08:15:45.396414 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa","Type":"ContainerDied","Data":"360d8beb98cbd3e0344cd77457b6821e62ad211229f097e2fb748a8eb65239c7"}
Jan 29 08:15:45 crc kubenswrapper[4826]: I0129 08:15:45.396512 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa","Type":"ContainerDied","Data":"ea89f3bd45ed66a77a665dd122468e109bc978210c89e46694e8dde0b89e858b"}
Jan 29 08:15:45 crc kubenswrapper[4826]: I0129 08:15:45.852669 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.030760 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa-logs\") pod \"c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa\" (UID: \"c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa\") "
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.030933 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5b568\" (UniqueName: \"kubernetes.io/projected/c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa-kube-api-access-5b568\") pod \"c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa\" (UID: \"c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa\") "
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.031159 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa-scripts\") pod \"c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa\" (UID: \"c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa\") "
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.031212 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa-combined-ca-bundle\") pod \"c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa\" (UID: \"c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa\") "
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.031279 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa-httpd-run\") pod \"c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa\" (UID: \"c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa\") "
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.031410 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa-config-data\") pod \"c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa\" (UID: \"c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa\") "
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.032361 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa-logs" (OuterVolumeSpecName: "logs") pod "c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa" (UID: "c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.032416 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa" (UID: "c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.037988 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa-kube-api-access-5b568" (OuterVolumeSpecName: "kube-api-access-5b568") pod "c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa" (UID: "c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa"). InnerVolumeSpecName "kube-api-access-5b568". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.050448 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa-scripts" (OuterVolumeSpecName: "scripts") pod "c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa" (UID: "c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.075532 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa" (UID: "c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.098482 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa-config-data" (OuterVolumeSpecName: "config-data") pod "c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa" (UID: "c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.134017 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.134056 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa-logs\") on node \"crc\" DevicePath \"\""
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.134065 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5b568\" (UniqueName: \"kubernetes.io/projected/c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa-kube-api-access-5b568\") on node \"crc\" DevicePath \"\""
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.134080 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.134088 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.134100 4826 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.412245 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.412504 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b066caca-1cff-4a24-babc-e1ca371de7ee" containerName="glance-httpd" containerID="cri-o://c07d8f9162f3659aa9082bebf3c6baca0e735e0e078964e2e22ab56c158d0cbf" gracePeriod=30
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.412256 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa","Type":"ContainerDied","Data":"fcb2b3bd4b3ebff1002baaf5ed59c867993cc327fc0ef740071e1e290924b1bd"}
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.412399 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b066caca-1cff-4a24-babc-e1ca371de7ee" containerName="glance-log" containerID="cri-o://fa3776c9b951ca1dac7dca7de86529ed729f0b9d9b0c46cb94c8f87b718ba975" gracePeriod=30
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.412660 4826 scope.go:117] "RemoveContainer" containerID="360d8beb98cbd3e0344cd77457b6821e62ad211229f097e2fb748a8eb65239c7"
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.465260 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.477122 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.480114 4826 scope.go:117] "RemoveContainer" containerID="ea89f3bd45ed66a77a665dd122468e109bc978210c89e46694e8dde0b89e858b"
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.491546 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 08:15:46 crc kubenswrapper[4826]: E0129 08:15:46.492000 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa" containerName="glance-httpd"
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.492018 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa" containerName="glance-httpd"
Jan 29 08:15:46 crc kubenswrapper[4826]: E0129 08:15:46.492056 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa" containerName="glance-log"
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.492062 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa" containerName="glance-log"
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.492243 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa" containerName="glance-log"
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.492266 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa" containerName="glance-httpd"
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.493363 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.496267 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.496501 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.501662 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.643223 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6107bcc9-cfe4-45d2-a776-f3633688ae3e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6107bcc9-cfe4-45d2-a776-f3633688ae3e\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.643284 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6107bcc9-cfe4-45d2-a776-f3633688ae3e-config-data\") pod \"glance-default-external-api-0\" (UID: \"6107bcc9-cfe4-45d2-a776-f3633688ae3e\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.643321 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t87bn\" (UniqueName: \"kubernetes.io/projected/6107bcc9-cfe4-45d2-a776-f3633688ae3e-kube-api-access-t87bn\") pod \"glance-default-external-api-0\" (UID: \"6107bcc9-cfe4-45d2-a776-f3633688ae3e\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.643353 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6107bcc9-cfe4-45d2-a776-f3633688ae3e-logs\") pod \"glance-default-external-api-0\" (UID: \"6107bcc9-cfe4-45d2-a776-f3633688ae3e\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.643410 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6107bcc9-cfe4-45d2-a776-f3633688ae3e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6107bcc9-cfe4-45d2-a776-f3633688ae3e\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.643465 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6107bcc9-cfe4-45d2-a776-f3633688ae3e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6107bcc9-cfe4-45d2-a776-f3633688ae3e\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.643495 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6107bcc9-cfe4-45d2-a776-f3633688ae3e-scripts\") pod \"glance-default-external-api-0\" (UID: \"6107bcc9-cfe4-45d2-a776-f3633688ae3e\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.744736 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6107bcc9-cfe4-45d2-a776-f3633688ae3e-config-data\") pod \"glance-default-external-api-0\" (UID: \"6107bcc9-cfe4-45d2-a776-f3633688ae3e\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.744845 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t87bn\" (UniqueName: \"kubernetes.io/projected/6107bcc9-cfe4-45d2-a776-f3633688ae3e-kube-api-access-t87bn\") pod \"glance-default-external-api-0\" (UID: \"6107bcc9-cfe4-45d2-a776-f3633688ae3e\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.744887 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6107bcc9-cfe4-45d2-a776-f3633688ae3e-logs\") pod \"glance-default-external-api-0\" (UID: \"6107bcc9-cfe4-45d2-a776-f3633688ae3e\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.744956 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6107bcc9-cfe4-45d2-a776-f3633688ae3e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6107bcc9-cfe4-45d2-a776-f3633688ae3e\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.745008 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6107bcc9-cfe4-45d2-a776-f3633688ae3e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6107bcc9-cfe4-45d2-a776-f3633688ae3e\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.745036 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6107bcc9-cfe4-45d2-a776-f3633688ae3e-scripts\") pod \"glance-default-external-api-0\" (UID: \"6107bcc9-cfe4-45d2-a776-f3633688ae3e\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.745060 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6107bcc9-cfe4-45d2-a776-f3633688ae3e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6107bcc9-cfe4-45d2-a776-f3633688ae3e\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.745817 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6107bcc9-cfe4-45d2-a776-f3633688ae3e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6107bcc9-cfe4-45d2-a776-f3633688ae3e\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.746565 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6107bcc9-cfe4-45d2-a776-f3633688ae3e-logs\") pod \"glance-default-external-api-0\" (UID: \"6107bcc9-cfe4-45d2-a776-f3633688ae3e\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.750505 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6107bcc9-cfe4-45d2-a776-f3633688ae3e-config-data\") pod \"glance-default-external-api-0\" (UID: \"6107bcc9-cfe4-45d2-a776-f3633688ae3e\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.750658 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6107bcc9-cfe4-45d2-a776-f3633688ae3e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6107bcc9-cfe4-45d2-a776-f3633688ae3e\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.750729 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6107bcc9-cfe4-45d2-a776-f3633688ae3e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6107bcc9-cfe4-45d2-a776-f3633688ae3e\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.751851 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6107bcc9-cfe4-45d2-a776-f3633688ae3e-scripts\") pod \"glance-default-external-api-0\" (UID: \"6107bcc9-cfe4-45d2-a776-f3633688ae3e\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.765827 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t87bn\" (UniqueName: \"kubernetes.io/projected/6107bcc9-cfe4-45d2-a776-f3633688ae3e-kube-api-access-t87bn\") pod \"glance-default-external-api-0\" (UID: \"6107bcc9-cfe4-45d2-a776-f3633688ae3e\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.825879 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa" path="/var/lib/kubelet/pods/c8d89f79-d7a9-4da9-88e6-dcddd0f0ebfa/volumes"
Jan 29 08:15:46 crc kubenswrapper[4826]: I0129 08:15:46.826109 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.078048 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.253912 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b066caca-1cff-4a24-babc-e1ca371de7ee-logs\") pod \"b066caca-1cff-4a24-babc-e1ca371de7ee\" (UID: \"b066caca-1cff-4a24-babc-e1ca371de7ee\") "
Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.254267 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b066caca-1cff-4a24-babc-e1ca371de7ee-scripts\") pod \"b066caca-1cff-4a24-babc-e1ca371de7ee\" (UID: \"b066caca-1cff-4a24-babc-e1ca371de7ee\") "
Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.254325 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b066caca-1cff-4a24-babc-e1ca371de7ee-logs" (OuterVolumeSpecName: "logs") pod "b066caca-1cff-4a24-babc-e1ca371de7ee" (UID: "b066caca-1cff-4a24-babc-e1ca371de7ee"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.254413 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b066caca-1cff-4a24-babc-e1ca371de7ee-combined-ca-bundle\") pod \"b066caca-1cff-4a24-babc-e1ca371de7ee\" (UID: \"b066caca-1cff-4a24-babc-e1ca371de7ee\") "
Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.254645 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b066caca-1cff-4a24-babc-e1ca371de7ee-config-data\") pod \"b066caca-1cff-4a24-babc-e1ca371de7ee\" (UID: \"b066caca-1cff-4a24-babc-e1ca371de7ee\") "
Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.254759 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrrvc\" (UniqueName: \"kubernetes.io/projected/b066caca-1cff-4a24-babc-e1ca371de7ee-kube-api-access-xrrvc\") pod \"b066caca-1cff-4a24-babc-e1ca371de7ee\" (UID: \"b066caca-1cff-4a24-babc-e1ca371de7ee\") "
Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.255174 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b066caca-1cff-4a24-babc-e1ca371de7ee-httpd-run\") pod \"b066caca-1cff-4a24-babc-e1ca371de7ee\" (UID: \"b066caca-1cff-4a24-babc-e1ca371de7ee\") "
Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.255509 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b066caca-1cff-4a24-babc-e1ca371de7ee-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b066caca-1cff-4a24-babc-e1ca371de7ee" (UID: "b066caca-1cff-4a24-babc-e1ca371de7ee"). InnerVolumeSpecName "httpd-run".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.256273 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b066caca-1cff-4a24-babc-e1ca371de7ee-logs\") on node \"crc\" DevicePath \"\"" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.256287 4826 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b066caca-1cff-4a24-babc-e1ca371de7ee-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.259097 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b066caca-1cff-4a24-babc-e1ca371de7ee-kube-api-access-xrrvc" (OuterVolumeSpecName: "kube-api-access-xrrvc") pod "b066caca-1cff-4a24-babc-e1ca371de7ee" (UID: "b066caca-1cff-4a24-babc-e1ca371de7ee"). InnerVolumeSpecName "kube-api-access-xrrvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.260244 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b066caca-1cff-4a24-babc-e1ca371de7ee-scripts" (OuterVolumeSpecName: "scripts") pod "b066caca-1cff-4a24-babc-e1ca371de7ee" (UID: "b066caca-1cff-4a24-babc-e1ca371de7ee"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.281971 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b066caca-1cff-4a24-babc-e1ca371de7ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b066caca-1cff-4a24-babc-e1ca371de7ee" (UID: "b066caca-1cff-4a24-babc-e1ca371de7ee"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.309485 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b066caca-1cff-4a24-babc-e1ca371de7ee-config-data" (OuterVolumeSpecName: "config-data") pod "b066caca-1cff-4a24-babc-e1ca371de7ee" (UID: "b066caca-1cff-4a24-babc-e1ca371de7ee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.357631 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrrvc\" (UniqueName: \"kubernetes.io/projected/b066caca-1cff-4a24-babc-e1ca371de7ee-kube-api-access-xrrvc\") on node \"crc\" DevicePath \"\"" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.357664 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b066caca-1cff-4a24-babc-e1ca371de7ee-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.357677 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b066caca-1cff-4a24-babc-e1ca371de7ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.357690 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b066caca-1cff-4a24-babc-e1ca371de7ee-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.422921 4826 generic.go:334] "Generic (PLEG): container finished" podID="b066caca-1cff-4a24-babc-e1ca371de7ee" containerID="c07d8f9162f3659aa9082bebf3c6baca0e735e0e078964e2e22ab56c158d0cbf" exitCode=0 Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.422945 4826 generic.go:334] "Generic (PLEG): container finished" podID="b066caca-1cff-4a24-babc-e1ca371de7ee" 
containerID="fa3776c9b951ca1dac7dca7de86529ed729f0b9d9b0c46cb94c8f87b718ba975" exitCode=143 Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.422965 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b066caca-1cff-4a24-babc-e1ca371de7ee","Type":"ContainerDied","Data":"c07d8f9162f3659aa9082bebf3c6baca0e735e0e078964e2e22ab56c158d0cbf"} Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.422989 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b066caca-1cff-4a24-babc-e1ca371de7ee","Type":"ContainerDied","Data":"fa3776c9b951ca1dac7dca7de86529ed729f0b9d9b0c46cb94c8f87b718ba975"} Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.422999 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b066caca-1cff-4a24-babc-e1ca371de7ee","Type":"ContainerDied","Data":"63b049f4b1bbded66229d6b81e2df45431b63ae3af8f3124fa7c21b083ee1166"} Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.423013 4826 scope.go:117] "RemoveContainer" containerID="c07d8f9162f3659aa9082bebf3c6baca0e735e0e078964e2e22ab56c158d0cbf" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.423101 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.452375 4826 scope.go:117] "RemoveContainer" containerID="fa3776c9b951ca1dac7dca7de86529ed729f0b9d9b0c46cb94c8f87b718ba975" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.457599 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.478589 4826 scope.go:117] "RemoveContainer" containerID="c07d8f9162f3659aa9082bebf3c6baca0e735e0e078964e2e22ab56c158d0cbf" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.478754 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 08:15:47 crc kubenswrapper[4826]: E0129 08:15:47.480673 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c07d8f9162f3659aa9082bebf3c6baca0e735e0e078964e2e22ab56c158d0cbf\": container with ID starting with c07d8f9162f3659aa9082bebf3c6baca0e735e0e078964e2e22ab56c158d0cbf not found: ID does not exist" containerID="c07d8f9162f3659aa9082bebf3c6baca0e735e0e078964e2e22ab56c158d0cbf" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.480708 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c07d8f9162f3659aa9082bebf3c6baca0e735e0e078964e2e22ab56c158d0cbf"} err="failed to get container status \"c07d8f9162f3659aa9082bebf3c6baca0e735e0e078964e2e22ab56c158d0cbf\": rpc error: code = NotFound desc = could not find container \"c07d8f9162f3659aa9082bebf3c6baca0e735e0e078964e2e22ab56c158d0cbf\": container with ID starting with c07d8f9162f3659aa9082bebf3c6baca0e735e0e078964e2e22ab56c158d0cbf not found: ID does not exist" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.480763 4826 scope.go:117] "RemoveContainer" containerID="fa3776c9b951ca1dac7dca7de86529ed729f0b9d9b0c46cb94c8f87b718ba975" Jan 
29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.496163 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 08:15:47 crc kubenswrapper[4826]: E0129 08:15:47.496671 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b066caca-1cff-4a24-babc-e1ca371de7ee" containerName="glance-log" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.496691 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="b066caca-1cff-4a24-babc-e1ca371de7ee" containerName="glance-log" Jan 29 08:15:47 crc kubenswrapper[4826]: E0129 08:15:47.496713 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b066caca-1cff-4a24-babc-e1ca371de7ee" containerName="glance-httpd" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.496720 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="b066caca-1cff-4a24-babc-e1ca371de7ee" containerName="glance-httpd" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.496913 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="b066caca-1cff-4a24-babc-e1ca371de7ee" containerName="glance-httpd" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.496944 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="b066caca-1cff-4a24-babc-e1ca371de7ee" containerName="glance-log" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.498323 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 08:15:47 crc kubenswrapper[4826]: E0129 08:15:47.501687 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa3776c9b951ca1dac7dca7de86529ed729f0b9d9b0c46cb94c8f87b718ba975\": container with ID starting with fa3776c9b951ca1dac7dca7de86529ed729f0b9d9b0c46cb94c8f87b718ba975 not found: ID does not exist" containerID="fa3776c9b951ca1dac7dca7de86529ed729f0b9d9b0c46cb94c8f87b718ba975" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.501758 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa3776c9b951ca1dac7dca7de86529ed729f0b9d9b0c46cb94c8f87b718ba975"} err="failed to get container status \"fa3776c9b951ca1dac7dca7de86529ed729f0b9d9b0c46cb94c8f87b718ba975\": rpc error: code = NotFound desc = could not find container \"fa3776c9b951ca1dac7dca7de86529ed729f0b9d9b0c46cb94c8f87b718ba975\": container with ID starting with fa3776c9b951ca1dac7dca7de86529ed729f0b9d9b0c46cb94c8f87b718ba975 not found: ID does not exist" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.501791 4826 scope.go:117] "RemoveContainer" containerID="c07d8f9162f3659aa9082bebf3c6baca0e735e0e078964e2e22ab56c158d0cbf" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.502204 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.505819 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.508313 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c07d8f9162f3659aa9082bebf3c6baca0e735e0e078964e2e22ab56c158d0cbf"} err="failed to get container status 
\"c07d8f9162f3659aa9082bebf3c6baca0e735e0e078964e2e22ab56c158d0cbf\": rpc error: code = NotFound desc = could not find container \"c07d8f9162f3659aa9082bebf3c6baca0e735e0e078964e2e22ab56c158d0cbf\": container with ID starting with c07d8f9162f3659aa9082bebf3c6baca0e735e0e078964e2e22ab56c158d0cbf not found: ID does not exist" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.508360 4826 scope.go:117] "RemoveContainer" containerID="fa3776c9b951ca1dac7dca7de86529ed729f0b9d9b0c46cb94c8f87b718ba975" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.509591 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa3776c9b951ca1dac7dca7de86529ed729f0b9d9b0c46cb94c8f87b718ba975"} err="failed to get container status \"fa3776c9b951ca1dac7dca7de86529ed729f0b9d9b0c46cb94c8f87b718ba975\": rpc error: code = NotFound desc = could not find container \"fa3776c9b951ca1dac7dca7de86529ed729f0b9d9b0c46cb94c8f87b718ba975\": container with ID starting with fa3776c9b951ca1dac7dca7de86529ed729f0b9d9b0c46cb94c8f87b718ba975 not found: ID does not exist" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.521781 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.665233 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-logs\") pod \"glance-default-internal-api-0\" (UID: \"cb93d33e-df3f-4a16-b0c4-8e422146a2f9\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.665315 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: 
\"cb93d33e-df3f-4a16-b0c4-8e422146a2f9\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.665349 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cb93d33e-df3f-4a16-b0c4-8e422146a2f9\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.665418 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s6d6\" (UniqueName: \"kubernetes.io/projected/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-kube-api-access-6s6d6\") pod \"glance-default-internal-api-0\" (UID: \"cb93d33e-df3f-4a16-b0c4-8e422146a2f9\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.665455 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cb93d33e-df3f-4a16-b0c4-8e422146a2f9\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.665476 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cb93d33e-df3f-4a16-b0c4-8e422146a2f9\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.665501 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"cb93d33e-df3f-4a16-b0c4-8e422146a2f9\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.766699 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-logs\") pod \"glance-default-internal-api-0\" (UID: \"cb93d33e-df3f-4a16-b0c4-8e422146a2f9\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.766769 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cb93d33e-df3f-4a16-b0c4-8e422146a2f9\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.766800 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cb93d33e-df3f-4a16-b0c4-8e422146a2f9\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.766874 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s6d6\" (UniqueName: \"kubernetes.io/projected/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-kube-api-access-6s6d6\") pod \"glance-default-internal-api-0\" (UID: \"cb93d33e-df3f-4a16-b0c4-8e422146a2f9\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.766931 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"cb93d33e-df3f-4a16-b0c4-8e422146a2f9\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.766953 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cb93d33e-df3f-4a16-b0c4-8e422146a2f9\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.766978 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cb93d33e-df3f-4a16-b0c4-8e422146a2f9\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.768196 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cb93d33e-df3f-4a16-b0c4-8e422146a2f9\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.768465 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-logs\") pod \"glance-default-internal-api-0\" (UID: \"cb93d33e-df3f-4a16-b0c4-8e422146a2f9\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.771796 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cb93d33e-df3f-4a16-b0c4-8e422146a2f9\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:15:47 crc 
kubenswrapper[4826]: I0129 08:15:47.782276 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cb93d33e-df3f-4a16-b0c4-8e422146a2f9\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.782932 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cb93d33e-df3f-4a16-b0c4-8e422146a2f9\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.790347 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cb93d33e-df3f-4a16-b0c4-8e422146a2f9\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.794827 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jtwhw" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.795422 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s6d6\" (UniqueName: \"kubernetes.io/projected/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-kube-api-access-6s6d6\") pod \"glance-default-internal-api-0\" (UID: \"cb93d33e-df3f-4a16-b0c4-8e422146a2f9\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.795643 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jtwhw" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.846509 4826 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jtwhw" Jan 29 08:15:47 crc kubenswrapper[4826]: I0129 08:15:47.863336 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 08:15:48 crc kubenswrapper[4826]: I0129 08:15:48.149160 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 08:15:48 crc kubenswrapper[4826]: W0129 08:15:48.160280 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6107bcc9_cfe4_45d2_a776_f3633688ae3e.slice/crio-1668373bb5c59544e235378472f44e345dab45db75055c7afcc7fa3533d64925 WatchSource:0}: Error finding container 1668373bb5c59544e235378472f44e345dab45db75055c7afcc7fa3533d64925: Status 404 returned error can't find the container with id 1668373bb5c59544e235378472f44e345dab45db75055c7afcc7fa3533d64925 Jan 29 08:15:48 crc kubenswrapper[4826]: I0129 08:15:48.441456 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 08:15:48 crc kubenswrapper[4826]: I0129 08:15:48.456226 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6107bcc9-cfe4-45d2-a776-f3633688ae3e","Type":"ContainerStarted","Data":"1668373bb5c59544e235378472f44e345dab45db75055c7afcc7fa3533d64925"} Jan 29 08:15:48 crc kubenswrapper[4826]: I0129 08:15:48.516930 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jtwhw" Jan 29 08:15:48 crc kubenswrapper[4826]: I0129 08:15:48.562147 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jtwhw"] Jan 29 08:15:48 crc kubenswrapper[4826]: I0129 08:15:48.825793 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b066caca-1cff-4a24-babc-e1ca371de7ee" 
path="/var/lib/kubelet/pods/b066caca-1cff-4a24-babc-e1ca371de7ee/volumes" Jan 29 08:15:49 crc kubenswrapper[4826]: I0129 08:15:49.470781 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6107bcc9-cfe4-45d2-a776-f3633688ae3e","Type":"ContainerStarted","Data":"a82e87fef18e2a222452a365ff82b5a79b141a384906bab83ae39e7fbc039b3b"} Jan 29 08:15:49 crc kubenswrapper[4826]: I0129 08:15:49.471045 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6107bcc9-cfe4-45d2-a776-f3633688ae3e","Type":"ContainerStarted","Data":"70efcc075e7671c1480a5e9c11bc6dfe2f862a918bef19def5d664776e4893bb"} Jan 29 08:15:49 crc kubenswrapper[4826]: I0129 08:15:49.486741 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cb93d33e-df3f-4a16-b0c4-8e422146a2f9","Type":"ContainerStarted","Data":"a07837ea56a1b161e2d92732fef846dfc042ffef2170ec26a8b1cde01ac6c2d4"} Jan 29 08:15:49 crc kubenswrapper[4826]: I0129 08:15:49.486786 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cb93d33e-df3f-4a16-b0c4-8e422146a2f9","Type":"ContainerStarted","Data":"1cb2fa7c586d203a6e9955ba7bd1997b7f6c99097d81dc9e8d16b7eee008fb56"} Jan 29 08:15:49 crc kubenswrapper[4826]: I0129 08:15:49.498174 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.498150239 podStartE2EDuration="3.498150239s" podCreationTimestamp="2026-01-29 08:15:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:15:49.489026409 +0000 UTC m=+5533.350819478" watchObservedRunningTime="2026-01-29 08:15:49.498150239 +0000 UTC m=+5533.359943308" Jan 29 08:15:50 crc kubenswrapper[4826]: I0129 08:15:50.502906 4826 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cb93d33e-df3f-4a16-b0c4-8e422146a2f9","Type":"ContainerStarted","Data":"deb10a6c627085733a33312d760ab4b9a740e5aae8b591a4e004914000ea21b2"} Jan 29 08:15:50 crc kubenswrapper[4826]: I0129 08:15:50.504294 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jtwhw" podUID="64f2249b-7eec-4121-9b5b-3fd964858886" containerName="registry-server" containerID="cri-o://9c089617cbc0c14d6f5dce7d66ae08b2402b566c916423266698794f8ea3267b" gracePeriod=2 Jan 29 08:15:50 crc kubenswrapper[4826]: I0129 08:15:50.539750 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.539729768 podStartE2EDuration="3.539729768s" podCreationTimestamp="2026-01-29 08:15:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:15:50.525260558 +0000 UTC m=+5534.387053667" watchObservedRunningTime="2026-01-29 08:15:50.539729768 +0000 UTC m=+5534.401522837" Jan 29 08:15:51 crc kubenswrapper[4826]: I0129 08:15:51.025793 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jtwhw" Jan 29 08:15:51 crc kubenswrapper[4826]: I0129 08:15:51.131093 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64f2249b-7eec-4121-9b5b-3fd964858886-utilities\") pod \"64f2249b-7eec-4121-9b5b-3fd964858886\" (UID: \"64f2249b-7eec-4121-9b5b-3fd964858886\") " Jan 29 08:15:51 crc kubenswrapper[4826]: I0129 08:15:51.131271 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64f2249b-7eec-4121-9b5b-3fd964858886-catalog-content\") pod \"64f2249b-7eec-4121-9b5b-3fd964858886\" (UID: \"64f2249b-7eec-4121-9b5b-3fd964858886\") " Jan 29 08:15:51 crc kubenswrapper[4826]: I0129 08:15:51.131843 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64f2249b-7eec-4121-9b5b-3fd964858886-utilities" (OuterVolumeSpecName: "utilities") pod "64f2249b-7eec-4121-9b5b-3fd964858886" (UID: "64f2249b-7eec-4121-9b5b-3fd964858886"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 08:15:51 crc kubenswrapper[4826]: I0129 08:15:51.148179 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdkrf\" (UniqueName: \"kubernetes.io/projected/64f2249b-7eec-4121-9b5b-3fd964858886-kube-api-access-jdkrf\") pod \"64f2249b-7eec-4121-9b5b-3fd964858886\" (UID: \"64f2249b-7eec-4121-9b5b-3fd964858886\") "
Jan 29 08:15:51 crc kubenswrapper[4826]: I0129 08:15:51.148807 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64f2249b-7eec-4121-9b5b-3fd964858886-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 08:15:51 crc kubenswrapper[4826]: I0129 08:15:51.157541 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64f2249b-7eec-4121-9b5b-3fd964858886-kube-api-access-jdkrf" (OuterVolumeSpecName: "kube-api-access-jdkrf") pod "64f2249b-7eec-4121-9b5b-3fd964858886" (UID: "64f2249b-7eec-4121-9b5b-3fd964858886"). InnerVolumeSpecName "kube-api-access-jdkrf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:15:51 crc kubenswrapper[4826]: I0129 08:15:51.187638 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64f2249b-7eec-4121-9b5b-3fd964858886-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64f2249b-7eec-4121-9b5b-3fd964858886" (UID: "64f2249b-7eec-4121-9b5b-3fd964858886"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 08:15:51 crc kubenswrapper[4826]: I0129 08:15:51.250529 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64f2249b-7eec-4121-9b5b-3fd964858886-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 08:15:51 crc kubenswrapper[4826]: I0129 08:15:51.250765 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdkrf\" (UniqueName: \"kubernetes.io/projected/64f2249b-7eec-4121-9b5b-3fd964858886-kube-api-access-jdkrf\") on node \"crc\" DevicePath \"\""
Jan 29 08:15:51 crc kubenswrapper[4826]: I0129 08:15:51.522702 4826 generic.go:334] "Generic (PLEG): container finished" podID="64f2249b-7eec-4121-9b5b-3fd964858886" containerID="9c089617cbc0c14d6f5dce7d66ae08b2402b566c916423266698794f8ea3267b" exitCode=0
Jan 29 08:15:51 crc kubenswrapper[4826]: I0129 08:15:51.522867 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jtwhw"
Jan 29 08:15:51 crc kubenswrapper[4826]: I0129 08:15:51.522852 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jtwhw" event={"ID":"64f2249b-7eec-4121-9b5b-3fd964858886","Type":"ContainerDied","Data":"9c089617cbc0c14d6f5dce7d66ae08b2402b566c916423266698794f8ea3267b"}
Jan 29 08:15:51 crc kubenswrapper[4826]: I0129 08:15:51.523043 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jtwhw" event={"ID":"64f2249b-7eec-4121-9b5b-3fd964858886","Type":"ContainerDied","Data":"afcbb8d3d47ad968b5aa67602a48e00383515bf0776cd7dc0dbf3a6109f9bbb9"}
Jan 29 08:15:51 crc kubenswrapper[4826]: I0129 08:15:51.523066 4826 scope.go:117] "RemoveContainer" containerID="9c089617cbc0c14d6f5dce7d66ae08b2402b566c916423266698794f8ea3267b"
Jan 29 08:15:51 crc kubenswrapper[4826]: I0129 08:15:51.558256 4826 scope.go:117] "RemoveContainer" containerID="578becf96f0517120d4250e306af3a1da57f708439157e779517edbec09bd1ba"
Jan 29 08:15:51 crc kubenswrapper[4826]: I0129 08:15:51.567373 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jtwhw"]
Jan 29 08:15:51 crc kubenswrapper[4826]: I0129 08:15:51.572507 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jtwhw"]
Jan 29 08:15:51 crc kubenswrapper[4826]: I0129 08:15:51.583711 4826 scope.go:117] "RemoveContainer" containerID="5e0fee1a5797a527f379fc6ec18336e11d0b6b5c4041732931818ee964b7a7bb"
Jan 29 08:15:51 crc kubenswrapper[4826]: I0129 08:15:51.617007 4826 scope.go:117] "RemoveContainer" containerID="9c089617cbc0c14d6f5dce7d66ae08b2402b566c916423266698794f8ea3267b"
Jan 29 08:15:51 crc kubenswrapper[4826]: E0129 08:15:51.617520 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c089617cbc0c14d6f5dce7d66ae08b2402b566c916423266698794f8ea3267b\": container with ID starting with 9c089617cbc0c14d6f5dce7d66ae08b2402b566c916423266698794f8ea3267b not found: ID does not exist" containerID="9c089617cbc0c14d6f5dce7d66ae08b2402b566c916423266698794f8ea3267b"
Jan 29 08:15:51 crc kubenswrapper[4826]: I0129 08:15:51.617557 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c089617cbc0c14d6f5dce7d66ae08b2402b566c916423266698794f8ea3267b"} err="failed to get container status \"9c089617cbc0c14d6f5dce7d66ae08b2402b566c916423266698794f8ea3267b\": rpc error: code = NotFound desc = could not find container \"9c089617cbc0c14d6f5dce7d66ae08b2402b566c916423266698794f8ea3267b\": container with ID starting with 9c089617cbc0c14d6f5dce7d66ae08b2402b566c916423266698794f8ea3267b not found: ID does not exist"
Jan 29 08:15:51 crc kubenswrapper[4826]: I0129 08:15:51.617582 4826 scope.go:117] "RemoveContainer" containerID="578becf96f0517120d4250e306af3a1da57f708439157e779517edbec09bd1ba"
Jan 29 08:15:51 crc kubenswrapper[4826]: E0129 08:15:51.617889 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"578becf96f0517120d4250e306af3a1da57f708439157e779517edbec09bd1ba\": container with ID starting with 578becf96f0517120d4250e306af3a1da57f708439157e779517edbec09bd1ba not found: ID does not exist" containerID="578becf96f0517120d4250e306af3a1da57f708439157e779517edbec09bd1ba"
Jan 29 08:15:51 crc kubenswrapper[4826]: I0129 08:15:51.617919 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"578becf96f0517120d4250e306af3a1da57f708439157e779517edbec09bd1ba"} err="failed to get container status \"578becf96f0517120d4250e306af3a1da57f708439157e779517edbec09bd1ba\": rpc error: code = NotFound desc = could not find container \"578becf96f0517120d4250e306af3a1da57f708439157e779517edbec09bd1ba\": container with ID starting with 578becf96f0517120d4250e306af3a1da57f708439157e779517edbec09bd1ba not found: ID does not exist"
Jan 29 08:15:51 crc kubenswrapper[4826]: I0129 08:15:51.617938 4826 scope.go:117] "RemoveContainer" containerID="5e0fee1a5797a527f379fc6ec18336e11d0b6b5c4041732931818ee964b7a7bb"
Jan 29 08:15:51 crc kubenswrapper[4826]: E0129 08:15:51.618185 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e0fee1a5797a527f379fc6ec18336e11d0b6b5c4041732931818ee964b7a7bb\": container with ID starting with 5e0fee1a5797a527f379fc6ec18336e11d0b6b5c4041732931818ee964b7a7bb not found: ID does not exist" containerID="5e0fee1a5797a527f379fc6ec18336e11d0b6b5c4041732931818ee964b7a7bb"
Jan 29 08:15:51 crc kubenswrapper[4826]: I0129 08:15:51.618209 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e0fee1a5797a527f379fc6ec18336e11d0b6b5c4041732931818ee964b7a7bb"} err="failed to get container status \"5e0fee1a5797a527f379fc6ec18336e11d0b6b5c4041732931818ee964b7a7bb\": rpc error: code = NotFound desc = could not find container \"5e0fee1a5797a527f379fc6ec18336e11d0b6b5c4041732931818ee964b7a7bb\": container with ID starting with 5e0fee1a5797a527f379fc6ec18336e11d0b6b5c4041732931818ee964b7a7bb not found: ID does not exist"
Jan 29 08:15:52 crc kubenswrapper[4826]: I0129 08:15:52.181494 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85f559f45c-jcxmn"
Jan 29 08:15:52 crc kubenswrapper[4826]: I0129 08:15:52.246125 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79fc76598f-tc976"]
Jan 29 08:15:52 crc kubenswrapper[4826]: I0129 08:15:52.246440 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79fc76598f-tc976" podUID="aa3ada56-74ec-4f31-a22a-84e52dfba998" containerName="dnsmasq-dns" containerID="cri-o://a62fc4d2af9fa05b4974e54dc32f0df8d5cd505c9ab779712b08e01036bf033f" gracePeriod=10
Jan 29 08:15:52 crc kubenswrapper[4826]: I0129 08:15:52.532876 4826 generic.go:334] "Generic (PLEG): container finished" podID="aa3ada56-74ec-4f31-a22a-84e52dfba998" containerID="a62fc4d2af9fa05b4974e54dc32f0df8d5cd505c9ab779712b08e01036bf033f" exitCode=0
Jan 29 08:15:52 crc kubenswrapper[4826]: I0129 08:15:52.532931 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79fc76598f-tc976" event={"ID":"aa3ada56-74ec-4f31-a22a-84e52dfba998","Type":"ContainerDied","Data":"a62fc4d2af9fa05b4974e54dc32f0df8d5cd505c9ab779712b08e01036bf033f"}
Jan 29 08:15:52 crc kubenswrapper[4826]: I0129 08:15:52.698168 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79fc76598f-tc976"
Jan 29 08:15:52 crc kubenswrapper[4826]: I0129 08:15:52.819771 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64f2249b-7eec-4121-9b5b-3fd964858886" path="/var/lib/kubelet/pods/64f2249b-7eec-4121-9b5b-3fd964858886/volumes"
Jan 29 08:15:52 crc kubenswrapper[4826]: I0129 08:15:52.886548 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa3ada56-74ec-4f31-a22a-84e52dfba998-ovsdbserver-nb\") pod \"aa3ada56-74ec-4f31-a22a-84e52dfba998\" (UID: \"aa3ada56-74ec-4f31-a22a-84e52dfba998\") "
Jan 29 08:15:52 crc kubenswrapper[4826]: I0129 08:15:52.886612 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa3ada56-74ec-4f31-a22a-84e52dfba998-config\") pod \"aa3ada56-74ec-4f31-a22a-84e52dfba998\" (UID: \"aa3ada56-74ec-4f31-a22a-84e52dfba998\") "
Jan 29 08:15:52 crc kubenswrapper[4826]: I0129 08:15:52.886653 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa3ada56-74ec-4f31-a22a-84e52dfba998-ovsdbserver-sb\") pod \"aa3ada56-74ec-4f31-a22a-84e52dfba998\" (UID: \"aa3ada56-74ec-4f31-a22a-84e52dfba998\") "
Jan 29 08:15:52 crc kubenswrapper[4826]: I0129 08:15:52.886694 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pclct\" (UniqueName: \"kubernetes.io/projected/aa3ada56-74ec-4f31-a22a-84e52dfba998-kube-api-access-pclct\") pod \"aa3ada56-74ec-4f31-a22a-84e52dfba998\" (UID: \"aa3ada56-74ec-4f31-a22a-84e52dfba998\") "
Jan 29 08:15:52 crc kubenswrapper[4826]: I0129 08:15:52.886765 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa3ada56-74ec-4f31-a22a-84e52dfba998-dns-svc\") pod \"aa3ada56-74ec-4f31-a22a-84e52dfba998\" (UID: \"aa3ada56-74ec-4f31-a22a-84e52dfba998\") "
Jan 29 08:15:52 crc kubenswrapper[4826]: I0129 08:15:52.891863 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa3ada56-74ec-4f31-a22a-84e52dfba998-kube-api-access-pclct" (OuterVolumeSpecName: "kube-api-access-pclct") pod "aa3ada56-74ec-4f31-a22a-84e52dfba998" (UID: "aa3ada56-74ec-4f31-a22a-84e52dfba998"). InnerVolumeSpecName "kube-api-access-pclct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:15:52 crc kubenswrapper[4826]: I0129 08:15:52.940121 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa3ada56-74ec-4f31-a22a-84e52dfba998-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "aa3ada56-74ec-4f31-a22a-84e52dfba998" (UID: "aa3ada56-74ec-4f31-a22a-84e52dfba998"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 08:15:52 crc kubenswrapper[4826]: I0129 08:15:52.940885 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa3ada56-74ec-4f31-a22a-84e52dfba998-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aa3ada56-74ec-4f31-a22a-84e52dfba998" (UID: "aa3ada56-74ec-4f31-a22a-84e52dfba998"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 08:15:52 crc kubenswrapper[4826]: I0129 08:15:52.944088 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa3ada56-74ec-4f31-a22a-84e52dfba998-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "aa3ada56-74ec-4f31-a22a-84e52dfba998" (UID: "aa3ada56-74ec-4f31-a22a-84e52dfba998"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 08:15:52 crc kubenswrapper[4826]: I0129 08:15:52.962476 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa3ada56-74ec-4f31-a22a-84e52dfba998-config" (OuterVolumeSpecName: "config") pod "aa3ada56-74ec-4f31-a22a-84e52dfba998" (UID: "aa3ada56-74ec-4f31-a22a-84e52dfba998"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 08:15:52 crc kubenswrapper[4826]: I0129 08:15:52.988504 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa3ada56-74ec-4f31-a22a-84e52dfba998-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 29 08:15:52 crc kubenswrapper[4826]: I0129 08:15:52.988531 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa3ada56-74ec-4f31-a22a-84e52dfba998-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 29 08:15:52 crc kubenswrapper[4826]: I0129 08:15:52.988542 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa3ada56-74ec-4f31-a22a-84e52dfba998-config\") on node \"crc\" DevicePath \"\""
Jan 29 08:15:52 crc kubenswrapper[4826]: I0129 08:15:52.988550 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa3ada56-74ec-4f31-a22a-84e52dfba998-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 29 08:15:52 crc kubenswrapper[4826]: I0129 08:15:52.988558 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pclct\" (UniqueName: \"kubernetes.io/projected/aa3ada56-74ec-4f31-a22a-84e52dfba998-kube-api-access-pclct\") on node \"crc\" DevicePath \"\""
Jan 29 08:15:53 crc kubenswrapper[4826]: I0129 08:15:53.548201 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79fc76598f-tc976" event={"ID":"aa3ada56-74ec-4f31-a22a-84e52dfba998","Type":"ContainerDied","Data":"38d1b907656a0484bdbb4e4f5aff7e1af05685fc813f4bee0ded1e19f9a5c5ef"}
Jan 29 08:15:53 crc kubenswrapper[4826]: I0129 08:15:53.548233 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79fc76598f-tc976"
Jan 29 08:15:53 crc kubenswrapper[4826]: I0129 08:15:53.548288 4826 scope.go:117] "RemoveContainer" containerID="a62fc4d2af9fa05b4974e54dc32f0df8d5cd505c9ab779712b08e01036bf033f"
Jan 29 08:15:53 crc kubenswrapper[4826]: I0129 08:15:53.584953 4826 scope.go:117] "RemoveContainer" containerID="dee3da880cc7e612ddc9f3f2ec64e9580bdd7f0497de820cbc1ab243bb8e4805"
Jan 29 08:15:53 crc kubenswrapper[4826]: I0129 08:15:53.585825 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79fc76598f-tc976"]
Jan 29 08:15:53 crc kubenswrapper[4826]: I0129 08:15:53.595416 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79fc76598f-tc976"]
Jan 29 08:15:54 crc kubenswrapper[4826]: I0129 08:15:54.828230 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa3ada56-74ec-4f31-a22a-84e52dfba998" path="/var/lib/kubelet/pods/aa3ada56-74ec-4f31-a22a-84e52dfba998/volumes"
Jan 29 08:15:56 crc kubenswrapper[4826]: I0129 08:15:56.826825 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 29 08:15:56 crc kubenswrapper[4826]: I0129 08:15:56.827222 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 29 08:15:56 crc kubenswrapper[4826]: I0129 08:15:56.869227 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 29 08:15:56 crc kubenswrapper[4826]: I0129 08:15:56.904672 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 29 08:15:57 crc kubenswrapper[4826]: I0129 08:15:57.604761 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 29 08:15:57 crc kubenswrapper[4826]: I0129 08:15:57.604856 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 29 08:15:57 crc kubenswrapper[4826]: I0129 08:15:57.864125 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 29 08:15:57 crc kubenswrapper[4826]: I0129 08:15:57.864225 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 29 08:15:57 crc kubenswrapper[4826]: I0129 08:15:57.900894 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 29 08:15:57 crc kubenswrapper[4826]: I0129 08:15:57.925820 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 29 08:15:58 crc kubenswrapper[4826]: I0129 08:15:58.620321 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 29 08:15:58 crc kubenswrapper[4826]: I0129 08:15:58.620629 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 29 08:15:59 crc kubenswrapper[4826]: I0129 08:15:59.440375 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 29 08:15:59 crc kubenswrapper[4826]: I0129 08:15:59.498501 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 29 08:16:00 crc kubenswrapper[4826]: I0129 08:16:00.438109 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 29 08:16:00 crc kubenswrapper[4826]: I0129 08:16:00.521597 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 29 08:16:08 crc kubenswrapper[4826]: I0129 08:16:08.550841 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-z6vw2"]
Jan 29 08:16:08 crc kubenswrapper[4826]: E0129 08:16:08.551713 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64f2249b-7eec-4121-9b5b-3fd964858886" containerName="extract-utilities"
Jan 29 08:16:08 crc kubenswrapper[4826]: I0129 08:16:08.551734 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="64f2249b-7eec-4121-9b5b-3fd964858886" containerName="extract-utilities"
Jan 29 08:16:08 crc kubenswrapper[4826]: E0129 08:16:08.551759 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64f2249b-7eec-4121-9b5b-3fd964858886" containerName="extract-content"
Jan 29 08:16:08 crc kubenswrapper[4826]: I0129 08:16:08.551767 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="64f2249b-7eec-4121-9b5b-3fd964858886" containerName="extract-content"
Jan 29 08:16:08 crc kubenswrapper[4826]: E0129 08:16:08.551782 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa3ada56-74ec-4f31-a22a-84e52dfba998" containerName="dnsmasq-dns"
Jan 29 08:16:08 crc kubenswrapper[4826]: I0129 08:16:08.551790 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa3ada56-74ec-4f31-a22a-84e52dfba998" containerName="dnsmasq-dns"
Jan 29 08:16:08 crc kubenswrapper[4826]: E0129 08:16:08.551804 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa3ada56-74ec-4f31-a22a-84e52dfba998" containerName="init"
Jan 29 08:16:08 crc kubenswrapper[4826]: I0129 08:16:08.551812 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa3ada56-74ec-4f31-a22a-84e52dfba998" containerName="init"
Jan 29 08:16:08 crc kubenswrapper[4826]: E0129 08:16:08.551827 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64f2249b-7eec-4121-9b5b-3fd964858886" containerName="registry-server"
Jan 29 08:16:08 crc kubenswrapper[4826]: I0129 08:16:08.551836 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="64f2249b-7eec-4121-9b5b-3fd964858886" containerName="registry-server"
Jan 29 08:16:08 crc kubenswrapper[4826]: I0129 08:16:08.552026 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="64f2249b-7eec-4121-9b5b-3fd964858886" containerName="registry-server"
Jan 29 08:16:08 crc kubenswrapper[4826]: I0129 08:16:08.552056 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa3ada56-74ec-4f31-a22a-84e52dfba998" containerName="dnsmasq-dns"
Jan 29 08:16:08 crc kubenswrapper[4826]: I0129 08:16:08.552715 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-z6vw2"
Jan 29 08:16:08 crc kubenswrapper[4826]: I0129 08:16:08.602348 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-z6vw2"]
Jan 29 08:16:08 crc kubenswrapper[4826]: I0129 08:16:08.603091 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6ea4e54-e85e-436c-ba93-ce6c994607be-operator-scripts\") pod \"placement-db-create-z6vw2\" (UID: \"a6ea4e54-e85e-436c-ba93-ce6c994607be\") " pod="openstack/placement-db-create-z6vw2"
Jan 29 08:16:08 crc kubenswrapper[4826]: I0129 08:16:08.603248 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmv8z\" (UniqueName: \"kubernetes.io/projected/a6ea4e54-e85e-436c-ba93-ce6c994607be-kube-api-access-tmv8z\") pod \"placement-db-create-z6vw2\" (UID: \"a6ea4e54-e85e-436c-ba93-ce6c994607be\") " pod="openstack/placement-db-create-z6vw2"
Jan 29 08:16:08 crc kubenswrapper[4826]: I0129 08:16:08.649201 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6bd0-account-create-update-7mlzp"]
Jan 29 08:16:08 crc kubenswrapper[4826]: I0129 08:16:08.650751 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6bd0-account-create-update-7mlzp"
Jan 29 08:16:08 crc kubenswrapper[4826]: I0129 08:16:08.653674 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Jan 29 08:16:08 crc kubenswrapper[4826]: I0129 08:16:08.658652 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6bd0-account-create-update-7mlzp"]
Jan 29 08:16:08 crc kubenswrapper[4826]: I0129 08:16:08.704518 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fac1cc5b-8679-46db-8577-0055cf77495f-operator-scripts\") pod \"placement-6bd0-account-create-update-7mlzp\" (UID: \"fac1cc5b-8679-46db-8577-0055cf77495f\") " pod="openstack/placement-6bd0-account-create-update-7mlzp"
Jan 29 08:16:08 crc kubenswrapper[4826]: I0129 08:16:08.704612 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmv8z\" (UniqueName: \"kubernetes.io/projected/a6ea4e54-e85e-436c-ba93-ce6c994607be-kube-api-access-tmv8z\") pod \"placement-db-create-z6vw2\" (UID: \"a6ea4e54-e85e-436c-ba93-ce6c994607be\") " pod="openstack/placement-db-create-z6vw2"
Jan 29 08:16:08 crc kubenswrapper[4826]: I0129 08:16:08.704655 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j24m7\" (UniqueName: \"kubernetes.io/projected/fac1cc5b-8679-46db-8577-0055cf77495f-kube-api-access-j24m7\") pod \"placement-6bd0-account-create-update-7mlzp\" (UID: \"fac1cc5b-8679-46db-8577-0055cf77495f\") " pod="openstack/placement-6bd0-account-create-update-7mlzp"
Jan 29 08:16:08 crc kubenswrapper[4826]: I0129 08:16:08.704731 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6ea4e54-e85e-436c-ba93-ce6c994607be-operator-scripts\") pod \"placement-db-create-z6vw2\" (UID: \"a6ea4e54-e85e-436c-ba93-ce6c994607be\") " pod="openstack/placement-db-create-z6vw2"
Jan 29 08:16:08 crc kubenswrapper[4826]: I0129 08:16:08.705490 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6ea4e54-e85e-436c-ba93-ce6c994607be-operator-scripts\") pod \"placement-db-create-z6vw2\" (UID: \"a6ea4e54-e85e-436c-ba93-ce6c994607be\") " pod="openstack/placement-db-create-z6vw2"
Jan 29 08:16:08 crc kubenswrapper[4826]: I0129 08:16:08.731226 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmv8z\" (UniqueName: \"kubernetes.io/projected/a6ea4e54-e85e-436c-ba93-ce6c994607be-kube-api-access-tmv8z\") pod \"placement-db-create-z6vw2\" (UID: \"a6ea4e54-e85e-436c-ba93-ce6c994607be\") " pod="openstack/placement-db-create-z6vw2"
Jan 29 08:16:08 crc kubenswrapper[4826]: I0129 08:16:08.805937 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j24m7\" (UniqueName: \"kubernetes.io/projected/fac1cc5b-8679-46db-8577-0055cf77495f-kube-api-access-j24m7\") pod \"placement-6bd0-account-create-update-7mlzp\" (UID: \"fac1cc5b-8679-46db-8577-0055cf77495f\") " pod="openstack/placement-6bd0-account-create-update-7mlzp"
Jan 29 08:16:08 crc kubenswrapper[4826]: I0129 08:16:08.806111 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fac1cc5b-8679-46db-8577-0055cf77495f-operator-scripts\") pod \"placement-6bd0-account-create-update-7mlzp\" (UID: \"fac1cc5b-8679-46db-8577-0055cf77495f\") " pod="openstack/placement-6bd0-account-create-update-7mlzp"
Jan 29 08:16:08 crc kubenswrapper[4826]: I0129 08:16:08.806854 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fac1cc5b-8679-46db-8577-0055cf77495f-operator-scripts\") pod \"placement-6bd0-account-create-update-7mlzp\" (UID: \"fac1cc5b-8679-46db-8577-0055cf77495f\") " pod="openstack/placement-6bd0-account-create-update-7mlzp"
Jan 29 08:16:08 crc kubenswrapper[4826]: I0129 08:16:08.821918 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j24m7\" (UniqueName: \"kubernetes.io/projected/fac1cc5b-8679-46db-8577-0055cf77495f-kube-api-access-j24m7\") pod \"placement-6bd0-account-create-update-7mlzp\" (UID: \"fac1cc5b-8679-46db-8577-0055cf77495f\") " pod="openstack/placement-6bd0-account-create-update-7mlzp"
Jan 29 08:16:08 crc kubenswrapper[4826]: I0129 08:16:08.885877 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-z6vw2"
Jan 29 08:16:08 crc kubenswrapper[4826]: I0129 08:16:08.970506 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6bd0-account-create-update-7mlzp"
Jan 29 08:16:09 crc kubenswrapper[4826]: I0129 08:16:09.378666 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-z6vw2"]
Jan 29 08:16:09 crc kubenswrapper[4826]: I0129 08:16:09.531801 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6bd0-account-create-update-7mlzp"]
Jan 29 08:16:09 crc kubenswrapper[4826]: W0129 08:16:09.536452 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfac1cc5b_8679_46db_8577_0055cf77495f.slice/crio-8b95b812a818de8215d93bea6822fb61720352b262c0b8d2e6adf9c411b9df63 WatchSource:0}: Error finding container 8b95b812a818de8215d93bea6822fb61720352b262c0b8d2e6adf9c411b9df63: Status 404 returned error can't find the container with id 8b95b812a818de8215d93bea6822fb61720352b262c0b8d2e6adf9c411b9df63
Jan 29 08:16:09 crc kubenswrapper[4826]: I0129 08:16:09.746623 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6bd0-account-create-update-7mlzp" event={"ID":"fac1cc5b-8679-46db-8577-0055cf77495f","Type":"ContainerStarted","Data":"89a0c2239b7b5bf0d60734d143951cfc17acf51a6bf83924bef20a68706177d1"}
Jan 29 08:16:09 crc kubenswrapper[4826]: I0129 08:16:09.746661 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6bd0-account-create-update-7mlzp" event={"ID":"fac1cc5b-8679-46db-8577-0055cf77495f","Type":"ContainerStarted","Data":"8b95b812a818de8215d93bea6822fb61720352b262c0b8d2e6adf9c411b9df63"}
Jan 29 08:16:09 crc kubenswrapper[4826]: I0129 08:16:09.752476 4826 generic.go:334] "Generic (PLEG): container finished" podID="a6ea4e54-e85e-436c-ba93-ce6c994607be" containerID="c34e3abd6de1089b2c045b3d2fbd5203cdfe9efb3fbd46be08700b5a7867a4ad" exitCode=0
Jan 29 08:16:09 crc kubenswrapper[4826]: I0129 08:16:09.752520 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-z6vw2" event={"ID":"a6ea4e54-e85e-436c-ba93-ce6c994607be","Type":"ContainerDied","Data":"c34e3abd6de1089b2c045b3d2fbd5203cdfe9efb3fbd46be08700b5a7867a4ad"}
Jan 29 08:16:09 crc kubenswrapper[4826]: I0129 08:16:09.752550 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-z6vw2" event={"ID":"a6ea4e54-e85e-436c-ba93-ce6c994607be","Type":"ContainerStarted","Data":"813a285d859c1ac9aeea8d5775c82a547cc2c5c46e0debb16cf829f707567b46"}
Jan 29 08:16:09 crc kubenswrapper[4826]: I0129 08:16:09.783328 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6bd0-account-create-update-7mlzp" podStartSLOduration=1.783312494 podStartE2EDuration="1.783312494s" podCreationTimestamp="2026-01-29 08:16:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:16:09.773447934 +0000 UTC m=+5553.635241013" watchObservedRunningTime="2026-01-29 08:16:09.783312494 +0000 UTC m=+5553.645105563"
Jan 29 08:16:10 crc kubenswrapper[4826]: I0129 08:16:10.765252 4826 generic.go:334] "Generic (PLEG): container finished" podID="fac1cc5b-8679-46db-8577-0055cf77495f" containerID="89a0c2239b7b5bf0d60734d143951cfc17acf51a6bf83924bef20a68706177d1" exitCode=0
Jan 29 08:16:10 crc kubenswrapper[4826]: I0129 08:16:10.765515 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6bd0-account-create-update-7mlzp" event={"ID":"fac1cc5b-8679-46db-8577-0055cf77495f","Type":"ContainerDied","Data":"89a0c2239b7b5bf0d60734d143951cfc17acf51a6bf83924bef20a68706177d1"}
Jan 29 08:16:11 crc kubenswrapper[4826]: I0129 08:16:11.159406 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-z6vw2"
Jan 29 08:16:11 crc kubenswrapper[4826]: I0129 08:16:11.257333 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6ea4e54-e85e-436c-ba93-ce6c994607be-operator-scripts\") pod \"a6ea4e54-e85e-436c-ba93-ce6c994607be\" (UID: \"a6ea4e54-e85e-436c-ba93-ce6c994607be\") "
Jan 29 08:16:11 crc kubenswrapper[4826]: I0129 08:16:11.258003 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmv8z\" (UniqueName: \"kubernetes.io/projected/a6ea4e54-e85e-436c-ba93-ce6c994607be-kube-api-access-tmv8z\") pod \"a6ea4e54-e85e-436c-ba93-ce6c994607be\" (UID: \"a6ea4e54-e85e-436c-ba93-ce6c994607be\") "
Jan 29 08:16:11 crc kubenswrapper[4826]: I0129 08:16:11.258239 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6ea4e54-e85e-436c-ba93-ce6c994607be-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a6ea4e54-e85e-436c-ba93-ce6c994607be" (UID: "a6ea4e54-e85e-436c-ba93-ce6c994607be"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 08:16:11 crc kubenswrapper[4826]: I0129 08:16:11.258671 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6ea4e54-e85e-436c-ba93-ce6c994607be-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 08:16:11 crc kubenswrapper[4826]: I0129 08:16:11.264923 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6ea4e54-e85e-436c-ba93-ce6c994607be-kube-api-access-tmv8z" (OuterVolumeSpecName: "kube-api-access-tmv8z") pod "a6ea4e54-e85e-436c-ba93-ce6c994607be" (UID: "a6ea4e54-e85e-436c-ba93-ce6c994607be"). InnerVolumeSpecName "kube-api-access-tmv8z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:16:11 crc kubenswrapper[4826]: I0129 08:16:11.360358 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmv8z\" (UniqueName: \"kubernetes.io/projected/a6ea4e54-e85e-436c-ba93-ce6c994607be-kube-api-access-tmv8z\") on node \"crc\" DevicePath \"\""
Jan 29 08:16:11 crc kubenswrapper[4826]: I0129 08:16:11.775999 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-z6vw2" event={"ID":"a6ea4e54-e85e-436c-ba93-ce6c994607be","Type":"ContainerDied","Data":"813a285d859c1ac9aeea8d5775c82a547cc2c5c46e0debb16cf829f707567b46"}
Jan 29 08:16:11 crc kubenswrapper[4826]: I0129 08:16:11.776073 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="813a285d859c1ac9aeea8d5775c82a547cc2c5c46e0debb16cf829f707567b46"
Jan 29 08:16:11 crc kubenswrapper[4826]: I0129 08:16:11.776020 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-z6vw2"
Jan 29 08:16:12 crc kubenswrapper[4826]: I0129 08:16:12.205423 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6bd0-account-create-update-7mlzp"
Jan 29 08:16:12 crc kubenswrapper[4826]: I0129 08:16:12.277544 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fac1cc5b-8679-46db-8577-0055cf77495f-operator-scripts\") pod \"fac1cc5b-8679-46db-8577-0055cf77495f\" (UID: \"fac1cc5b-8679-46db-8577-0055cf77495f\") "
Jan 29 08:16:12 crc kubenswrapper[4826]: I0129 08:16:12.277647 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j24m7\" (UniqueName: \"kubernetes.io/projected/fac1cc5b-8679-46db-8577-0055cf77495f-kube-api-access-j24m7\") pod \"fac1cc5b-8679-46db-8577-0055cf77495f\" (UID: \"fac1cc5b-8679-46db-8577-0055cf77495f\") "
Jan 29 08:16:12 crc kubenswrapper[4826]: I0129 08:16:12.279011 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fac1cc5b-8679-46db-8577-0055cf77495f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fac1cc5b-8679-46db-8577-0055cf77495f" (UID: "fac1cc5b-8679-46db-8577-0055cf77495f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 08:16:12 crc kubenswrapper[4826]: I0129 08:16:12.284946 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fac1cc5b-8679-46db-8577-0055cf77495f-kube-api-access-j24m7" (OuterVolumeSpecName: "kube-api-access-j24m7") pod "fac1cc5b-8679-46db-8577-0055cf77495f" (UID: "fac1cc5b-8679-46db-8577-0055cf77495f"). InnerVolumeSpecName "kube-api-access-j24m7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:16:12 crc kubenswrapper[4826]: I0129 08:16:12.380442 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fac1cc5b-8679-46db-8577-0055cf77495f-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 08:16:12 crc kubenswrapper[4826]: I0129 08:16:12.380482 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j24m7\" (UniqueName: \"kubernetes.io/projected/fac1cc5b-8679-46db-8577-0055cf77495f-kube-api-access-j24m7\") on node \"crc\" DevicePath \"\""
Jan 29 08:16:12 crc kubenswrapper[4826]: I0129 08:16:12.786459 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6bd0-account-create-update-7mlzp" event={"ID":"fac1cc5b-8679-46db-8577-0055cf77495f","Type":"ContainerDied","Data":"8b95b812a818de8215d93bea6822fb61720352b262c0b8d2e6adf9c411b9df63"}
Jan 29 08:16:12 crc kubenswrapper[4826]: I0129 08:16:12.786815 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b95b812a818de8215d93bea6822fb61720352b262c0b8d2e6adf9c411b9df63"
Jan 29 08:16:12 crc kubenswrapper[4826]: I0129 08:16:12.786532 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6bd0-account-create-update-7mlzp"
Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.057048 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67c6cdcd5c-cqq7r"]
Jan 29 08:16:14 crc kubenswrapper[4826]: E0129 08:16:14.057720 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fac1cc5b-8679-46db-8577-0055cf77495f" containerName="mariadb-account-create-update"
Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.057733 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="fac1cc5b-8679-46db-8577-0055cf77495f" containerName="mariadb-account-create-update"
Jan 29 08:16:14 crc kubenswrapper[4826]: E0129 08:16:14.057744 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6ea4e54-e85e-436c-ba93-ce6c994607be" containerName="mariadb-database-create"
Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.057750 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6ea4e54-e85e-436c-ba93-ce6c994607be" containerName="mariadb-database-create"
Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.057913 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="fac1cc5b-8679-46db-8577-0055cf77495f" containerName="mariadb-account-create-update"
Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.057933 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6ea4e54-e85e-436c-ba93-ce6c994607be" containerName="mariadb-database-create"
Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.058809 4826 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-67c6cdcd5c-cqq7r" Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.078463 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67c6cdcd5c-cqq7r"] Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.088480 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-ncbgj"] Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.089673 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ncbgj" Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.094319 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.094383 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.094653 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-sjjzd" Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.098580 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-ncbgj"] Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.114381 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkvg8\" (UniqueName: \"kubernetes.io/projected/599a1081-272d-49b0-a081-bb866479e81b-kube-api-access-bkvg8\") pod \"dnsmasq-dns-67c6cdcd5c-cqq7r\" (UID: \"599a1081-272d-49b0-a081-bb866479e81b\") " pod="openstack/dnsmasq-dns-67c6cdcd5c-cqq7r" Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.114426 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/599a1081-272d-49b0-a081-bb866479e81b-dns-svc\") pod \"dnsmasq-dns-67c6cdcd5c-cqq7r\" (UID: 
\"599a1081-272d-49b0-a081-bb866479e81b\") " pod="openstack/dnsmasq-dns-67c6cdcd5c-cqq7r" Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.114463 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/599a1081-272d-49b0-a081-bb866479e81b-ovsdbserver-nb\") pod \"dnsmasq-dns-67c6cdcd5c-cqq7r\" (UID: \"599a1081-272d-49b0-a081-bb866479e81b\") " pod="openstack/dnsmasq-dns-67c6cdcd5c-cqq7r" Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.114575 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/599a1081-272d-49b0-a081-bb866479e81b-ovsdbserver-sb\") pod \"dnsmasq-dns-67c6cdcd5c-cqq7r\" (UID: \"599a1081-272d-49b0-a081-bb866479e81b\") " pod="openstack/dnsmasq-dns-67c6cdcd5c-cqq7r" Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.114603 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/599a1081-272d-49b0-a081-bb866479e81b-config\") pod \"dnsmasq-dns-67c6cdcd5c-cqq7r\" (UID: \"599a1081-272d-49b0-a081-bb866479e81b\") " pod="openstack/dnsmasq-dns-67c6cdcd5c-cqq7r" Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.216812 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/599a1081-272d-49b0-a081-bb866479e81b-ovsdbserver-sb\") pod \"dnsmasq-dns-67c6cdcd5c-cqq7r\" (UID: \"599a1081-272d-49b0-a081-bb866479e81b\") " pod="openstack/dnsmasq-dns-67c6cdcd5c-cqq7r" Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.216856 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/599a1081-272d-49b0-a081-bb866479e81b-config\") pod \"dnsmasq-dns-67c6cdcd5c-cqq7r\" (UID: 
\"599a1081-272d-49b0-a081-bb866479e81b\") " pod="openstack/dnsmasq-dns-67c6cdcd5c-cqq7r" Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.216910 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c39a8b1f-3356-4105-96ad-529ee04ca228-combined-ca-bundle\") pod \"placement-db-sync-ncbgj\" (UID: \"c39a8b1f-3356-4105-96ad-529ee04ca228\") " pod="openstack/placement-db-sync-ncbgj" Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.216934 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkvg8\" (UniqueName: \"kubernetes.io/projected/599a1081-272d-49b0-a081-bb866479e81b-kube-api-access-bkvg8\") pod \"dnsmasq-dns-67c6cdcd5c-cqq7r\" (UID: \"599a1081-272d-49b0-a081-bb866479e81b\") " pod="openstack/dnsmasq-dns-67c6cdcd5c-cqq7r" Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.216950 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/599a1081-272d-49b0-a081-bb866479e81b-dns-svc\") pod \"dnsmasq-dns-67c6cdcd5c-cqq7r\" (UID: \"599a1081-272d-49b0-a081-bb866479e81b\") " pod="openstack/dnsmasq-dns-67c6cdcd5c-cqq7r" Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.216964 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/599a1081-272d-49b0-a081-bb866479e81b-ovsdbserver-nb\") pod \"dnsmasq-dns-67c6cdcd5c-cqq7r\" (UID: \"599a1081-272d-49b0-a081-bb866479e81b\") " pod="openstack/dnsmasq-dns-67c6cdcd5c-cqq7r" Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.217026 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c39a8b1f-3356-4105-96ad-529ee04ca228-logs\") pod \"placement-db-sync-ncbgj\" (UID: \"c39a8b1f-3356-4105-96ad-529ee04ca228\") " 
pod="openstack/placement-db-sync-ncbgj" Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.217048 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kff96\" (UniqueName: \"kubernetes.io/projected/c39a8b1f-3356-4105-96ad-529ee04ca228-kube-api-access-kff96\") pod \"placement-db-sync-ncbgj\" (UID: \"c39a8b1f-3356-4105-96ad-529ee04ca228\") " pod="openstack/placement-db-sync-ncbgj" Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.217077 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c39a8b1f-3356-4105-96ad-529ee04ca228-config-data\") pod \"placement-db-sync-ncbgj\" (UID: \"c39a8b1f-3356-4105-96ad-529ee04ca228\") " pod="openstack/placement-db-sync-ncbgj" Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.217097 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c39a8b1f-3356-4105-96ad-529ee04ca228-scripts\") pod \"placement-db-sync-ncbgj\" (UID: \"c39a8b1f-3356-4105-96ad-529ee04ca228\") " pod="openstack/placement-db-sync-ncbgj" Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.217917 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/599a1081-272d-49b0-a081-bb866479e81b-config\") pod \"dnsmasq-dns-67c6cdcd5c-cqq7r\" (UID: \"599a1081-272d-49b0-a081-bb866479e81b\") " pod="openstack/dnsmasq-dns-67c6cdcd5c-cqq7r" Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.218052 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/599a1081-272d-49b0-a081-bb866479e81b-ovsdbserver-sb\") pod \"dnsmasq-dns-67c6cdcd5c-cqq7r\" (UID: \"599a1081-272d-49b0-a081-bb866479e81b\") " pod="openstack/dnsmasq-dns-67c6cdcd5c-cqq7r" Jan 29 08:16:14 crc 
kubenswrapper[4826]: I0129 08:16:14.218462 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/599a1081-272d-49b0-a081-bb866479e81b-ovsdbserver-nb\") pod \"dnsmasq-dns-67c6cdcd5c-cqq7r\" (UID: \"599a1081-272d-49b0-a081-bb866479e81b\") " pod="openstack/dnsmasq-dns-67c6cdcd5c-cqq7r" Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.218619 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/599a1081-272d-49b0-a081-bb866479e81b-dns-svc\") pod \"dnsmasq-dns-67c6cdcd5c-cqq7r\" (UID: \"599a1081-272d-49b0-a081-bb866479e81b\") " pod="openstack/dnsmasq-dns-67c6cdcd5c-cqq7r" Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.240431 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkvg8\" (UniqueName: \"kubernetes.io/projected/599a1081-272d-49b0-a081-bb866479e81b-kube-api-access-bkvg8\") pod \"dnsmasq-dns-67c6cdcd5c-cqq7r\" (UID: \"599a1081-272d-49b0-a081-bb866479e81b\") " pod="openstack/dnsmasq-dns-67c6cdcd5c-cqq7r" Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.319094 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c39a8b1f-3356-4105-96ad-529ee04ca228-combined-ca-bundle\") pod \"placement-db-sync-ncbgj\" (UID: \"c39a8b1f-3356-4105-96ad-529ee04ca228\") " pod="openstack/placement-db-sync-ncbgj" Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.319249 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c39a8b1f-3356-4105-96ad-529ee04ca228-logs\") pod \"placement-db-sync-ncbgj\" (UID: \"c39a8b1f-3356-4105-96ad-529ee04ca228\") " pod="openstack/placement-db-sync-ncbgj" Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.319289 4826 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-kff96\" (UniqueName: \"kubernetes.io/projected/c39a8b1f-3356-4105-96ad-529ee04ca228-kube-api-access-kff96\") pod \"placement-db-sync-ncbgj\" (UID: \"c39a8b1f-3356-4105-96ad-529ee04ca228\") " pod="openstack/placement-db-sync-ncbgj" Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.319355 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c39a8b1f-3356-4105-96ad-529ee04ca228-config-data\") pod \"placement-db-sync-ncbgj\" (UID: \"c39a8b1f-3356-4105-96ad-529ee04ca228\") " pod="openstack/placement-db-sync-ncbgj" Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.319394 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c39a8b1f-3356-4105-96ad-529ee04ca228-scripts\") pod \"placement-db-sync-ncbgj\" (UID: \"c39a8b1f-3356-4105-96ad-529ee04ca228\") " pod="openstack/placement-db-sync-ncbgj" Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.320364 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c39a8b1f-3356-4105-96ad-529ee04ca228-logs\") pod \"placement-db-sync-ncbgj\" (UID: \"c39a8b1f-3356-4105-96ad-529ee04ca228\") " pod="openstack/placement-db-sync-ncbgj" Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.323154 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c39a8b1f-3356-4105-96ad-529ee04ca228-scripts\") pod \"placement-db-sync-ncbgj\" (UID: \"c39a8b1f-3356-4105-96ad-529ee04ca228\") " pod="openstack/placement-db-sync-ncbgj" Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.324064 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c39a8b1f-3356-4105-96ad-529ee04ca228-combined-ca-bundle\") pod \"placement-db-sync-ncbgj\" (UID: 
\"c39a8b1f-3356-4105-96ad-529ee04ca228\") " pod="openstack/placement-db-sync-ncbgj" Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.325187 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c39a8b1f-3356-4105-96ad-529ee04ca228-config-data\") pod \"placement-db-sync-ncbgj\" (UID: \"c39a8b1f-3356-4105-96ad-529ee04ca228\") " pod="openstack/placement-db-sync-ncbgj" Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.341268 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kff96\" (UniqueName: \"kubernetes.io/projected/c39a8b1f-3356-4105-96ad-529ee04ca228-kube-api-access-kff96\") pod \"placement-db-sync-ncbgj\" (UID: \"c39a8b1f-3356-4105-96ad-529ee04ca228\") " pod="openstack/placement-db-sync-ncbgj" Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.381695 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67c6cdcd5c-cqq7r" Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.414704 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-ncbgj" Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.916717 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-ncbgj"] Jan 29 08:16:14 crc kubenswrapper[4826]: W0129 08:16:14.995032 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod599a1081_272d_49b0_a081_bb866479e81b.slice/crio-664579ab7bd34edd83b0fe81da790b04174c2cf5d35df2751ea738203649f2ca WatchSource:0}: Error finding container 664579ab7bd34edd83b0fe81da790b04174c2cf5d35df2751ea738203649f2ca: Status 404 returned error can't find the container with id 664579ab7bd34edd83b0fe81da790b04174c2cf5d35df2751ea738203649f2ca Jan 29 08:16:14 crc kubenswrapper[4826]: I0129 08:16:14.996666 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67c6cdcd5c-cqq7r"] Jan 29 08:16:15 crc kubenswrapper[4826]: I0129 08:16:15.823161 4826 generic.go:334] "Generic (PLEG): container finished" podID="599a1081-272d-49b0-a081-bb866479e81b" containerID="697a0518edb80c3defc88deb124cec0f7e3c19c20ab4651c17b11429a6c035ae" exitCode=0 Jan 29 08:16:15 crc kubenswrapper[4826]: I0129 08:16:15.823347 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67c6cdcd5c-cqq7r" event={"ID":"599a1081-272d-49b0-a081-bb866479e81b","Type":"ContainerDied","Data":"697a0518edb80c3defc88deb124cec0f7e3c19c20ab4651c17b11429a6c035ae"} Jan 29 08:16:15 crc kubenswrapper[4826]: I0129 08:16:15.826028 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67c6cdcd5c-cqq7r" event={"ID":"599a1081-272d-49b0-a081-bb866479e81b","Type":"ContainerStarted","Data":"664579ab7bd34edd83b0fe81da790b04174c2cf5d35df2751ea738203649f2ca"} Jan 29 08:16:15 crc kubenswrapper[4826]: I0129 08:16:15.828046 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ncbgj" 
event={"ID":"c39a8b1f-3356-4105-96ad-529ee04ca228","Type":"ContainerStarted","Data":"7c11124bd98571ab9cd42b00d9627b7aab3a005dc391bc59b928a0ad363c1f3d"} Jan 29 08:16:16 crc kubenswrapper[4826]: I0129 08:16:16.841884 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67c6cdcd5c-cqq7r" event={"ID":"599a1081-272d-49b0-a081-bb866479e81b","Type":"ContainerStarted","Data":"d452642237e7a695c7c2c5d4eeaed39970754da1e6aed17d01d9607f997100ba"} Jan 29 08:16:16 crc kubenswrapper[4826]: I0129 08:16:16.842258 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67c6cdcd5c-cqq7r" Jan 29 08:16:18 crc kubenswrapper[4826]: I0129 08:16:18.870099 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ncbgj" event={"ID":"c39a8b1f-3356-4105-96ad-529ee04ca228","Type":"ContainerStarted","Data":"641eb95a5e74910a1ff4d369cbdb325fe9a7393dd0f037fa7619dd878b9547d0"} Jan 29 08:16:18 crc kubenswrapper[4826]: I0129 08:16:18.907357 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-ncbgj" podStartSLOduration=1.322827924 podStartE2EDuration="4.907325602s" podCreationTimestamp="2026-01-29 08:16:14 +0000 UTC" firstStartedPulling="2026-01-29 08:16:14.907990973 +0000 UTC m=+5558.769784042" lastFinishedPulling="2026-01-29 08:16:18.492488651 +0000 UTC m=+5562.354281720" observedRunningTime="2026-01-29 08:16:18.895658995 +0000 UTC m=+5562.757452144" watchObservedRunningTime="2026-01-29 08:16:18.907325602 +0000 UTC m=+5562.769118711" Jan 29 08:16:18 crc kubenswrapper[4826]: I0129 08:16:18.917886 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67c6cdcd5c-cqq7r" podStartSLOduration=4.917858919 podStartE2EDuration="4.917858919s" podCreationTimestamp="2026-01-29 08:16:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-29 08:16:16.862009807 +0000 UTC m=+5560.723802876" watchObservedRunningTime="2026-01-29 08:16:18.917858919 +0000 UTC m=+5562.779652018" Jan 29 08:16:20 crc kubenswrapper[4826]: I0129 08:16:20.886581 4826 generic.go:334] "Generic (PLEG): container finished" podID="c39a8b1f-3356-4105-96ad-529ee04ca228" containerID="641eb95a5e74910a1ff4d369cbdb325fe9a7393dd0f037fa7619dd878b9547d0" exitCode=0 Jan 29 08:16:20 crc kubenswrapper[4826]: I0129 08:16:20.886679 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ncbgj" event={"ID":"c39a8b1f-3356-4105-96ad-529ee04ca228","Type":"ContainerDied","Data":"641eb95a5e74910a1ff4d369cbdb325fe9a7393dd0f037fa7619dd878b9547d0"} Jan 29 08:16:22 crc kubenswrapper[4826]: I0129 08:16:22.316879 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ncbgj" Jan 29 08:16:22 crc kubenswrapper[4826]: I0129 08:16:22.413945 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c39a8b1f-3356-4105-96ad-529ee04ca228-scripts\") pod \"c39a8b1f-3356-4105-96ad-529ee04ca228\" (UID: \"c39a8b1f-3356-4105-96ad-529ee04ca228\") " Jan 29 08:16:22 crc kubenswrapper[4826]: I0129 08:16:22.414070 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c39a8b1f-3356-4105-96ad-529ee04ca228-config-data\") pod \"c39a8b1f-3356-4105-96ad-529ee04ca228\" (UID: \"c39a8b1f-3356-4105-96ad-529ee04ca228\") " Jan 29 08:16:22 crc kubenswrapper[4826]: I0129 08:16:22.414139 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c39a8b1f-3356-4105-96ad-529ee04ca228-combined-ca-bundle\") pod \"c39a8b1f-3356-4105-96ad-529ee04ca228\" (UID: \"c39a8b1f-3356-4105-96ad-529ee04ca228\") " Jan 29 08:16:22 crc kubenswrapper[4826]: 
I0129 08:16:22.414164 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c39a8b1f-3356-4105-96ad-529ee04ca228-logs\") pod \"c39a8b1f-3356-4105-96ad-529ee04ca228\" (UID: \"c39a8b1f-3356-4105-96ad-529ee04ca228\") " Jan 29 08:16:22 crc kubenswrapper[4826]: I0129 08:16:22.414226 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kff96\" (UniqueName: \"kubernetes.io/projected/c39a8b1f-3356-4105-96ad-529ee04ca228-kube-api-access-kff96\") pod \"c39a8b1f-3356-4105-96ad-529ee04ca228\" (UID: \"c39a8b1f-3356-4105-96ad-529ee04ca228\") " Jan 29 08:16:22 crc kubenswrapper[4826]: I0129 08:16:22.414747 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c39a8b1f-3356-4105-96ad-529ee04ca228-logs" (OuterVolumeSpecName: "logs") pod "c39a8b1f-3356-4105-96ad-529ee04ca228" (UID: "c39a8b1f-3356-4105-96ad-529ee04ca228"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:16:22 crc kubenswrapper[4826]: I0129 08:16:22.415151 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c39a8b1f-3356-4105-96ad-529ee04ca228-logs\") on node \"crc\" DevicePath \"\"" Jan 29 08:16:22 crc kubenswrapper[4826]: I0129 08:16:22.420784 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c39a8b1f-3356-4105-96ad-529ee04ca228-scripts" (OuterVolumeSpecName: "scripts") pod "c39a8b1f-3356-4105-96ad-529ee04ca228" (UID: "c39a8b1f-3356-4105-96ad-529ee04ca228"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:16:22 crc kubenswrapper[4826]: I0129 08:16:22.423270 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c39a8b1f-3356-4105-96ad-529ee04ca228-kube-api-access-kff96" (OuterVolumeSpecName: "kube-api-access-kff96") pod "c39a8b1f-3356-4105-96ad-529ee04ca228" (UID: "c39a8b1f-3356-4105-96ad-529ee04ca228"). InnerVolumeSpecName "kube-api-access-kff96". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:16:22 crc kubenswrapper[4826]: I0129 08:16:22.442703 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c39a8b1f-3356-4105-96ad-529ee04ca228-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c39a8b1f-3356-4105-96ad-529ee04ca228" (UID: "c39a8b1f-3356-4105-96ad-529ee04ca228"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:16:22 crc kubenswrapper[4826]: I0129 08:16:22.445088 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c39a8b1f-3356-4105-96ad-529ee04ca228-config-data" (OuterVolumeSpecName: "config-data") pod "c39a8b1f-3356-4105-96ad-529ee04ca228" (UID: "c39a8b1f-3356-4105-96ad-529ee04ca228"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:16:22 crc kubenswrapper[4826]: I0129 08:16:22.516825 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kff96\" (UniqueName: \"kubernetes.io/projected/c39a8b1f-3356-4105-96ad-529ee04ca228-kube-api-access-kff96\") on node \"crc\" DevicePath \"\"" Jan 29 08:16:22 crc kubenswrapper[4826]: I0129 08:16:22.516861 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c39a8b1f-3356-4105-96ad-529ee04ca228-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:16:22 crc kubenswrapper[4826]: I0129 08:16:22.516872 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c39a8b1f-3356-4105-96ad-529ee04ca228-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:16:22 crc kubenswrapper[4826]: I0129 08:16:22.516882 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c39a8b1f-3356-4105-96ad-529ee04ca228-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:16:22 crc kubenswrapper[4826]: I0129 08:16:22.913665 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ncbgj" event={"ID":"c39a8b1f-3356-4105-96ad-529ee04ca228","Type":"ContainerDied","Data":"7c11124bd98571ab9cd42b00d9627b7aab3a005dc391bc59b928a0ad363c1f3d"} Jan 29 08:16:22 crc kubenswrapper[4826]: I0129 08:16:22.913725 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c11124bd98571ab9cd42b00d9627b7aab3a005dc391bc59b928a0ad363c1f3d" Jan 29 08:16:22 crc kubenswrapper[4826]: I0129 08:16:22.913742 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-ncbgj" Jan 29 08:16:23 crc kubenswrapper[4826]: I0129 08:16:23.447900 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-856c6b9568-k76hl"] Jan 29 08:16:23 crc kubenswrapper[4826]: E0129 08:16:23.448595 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c39a8b1f-3356-4105-96ad-529ee04ca228" containerName="placement-db-sync" Jan 29 08:16:23 crc kubenswrapper[4826]: I0129 08:16:23.448612 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="c39a8b1f-3356-4105-96ad-529ee04ca228" containerName="placement-db-sync" Jan 29 08:16:23 crc kubenswrapper[4826]: I0129 08:16:23.448780 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="c39a8b1f-3356-4105-96ad-529ee04ca228" containerName="placement-db-sync" Jan 29 08:16:23 crc kubenswrapper[4826]: I0129 08:16:23.450082 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-856c6b9568-k76hl" Jan 29 08:16:23 crc kubenswrapper[4826]: I0129 08:16:23.465487 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 29 08:16:23 crc kubenswrapper[4826]: I0129 08:16:23.465840 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-sjjzd" Jan 29 08:16:23 crc kubenswrapper[4826]: I0129 08:16:23.465902 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 29 08:16:23 crc kubenswrapper[4826]: I0129 08:16:23.466063 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 29 08:16:23 crc kubenswrapper[4826]: I0129 08:16:23.466389 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 29 08:16:23 crc kubenswrapper[4826]: I0129 08:16:23.477028 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/placement-856c6b9568-k76hl"] Jan 29 08:16:23 crc kubenswrapper[4826]: I0129 08:16:23.539442 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e36928a2-9c24-4179-9607-07b9fcbd334e-public-tls-certs\") pod \"placement-856c6b9568-k76hl\" (UID: \"e36928a2-9c24-4179-9607-07b9fcbd334e\") " pod="openstack/placement-856c6b9568-k76hl" Jan 29 08:16:23 crc kubenswrapper[4826]: I0129 08:16:23.539511 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e36928a2-9c24-4179-9607-07b9fcbd334e-scripts\") pod \"placement-856c6b9568-k76hl\" (UID: \"e36928a2-9c24-4179-9607-07b9fcbd334e\") " pod="openstack/placement-856c6b9568-k76hl" Jan 29 08:16:23 crc kubenswrapper[4826]: I0129 08:16:23.539570 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e36928a2-9c24-4179-9607-07b9fcbd334e-config-data\") pod \"placement-856c6b9568-k76hl\" (UID: \"e36928a2-9c24-4179-9607-07b9fcbd334e\") " pod="openstack/placement-856c6b9568-k76hl" Jan 29 08:16:23 crc kubenswrapper[4826]: I0129 08:16:23.539612 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e36928a2-9c24-4179-9607-07b9fcbd334e-internal-tls-certs\") pod \"placement-856c6b9568-k76hl\" (UID: \"e36928a2-9c24-4179-9607-07b9fcbd334e\") " pod="openstack/placement-856c6b9568-k76hl" Jan 29 08:16:23 crc kubenswrapper[4826]: I0129 08:16:23.539646 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqrxm\" (UniqueName: \"kubernetes.io/projected/e36928a2-9c24-4179-9607-07b9fcbd334e-kube-api-access-wqrxm\") pod \"placement-856c6b9568-k76hl\" (UID: 
\"e36928a2-9c24-4179-9607-07b9fcbd334e\") " pod="openstack/placement-856c6b9568-k76hl" Jan 29 08:16:23 crc kubenswrapper[4826]: I0129 08:16:23.539753 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e36928a2-9c24-4179-9607-07b9fcbd334e-logs\") pod \"placement-856c6b9568-k76hl\" (UID: \"e36928a2-9c24-4179-9607-07b9fcbd334e\") " pod="openstack/placement-856c6b9568-k76hl" Jan 29 08:16:23 crc kubenswrapper[4826]: I0129 08:16:23.539802 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e36928a2-9c24-4179-9607-07b9fcbd334e-combined-ca-bundle\") pod \"placement-856c6b9568-k76hl\" (UID: \"e36928a2-9c24-4179-9607-07b9fcbd334e\") " pod="openstack/placement-856c6b9568-k76hl" Jan 29 08:16:23 crc kubenswrapper[4826]: I0129 08:16:23.641895 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e36928a2-9c24-4179-9607-07b9fcbd334e-logs\") pod \"placement-856c6b9568-k76hl\" (UID: \"e36928a2-9c24-4179-9607-07b9fcbd334e\") " pod="openstack/placement-856c6b9568-k76hl" Jan 29 08:16:23 crc kubenswrapper[4826]: I0129 08:16:23.641983 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e36928a2-9c24-4179-9607-07b9fcbd334e-combined-ca-bundle\") pod \"placement-856c6b9568-k76hl\" (UID: \"e36928a2-9c24-4179-9607-07b9fcbd334e\") " pod="openstack/placement-856c6b9568-k76hl" Jan 29 08:16:23 crc kubenswrapper[4826]: I0129 08:16:23.642040 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e36928a2-9c24-4179-9607-07b9fcbd334e-public-tls-certs\") pod \"placement-856c6b9568-k76hl\" (UID: \"e36928a2-9c24-4179-9607-07b9fcbd334e\") " 
pod="openstack/placement-856c6b9568-k76hl" Jan 29 08:16:23 crc kubenswrapper[4826]: I0129 08:16:23.642060 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e36928a2-9c24-4179-9607-07b9fcbd334e-scripts\") pod \"placement-856c6b9568-k76hl\" (UID: \"e36928a2-9c24-4179-9607-07b9fcbd334e\") " pod="openstack/placement-856c6b9568-k76hl" Jan 29 08:16:23 crc kubenswrapper[4826]: I0129 08:16:23.642087 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e36928a2-9c24-4179-9607-07b9fcbd334e-config-data\") pod \"placement-856c6b9568-k76hl\" (UID: \"e36928a2-9c24-4179-9607-07b9fcbd334e\") " pod="openstack/placement-856c6b9568-k76hl" Jan 29 08:16:23 crc kubenswrapper[4826]: I0129 08:16:23.642112 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e36928a2-9c24-4179-9607-07b9fcbd334e-internal-tls-certs\") pod \"placement-856c6b9568-k76hl\" (UID: \"e36928a2-9c24-4179-9607-07b9fcbd334e\") " pod="openstack/placement-856c6b9568-k76hl" Jan 29 08:16:23 crc kubenswrapper[4826]: I0129 08:16:23.642293 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqrxm\" (UniqueName: \"kubernetes.io/projected/e36928a2-9c24-4179-9607-07b9fcbd334e-kube-api-access-wqrxm\") pod \"placement-856c6b9568-k76hl\" (UID: \"e36928a2-9c24-4179-9607-07b9fcbd334e\") " pod="openstack/placement-856c6b9568-k76hl" Jan 29 08:16:23 crc kubenswrapper[4826]: I0129 08:16:23.642984 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e36928a2-9c24-4179-9607-07b9fcbd334e-logs\") pod \"placement-856c6b9568-k76hl\" (UID: \"e36928a2-9c24-4179-9607-07b9fcbd334e\") " pod="openstack/placement-856c6b9568-k76hl" Jan 29 08:16:23 crc kubenswrapper[4826]: I0129 08:16:23.647721 
4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e36928a2-9c24-4179-9607-07b9fcbd334e-combined-ca-bundle\") pod \"placement-856c6b9568-k76hl\" (UID: \"e36928a2-9c24-4179-9607-07b9fcbd334e\") " pod="openstack/placement-856c6b9568-k76hl" Jan 29 08:16:23 crc kubenswrapper[4826]: I0129 08:16:23.647723 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e36928a2-9c24-4179-9607-07b9fcbd334e-config-data\") pod \"placement-856c6b9568-k76hl\" (UID: \"e36928a2-9c24-4179-9607-07b9fcbd334e\") " pod="openstack/placement-856c6b9568-k76hl" Jan 29 08:16:23 crc kubenswrapper[4826]: I0129 08:16:23.648866 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e36928a2-9c24-4179-9607-07b9fcbd334e-public-tls-certs\") pod \"placement-856c6b9568-k76hl\" (UID: \"e36928a2-9c24-4179-9607-07b9fcbd334e\") " pod="openstack/placement-856c6b9568-k76hl" Jan 29 08:16:23 crc kubenswrapper[4826]: I0129 08:16:23.653681 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e36928a2-9c24-4179-9607-07b9fcbd334e-internal-tls-certs\") pod \"placement-856c6b9568-k76hl\" (UID: \"e36928a2-9c24-4179-9607-07b9fcbd334e\") " pod="openstack/placement-856c6b9568-k76hl" Jan 29 08:16:23 crc kubenswrapper[4826]: I0129 08:16:23.660174 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqrxm\" (UniqueName: \"kubernetes.io/projected/e36928a2-9c24-4179-9607-07b9fcbd334e-kube-api-access-wqrxm\") pod \"placement-856c6b9568-k76hl\" (UID: \"e36928a2-9c24-4179-9607-07b9fcbd334e\") " pod="openstack/placement-856c6b9568-k76hl" Jan 29 08:16:23 crc kubenswrapper[4826]: I0129 08:16:23.667695 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e36928a2-9c24-4179-9607-07b9fcbd334e-scripts\") pod \"placement-856c6b9568-k76hl\" (UID: \"e36928a2-9c24-4179-9607-07b9fcbd334e\") " pod="openstack/placement-856c6b9568-k76hl" Jan 29 08:16:23 crc kubenswrapper[4826]: I0129 08:16:23.784600 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-856c6b9568-k76hl" Jan 29 08:16:24 crc kubenswrapper[4826]: I0129 08:16:24.248911 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-856c6b9568-k76hl"] Jan 29 08:16:24 crc kubenswrapper[4826]: I0129 08:16:24.383488 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67c6cdcd5c-cqq7r" Jan 29 08:16:24 crc kubenswrapper[4826]: I0129 08:16:24.464156 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85f559f45c-jcxmn"] Jan 29 08:16:24 crc kubenswrapper[4826]: I0129 08:16:24.464922 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85f559f45c-jcxmn" podUID="ad84c848-d300-4e24-bcf0-c6f5ff046a87" containerName="dnsmasq-dns" containerID="cri-o://98d355213920248f787fd4090c2ab02a4988299129c22853b1b19f36f0dba021" gracePeriod=10 Jan 29 08:16:24 crc kubenswrapper[4826]: I0129 08:16:24.934230 4826 generic.go:334] "Generic (PLEG): container finished" podID="ad84c848-d300-4e24-bcf0-c6f5ff046a87" containerID="98d355213920248f787fd4090c2ab02a4988299129c22853b1b19f36f0dba021" exitCode=0 Jan 29 08:16:24 crc kubenswrapper[4826]: I0129 08:16:24.934355 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f559f45c-jcxmn" event={"ID":"ad84c848-d300-4e24-bcf0-c6f5ff046a87","Type":"ContainerDied","Data":"98d355213920248f787fd4090c2ab02a4988299129c22853b1b19f36f0dba021"} Jan 29 08:16:24 crc kubenswrapper[4826]: I0129 08:16:24.934675 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f559f45c-jcxmn" 
event={"ID":"ad84c848-d300-4e24-bcf0-c6f5ff046a87","Type":"ContainerDied","Data":"d2005472a79416f543323fa31cc752fb02113c2c66042cf8653111a2eadd19ca"} Jan 29 08:16:24 crc kubenswrapper[4826]: I0129 08:16:24.934694 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2005472a79416f543323fa31cc752fb02113c2c66042cf8653111a2eadd19ca" Jan 29 08:16:24 crc kubenswrapper[4826]: I0129 08:16:24.937516 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-856c6b9568-k76hl" event={"ID":"e36928a2-9c24-4179-9607-07b9fcbd334e","Type":"ContainerStarted","Data":"70e3745e80499fd4c30223233331b13fa8d9247e788d204fbeb5f3977c8d5ef9"} Jan 29 08:16:24 crc kubenswrapper[4826]: I0129 08:16:24.937552 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-856c6b9568-k76hl" event={"ID":"e36928a2-9c24-4179-9607-07b9fcbd334e","Type":"ContainerStarted","Data":"458f9c68f576932567b96f2f93999339ce47df601df31e1c61b5e4a298eedb7d"} Jan 29 08:16:24 crc kubenswrapper[4826]: I0129 08:16:24.937567 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-856c6b9568-k76hl" event={"ID":"e36928a2-9c24-4179-9607-07b9fcbd334e","Type":"ContainerStarted","Data":"f5b3be964c7d4b895ae72fa8d494a6ede2fee2210f34d0f1313a7859fb9c7af8"} Jan 29 08:16:24 crc kubenswrapper[4826]: I0129 08:16:24.937800 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-856c6b9568-k76hl" Jan 29 08:16:24 crc kubenswrapper[4826]: I0129 08:16:24.937828 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-856c6b9568-k76hl" Jan 29 08:16:24 crc kubenswrapper[4826]: I0129 08:16:24.967973 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85f559f45c-jcxmn" Jan 29 08:16:24 crc kubenswrapper[4826]: I0129 08:16:24.974796 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-856c6b9568-k76hl" podStartSLOduration=1.974772443 podStartE2EDuration="1.974772443s" podCreationTimestamp="2026-01-29 08:16:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:16:24.956430881 +0000 UTC m=+5568.818223970" watchObservedRunningTime="2026-01-29 08:16:24.974772443 +0000 UTC m=+5568.836565532" Jan 29 08:16:25 crc kubenswrapper[4826]: I0129 08:16:25.068943 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad84c848-d300-4e24-bcf0-c6f5ff046a87-ovsdbserver-sb\") pod \"ad84c848-d300-4e24-bcf0-c6f5ff046a87\" (UID: \"ad84c848-d300-4e24-bcf0-c6f5ff046a87\") " Jan 29 08:16:25 crc kubenswrapper[4826]: I0129 08:16:25.069037 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad84c848-d300-4e24-bcf0-c6f5ff046a87-ovsdbserver-nb\") pod \"ad84c848-d300-4e24-bcf0-c6f5ff046a87\" (UID: \"ad84c848-d300-4e24-bcf0-c6f5ff046a87\") " Jan 29 08:16:25 crc kubenswrapper[4826]: I0129 08:16:25.069125 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad84c848-d300-4e24-bcf0-c6f5ff046a87-config\") pod \"ad84c848-d300-4e24-bcf0-c6f5ff046a87\" (UID: \"ad84c848-d300-4e24-bcf0-c6f5ff046a87\") " Jan 29 08:16:25 crc kubenswrapper[4826]: I0129 08:16:25.069171 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4ljt\" (UniqueName: \"kubernetes.io/projected/ad84c848-d300-4e24-bcf0-c6f5ff046a87-kube-api-access-w4ljt\") pod 
\"ad84c848-d300-4e24-bcf0-c6f5ff046a87\" (UID: \"ad84c848-d300-4e24-bcf0-c6f5ff046a87\") " Jan 29 08:16:25 crc kubenswrapper[4826]: I0129 08:16:25.069314 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad84c848-d300-4e24-bcf0-c6f5ff046a87-dns-svc\") pod \"ad84c848-d300-4e24-bcf0-c6f5ff046a87\" (UID: \"ad84c848-d300-4e24-bcf0-c6f5ff046a87\") " Jan 29 08:16:25 crc kubenswrapper[4826]: I0129 08:16:25.074100 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad84c848-d300-4e24-bcf0-c6f5ff046a87-kube-api-access-w4ljt" (OuterVolumeSpecName: "kube-api-access-w4ljt") pod "ad84c848-d300-4e24-bcf0-c6f5ff046a87" (UID: "ad84c848-d300-4e24-bcf0-c6f5ff046a87"). InnerVolumeSpecName "kube-api-access-w4ljt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:16:25 crc kubenswrapper[4826]: I0129 08:16:25.120045 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad84c848-d300-4e24-bcf0-c6f5ff046a87-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ad84c848-d300-4e24-bcf0-c6f5ff046a87" (UID: "ad84c848-d300-4e24-bcf0-c6f5ff046a87"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:16:25 crc kubenswrapper[4826]: I0129 08:16:25.123115 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad84c848-d300-4e24-bcf0-c6f5ff046a87-config" (OuterVolumeSpecName: "config") pod "ad84c848-d300-4e24-bcf0-c6f5ff046a87" (UID: "ad84c848-d300-4e24-bcf0-c6f5ff046a87"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:16:25 crc kubenswrapper[4826]: I0129 08:16:25.125219 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad84c848-d300-4e24-bcf0-c6f5ff046a87-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ad84c848-d300-4e24-bcf0-c6f5ff046a87" (UID: "ad84c848-d300-4e24-bcf0-c6f5ff046a87"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:16:25 crc kubenswrapper[4826]: I0129 08:16:25.136735 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad84c848-d300-4e24-bcf0-c6f5ff046a87-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ad84c848-d300-4e24-bcf0-c6f5ff046a87" (UID: "ad84c848-d300-4e24-bcf0-c6f5ff046a87"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:16:25 crc kubenswrapper[4826]: I0129 08:16:25.171349 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad84c848-d300-4e24-bcf0-c6f5ff046a87-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 08:16:25 crc kubenswrapper[4826]: I0129 08:16:25.171395 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad84c848-d300-4e24-bcf0-c6f5ff046a87-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 08:16:25 crc kubenswrapper[4826]: I0129 08:16:25.171409 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad84c848-d300-4e24-bcf0-c6f5ff046a87-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:16:25 crc kubenswrapper[4826]: I0129 08:16:25.171425 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4ljt\" (UniqueName: \"kubernetes.io/projected/ad84c848-d300-4e24-bcf0-c6f5ff046a87-kube-api-access-w4ljt\") on node \"crc\" DevicePath \"\"" Jan 29 
08:16:25 crc kubenswrapper[4826]: I0129 08:16:25.171443 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad84c848-d300-4e24-bcf0-c6f5ff046a87-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 08:16:25 crc kubenswrapper[4826]: I0129 08:16:25.949009 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85f559f45c-jcxmn" Jan 29 08:16:26 crc kubenswrapper[4826]: I0129 08:16:26.009184 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85f559f45c-jcxmn"] Jan 29 08:16:26 crc kubenswrapper[4826]: I0129 08:16:26.022535 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85f559f45c-jcxmn"] Jan 29 08:16:26 crc kubenswrapper[4826]: I0129 08:16:26.834974 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad84c848-d300-4e24-bcf0-c6f5ff046a87" path="/var/lib/kubelet/pods/ad84c848-d300-4e24-bcf0-c6f5ff046a87/volumes" Jan 29 08:16:54 crc kubenswrapper[4826]: I0129 08:16:54.720191 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-856c6b9568-k76hl" Jan 29 08:16:54 crc kubenswrapper[4826]: I0129 08:16:54.747362 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-856c6b9568-k76hl" Jan 29 08:17:05 crc kubenswrapper[4826]: I0129 08:17:05.656005 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:17:05 crc kubenswrapper[4826]: I0129 08:17:05.656567 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.203937 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-s4sks"] Jan 29 08:17:17 crc kubenswrapper[4826]: E0129 08:17:17.205693 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad84c848-d300-4e24-bcf0-c6f5ff046a87" containerName="init" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.205768 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad84c848-d300-4e24-bcf0-c6f5ff046a87" containerName="init" Jan 29 08:17:17 crc kubenswrapper[4826]: E0129 08:17:17.205827 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad84c848-d300-4e24-bcf0-c6f5ff046a87" containerName="dnsmasq-dns" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.205876 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad84c848-d300-4e24-bcf0-c6f5ff046a87" containerName="dnsmasq-dns" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.206073 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad84c848-d300-4e24-bcf0-c6f5ff046a87" containerName="dnsmasq-dns" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.206713 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-s4sks" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.220635 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-s4sks"] Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.298061 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8da5547-d767-4dc6-96bd-985b44f4743d-operator-scripts\") pod \"nova-api-db-create-s4sks\" (UID: \"d8da5547-d767-4dc6-96bd-985b44f4743d\") " pod="openstack/nova-api-db-create-s4sks" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.298161 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96nf6\" (UniqueName: \"kubernetes.io/projected/d8da5547-d767-4dc6-96bd-985b44f4743d-kube-api-access-96nf6\") pod \"nova-api-db-create-s4sks\" (UID: \"d8da5547-d767-4dc6-96bd-985b44f4743d\") " pod="openstack/nova-api-db-create-s4sks" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.310199 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-nmz88"] Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.311255 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-nmz88" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.325357 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-be2d-account-create-update-kwqfd"] Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.326596 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-be2d-account-create-update-kwqfd" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.328684 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.344761 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-nmz88"] Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.357646 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-be2d-account-create-update-kwqfd"] Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.400390 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/066e5538-568a-4d24-a574-14c40f353fe3-operator-scripts\") pod \"nova-cell0-db-create-nmz88\" (UID: \"066e5538-568a-4d24-a574-14c40f353fe3\") " pod="openstack/nova-cell0-db-create-nmz88" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.400467 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96nf6\" (UniqueName: \"kubernetes.io/projected/d8da5547-d767-4dc6-96bd-985b44f4743d-kube-api-access-96nf6\") pod \"nova-api-db-create-s4sks\" (UID: \"d8da5547-d767-4dc6-96bd-985b44f4743d\") " pod="openstack/nova-api-db-create-s4sks" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.400562 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqbz2\" (UniqueName: \"kubernetes.io/projected/319f3f29-51d6-4821-a56b-9178f703d9a3-kube-api-access-lqbz2\") pod \"nova-api-be2d-account-create-update-kwqfd\" (UID: \"319f3f29-51d6-4821-a56b-9178f703d9a3\") " pod="openstack/nova-api-be2d-account-create-update-kwqfd" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.400708 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8da5547-d767-4dc6-96bd-985b44f4743d-operator-scripts\") pod \"nova-api-db-create-s4sks\" (UID: \"d8da5547-d767-4dc6-96bd-985b44f4743d\") " pod="openstack/nova-api-db-create-s4sks" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.400755 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/319f3f29-51d6-4821-a56b-9178f703d9a3-operator-scripts\") pod \"nova-api-be2d-account-create-update-kwqfd\" (UID: \"319f3f29-51d6-4821-a56b-9178f703d9a3\") " pod="openstack/nova-api-be2d-account-create-update-kwqfd" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.400816 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg29h\" (UniqueName: \"kubernetes.io/projected/066e5538-568a-4d24-a574-14c40f353fe3-kube-api-access-sg29h\") pod \"nova-cell0-db-create-nmz88\" (UID: \"066e5538-568a-4d24-a574-14c40f353fe3\") " pod="openstack/nova-cell0-db-create-nmz88" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.401799 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8da5547-d767-4dc6-96bd-985b44f4743d-operator-scripts\") pod \"nova-api-db-create-s4sks\" (UID: \"d8da5547-d767-4dc6-96bd-985b44f4743d\") " pod="openstack/nova-api-db-create-s4sks" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.414743 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-sdmv5"] Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.415919 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-sdmv5" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.424680 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96nf6\" (UniqueName: \"kubernetes.io/projected/d8da5547-d767-4dc6-96bd-985b44f4743d-kube-api-access-96nf6\") pod \"nova-api-db-create-s4sks\" (UID: \"d8da5547-d767-4dc6-96bd-985b44f4743d\") " pod="openstack/nova-api-db-create-s4sks" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.443338 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-sdmv5"] Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.502052 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/319f3f29-51d6-4821-a56b-9178f703d9a3-operator-scripts\") pod \"nova-api-be2d-account-create-update-kwqfd\" (UID: \"319f3f29-51d6-4821-a56b-9178f703d9a3\") " pod="openstack/nova-api-be2d-account-create-update-kwqfd" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.502115 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg29h\" (UniqueName: \"kubernetes.io/projected/066e5538-568a-4d24-a574-14c40f353fe3-kube-api-access-sg29h\") pod \"nova-cell0-db-create-nmz88\" (UID: \"066e5538-568a-4d24-a574-14c40f353fe3\") " pod="openstack/nova-cell0-db-create-nmz88" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.502143 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/066e5538-568a-4d24-a574-14c40f353fe3-operator-scripts\") pod \"nova-cell0-db-create-nmz88\" (UID: \"066e5538-568a-4d24-a574-14c40f353fe3\") " pod="openstack/nova-cell0-db-create-nmz88" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.502183 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ba56186-f1c3-4c15-ab42-ae984dd43507-operator-scripts\") pod \"nova-cell1-db-create-sdmv5\" (UID: \"2ba56186-f1c3-4c15-ab42-ae984dd43507\") " pod="openstack/nova-cell1-db-create-sdmv5" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.502217 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqbz2\" (UniqueName: \"kubernetes.io/projected/319f3f29-51d6-4821-a56b-9178f703d9a3-kube-api-access-lqbz2\") pod \"nova-api-be2d-account-create-update-kwqfd\" (UID: \"319f3f29-51d6-4821-a56b-9178f703d9a3\") " pod="openstack/nova-api-be2d-account-create-update-kwqfd" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.502237 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgz7w\" (UniqueName: \"kubernetes.io/projected/2ba56186-f1c3-4c15-ab42-ae984dd43507-kube-api-access-xgz7w\") pod \"nova-cell1-db-create-sdmv5\" (UID: \"2ba56186-f1c3-4c15-ab42-ae984dd43507\") " pod="openstack/nova-cell1-db-create-sdmv5" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.503041 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/319f3f29-51d6-4821-a56b-9178f703d9a3-operator-scripts\") pod \"nova-api-be2d-account-create-update-kwqfd\" (UID: \"319f3f29-51d6-4821-a56b-9178f703d9a3\") " pod="openstack/nova-api-be2d-account-create-update-kwqfd" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.503361 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/066e5538-568a-4d24-a574-14c40f353fe3-operator-scripts\") pod \"nova-cell0-db-create-nmz88\" (UID: \"066e5538-568a-4d24-a574-14c40f353fe3\") " pod="openstack/nova-cell0-db-create-nmz88" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.510763 4826 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell0-f0a9-account-create-update-xmpzq"] Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.512044 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-f0a9-account-create-update-xmpzq" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.514823 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.520034 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-f0a9-account-create-update-xmpzq"] Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.527525 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqbz2\" (UniqueName: \"kubernetes.io/projected/319f3f29-51d6-4821-a56b-9178f703d9a3-kube-api-access-lqbz2\") pod \"nova-api-be2d-account-create-update-kwqfd\" (UID: \"319f3f29-51d6-4821-a56b-9178f703d9a3\") " pod="openstack/nova-api-be2d-account-create-update-kwqfd" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.530972 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg29h\" (UniqueName: \"kubernetes.io/projected/066e5538-568a-4d24-a574-14c40f353fe3-kube-api-access-sg29h\") pod \"nova-cell0-db-create-nmz88\" (UID: \"066e5538-568a-4d24-a574-14c40f353fe3\") " pod="openstack/nova-cell0-db-create-nmz88" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.590201 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-s4sks" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.604781 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a408905f-ba5e-4280-bf59-6e820f34a8cb-operator-scripts\") pod \"nova-cell0-f0a9-account-create-update-xmpzq\" (UID: \"a408905f-ba5e-4280-bf59-6e820f34a8cb\") " pod="openstack/nova-cell0-f0a9-account-create-update-xmpzq" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.604859 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ba56186-f1c3-4c15-ab42-ae984dd43507-operator-scripts\") pod \"nova-cell1-db-create-sdmv5\" (UID: \"2ba56186-f1c3-4c15-ab42-ae984dd43507\") " pod="openstack/nova-cell1-db-create-sdmv5" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.604885 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dxhd\" (UniqueName: \"kubernetes.io/projected/a408905f-ba5e-4280-bf59-6e820f34a8cb-kube-api-access-4dxhd\") pod \"nova-cell0-f0a9-account-create-update-xmpzq\" (UID: \"a408905f-ba5e-4280-bf59-6e820f34a8cb\") " pod="openstack/nova-cell0-f0a9-account-create-update-xmpzq" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.604954 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgz7w\" (UniqueName: \"kubernetes.io/projected/2ba56186-f1c3-4c15-ab42-ae984dd43507-kube-api-access-xgz7w\") pod \"nova-cell1-db-create-sdmv5\" (UID: \"2ba56186-f1c3-4c15-ab42-ae984dd43507\") " pod="openstack/nova-cell1-db-create-sdmv5" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.606140 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2ba56186-f1c3-4c15-ab42-ae984dd43507-operator-scripts\") pod \"nova-cell1-db-create-sdmv5\" (UID: \"2ba56186-f1c3-4c15-ab42-ae984dd43507\") " pod="openstack/nova-cell1-db-create-sdmv5" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.620679 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgz7w\" (UniqueName: \"kubernetes.io/projected/2ba56186-f1c3-4c15-ab42-ae984dd43507-kube-api-access-xgz7w\") pod \"nova-cell1-db-create-sdmv5\" (UID: \"2ba56186-f1c3-4c15-ab42-ae984dd43507\") " pod="openstack/nova-cell1-db-create-sdmv5" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.638750 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-nmz88" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.646880 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-be2d-account-create-update-kwqfd" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.730028 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a408905f-ba5e-4280-bf59-6e820f34a8cb-operator-scripts\") pod \"nova-cell0-f0a9-account-create-update-xmpzq\" (UID: \"a408905f-ba5e-4280-bf59-6e820f34a8cb\") " pod="openstack/nova-cell0-f0a9-account-create-update-xmpzq" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.730358 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dxhd\" (UniqueName: \"kubernetes.io/projected/a408905f-ba5e-4280-bf59-6e820f34a8cb-kube-api-access-4dxhd\") pod \"nova-cell0-f0a9-account-create-update-xmpzq\" (UID: \"a408905f-ba5e-4280-bf59-6e820f34a8cb\") " pod="openstack/nova-cell0-f0a9-account-create-update-xmpzq" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.731440 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/a408905f-ba5e-4280-bf59-6e820f34a8cb-operator-scripts\") pod \"nova-cell0-f0a9-account-create-update-xmpzq\" (UID: \"a408905f-ba5e-4280-bf59-6e820f34a8cb\") " pod="openstack/nova-cell0-f0a9-account-create-update-xmpzq" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.743711 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-16c1-account-create-update-vwfhw"] Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.746434 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-16c1-account-create-update-vwfhw" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.750094 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.756528 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-16c1-account-create-update-vwfhw"] Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.762480 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dxhd\" (UniqueName: \"kubernetes.io/projected/a408905f-ba5e-4280-bf59-6e820f34a8cb-kube-api-access-4dxhd\") pod \"nova-cell0-f0a9-account-create-update-xmpzq\" (UID: \"a408905f-ba5e-4280-bf59-6e820f34a8cb\") " pod="openstack/nova-cell0-f0a9-account-create-update-xmpzq" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.768433 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-sdmv5" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.832197 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fc0f8f9-c400-47f2-91b9-9a77da288710-operator-scripts\") pod \"nova-cell1-16c1-account-create-update-vwfhw\" (UID: \"3fc0f8f9-c400-47f2-91b9-9a77da288710\") " pod="openstack/nova-cell1-16c1-account-create-update-vwfhw" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.832265 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppnkr\" (UniqueName: \"kubernetes.io/projected/3fc0f8f9-c400-47f2-91b9-9a77da288710-kube-api-access-ppnkr\") pod \"nova-cell1-16c1-account-create-update-vwfhw\" (UID: \"3fc0f8f9-c400-47f2-91b9-9a77da288710\") " pod="openstack/nova-cell1-16c1-account-create-update-vwfhw" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.906636 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-f0a9-account-create-update-xmpzq" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.933508 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fc0f8f9-c400-47f2-91b9-9a77da288710-operator-scripts\") pod \"nova-cell1-16c1-account-create-update-vwfhw\" (UID: \"3fc0f8f9-c400-47f2-91b9-9a77da288710\") " pod="openstack/nova-cell1-16c1-account-create-update-vwfhw" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.933581 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppnkr\" (UniqueName: \"kubernetes.io/projected/3fc0f8f9-c400-47f2-91b9-9a77da288710-kube-api-access-ppnkr\") pod \"nova-cell1-16c1-account-create-update-vwfhw\" (UID: \"3fc0f8f9-c400-47f2-91b9-9a77da288710\") " pod="openstack/nova-cell1-16c1-account-create-update-vwfhw" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.934420 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fc0f8f9-c400-47f2-91b9-9a77da288710-operator-scripts\") pod \"nova-cell1-16c1-account-create-update-vwfhw\" (UID: \"3fc0f8f9-c400-47f2-91b9-9a77da288710\") " pod="openstack/nova-cell1-16c1-account-create-update-vwfhw" Jan 29 08:17:17 crc kubenswrapper[4826]: I0129 08:17:17.955824 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppnkr\" (UniqueName: \"kubernetes.io/projected/3fc0f8f9-c400-47f2-91b9-9a77da288710-kube-api-access-ppnkr\") pod \"nova-cell1-16c1-account-create-update-vwfhw\" (UID: \"3fc0f8f9-c400-47f2-91b9-9a77da288710\") " pod="openstack/nova-cell1-16c1-account-create-update-vwfhw" Jan 29 08:17:18 crc kubenswrapper[4826]: I0129 08:17:18.070106 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-16c1-account-create-update-vwfhw" Jan 29 08:17:18 crc kubenswrapper[4826]: I0129 08:17:18.188004 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-s4sks"] Jan 29 08:17:18 crc kubenswrapper[4826]: I0129 08:17:18.263827 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-nmz88"] Jan 29 08:17:18 crc kubenswrapper[4826]: W0129 08:17:18.273494 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod066e5538_568a_4d24_a574_14c40f353fe3.slice/crio-cbc521faf6a799ceea38aaace166b27a71116da58833ec86fae42eeae8305bc3 WatchSource:0}: Error finding container cbc521faf6a799ceea38aaace166b27a71116da58833ec86fae42eeae8305bc3: Status 404 returned error can't find the container with id cbc521faf6a799ceea38aaace166b27a71116da58833ec86fae42eeae8305bc3 Jan 29 08:17:18 crc kubenswrapper[4826]: W0129 08:17:18.273863 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod319f3f29_51d6_4821_a56b_9178f703d9a3.slice/crio-48917d66cd25b7e5f5ce8d29fafabd340294d039467e2adc8de90f4d8f3a864b WatchSource:0}: Error finding container 48917d66cd25b7e5f5ce8d29fafabd340294d039467e2adc8de90f4d8f3a864b: Status 404 returned error can't find the container with id 48917d66cd25b7e5f5ce8d29fafabd340294d039467e2adc8de90f4d8f3a864b Jan 29 08:17:18 crc kubenswrapper[4826]: I0129 08:17:18.275975 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-be2d-account-create-update-kwqfd"] Jan 29 08:17:18 crc kubenswrapper[4826]: I0129 08:17:18.404437 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-sdmv5"] Jan 29 08:17:18 crc kubenswrapper[4826]: I0129 08:17:18.433439 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell0-f0a9-account-create-update-xmpzq"] Jan 29 08:17:18 crc kubenswrapper[4826]: I0129 08:17:18.512188 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f0a9-account-create-update-xmpzq" event={"ID":"a408905f-ba5e-4280-bf59-6e820f34a8cb","Type":"ContainerStarted","Data":"845f44f6be9c12753ddaa20a8869945678d91bb476ef2feb513080367eae535a"} Jan 29 08:17:18 crc kubenswrapper[4826]: I0129 08:17:18.514016 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-sdmv5" event={"ID":"2ba56186-f1c3-4c15-ab42-ae984dd43507","Type":"ContainerStarted","Data":"e7ea8c45d4226ef1009c4cfd8996fecbe4f3c80fe7a5dafb72bff96798e671b9"} Jan 29 08:17:18 crc kubenswrapper[4826]: I0129 08:17:18.516912 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-s4sks" event={"ID":"d8da5547-d767-4dc6-96bd-985b44f4743d","Type":"ContainerStarted","Data":"b3b41496700504b943888ffc84552c3d32e031f8fd6fafacb78af02f6a259a5a"} Jan 29 08:17:18 crc kubenswrapper[4826]: I0129 08:17:18.518881 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-nmz88" event={"ID":"066e5538-568a-4d24-a574-14c40f353fe3","Type":"ContainerStarted","Data":"cbc521faf6a799ceea38aaace166b27a71116da58833ec86fae42eeae8305bc3"} Jan 29 08:17:18 crc kubenswrapper[4826]: I0129 08:17:18.520198 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-be2d-account-create-update-kwqfd" event={"ID":"319f3f29-51d6-4821-a56b-9178f703d9a3","Type":"ContainerStarted","Data":"48917d66cd25b7e5f5ce8d29fafabd340294d039467e2adc8de90f4d8f3a864b"} Jan 29 08:17:18 crc kubenswrapper[4826]: I0129 08:17:18.648090 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-16c1-account-create-update-vwfhw"] Jan 29 08:17:18 crc kubenswrapper[4826]: W0129 08:17:18.661834 4826 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fc0f8f9_c400_47f2_91b9_9a77da288710.slice/crio-8bccfe22657c59ac5fe90f59c838eae6544a6acf8533be77b23331219cab913c WatchSource:0}: Error finding container 8bccfe22657c59ac5fe90f59c838eae6544a6acf8533be77b23331219cab913c: Status 404 returned error can't find the container with id 8bccfe22657c59ac5fe90f59c838eae6544a6acf8533be77b23331219cab913c Jan 29 08:17:19 crc kubenswrapper[4826]: I0129 08:17:19.538722 4826 generic.go:334] "Generic (PLEG): container finished" podID="a408905f-ba5e-4280-bf59-6e820f34a8cb" containerID="e4f9d0f86ec6defef588b5c84d44ba9afce5bcecf9ea76dbb9be45ff0b723cfb" exitCode=0 Jan 29 08:17:19 crc kubenswrapper[4826]: I0129 08:17:19.538967 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f0a9-account-create-update-xmpzq" event={"ID":"a408905f-ba5e-4280-bf59-6e820f34a8cb","Type":"ContainerDied","Data":"e4f9d0f86ec6defef588b5c84d44ba9afce5bcecf9ea76dbb9be45ff0b723cfb"} Jan 29 08:17:19 crc kubenswrapper[4826]: I0129 08:17:19.542698 4826 generic.go:334] "Generic (PLEG): container finished" podID="2ba56186-f1c3-4c15-ab42-ae984dd43507" containerID="ba5249bfe67f8817ad367ba80f43570a10bbdc3502bc4bb4fce5e43abc79165a" exitCode=0 Jan 29 08:17:19 crc kubenswrapper[4826]: I0129 08:17:19.542938 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-sdmv5" event={"ID":"2ba56186-f1c3-4c15-ab42-ae984dd43507","Type":"ContainerDied","Data":"ba5249bfe67f8817ad367ba80f43570a10bbdc3502bc4bb4fce5e43abc79165a"} Jan 29 08:17:19 crc kubenswrapper[4826]: I0129 08:17:19.545570 4826 generic.go:334] "Generic (PLEG): container finished" podID="d8da5547-d767-4dc6-96bd-985b44f4743d" containerID="980d041bb735f9039019e759638d426de8239aa7447a01ccf767cf2a5058daa8" exitCode=0 Jan 29 08:17:19 crc kubenswrapper[4826]: I0129 08:17:19.545666 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-s4sks" 
event={"ID":"d8da5547-d767-4dc6-96bd-985b44f4743d","Type":"ContainerDied","Data":"980d041bb735f9039019e759638d426de8239aa7447a01ccf767cf2a5058daa8"} Jan 29 08:17:19 crc kubenswrapper[4826]: I0129 08:17:19.548190 4826 generic.go:334] "Generic (PLEG): container finished" podID="066e5538-568a-4d24-a574-14c40f353fe3" containerID="67f9c55a291f27069cc16fe5085407ab17430634982c82d41d4a8a98b32a3ee3" exitCode=0 Jan 29 08:17:19 crc kubenswrapper[4826]: I0129 08:17:19.548267 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-nmz88" event={"ID":"066e5538-568a-4d24-a574-14c40f353fe3","Type":"ContainerDied","Data":"67f9c55a291f27069cc16fe5085407ab17430634982c82d41d4a8a98b32a3ee3"} Jan 29 08:17:19 crc kubenswrapper[4826]: I0129 08:17:19.552883 4826 generic.go:334] "Generic (PLEG): container finished" podID="319f3f29-51d6-4821-a56b-9178f703d9a3" containerID="fb752a604a67acd118576aa608891f3d94978c22b0300712d4025c7496fcd2df" exitCode=0 Jan 29 08:17:19 crc kubenswrapper[4826]: I0129 08:17:19.552987 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-be2d-account-create-update-kwqfd" event={"ID":"319f3f29-51d6-4821-a56b-9178f703d9a3","Type":"ContainerDied","Data":"fb752a604a67acd118576aa608891f3d94978c22b0300712d4025c7496fcd2df"} Jan 29 08:17:19 crc kubenswrapper[4826]: I0129 08:17:19.558677 4826 generic.go:334] "Generic (PLEG): container finished" podID="3fc0f8f9-c400-47f2-91b9-9a77da288710" containerID="77df29119a7b651a069843831b09b69d6d4e4beb8400878aff1f47cedda2f148" exitCode=0 Jan 29 08:17:19 crc kubenswrapper[4826]: I0129 08:17:19.558773 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-16c1-account-create-update-vwfhw" event={"ID":"3fc0f8f9-c400-47f2-91b9-9a77da288710","Type":"ContainerDied","Data":"77df29119a7b651a069843831b09b69d6d4e4beb8400878aff1f47cedda2f148"} Jan 29 08:17:19 crc kubenswrapper[4826]: I0129 08:17:19.558891 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-16c1-account-create-update-vwfhw" event={"ID":"3fc0f8f9-c400-47f2-91b9-9a77da288710","Type":"ContainerStarted","Data":"8bccfe22657c59ac5fe90f59c838eae6544a6acf8533be77b23331219cab913c"} Jan 29 08:17:20 crc kubenswrapper[4826]: I0129 08:17:20.974694 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-be2d-account-create-update-kwqfd" Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.111087 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqbz2\" (UniqueName: \"kubernetes.io/projected/319f3f29-51d6-4821-a56b-9178f703d9a3-kube-api-access-lqbz2\") pod \"319f3f29-51d6-4821-a56b-9178f703d9a3\" (UID: \"319f3f29-51d6-4821-a56b-9178f703d9a3\") " Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.111238 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/319f3f29-51d6-4821-a56b-9178f703d9a3-operator-scripts\") pod \"319f3f29-51d6-4821-a56b-9178f703d9a3\" (UID: \"319f3f29-51d6-4821-a56b-9178f703d9a3\") " Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.112208 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/319f3f29-51d6-4821-a56b-9178f703d9a3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "319f3f29-51d6-4821-a56b-9178f703d9a3" (UID: "319f3f29-51d6-4821-a56b-9178f703d9a3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.121589 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/319f3f29-51d6-4821-a56b-9178f703d9a3-kube-api-access-lqbz2" (OuterVolumeSpecName: "kube-api-access-lqbz2") pod "319f3f29-51d6-4821-a56b-9178f703d9a3" (UID: "319f3f29-51d6-4821-a56b-9178f703d9a3"). InnerVolumeSpecName "kube-api-access-lqbz2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.207488 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-f0a9-account-create-update-xmpzq" Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.212948 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqbz2\" (UniqueName: \"kubernetes.io/projected/319f3f29-51d6-4821-a56b-9178f703d9a3-kube-api-access-lqbz2\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.212976 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/319f3f29-51d6-4821-a56b-9178f703d9a3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.214466 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-nmz88" Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.223097 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-sdmv5" Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.242578 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-s4sks" Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.250415 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-16c1-account-create-update-vwfhw" Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.313814 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgz7w\" (UniqueName: \"kubernetes.io/projected/2ba56186-f1c3-4c15-ab42-ae984dd43507-kube-api-access-xgz7w\") pod \"2ba56186-f1c3-4c15-ab42-ae984dd43507\" (UID: \"2ba56186-f1c3-4c15-ab42-ae984dd43507\") " Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.314228 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96nf6\" (UniqueName: \"kubernetes.io/projected/d8da5547-d767-4dc6-96bd-985b44f4743d-kube-api-access-96nf6\") pod \"d8da5547-d767-4dc6-96bd-985b44f4743d\" (UID: \"d8da5547-d767-4dc6-96bd-985b44f4743d\") " Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.314344 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/066e5538-568a-4d24-a574-14c40f353fe3-operator-scripts\") pod \"066e5538-568a-4d24-a574-14c40f353fe3\" (UID: \"066e5538-568a-4d24-a574-14c40f353fe3\") " Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.314369 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fc0f8f9-c400-47f2-91b9-9a77da288710-operator-scripts\") pod \"3fc0f8f9-c400-47f2-91b9-9a77da288710\" (UID: \"3fc0f8f9-c400-47f2-91b9-9a77da288710\") " Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.314410 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a408905f-ba5e-4280-bf59-6e820f34a8cb-operator-scripts\") pod \"a408905f-ba5e-4280-bf59-6e820f34a8cb\" (UID: \"a408905f-ba5e-4280-bf59-6e820f34a8cb\") " Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.314444 4826 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-ppnkr\" (UniqueName: \"kubernetes.io/projected/3fc0f8f9-c400-47f2-91b9-9a77da288710-kube-api-access-ppnkr\") pod \"3fc0f8f9-c400-47f2-91b9-9a77da288710\" (UID: \"3fc0f8f9-c400-47f2-91b9-9a77da288710\") " Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.314487 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ba56186-f1c3-4c15-ab42-ae984dd43507-operator-scripts\") pod \"2ba56186-f1c3-4c15-ab42-ae984dd43507\" (UID: \"2ba56186-f1c3-4c15-ab42-ae984dd43507\") " Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.314516 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sg29h\" (UniqueName: \"kubernetes.io/projected/066e5538-568a-4d24-a574-14c40f353fe3-kube-api-access-sg29h\") pod \"066e5538-568a-4d24-a574-14c40f353fe3\" (UID: \"066e5538-568a-4d24-a574-14c40f353fe3\") " Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.314537 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dxhd\" (UniqueName: \"kubernetes.io/projected/a408905f-ba5e-4280-bf59-6e820f34a8cb-kube-api-access-4dxhd\") pod \"a408905f-ba5e-4280-bf59-6e820f34a8cb\" (UID: \"a408905f-ba5e-4280-bf59-6e820f34a8cb\") " Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.314625 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8da5547-d767-4dc6-96bd-985b44f4743d-operator-scripts\") pod \"d8da5547-d767-4dc6-96bd-985b44f4743d\" (UID: \"d8da5547-d767-4dc6-96bd-985b44f4743d\") " Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.315388 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a408905f-ba5e-4280-bf59-6e820f34a8cb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"a408905f-ba5e-4280-bf59-6e820f34a8cb" (UID: "a408905f-ba5e-4280-bf59-6e820f34a8cb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.315478 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8da5547-d767-4dc6-96bd-985b44f4743d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d8da5547-d767-4dc6-96bd-985b44f4743d" (UID: "d8da5547-d767-4dc6-96bd-985b44f4743d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.315467 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/066e5538-568a-4d24-a574-14c40f353fe3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "066e5538-568a-4d24-a574-14c40f353fe3" (UID: "066e5538-568a-4d24-a574-14c40f353fe3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.315599 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ba56186-f1c3-4c15-ab42-ae984dd43507-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2ba56186-f1c3-4c15-ab42-ae984dd43507" (UID: "2ba56186-f1c3-4c15-ab42-ae984dd43507"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.316027 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fc0f8f9-c400-47f2-91b9-9a77da288710-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3fc0f8f9-c400-47f2-91b9-9a77da288710" (UID: "3fc0f8f9-c400-47f2-91b9-9a77da288710"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.318155 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a408905f-ba5e-4280-bf59-6e820f34a8cb-kube-api-access-4dxhd" (OuterVolumeSpecName: "kube-api-access-4dxhd") pod "a408905f-ba5e-4280-bf59-6e820f34a8cb" (UID: "a408905f-ba5e-4280-bf59-6e820f34a8cb"). InnerVolumeSpecName "kube-api-access-4dxhd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.319714 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ba56186-f1c3-4c15-ab42-ae984dd43507-kube-api-access-xgz7w" (OuterVolumeSpecName: "kube-api-access-xgz7w") pod "2ba56186-f1c3-4c15-ab42-ae984dd43507" (UID: "2ba56186-f1c3-4c15-ab42-ae984dd43507"). InnerVolumeSpecName "kube-api-access-xgz7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.319763 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8da5547-d767-4dc6-96bd-985b44f4743d-kube-api-access-96nf6" (OuterVolumeSpecName: "kube-api-access-96nf6") pod "d8da5547-d767-4dc6-96bd-985b44f4743d" (UID: "d8da5547-d767-4dc6-96bd-985b44f4743d"). InnerVolumeSpecName "kube-api-access-96nf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.319788 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fc0f8f9-c400-47f2-91b9-9a77da288710-kube-api-access-ppnkr" (OuterVolumeSpecName: "kube-api-access-ppnkr") pod "3fc0f8f9-c400-47f2-91b9-9a77da288710" (UID: "3fc0f8f9-c400-47f2-91b9-9a77da288710"). InnerVolumeSpecName "kube-api-access-ppnkr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.320485 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/066e5538-568a-4d24-a574-14c40f353fe3-kube-api-access-sg29h" (OuterVolumeSpecName: "kube-api-access-sg29h") pod "066e5538-568a-4d24-a574-14c40f353fe3" (UID: "066e5538-568a-4d24-a574-14c40f353fe3"). InnerVolumeSpecName "kube-api-access-sg29h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.417457 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8da5547-d767-4dc6-96bd-985b44f4743d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.417744 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgz7w\" (UniqueName: \"kubernetes.io/projected/2ba56186-f1c3-4c15-ab42-ae984dd43507-kube-api-access-xgz7w\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.417862 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96nf6\" (UniqueName: \"kubernetes.io/projected/d8da5547-d767-4dc6-96bd-985b44f4743d-kube-api-access-96nf6\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.418006 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/066e5538-568a-4d24-a574-14c40f353fe3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.418117 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fc0f8f9-c400-47f2-91b9-9a77da288710-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.418223 4826 reconciler_common.go:293] 
"Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a408905f-ba5e-4280-bf59-6e820f34a8cb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.418366 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppnkr\" (UniqueName: \"kubernetes.io/projected/3fc0f8f9-c400-47f2-91b9-9a77da288710-kube-api-access-ppnkr\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.418482 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ba56186-f1c3-4c15-ab42-ae984dd43507-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.418748 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sg29h\" (UniqueName: \"kubernetes.io/projected/066e5538-568a-4d24-a574-14c40f353fe3-kube-api-access-sg29h\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.418871 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dxhd\" (UniqueName: \"kubernetes.io/projected/a408905f-ba5e-4280-bf59-6e820f34a8cb-kube-api-access-4dxhd\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.587633 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-16c1-account-create-update-vwfhw" event={"ID":"3fc0f8f9-c400-47f2-91b9-9a77da288710","Type":"ContainerDied","Data":"8bccfe22657c59ac5fe90f59c838eae6544a6acf8533be77b23331219cab913c"} Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.588222 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bccfe22657c59ac5fe90f59c838eae6544a6acf8533be77b23331219cab913c" Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.587678 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-16c1-account-create-update-vwfhw" Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.590881 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f0a9-account-create-update-xmpzq" event={"ID":"a408905f-ba5e-4280-bf59-6e820f34a8cb","Type":"ContainerDied","Data":"845f44f6be9c12753ddaa20a8869945678d91bb476ef2feb513080367eae535a"} Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.591435 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="845f44f6be9c12753ddaa20a8869945678d91bb476ef2feb513080367eae535a" Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.590976 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-f0a9-account-create-update-xmpzq" Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.593606 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-sdmv5" Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.593617 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-sdmv5" event={"ID":"2ba56186-f1c3-4c15-ab42-ae984dd43507","Type":"ContainerDied","Data":"e7ea8c45d4226ef1009c4cfd8996fecbe4f3c80fe7a5dafb72bff96798e671b9"} Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.593777 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7ea8c45d4226ef1009c4cfd8996fecbe4f3c80fe7a5dafb72bff96798e671b9" Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.596446 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-s4sks" Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.596468 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-s4sks" event={"ID":"d8da5547-d767-4dc6-96bd-985b44f4743d","Type":"ContainerDied","Data":"b3b41496700504b943888ffc84552c3d32e031f8fd6fafacb78af02f6a259a5a"} Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.596515 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3b41496700504b943888ffc84552c3d32e031f8fd6fafacb78af02f6a259a5a" Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.599678 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-nmz88" event={"ID":"066e5538-568a-4d24-a574-14c40f353fe3","Type":"ContainerDied","Data":"cbc521faf6a799ceea38aaace166b27a71116da58833ec86fae42eeae8305bc3"} Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.600079 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbc521faf6a799ceea38aaace166b27a71116da58833ec86fae42eeae8305bc3" Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.599717 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-nmz88" Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.602863 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-be2d-account-create-update-kwqfd" event={"ID":"319f3f29-51d6-4821-a56b-9178f703d9a3","Type":"ContainerDied","Data":"48917d66cd25b7e5f5ce8d29fafabd340294d039467e2adc8de90f4d8f3a864b"} Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.603094 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48917d66cd25b7e5f5ce8d29fafabd340294d039467e2adc8de90f4d8f3a864b" Jan 29 08:17:21 crc kubenswrapper[4826]: I0129 08:17:21.603017 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-be2d-account-create-update-kwqfd" Jan 29 08:17:22 crc kubenswrapper[4826]: I0129 08:17:22.759249 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dp85l"] Jan 29 08:17:22 crc kubenswrapper[4826]: E0129 08:17:22.759776 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8da5547-d767-4dc6-96bd-985b44f4743d" containerName="mariadb-database-create" Jan 29 08:17:22 crc kubenswrapper[4826]: I0129 08:17:22.759799 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8da5547-d767-4dc6-96bd-985b44f4743d" containerName="mariadb-database-create" Jan 29 08:17:22 crc kubenswrapper[4826]: E0129 08:17:22.759818 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a408905f-ba5e-4280-bf59-6e820f34a8cb" containerName="mariadb-account-create-update" Jan 29 08:17:22 crc kubenswrapper[4826]: I0129 08:17:22.759829 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="a408905f-ba5e-4280-bf59-6e820f34a8cb" containerName="mariadb-account-create-update" Jan 29 08:17:22 crc kubenswrapper[4826]: E0129 08:17:22.759860 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="066e5538-568a-4d24-a574-14c40f353fe3" containerName="mariadb-database-create" Jan 29 08:17:22 crc kubenswrapper[4826]: I0129 08:17:22.759871 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="066e5538-568a-4d24-a574-14c40f353fe3" containerName="mariadb-database-create" Jan 29 08:17:22 crc kubenswrapper[4826]: E0129 08:17:22.759896 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fc0f8f9-c400-47f2-91b9-9a77da288710" containerName="mariadb-account-create-update" Jan 29 08:17:22 crc kubenswrapper[4826]: I0129 08:17:22.759907 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fc0f8f9-c400-47f2-91b9-9a77da288710" containerName="mariadb-account-create-update" Jan 29 08:17:22 crc kubenswrapper[4826]: E0129 08:17:22.759928 4826 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ba56186-f1c3-4c15-ab42-ae984dd43507" containerName="mariadb-database-create" Jan 29 08:17:22 crc kubenswrapper[4826]: I0129 08:17:22.759962 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ba56186-f1c3-4c15-ab42-ae984dd43507" containerName="mariadb-database-create" Jan 29 08:17:22 crc kubenswrapper[4826]: E0129 08:17:22.759995 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="319f3f29-51d6-4821-a56b-9178f703d9a3" containerName="mariadb-account-create-update" Jan 29 08:17:22 crc kubenswrapper[4826]: I0129 08:17:22.760009 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="319f3f29-51d6-4821-a56b-9178f703d9a3" containerName="mariadb-account-create-update" Jan 29 08:17:22 crc kubenswrapper[4826]: I0129 08:17:22.760289 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="319f3f29-51d6-4821-a56b-9178f703d9a3" containerName="mariadb-account-create-update" Jan 29 08:17:22 crc kubenswrapper[4826]: I0129 08:17:22.760340 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8da5547-d767-4dc6-96bd-985b44f4743d" containerName="mariadb-database-create" Jan 29 08:17:22 crc kubenswrapper[4826]: I0129 08:17:22.760357 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="a408905f-ba5e-4280-bf59-6e820f34a8cb" containerName="mariadb-account-create-update" Jan 29 08:17:22 crc kubenswrapper[4826]: I0129 08:17:22.760370 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fc0f8f9-c400-47f2-91b9-9a77da288710" containerName="mariadb-account-create-update" Jan 29 08:17:22 crc kubenswrapper[4826]: I0129 08:17:22.760387 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="066e5538-568a-4d24-a574-14c40f353fe3" containerName="mariadb-database-create" Jan 29 08:17:22 crc kubenswrapper[4826]: I0129 08:17:22.760412 4826 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2ba56186-f1c3-4c15-ab42-ae984dd43507" containerName="mariadb-database-create" Jan 29 08:17:22 crc kubenswrapper[4826]: I0129 08:17:22.761264 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-dp85l" Jan 29 08:17:22 crc kubenswrapper[4826]: I0129 08:17:22.768245 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 29 08:17:22 crc kubenswrapper[4826]: I0129 08:17:22.781781 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 29 08:17:22 crc kubenswrapper[4826]: I0129 08:17:22.782429 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-fndzq" Jan 29 08:17:22 crc kubenswrapper[4826]: I0129 08:17:22.786864 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dp85l"] Jan 29 08:17:22 crc kubenswrapper[4826]: I0129 08:17:22.856107 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qpzf\" (UniqueName: \"kubernetes.io/projected/231c03c2-0a51-4052-9d72-3a5c1526181b-kube-api-access-4qpzf\") pod \"nova-cell0-conductor-db-sync-dp85l\" (UID: \"231c03c2-0a51-4052-9d72-3a5c1526181b\") " pod="openstack/nova-cell0-conductor-db-sync-dp85l" Jan 29 08:17:22 crc kubenswrapper[4826]: I0129 08:17:22.856490 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/231c03c2-0a51-4052-9d72-3a5c1526181b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-dp85l\" (UID: \"231c03c2-0a51-4052-9d72-3a5c1526181b\") " pod="openstack/nova-cell0-conductor-db-sync-dp85l" Jan 29 08:17:22 crc kubenswrapper[4826]: I0129 08:17:22.856597 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/231c03c2-0a51-4052-9d72-3a5c1526181b-scripts\") pod \"nova-cell0-conductor-db-sync-dp85l\" (UID: \"231c03c2-0a51-4052-9d72-3a5c1526181b\") " pod="openstack/nova-cell0-conductor-db-sync-dp85l" Jan 29 08:17:22 crc kubenswrapper[4826]: I0129 08:17:22.856690 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/231c03c2-0a51-4052-9d72-3a5c1526181b-config-data\") pod \"nova-cell0-conductor-db-sync-dp85l\" (UID: \"231c03c2-0a51-4052-9d72-3a5c1526181b\") " pod="openstack/nova-cell0-conductor-db-sync-dp85l" Jan 29 08:17:22 crc kubenswrapper[4826]: I0129 08:17:22.958524 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qpzf\" (UniqueName: \"kubernetes.io/projected/231c03c2-0a51-4052-9d72-3a5c1526181b-kube-api-access-4qpzf\") pod \"nova-cell0-conductor-db-sync-dp85l\" (UID: \"231c03c2-0a51-4052-9d72-3a5c1526181b\") " pod="openstack/nova-cell0-conductor-db-sync-dp85l" Jan 29 08:17:22 crc kubenswrapper[4826]: I0129 08:17:22.958624 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/231c03c2-0a51-4052-9d72-3a5c1526181b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-dp85l\" (UID: \"231c03c2-0a51-4052-9d72-3a5c1526181b\") " pod="openstack/nova-cell0-conductor-db-sync-dp85l" Jan 29 08:17:22 crc kubenswrapper[4826]: I0129 08:17:22.958664 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/231c03c2-0a51-4052-9d72-3a5c1526181b-scripts\") pod \"nova-cell0-conductor-db-sync-dp85l\" (UID: \"231c03c2-0a51-4052-9d72-3a5c1526181b\") " pod="openstack/nova-cell0-conductor-db-sync-dp85l" Jan 29 08:17:22 crc kubenswrapper[4826]: I0129 08:17:22.958686 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/231c03c2-0a51-4052-9d72-3a5c1526181b-config-data\") pod \"nova-cell0-conductor-db-sync-dp85l\" (UID: \"231c03c2-0a51-4052-9d72-3a5c1526181b\") " pod="openstack/nova-cell0-conductor-db-sync-dp85l" Jan 29 08:17:22 crc kubenswrapper[4826]: I0129 08:17:22.968983 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/231c03c2-0a51-4052-9d72-3a5c1526181b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-dp85l\" (UID: \"231c03c2-0a51-4052-9d72-3a5c1526181b\") " pod="openstack/nova-cell0-conductor-db-sync-dp85l" Jan 29 08:17:22 crc kubenswrapper[4826]: I0129 08:17:22.969632 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/231c03c2-0a51-4052-9d72-3a5c1526181b-scripts\") pod \"nova-cell0-conductor-db-sync-dp85l\" (UID: \"231c03c2-0a51-4052-9d72-3a5c1526181b\") " pod="openstack/nova-cell0-conductor-db-sync-dp85l" Jan 29 08:17:22 crc kubenswrapper[4826]: I0129 08:17:22.979208 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/231c03c2-0a51-4052-9d72-3a5c1526181b-config-data\") pod \"nova-cell0-conductor-db-sync-dp85l\" (UID: \"231c03c2-0a51-4052-9d72-3a5c1526181b\") " pod="openstack/nova-cell0-conductor-db-sync-dp85l" Jan 29 08:17:22 crc kubenswrapper[4826]: I0129 08:17:22.996847 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qpzf\" (UniqueName: \"kubernetes.io/projected/231c03c2-0a51-4052-9d72-3a5c1526181b-kube-api-access-4qpzf\") pod \"nova-cell0-conductor-db-sync-dp85l\" (UID: \"231c03c2-0a51-4052-9d72-3a5c1526181b\") " pod="openstack/nova-cell0-conductor-db-sync-dp85l" Jan 29 08:17:23 crc kubenswrapper[4826]: I0129 08:17:23.096514 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-dp85l" Jan 29 08:17:23 crc kubenswrapper[4826]: I0129 08:17:23.594089 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dp85l"] Jan 29 08:17:23 crc kubenswrapper[4826]: W0129 08:17:23.595965 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod231c03c2_0a51_4052_9d72_3a5c1526181b.slice/crio-f352f99ebc3a73d472f19285bb241d89fd9d1bcfd7aefbc5f40aed5abc1712dc WatchSource:0}: Error finding container f352f99ebc3a73d472f19285bb241d89fd9d1bcfd7aefbc5f40aed5abc1712dc: Status 404 returned error can't find the container with id f352f99ebc3a73d472f19285bb241d89fd9d1bcfd7aefbc5f40aed5abc1712dc Jan 29 08:17:23 crc kubenswrapper[4826]: I0129 08:17:23.622615 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-dp85l" event={"ID":"231c03c2-0a51-4052-9d72-3a5c1526181b","Type":"ContainerStarted","Data":"f352f99ebc3a73d472f19285bb241d89fd9d1bcfd7aefbc5f40aed5abc1712dc"} Jan 29 08:17:33 crc kubenswrapper[4826]: I0129 08:17:33.786656 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-dp85l" event={"ID":"231c03c2-0a51-4052-9d72-3a5c1526181b","Type":"ContainerStarted","Data":"7c8e90e4b7f41076b58b56a42de98d31922925f762b84a4367852a3f63a6ae11"} Jan 29 08:17:33 crc kubenswrapper[4826]: I0129 08:17:33.834720 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-dp85l" podStartSLOduration=2.804384329 podStartE2EDuration="11.834696071s" podCreationTimestamp="2026-01-29 08:17:22 +0000 UTC" firstStartedPulling="2026-01-29 08:17:23.598361741 +0000 UTC m=+5627.460154810" lastFinishedPulling="2026-01-29 08:17:32.628673483 +0000 UTC m=+5636.490466552" observedRunningTime="2026-01-29 08:17:33.813599916 +0000 UTC m=+5637.675393005" 
watchObservedRunningTime="2026-01-29 08:17:33.834696071 +0000 UTC m=+5637.696489150" Jan 29 08:17:35 crc kubenswrapper[4826]: I0129 08:17:35.656393 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:17:35 crc kubenswrapper[4826]: I0129 08:17:35.656791 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:17:37 crc kubenswrapper[4826]: I0129 08:17:37.844381 4826 generic.go:334] "Generic (PLEG): container finished" podID="231c03c2-0a51-4052-9d72-3a5c1526181b" containerID="7c8e90e4b7f41076b58b56a42de98d31922925f762b84a4367852a3f63a6ae11" exitCode=0 Jan 29 08:17:37 crc kubenswrapper[4826]: I0129 08:17:37.844466 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-dp85l" event={"ID":"231c03c2-0a51-4052-9d72-3a5c1526181b","Type":"ContainerDied","Data":"7c8e90e4b7f41076b58b56a42de98d31922925f762b84a4367852a3f63a6ae11"} Jan 29 08:17:39 crc kubenswrapper[4826]: I0129 08:17:39.255824 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-dp85l" Jan 29 08:17:39 crc kubenswrapper[4826]: I0129 08:17:39.305383 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/231c03c2-0a51-4052-9d72-3a5c1526181b-config-data\") pod \"231c03c2-0a51-4052-9d72-3a5c1526181b\" (UID: \"231c03c2-0a51-4052-9d72-3a5c1526181b\") " Jan 29 08:17:39 crc kubenswrapper[4826]: I0129 08:17:39.305496 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qpzf\" (UniqueName: \"kubernetes.io/projected/231c03c2-0a51-4052-9d72-3a5c1526181b-kube-api-access-4qpzf\") pod \"231c03c2-0a51-4052-9d72-3a5c1526181b\" (UID: \"231c03c2-0a51-4052-9d72-3a5c1526181b\") " Jan 29 08:17:39 crc kubenswrapper[4826]: I0129 08:17:39.305565 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/231c03c2-0a51-4052-9d72-3a5c1526181b-combined-ca-bundle\") pod \"231c03c2-0a51-4052-9d72-3a5c1526181b\" (UID: \"231c03c2-0a51-4052-9d72-3a5c1526181b\") " Jan 29 08:17:39 crc kubenswrapper[4826]: I0129 08:17:39.305598 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/231c03c2-0a51-4052-9d72-3a5c1526181b-scripts\") pod \"231c03c2-0a51-4052-9d72-3a5c1526181b\" (UID: \"231c03c2-0a51-4052-9d72-3a5c1526181b\") " Jan 29 08:17:39 crc kubenswrapper[4826]: I0129 08:17:39.313189 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/231c03c2-0a51-4052-9d72-3a5c1526181b-kube-api-access-4qpzf" (OuterVolumeSpecName: "kube-api-access-4qpzf") pod "231c03c2-0a51-4052-9d72-3a5c1526181b" (UID: "231c03c2-0a51-4052-9d72-3a5c1526181b"). InnerVolumeSpecName "kube-api-access-4qpzf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:17:39 crc kubenswrapper[4826]: I0129 08:17:39.314246 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/231c03c2-0a51-4052-9d72-3a5c1526181b-scripts" (OuterVolumeSpecName: "scripts") pod "231c03c2-0a51-4052-9d72-3a5c1526181b" (UID: "231c03c2-0a51-4052-9d72-3a5c1526181b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:17:39 crc kubenswrapper[4826]: I0129 08:17:39.336050 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/231c03c2-0a51-4052-9d72-3a5c1526181b-config-data" (OuterVolumeSpecName: "config-data") pod "231c03c2-0a51-4052-9d72-3a5c1526181b" (UID: "231c03c2-0a51-4052-9d72-3a5c1526181b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:17:39 crc kubenswrapper[4826]: I0129 08:17:39.352345 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/231c03c2-0a51-4052-9d72-3a5c1526181b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "231c03c2-0a51-4052-9d72-3a5c1526181b" (UID: "231c03c2-0a51-4052-9d72-3a5c1526181b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:17:39 crc kubenswrapper[4826]: I0129 08:17:39.408856 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/231c03c2-0a51-4052-9d72-3a5c1526181b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:39 crc kubenswrapper[4826]: I0129 08:17:39.409236 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/231c03c2-0a51-4052-9d72-3a5c1526181b-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:39 crc kubenswrapper[4826]: I0129 08:17:39.409255 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/231c03c2-0a51-4052-9d72-3a5c1526181b-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:39 crc kubenswrapper[4826]: I0129 08:17:39.409276 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qpzf\" (UniqueName: \"kubernetes.io/projected/231c03c2-0a51-4052-9d72-3a5c1526181b-kube-api-access-4qpzf\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:39 crc kubenswrapper[4826]: I0129 08:17:39.866607 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-dp85l" event={"ID":"231c03c2-0a51-4052-9d72-3a5c1526181b","Type":"ContainerDied","Data":"f352f99ebc3a73d472f19285bb241d89fd9d1bcfd7aefbc5f40aed5abc1712dc"} Jan 29 08:17:39 crc kubenswrapper[4826]: I0129 08:17:39.866651 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f352f99ebc3a73d472f19285bb241d89fd9d1bcfd7aefbc5f40aed5abc1712dc" Jan 29 08:17:39 crc kubenswrapper[4826]: I0129 08:17:39.866765 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-dp85l" Jan 29 08:17:39 crc kubenswrapper[4826]: I0129 08:17:39.956904 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 08:17:39 crc kubenswrapper[4826]: E0129 08:17:39.957394 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="231c03c2-0a51-4052-9d72-3a5c1526181b" containerName="nova-cell0-conductor-db-sync" Jan 29 08:17:39 crc kubenswrapper[4826]: I0129 08:17:39.957416 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="231c03c2-0a51-4052-9d72-3a5c1526181b" containerName="nova-cell0-conductor-db-sync" Jan 29 08:17:39 crc kubenswrapper[4826]: I0129 08:17:39.957633 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="231c03c2-0a51-4052-9d72-3a5c1526181b" containerName="nova-cell0-conductor-db-sync" Jan 29 08:17:39 crc kubenswrapper[4826]: I0129 08:17:39.958412 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 08:17:39 crc kubenswrapper[4826]: I0129 08:17:39.960708 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-fndzq" Jan 29 08:17:39 crc kubenswrapper[4826]: I0129 08:17:39.961264 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 29 08:17:39 crc kubenswrapper[4826]: I0129 08:17:39.967493 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 08:17:40 crc kubenswrapper[4826]: I0129 08:17:40.018453 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45fab69a-7c08-45a2-99f4-1686d2e89f2c-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"45fab69a-7c08-45a2-99f4-1686d2e89f2c\") " pod="openstack/nova-cell0-conductor-0" Jan 29 08:17:40 crc kubenswrapper[4826]: 
I0129 08:17:40.018991 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45fab69a-7c08-45a2-99f4-1686d2e89f2c-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"45fab69a-7c08-45a2-99f4-1686d2e89f2c\") " pod="openstack/nova-cell0-conductor-0" Jan 29 08:17:40 crc kubenswrapper[4826]: I0129 08:17:40.019251 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbk24\" (UniqueName: \"kubernetes.io/projected/45fab69a-7c08-45a2-99f4-1686d2e89f2c-kube-api-access-hbk24\") pod \"nova-cell0-conductor-0\" (UID: \"45fab69a-7c08-45a2-99f4-1686d2e89f2c\") " pod="openstack/nova-cell0-conductor-0" Jan 29 08:17:40 crc kubenswrapper[4826]: I0129 08:17:40.120638 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbk24\" (UniqueName: \"kubernetes.io/projected/45fab69a-7c08-45a2-99f4-1686d2e89f2c-kube-api-access-hbk24\") pod \"nova-cell0-conductor-0\" (UID: \"45fab69a-7c08-45a2-99f4-1686d2e89f2c\") " pod="openstack/nova-cell0-conductor-0" Jan 29 08:17:40 crc kubenswrapper[4826]: I0129 08:17:40.120696 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45fab69a-7c08-45a2-99f4-1686d2e89f2c-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"45fab69a-7c08-45a2-99f4-1686d2e89f2c\") " pod="openstack/nova-cell0-conductor-0" Jan 29 08:17:40 crc kubenswrapper[4826]: I0129 08:17:40.120841 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45fab69a-7c08-45a2-99f4-1686d2e89f2c-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"45fab69a-7c08-45a2-99f4-1686d2e89f2c\") " pod="openstack/nova-cell0-conductor-0" Jan 29 08:17:40 crc kubenswrapper[4826]: I0129 08:17:40.125969 4826 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45fab69a-7c08-45a2-99f4-1686d2e89f2c-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"45fab69a-7c08-45a2-99f4-1686d2e89f2c\") " pod="openstack/nova-cell0-conductor-0" Jan 29 08:17:40 crc kubenswrapper[4826]: I0129 08:17:40.128240 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45fab69a-7c08-45a2-99f4-1686d2e89f2c-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"45fab69a-7c08-45a2-99f4-1686d2e89f2c\") " pod="openstack/nova-cell0-conductor-0" Jan 29 08:17:40 crc kubenswrapper[4826]: I0129 08:17:40.137247 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbk24\" (UniqueName: \"kubernetes.io/projected/45fab69a-7c08-45a2-99f4-1686d2e89f2c-kube-api-access-hbk24\") pod \"nova-cell0-conductor-0\" (UID: \"45fab69a-7c08-45a2-99f4-1686d2e89f2c\") " pod="openstack/nova-cell0-conductor-0" Jan 29 08:17:40 crc kubenswrapper[4826]: I0129 08:17:40.275793 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 08:17:40 crc kubenswrapper[4826]: I0129 08:17:40.600421 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 08:17:40 crc kubenswrapper[4826]: I0129 08:17:40.879012 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"45fab69a-7c08-45a2-99f4-1686d2e89f2c","Type":"ContainerStarted","Data":"8825ea0c41626b3259b0e36656ac85d2e4904b8fb977b0c21d16ad1eb32b7df0"} Jan 29 08:17:40 crc kubenswrapper[4826]: I0129 08:17:40.879562 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"45fab69a-7c08-45a2-99f4-1686d2e89f2c","Type":"ContainerStarted","Data":"e97cb34153edc5a75b66ce1c1e2f4bf2eb48adc1f2c5a796e90c956ee00be4ba"} Jan 29 08:17:40 crc kubenswrapper[4826]: I0129 08:17:40.879657 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 29 08:17:40 crc kubenswrapper[4826]: I0129 08:17:40.915010 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=1.914979674 podStartE2EDuration="1.914979674s" podCreationTimestamp="2026-01-29 08:17:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:17:40.896407425 +0000 UTC m=+5644.758200534" watchObservedRunningTime="2026-01-29 08:17:40.914979674 +0000 UTC m=+5644.776772783" Jan 29 08:17:45 crc kubenswrapper[4826]: I0129 08:17:45.318669 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 29 08:17:45 crc kubenswrapper[4826]: I0129 08:17:45.951389 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-7lbbq"] Jan 29 08:17:45 crc kubenswrapper[4826]: I0129 08:17:45.953473 4826 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7lbbq" Jan 29 08:17:45 crc kubenswrapper[4826]: I0129 08:17:45.964344 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-7lbbq"] Jan 29 08:17:45 crc kubenswrapper[4826]: I0129 08:17:45.983869 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 29 08:17:45 crc kubenswrapper[4826]: I0129 08:17:45.985025 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.083739 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c22f0edc-d56e-4aaa-b9f4-ec8394a3d880-scripts\") pod \"nova-cell0-cell-mapping-7lbbq\" (UID: \"c22f0edc-d56e-4aaa-b9f4-ec8394a3d880\") " pod="openstack/nova-cell0-cell-mapping-7lbbq" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.083822 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c22f0edc-d56e-4aaa-b9f4-ec8394a3d880-config-data\") pod \"nova-cell0-cell-mapping-7lbbq\" (UID: \"c22f0edc-d56e-4aaa-b9f4-ec8394a3d880\") " pod="openstack/nova-cell0-cell-mapping-7lbbq" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.083859 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c22f0edc-d56e-4aaa-b9f4-ec8394a3d880-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7lbbq\" (UID: \"c22f0edc-d56e-4aaa-b9f4-ec8394a3d880\") " pod="openstack/nova-cell0-cell-mapping-7lbbq" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.083898 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-89c7c\" (UniqueName: \"kubernetes.io/projected/c22f0edc-d56e-4aaa-b9f4-ec8394a3d880-kube-api-access-89c7c\") pod \"nova-cell0-cell-mapping-7lbbq\" (UID: \"c22f0edc-d56e-4aaa-b9f4-ec8394a3d880\") " pod="openstack/nova-cell0-cell-mapping-7lbbq" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.098482 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.099904 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.102913 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.117220 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.139665 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.141782 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.146777 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.157872 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.185882 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c22f0edc-d56e-4aaa-b9f4-ec8394a3d880-config-data\") pod \"nova-cell0-cell-mapping-7lbbq\" (UID: \"c22f0edc-d56e-4aaa-b9f4-ec8394a3d880\") " pod="openstack/nova-cell0-cell-mapping-7lbbq" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.185944 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c22f0edc-d56e-4aaa-b9f4-ec8394a3d880-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7lbbq\" (UID: \"c22f0edc-d56e-4aaa-b9f4-ec8394a3d880\") " pod="openstack/nova-cell0-cell-mapping-7lbbq" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.185988 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89c7c\" (UniqueName: \"kubernetes.io/projected/c22f0edc-d56e-4aaa-b9f4-ec8394a3d880-kube-api-access-89c7c\") pod \"nova-cell0-cell-mapping-7lbbq\" (UID: \"c22f0edc-d56e-4aaa-b9f4-ec8394a3d880\") " pod="openstack/nova-cell0-cell-mapping-7lbbq" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.186074 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c22f0edc-d56e-4aaa-b9f4-ec8394a3d880-scripts\") pod \"nova-cell0-cell-mapping-7lbbq\" (UID: \"c22f0edc-d56e-4aaa-b9f4-ec8394a3d880\") " pod="openstack/nova-cell0-cell-mapping-7lbbq" Jan 29 08:17:46 crc kubenswrapper[4826]: 
I0129 08:17:46.190960 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.194599 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.200177 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.201088 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.203874 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c22f0edc-d56e-4aaa-b9f4-ec8394a3d880-scripts\") pod \"nova-cell0-cell-mapping-7lbbq\" (UID: \"c22f0edc-d56e-4aaa-b9f4-ec8394a3d880\") " pod="openstack/nova-cell0-cell-mapping-7lbbq" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.207311 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c22f0edc-d56e-4aaa-b9f4-ec8394a3d880-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7lbbq\" (UID: \"c22f0edc-d56e-4aaa-b9f4-ec8394a3d880\") " pod="openstack/nova-cell0-cell-mapping-7lbbq" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.218663 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c22f0edc-d56e-4aaa-b9f4-ec8394a3d880-config-data\") pod \"nova-cell0-cell-mapping-7lbbq\" (UID: \"c22f0edc-d56e-4aaa-b9f4-ec8394a3d880\") " pod="openstack/nova-cell0-cell-mapping-7lbbq" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.227613 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89c7c\" (UniqueName: \"kubernetes.io/projected/c22f0edc-d56e-4aaa-b9f4-ec8394a3d880-kube-api-access-89c7c\") pod 
\"nova-cell0-cell-mapping-7lbbq\" (UID: \"c22f0edc-d56e-4aaa-b9f4-ec8394a3d880\") " pod="openstack/nova-cell0-cell-mapping-7lbbq" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.287408 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.289104 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.294568 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.300893 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83093822-df53-4202-be80-6213a6d2885c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"83093822-df53-4202-be80-6213a6d2885c\") " pod="openstack/nova-scheduler-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.300947 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4dh8\" (UniqueName: \"kubernetes.io/projected/4dd0c0ba-5266-4691-84af-b801d0ae00b3-kube-api-access-m4dh8\") pod \"nova-cell1-novncproxy-0\" (UID: \"4dd0c0ba-5266-4691-84af-b801d0ae00b3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.301082 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83093822-df53-4202-be80-6213a6d2885c-config-data\") pod \"nova-scheduler-0\" (UID: \"83093822-df53-4202-be80-6213a6d2885c\") " pod="openstack/nova-scheduler-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.301101 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1f3e8c12-ff7d-4915-975f-90977aa7dbeb-config-data\") pod \"nova-api-0\" (UID: \"1f3e8c12-ff7d-4915-975f-90977aa7dbeb\") " pod="openstack/nova-api-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.301129 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd0c0ba-5266-4691-84af-b801d0ae00b3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4dd0c0ba-5266-4691-84af-b801d0ae00b3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.301172 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dd0c0ba-5266-4691-84af-b801d0ae00b3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4dd0c0ba-5266-4691-84af-b801d0ae00b3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.301225 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f3e8c12-ff7d-4915-975f-90977aa7dbeb-logs\") pod \"nova-api-0\" (UID: \"1f3e8c12-ff7d-4915-975f-90977aa7dbeb\") " pod="openstack/nova-api-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.301244 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f3e8c12-ff7d-4915-975f-90977aa7dbeb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1f3e8c12-ff7d-4915-975f-90977aa7dbeb\") " pod="openstack/nova-api-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.301280 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqbcf\" (UniqueName: \"kubernetes.io/projected/83093822-df53-4202-be80-6213a6d2885c-kube-api-access-sqbcf\") pod 
\"nova-scheduler-0\" (UID: \"83093822-df53-4202-be80-6213a6d2885c\") " pod="openstack/nova-scheduler-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.301327 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hln6t\" (UniqueName: \"kubernetes.io/projected/1f3e8c12-ff7d-4915-975f-90977aa7dbeb-kube-api-access-hln6t\") pod \"nova-api-0\" (UID: \"1f3e8c12-ff7d-4915-975f-90977aa7dbeb\") " pod="openstack/nova-api-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.315678 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7lbbq" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.321374 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.362474 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d6c5877d9-p6q7z"] Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.365640 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d6c5877d9-p6q7z" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.399496 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d6c5877d9-p6q7z"] Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.403850 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c543e93-7d21-4da8-be03-c4febe406083-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5c543e93-7d21-4da8-be03-c4febe406083\") " pod="openstack/nova-metadata-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.403914 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c543e93-7d21-4da8-be03-c4febe406083-config-data\") pod \"nova-metadata-0\" (UID: \"5c543e93-7d21-4da8-be03-c4febe406083\") " pod="openstack/nova-metadata-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.403940 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83093822-df53-4202-be80-6213a6d2885c-config-data\") pod \"nova-scheduler-0\" (UID: \"83093822-df53-4202-be80-6213a6d2885c\") " pod="openstack/nova-scheduler-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.403961 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f3e8c12-ff7d-4915-975f-90977aa7dbeb-config-data\") pod \"nova-api-0\" (UID: \"1f3e8c12-ff7d-4915-975f-90977aa7dbeb\") " pod="openstack/nova-api-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.403984 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd0c0ba-5266-4691-84af-b801d0ae00b3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"4dd0c0ba-5266-4691-84af-b801d0ae00b3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.404016 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dd0c0ba-5266-4691-84af-b801d0ae00b3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4dd0c0ba-5266-4691-84af-b801d0ae00b3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.404052 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f3e8c12-ff7d-4915-975f-90977aa7dbeb-logs\") pod \"nova-api-0\" (UID: \"1f3e8c12-ff7d-4915-975f-90977aa7dbeb\") " pod="openstack/nova-api-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.404068 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f3e8c12-ff7d-4915-975f-90977aa7dbeb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1f3e8c12-ff7d-4915-975f-90977aa7dbeb\") " pod="openstack/nova-api-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.404095 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqbcf\" (UniqueName: \"kubernetes.io/projected/83093822-df53-4202-be80-6213a6d2885c-kube-api-access-sqbcf\") pod \"nova-scheduler-0\" (UID: \"83093822-df53-4202-be80-6213a6d2885c\") " pod="openstack/nova-scheduler-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.404221 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hln6t\" (UniqueName: \"kubernetes.io/projected/1f3e8c12-ff7d-4915-975f-90977aa7dbeb-kube-api-access-hln6t\") pod \"nova-api-0\" (UID: \"1f3e8c12-ff7d-4915-975f-90977aa7dbeb\") " pod="openstack/nova-api-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.404262 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2gcs\" (UniqueName: \"kubernetes.io/projected/5c543e93-7d21-4da8-be03-c4febe406083-kube-api-access-x2gcs\") pod \"nova-metadata-0\" (UID: \"5c543e93-7d21-4da8-be03-c4febe406083\") " pod="openstack/nova-metadata-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.404353 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c543e93-7d21-4da8-be03-c4febe406083-logs\") pod \"nova-metadata-0\" (UID: \"5c543e93-7d21-4da8-be03-c4febe406083\") " pod="openstack/nova-metadata-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.404380 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83093822-df53-4202-be80-6213a6d2885c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"83093822-df53-4202-be80-6213a6d2885c\") " pod="openstack/nova-scheduler-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.404398 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4dh8\" (UniqueName: \"kubernetes.io/projected/4dd0c0ba-5266-4691-84af-b801d0ae00b3-kube-api-access-m4dh8\") pod \"nova-cell1-novncproxy-0\" (UID: \"4dd0c0ba-5266-4691-84af-b801d0ae00b3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.413822 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83093822-df53-4202-be80-6213a6d2885c-config-data\") pod \"nova-scheduler-0\" (UID: \"83093822-df53-4202-be80-6213a6d2885c\") " pod="openstack/nova-scheduler-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.415037 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/1f3e8c12-ff7d-4915-975f-90977aa7dbeb-logs\") pod \"nova-api-0\" (UID: \"1f3e8c12-ff7d-4915-975f-90977aa7dbeb\") " pod="openstack/nova-api-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.416716 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd0c0ba-5266-4691-84af-b801d0ae00b3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4dd0c0ba-5266-4691-84af-b801d0ae00b3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.418479 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dd0c0ba-5266-4691-84af-b801d0ae00b3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4dd0c0ba-5266-4691-84af-b801d0ae00b3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.422941 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f3e8c12-ff7d-4915-975f-90977aa7dbeb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1f3e8c12-ff7d-4915-975f-90977aa7dbeb\") " pod="openstack/nova-api-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.424006 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f3e8c12-ff7d-4915-975f-90977aa7dbeb-config-data\") pod \"nova-api-0\" (UID: \"1f3e8c12-ff7d-4915-975f-90977aa7dbeb\") " pod="openstack/nova-api-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.424700 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4dh8\" (UniqueName: \"kubernetes.io/projected/4dd0c0ba-5266-4691-84af-b801d0ae00b3-kube-api-access-m4dh8\") pod \"nova-cell1-novncproxy-0\" (UID: \"4dd0c0ba-5266-4691-84af-b801d0ae00b3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:17:46 crc 
kubenswrapper[4826]: I0129 08:17:46.426525 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83093822-df53-4202-be80-6213a6d2885c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"83093822-df53-4202-be80-6213a6d2885c\") " pod="openstack/nova-scheduler-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.432000 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqbcf\" (UniqueName: \"kubernetes.io/projected/83093822-df53-4202-be80-6213a6d2885c-kube-api-access-sqbcf\") pod \"nova-scheduler-0\" (UID: \"83093822-df53-4202-be80-6213a6d2885c\") " pod="openstack/nova-scheduler-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.434085 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hln6t\" (UniqueName: \"kubernetes.io/projected/1f3e8c12-ff7d-4915-975f-90977aa7dbeb-kube-api-access-hln6t\") pod \"nova-api-0\" (UID: \"1f3e8c12-ff7d-4915-975f-90977aa7dbeb\") " pod="openstack/nova-api-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.472355 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.506027 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2dfdaf43-55a9-45c0-9550-5d191df8ccc7-ovsdbserver-sb\") pod \"dnsmasq-dns-5d6c5877d9-p6q7z\" (UID: \"2dfdaf43-55a9-45c0-9550-5d191df8ccc7\") " pod="openstack/dnsmasq-dns-5d6c5877d9-p6q7z" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.506081 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2dfdaf43-55a9-45c0-9550-5d191df8ccc7-ovsdbserver-nb\") pod \"dnsmasq-dns-5d6c5877d9-p6q7z\" (UID: \"2dfdaf43-55a9-45c0-9550-5d191df8ccc7\") " pod="openstack/dnsmasq-dns-5d6c5877d9-p6q7z" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.506127 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2dfdaf43-55a9-45c0-9550-5d191df8ccc7-dns-svc\") pod \"dnsmasq-dns-5d6c5877d9-p6q7z\" (UID: \"2dfdaf43-55a9-45c0-9550-5d191df8ccc7\") " pod="openstack/dnsmasq-dns-5d6c5877d9-p6q7z" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.506154 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2gcs\" (UniqueName: \"kubernetes.io/projected/5c543e93-7d21-4da8-be03-c4febe406083-kube-api-access-x2gcs\") pod \"nova-metadata-0\" (UID: \"5c543e93-7d21-4da8-be03-c4febe406083\") " pod="openstack/nova-metadata-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.506204 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dfdaf43-55a9-45c0-9550-5d191df8ccc7-config\") pod \"dnsmasq-dns-5d6c5877d9-p6q7z\" (UID: \"2dfdaf43-55a9-45c0-9550-5d191df8ccc7\") " 
pod="openstack/dnsmasq-dns-5d6c5877d9-p6q7z" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.506256 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c543e93-7d21-4da8-be03-c4febe406083-logs\") pod \"nova-metadata-0\" (UID: \"5c543e93-7d21-4da8-be03-c4febe406083\") " pod="openstack/nova-metadata-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.506289 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzjtr\" (UniqueName: \"kubernetes.io/projected/2dfdaf43-55a9-45c0-9550-5d191df8ccc7-kube-api-access-lzjtr\") pod \"dnsmasq-dns-5d6c5877d9-p6q7z\" (UID: \"2dfdaf43-55a9-45c0-9550-5d191df8ccc7\") " pod="openstack/dnsmasq-dns-5d6c5877d9-p6q7z" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.506340 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c543e93-7d21-4da8-be03-c4febe406083-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5c543e93-7d21-4da8-be03-c4febe406083\") " pod="openstack/nova-metadata-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.506376 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c543e93-7d21-4da8-be03-c4febe406083-config-data\") pod \"nova-metadata-0\" (UID: \"5c543e93-7d21-4da8-be03-c4febe406083\") " pod="openstack/nova-metadata-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.507127 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c543e93-7d21-4da8-be03-c4febe406083-logs\") pod \"nova-metadata-0\" (UID: \"5c543e93-7d21-4da8-be03-c4febe406083\") " pod="openstack/nova-metadata-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.513696 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c543e93-7d21-4da8-be03-c4febe406083-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5c543e93-7d21-4da8-be03-c4febe406083\") " pod="openstack/nova-metadata-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.514031 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c543e93-7d21-4da8-be03-c4febe406083-config-data\") pod \"nova-metadata-0\" (UID: \"5c543e93-7d21-4da8-be03-c4febe406083\") " pod="openstack/nova-metadata-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.521701 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2gcs\" (UniqueName: \"kubernetes.io/projected/5c543e93-7d21-4da8-be03-c4febe406083-kube-api-access-x2gcs\") pod \"nova-metadata-0\" (UID: \"5c543e93-7d21-4da8-be03-c4febe406083\") " pod="openstack/nova-metadata-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.608074 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2dfdaf43-55a9-45c0-9550-5d191df8ccc7-dns-svc\") pod \"dnsmasq-dns-5d6c5877d9-p6q7z\" (UID: \"2dfdaf43-55a9-45c0-9550-5d191df8ccc7\") " pod="openstack/dnsmasq-dns-5d6c5877d9-p6q7z" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.608135 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dfdaf43-55a9-45c0-9550-5d191df8ccc7-config\") pod \"dnsmasq-dns-5d6c5877d9-p6q7z\" (UID: \"2dfdaf43-55a9-45c0-9550-5d191df8ccc7\") " pod="openstack/dnsmasq-dns-5d6c5877d9-p6q7z" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.608181 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzjtr\" (UniqueName: \"kubernetes.io/projected/2dfdaf43-55a9-45c0-9550-5d191df8ccc7-kube-api-access-lzjtr\") pod 
\"dnsmasq-dns-5d6c5877d9-p6q7z\" (UID: \"2dfdaf43-55a9-45c0-9550-5d191df8ccc7\") " pod="openstack/dnsmasq-dns-5d6c5877d9-p6q7z" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.608278 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2dfdaf43-55a9-45c0-9550-5d191df8ccc7-ovsdbserver-sb\") pod \"dnsmasq-dns-5d6c5877d9-p6q7z\" (UID: \"2dfdaf43-55a9-45c0-9550-5d191df8ccc7\") " pod="openstack/dnsmasq-dns-5d6c5877d9-p6q7z" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.608326 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2dfdaf43-55a9-45c0-9550-5d191df8ccc7-ovsdbserver-nb\") pod \"dnsmasq-dns-5d6c5877d9-p6q7z\" (UID: \"2dfdaf43-55a9-45c0-9550-5d191df8ccc7\") " pod="openstack/dnsmasq-dns-5d6c5877d9-p6q7z" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.610455 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2dfdaf43-55a9-45c0-9550-5d191df8ccc7-dns-svc\") pod \"dnsmasq-dns-5d6c5877d9-p6q7z\" (UID: \"2dfdaf43-55a9-45c0-9550-5d191df8ccc7\") " pod="openstack/dnsmasq-dns-5d6c5877d9-p6q7z" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.610492 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dfdaf43-55a9-45c0-9550-5d191df8ccc7-config\") pod \"dnsmasq-dns-5d6c5877d9-p6q7z\" (UID: \"2dfdaf43-55a9-45c0-9550-5d191df8ccc7\") " pod="openstack/dnsmasq-dns-5d6c5877d9-p6q7z" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.610648 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2dfdaf43-55a9-45c0-9550-5d191df8ccc7-ovsdbserver-sb\") pod \"dnsmasq-dns-5d6c5877d9-p6q7z\" (UID: \"2dfdaf43-55a9-45c0-9550-5d191df8ccc7\") " 
pod="openstack/dnsmasq-dns-5d6c5877d9-p6q7z" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.610771 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2dfdaf43-55a9-45c0-9550-5d191df8ccc7-ovsdbserver-nb\") pod \"dnsmasq-dns-5d6c5877d9-p6q7z\" (UID: \"2dfdaf43-55a9-45c0-9550-5d191df8ccc7\") " pod="openstack/dnsmasq-dns-5d6c5877d9-p6q7z" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.613163 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.624439 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzjtr\" (UniqueName: \"kubernetes.io/projected/2dfdaf43-55a9-45c0-9550-5d191df8ccc7-kube-api-access-lzjtr\") pod \"dnsmasq-dns-5d6c5877d9-p6q7z\" (UID: \"2dfdaf43-55a9-45c0-9550-5d191df8ccc7\") " pod="openstack/dnsmasq-dns-5d6c5877d9-p6q7z" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.635685 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.720688 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.783874 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d6c5877d9-p6q7z" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.856593 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5xxjk"] Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.858625 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5xxjk" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.877518 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.877780 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.891005 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5xxjk"] Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.919944 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-7lbbq"] Jan 29 08:17:46 crc kubenswrapper[4826]: I0129 08:17:46.964944 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7lbbq" event={"ID":"c22f0edc-d56e-4aaa-b9f4-ec8394a3d880","Type":"ContainerStarted","Data":"5b18efa85bc6b99e13fc4ff290b92f6a16f6500e851eff2e3d91dbd79bd390ea"} Jan 29 08:17:47 crc kubenswrapper[4826]: I0129 08:17:47.020603 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a397374-869e-4bb1-9432-5d96bb65411a-scripts\") pod \"nova-cell1-conductor-db-sync-5xxjk\" (UID: \"8a397374-869e-4bb1-9432-5d96bb65411a\") " pod="openstack/nova-cell1-conductor-db-sync-5xxjk" Jan 29 08:17:47 crc kubenswrapper[4826]: I0129 08:17:47.020682 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a397374-869e-4bb1-9432-5d96bb65411a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5xxjk\" (UID: \"8a397374-869e-4bb1-9432-5d96bb65411a\") " pod="openstack/nova-cell1-conductor-db-sync-5xxjk" Jan 29 08:17:47 crc kubenswrapper[4826]: I0129 08:17:47.020784 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a397374-869e-4bb1-9432-5d96bb65411a-config-data\") pod \"nova-cell1-conductor-db-sync-5xxjk\" (UID: \"8a397374-869e-4bb1-9432-5d96bb65411a\") " pod="openstack/nova-cell1-conductor-db-sync-5xxjk" Jan 29 08:17:47 crc kubenswrapper[4826]: I0129 08:17:47.020862 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2v5b\" (UniqueName: \"kubernetes.io/projected/8a397374-869e-4bb1-9432-5d96bb65411a-kube-api-access-k2v5b\") pod \"nova-cell1-conductor-db-sync-5xxjk\" (UID: \"8a397374-869e-4bb1-9432-5d96bb65411a\") " pod="openstack/nova-cell1-conductor-db-sync-5xxjk" Jan 29 08:17:47 crc kubenswrapper[4826]: I0129 08:17:47.045439 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 08:17:47 crc kubenswrapper[4826]: I0129 08:17:47.122714 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2v5b\" (UniqueName: \"kubernetes.io/projected/8a397374-869e-4bb1-9432-5d96bb65411a-kube-api-access-k2v5b\") pod \"nova-cell1-conductor-db-sync-5xxjk\" (UID: \"8a397374-869e-4bb1-9432-5d96bb65411a\") " pod="openstack/nova-cell1-conductor-db-sync-5xxjk" Jan 29 08:17:47 crc kubenswrapper[4826]: I0129 08:17:47.123012 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a397374-869e-4bb1-9432-5d96bb65411a-scripts\") pod \"nova-cell1-conductor-db-sync-5xxjk\" (UID: \"8a397374-869e-4bb1-9432-5d96bb65411a\") " pod="openstack/nova-cell1-conductor-db-sync-5xxjk" Jan 29 08:17:47 crc kubenswrapper[4826]: I0129 08:17:47.123066 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a397374-869e-4bb1-9432-5d96bb65411a-combined-ca-bundle\") pod 
\"nova-cell1-conductor-db-sync-5xxjk\" (UID: \"8a397374-869e-4bb1-9432-5d96bb65411a\") " pod="openstack/nova-cell1-conductor-db-sync-5xxjk" Jan 29 08:17:47 crc kubenswrapper[4826]: I0129 08:17:47.123126 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a397374-869e-4bb1-9432-5d96bb65411a-config-data\") pod \"nova-cell1-conductor-db-sync-5xxjk\" (UID: \"8a397374-869e-4bb1-9432-5d96bb65411a\") " pod="openstack/nova-cell1-conductor-db-sync-5xxjk" Jan 29 08:17:47 crc kubenswrapper[4826]: I0129 08:17:47.129246 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a397374-869e-4bb1-9432-5d96bb65411a-scripts\") pod \"nova-cell1-conductor-db-sync-5xxjk\" (UID: \"8a397374-869e-4bb1-9432-5d96bb65411a\") " pod="openstack/nova-cell1-conductor-db-sync-5xxjk" Jan 29 08:17:47 crc kubenswrapper[4826]: I0129 08:17:47.129442 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a397374-869e-4bb1-9432-5d96bb65411a-config-data\") pod \"nova-cell1-conductor-db-sync-5xxjk\" (UID: \"8a397374-869e-4bb1-9432-5d96bb65411a\") " pod="openstack/nova-cell1-conductor-db-sync-5xxjk" Jan 29 08:17:47 crc kubenswrapper[4826]: I0129 08:17:47.129847 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a397374-869e-4bb1-9432-5d96bb65411a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5xxjk\" (UID: \"8a397374-869e-4bb1-9432-5d96bb65411a\") " pod="openstack/nova-cell1-conductor-db-sync-5xxjk" Jan 29 08:17:47 crc kubenswrapper[4826]: I0129 08:17:47.138985 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2v5b\" (UniqueName: \"kubernetes.io/projected/8a397374-869e-4bb1-9432-5d96bb65411a-kube-api-access-k2v5b\") pod \"nova-cell1-conductor-db-sync-5xxjk\" 
(UID: \"8a397374-869e-4bb1-9432-5d96bb65411a\") " pod="openstack/nova-cell1-conductor-db-sync-5xxjk" Jan 29 08:17:47 crc kubenswrapper[4826]: I0129 08:17:47.170699 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 08:17:47 crc kubenswrapper[4826]: I0129 08:17:47.206972 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5xxjk" Jan 29 08:17:47 crc kubenswrapper[4826]: I0129 08:17:47.255702 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:17:47 crc kubenswrapper[4826]: W0129 08:17:47.271784 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c543e93_7d21_4da8_be03_c4febe406083.slice/crio-42b810f80a506b0b1a2b54be425f5a1e04a8bd879dbdd508579fa78fbb0ea503 WatchSource:0}: Error finding container 42b810f80a506b0b1a2b54be425f5a1e04a8bd879dbdd508579fa78fbb0ea503: Status 404 returned error can't find the container with id 42b810f80a506b0b1a2b54be425f5a1e04a8bd879dbdd508579fa78fbb0ea503 Jan 29 08:17:47 crc kubenswrapper[4826]: I0129 08:17:47.321016 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 08:17:47 crc kubenswrapper[4826]: I0129 08:17:47.393533 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d6c5877d9-p6q7z"] Jan 29 08:17:47 crc kubenswrapper[4826]: I0129 08:17:47.736052 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5xxjk"] Jan 29 08:17:47 crc kubenswrapper[4826]: W0129 08:17:47.746421 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a397374_869e_4bb1_9432_5d96bb65411a.slice/crio-9884e26a6e9e6c626cb3ff1922494a4196b96edc144fc1ca055d8253861cb9d3 WatchSource:0}: Error finding container 
9884e26a6e9e6c626cb3ff1922494a4196b96edc144fc1ca055d8253861cb9d3: Status 404 returned error can't find the container with id 9884e26a6e9e6c626cb3ff1922494a4196b96edc144fc1ca055d8253861cb9d3 Jan 29 08:17:47 crc kubenswrapper[4826]: I0129 08:17:47.976098 4826 generic.go:334] "Generic (PLEG): container finished" podID="2dfdaf43-55a9-45c0-9550-5d191df8ccc7" containerID="5fc9608d5135ec98187593936ee76ffcaab7710ba8a2e12320f0603057da1315" exitCode=0 Jan 29 08:17:47 crc kubenswrapper[4826]: I0129 08:17:47.976278 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d6c5877d9-p6q7z" event={"ID":"2dfdaf43-55a9-45c0-9550-5d191df8ccc7","Type":"ContainerDied","Data":"5fc9608d5135ec98187593936ee76ffcaab7710ba8a2e12320f0603057da1315"} Jan 29 08:17:47 crc kubenswrapper[4826]: I0129 08:17:47.976484 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d6c5877d9-p6q7z" event={"ID":"2dfdaf43-55a9-45c0-9550-5d191df8ccc7","Type":"ContainerStarted","Data":"6f41c433b7d36fb91ec0a6873c57c03d8abf84f4ad222c81cd5911a5f9bf7b48"} Jan 29 08:17:47 crc kubenswrapper[4826]: I0129 08:17:47.978951 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4dd0c0ba-5266-4691-84af-b801d0ae00b3","Type":"ContainerStarted","Data":"6267e58a3ee0c75f98cfcf033d9cb38093c21d897e076548358946149f3f9f62"} Jan 29 08:17:47 crc kubenswrapper[4826]: I0129 08:17:47.980227 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5c543e93-7d21-4da8-be03-c4febe406083","Type":"ContainerStarted","Data":"42b810f80a506b0b1a2b54be425f5a1e04a8bd879dbdd508579fa78fbb0ea503"} Jan 29 08:17:47 crc kubenswrapper[4826]: I0129 08:17:47.981928 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7lbbq" 
event={"ID":"c22f0edc-d56e-4aaa-b9f4-ec8394a3d880","Type":"ContainerStarted","Data":"e313c31e9558789bce757f84d738a188c40a80f3634f1374e57eae6ff170da1e"} Jan 29 08:17:47 crc kubenswrapper[4826]: I0129 08:17:47.984251 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1f3e8c12-ff7d-4915-975f-90977aa7dbeb","Type":"ContainerStarted","Data":"d1af96f5e548b9a8a47ce0f83b05d4ea10977791d21241703b830bc95f911462"} Jan 29 08:17:47 crc kubenswrapper[4826]: I0129 08:17:47.985692 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5xxjk" event={"ID":"8a397374-869e-4bb1-9432-5d96bb65411a","Type":"ContainerStarted","Data":"dfced1e3782e9d65b1311526b5792aa211e33e29061a920f972cb37dcd9c47ac"} Jan 29 08:17:47 crc kubenswrapper[4826]: I0129 08:17:47.985719 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5xxjk" event={"ID":"8a397374-869e-4bb1-9432-5d96bb65411a","Type":"ContainerStarted","Data":"9884e26a6e9e6c626cb3ff1922494a4196b96edc144fc1ca055d8253861cb9d3"} Jan 29 08:17:47 crc kubenswrapper[4826]: I0129 08:17:47.988041 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"83093822-df53-4202-be80-6213a6d2885c","Type":"ContainerStarted","Data":"e4c005f58d46c72b968f9f7c275479f6718ca773e75f6db41ce5867313fb7005"} Jan 29 08:17:48 crc kubenswrapper[4826]: I0129 08:17:48.016148 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-7lbbq" podStartSLOduration=3.016129186 podStartE2EDuration="3.016129186s" podCreationTimestamp="2026-01-29 08:17:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:17:48.009533292 +0000 UTC m=+5651.871326361" watchObservedRunningTime="2026-01-29 08:17:48.016129186 +0000 UTC m=+5651.877922255" Jan 29 08:17:48 crc 
kubenswrapper[4826]: I0129 08:17:48.030501 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-5xxjk" podStartSLOduration=2.030484694 podStartE2EDuration="2.030484694s" podCreationTimestamp="2026-01-29 08:17:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:17:48.025609466 +0000 UTC m=+5651.887402535" watchObservedRunningTime="2026-01-29 08:17:48.030484694 +0000 UTC m=+5651.892277763" Jan 29 08:17:50 crc kubenswrapper[4826]: I0129 08:17:50.152345 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:17:50 crc kubenswrapper[4826]: I0129 08:17:50.160777 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 08:17:51 crc kubenswrapper[4826]: I0129 08:17:51.034329 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"83093822-df53-4202-be80-6213a6d2885c","Type":"ContainerStarted","Data":"2e0584da702a9c9b52903f793b5a6159a94287e81ffad7609ccc3e49914da4a5"} Jan 29 08:17:51 crc kubenswrapper[4826]: I0129 08:17:51.055545 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d6c5877d9-p6q7z" event={"ID":"2dfdaf43-55a9-45c0-9550-5d191df8ccc7","Type":"ContainerStarted","Data":"954a77c43a953a94ee25b7f656fe5d6a95bfc9558b03f7c4e7e55f2a4638f754"} Jan 29 08:17:51 crc kubenswrapper[4826]: I0129 08:17:51.055876 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d6c5877d9-p6q7z" Jan 29 08:17:51 crc kubenswrapper[4826]: I0129 08:17:51.057501 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4dd0c0ba-5266-4691-84af-b801d0ae00b3","Type":"ContainerStarted","Data":"b81e97a8516a0a8678d4de3e999c0190d42e75c132f51c8c0e4809ccbfea3152"} Jan 29 08:17:51 crc 
kubenswrapper[4826]: I0129 08:17:51.057632 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="4dd0c0ba-5266-4691-84af-b801d0ae00b3" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://b81e97a8516a0a8678d4de3e999c0190d42e75c132f51c8c0e4809ccbfea3152" gracePeriod=30 Jan 29 08:17:51 crc kubenswrapper[4826]: I0129 08:17:51.064774 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5c543e93-7d21-4da8-be03-c4febe406083","Type":"ContainerStarted","Data":"a6fecabfc0a58c6842debf4d07bc8041066ebf8897f6decb568045b50cdcbd40"} Jan 29 08:17:51 crc kubenswrapper[4826]: I0129 08:17:51.064827 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5c543e93-7d21-4da8-be03-c4febe406083","Type":"ContainerStarted","Data":"716f9c8c79517721beb3c0a231230651cc28b2ebba520ac618f65354db9f2966"} Jan 29 08:17:51 crc kubenswrapper[4826]: I0129 08:17:51.065384 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5c543e93-7d21-4da8-be03-c4febe406083" containerName="nova-metadata-metadata" containerID="cri-o://a6fecabfc0a58c6842debf4d07bc8041066ebf8897f6decb568045b50cdcbd40" gracePeriod=30 Jan 29 08:17:51 crc kubenswrapper[4826]: I0129 08:17:51.066238 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5c543e93-7d21-4da8-be03-c4febe406083" containerName="nova-metadata-log" containerID="cri-o://716f9c8c79517721beb3c0a231230651cc28b2ebba520ac618f65354db9f2966" gracePeriod=30 Jan 29 08:17:51 crc kubenswrapper[4826]: I0129 08:17:51.072540 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1f3e8c12-ff7d-4915-975f-90977aa7dbeb","Type":"ContainerStarted","Data":"6b0816c2739157f4c1e105f816840e4f9fa5358e469cf5608ae3b08161b528ea"} Jan 29 08:17:51 crc 
kubenswrapper[4826]: I0129 08:17:51.072613 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1f3e8c12-ff7d-4915-975f-90977aa7dbeb","Type":"ContainerStarted","Data":"cea5c5aa25d8bf160f7e8c3ff99ebc83dab322c88ea6a1a70fb0e88497a8ed66"} Jan 29 08:17:51 crc kubenswrapper[4826]: I0129 08:17:51.075962 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.075231912 podStartE2EDuration="5.075947329s" podCreationTimestamp="2026-01-29 08:17:46 +0000 UTC" firstStartedPulling="2026-01-29 08:17:47.173684322 +0000 UTC m=+5651.035477401" lastFinishedPulling="2026-01-29 08:17:50.174399749 +0000 UTC m=+5654.036192818" observedRunningTime="2026-01-29 08:17:51.071945794 +0000 UTC m=+5654.933738863" watchObservedRunningTime="2026-01-29 08:17:51.075947329 +0000 UTC m=+5654.937740408" Jan 29 08:17:51 crc kubenswrapper[4826]: I0129 08:17:51.078511 4826 generic.go:334] "Generic (PLEG): container finished" podID="8a397374-869e-4bb1-9432-5d96bb65411a" containerID="dfced1e3782e9d65b1311526b5792aa211e33e29061a920f972cb37dcd9c47ac" exitCode=0 Jan 29 08:17:51 crc kubenswrapper[4826]: I0129 08:17:51.078561 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5xxjk" event={"ID":"8a397374-869e-4bb1-9432-5d96bb65411a","Type":"ContainerDied","Data":"dfced1e3782e9d65b1311526b5792aa211e33e29061a920f972cb37dcd9c47ac"} Jan 29 08:17:51 crc kubenswrapper[4826]: I0129 08:17:51.101669 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d6c5877d9-p6q7z" podStartSLOduration=5.101648716 podStartE2EDuration="5.101648716s" podCreationTimestamp="2026-01-29 08:17:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:17:51.096375227 +0000 UTC m=+5654.958168296" watchObservedRunningTime="2026-01-29 
08:17:51.101648716 +0000 UTC m=+5654.963441785" Jan 29 08:17:51 crc kubenswrapper[4826]: I0129 08:17:51.125143 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.007905899 podStartE2EDuration="5.125125514s" podCreationTimestamp="2026-01-29 08:17:46 +0000 UTC" firstStartedPulling="2026-01-29 08:17:47.058465448 +0000 UTC m=+5650.920258517" lastFinishedPulling="2026-01-29 08:17:50.175685053 +0000 UTC m=+5654.037478132" observedRunningTime="2026-01-29 08:17:51.117995686 +0000 UTC m=+5654.979788745" watchObservedRunningTime="2026-01-29 08:17:51.125125514 +0000 UTC m=+5654.986918583" Jan 29 08:17:51 crc kubenswrapper[4826]: I0129 08:17:51.137654 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.233266214 podStartE2EDuration="5.137632203s" podCreationTimestamp="2026-01-29 08:17:46 +0000 UTC" firstStartedPulling="2026-01-29 08:17:47.274009384 +0000 UTC m=+5651.135802443" lastFinishedPulling="2026-01-29 08:17:50.178375363 +0000 UTC m=+5654.040168432" observedRunningTime="2026-01-29 08:17:51.134552232 +0000 UTC m=+5654.996345301" watchObservedRunningTime="2026-01-29 08:17:51.137632203 +0000 UTC m=+5654.999425282" Jan 29 08:17:51 crc kubenswrapper[4826]: I0129 08:17:51.166623 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.3833311950000002 podStartE2EDuration="5.166603776s" podCreationTimestamp="2026-01-29 08:17:46 +0000 UTC" firstStartedPulling="2026-01-29 08:17:47.391090217 +0000 UTC m=+5651.252883286" lastFinishedPulling="2026-01-29 08:17:50.174362798 +0000 UTC m=+5654.036155867" observedRunningTime="2026-01-29 08:17:51.165764204 +0000 UTC m=+5655.027557293" watchObservedRunningTime="2026-01-29 08:17:51.166603776 +0000 UTC m=+5655.028396835" Jan 29 08:17:51 crc kubenswrapper[4826]: I0129 08:17:51.473279 4826 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:17:51 crc kubenswrapper[4826]: I0129 08:17:51.613366 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 08:17:51 crc kubenswrapper[4826]: I0129 08:17:51.636646 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 08:17:51 crc kubenswrapper[4826]: I0129 08:17:51.636719 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 08:17:51 crc kubenswrapper[4826]: I0129 08:17:51.699736 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 08:17:51 crc kubenswrapper[4826]: I0129 08:17:51.838099 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2gcs\" (UniqueName: \"kubernetes.io/projected/5c543e93-7d21-4da8-be03-c4febe406083-kube-api-access-x2gcs\") pod \"5c543e93-7d21-4da8-be03-c4febe406083\" (UID: \"5c543e93-7d21-4da8-be03-c4febe406083\") " Jan 29 08:17:51 crc kubenswrapper[4826]: I0129 08:17:51.838197 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c543e93-7d21-4da8-be03-c4febe406083-logs\") pod \"5c543e93-7d21-4da8-be03-c4febe406083\" (UID: \"5c543e93-7d21-4da8-be03-c4febe406083\") " Jan 29 08:17:51 crc kubenswrapper[4826]: I0129 08:17:51.838432 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c543e93-7d21-4da8-be03-c4febe406083-combined-ca-bundle\") pod \"5c543e93-7d21-4da8-be03-c4febe406083\" (UID: \"5c543e93-7d21-4da8-be03-c4febe406083\") " Jan 29 08:17:51 crc kubenswrapper[4826]: I0129 08:17:51.838483 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5c543e93-7d21-4da8-be03-c4febe406083-config-data\") pod \"5c543e93-7d21-4da8-be03-c4febe406083\" (UID: \"5c543e93-7d21-4da8-be03-c4febe406083\") " Jan 29 08:17:51 crc kubenswrapper[4826]: I0129 08:17:51.839658 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c543e93-7d21-4da8-be03-c4febe406083-logs" (OuterVolumeSpecName: "logs") pod "5c543e93-7d21-4da8-be03-c4febe406083" (UID: "5c543e93-7d21-4da8-be03-c4febe406083"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:17:51 crc kubenswrapper[4826]: I0129 08:17:51.850550 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c543e93-7d21-4da8-be03-c4febe406083-kube-api-access-x2gcs" (OuterVolumeSpecName: "kube-api-access-x2gcs") pod "5c543e93-7d21-4da8-be03-c4febe406083" (UID: "5c543e93-7d21-4da8-be03-c4febe406083"). InnerVolumeSpecName "kube-api-access-x2gcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:17:51 crc kubenswrapper[4826]: E0129 08:17:51.879814 4826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5c543e93-7d21-4da8-be03-c4febe406083-combined-ca-bundle podName:5c543e93-7d21-4da8-be03-c4febe406083 nodeName:}" failed. No retries permitted until 2026-01-29 08:17:52.379778216 +0000 UTC m=+5656.241571285 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/5c543e93-7d21-4da8-be03-c4febe406083-combined-ca-bundle") pod "5c543e93-7d21-4da8-be03-c4febe406083" (UID: "5c543e93-7d21-4da8-be03-c4febe406083") : error deleting /var/lib/kubelet/pods/5c543e93-7d21-4da8-be03-c4febe406083/volume-subpaths: remove /var/lib/kubelet/pods/5c543e93-7d21-4da8-be03-c4febe406083/volume-subpaths: no such file or directory Jan 29 08:17:51 crc kubenswrapper[4826]: I0129 08:17:51.882953 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c543e93-7d21-4da8-be03-c4febe406083-config-data" (OuterVolumeSpecName: "config-data") pod "5c543e93-7d21-4da8-be03-c4febe406083" (UID: "5c543e93-7d21-4da8-be03-c4febe406083"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:17:51 crc kubenswrapper[4826]: I0129 08:17:51.941058 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2gcs\" (UniqueName: \"kubernetes.io/projected/5c543e93-7d21-4da8-be03-c4febe406083-kube-api-access-x2gcs\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:51 crc kubenswrapper[4826]: I0129 08:17:51.941349 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c543e93-7d21-4da8-be03-c4febe406083-logs\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:51 crc kubenswrapper[4826]: I0129 08:17:51.941447 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c543e93-7d21-4da8-be03-c4febe406083-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.088907 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.088900 4826 generic.go:334] "Generic (PLEG): container finished" podID="5c543e93-7d21-4da8-be03-c4febe406083" containerID="a6fecabfc0a58c6842debf4d07bc8041066ebf8897f6decb568045b50cdcbd40" exitCode=0 Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.089283 4826 generic.go:334] "Generic (PLEG): container finished" podID="5c543e93-7d21-4da8-be03-c4febe406083" containerID="716f9c8c79517721beb3c0a231230651cc28b2ebba520ac618f65354db9f2966" exitCode=143 Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.088932 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5c543e93-7d21-4da8-be03-c4febe406083","Type":"ContainerDied","Data":"a6fecabfc0a58c6842debf4d07bc8041066ebf8897f6decb568045b50cdcbd40"} Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.089435 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5c543e93-7d21-4da8-be03-c4febe406083","Type":"ContainerDied","Data":"716f9c8c79517721beb3c0a231230651cc28b2ebba520ac618f65354db9f2966"} Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.089457 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5c543e93-7d21-4da8-be03-c4febe406083","Type":"ContainerDied","Data":"42b810f80a506b0b1a2b54be425f5a1e04a8bd879dbdd508579fa78fbb0ea503"} Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.089489 4826 scope.go:117] "RemoveContainer" containerID="a6fecabfc0a58c6842debf4d07bc8041066ebf8897f6decb568045b50cdcbd40" Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.092789 4826 generic.go:334] "Generic (PLEG): container finished" podID="c22f0edc-d56e-4aaa-b9f4-ec8394a3d880" containerID="e313c31e9558789bce757f84d738a188c40a80f3634f1374e57eae6ff170da1e" exitCode=0 Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.093089 4826 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7lbbq" event={"ID":"c22f0edc-d56e-4aaa-b9f4-ec8394a3d880","Type":"ContainerDied","Data":"e313c31e9558789bce757f84d738a188c40a80f3634f1374e57eae6ff170da1e"} Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.121834 4826 scope.go:117] "RemoveContainer" containerID="716f9c8c79517721beb3c0a231230651cc28b2ebba520ac618f65354db9f2966" Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.164716 4826 scope.go:117] "RemoveContainer" containerID="a6fecabfc0a58c6842debf4d07bc8041066ebf8897f6decb568045b50cdcbd40" Jan 29 08:17:52 crc kubenswrapper[4826]: E0129 08:17:52.165351 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6fecabfc0a58c6842debf4d07bc8041066ebf8897f6decb568045b50cdcbd40\": container with ID starting with a6fecabfc0a58c6842debf4d07bc8041066ebf8897f6decb568045b50cdcbd40 not found: ID does not exist" containerID="a6fecabfc0a58c6842debf4d07bc8041066ebf8897f6decb568045b50cdcbd40" Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.165409 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6fecabfc0a58c6842debf4d07bc8041066ebf8897f6decb568045b50cdcbd40"} err="failed to get container status \"a6fecabfc0a58c6842debf4d07bc8041066ebf8897f6decb568045b50cdcbd40\": rpc error: code = NotFound desc = could not find container \"a6fecabfc0a58c6842debf4d07bc8041066ebf8897f6decb568045b50cdcbd40\": container with ID starting with a6fecabfc0a58c6842debf4d07bc8041066ebf8897f6decb568045b50cdcbd40 not found: ID does not exist" Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.165441 4826 scope.go:117] "RemoveContainer" containerID="716f9c8c79517721beb3c0a231230651cc28b2ebba520ac618f65354db9f2966" Jan 29 08:17:52 crc kubenswrapper[4826]: E0129 08:17:52.165810 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"716f9c8c79517721beb3c0a231230651cc28b2ebba520ac618f65354db9f2966\": container with ID starting with 716f9c8c79517721beb3c0a231230651cc28b2ebba520ac618f65354db9f2966 not found: ID does not exist" containerID="716f9c8c79517721beb3c0a231230651cc28b2ebba520ac618f65354db9f2966" Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.165862 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"716f9c8c79517721beb3c0a231230651cc28b2ebba520ac618f65354db9f2966"} err="failed to get container status \"716f9c8c79517721beb3c0a231230651cc28b2ebba520ac618f65354db9f2966\": rpc error: code = NotFound desc = could not find container \"716f9c8c79517721beb3c0a231230651cc28b2ebba520ac618f65354db9f2966\": container with ID starting with 716f9c8c79517721beb3c0a231230651cc28b2ebba520ac618f65354db9f2966 not found: ID does not exist" Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.165895 4826 scope.go:117] "RemoveContainer" containerID="a6fecabfc0a58c6842debf4d07bc8041066ebf8897f6decb568045b50cdcbd40" Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.166257 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6fecabfc0a58c6842debf4d07bc8041066ebf8897f6decb568045b50cdcbd40"} err="failed to get container status \"a6fecabfc0a58c6842debf4d07bc8041066ebf8897f6decb568045b50cdcbd40\": rpc error: code = NotFound desc = could not find container \"a6fecabfc0a58c6842debf4d07bc8041066ebf8897f6decb568045b50cdcbd40\": container with ID starting with a6fecabfc0a58c6842debf4d07bc8041066ebf8897f6decb568045b50cdcbd40 not found: ID does not exist" Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.166339 4826 scope.go:117] "RemoveContainer" containerID="716f9c8c79517721beb3c0a231230651cc28b2ebba520ac618f65354db9f2966" Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.166859 4826 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"716f9c8c79517721beb3c0a231230651cc28b2ebba520ac618f65354db9f2966"} err="failed to get container status \"716f9c8c79517721beb3c0a231230651cc28b2ebba520ac618f65354db9f2966\": rpc error: code = NotFound desc = could not find container \"716f9c8c79517721beb3c0a231230651cc28b2ebba520ac618f65354db9f2966\": container with ID starting with 716f9c8c79517721beb3c0a231230651cc28b2ebba520ac618f65354db9f2966 not found: ID does not exist" Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.451154 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c543e93-7d21-4da8-be03-c4febe406083-combined-ca-bundle\") pod \"5c543e93-7d21-4da8-be03-c4febe406083\" (UID: \"5c543e93-7d21-4da8-be03-c4febe406083\") " Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.454349 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c543e93-7d21-4da8-be03-c4febe406083-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c543e93-7d21-4da8-be03-c4febe406083" (UID: "5c543e93-7d21-4da8-be03-c4febe406083"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.513486 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5xxjk" Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.553586 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c543e93-7d21-4da8-be03-c4febe406083-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.654964 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a397374-869e-4bb1-9432-5d96bb65411a-config-data\") pod \"8a397374-869e-4bb1-9432-5d96bb65411a\" (UID: \"8a397374-869e-4bb1-9432-5d96bb65411a\") " Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.655008 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a397374-869e-4bb1-9432-5d96bb65411a-scripts\") pod \"8a397374-869e-4bb1-9432-5d96bb65411a\" (UID: \"8a397374-869e-4bb1-9432-5d96bb65411a\") " Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.655051 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2v5b\" (UniqueName: \"kubernetes.io/projected/8a397374-869e-4bb1-9432-5d96bb65411a-kube-api-access-k2v5b\") pod \"8a397374-869e-4bb1-9432-5d96bb65411a\" (UID: \"8a397374-869e-4bb1-9432-5d96bb65411a\") " Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.655112 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a397374-869e-4bb1-9432-5d96bb65411a-combined-ca-bundle\") pod \"8a397374-869e-4bb1-9432-5d96bb65411a\" (UID: \"8a397374-869e-4bb1-9432-5d96bb65411a\") " Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.671411 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a397374-869e-4bb1-9432-5d96bb65411a-scripts" (OuterVolumeSpecName: "scripts") 
pod "8a397374-869e-4bb1-9432-5d96bb65411a" (UID: "8a397374-869e-4bb1-9432-5d96bb65411a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.674419 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a397374-869e-4bb1-9432-5d96bb65411a-kube-api-access-k2v5b" (OuterVolumeSpecName: "kube-api-access-k2v5b") pod "8a397374-869e-4bb1-9432-5d96bb65411a" (UID: "8a397374-869e-4bb1-9432-5d96bb65411a"). InnerVolumeSpecName "kube-api-access-k2v5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.698789 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a397374-869e-4bb1-9432-5d96bb65411a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a397374-869e-4bb1-9432-5d96bb65411a" (UID: "8a397374-869e-4bb1-9432-5d96bb65411a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.700740 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a397374-869e-4bb1-9432-5d96bb65411a-config-data" (OuterVolumeSpecName: "config-data") pod "8a397374-869e-4bb1-9432-5d96bb65411a" (UID: "8a397374-869e-4bb1-9432-5d96bb65411a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.757797 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a397374-869e-4bb1-9432-5d96bb65411a-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.757832 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a397374-869e-4bb1-9432-5d96bb65411a-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.757845 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2v5b\" (UniqueName: \"kubernetes.io/projected/8a397374-869e-4bb1-9432-5d96bb65411a-kube-api-access-k2v5b\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.757887 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a397374-869e-4bb1-9432-5d96bb65411a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.786329 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.798049 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.820164 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c543e93-7d21-4da8-be03-c4febe406083" path="/var/lib/kubelet/pods/5c543e93-7d21-4da8-be03-c4febe406083/volumes" Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.822227 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:17:52 crc kubenswrapper[4826]: E0129 08:17:52.822749 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a397374-869e-4bb1-9432-5d96bb65411a" 
containerName="nova-cell1-conductor-db-sync" Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.822775 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a397374-869e-4bb1-9432-5d96bb65411a" containerName="nova-cell1-conductor-db-sync" Jan 29 08:17:52 crc kubenswrapper[4826]: E0129 08:17:52.822811 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c543e93-7d21-4da8-be03-c4febe406083" containerName="nova-metadata-metadata" Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.822824 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c543e93-7d21-4da8-be03-c4febe406083" containerName="nova-metadata-metadata" Jan 29 08:17:52 crc kubenswrapper[4826]: E0129 08:17:52.822853 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c543e93-7d21-4da8-be03-c4febe406083" containerName="nova-metadata-log" Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.822867 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c543e93-7d21-4da8-be03-c4febe406083" containerName="nova-metadata-log" Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.823150 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a397374-869e-4bb1-9432-5d96bb65411a" containerName="nova-cell1-conductor-db-sync" Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.823180 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c543e93-7d21-4da8-be03-c4febe406083" containerName="nova-metadata-log" Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.823226 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c543e93-7d21-4da8-be03-c4febe406083" containerName="nova-metadata-metadata" Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.826075 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.827188 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.828652 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.828919 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.961445 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf94f776-6c62-4657-a6d4-71ef99842021-config-data\") pod \"nova-metadata-0\" (UID: \"cf94f776-6c62-4657-a6d4-71ef99842021\") " pod="openstack/nova-metadata-0" Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.961779 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf94f776-6c62-4657-a6d4-71ef99842021-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cf94f776-6c62-4657-a6d4-71ef99842021\") " pod="openstack/nova-metadata-0" Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.961932 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf94f776-6c62-4657-a6d4-71ef99842021-logs\") pod \"nova-metadata-0\" (UID: \"cf94f776-6c62-4657-a6d4-71ef99842021\") " pod="openstack/nova-metadata-0" Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.962197 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf94f776-6c62-4657-a6d4-71ef99842021-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"cf94f776-6c62-4657-a6d4-71ef99842021\") " pod="openstack/nova-metadata-0" Jan 29 08:17:52 crc kubenswrapper[4826]: I0129 08:17:52.962486 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rlb7\" (UniqueName: \"kubernetes.io/projected/cf94f776-6c62-4657-a6d4-71ef99842021-kube-api-access-8rlb7\") pod \"nova-metadata-0\" (UID: \"cf94f776-6c62-4657-a6d4-71ef99842021\") " pod="openstack/nova-metadata-0" Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 08:17:53.064879 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf94f776-6c62-4657-a6d4-71ef99842021-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cf94f776-6c62-4657-a6d4-71ef99842021\") " pod="openstack/nova-metadata-0" Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 08:17:53.064939 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf94f776-6c62-4657-a6d4-71ef99842021-logs\") pod \"nova-metadata-0\" (UID: \"cf94f776-6c62-4657-a6d4-71ef99842021\") " pod="openstack/nova-metadata-0" Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 08:17:53.065067 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf94f776-6c62-4657-a6d4-71ef99842021-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cf94f776-6c62-4657-a6d4-71ef99842021\") " pod="openstack/nova-metadata-0" Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 08:17:53.065177 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rlb7\" (UniqueName: \"kubernetes.io/projected/cf94f776-6c62-4657-a6d4-71ef99842021-kube-api-access-8rlb7\") pod \"nova-metadata-0\" (UID: \"cf94f776-6c62-4657-a6d4-71ef99842021\") " pod="openstack/nova-metadata-0" Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 
08:17:53.065229 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf94f776-6c62-4657-a6d4-71ef99842021-config-data\") pod \"nova-metadata-0\" (UID: \"cf94f776-6c62-4657-a6d4-71ef99842021\") " pod="openstack/nova-metadata-0"
Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 08:17:53.066127 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf94f776-6c62-4657-a6d4-71ef99842021-logs\") pod \"nova-metadata-0\" (UID: \"cf94f776-6c62-4657-a6d4-71ef99842021\") " pod="openstack/nova-metadata-0"
Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 08:17:53.070417 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf94f776-6c62-4657-a6d4-71ef99842021-config-data\") pod \"nova-metadata-0\" (UID: \"cf94f776-6c62-4657-a6d4-71ef99842021\") " pod="openstack/nova-metadata-0"
Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 08:17:53.071124 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf94f776-6c62-4657-a6d4-71ef99842021-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cf94f776-6c62-4657-a6d4-71ef99842021\") " pod="openstack/nova-metadata-0"
Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 08:17:53.071544 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf94f776-6c62-4657-a6d4-71ef99842021-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cf94f776-6c62-4657-a6d4-71ef99842021\") " pod="openstack/nova-metadata-0"
Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 08:17:53.085682 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rlb7\" (UniqueName: \"kubernetes.io/projected/cf94f776-6c62-4657-a6d4-71ef99842021-kube-api-access-8rlb7\") pod \"nova-metadata-0\" (UID: \"cf94f776-6c62-4657-a6d4-71ef99842021\") " pod="openstack/nova-metadata-0"
Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 08:17:53.107059 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5xxjk" event={"ID":"8a397374-869e-4bb1-9432-5d96bb65411a","Type":"ContainerDied","Data":"9884e26a6e9e6c626cb3ff1922494a4196b96edc144fc1ca055d8253861cb9d3"}
Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 08:17:53.107113 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5xxjk"
Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 08:17:53.107134 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9884e26a6e9e6c626cb3ff1922494a4196b96edc144fc1ca055d8253861cb9d3"
Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 08:17:53.155118 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 08:17:53.284364 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 08:17:53.287084 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 08:17:53.295754 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 08:17:53.313484 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 08:17:53.373585 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/133f3eaf-ea6b-4214-b61d-ba8573bc20f8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"133f3eaf-ea6b-4214-b61d-ba8573bc20f8\") " pod="openstack/nova-cell1-conductor-0"
Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 08:17:53.373694 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm4bz\" (UniqueName: \"kubernetes.io/projected/133f3eaf-ea6b-4214-b61d-ba8573bc20f8-kube-api-access-gm4bz\") pod \"nova-cell1-conductor-0\" (UID: \"133f3eaf-ea6b-4214-b61d-ba8573bc20f8\") " pod="openstack/nova-cell1-conductor-0"
Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 08:17:53.373744 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/133f3eaf-ea6b-4214-b61d-ba8573bc20f8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"133f3eaf-ea6b-4214-b61d-ba8573bc20f8\") " pod="openstack/nova-cell1-conductor-0"
Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 08:17:53.465760 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7lbbq"
Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 08:17:53.483415 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/133f3eaf-ea6b-4214-b61d-ba8573bc20f8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"133f3eaf-ea6b-4214-b61d-ba8573bc20f8\") " pod="openstack/nova-cell1-conductor-0"
Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 08:17:53.483618 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/133f3eaf-ea6b-4214-b61d-ba8573bc20f8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"133f3eaf-ea6b-4214-b61d-ba8573bc20f8\") " pod="openstack/nova-cell1-conductor-0"
Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 08:17:53.483781 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm4bz\" (UniqueName: \"kubernetes.io/projected/133f3eaf-ea6b-4214-b61d-ba8573bc20f8-kube-api-access-gm4bz\") pod \"nova-cell1-conductor-0\" (UID: \"133f3eaf-ea6b-4214-b61d-ba8573bc20f8\") " pod="openstack/nova-cell1-conductor-0"
Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 08:17:53.489613 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/133f3eaf-ea6b-4214-b61d-ba8573bc20f8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"133f3eaf-ea6b-4214-b61d-ba8573bc20f8\") " pod="openstack/nova-cell1-conductor-0"
Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 08:17:53.490164 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/133f3eaf-ea6b-4214-b61d-ba8573bc20f8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"133f3eaf-ea6b-4214-b61d-ba8573bc20f8\") " pod="openstack/nova-cell1-conductor-0"
Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 08:17:53.506581 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm4bz\" (UniqueName: \"kubernetes.io/projected/133f3eaf-ea6b-4214-b61d-ba8573bc20f8-kube-api-access-gm4bz\") pod \"nova-cell1-conductor-0\" (UID: \"133f3eaf-ea6b-4214-b61d-ba8573bc20f8\") " pod="openstack/nova-cell1-conductor-0"
Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 08:17:53.585034 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c22f0edc-d56e-4aaa-b9f4-ec8394a3d880-scripts\") pod \"c22f0edc-d56e-4aaa-b9f4-ec8394a3d880\" (UID: \"c22f0edc-d56e-4aaa-b9f4-ec8394a3d880\") "
Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 08:17:53.585766 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c22f0edc-d56e-4aaa-b9f4-ec8394a3d880-config-data\") pod \"c22f0edc-d56e-4aaa-b9f4-ec8394a3d880\" (UID: \"c22f0edc-d56e-4aaa-b9f4-ec8394a3d880\") "
Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 08:17:53.585896 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89c7c\" (UniqueName: \"kubernetes.io/projected/c22f0edc-d56e-4aaa-b9f4-ec8394a3d880-kube-api-access-89c7c\") pod \"c22f0edc-d56e-4aaa-b9f4-ec8394a3d880\" (UID: \"c22f0edc-d56e-4aaa-b9f4-ec8394a3d880\") "
Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 08:17:53.585939 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c22f0edc-d56e-4aaa-b9f4-ec8394a3d880-combined-ca-bundle\") pod \"c22f0edc-d56e-4aaa-b9f4-ec8394a3d880\" (UID: \"c22f0edc-d56e-4aaa-b9f4-ec8394a3d880\") "
Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 08:17:53.587945 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c22f0edc-d56e-4aaa-b9f4-ec8394a3d880-scripts" (OuterVolumeSpecName: "scripts") pod "c22f0edc-d56e-4aaa-b9f4-ec8394a3d880" (UID: "c22f0edc-d56e-4aaa-b9f4-ec8394a3d880"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 08:17:53.589576 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c22f0edc-d56e-4aaa-b9f4-ec8394a3d880-kube-api-access-89c7c" (OuterVolumeSpecName: "kube-api-access-89c7c") pod "c22f0edc-d56e-4aaa-b9f4-ec8394a3d880" (UID: "c22f0edc-d56e-4aaa-b9f4-ec8394a3d880"). InnerVolumeSpecName "kube-api-access-89c7c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 08:17:53.610522 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c22f0edc-d56e-4aaa-b9f4-ec8394a3d880-config-data" (OuterVolumeSpecName: "config-data") pod "c22f0edc-d56e-4aaa-b9f4-ec8394a3d880" (UID: "c22f0edc-d56e-4aaa-b9f4-ec8394a3d880"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 08:17:53.620447 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c22f0edc-d56e-4aaa-b9f4-ec8394a3d880-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c22f0edc-d56e-4aaa-b9f4-ec8394a3d880" (UID: "c22f0edc-d56e-4aaa-b9f4-ec8394a3d880"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 08:17:53.624261 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 08:17:53.691666 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89c7c\" (UniqueName: \"kubernetes.io/projected/c22f0edc-d56e-4aaa-b9f4-ec8394a3d880-kube-api-access-89c7c\") on node \"crc\" DevicePath \"\""
Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 08:17:53.691697 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c22f0edc-d56e-4aaa-b9f4-ec8394a3d880-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 08:17:53.691707 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c22f0edc-d56e-4aaa-b9f4-ec8394a3d880-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 08:17:53.691715 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c22f0edc-d56e-4aaa-b9f4-ec8394a3d880-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 08:17:53 crc kubenswrapper[4826]: I0129 08:17:53.717798 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 29 08:17:53 crc kubenswrapper[4826]: W0129 08:17:53.720781 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf94f776_6c62_4657_a6d4_71ef99842021.slice/crio-4971cdfb2e9bb572970ff7fe74e983ab4a1547b166a41a8a98446fec09d2f05d WatchSource:0}: Error finding container 4971cdfb2e9bb572970ff7fe74e983ab4a1547b166a41a8a98446fec09d2f05d: Status 404 returned error can't find the container with id 4971cdfb2e9bb572970ff7fe74e983ab4a1547b166a41a8a98446fec09d2f05d
Jan 29 08:17:54 crc kubenswrapper[4826]: I0129 08:17:54.080896 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 29 08:17:54 crc kubenswrapper[4826]: W0129 08:17:54.091233 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod133f3eaf_ea6b_4214_b61d_ba8573bc20f8.slice/crio-38bf14aa3d0d942cd02063973a9f2624958c2e05e64545afdff2037d20404d4f WatchSource:0}: Error finding container 38bf14aa3d0d942cd02063973a9f2624958c2e05e64545afdff2037d20404d4f: Status 404 returned error can't find the container with id 38bf14aa3d0d942cd02063973a9f2624958c2e05e64545afdff2037d20404d4f
Jan 29 08:17:54 crc kubenswrapper[4826]: I0129 08:17:54.115462 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"133f3eaf-ea6b-4214-b61d-ba8573bc20f8","Type":"ContainerStarted","Data":"38bf14aa3d0d942cd02063973a9f2624958c2e05e64545afdff2037d20404d4f"}
Jan 29 08:17:54 crc kubenswrapper[4826]: I0129 08:17:54.117501 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cf94f776-6c62-4657-a6d4-71ef99842021","Type":"ContainerStarted","Data":"2f1827772defa3524eb32a3dd360c43e15031291dba55ba3f59cf3f0ee0b00cc"}
Jan 29 08:17:54 crc kubenswrapper[4826]: I0129 08:17:54.117582 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cf94f776-6c62-4657-a6d4-71ef99842021","Type":"ContainerStarted","Data":"b033d6e3473cb012fbae779e9f9e288b6afef74941e443b161f35ebdfb502426"}
Jan 29 08:17:54 crc kubenswrapper[4826]: I0129 08:17:54.117603 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cf94f776-6c62-4657-a6d4-71ef99842021","Type":"ContainerStarted","Data":"4971cdfb2e9bb572970ff7fe74e983ab4a1547b166a41a8a98446fec09d2f05d"}
Jan 29 08:17:54 crc kubenswrapper[4826]: I0129 08:17:54.119583 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7lbbq" event={"ID":"c22f0edc-d56e-4aaa-b9f4-ec8394a3d880","Type":"ContainerDied","Data":"5b18efa85bc6b99e13fc4ff290b92f6a16f6500e851eff2e3d91dbd79bd390ea"}
Jan 29 08:17:54 crc kubenswrapper[4826]: I0129 08:17:54.119634 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b18efa85bc6b99e13fc4ff290b92f6a16f6500e851eff2e3d91dbd79bd390ea"
Jan 29 08:17:54 crc kubenswrapper[4826]: I0129 08:17:54.119652 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7lbbq"
Jan 29 08:17:54 crc kubenswrapper[4826]: I0129 08:17:54.155316 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.155285226 podStartE2EDuration="2.155285226s" podCreationTimestamp="2026-01-29 08:17:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:17:54.140395274 +0000 UTC m=+5658.002188343" watchObservedRunningTime="2026-01-29 08:17:54.155285226 +0000 UTC m=+5658.017078295"
Jan 29 08:17:54 crc kubenswrapper[4826]: I0129 08:17:54.302536 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 29 08:17:54 crc kubenswrapper[4826]: I0129 08:17:54.302845 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1f3e8c12-ff7d-4915-975f-90977aa7dbeb" containerName="nova-api-log" containerID="cri-o://cea5c5aa25d8bf160f7e8c3ff99ebc83dab322c88ea6a1a70fb0e88497a8ed66" gracePeriod=30
Jan 29 08:17:54 crc kubenswrapper[4826]: I0129 08:17:54.303670 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1f3e8c12-ff7d-4915-975f-90977aa7dbeb" containerName="nova-api-api" containerID="cri-o://6b0816c2739157f4c1e105f816840e4f9fa5358e469cf5608ae3b08161b528ea" gracePeriod=30
Jan 29 08:17:54 crc kubenswrapper[4826]: I0129 08:17:54.316014 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 29 08:17:54 crc kubenswrapper[4826]: I0129 08:17:54.316366 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="83093822-df53-4202-be80-6213a6d2885c" containerName="nova-scheduler-scheduler" containerID="cri-o://2e0584da702a9c9b52903f793b5a6159a94287e81ffad7609ccc3e49914da4a5" gracePeriod=30
Jan 29 08:17:54 crc kubenswrapper[4826]: I0129 08:17:54.348151 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 29 08:17:54 crc kubenswrapper[4826]: I0129 08:17:54.920542 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.028189 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f3e8c12-ff7d-4915-975f-90977aa7dbeb-config-data\") pod \"1f3e8c12-ff7d-4915-975f-90977aa7dbeb\" (UID: \"1f3e8c12-ff7d-4915-975f-90977aa7dbeb\") "
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.028591 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f3e8c12-ff7d-4915-975f-90977aa7dbeb-logs\") pod \"1f3e8c12-ff7d-4915-975f-90977aa7dbeb\" (UID: \"1f3e8c12-ff7d-4915-975f-90977aa7dbeb\") "
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.028670 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hln6t\" (UniqueName: \"kubernetes.io/projected/1f3e8c12-ff7d-4915-975f-90977aa7dbeb-kube-api-access-hln6t\") pod \"1f3e8c12-ff7d-4915-975f-90977aa7dbeb\" (UID: \"1f3e8c12-ff7d-4915-975f-90977aa7dbeb\") "
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.028706 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f3e8c12-ff7d-4915-975f-90977aa7dbeb-combined-ca-bundle\") pod \"1f3e8c12-ff7d-4915-975f-90977aa7dbeb\" (UID: \"1f3e8c12-ff7d-4915-975f-90977aa7dbeb\") "
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.029132 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f3e8c12-ff7d-4915-975f-90977aa7dbeb-logs" (OuterVolumeSpecName: "logs") pod "1f3e8c12-ff7d-4915-975f-90977aa7dbeb" (UID: "1f3e8c12-ff7d-4915-975f-90977aa7dbeb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.030075 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f3e8c12-ff7d-4915-975f-90977aa7dbeb-logs\") on node \"crc\" DevicePath \"\""
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.034804 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f3e8c12-ff7d-4915-975f-90977aa7dbeb-kube-api-access-hln6t" (OuterVolumeSpecName: "kube-api-access-hln6t") pod "1f3e8c12-ff7d-4915-975f-90977aa7dbeb" (UID: "1f3e8c12-ff7d-4915-975f-90977aa7dbeb"). InnerVolumeSpecName "kube-api-access-hln6t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.053510 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f3e8c12-ff7d-4915-975f-90977aa7dbeb-config-data" (OuterVolumeSpecName: "config-data") pod "1f3e8c12-ff7d-4915-975f-90977aa7dbeb" (UID: "1f3e8c12-ff7d-4915-975f-90977aa7dbeb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.054729 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f3e8c12-ff7d-4915-975f-90977aa7dbeb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f3e8c12-ff7d-4915-975f-90977aa7dbeb" (UID: "1f3e8c12-ff7d-4915-975f-90977aa7dbeb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.131842 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f3e8c12-ff7d-4915-975f-90977aa7dbeb-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.131885 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hln6t\" (UniqueName: \"kubernetes.io/projected/1f3e8c12-ff7d-4915-975f-90977aa7dbeb-kube-api-access-hln6t\") on node \"crc\" DevicePath \"\""
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.131904 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f3e8c12-ff7d-4915-975f-90977aa7dbeb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.132583 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"133f3eaf-ea6b-4214-b61d-ba8573bc20f8","Type":"ContainerStarted","Data":"4683903cfe2d99ef8f3df76bab53bd4cc76d6cb88af95f2a0d466f0b12211fd2"}
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.132655 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.134834 4826 generic.go:334] "Generic (PLEG): container finished" podID="1f3e8c12-ff7d-4915-975f-90977aa7dbeb" containerID="6b0816c2739157f4c1e105f816840e4f9fa5358e469cf5608ae3b08161b528ea" exitCode=0
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.134866 4826 generic.go:334] "Generic (PLEG): container finished" podID="1f3e8c12-ff7d-4915-975f-90977aa7dbeb" containerID="cea5c5aa25d8bf160f7e8c3ff99ebc83dab322c88ea6a1a70fb0e88497a8ed66" exitCode=143
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.135594 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.142388 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1f3e8c12-ff7d-4915-975f-90977aa7dbeb","Type":"ContainerDied","Data":"6b0816c2739157f4c1e105f816840e4f9fa5358e469cf5608ae3b08161b528ea"}
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.142432 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1f3e8c12-ff7d-4915-975f-90977aa7dbeb","Type":"ContainerDied","Data":"cea5c5aa25d8bf160f7e8c3ff99ebc83dab322c88ea6a1a70fb0e88497a8ed66"}
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.142447 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1f3e8c12-ff7d-4915-975f-90977aa7dbeb","Type":"ContainerDied","Data":"d1af96f5e548b9a8a47ce0f83b05d4ea10977791d21241703b830bc95f911462"}
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.142465 4826 scope.go:117] "RemoveContainer" containerID="6b0816c2739157f4c1e105f816840e4f9fa5358e469cf5608ae3b08161b528ea"
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.171885 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.171864756 podStartE2EDuration="2.171864756s" podCreationTimestamp="2026-01-29 08:17:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:17:55.168821865 +0000 UTC m=+5659.030614944" watchObservedRunningTime="2026-01-29 08:17:55.171864756 +0000 UTC m=+5659.033657825"
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.198281 4826 scope.go:117] "RemoveContainer" containerID="cea5c5aa25d8bf160f7e8c3ff99ebc83dab322c88ea6a1a70fb0e88497a8ed66"
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.202168 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.211565 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.236788 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 29 08:17:55 crc kubenswrapper[4826]: E0129 08:17:55.237164 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f3e8c12-ff7d-4915-975f-90977aa7dbeb" containerName="nova-api-api"
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.237182 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f3e8c12-ff7d-4915-975f-90977aa7dbeb" containerName="nova-api-api"
Jan 29 08:17:55 crc kubenswrapper[4826]: E0129 08:17:55.237211 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f3e8c12-ff7d-4915-975f-90977aa7dbeb" containerName="nova-api-log"
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.237218 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f3e8c12-ff7d-4915-975f-90977aa7dbeb" containerName="nova-api-log"
Jan 29 08:17:55 crc kubenswrapper[4826]: E0129 08:17:55.237232 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c22f0edc-d56e-4aaa-b9f4-ec8394a3d880" containerName="nova-manage"
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.237238 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="c22f0edc-d56e-4aaa-b9f4-ec8394a3d880" containerName="nova-manage"
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.237399 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f3e8c12-ff7d-4915-975f-90977aa7dbeb" containerName="nova-api-log"
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.237421 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f3e8c12-ff7d-4915-975f-90977aa7dbeb" containerName="nova-api-api"
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.237438 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="c22f0edc-d56e-4aaa-b9f4-ec8394a3d880" containerName="nova-manage"
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.238409 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.240167 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.247153 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.256096 4826 scope.go:117] "RemoveContainer" containerID="6b0816c2739157f4c1e105f816840e4f9fa5358e469cf5608ae3b08161b528ea"
Jan 29 08:17:55 crc kubenswrapper[4826]: E0129 08:17:55.256680 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b0816c2739157f4c1e105f816840e4f9fa5358e469cf5608ae3b08161b528ea\": container with ID starting with 6b0816c2739157f4c1e105f816840e4f9fa5358e469cf5608ae3b08161b528ea not found: ID does not exist" containerID="6b0816c2739157f4c1e105f816840e4f9fa5358e469cf5608ae3b08161b528ea"
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.256708 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b0816c2739157f4c1e105f816840e4f9fa5358e469cf5608ae3b08161b528ea"} err="failed to get container status \"6b0816c2739157f4c1e105f816840e4f9fa5358e469cf5608ae3b08161b528ea\": rpc error: code = NotFound desc = could not find container \"6b0816c2739157f4c1e105f816840e4f9fa5358e469cf5608ae3b08161b528ea\": container with ID starting with 6b0816c2739157f4c1e105f816840e4f9fa5358e469cf5608ae3b08161b528ea not found: ID does not exist"
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.256727 4826 scope.go:117] "RemoveContainer" containerID="cea5c5aa25d8bf160f7e8c3ff99ebc83dab322c88ea6a1a70fb0e88497a8ed66"
Jan 29 08:17:55 crc kubenswrapper[4826]: E0129 08:17:55.257002 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cea5c5aa25d8bf160f7e8c3ff99ebc83dab322c88ea6a1a70fb0e88497a8ed66\": container with ID starting with cea5c5aa25d8bf160f7e8c3ff99ebc83dab322c88ea6a1a70fb0e88497a8ed66 not found: ID does not exist" containerID="cea5c5aa25d8bf160f7e8c3ff99ebc83dab322c88ea6a1a70fb0e88497a8ed66"
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.257023 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cea5c5aa25d8bf160f7e8c3ff99ebc83dab322c88ea6a1a70fb0e88497a8ed66"} err="failed to get container status \"cea5c5aa25d8bf160f7e8c3ff99ebc83dab322c88ea6a1a70fb0e88497a8ed66\": rpc error: code = NotFound desc = could not find container \"cea5c5aa25d8bf160f7e8c3ff99ebc83dab322c88ea6a1a70fb0e88497a8ed66\": container with ID starting with cea5c5aa25d8bf160f7e8c3ff99ebc83dab322c88ea6a1a70fb0e88497a8ed66 not found: ID does not exist"
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.257036 4826 scope.go:117] "RemoveContainer" containerID="6b0816c2739157f4c1e105f816840e4f9fa5358e469cf5608ae3b08161b528ea"
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.257235 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b0816c2739157f4c1e105f816840e4f9fa5358e469cf5608ae3b08161b528ea"} err="failed to get container status \"6b0816c2739157f4c1e105f816840e4f9fa5358e469cf5608ae3b08161b528ea\": rpc error: code = NotFound desc = could not find container \"6b0816c2739157f4c1e105f816840e4f9fa5358e469cf5608ae3b08161b528ea\": container with ID starting with 6b0816c2739157f4c1e105f816840e4f9fa5358e469cf5608ae3b08161b528ea not found: ID does not exist"
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.257253 4826 scope.go:117] "RemoveContainer" containerID="cea5c5aa25d8bf160f7e8c3ff99ebc83dab322c88ea6a1a70fb0e88497a8ed66"
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.257512 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cea5c5aa25d8bf160f7e8c3ff99ebc83dab322c88ea6a1a70fb0e88497a8ed66"} err="failed to get container status \"cea5c5aa25d8bf160f7e8c3ff99ebc83dab322c88ea6a1a70fb0e88497a8ed66\": rpc error: code = NotFound desc = could not find container \"cea5c5aa25d8bf160f7e8c3ff99ebc83dab322c88ea6a1a70fb0e88497a8ed66\": container with ID starting with cea5c5aa25d8bf160f7e8c3ff99ebc83dab322c88ea6a1a70fb0e88497a8ed66 not found: ID does not exist"
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.334722 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcjw4\" (UniqueName: \"kubernetes.io/projected/deb07f33-e9ca-47ec-ad41-65d1756c84af-kube-api-access-tcjw4\") pod \"nova-api-0\" (UID: \"deb07f33-e9ca-47ec-ad41-65d1756c84af\") " pod="openstack/nova-api-0"
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.334776 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deb07f33-e9ca-47ec-ad41-65d1756c84af-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"deb07f33-e9ca-47ec-ad41-65d1756c84af\") " pod="openstack/nova-api-0"
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.334889 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deb07f33-e9ca-47ec-ad41-65d1756c84af-logs\") pod \"nova-api-0\" (UID: \"deb07f33-e9ca-47ec-ad41-65d1756c84af\") " pod="openstack/nova-api-0"
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.334934 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deb07f33-e9ca-47ec-ad41-65d1756c84af-config-data\") pod \"nova-api-0\" (UID: \"deb07f33-e9ca-47ec-ad41-65d1756c84af\") " pod="openstack/nova-api-0"
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.436603 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcjw4\" (UniqueName: \"kubernetes.io/projected/deb07f33-e9ca-47ec-ad41-65d1756c84af-kube-api-access-tcjw4\") pod \"nova-api-0\" (UID: \"deb07f33-e9ca-47ec-ad41-65d1756c84af\") " pod="openstack/nova-api-0"
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.436664 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deb07f33-e9ca-47ec-ad41-65d1756c84af-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"deb07f33-e9ca-47ec-ad41-65d1756c84af\") " pod="openstack/nova-api-0"
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.436749 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deb07f33-e9ca-47ec-ad41-65d1756c84af-logs\") pod \"nova-api-0\" (UID: \"deb07f33-e9ca-47ec-ad41-65d1756c84af\") " pod="openstack/nova-api-0"
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.436784 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deb07f33-e9ca-47ec-ad41-65d1756c84af-config-data\") pod \"nova-api-0\" (UID: \"deb07f33-e9ca-47ec-ad41-65d1756c84af\") " pod="openstack/nova-api-0"
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.437481 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deb07f33-e9ca-47ec-ad41-65d1756c84af-logs\") pod \"nova-api-0\" (UID: \"deb07f33-e9ca-47ec-ad41-65d1756c84af\") " pod="openstack/nova-api-0"
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.441006 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deb07f33-e9ca-47ec-ad41-65d1756c84af-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"deb07f33-e9ca-47ec-ad41-65d1756c84af\") " pod="openstack/nova-api-0"
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.454284 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcjw4\" (UniqueName: \"kubernetes.io/projected/deb07f33-e9ca-47ec-ad41-65d1756c84af-kube-api-access-tcjw4\") pod \"nova-api-0\" (UID: \"deb07f33-e9ca-47ec-ad41-65d1756c84af\") " pod="openstack/nova-api-0"
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.456590 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deb07f33-e9ca-47ec-ad41-65d1756c84af-config-data\") pod \"nova-api-0\" (UID: \"deb07f33-e9ca-47ec-ad41-65d1756c84af\") " pod="openstack/nova-api-0"
Jan 29 08:17:55 crc kubenswrapper[4826]: I0129 08:17:55.553591 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 29 08:17:56 crc kubenswrapper[4826]: W0129 08:17:56.117203 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddeb07f33_e9ca_47ec_ad41_65d1756c84af.slice/crio-9cc77bc148d95324e8d86a78c1affbee2d41e5b7e1f5816c305667f17230114e WatchSource:0}: Error finding container 9cc77bc148d95324e8d86a78c1affbee2d41e5b7e1f5816c305667f17230114e: Status 404 returned error can't find the container with id 9cc77bc148d95324e8d86a78c1affbee2d41e5b7e1f5816c305667f17230114e
Jan 29 08:17:56 crc kubenswrapper[4826]: I0129 08:17:56.119197 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 29 08:17:56 crc kubenswrapper[4826]: I0129 08:17:56.151182 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"deb07f33-e9ca-47ec-ad41-65d1756c84af","Type":"ContainerStarted","Data":"9cc77bc148d95324e8d86a78c1affbee2d41e5b7e1f5816c305667f17230114e"}
Jan 29 08:17:56 crc kubenswrapper[4826]: I0129 08:17:56.151378 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="cf94f776-6c62-4657-a6d4-71ef99842021" containerName="nova-metadata-log" containerID="cri-o://b033d6e3473cb012fbae779e9f9e288b6afef74941e443b161f35ebdfb502426" gracePeriod=30
Jan 29 08:17:56 crc kubenswrapper[4826]: I0129 08:17:56.151471 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="cf94f776-6c62-4657-a6d4-71ef99842021" containerName="nova-metadata-metadata" containerID="cri-o://2f1827772defa3524eb32a3dd360c43e15031291dba55ba3f59cf3f0ee0b00cc" gracePeriod=30
Jan 29 08:17:56 crc kubenswrapper[4826]: I0129 08:17:56.707601 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 29 08:17:56 crc kubenswrapper[4826]: I0129 08:17:56.767015 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf94f776-6c62-4657-a6d4-71ef99842021-nova-metadata-tls-certs\") pod \"cf94f776-6c62-4657-a6d4-71ef99842021\" (UID: \"cf94f776-6c62-4657-a6d4-71ef99842021\") "
Jan 29 08:17:56 crc kubenswrapper[4826]: I0129 08:17:56.767206 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rlb7\" (UniqueName: \"kubernetes.io/projected/cf94f776-6c62-4657-a6d4-71ef99842021-kube-api-access-8rlb7\") pod \"cf94f776-6c62-4657-a6d4-71ef99842021\" (UID: \"cf94f776-6c62-4657-a6d4-71ef99842021\") "
Jan 29 08:17:56 crc kubenswrapper[4826]: I0129 08:17:56.767289 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf94f776-6c62-4657-a6d4-71ef99842021-config-data\") pod \"cf94f776-6c62-4657-a6d4-71ef99842021\" (UID: \"cf94f776-6c62-4657-a6d4-71ef99842021\") "
Jan 29 08:17:56 crc kubenswrapper[4826]: I0129 08:17:56.767350 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf94f776-6c62-4657-a6d4-71ef99842021-logs\") pod \"cf94f776-6c62-4657-a6d4-71ef99842021\" (UID: \"cf94f776-6c62-4657-a6d4-71ef99842021\") "
Jan 29 08:17:56 crc kubenswrapper[4826]: I0129 08:17:56.767397 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf94f776-6c62-4657-a6d4-71ef99842021-combined-ca-bundle\") pod \"cf94f776-6c62-4657-a6d4-71ef99842021\" (UID: \"cf94f776-6c62-4657-a6d4-71ef99842021\") "
Jan 29 08:17:56 crc kubenswrapper[4826]: I0129 08:17:56.773196 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume
"kubernetes.io/empty-dir/cf94f776-6c62-4657-a6d4-71ef99842021-logs" (OuterVolumeSpecName: "logs") pod "cf94f776-6c62-4657-a6d4-71ef99842021" (UID: "cf94f776-6c62-4657-a6d4-71ef99842021"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:17:56 crc kubenswrapper[4826]: I0129 08:17:56.782567 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf94f776-6c62-4657-a6d4-71ef99842021-kube-api-access-8rlb7" (OuterVolumeSpecName: "kube-api-access-8rlb7") pod "cf94f776-6c62-4657-a6d4-71ef99842021" (UID: "cf94f776-6c62-4657-a6d4-71ef99842021"). InnerVolumeSpecName "kube-api-access-8rlb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:17:56 crc kubenswrapper[4826]: I0129 08:17:56.786494 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d6c5877d9-p6q7z" Jan 29 08:17:56 crc kubenswrapper[4826]: I0129 08:17:56.794884 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf94f776-6c62-4657-a6d4-71ef99842021-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf94f776-6c62-4657-a6d4-71ef99842021" (UID: "cf94f776-6c62-4657-a6d4-71ef99842021"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:17:56 crc kubenswrapper[4826]: I0129 08:17:56.827260 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f3e8c12-ff7d-4915-975f-90977aa7dbeb" path="/var/lib/kubelet/pods/1f3e8c12-ff7d-4915-975f-90977aa7dbeb/volumes" Jan 29 08:17:56 crc kubenswrapper[4826]: I0129 08:17:56.848919 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67c6cdcd5c-cqq7r"] Jan 29 08:17:56 crc kubenswrapper[4826]: I0129 08:17:56.849193 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67c6cdcd5c-cqq7r" podUID="599a1081-272d-49b0-a081-bb866479e81b" containerName="dnsmasq-dns" containerID="cri-o://d452642237e7a695c7c2c5d4eeaed39970754da1e6aed17d01d9607f997100ba" gracePeriod=10 Jan 29 08:17:56 crc kubenswrapper[4826]: I0129 08:17:56.859069 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf94f776-6c62-4657-a6d4-71ef99842021-config-data" (OuterVolumeSpecName: "config-data") pod "cf94f776-6c62-4657-a6d4-71ef99842021" (UID: "cf94f776-6c62-4657-a6d4-71ef99842021"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:17:56 crc kubenswrapper[4826]: I0129 08:17:56.861496 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf94f776-6c62-4657-a6d4-71ef99842021-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "cf94f776-6c62-4657-a6d4-71ef99842021" (UID: "cf94f776-6c62-4657-a6d4-71ef99842021"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:17:56 crc kubenswrapper[4826]: I0129 08:17:56.871363 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf94f776-6c62-4657-a6d4-71ef99842021-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:56 crc kubenswrapper[4826]: I0129 08:17:56.871394 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf94f776-6c62-4657-a6d4-71ef99842021-logs\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:56 crc kubenswrapper[4826]: I0129 08:17:56.871405 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf94f776-6c62-4657-a6d4-71ef99842021-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:56 crc kubenswrapper[4826]: I0129 08:17:56.871416 4826 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf94f776-6c62-4657-a6d4-71ef99842021-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:56 crc kubenswrapper[4826]: I0129 08:17:56.871426 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rlb7\" (UniqueName: \"kubernetes.io/projected/cf94f776-6c62-4657-a6d4-71ef99842021-kube-api-access-8rlb7\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.164019 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"deb07f33-e9ca-47ec-ad41-65d1756c84af","Type":"ContainerStarted","Data":"cc4cf9868c809eb79fea3d9df7b5fae3c1a0bd4a32970bb9e8dd16deb3d0ed93"} Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.164068 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"deb07f33-e9ca-47ec-ad41-65d1756c84af","Type":"ContainerStarted","Data":"c94a81169418473cd450d5b7c9db0f7d0ece2c5d56f648b9867b4bd0ff9fec4d"} Jan 29 
08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.167872 4826 generic.go:334] "Generic (PLEG): container finished" podID="cf94f776-6c62-4657-a6d4-71ef99842021" containerID="2f1827772defa3524eb32a3dd360c43e15031291dba55ba3f59cf3f0ee0b00cc" exitCode=0 Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.167899 4826 generic.go:334] "Generic (PLEG): container finished" podID="cf94f776-6c62-4657-a6d4-71ef99842021" containerID="b033d6e3473cb012fbae779e9f9e288b6afef74941e443b161f35ebdfb502426" exitCode=143 Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.167927 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cf94f776-6c62-4657-a6d4-71ef99842021","Type":"ContainerDied","Data":"2f1827772defa3524eb32a3dd360c43e15031291dba55ba3f59cf3f0ee0b00cc"} Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.167949 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.167979 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cf94f776-6c62-4657-a6d4-71ef99842021","Type":"ContainerDied","Data":"b033d6e3473cb012fbae779e9f9e288b6afef74941e443b161f35ebdfb502426"} Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.167999 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cf94f776-6c62-4657-a6d4-71ef99842021","Type":"ContainerDied","Data":"4971cdfb2e9bb572970ff7fe74e983ab4a1547b166a41a8a98446fec09d2f05d"} Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.168022 4826 scope.go:117] "RemoveContainer" containerID="2f1827772defa3524eb32a3dd360c43e15031291dba55ba3f59cf3f0ee0b00cc" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.172199 4826 generic.go:334] "Generic (PLEG): container finished" podID="599a1081-272d-49b0-a081-bb866479e81b" 
containerID="d452642237e7a695c7c2c5d4eeaed39970754da1e6aed17d01d9607f997100ba" exitCode=0 Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.172256 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67c6cdcd5c-cqq7r" event={"ID":"599a1081-272d-49b0-a081-bb866479e81b","Type":"ContainerDied","Data":"d452642237e7a695c7c2c5d4eeaed39970754da1e6aed17d01d9607f997100ba"} Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.187380 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.187361269 podStartE2EDuration="2.187361269s" podCreationTimestamp="2026-01-29 08:17:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:17:57.177125109 +0000 UTC m=+5661.038918208" watchObservedRunningTime="2026-01-29 08:17:57.187361269 +0000 UTC m=+5661.049154338" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.200627 4826 scope.go:117] "RemoveContainer" containerID="b033d6e3473cb012fbae779e9f9e288b6afef74941e443b161f35ebdfb502426" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.226184 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.245342 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.260959 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:17:57 crc kubenswrapper[4826]: E0129 08:17:57.261413 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf94f776-6c62-4657-a6d4-71ef99842021" containerName="nova-metadata-log" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.261431 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf94f776-6c62-4657-a6d4-71ef99842021" containerName="nova-metadata-log" Jan 29 08:17:57 
crc kubenswrapper[4826]: E0129 08:17:57.261447 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf94f776-6c62-4657-a6d4-71ef99842021" containerName="nova-metadata-metadata" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.261456 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf94f776-6c62-4657-a6d4-71ef99842021" containerName="nova-metadata-metadata" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.261769 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf94f776-6c62-4657-a6d4-71ef99842021" containerName="nova-metadata-log" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.261786 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf94f776-6c62-4657-a6d4-71ef99842021" containerName="nova-metadata-metadata" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.262776 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.264969 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.265278 4826 scope.go:117] "RemoveContainer" containerID="2f1827772defa3524eb32a3dd360c43e15031291dba55ba3f59cf3f0ee0b00cc" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.265336 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 08:17:57 crc kubenswrapper[4826]: E0129 08:17:57.266382 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f1827772defa3524eb32a3dd360c43e15031291dba55ba3f59cf3f0ee0b00cc\": container with ID starting with 2f1827772defa3524eb32a3dd360c43e15031291dba55ba3f59cf3f0ee0b00cc not found: ID does not exist" containerID="2f1827772defa3524eb32a3dd360c43e15031291dba55ba3f59cf3f0ee0b00cc" Jan 29 08:17:57 crc 
kubenswrapper[4826]: I0129 08:17:57.266415 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f1827772defa3524eb32a3dd360c43e15031291dba55ba3f59cf3f0ee0b00cc"} err="failed to get container status \"2f1827772defa3524eb32a3dd360c43e15031291dba55ba3f59cf3f0ee0b00cc\": rpc error: code = NotFound desc = could not find container \"2f1827772defa3524eb32a3dd360c43e15031291dba55ba3f59cf3f0ee0b00cc\": container with ID starting with 2f1827772defa3524eb32a3dd360c43e15031291dba55ba3f59cf3f0ee0b00cc not found: ID does not exist" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.266439 4826 scope.go:117] "RemoveContainer" containerID="b033d6e3473cb012fbae779e9f9e288b6afef74941e443b161f35ebdfb502426" Jan 29 08:17:57 crc kubenswrapper[4826]: E0129 08:17:57.266928 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b033d6e3473cb012fbae779e9f9e288b6afef74941e443b161f35ebdfb502426\": container with ID starting with b033d6e3473cb012fbae779e9f9e288b6afef74941e443b161f35ebdfb502426 not found: ID does not exist" containerID="b033d6e3473cb012fbae779e9f9e288b6afef74941e443b161f35ebdfb502426" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.266948 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b033d6e3473cb012fbae779e9f9e288b6afef74941e443b161f35ebdfb502426"} err="failed to get container status \"b033d6e3473cb012fbae779e9f9e288b6afef74941e443b161f35ebdfb502426\": rpc error: code = NotFound desc = could not find container \"b033d6e3473cb012fbae779e9f9e288b6afef74941e443b161f35ebdfb502426\": container with ID starting with b033d6e3473cb012fbae779e9f9e288b6afef74941e443b161f35ebdfb502426 not found: ID does not exist" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.266960 4826 scope.go:117] "RemoveContainer" containerID="2f1827772defa3524eb32a3dd360c43e15031291dba55ba3f59cf3f0ee0b00cc" Jan 29 
08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.269493 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.270826 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f1827772defa3524eb32a3dd360c43e15031291dba55ba3f59cf3f0ee0b00cc"} err="failed to get container status \"2f1827772defa3524eb32a3dd360c43e15031291dba55ba3f59cf3f0ee0b00cc\": rpc error: code = NotFound desc = could not find container \"2f1827772defa3524eb32a3dd360c43e15031291dba55ba3f59cf3f0ee0b00cc\": container with ID starting with 2f1827772defa3524eb32a3dd360c43e15031291dba55ba3f59cf3f0ee0b00cc not found: ID does not exist" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.270864 4826 scope.go:117] "RemoveContainer" containerID="b033d6e3473cb012fbae779e9f9e288b6afef74941e443b161f35ebdfb502426" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.272475 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b033d6e3473cb012fbae779e9f9e288b6afef74941e443b161f35ebdfb502426"} err="failed to get container status \"b033d6e3473cb012fbae779e9f9e288b6afef74941e443b161f35ebdfb502426\": rpc error: code = NotFound desc = could not find container \"b033d6e3473cb012fbae779e9f9e288b6afef74941e443b161f35ebdfb502426\": container with ID starting with b033d6e3473cb012fbae779e9f9e288b6afef74941e443b161f35ebdfb502426 not found: ID does not exist" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.330515 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67c6cdcd5c-cqq7r" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.381273 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/599a1081-272d-49b0-a081-bb866479e81b-config\") pod \"599a1081-272d-49b0-a081-bb866479e81b\" (UID: \"599a1081-272d-49b0-a081-bb866479e81b\") " Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.381357 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/599a1081-272d-49b0-a081-bb866479e81b-dns-svc\") pod \"599a1081-272d-49b0-a081-bb866479e81b\" (UID: \"599a1081-272d-49b0-a081-bb866479e81b\") " Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.381391 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkvg8\" (UniqueName: \"kubernetes.io/projected/599a1081-272d-49b0-a081-bb866479e81b-kube-api-access-bkvg8\") pod \"599a1081-272d-49b0-a081-bb866479e81b\" (UID: \"599a1081-272d-49b0-a081-bb866479e81b\") " Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.381456 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/599a1081-272d-49b0-a081-bb866479e81b-ovsdbserver-nb\") pod \"599a1081-272d-49b0-a081-bb866479e81b\" (UID: \"599a1081-272d-49b0-a081-bb866479e81b\") " Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.381529 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/599a1081-272d-49b0-a081-bb866479e81b-ovsdbserver-sb\") pod \"599a1081-272d-49b0-a081-bb866479e81b\" (UID: \"599a1081-272d-49b0-a081-bb866479e81b\") " Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.382355 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef-logs\") pod \"nova-metadata-0\" (UID: \"0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef\") " pod="openstack/nova-metadata-0" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.382473 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef\") " pod="openstack/nova-metadata-0" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.382538 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2phd\" (UniqueName: \"kubernetes.io/projected/0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef-kube-api-access-b2phd\") pod \"nova-metadata-0\" (UID: \"0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef\") " pod="openstack/nova-metadata-0" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.382569 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef\") " pod="openstack/nova-metadata-0" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.382700 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef-config-data\") pod \"nova-metadata-0\" (UID: \"0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef\") " pod="openstack/nova-metadata-0" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.387915 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/599a1081-272d-49b0-a081-bb866479e81b-kube-api-access-bkvg8" (OuterVolumeSpecName: 
"kube-api-access-bkvg8") pod "599a1081-272d-49b0-a081-bb866479e81b" (UID: "599a1081-272d-49b0-a081-bb866479e81b"). InnerVolumeSpecName "kube-api-access-bkvg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.447584 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/599a1081-272d-49b0-a081-bb866479e81b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "599a1081-272d-49b0-a081-bb866479e81b" (UID: "599a1081-272d-49b0-a081-bb866479e81b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.450717 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/599a1081-272d-49b0-a081-bb866479e81b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "599a1081-272d-49b0-a081-bb866479e81b" (UID: "599a1081-272d-49b0-a081-bb866479e81b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.459584 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/599a1081-272d-49b0-a081-bb866479e81b-config" (OuterVolumeSpecName: "config") pod "599a1081-272d-49b0-a081-bb866479e81b" (UID: "599a1081-272d-49b0-a081-bb866479e81b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.461317 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/599a1081-272d-49b0-a081-bb866479e81b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "599a1081-272d-49b0-a081-bb866479e81b" (UID: "599a1081-272d-49b0-a081-bb866479e81b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.485526 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2phd\" (UniqueName: \"kubernetes.io/projected/0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef-kube-api-access-b2phd\") pod \"nova-metadata-0\" (UID: \"0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef\") " pod="openstack/nova-metadata-0" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.485582 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef\") " pod="openstack/nova-metadata-0" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.485629 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef-config-data\") pod \"nova-metadata-0\" (UID: \"0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef\") " pod="openstack/nova-metadata-0" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.485703 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef-logs\") pod \"nova-metadata-0\" (UID: \"0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef\") " pod="openstack/nova-metadata-0" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.485789 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef\") " pod="openstack/nova-metadata-0" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.485866 4826 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/599a1081-272d-49b0-a081-bb866479e81b-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.485876 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/599a1081-272d-49b0-a081-bb866479e81b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.485886 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkvg8\" (UniqueName: \"kubernetes.io/projected/599a1081-272d-49b0-a081-bb866479e81b-kube-api-access-bkvg8\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.485896 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/599a1081-272d-49b0-a081-bb866479e81b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.485905 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/599a1081-272d-49b0-a081-bb866479e81b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.486920 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef-logs\") pod \"nova-metadata-0\" (UID: \"0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef\") " pod="openstack/nova-metadata-0" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.490135 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef\") " pod="openstack/nova-metadata-0" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.490779 4826 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef\") " pod="openstack/nova-metadata-0" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.492469 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef-config-data\") pod \"nova-metadata-0\" (UID: \"0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef\") " pod="openstack/nova-metadata-0" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.501716 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2phd\" (UniqueName: \"kubernetes.io/projected/0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef-kube-api-access-b2phd\") pod \"nova-metadata-0\" (UID: \"0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef\") " pod="openstack/nova-metadata-0" Jan 29 08:17:57 crc kubenswrapper[4826]: I0129 08:17:57.639505 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 08:17:58 crc kubenswrapper[4826]: I0129 08:17:58.127035 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:17:58 crc kubenswrapper[4826]: W0129 08:17:58.140716 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b2ffeff_9154_48d4_afb8_ba1d7ec2b9ef.slice/crio-ae2491eca248f5cfd1b1a03edebee64edecce4cc84ddb79d30bac534d8b358e4 WatchSource:0}: Error finding container ae2491eca248f5cfd1b1a03edebee64edecce4cc84ddb79d30bac534d8b358e4: Status 404 returned error can't find the container with id ae2491eca248f5cfd1b1a03edebee64edecce4cc84ddb79d30bac534d8b358e4 Jan 29 08:17:58 crc kubenswrapper[4826]: I0129 08:17:58.183323 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef","Type":"ContainerStarted","Data":"ae2491eca248f5cfd1b1a03edebee64edecce4cc84ddb79d30bac534d8b358e4"} Jan 29 08:17:58 crc kubenswrapper[4826]: I0129 08:17:58.185435 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67c6cdcd5c-cqq7r" event={"ID":"599a1081-272d-49b0-a081-bb866479e81b","Type":"ContainerDied","Data":"664579ab7bd34edd83b0fe81da790b04174c2cf5d35df2751ea738203649f2ca"} Jan 29 08:17:58 crc kubenswrapper[4826]: I0129 08:17:58.185499 4826 scope.go:117] "RemoveContainer" containerID="d452642237e7a695c7c2c5d4eeaed39970754da1e6aed17d01d9607f997100ba" Jan 29 08:17:58 crc kubenswrapper[4826]: I0129 08:17:58.185446 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67c6cdcd5c-cqq7r" Jan 29 08:17:58 crc kubenswrapper[4826]: I0129 08:17:58.258684 4826 scope.go:117] "RemoveContainer" containerID="697a0518edb80c3defc88deb124cec0f7e3c19c20ab4651c17b11429a6c035ae" Jan 29 08:17:58 crc kubenswrapper[4826]: I0129 08:17:58.300436 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67c6cdcd5c-cqq7r"] Jan 29 08:17:58 crc kubenswrapper[4826]: I0129 08:17:58.309786 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67c6cdcd5c-cqq7r"] Jan 29 08:17:58 crc kubenswrapper[4826]: I0129 08:17:58.824845 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="599a1081-272d-49b0-a081-bb866479e81b" path="/var/lib/kubelet/pods/599a1081-272d-49b0-a081-bb866479e81b/volumes" Jan 29 08:17:58 crc kubenswrapper[4826]: I0129 08:17:58.826270 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf94f776-6c62-4657-a6d4-71ef99842021" path="/var/lib/kubelet/pods/cf94f776-6c62-4657-a6d4-71ef99842021/volumes" Jan 29 08:17:59 crc kubenswrapper[4826]: I0129 08:17:59.202254 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef","Type":"ContainerStarted","Data":"1372d89b9e69b5bb1e19da846f4269d76f9f29d756db4d2666a887e2ea98e35b"} Jan 29 08:17:59 crc kubenswrapper[4826]: I0129 08:17:59.202571 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef","Type":"ContainerStarted","Data":"8e4e9f9864983adcbca300b38d09172f56ed0779104f88eb86d472af9cfb06c4"} Jan 29 08:17:59 crc kubenswrapper[4826]: I0129 08:17:59.225975 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.225946581 podStartE2EDuration="2.225946581s" podCreationTimestamp="2026-01-29 08:17:57 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:17:59.222696095 +0000 UTC m=+5663.084489164" watchObservedRunningTime="2026-01-29 08:17:59.225946581 +0000 UTC m=+5663.087739680" Jan 29 08:18:02 crc kubenswrapper[4826]: I0129 08:18:02.640128 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 08:18:02 crc kubenswrapper[4826]: I0129 08:18:02.640508 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 08:18:03 crc kubenswrapper[4826]: I0129 08:18:03.661645 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 29 08:18:05 crc kubenswrapper[4826]: I0129 08:18:05.554079 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 08:18:05 crc kubenswrapper[4826]: I0129 08:18:05.554524 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 08:18:05 crc kubenswrapper[4826]: I0129 08:18:05.656391 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:18:05 crc kubenswrapper[4826]: I0129 08:18:05.656455 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:18:05 crc kubenswrapper[4826]: I0129 08:18:05.656504 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-llzmh" Jan 29 08:18:05 crc kubenswrapper[4826]: I0129 08:18:05.657389 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fa58273e324a815b3443a21ebdc2b51cbff96c021a9b9a81653f9dc2de446316"} pod="openshift-machine-config-operator/machine-config-daemon-llzmh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 08:18:05 crc kubenswrapper[4826]: I0129 08:18:05.657498 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" containerID="cri-o://fa58273e324a815b3443a21ebdc2b51cbff96c021a9b9a81653f9dc2de446316" gracePeriod=600 Jan 29 08:18:06 crc kubenswrapper[4826]: I0129 08:18:06.308864 4826 generic.go:334] "Generic (PLEG): container finished" podID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerID="fa58273e324a815b3443a21ebdc2b51cbff96c021a9b9a81653f9dc2de446316" exitCode=0 Jan 29 08:18:06 crc kubenswrapper[4826]: I0129 08:18:06.308978 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerDied","Data":"fa58273e324a815b3443a21ebdc2b51cbff96c021a9b9a81653f9dc2de446316"} Jan 29 08:18:06 crc kubenswrapper[4826]: I0129 08:18:06.309184 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerStarted","Data":"952b43d896912a3cda5e612a6c0f46d793202eff6352f27fa018ee360258c570"} Jan 29 08:18:06 crc kubenswrapper[4826]: I0129 08:18:06.309204 4826 scope.go:117] "RemoveContainer" 
containerID="491d2214652be539c5a02abd82d2f7f7b125c3f1a64568b35d69e37bd575365a" Jan 29 08:18:06 crc kubenswrapper[4826]: I0129 08:18:06.636540 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="deb07f33-e9ca-47ec-ad41-65d1756c84af" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.87:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 08:18:06 crc kubenswrapper[4826]: I0129 08:18:06.637530 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="deb07f33-e9ca-47ec-ad41-65d1756c84af" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.87:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 08:18:07 crc kubenswrapper[4826]: I0129 08:18:07.639716 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 08:18:07 crc kubenswrapper[4826]: I0129 08:18:07.640114 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 08:18:08 crc kubenswrapper[4826]: I0129 08:18:08.686547 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.88:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 08:18:08 crc kubenswrapper[4826]: I0129 08:18:08.686630 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.88:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 08:18:16 crc kubenswrapper[4826]: I0129 08:18:16.637663 4826 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/nova-api-0" podUID="deb07f33-e9ca-47ec-ad41-65d1756c84af" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.87:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 08:18:16 crc kubenswrapper[4826]: I0129 08:18:16.637696 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="deb07f33-e9ca-47ec-ad41-65d1756c84af" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.87:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 08:18:18 crc kubenswrapper[4826]: I0129 08:18:18.647540 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.88:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 08:18:18 crc kubenswrapper[4826]: I0129 08:18:18.647587 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.88:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 08:18:21 crc kubenswrapper[4826]: I0129 08:18:21.537835 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:18:21 crc kubenswrapper[4826]: I0129 08:18:21.539396 4826 generic.go:334] "Generic (PLEG): container finished" podID="4dd0c0ba-5266-4691-84af-b801d0ae00b3" containerID="b81e97a8516a0a8678d4de3e999c0190d42e75c132f51c8c0e4809ccbfea3152" exitCode=137 Jan 29 08:18:21 crc kubenswrapper[4826]: I0129 08:18:21.539442 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4dd0c0ba-5266-4691-84af-b801d0ae00b3","Type":"ContainerDied","Data":"b81e97a8516a0a8678d4de3e999c0190d42e75c132f51c8c0e4809ccbfea3152"} Jan 29 08:18:21 crc kubenswrapper[4826]: I0129 08:18:21.539468 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4dd0c0ba-5266-4691-84af-b801d0ae00b3","Type":"ContainerDied","Data":"6267e58a3ee0c75f98cfcf033d9cb38093c21d897e076548358946149f3f9f62"} Jan 29 08:18:21 crc kubenswrapper[4826]: I0129 08:18:21.539486 4826 scope.go:117] "RemoveContainer" containerID="b81e97a8516a0a8678d4de3e999c0190d42e75c132f51c8c0e4809ccbfea3152" Jan 29 08:18:21 crc kubenswrapper[4826]: I0129 08:18:21.574474 4826 scope.go:117] "RemoveContainer" containerID="b81e97a8516a0a8678d4de3e999c0190d42e75c132f51c8c0e4809ccbfea3152" Jan 29 08:18:21 crc kubenswrapper[4826]: E0129 08:18:21.576883 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b81e97a8516a0a8678d4de3e999c0190d42e75c132f51c8c0e4809ccbfea3152\": container with ID starting with b81e97a8516a0a8678d4de3e999c0190d42e75c132f51c8c0e4809ccbfea3152 not found: ID does not exist" containerID="b81e97a8516a0a8678d4de3e999c0190d42e75c132f51c8c0e4809ccbfea3152" Jan 29 08:18:21 crc kubenswrapper[4826]: I0129 08:18:21.576942 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b81e97a8516a0a8678d4de3e999c0190d42e75c132f51c8c0e4809ccbfea3152"} 
err="failed to get container status \"b81e97a8516a0a8678d4de3e999c0190d42e75c132f51c8c0e4809ccbfea3152\": rpc error: code = NotFound desc = could not find container \"b81e97a8516a0a8678d4de3e999c0190d42e75c132f51c8c0e4809ccbfea3152\": container with ID starting with b81e97a8516a0a8678d4de3e999c0190d42e75c132f51c8c0e4809ccbfea3152 not found: ID does not exist" Jan 29 08:18:21 crc kubenswrapper[4826]: I0129 08:18:21.686506 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dd0c0ba-5266-4691-84af-b801d0ae00b3-config-data\") pod \"4dd0c0ba-5266-4691-84af-b801d0ae00b3\" (UID: \"4dd0c0ba-5266-4691-84af-b801d0ae00b3\") " Jan 29 08:18:21 crc kubenswrapper[4826]: I0129 08:18:21.686683 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd0c0ba-5266-4691-84af-b801d0ae00b3-combined-ca-bundle\") pod \"4dd0c0ba-5266-4691-84af-b801d0ae00b3\" (UID: \"4dd0c0ba-5266-4691-84af-b801d0ae00b3\") " Jan 29 08:18:21 crc kubenswrapper[4826]: I0129 08:18:21.686839 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4dh8\" (UniqueName: \"kubernetes.io/projected/4dd0c0ba-5266-4691-84af-b801d0ae00b3-kube-api-access-m4dh8\") pod \"4dd0c0ba-5266-4691-84af-b801d0ae00b3\" (UID: \"4dd0c0ba-5266-4691-84af-b801d0ae00b3\") " Jan 29 08:18:21 crc kubenswrapper[4826]: I0129 08:18:21.692085 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dd0c0ba-5266-4691-84af-b801d0ae00b3-kube-api-access-m4dh8" (OuterVolumeSpecName: "kube-api-access-m4dh8") pod "4dd0c0ba-5266-4691-84af-b801d0ae00b3" (UID: "4dd0c0ba-5266-4691-84af-b801d0ae00b3"). InnerVolumeSpecName "kube-api-access-m4dh8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:18:21 crc kubenswrapper[4826]: I0129 08:18:21.714969 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dd0c0ba-5266-4691-84af-b801d0ae00b3-config-data" (OuterVolumeSpecName: "config-data") pod "4dd0c0ba-5266-4691-84af-b801d0ae00b3" (UID: "4dd0c0ba-5266-4691-84af-b801d0ae00b3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:18:21 crc kubenswrapper[4826]: I0129 08:18:21.716346 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dd0c0ba-5266-4691-84af-b801d0ae00b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4dd0c0ba-5266-4691-84af-b801d0ae00b3" (UID: "4dd0c0ba-5266-4691-84af-b801d0ae00b3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:18:21 crc kubenswrapper[4826]: I0129 08:18:21.789888 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dd0c0ba-5266-4691-84af-b801d0ae00b3-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:18:21 crc kubenswrapper[4826]: I0129 08:18:21.789940 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dd0c0ba-5266-4691-84af-b801d0ae00b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:18:21 crc kubenswrapper[4826]: I0129 08:18:21.789960 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4dh8\" (UniqueName: \"kubernetes.io/projected/4dd0c0ba-5266-4691-84af-b801d0ae00b3-kube-api-access-m4dh8\") on node \"crc\" DevicePath \"\"" Jan 29 08:18:22 crc kubenswrapper[4826]: I0129 08:18:22.555112 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:18:22 crc kubenswrapper[4826]: I0129 08:18:22.657840 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 08:18:22 crc kubenswrapper[4826]: I0129 08:18:22.668989 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 08:18:22 crc kubenswrapper[4826]: I0129 08:18:22.677205 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 08:18:22 crc kubenswrapper[4826]: E0129 08:18:22.678882 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="599a1081-272d-49b0-a081-bb866479e81b" containerName="init" Jan 29 08:18:22 crc kubenswrapper[4826]: I0129 08:18:22.679047 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="599a1081-272d-49b0-a081-bb866479e81b" containerName="init" Jan 29 08:18:22 crc kubenswrapper[4826]: E0129 08:18:22.679184 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dd0c0ba-5266-4691-84af-b801d0ae00b3" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 08:18:22 crc kubenswrapper[4826]: I0129 08:18:22.679353 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dd0c0ba-5266-4691-84af-b801d0ae00b3" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 08:18:22 crc kubenswrapper[4826]: E0129 08:18:22.679484 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="599a1081-272d-49b0-a081-bb866479e81b" containerName="dnsmasq-dns" Jan 29 08:18:22 crc kubenswrapper[4826]: I0129 08:18:22.679580 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="599a1081-272d-49b0-a081-bb866479e81b" containerName="dnsmasq-dns" Jan 29 08:18:22 crc kubenswrapper[4826]: I0129 08:18:22.679964 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dd0c0ba-5266-4691-84af-b801d0ae00b3" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 08:18:22 crc kubenswrapper[4826]: 
I0129 08:18:22.680081 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="599a1081-272d-49b0-a081-bb866479e81b" containerName="dnsmasq-dns" Jan 29 08:18:22 crc kubenswrapper[4826]: I0129 08:18:22.681750 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:18:22 crc kubenswrapper[4826]: I0129 08:18:22.684576 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 29 08:18:22 crc kubenswrapper[4826]: I0129 08:18:22.685283 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 29 08:18:22 crc kubenswrapper[4826]: I0129 08:18:22.685706 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 29 08:18:22 crc kubenswrapper[4826]: I0129 08:18:22.693206 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 08:18:22 crc kubenswrapper[4826]: I0129 08:18:22.819513 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/392179e6-1366-45dd-9742-d270beeefad6-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"392179e6-1366-45dd-9742-d270beeefad6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:18:22 crc kubenswrapper[4826]: I0129 08:18:22.819553 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gts5\" (UniqueName: \"kubernetes.io/projected/392179e6-1366-45dd-9742-d270beeefad6-kube-api-access-7gts5\") pod \"nova-cell1-novncproxy-0\" (UID: \"392179e6-1366-45dd-9742-d270beeefad6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:18:22 crc kubenswrapper[4826]: I0129 08:18:22.819582 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/392179e6-1366-45dd-9742-d270beeefad6-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"392179e6-1366-45dd-9742-d270beeefad6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:18:22 crc kubenswrapper[4826]: I0129 08:18:22.819786 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/392179e6-1366-45dd-9742-d270beeefad6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"392179e6-1366-45dd-9742-d270beeefad6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:18:22 crc kubenswrapper[4826]: I0129 08:18:22.819949 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/392179e6-1366-45dd-9742-d270beeefad6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"392179e6-1366-45dd-9742-d270beeefad6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:18:22 crc kubenswrapper[4826]: I0129 08:18:22.821658 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dd0c0ba-5266-4691-84af-b801d0ae00b3" path="/var/lib/kubelet/pods/4dd0c0ba-5266-4691-84af-b801d0ae00b3/volumes" Jan 29 08:18:22 crc kubenswrapper[4826]: I0129 08:18:22.922137 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gts5\" (UniqueName: \"kubernetes.io/projected/392179e6-1366-45dd-9742-d270beeefad6-kube-api-access-7gts5\") pod \"nova-cell1-novncproxy-0\" (UID: \"392179e6-1366-45dd-9742-d270beeefad6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:18:22 crc kubenswrapper[4826]: I0129 08:18:22.922187 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/392179e6-1366-45dd-9742-d270beeefad6-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"392179e6-1366-45dd-9742-d270beeefad6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:18:22 crc kubenswrapper[4826]: I0129 08:18:22.922215 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/392179e6-1366-45dd-9742-d270beeefad6-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"392179e6-1366-45dd-9742-d270beeefad6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:18:22 crc kubenswrapper[4826]: I0129 08:18:22.922340 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/392179e6-1366-45dd-9742-d270beeefad6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"392179e6-1366-45dd-9742-d270beeefad6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:18:22 crc kubenswrapper[4826]: I0129 08:18:22.922461 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/392179e6-1366-45dd-9742-d270beeefad6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"392179e6-1366-45dd-9742-d270beeefad6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:18:22 crc kubenswrapper[4826]: I0129 08:18:22.926761 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/392179e6-1366-45dd-9742-d270beeefad6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"392179e6-1366-45dd-9742-d270beeefad6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:18:22 crc kubenswrapper[4826]: I0129 08:18:22.927143 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/392179e6-1366-45dd-9742-d270beeefad6-config-data\") pod \"nova-cell1-novncproxy-0\" 
(UID: \"392179e6-1366-45dd-9742-d270beeefad6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:18:22 crc kubenswrapper[4826]: I0129 08:18:22.927797 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/392179e6-1366-45dd-9742-d270beeefad6-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"392179e6-1366-45dd-9742-d270beeefad6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:18:22 crc kubenswrapper[4826]: I0129 08:18:22.936737 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/392179e6-1366-45dd-9742-d270beeefad6-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"392179e6-1366-45dd-9742-d270beeefad6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:18:22 crc kubenswrapper[4826]: I0129 08:18:22.957654 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gts5\" (UniqueName: \"kubernetes.io/projected/392179e6-1366-45dd-9742-d270beeefad6-kube-api-access-7gts5\") pod \"nova-cell1-novncproxy-0\" (UID: \"392179e6-1366-45dd-9742-d270beeefad6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:18:23 crc kubenswrapper[4826]: I0129 08:18:23.019500 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:18:23 crc kubenswrapper[4826]: I0129 08:18:23.517623 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 08:18:23 crc kubenswrapper[4826]: I0129 08:18:23.585802 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"392179e6-1366-45dd-9742-d270beeefad6","Type":"ContainerStarted","Data":"bae1bb5e8d367500ca457a660872fbe144e566cf78750623841535d373e8fae1"} Jan 29 08:18:24 crc kubenswrapper[4826]: I0129 08:18:24.612952 4826 generic.go:334] "Generic (PLEG): container finished" podID="83093822-df53-4202-be80-6213a6d2885c" containerID="2e0584da702a9c9b52903f793b5a6159a94287e81ffad7609ccc3e49914da4a5" exitCode=137 Jan 29 08:18:24 crc kubenswrapper[4826]: I0129 08:18:24.613143 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"83093822-df53-4202-be80-6213a6d2885c","Type":"ContainerDied","Data":"2e0584da702a9c9b52903f793b5a6159a94287e81ffad7609ccc3e49914da4a5"} Jan 29 08:18:24 crc kubenswrapper[4826]: I0129 08:18:24.620530 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"392179e6-1366-45dd-9742-d270beeefad6","Type":"ContainerStarted","Data":"f780402cb4bb8e9a10880953a8f4960d415fa635a950fe9dfeddd434ea13c78c"} Jan 29 08:18:24 crc kubenswrapper[4826]: I0129 08:18:24.657106 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.657079128 podStartE2EDuration="2.657079128s" podCreationTimestamp="2026-01-29 08:18:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:18:24.646204982 +0000 UTC m=+5688.507998101" watchObservedRunningTime="2026-01-29 08:18:24.657079128 +0000 UTC m=+5688.518872207" Jan 29 08:18:24 crc 
kubenswrapper[4826]: I0129 08:18:24.823289 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 08:18:24 crc kubenswrapper[4826]: I0129 08:18:24.969813 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83093822-df53-4202-be80-6213a6d2885c-combined-ca-bundle\") pod \"83093822-df53-4202-be80-6213a6d2885c\" (UID: \"83093822-df53-4202-be80-6213a6d2885c\") " Jan 29 08:18:24 crc kubenswrapper[4826]: I0129 08:18:24.970785 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqbcf\" (UniqueName: \"kubernetes.io/projected/83093822-df53-4202-be80-6213a6d2885c-kube-api-access-sqbcf\") pod \"83093822-df53-4202-be80-6213a6d2885c\" (UID: \"83093822-df53-4202-be80-6213a6d2885c\") " Jan 29 08:18:24 crc kubenswrapper[4826]: I0129 08:18:24.970827 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83093822-df53-4202-be80-6213a6d2885c-config-data\") pod \"83093822-df53-4202-be80-6213a6d2885c\" (UID: \"83093822-df53-4202-be80-6213a6d2885c\") " Jan 29 08:18:24 crc kubenswrapper[4826]: I0129 08:18:24.975625 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83093822-df53-4202-be80-6213a6d2885c-kube-api-access-sqbcf" (OuterVolumeSpecName: "kube-api-access-sqbcf") pod "83093822-df53-4202-be80-6213a6d2885c" (UID: "83093822-df53-4202-be80-6213a6d2885c"). InnerVolumeSpecName "kube-api-access-sqbcf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:18:25 crc kubenswrapper[4826]: I0129 08:18:25.002270 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83093822-df53-4202-be80-6213a6d2885c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83093822-df53-4202-be80-6213a6d2885c" (UID: "83093822-df53-4202-be80-6213a6d2885c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:18:25 crc kubenswrapper[4826]: I0129 08:18:25.023761 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83093822-df53-4202-be80-6213a6d2885c-config-data" (OuterVolumeSpecName: "config-data") pod "83093822-df53-4202-be80-6213a6d2885c" (UID: "83093822-df53-4202-be80-6213a6d2885c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:18:25 crc kubenswrapper[4826]: I0129 08:18:25.073627 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83093822-df53-4202-be80-6213a6d2885c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:18:25 crc kubenswrapper[4826]: I0129 08:18:25.073680 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqbcf\" (UniqueName: \"kubernetes.io/projected/83093822-df53-4202-be80-6213a6d2885c-kube-api-access-sqbcf\") on node \"crc\" DevicePath \"\"" Jan 29 08:18:25 crc kubenswrapper[4826]: I0129 08:18:25.073694 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83093822-df53-4202-be80-6213a6d2885c-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:18:25 crc kubenswrapper[4826]: I0129 08:18:25.554284 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 08:18:25 crc kubenswrapper[4826]: I0129 08:18:25.554662 4826 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 08:18:25 crc kubenswrapper[4826]: I0129 08:18:25.634609 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"83093822-df53-4202-be80-6213a6d2885c","Type":"ContainerDied","Data":"e4c005f58d46c72b968f9f7c275479f6718ca773e75f6db41ce5867313fb7005"} Jan 29 08:18:25 crc kubenswrapper[4826]: I0129 08:18:25.634684 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 08:18:25 crc kubenswrapper[4826]: I0129 08:18:25.634685 4826 scope.go:117] "RemoveContainer" containerID="2e0584da702a9c9b52903f793b5a6159a94287e81ffad7609ccc3e49914da4a5" Jan 29 08:18:25 crc kubenswrapper[4826]: I0129 08:18:25.722445 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 08:18:25 crc kubenswrapper[4826]: I0129 08:18:25.740223 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 08:18:25 crc kubenswrapper[4826]: I0129 08:18:25.747936 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 08:18:25 crc kubenswrapper[4826]: E0129 08:18:25.748826 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83093822-df53-4202-be80-6213a6d2885c" containerName="nova-scheduler-scheduler" Jan 29 08:18:25 crc kubenswrapper[4826]: I0129 08:18:25.748865 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="83093822-df53-4202-be80-6213a6d2885c" containerName="nova-scheduler-scheduler" Jan 29 08:18:25 crc kubenswrapper[4826]: I0129 08:18:25.749209 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="83093822-df53-4202-be80-6213a6d2885c" containerName="nova-scheduler-scheduler" Jan 29 08:18:25 crc kubenswrapper[4826]: I0129 08:18:25.750458 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 08:18:25 crc kubenswrapper[4826]: I0129 08:18:25.757015 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 08:18:25 crc kubenswrapper[4826]: I0129 08:18:25.757883 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 08:18:25 crc kubenswrapper[4826]: I0129 08:18:25.891809 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2547ad0-130d-4111-b34c-51a4a9b09ac6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b2547ad0-130d-4111-b34c-51a4a9b09ac6\") " pod="openstack/nova-scheduler-0" Jan 29 08:18:25 crc kubenswrapper[4826]: I0129 08:18:25.891990 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2547ad0-130d-4111-b34c-51a4a9b09ac6-config-data\") pod \"nova-scheduler-0\" (UID: \"b2547ad0-130d-4111-b34c-51a4a9b09ac6\") " pod="openstack/nova-scheduler-0" Jan 29 08:18:25 crc kubenswrapper[4826]: I0129 08:18:25.892085 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdhzp\" (UniqueName: \"kubernetes.io/projected/b2547ad0-130d-4111-b34c-51a4a9b09ac6-kube-api-access-fdhzp\") pod \"nova-scheduler-0\" (UID: \"b2547ad0-130d-4111-b34c-51a4a9b09ac6\") " pod="openstack/nova-scheduler-0" Jan 29 08:18:25 crc kubenswrapper[4826]: I0129 08:18:25.995479 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2547ad0-130d-4111-b34c-51a4a9b09ac6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b2547ad0-130d-4111-b34c-51a4a9b09ac6\") " pod="openstack/nova-scheduler-0" Jan 29 08:18:25 crc kubenswrapper[4826]: I0129 08:18:25.995598 4826 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2547ad0-130d-4111-b34c-51a4a9b09ac6-config-data\") pod \"nova-scheduler-0\" (UID: \"b2547ad0-130d-4111-b34c-51a4a9b09ac6\") " pod="openstack/nova-scheduler-0" Jan 29 08:18:25 crc kubenswrapper[4826]: I0129 08:18:25.995662 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdhzp\" (UniqueName: \"kubernetes.io/projected/b2547ad0-130d-4111-b34c-51a4a9b09ac6-kube-api-access-fdhzp\") pod \"nova-scheduler-0\" (UID: \"b2547ad0-130d-4111-b34c-51a4a9b09ac6\") " pod="openstack/nova-scheduler-0" Jan 29 08:18:26 crc kubenswrapper[4826]: I0129 08:18:26.001444 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2547ad0-130d-4111-b34c-51a4a9b09ac6-config-data\") pod \"nova-scheduler-0\" (UID: \"b2547ad0-130d-4111-b34c-51a4a9b09ac6\") " pod="openstack/nova-scheduler-0" Jan 29 08:18:26 crc kubenswrapper[4826]: I0129 08:18:26.015116 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2547ad0-130d-4111-b34c-51a4a9b09ac6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b2547ad0-130d-4111-b34c-51a4a9b09ac6\") " pod="openstack/nova-scheduler-0" Jan 29 08:18:26 crc kubenswrapper[4826]: I0129 08:18:26.015776 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdhzp\" (UniqueName: \"kubernetes.io/projected/b2547ad0-130d-4111-b34c-51a4a9b09ac6-kube-api-access-fdhzp\") pod \"nova-scheduler-0\" (UID: \"b2547ad0-130d-4111-b34c-51a4a9b09ac6\") " pod="openstack/nova-scheduler-0" Jan 29 08:18:26 crc kubenswrapper[4826]: I0129 08:18:26.080659 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 08:18:26 crc kubenswrapper[4826]: I0129 08:18:26.600940 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 08:18:26 crc kubenswrapper[4826]: I0129 08:18:26.639458 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="deb07f33-e9ca-47ec-ad41-65d1756c84af" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.87:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 08:18:26 crc kubenswrapper[4826]: I0129 08:18:26.639509 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="deb07f33-e9ca-47ec-ad41-65d1756c84af" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.87:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 08:18:26 crc kubenswrapper[4826]: I0129 08:18:26.660651 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b2547ad0-130d-4111-b34c-51a4a9b09ac6","Type":"ContainerStarted","Data":"c31fda4e051a8025ffc4e1d27096112534189aa1a7978f3a11fdbf1dac03c5ce"} Jan 29 08:18:26 crc kubenswrapper[4826]: I0129 08:18:26.826461 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83093822-df53-4202-be80-6213a6d2885c" path="/var/lib/kubelet/pods/83093822-df53-4202-be80-6213a6d2885c/volumes" Jan 29 08:18:27 crc kubenswrapper[4826]: I0129 08:18:27.698558 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b2547ad0-130d-4111-b34c-51a4a9b09ac6","Type":"ContainerStarted","Data":"5d461da3d786665c809598fef79b0ac6980b60d9f864835176d27a986b650834"} Jan 29 08:18:27 crc kubenswrapper[4826]: I0129 08:18:27.732461 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.7324370719999997 
podStartE2EDuration="2.732437072s" podCreationTimestamp="2026-01-29 08:18:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:18:27.718262528 +0000 UTC m=+5691.580055637" watchObservedRunningTime="2026-01-29 08:18:27.732437072 +0000 UTC m=+5691.594230151" Jan 29 08:18:28 crc kubenswrapper[4826]: I0129 08:18:28.020398 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:18:28 crc kubenswrapper[4826]: I0129 08:18:28.649434 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.88:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 08:18:28 crc kubenswrapper[4826]: I0129 08:18:28.649517 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.88:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 08:18:31 crc kubenswrapper[4826]: I0129 08:18:31.081517 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 08:18:33 crc kubenswrapper[4826]: I0129 08:18:33.019900 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:18:33 crc kubenswrapper[4826]: I0129 08:18:33.045431 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:18:33 crc kubenswrapper[4826]: I0129 08:18:33.788898 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 29 08:18:33 crc 
kubenswrapper[4826]: I0129 08:18:33.980440 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-zw526"] Jan 29 08:18:33 crc kubenswrapper[4826]: I0129 08:18:33.982166 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-zw526" Jan 29 08:18:33 crc kubenswrapper[4826]: I0129 08:18:33.984473 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 29 08:18:33 crc kubenswrapper[4826]: I0129 08:18:33.985565 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 29 08:18:34 crc kubenswrapper[4826]: I0129 08:18:34.001855 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-zw526"] Jan 29 08:18:34 crc kubenswrapper[4826]: I0129 08:18:34.078426 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d64cc12e-e9ea-48e4-9f37-525fe27b1b6f-config-data\") pod \"nova-cell1-cell-mapping-zw526\" (UID: \"d64cc12e-e9ea-48e4-9f37-525fe27b1b6f\") " pod="openstack/nova-cell1-cell-mapping-zw526" Jan 29 08:18:34 crc kubenswrapper[4826]: I0129 08:18:34.078670 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhzxh\" (UniqueName: \"kubernetes.io/projected/d64cc12e-e9ea-48e4-9f37-525fe27b1b6f-kube-api-access-jhzxh\") pod \"nova-cell1-cell-mapping-zw526\" (UID: \"d64cc12e-e9ea-48e4-9f37-525fe27b1b6f\") " pod="openstack/nova-cell1-cell-mapping-zw526" Jan 29 08:18:34 crc kubenswrapper[4826]: I0129 08:18:34.078862 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d64cc12e-e9ea-48e4-9f37-525fe27b1b6f-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-zw526\" (UID: 
\"d64cc12e-e9ea-48e4-9f37-525fe27b1b6f\") " pod="openstack/nova-cell1-cell-mapping-zw526" Jan 29 08:18:34 crc kubenswrapper[4826]: I0129 08:18:34.078968 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d64cc12e-e9ea-48e4-9f37-525fe27b1b6f-scripts\") pod \"nova-cell1-cell-mapping-zw526\" (UID: \"d64cc12e-e9ea-48e4-9f37-525fe27b1b6f\") " pod="openstack/nova-cell1-cell-mapping-zw526" Jan 29 08:18:34 crc kubenswrapper[4826]: I0129 08:18:34.185270 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d64cc12e-e9ea-48e4-9f37-525fe27b1b6f-config-data\") pod \"nova-cell1-cell-mapping-zw526\" (UID: \"d64cc12e-e9ea-48e4-9f37-525fe27b1b6f\") " pod="openstack/nova-cell1-cell-mapping-zw526" Jan 29 08:18:34 crc kubenswrapper[4826]: I0129 08:18:34.185507 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhzxh\" (UniqueName: \"kubernetes.io/projected/d64cc12e-e9ea-48e4-9f37-525fe27b1b6f-kube-api-access-jhzxh\") pod \"nova-cell1-cell-mapping-zw526\" (UID: \"d64cc12e-e9ea-48e4-9f37-525fe27b1b6f\") " pod="openstack/nova-cell1-cell-mapping-zw526" Jan 29 08:18:34 crc kubenswrapper[4826]: I0129 08:18:34.185692 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d64cc12e-e9ea-48e4-9f37-525fe27b1b6f-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-zw526\" (UID: \"d64cc12e-e9ea-48e4-9f37-525fe27b1b6f\") " pod="openstack/nova-cell1-cell-mapping-zw526" Jan 29 08:18:34 crc kubenswrapper[4826]: I0129 08:18:34.185805 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d64cc12e-e9ea-48e4-9f37-525fe27b1b6f-scripts\") pod \"nova-cell1-cell-mapping-zw526\" (UID: \"d64cc12e-e9ea-48e4-9f37-525fe27b1b6f\") " 
pod="openstack/nova-cell1-cell-mapping-zw526" Jan 29 08:18:34 crc kubenswrapper[4826]: I0129 08:18:34.191817 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d64cc12e-e9ea-48e4-9f37-525fe27b1b6f-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-zw526\" (UID: \"d64cc12e-e9ea-48e4-9f37-525fe27b1b6f\") " pod="openstack/nova-cell1-cell-mapping-zw526" Jan 29 08:18:34 crc kubenswrapper[4826]: I0129 08:18:34.192027 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d64cc12e-e9ea-48e4-9f37-525fe27b1b6f-config-data\") pod \"nova-cell1-cell-mapping-zw526\" (UID: \"d64cc12e-e9ea-48e4-9f37-525fe27b1b6f\") " pod="openstack/nova-cell1-cell-mapping-zw526" Jan 29 08:18:34 crc kubenswrapper[4826]: I0129 08:18:34.192958 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d64cc12e-e9ea-48e4-9f37-525fe27b1b6f-scripts\") pod \"nova-cell1-cell-mapping-zw526\" (UID: \"d64cc12e-e9ea-48e4-9f37-525fe27b1b6f\") " pod="openstack/nova-cell1-cell-mapping-zw526" Jan 29 08:18:34 crc kubenswrapper[4826]: I0129 08:18:34.213507 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhzxh\" (UniqueName: \"kubernetes.io/projected/d64cc12e-e9ea-48e4-9f37-525fe27b1b6f-kube-api-access-jhzxh\") pod \"nova-cell1-cell-mapping-zw526\" (UID: \"d64cc12e-e9ea-48e4-9f37-525fe27b1b6f\") " pod="openstack/nova-cell1-cell-mapping-zw526" Jan 29 08:18:34 crc kubenswrapper[4826]: I0129 08:18:34.309602 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-zw526" Jan 29 08:18:34 crc kubenswrapper[4826]: I0129 08:18:34.774434 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-zw526"] Jan 29 08:18:35 crc kubenswrapper[4826]: I0129 08:18:35.792894 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-zw526" event={"ID":"d64cc12e-e9ea-48e4-9f37-525fe27b1b6f","Type":"ContainerStarted","Data":"8bea7c921d13e8f6e379a3ad4ea2542f1fb47be5f9c620480efe08546287f6eb"} Jan 29 08:18:35 crc kubenswrapper[4826]: I0129 08:18:35.793225 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-zw526" event={"ID":"d64cc12e-e9ea-48e4-9f37-525fe27b1b6f","Type":"ContainerStarted","Data":"e20dda9bb9c4cb881ef5ee44e1f70726a7e31d10654318ac38606555dfde74fe"} Jan 29 08:18:35 crc kubenswrapper[4826]: I0129 08:18:35.815823 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-zw526" podStartSLOduration=2.815804878 podStartE2EDuration="2.815804878s" podCreationTimestamp="2026-01-29 08:18:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:18:35.810729694 +0000 UTC m=+5699.672522773" watchObservedRunningTime="2026-01-29 08:18:35.815804878 +0000 UTC m=+5699.677597947" Jan 29 08:18:36 crc kubenswrapper[4826]: I0129 08:18:36.081204 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 08:18:36 crc kubenswrapper[4826]: I0129 08:18:36.113874 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 08:18:36 crc kubenswrapper[4826]: I0129 08:18:36.636598 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="deb07f33-e9ca-47ec-ad41-65d1756c84af" 
containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.87:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 08:18:36 crc kubenswrapper[4826]: I0129 08:18:36.636903 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="deb07f33-e9ca-47ec-ad41-65d1756c84af" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.87:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 08:18:36 crc kubenswrapper[4826]: I0129 08:18:36.851676 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 08:18:38 crc kubenswrapper[4826]: I0129 08:18:38.649676 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.88:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 08:18:38 crc kubenswrapper[4826]: I0129 08:18:38.649788 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.88:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 08:18:39 crc kubenswrapper[4826]: I0129 08:18:39.840933 4826 generic.go:334] "Generic (PLEG): container finished" podID="d64cc12e-e9ea-48e4-9f37-525fe27b1b6f" containerID="8bea7c921d13e8f6e379a3ad4ea2542f1fb47be5f9c620480efe08546287f6eb" exitCode=0 Jan 29 08:18:39 crc kubenswrapper[4826]: I0129 08:18:39.841007 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-zw526" 
event={"ID":"d64cc12e-e9ea-48e4-9f37-525fe27b1b6f","Type":"ContainerDied","Data":"8bea7c921d13e8f6e379a3ad4ea2542f1fb47be5f9c620480efe08546287f6eb"} Jan 29 08:18:41 crc kubenswrapper[4826]: I0129 08:18:41.267898 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-zw526" Jan 29 08:18:41 crc kubenswrapper[4826]: I0129 08:18:41.346050 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d64cc12e-e9ea-48e4-9f37-525fe27b1b6f-scripts\") pod \"d64cc12e-e9ea-48e4-9f37-525fe27b1b6f\" (UID: \"d64cc12e-e9ea-48e4-9f37-525fe27b1b6f\") " Jan 29 08:18:41 crc kubenswrapper[4826]: I0129 08:18:41.349203 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d64cc12e-e9ea-48e4-9f37-525fe27b1b6f-config-data\") pod \"d64cc12e-e9ea-48e4-9f37-525fe27b1b6f\" (UID: \"d64cc12e-e9ea-48e4-9f37-525fe27b1b6f\") " Jan 29 08:18:41 crc kubenswrapper[4826]: I0129 08:18:41.349374 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d64cc12e-e9ea-48e4-9f37-525fe27b1b6f-combined-ca-bundle\") pod \"d64cc12e-e9ea-48e4-9f37-525fe27b1b6f\" (UID: \"d64cc12e-e9ea-48e4-9f37-525fe27b1b6f\") " Jan 29 08:18:41 crc kubenswrapper[4826]: I0129 08:18:41.349512 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhzxh\" (UniqueName: \"kubernetes.io/projected/d64cc12e-e9ea-48e4-9f37-525fe27b1b6f-kube-api-access-jhzxh\") pod \"d64cc12e-e9ea-48e4-9f37-525fe27b1b6f\" (UID: \"d64cc12e-e9ea-48e4-9f37-525fe27b1b6f\") " Jan 29 08:18:41 crc kubenswrapper[4826]: I0129 08:18:41.358595 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d64cc12e-e9ea-48e4-9f37-525fe27b1b6f-scripts" (OuterVolumeSpecName: "scripts") pod 
"d64cc12e-e9ea-48e4-9f37-525fe27b1b6f" (UID: "d64cc12e-e9ea-48e4-9f37-525fe27b1b6f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:18:41 crc kubenswrapper[4826]: I0129 08:18:41.358695 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d64cc12e-e9ea-48e4-9f37-525fe27b1b6f-kube-api-access-jhzxh" (OuterVolumeSpecName: "kube-api-access-jhzxh") pod "d64cc12e-e9ea-48e4-9f37-525fe27b1b6f" (UID: "d64cc12e-e9ea-48e4-9f37-525fe27b1b6f"). InnerVolumeSpecName "kube-api-access-jhzxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:18:41 crc kubenswrapper[4826]: I0129 08:18:41.383510 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d64cc12e-e9ea-48e4-9f37-525fe27b1b6f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d64cc12e-e9ea-48e4-9f37-525fe27b1b6f" (UID: "d64cc12e-e9ea-48e4-9f37-525fe27b1b6f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:18:41 crc kubenswrapper[4826]: I0129 08:18:41.384126 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d64cc12e-e9ea-48e4-9f37-525fe27b1b6f-config-data" (OuterVolumeSpecName: "config-data") pod "d64cc12e-e9ea-48e4-9f37-525fe27b1b6f" (UID: "d64cc12e-e9ea-48e4-9f37-525fe27b1b6f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:18:41 crc kubenswrapper[4826]: I0129 08:18:41.452726 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhzxh\" (UniqueName: \"kubernetes.io/projected/d64cc12e-e9ea-48e4-9f37-525fe27b1b6f-kube-api-access-jhzxh\") on node \"crc\" DevicePath \"\"" Jan 29 08:18:41 crc kubenswrapper[4826]: I0129 08:18:41.452768 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d64cc12e-e9ea-48e4-9f37-525fe27b1b6f-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:18:41 crc kubenswrapper[4826]: I0129 08:18:41.452784 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d64cc12e-e9ea-48e4-9f37-525fe27b1b6f-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:18:41 crc kubenswrapper[4826]: I0129 08:18:41.452798 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d64cc12e-e9ea-48e4-9f37-525fe27b1b6f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:18:41 crc kubenswrapper[4826]: I0129 08:18:41.872174 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-zw526" event={"ID":"d64cc12e-e9ea-48e4-9f37-525fe27b1b6f","Type":"ContainerDied","Data":"e20dda9bb9c4cb881ef5ee44e1f70726a7e31d10654318ac38606555dfde74fe"} Jan 29 08:18:41 crc kubenswrapper[4826]: I0129 08:18:41.872228 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e20dda9bb9c4cb881ef5ee44e1f70726a7e31d10654318ac38606555dfde74fe" Jan 29 08:18:41 crc kubenswrapper[4826]: I0129 08:18:41.872385 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-zw526" Jan 29 08:18:41 crc kubenswrapper[4826]: I0129 08:18:41.970992 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 08:18:41 crc kubenswrapper[4826]: I0129 08:18:41.971259 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="deb07f33-e9ca-47ec-ad41-65d1756c84af" containerName="nova-api-log" containerID="cri-o://c94a81169418473cd450d5b7c9db0f7d0ece2c5d56f648b9867b4bd0ff9fec4d" gracePeriod=30 Jan 29 08:18:41 crc kubenswrapper[4826]: I0129 08:18:41.971388 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="deb07f33-e9ca-47ec-ad41-65d1756c84af" containerName="nova-api-api" containerID="cri-o://cc4cf9868c809eb79fea3d9df7b5fae3c1a0bd4a32970bb9e8dd16deb3d0ed93" gracePeriod=30 Jan 29 08:18:42 crc kubenswrapper[4826]: I0129 08:18:42.039968 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 08:18:42 crc kubenswrapper[4826]: I0129 08:18:42.040185 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="b2547ad0-130d-4111-b34c-51a4a9b09ac6" containerName="nova-scheduler-scheduler" containerID="cri-o://5d461da3d786665c809598fef79b0ac6980b60d9f864835176d27a986b650834" gracePeriod=30 Jan 29 08:18:42 crc kubenswrapper[4826]: I0129 08:18:42.069113 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:18:42 crc kubenswrapper[4826]: I0129 08:18:42.069419 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef" containerName="nova-metadata-log" containerID="cri-o://8e4e9f9864983adcbca300b38d09172f56ed0779104f88eb86d472af9cfb06c4" gracePeriod=30 Jan 29 08:18:42 crc kubenswrapper[4826]: I0129 08:18:42.069541 4826 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef" containerName="nova-metadata-metadata" containerID="cri-o://1372d89b9e69b5bb1e19da846f4269d76f9f29d756db4d2666a887e2ea98e35b" gracePeriod=30 Jan 29 08:18:42 crc kubenswrapper[4826]: I0129 08:18:42.880027 4826 generic.go:334] "Generic (PLEG): container finished" podID="deb07f33-e9ca-47ec-ad41-65d1756c84af" containerID="c94a81169418473cd450d5b7c9db0f7d0ece2c5d56f648b9867b4bd0ff9fec4d" exitCode=143 Jan 29 08:18:42 crc kubenswrapper[4826]: I0129 08:18:42.880123 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"deb07f33-e9ca-47ec-ad41-65d1756c84af","Type":"ContainerDied","Data":"c94a81169418473cd450d5b7c9db0f7d0ece2c5d56f648b9867b4bd0ff9fec4d"} Jan 29 08:18:42 crc kubenswrapper[4826]: I0129 08:18:42.881471 4826 generic.go:334] "Generic (PLEG): container finished" podID="0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef" containerID="8e4e9f9864983adcbca300b38d09172f56ed0779104f88eb86d472af9cfb06c4" exitCode=143 Jan 29 08:18:42 crc kubenswrapper[4826]: I0129 08:18:42.881502 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef","Type":"ContainerDied","Data":"8e4e9f9864983adcbca300b38d09172f56ed0779104f88eb86d472af9cfb06c4"} Jan 29 08:18:46 crc kubenswrapper[4826]: E0129 08:18:46.083828 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5d461da3d786665c809598fef79b0ac6980b60d9f864835176d27a986b650834" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 08:18:46 crc kubenswrapper[4826]: E0129 08:18:46.085559 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" containerID="5d461da3d786665c809598fef79b0ac6980b60d9f864835176d27a986b650834" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 08:18:46 crc kubenswrapper[4826]: E0129 08:18:46.087370 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5d461da3d786665c809598fef79b0ac6980b60d9f864835176d27a986b650834" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 08:18:46 crc kubenswrapper[4826]: E0129 08:18:46.087438 4826 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="b2547ad0-130d-4111-b34c-51a4a9b09ac6" containerName="nova-scheduler-scheduler" Jan 29 08:18:48 crc kubenswrapper[4826]: I0129 08:18:48.065847 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-k84pd"] Jan 29 08:18:48 crc kubenswrapper[4826]: I0129 08:18:48.078279 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-k84pd"] Jan 29 08:18:48 crc kubenswrapper[4826]: I0129 08:18:48.829556 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ee5ef9b-b0e3-4ccf-b1cf-f06608bf006a" path="/var/lib/kubelet/pods/1ee5ef9b-b0e3-4ccf-b1cf-f06608bf006a/volumes" Jan 29 08:18:49 crc kubenswrapper[4826]: I0129 08:18:49.035934 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-0d26-account-create-update-p7z5r"] Jan 29 08:18:49 crc kubenswrapper[4826]: I0129 08:18:49.051793 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-0d26-account-create-update-p7z5r"] Jan 29 08:18:50 crc kubenswrapper[4826]: I0129 08:18:50.830612 4826 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="4a71cd30-8e95-4f1e-855a-53f82a51a03e" path="/var/lib/kubelet/pods/4a71cd30-8e95-4f1e-855a-53f82a51a03e/volumes" Jan 29 08:18:51 crc kubenswrapper[4826]: E0129 08:18:51.086230 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5d461da3d786665c809598fef79b0ac6980b60d9f864835176d27a986b650834" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 08:18:51 crc kubenswrapper[4826]: E0129 08:18:51.088896 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5d461da3d786665c809598fef79b0ac6980b60d9f864835176d27a986b650834" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 08:18:51 crc kubenswrapper[4826]: E0129 08:18:51.090380 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5d461da3d786665c809598fef79b0ac6980b60d9f864835176d27a986b650834" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 08:18:51 crc kubenswrapper[4826]: E0129 08:18:51.090444 4826 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="b2547ad0-130d-4111-b34c-51a4a9b09ac6" containerName="nova-scheduler-scheduler" Jan 29 08:18:56 crc kubenswrapper[4826]: I0129 08:18:56.072811 4826 generic.go:334] "Generic (PLEG): container finished" podID="0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef" containerID="1372d89b9e69b5bb1e19da846f4269d76f9f29d756db4d2666a887e2ea98e35b" exitCode=0 Jan 29 08:18:56 crc kubenswrapper[4826]: I0129 
08:18:56.073449 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef","Type":"ContainerDied","Data":"1372d89b9e69b5bb1e19da846f4269d76f9f29d756db4d2666a887e2ea98e35b"} Jan 29 08:18:56 crc kubenswrapper[4826]: I0129 08:18:56.073477 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef","Type":"ContainerDied","Data":"ae2491eca248f5cfd1b1a03edebee64edecce4cc84ddb79d30bac534d8b358e4"} Jan 29 08:18:56 crc kubenswrapper[4826]: I0129 08:18:56.073488 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae2491eca248f5cfd1b1a03edebee64edecce4cc84ddb79d30bac534d8b358e4" Jan 29 08:18:56 crc kubenswrapper[4826]: I0129 08:18:56.079366 4826 generic.go:334] "Generic (PLEG): container finished" podID="deb07f33-e9ca-47ec-ad41-65d1756c84af" containerID="cc4cf9868c809eb79fea3d9df7b5fae3c1a0bd4a32970bb9e8dd16deb3d0ed93" exitCode=0 Jan 29 08:18:56 crc kubenswrapper[4826]: I0129 08:18:56.079444 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"deb07f33-e9ca-47ec-ad41-65d1756c84af","Type":"ContainerDied","Data":"cc4cf9868c809eb79fea3d9df7b5fae3c1a0bd4a32970bb9e8dd16deb3d0ed93"} Jan 29 08:18:56 crc kubenswrapper[4826]: I0129 08:18:56.079468 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"deb07f33-e9ca-47ec-ad41-65d1756c84af","Type":"ContainerDied","Data":"9cc77bc148d95324e8d86a78c1affbee2d41e5b7e1f5816c305667f17230114e"} Jan 29 08:18:56 crc kubenswrapper[4826]: I0129 08:18:56.079509 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cc77bc148d95324e8d86a78c1affbee2d41e5b7e1f5816c305667f17230114e" Jan 29 08:18:56 crc kubenswrapper[4826]: I0129 08:18:56.079937 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 08:18:56 crc kubenswrapper[4826]: E0129 08:18:56.084687 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5d461da3d786665c809598fef79b0ac6980b60d9f864835176d27a986b650834" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 08:18:56 crc kubenswrapper[4826]: E0129 08:18:56.086933 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5d461da3d786665c809598fef79b0ac6980b60d9f864835176d27a986b650834" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 08:18:56 crc kubenswrapper[4826]: E0129 08:18:56.092795 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5d461da3d786665c809598fef79b0ac6980b60d9f864835176d27a986b650834" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 08:18:56 crc kubenswrapper[4826]: E0129 08:18:56.092833 4826 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="b2547ad0-130d-4111-b34c-51a4a9b09ac6" containerName="nova-scheduler-scheduler" Jan 29 08:18:56 crc kubenswrapper[4826]: I0129 08:18:56.098262 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 08:18:56 crc kubenswrapper[4826]: I0129 08:18:56.104994 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deb07f33-e9ca-47ec-ad41-65d1756c84af-config-data\") pod \"deb07f33-e9ca-47ec-ad41-65d1756c84af\" (UID: \"deb07f33-e9ca-47ec-ad41-65d1756c84af\") " Jan 29 08:18:56 crc kubenswrapper[4826]: I0129 08:18:56.105166 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deb07f33-e9ca-47ec-ad41-65d1756c84af-logs\") pod \"deb07f33-e9ca-47ec-ad41-65d1756c84af\" (UID: \"deb07f33-e9ca-47ec-ad41-65d1756c84af\") " Jan 29 08:18:56 crc kubenswrapper[4826]: I0129 08:18:56.105255 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deb07f33-e9ca-47ec-ad41-65d1756c84af-combined-ca-bundle\") pod \"deb07f33-e9ca-47ec-ad41-65d1756c84af\" (UID: \"deb07f33-e9ca-47ec-ad41-65d1756c84af\") " Jan 29 08:18:56 crc kubenswrapper[4826]: I0129 08:18:56.105352 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcjw4\" (UniqueName: \"kubernetes.io/projected/deb07f33-e9ca-47ec-ad41-65d1756c84af-kube-api-access-tcjw4\") pod \"deb07f33-e9ca-47ec-ad41-65d1756c84af\" (UID: \"deb07f33-e9ca-47ec-ad41-65d1756c84af\") " Jan 29 08:18:56 crc kubenswrapper[4826]: I0129 08:18:56.105893 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/deb07f33-e9ca-47ec-ad41-65d1756c84af-logs" (OuterVolumeSpecName: "logs") pod "deb07f33-e9ca-47ec-ad41-65d1756c84af" (UID: "deb07f33-e9ca-47ec-ad41-65d1756c84af"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:18:56 crc kubenswrapper[4826]: I0129 08:18:56.123747 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deb07f33-e9ca-47ec-ad41-65d1756c84af-kube-api-access-tcjw4" (OuterVolumeSpecName: "kube-api-access-tcjw4") pod "deb07f33-e9ca-47ec-ad41-65d1756c84af" (UID: "deb07f33-e9ca-47ec-ad41-65d1756c84af"). InnerVolumeSpecName "kube-api-access-tcjw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:18:56 crc kubenswrapper[4826]: I0129 08:18:56.140525 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deb07f33-e9ca-47ec-ad41-65d1756c84af-config-data" (OuterVolumeSpecName: "config-data") pod "deb07f33-e9ca-47ec-ad41-65d1756c84af" (UID: "deb07f33-e9ca-47ec-ad41-65d1756c84af"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:18:56 crc kubenswrapper[4826]: I0129 08:18:56.176881 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deb07f33-e9ca-47ec-ad41-65d1756c84af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "deb07f33-e9ca-47ec-ad41-65d1756c84af" (UID: "deb07f33-e9ca-47ec-ad41-65d1756c84af"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:18:56 crc kubenswrapper[4826]: I0129 08:18:56.206762 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef-logs\") pod \"0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef\" (UID: \"0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef\") " Jan 29 08:18:56 crc kubenswrapper[4826]: I0129 08:18:56.206878 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef-combined-ca-bundle\") pod \"0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef\" (UID: \"0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef\") " Jan 29 08:18:56 crc kubenswrapper[4826]: I0129 08:18:56.206968 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef-nova-metadata-tls-certs\") pod \"0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef\" (UID: \"0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef\") " Jan 29 08:18:56 crc kubenswrapper[4826]: I0129 08:18:56.207085 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2phd\" (UniqueName: \"kubernetes.io/projected/0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef-kube-api-access-b2phd\") pod \"0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef\" (UID: \"0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef\") " Jan 29 08:18:56 crc kubenswrapper[4826]: I0129 08:18:56.207129 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef-config-data\") pod \"0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef\" (UID: \"0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef\") " Jan 29 08:18:56 crc kubenswrapper[4826]: I0129 08:18:56.207475 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef-logs" (OuterVolumeSpecName: "logs") pod "0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef" (UID: "0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:18:56 crc kubenswrapper[4826]: I0129 08:18:56.207861 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deb07f33-e9ca-47ec-ad41-65d1756c84af-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:18:56 crc kubenswrapper[4826]: I0129 08:18:56.207892 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deb07f33-e9ca-47ec-ad41-65d1756c84af-logs\") on node \"crc\" DevicePath \"\"" Jan 29 08:18:56 crc kubenswrapper[4826]: I0129 08:18:56.207910 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef-logs\") on node \"crc\" DevicePath \"\"" Jan 29 08:18:56 crc kubenswrapper[4826]: I0129 08:18:56.207931 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deb07f33-e9ca-47ec-ad41-65d1756c84af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:18:56 crc kubenswrapper[4826]: I0129 08:18:56.207950 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcjw4\" (UniqueName: \"kubernetes.io/projected/deb07f33-e9ca-47ec-ad41-65d1756c84af-kube-api-access-tcjw4\") on node \"crc\" DevicePath \"\"" Jan 29 08:18:56 crc kubenswrapper[4826]: I0129 08:18:56.213133 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef-kube-api-access-b2phd" (OuterVolumeSpecName: "kube-api-access-b2phd") pod "0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef" (UID: "0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef"). InnerVolumeSpecName "kube-api-access-b2phd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:18:56 crc kubenswrapper[4826]: I0129 08:18:56.237211 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef" (UID: "0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:18:56 crc kubenswrapper[4826]: I0129 08:18:56.244543 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef-config-data" (OuterVolumeSpecName: "config-data") pod "0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef" (UID: "0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:18:56 crc kubenswrapper[4826]: I0129 08:18:56.270910 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef" (UID: "0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:18:56 crc kubenswrapper[4826]: I0129 08:18:56.310087 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:18:56 crc kubenswrapper[4826]: I0129 08:18:56.310119 4826 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 08:18:56 crc kubenswrapper[4826]: I0129 08:18:56.310129 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2phd\" (UniqueName: \"kubernetes.io/projected/0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef-kube-api-access-b2phd\") on node \"crc\" DevicePath \"\"" Jan 29 08:18:56 crc kubenswrapper[4826]: I0129 08:18:56.310138 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.089753 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.089775 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.127072 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.142149 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.160891 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.171647 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.181841 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:18:57 crc kubenswrapper[4826]: E0129 08:18:57.182383 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef" containerName="nova-metadata-log" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.182401 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef" containerName="nova-metadata-log" Jan 29 08:18:57 crc kubenswrapper[4826]: E0129 08:18:57.182429 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef" containerName="nova-metadata-metadata" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.182438 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef" containerName="nova-metadata-metadata" Jan 29 08:18:57 crc kubenswrapper[4826]: E0129 08:18:57.182455 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d64cc12e-e9ea-48e4-9f37-525fe27b1b6f" containerName="nova-manage" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.182463 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="d64cc12e-e9ea-48e4-9f37-525fe27b1b6f" containerName="nova-manage" Jan 
29 08:18:57 crc kubenswrapper[4826]: E0129 08:18:57.182473 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deb07f33-e9ca-47ec-ad41-65d1756c84af" containerName="nova-api-log" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.182480 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="deb07f33-e9ca-47ec-ad41-65d1756c84af" containerName="nova-api-log" Jan 29 08:18:57 crc kubenswrapper[4826]: E0129 08:18:57.182502 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deb07f33-e9ca-47ec-ad41-65d1756c84af" containerName="nova-api-api" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.182508 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="deb07f33-e9ca-47ec-ad41-65d1756c84af" containerName="nova-api-api" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.182745 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="deb07f33-e9ca-47ec-ad41-65d1756c84af" containerName="nova-api-api" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.182761 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef" containerName="nova-metadata-metadata" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.182776 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="d64cc12e-e9ea-48e4-9f37-525fe27b1b6f" containerName="nova-manage" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.182785 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef" containerName="nova-metadata-log" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.182795 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="deb07f33-e9ca-47ec-ad41-65d1756c84af" containerName="nova-api-log" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.183856 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.186463 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.186711 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.193010 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.198288 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.203354 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.210738 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.225516 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.228142 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca74ca5b-bc95-440c-9d55-370533d21e38-config-data\") pod \"nova-api-0\" (UID: \"ca74ca5b-bc95-440c-9d55-370533d21e38\") " pod="openstack/nova-api-0" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.228225 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ef24356-51c0-45f8-b98e-6c694ae2f61b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1ef24356-51c0-45f8-b98e-6c694ae2f61b\") " pod="openstack/nova-metadata-0" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.228275 
4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ef24356-51c0-45f8-b98e-6c694ae2f61b-config-data\") pod \"nova-metadata-0\" (UID: \"1ef24356-51c0-45f8-b98e-6c694ae2f61b\") " pod="openstack/nova-metadata-0" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.228478 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca74ca5b-bc95-440c-9d55-370533d21e38-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ca74ca5b-bc95-440c-9d55-370533d21e38\") " pod="openstack/nova-api-0" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.228640 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef24356-51c0-45f8-b98e-6c694ae2f61b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1ef24356-51c0-45f8-b98e-6c694ae2f61b\") " pod="openstack/nova-metadata-0" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.228785 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6bgz\" (UniqueName: \"kubernetes.io/projected/ca74ca5b-bc95-440c-9d55-370533d21e38-kube-api-access-b6bgz\") pod \"nova-api-0\" (UID: \"ca74ca5b-bc95-440c-9d55-370533d21e38\") " pod="openstack/nova-api-0" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.228828 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgxsh\" (UniqueName: \"kubernetes.io/projected/1ef24356-51c0-45f8-b98e-6c694ae2f61b-kube-api-access-kgxsh\") pod \"nova-metadata-0\" (UID: \"1ef24356-51c0-45f8-b98e-6c694ae2f61b\") " pod="openstack/nova-metadata-0" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.228881 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca74ca5b-bc95-440c-9d55-370533d21e38-logs\") pod \"nova-api-0\" (UID: \"ca74ca5b-bc95-440c-9d55-370533d21e38\") " pod="openstack/nova-api-0" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.229084 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ef24356-51c0-45f8-b98e-6c694ae2f61b-logs\") pod \"nova-metadata-0\" (UID: \"1ef24356-51c0-45f8-b98e-6c694ae2f61b\") " pod="openstack/nova-metadata-0" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.331057 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgxsh\" (UniqueName: \"kubernetes.io/projected/1ef24356-51c0-45f8-b98e-6c694ae2f61b-kube-api-access-kgxsh\") pod \"nova-metadata-0\" (UID: \"1ef24356-51c0-45f8-b98e-6c694ae2f61b\") " pod="openstack/nova-metadata-0" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.331123 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca74ca5b-bc95-440c-9d55-370533d21e38-logs\") pod \"nova-api-0\" (UID: \"ca74ca5b-bc95-440c-9d55-370533d21e38\") " pod="openstack/nova-api-0" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.331184 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ef24356-51c0-45f8-b98e-6c694ae2f61b-logs\") pod \"nova-metadata-0\" (UID: \"1ef24356-51c0-45f8-b98e-6c694ae2f61b\") " pod="openstack/nova-metadata-0" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.331218 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca74ca5b-bc95-440c-9d55-370533d21e38-config-data\") pod \"nova-api-0\" (UID: \"ca74ca5b-bc95-440c-9d55-370533d21e38\") " pod="openstack/nova-api-0" Jan 29 08:18:57 crc 
kubenswrapper[4826]: I0129 08:18:57.331249 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ef24356-51c0-45f8-b98e-6c694ae2f61b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1ef24356-51c0-45f8-b98e-6c694ae2f61b\") " pod="openstack/nova-metadata-0" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.331281 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ef24356-51c0-45f8-b98e-6c694ae2f61b-config-data\") pod \"nova-metadata-0\" (UID: \"1ef24356-51c0-45f8-b98e-6c694ae2f61b\") " pod="openstack/nova-metadata-0" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.331335 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca74ca5b-bc95-440c-9d55-370533d21e38-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ca74ca5b-bc95-440c-9d55-370533d21e38\") " pod="openstack/nova-api-0" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.331374 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef24356-51c0-45f8-b98e-6c694ae2f61b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1ef24356-51c0-45f8-b98e-6c694ae2f61b\") " pod="openstack/nova-metadata-0" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.331419 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6bgz\" (UniqueName: \"kubernetes.io/projected/ca74ca5b-bc95-440c-9d55-370533d21e38-kube-api-access-b6bgz\") pod \"nova-api-0\" (UID: \"ca74ca5b-bc95-440c-9d55-370533d21e38\") " pod="openstack/nova-api-0" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.332131 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/ca74ca5b-bc95-440c-9d55-370533d21e38-logs\") pod \"nova-api-0\" (UID: \"ca74ca5b-bc95-440c-9d55-370533d21e38\") " pod="openstack/nova-api-0" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.332427 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ef24356-51c0-45f8-b98e-6c694ae2f61b-logs\") pod \"nova-metadata-0\" (UID: \"1ef24356-51c0-45f8-b98e-6c694ae2f61b\") " pod="openstack/nova-metadata-0" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.337359 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ef24356-51c0-45f8-b98e-6c694ae2f61b-config-data\") pod \"nova-metadata-0\" (UID: \"1ef24356-51c0-45f8-b98e-6c694ae2f61b\") " pod="openstack/nova-metadata-0" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.337550 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca74ca5b-bc95-440c-9d55-370533d21e38-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ca74ca5b-bc95-440c-9d55-370533d21e38\") " pod="openstack/nova-api-0" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.339077 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca74ca5b-bc95-440c-9d55-370533d21e38-config-data\") pod \"nova-api-0\" (UID: \"ca74ca5b-bc95-440c-9d55-370533d21e38\") " pod="openstack/nova-api-0" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.339723 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef24356-51c0-45f8-b98e-6c694ae2f61b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1ef24356-51c0-45f8-b98e-6c694ae2f61b\") " pod="openstack/nova-metadata-0" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.340795 4826 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ef24356-51c0-45f8-b98e-6c694ae2f61b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1ef24356-51c0-45f8-b98e-6c694ae2f61b\") " pod="openstack/nova-metadata-0" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.351941 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgxsh\" (UniqueName: \"kubernetes.io/projected/1ef24356-51c0-45f8-b98e-6c694ae2f61b-kube-api-access-kgxsh\") pod \"nova-metadata-0\" (UID: \"1ef24356-51c0-45f8-b98e-6c694ae2f61b\") " pod="openstack/nova-metadata-0" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.354822 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6bgz\" (UniqueName: \"kubernetes.io/projected/ca74ca5b-bc95-440c-9d55-370533d21e38-kube-api-access-b6bgz\") pod \"nova-api-0\" (UID: \"ca74ca5b-bc95-440c-9d55-370533d21e38\") " pod="openstack/nova-api-0" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.520137 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 08:18:57 crc kubenswrapper[4826]: I0129 08:18:57.537669 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 08:18:58 crc kubenswrapper[4826]: I0129 08:18:58.021889 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 08:18:58 crc kubenswrapper[4826]: I0129 08:18:58.100911 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1ef24356-51c0-45f8-b98e-6c694ae2f61b","Type":"ContainerStarted","Data":"380ccb44874f648e71476d79af104502252e89de504804a2426b6471d8b2913f"} Jan 29 08:18:58 crc kubenswrapper[4826]: I0129 08:18:58.181875 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 08:18:58 crc kubenswrapper[4826]: W0129 08:18:58.191263 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca74ca5b_bc95_440c_9d55_370533d21e38.slice/crio-54d838d6a9b3b08e683738cdac5169b5d8bdf3bbb6a12d4c846b684ccf002b97 WatchSource:0}: Error finding container 54d838d6a9b3b08e683738cdac5169b5d8bdf3bbb6a12d4c846b684ccf002b97: Status 404 returned error can't find the container with id 54d838d6a9b3b08e683738cdac5169b5d8bdf3bbb6a12d4c846b684ccf002b97 Jan 29 08:18:58 crc kubenswrapper[4826]: I0129 08:18:58.823058 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef" path="/var/lib/kubelet/pods/0b2ffeff-9154-48d4-afb8-ba1d7ec2b9ef/volumes" Jan 29 08:18:58 crc kubenswrapper[4826]: I0129 08:18:58.825874 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deb07f33-e9ca-47ec-ad41-65d1756c84af" path="/var/lib/kubelet/pods/deb07f33-e9ca-47ec-ad41-65d1756c84af/volumes" Jan 29 08:18:59 crc kubenswrapper[4826]: I0129 08:18:59.048594 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-vxbg9"] Jan 29 08:18:59 crc kubenswrapper[4826]: I0129 08:18:59.053214 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/keystone-db-sync-vxbg9"] Jan 29 08:18:59 crc kubenswrapper[4826]: I0129 08:18:59.123723 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ca74ca5b-bc95-440c-9d55-370533d21e38","Type":"ContainerStarted","Data":"6740cc0eadaf3f5a709b33822a259e2aa92336d7027ec13859b0d22c1757950a"} Jan 29 08:18:59 crc kubenswrapper[4826]: I0129 08:18:59.123761 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ca74ca5b-bc95-440c-9d55-370533d21e38","Type":"ContainerStarted","Data":"16d878d405111033c03b9fa922678a1c4d67ba7bb966a2ee7dae41ac51c068dd"} Jan 29 08:18:59 crc kubenswrapper[4826]: I0129 08:18:59.123771 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ca74ca5b-bc95-440c-9d55-370533d21e38","Type":"ContainerStarted","Data":"54d838d6a9b3b08e683738cdac5169b5d8bdf3bbb6a12d4c846b684ccf002b97"} Jan 29 08:18:59 crc kubenswrapper[4826]: I0129 08:18:59.128979 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1ef24356-51c0-45f8-b98e-6c694ae2f61b","Type":"ContainerStarted","Data":"eb955492493b97ecc22f8505debd9f845936c63c0749aff9c47f2bbce9f35cd0"} Jan 29 08:18:59 crc kubenswrapper[4826]: I0129 08:18:59.129017 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1ef24356-51c0-45f8-b98e-6c694ae2f61b","Type":"ContainerStarted","Data":"e89a37b46a685e1c9a20a1def84b5602d8d70b2bb92c94f611e6e2ed977df9ab"} Jan 29 08:18:59 crc kubenswrapper[4826]: I0129 08:18:59.176688 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.176670951 podStartE2EDuration="2.176670951s" podCreationTimestamp="2026-01-29 08:18:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:18:59.15536276 +0000 UTC 
m=+5723.017155829" watchObservedRunningTime="2026-01-29 08:18:59.176670951 +0000 UTC m=+5723.038464020" Jan 29 08:18:59 crc kubenswrapper[4826]: I0129 08:18:59.179553 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.179546237 podStartE2EDuration="2.179546237s" podCreationTimestamp="2026-01-29 08:18:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:18:59.174825303 +0000 UTC m=+5723.036618372" watchObservedRunningTime="2026-01-29 08:18:59.179546237 +0000 UTC m=+5723.041339306" Jan 29 08:19:00 crc kubenswrapper[4826]: I0129 08:19:00.824064 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d" path="/var/lib/kubelet/pods/6f15e2ae-56b9-40ec-9be5-0a3cb8d0389d/volumes" Jan 29 08:19:01 crc kubenswrapper[4826]: E0129 08:19:01.086236 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5d461da3d786665c809598fef79b0ac6980b60d9f864835176d27a986b650834" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 08:19:01 crc kubenswrapper[4826]: E0129 08:19:01.088708 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5d461da3d786665c809598fef79b0ac6980b60d9f864835176d27a986b650834" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 08:19:01 crc kubenswrapper[4826]: E0129 08:19:01.090976 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="5d461da3d786665c809598fef79b0ac6980b60d9f864835176d27a986b650834" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 08:19:01 crc kubenswrapper[4826]: E0129 08:19:01.091071 4826 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="b2547ad0-130d-4111-b34c-51a4a9b09ac6" containerName="nova-scheduler-scheduler" Jan 29 08:19:02 crc kubenswrapper[4826]: I0129 08:19:02.520587 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 08:19:02 crc kubenswrapper[4826]: I0129 08:19:02.521745 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 08:19:06 crc kubenswrapper[4826]: E0129 08:19:06.083798 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5d461da3d786665c809598fef79b0ac6980b60d9f864835176d27a986b650834" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 08:19:06 crc kubenswrapper[4826]: E0129 08:19:06.088900 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5d461da3d786665c809598fef79b0ac6980b60d9f864835176d27a986b650834" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 08:19:06 crc kubenswrapper[4826]: E0129 08:19:06.091199 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5d461da3d786665c809598fef79b0ac6980b60d9f864835176d27a986b650834" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 08:19:06 crc kubenswrapper[4826]: E0129 08:19:06.091493 4826 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="b2547ad0-130d-4111-b34c-51a4a9b09ac6" containerName="nova-scheduler-scheduler" Jan 29 08:19:07 crc kubenswrapper[4826]: I0129 08:19:07.521166 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 08:19:07 crc kubenswrapper[4826]: I0129 08:19:07.521210 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 08:19:07 crc kubenswrapper[4826]: I0129 08:19:07.538170 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 08:19:07 crc kubenswrapper[4826]: I0129 08:19:07.538235 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 08:19:08 crc kubenswrapper[4826]: I0129 08:19:08.533500 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="1ef24356-51c0-45f8-b98e-6c694ae2f61b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.92:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 08:19:08 crc kubenswrapper[4826]: I0129 08:19:08.533545 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="1ef24356-51c0-45f8-b98e-6c694ae2f61b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.92:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 08:19:08 crc kubenswrapper[4826]: I0129 08:19:08.622521 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
podUID="ca74ca5b-bc95-440c-9d55-370533d21e38" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.93:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 08:19:08 crc kubenswrapper[4826]: I0129 08:19:08.622558 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ca74ca5b-bc95-440c-9d55-370533d21e38" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.93:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 08:19:11 crc kubenswrapper[4826]: E0129 08:19:11.083944 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5d461da3d786665c809598fef79b0ac6980b60d9f864835176d27a986b650834" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 08:19:11 crc kubenswrapper[4826]: E0129 08:19:11.087821 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5d461da3d786665c809598fef79b0ac6980b60d9f864835176d27a986b650834" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 08:19:11 crc kubenswrapper[4826]: E0129 08:19:11.091429 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5d461da3d786665c809598fef79b0ac6980b60d9f864835176d27a986b650834" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 08:19:11 crc kubenswrapper[4826]: E0129 08:19:11.091661 4826 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="b2547ad0-130d-4111-b34c-51a4a9b09ac6" containerName="nova-scheduler-scheduler" Jan 29 08:19:12 crc kubenswrapper[4826]: I0129 08:19:12.044382 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-c9d8z"] Jan 29 08:19:12 crc kubenswrapper[4826]: I0129 08:19:12.070861 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-c9d8z"] Jan 29 08:19:12 crc kubenswrapper[4826]: I0129 08:19:12.275345 4826 generic.go:334] "Generic (PLEG): container finished" podID="b2547ad0-130d-4111-b34c-51a4a9b09ac6" containerID="5d461da3d786665c809598fef79b0ac6980b60d9f864835176d27a986b650834" exitCode=137 Jan 29 08:19:12 crc kubenswrapper[4826]: I0129 08:19:12.275692 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b2547ad0-130d-4111-b34c-51a4a9b09ac6","Type":"ContainerDied","Data":"5d461da3d786665c809598fef79b0ac6980b60d9f864835176d27a986b650834"} Jan 29 08:19:12 crc kubenswrapper[4826]: I0129 08:19:12.518175 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 08:19:12 crc kubenswrapper[4826]: I0129 08:19:12.623149 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2547ad0-130d-4111-b34c-51a4a9b09ac6-combined-ca-bundle\") pod \"b2547ad0-130d-4111-b34c-51a4a9b09ac6\" (UID: \"b2547ad0-130d-4111-b34c-51a4a9b09ac6\") " Jan 29 08:19:12 crc kubenswrapper[4826]: I0129 08:19:12.623370 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdhzp\" (UniqueName: \"kubernetes.io/projected/b2547ad0-130d-4111-b34c-51a4a9b09ac6-kube-api-access-fdhzp\") pod \"b2547ad0-130d-4111-b34c-51a4a9b09ac6\" (UID: \"b2547ad0-130d-4111-b34c-51a4a9b09ac6\") " Jan 29 08:19:12 crc kubenswrapper[4826]: I0129 08:19:12.623439 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2547ad0-130d-4111-b34c-51a4a9b09ac6-config-data\") pod \"b2547ad0-130d-4111-b34c-51a4a9b09ac6\" (UID: \"b2547ad0-130d-4111-b34c-51a4a9b09ac6\") " Jan 29 08:19:12 crc kubenswrapper[4826]: I0129 08:19:12.630525 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2547ad0-130d-4111-b34c-51a4a9b09ac6-kube-api-access-fdhzp" (OuterVolumeSpecName: "kube-api-access-fdhzp") pod "b2547ad0-130d-4111-b34c-51a4a9b09ac6" (UID: "b2547ad0-130d-4111-b34c-51a4a9b09ac6"). InnerVolumeSpecName "kube-api-access-fdhzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:19:12 crc kubenswrapper[4826]: I0129 08:19:12.656527 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2547ad0-130d-4111-b34c-51a4a9b09ac6-config-data" (OuterVolumeSpecName: "config-data") pod "b2547ad0-130d-4111-b34c-51a4a9b09ac6" (UID: "b2547ad0-130d-4111-b34c-51a4a9b09ac6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:19:12 crc kubenswrapper[4826]: I0129 08:19:12.662687 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2547ad0-130d-4111-b34c-51a4a9b09ac6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2547ad0-130d-4111-b34c-51a4a9b09ac6" (UID: "b2547ad0-130d-4111-b34c-51a4a9b09ac6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:19:12 crc kubenswrapper[4826]: I0129 08:19:12.725377 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdhzp\" (UniqueName: \"kubernetes.io/projected/b2547ad0-130d-4111-b34c-51a4a9b09ac6-kube-api-access-fdhzp\") on node \"crc\" DevicePath \"\"" Jan 29 08:19:12 crc kubenswrapper[4826]: I0129 08:19:12.725408 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2547ad0-130d-4111-b34c-51a4a9b09ac6-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:19:12 crc kubenswrapper[4826]: I0129 08:19:12.725420 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2547ad0-130d-4111-b34c-51a4a9b09ac6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:19:12 crc kubenswrapper[4826]: I0129 08:19:12.820750 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78e6de92-9f55-4628-97aa-cb6c36a92332" path="/var/lib/kubelet/pods/78e6de92-9f55-4628-97aa-cb6c36a92332/volumes" Jan 29 08:19:13 crc kubenswrapper[4826]: I0129 08:19:13.287732 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b2547ad0-130d-4111-b34c-51a4a9b09ac6","Type":"ContainerDied","Data":"c31fda4e051a8025ffc4e1d27096112534189aa1a7978f3a11fdbf1dac03c5ce"} Jan 29 08:19:13 crc kubenswrapper[4826]: I0129 08:19:13.287793 4826 scope.go:117] "RemoveContainer" 
containerID="5d461da3d786665c809598fef79b0ac6980b60d9f864835176d27a986b650834" Jan 29 08:19:13 crc kubenswrapper[4826]: I0129 08:19:13.287894 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 08:19:13 crc kubenswrapper[4826]: I0129 08:19:13.334746 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 08:19:13 crc kubenswrapper[4826]: I0129 08:19:13.355350 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 08:19:13 crc kubenswrapper[4826]: I0129 08:19:13.362946 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 08:19:13 crc kubenswrapper[4826]: E0129 08:19:13.363679 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2547ad0-130d-4111-b34c-51a4a9b09ac6" containerName="nova-scheduler-scheduler" Jan 29 08:19:13 crc kubenswrapper[4826]: I0129 08:19:13.363722 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2547ad0-130d-4111-b34c-51a4a9b09ac6" containerName="nova-scheduler-scheduler" Jan 29 08:19:13 crc kubenswrapper[4826]: I0129 08:19:13.364123 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2547ad0-130d-4111-b34c-51a4a9b09ac6" containerName="nova-scheduler-scheduler" Jan 29 08:19:13 crc kubenswrapper[4826]: I0129 08:19:13.367549 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 08:19:13 crc kubenswrapper[4826]: I0129 08:19:13.371690 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 08:19:13 crc kubenswrapper[4826]: I0129 08:19:13.373203 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 08:19:13 crc kubenswrapper[4826]: I0129 08:19:13.560463 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wplt6\" (UniqueName: \"kubernetes.io/projected/dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0-kube-api-access-wplt6\") pod \"nova-scheduler-0\" (UID: \"dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0\") " pod="openstack/nova-scheduler-0" Jan 29 08:19:13 crc kubenswrapper[4826]: I0129 08:19:13.560797 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0-config-data\") pod \"nova-scheduler-0\" (UID: \"dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0\") " pod="openstack/nova-scheduler-0" Jan 29 08:19:13 crc kubenswrapper[4826]: I0129 08:19:13.560911 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0\") " pod="openstack/nova-scheduler-0" Jan 29 08:19:13 crc kubenswrapper[4826]: I0129 08:19:13.663771 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wplt6\" (UniqueName: \"kubernetes.io/projected/dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0-kube-api-access-wplt6\") pod \"nova-scheduler-0\" (UID: \"dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0\") " pod="openstack/nova-scheduler-0" Jan 29 08:19:13 crc kubenswrapper[4826]: I0129 08:19:13.664127 4826 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0-config-data\") pod \"nova-scheduler-0\" (UID: \"dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0\") " pod="openstack/nova-scheduler-0" Jan 29 08:19:13 crc kubenswrapper[4826]: I0129 08:19:13.664262 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0\") " pod="openstack/nova-scheduler-0" Jan 29 08:19:13 crc kubenswrapper[4826]: I0129 08:19:13.669631 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0\") " pod="openstack/nova-scheduler-0" Jan 29 08:19:13 crc kubenswrapper[4826]: I0129 08:19:13.676781 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0-config-data\") pod \"nova-scheduler-0\" (UID: \"dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0\") " pod="openstack/nova-scheduler-0" Jan 29 08:19:13 crc kubenswrapper[4826]: I0129 08:19:13.697507 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wplt6\" (UniqueName: \"kubernetes.io/projected/dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0-kube-api-access-wplt6\") pod \"nova-scheduler-0\" (UID: \"dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0\") " pod="openstack/nova-scheduler-0" Jan 29 08:19:13 crc kubenswrapper[4826]: I0129 08:19:13.989835 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 08:19:14 crc kubenswrapper[4826]: I0129 08:19:14.759973 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 08:19:14 crc kubenswrapper[4826]: I0129 08:19:14.828741 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2547ad0-130d-4111-b34c-51a4a9b09ac6" path="/var/lib/kubelet/pods/b2547ad0-130d-4111-b34c-51a4a9b09ac6/volumes" Jan 29 08:19:15 crc kubenswrapper[4826]: I0129 08:19:15.308812 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0","Type":"ContainerStarted","Data":"ffd0f14e230f7f6f4c3e35ce38df8401cb40abe3d356faf983b32535a3b4553f"} Jan 29 08:19:15 crc kubenswrapper[4826]: I0129 08:19:15.309150 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0","Type":"ContainerStarted","Data":"247cdd108fcf03fc8f55751310aec1176dc60d7aa9b07854a1dfb3f81df509ec"} Jan 29 08:19:15 crc kubenswrapper[4826]: I0129 08:19:15.340893 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.340874447 podStartE2EDuration="2.340874447s" podCreationTimestamp="2026-01-29 08:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:19:15.324925717 +0000 UTC m=+5739.186718806" watchObservedRunningTime="2026-01-29 08:19:15.340874447 +0000 UTC m=+5739.202667516" Jan 29 08:19:17 crc kubenswrapper[4826]: I0129 08:19:17.525186 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 08:19:17 crc kubenswrapper[4826]: I0129 08:19:17.535795 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 08:19:17 crc 
kubenswrapper[4826]: I0129 08:19:17.537650 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 08:19:17 crc kubenswrapper[4826]: I0129 08:19:17.541154 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 08:19:17 crc kubenswrapper[4826]: I0129 08:19:17.541238 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 08:19:17 crc kubenswrapper[4826]: I0129 08:19:17.541566 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 08:19:17 crc kubenswrapper[4826]: I0129 08:19:17.541623 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 08:19:17 crc kubenswrapper[4826]: I0129 08:19:17.547510 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 08:19:17 crc kubenswrapper[4826]: I0129 08:19:17.548925 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 08:19:17 crc kubenswrapper[4826]: I0129 08:19:17.746010 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74d5b89cc-k22tf"] Jan 29 08:19:17 crc kubenswrapper[4826]: I0129 08:19:17.747912 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74d5b89cc-k22tf" Jan 29 08:19:17 crc kubenswrapper[4826]: I0129 08:19:17.755941 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08eec373-3907-4e7f-999e-10bec4fa374d-config\") pod \"dnsmasq-dns-74d5b89cc-k22tf\" (UID: \"08eec373-3907-4e7f-999e-10bec4fa374d\") " pod="openstack/dnsmasq-dns-74d5b89cc-k22tf" Jan 29 08:19:17 crc kubenswrapper[4826]: I0129 08:19:17.755990 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08eec373-3907-4e7f-999e-10bec4fa374d-ovsdbserver-nb\") pod \"dnsmasq-dns-74d5b89cc-k22tf\" (UID: \"08eec373-3907-4e7f-999e-10bec4fa374d\") " pod="openstack/dnsmasq-dns-74d5b89cc-k22tf" Jan 29 08:19:17 crc kubenswrapper[4826]: I0129 08:19:17.756117 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08eec373-3907-4e7f-999e-10bec4fa374d-ovsdbserver-sb\") pod \"dnsmasq-dns-74d5b89cc-k22tf\" (UID: \"08eec373-3907-4e7f-999e-10bec4fa374d\") " pod="openstack/dnsmasq-dns-74d5b89cc-k22tf" Jan 29 08:19:17 crc kubenswrapper[4826]: I0129 08:19:17.756162 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08eec373-3907-4e7f-999e-10bec4fa374d-dns-svc\") pod \"dnsmasq-dns-74d5b89cc-k22tf\" (UID: \"08eec373-3907-4e7f-999e-10bec4fa374d\") " pod="openstack/dnsmasq-dns-74d5b89cc-k22tf" Jan 29 08:19:17 crc kubenswrapper[4826]: I0129 08:19:17.756207 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d42nq\" (UniqueName: \"kubernetes.io/projected/08eec373-3907-4e7f-999e-10bec4fa374d-kube-api-access-d42nq\") pod \"dnsmasq-dns-74d5b89cc-k22tf\" (UID: 
\"08eec373-3907-4e7f-999e-10bec4fa374d\") " pod="openstack/dnsmasq-dns-74d5b89cc-k22tf" Jan 29 08:19:17 crc kubenswrapper[4826]: I0129 08:19:17.763009 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74d5b89cc-k22tf"] Jan 29 08:19:17 crc kubenswrapper[4826]: I0129 08:19:17.857231 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08eec373-3907-4e7f-999e-10bec4fa374d-dns-svc\") pod \"dnsmasq-dns-74d5b89cc-k22tf\" (UID: \"08eec373-3907-4e7f-999e-10bec4fa374d\") " pod="openstack/dnsmasq-dns-74d5b89cc-k22tf" Jan 29 08:19:17 crc kubenswrapper[4826]: I0129 08:19:17.857280 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d42nq\" (UniqueName: \"kubernetes.io/projected/08eec373-3907-4e7f-999e-10bec4fa374d-kube-api-access-d42nq\") pod \"dnsmasq-dns-74d5b89cc-k22tf\" (UID: \"08eec373-3907-4e7f-999e-10bec4fa374d\") " pod="openstack/dnsmasq-dns-74d5b89cc-k22tf" Jan 29 08:19:17 crc kubenswrapper[4826]: I0129 08:19:17.857379 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08eec373-3907-4e7f-999e-10bec4fa374d-config\") pod \"dnsmasq-dns-74d5b89cc-k22tf\" (UID: \"08eec373-3907-4e7f-999e-10bec4fa374d\") " pod="openstack/dnsmasq-dns-74d5b89cc-k22tf" Jan 29 08:19:17 crc kubenswrapper[4826]: I0129 08:19:17.857398 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08eec373-3907-4e7f-999e-10bec4fa374d-ovsdbserver-nb\") pod \"dnsmasq-dns-74d5b89cc-k22tf\" (UID: \"08eec373-3907-4e7f-999e-10bec4fa374d\") " pod="openstack/dnsmasq-dns-74d5b89cc-k22tf" Jan 29 08:19:17 crc kubenswrapper[4826]: I0129 08:19:17.857473 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/08eec373-3907-4e7f-999e-10bec4fa374d-ovsdbserver-sb\") pod \"dnsmasq-dns-74d5b89cc-k22tf\" (UID: \"08eec373-3907-4e7f-999e-10bec4fa374d\") " pod="openstack/dnsmasq-dns-74d5b89cc-k22tf" Jan 29 08:19:17 crc kubenswrapper[4826]: I0129 08:19:17.858391 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08eec373-3907-4e7f-999e-10bec4fa374d-ovsdbserver-sb\") pod \"dnsmasq-dns-74d5b89cc-k22tf\" (UID: \"08eec373-3907-4e7f-999e-10bec4fa374d\") " pod="openstack/dnsmasq-dns-74d5b89cc-k22tf" Jan 29 08:19:17 crc kubenswrapper[4826]: I0129 08:19:17.859601 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08eec373-3907-4e7f-999e-10bec4fa374d-dns-svc\") pod \"dnsmasq-dns-74d5b89cc-k22tf\" (UID: \"08eec373-3907-4e7f-999e-10bec4fa374d\") " pod="openstack/dnsmasq-dns-74d5b89cc-k22tf" Jan 29 08:19:17 crc kubenswrapper[4826]: I0129 08:19:17.865209 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08eec373-3907-4e7f-999e-10bec4fa374d-ovsdbserver-nb\") pod \"dnsmasq-dns-74d5b89cc-k22tf\" (UID: \"08eec373-3907-4e7f-999e-10bec4fa374d\") " pod="openstack/dnsmasq-dns-74d5b89cc-k22tf" Jan 29 08:19:17 crc kubenswrapper[4826]: I0129 08:19:17.865976 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08eec373-3907-4e7f-999e-10bec4fa374d-config\") pod \"dnsmasq-dns-74d5b89cc-k22tf\" (UID: \"08eec373-3907-4e7f-999e-10bec4fa374d\") " pod="openstack/dnsmasq-dns-74d5b89cc-k22tf" Jan 29 08:19:17 crc kubenswrapper[4826]: I0129 08:19:17.881088 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d42nq\" (UniqueName: \"kubernetes.io/projected/08eec373-3907-4e7f-999e-10bec4fa374d-kube-api-access-d42nq\") pod \"dnsmasq-dns-74d5b89cc-k22tf\" 
(UID: \"08eec373-3907-4e7f-999e-10bec4fa374d\") " pod="openstack/dnsmasq-dns-74d5b89cc-k22tf" Jan 29 08:19:18 crc kubenswrapper[4826]: I0129 08:19:18.068227 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74d5b89cc-k22tf" Jan 29 08:19:18 crc kubenswrapper[4826]: I0129 08:19:18.336815 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 08:19:18 crc kubenswrapper[4826]: I0129 08:19:18.532887 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74d5b89cc-k22tf"] Jan 29 08:19:18 crc kubenswrapper[4826]: W0129 08:19:18.535021 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08eec373_3907_4e7f_999e_10bec4fa374d.slice/crio-aadba11c8bddfc3bee96e5939fa4ad7393bc9c87ceb349e6eb8e7ca263707582 WatchSource:0}: Error finding container aadba11c8bddfc3bee96e5939fa4ad7393bc9c87ceb349e6eb8e7ca263707582: Status 404 returned error can't find the container with id aadba11c8bddfc3bee96e5939fa4ad7393bc9c87ceb349e6eb8e7ca263707582 Jan 29 08:19:18 crc kubenswrapper[4826]: I0129 08:19:18.990845 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 08:19:19 crc kubenswrapper[4826]: I0129 08:19:19.340625 4826 generic.go:334] "Generic (PLEG): container finished" podID="08eec373-3907-4e7f-999e-10bec4fa374d" containerID="d4a0a239394cfe4d1a436f5f0f6c2e48e1be8011071cfdf07f6b684d831443c9" exitCode=0 Jan 29 08:19:19 crc kubenswrapper[4826]: I0129 08:19:19.340758 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74d5b89cc-k22tf" event={"ID":"08eec373-3907-4e7f-999e-10bec4fa374d","Type":"ContainerDied","Data":"d4a0a239394cfe4d1a436f5f0f6c2e48e1be8011071cfdf07f6b684d831443c9"} Jan 29 08:19:19 crc kubenswrapper[4826]: I0129 08:19:19.340843 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-74d5b89cc-k22tf" event={"ID":"08eec373-3907-4e7f-999e-10bec4fa374d","Type":"ContainerStarted","Data":"aadba11c8bddfc3bee96e5939fa4ad7393bc9c87ceb349e6eb8e7ca263707582"} Jan 29 08:19:20 crc kubenswrapper[4826]: I0129 08:19:20.351575 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74d5b89cc-k22tf" event={"ID":"08eec373-3907-4e7f-999e-10bec4fa374d","Type":"ContainerStarted","Data":"f03fb4d1143aa041ed08af1f079d8cb290d684d0e2910f5000a634d0278b2be2"} Jan 29 08:19:20 crc kubenswrapper[4826]: I0129 08:19:20.351917 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74d5b89cc-k22tf" Jan 29 08:19:20 crc kubenswrapper[4826]: I0129 08:19:20.374794 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74d5b89cc-k22tf" podStartSLOduration=3.374771513 podStartE2EDuration="3.374771513s" podCreationTimestamp="2026-01-29 08:19:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:19:20.365399636 +0000 UTC m=+5744.227192715" watchObservedRunningTime="2026-01-29 08:19:20.374771513 +0000 UTC m=+5744.236564582" Jan 29 08:19:20 crc kubenswrapper[4826]: I0129 08:19:20.547647 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 08:19:20 crc kubenswrapper[4826]: I0129 08:19:20.548418 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ca74ca5b-bc95-440c-9d55-370533d21e38" containerName="nova-api-log" containerID="cri-o://16d878d405111033c03b9fa922678a1c4d67ba7bb966a2ee7dae41ac51c068dd" gracePeriod=30 Jan 29 08:19:20 crc kubenswrapper[4826]: I0129 08:19:20.548561 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ca74ca5b-bc95-440c-9d55-370533d21e38" containerName="nova-api-api" 
containerID="cri-o://6740cc0eadaf3f5a709b33822a259e2aa92336d7027ec13859b0d22c1757950a" gracePeriod=30 Jan 29 08:19:20 crc kubenswrapper[4826]: I0129 08:19:20.751318 4826 scope.go:117] "RemoveContainer" containerID="a24ae2fc987ff1fe9cd383552f2e623120b1c144e2c348d01f7ffa3cdf59db5c" Jan 29 08:19:20 crc kubenswrapper[4826]: I0129 08:19:20.776671 4826 scope.go:117] "RemoveContainer" containerID="fb795335729c52f24cf4bb223d62fdfb32d743b67f9f61d228e47503000b66e7" Jan 29 08:19:20 crc kubenswrapper[4826]: I0129 08:19:20.831721 4826 scope.go:117] "RemoveContainer" containerID="5bf889f117ff85de0f49ccce3f0b96a1728b7ac34206806117135b6d018e1826" Jan 29 08:19:20 crc kubenswrapper[4826]: I0129 08:19:20.880094 4826 scope.go:117] "RemoveContainer" containerID="f8f0e5efba82aa7a5c9f1d86b0db42fb7a60d14c1424c55901acf2deb7c3bcd5" Jan 29 08:19:21 crc kubenswrapper[4826]: I0129 08:19:21.360737 4826 generic.go:334] "Generic (PLEG): container finished" podID="ca74ca5b-bc95-440c-9d55-370533d21e38" containerID="16d878d405111033c03b9fa922678a1c4d67ba7bb966a2ee7dae41ac51c068dd" exitCode=143 Jan 29 08:19:21 crc kubenswrapper[4826]: I0129 08:19:21.360837 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ca74ca5b-bc95-440c-9d55-370533d21e38","Type":"ContainerDied","Data":"16d878d405111033c03b9fa922678a1c4d67ba7bb966a2ee7dae41ac51c068dd"} Jan 29 08:19:23 crc kubenswrapper[4826]: I0129 08:19:23.991118 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 08:19:24 crc kubenswrapper[4826]: I0129 08:19:24.024911 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 08:19:24 crc kubenswrapper[4826]: I0129 08:19:24.281607 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 08:19:24 crc kubenswrapper[4826]: I0129 08:19:24.387286 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca74ca5b-bc95-440c-9d55-370533d21e38-config-data\") pod \"ca74ca5b-bc95-440c-9d55-370533d21e38\" (UID: \"ca74ca5b-bc95-440c-9d55-370533d21e38\") " Jan 29 08:19:24 crc kubenswrapper[4826]: I0129 08:19:24.387911 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6bgz\" (UniqueName: \"kubernetes.io/projected/ca74ca5b-bc95-440c-9d55-370533d21e38-kube-api-access-b6bgz\") pod \"ca74ca5b-bc95-440c-9d55-370533d21e38\" (UID: \"ca74ca5b-bc95-440c-9d55-370533d21e38\") " Jan 29 08:19:24 crc kubenswrapper[4826]: I0129 08:19:24.388057 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca74ca5b-bc95-440c-9d55-370533d21e38-logs\") pod \"ca74ca5b-bc95-440c-9d55-370533d21e38\" (UID: \"ca74ca5b-bc95-440c-9d55-370533d21e38\") " Jan 29 08:19:24 crc kubenswrapper[4826]: I0129 08:19:24.388132 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca74ca5b-bc95-440c-9d55-370533d21e38-combined-ca-bundle\") pod \"ca74ca5b-bc95-440c-9d55-370533d21e38\" (UID: \"ca74ca5b-bc95-440c-9d55-370533d21e38\") " Jan 29 08:19:24 crc kubenswrapper[4826]: I0129 08:19:24.391786 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca74ca5b-bc95-440c-9d55-370533d21e38-logs" (OuterVolumeSpecName: "logs") pod "ca74ca5b-bc95-440c-9d55-370533d21e38" (UID: "ca74ca5b-bc95-440c-9d55-370533d21e38"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:19:24 crc kubenswrapper[4826]: I0129 08:19:24.416772 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca74ca5b-bc95-440c-9d55-370533d21e38-kube-api-access-b6bgz" (OuterVolumeSpecName: "kube-api-access-b6bgz") pod "ca74ca5b-bc95-440c-9d55-370533d21e38" (UID: "ca74ca5b-bc95-440c-9d55-370533d21e38"). InnerVolumeSpecName "kube-api-access-b6bgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:19:24 crc kubenswrapper[4826]: I0129 08:19:24.442895 4826 generic.go:334] "Generic (PLEG): container finished" podID="ca74ca5b-bc95-440c-9d55-370533d21e38" containerID="6740cc0eadaf3f5a709b33822a259e2aa92336d7027ec13859b0d22c1757950a" exitCode=0 Jan 29 08:19:24 crc kubenswrapper[4826]: I0129 08:19:24.444670 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca74ca5b-bc95-440c-9d55-370533d21e38-config-data" (OuterVolumeSpecName: "config-data") pod "ca74ca5b-bc95-440c-9d55-370533d21e38" (UID: "ca74ca5b-bc95-440c-9d55-370533d21e38"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:19:24 crc kubenswrapper[4826]: I0129 08:19:24.444820 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 08:19:24 crc kubenswrapper[4826]: I0129 08:19:24.444855 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ca74ca5b-bc95-440c-9d55-370533d21e38","Type":"ContainerDied","Data":"6740cc0eadaf3f5a709b33822a259e2aa92336d7027ec13859b0d22c1757950a"} Jan 29 08:19:24 crc kubenswrapper[4826]: I0129 08:19:24.444888 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ca74ca5b-bc95-440c-9d55-370533d21e38","Type":"ContainerDied","Data":"54d838d6a9b3b08e683738cdac5169b5d8bdf3bbb6a12d4c846b684ccf002b97"} Jan 29 08:19:24 crc kubenswrapper[4826]: I0129 08:19:24.444907 4826 scope.go:117] "RemoveContainer" containerID="6740cc0eadaf3f5a709b33822a259e2aa92336d7027ec13859b0d22c1757950a" Jan 29 08:19:24 crc kubenswrapper[4826]: I0129 08:19:24.483473 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca74ca5b-bc95-440c-9d55-370533d21e38-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca74ca5b-bc95-440c-9d55-370533d21e38" (UID: "ca74ca5b-bc95-440c-9d55-370533d21e38"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:19:24 crc kubenswrapper[4826]: I0129 08:19:24.490790 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca74ca5b-bc95-440c-9d55-370533d21e38-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 08:19:24 crc kubenswrapper[4826]: I0129 08:19:24.490815 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6bgz\" (UniqueName: \"kubernetes.io/projected/ca74ca5b-bc95-440c-9d55-370533d21e38-kube-api-access-b6bgz\") on node \"crc\" DevicePath \"\""
Jan 29 08:19:24 crc kubenswrapper[4826]: I0129 08:19:24.490825 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca74ca5b-bc95-440c-9d55-370533d21e38-logs\") on node \"crc\" DevicePath \"\""
Jan 29 08:19:24 crc kubenswrapper[4826]: I0129 08:19:24.490833 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca74ca5b-bc95-440c-9d55-370533d21e38-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 08:19:24 crc kubenswrapper[4826]: I0129 08:19:24.499545 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 29 08:19:24 crc kubenswrapper[4826]: I0129 08:19:24.510605 4826 scope.go:117] "RemoveContainer" containerID="16d878d405111033c03b9fa922678a1c4d67ba7bb966a2ee7dae41ac51c068dd"
Jan 29 08:19:24 crc kubenswrapper[4826]: I0129 08:19:24.530448 4826 scope.go:117] "RemoveContainer" containerID="6740cc0eadaf3f5a709b33822a259e2aa92336d7027ec13859b0d22c1757950a"
Jan 29 08:19:24 crc kubenswrapper[4826]: E0129 08:19:24.530998 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6740cc0eadaf3f5a709b33822a259e2aa92336d7027ec13859b0d22c1757950a\": container with ID starting with 6740cc0eadaf3f5a709b33822a259e2aa92336d7027ec13859b0d22c1757950a not found: ID does not exist" containerID="6740cc0eadaf3f5a709b33822a259e2aa92336d7027ec13859b0d22c1757950a"
Jan 29 08:19:24 crc kubenswrapper[4826]: I0129 08:19:24.531034 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6740cc0eadaf3f5a709b33822a259e2aa92336d7027ec13859b0d22c1757950a"} err="failed to get container status \"6740cc0eadaf3f5a709b33822a259e2aa92336d7027ec13859b0d22c1757950a\": rpc error: code = NotFound desc = could not find container \"6740cc0eadaf3f5a709b33822a259e2aa92336d7027ec13859b0d22c1757950a\": container with ID starting with 6740cc0eadaf3f5a709b33822a259e2aa92336d7027ec13859b0d22c1757950a not found: ID does not exist"
Jan 29 08:19:24 crc kubenswrapper[4826]: I0129 08:19:24.531057 4826 scope.go:117] "RemoveContainer" containerID="16d878d405111033c03b9fa922678a1c4d67ba7bb966a2ee7dae41ac51c068dd"
Jan 29 08:19:24 crc kubenswrapper[4826]: E0129 08:19:24.531385 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16d878d405111033c03b9fa922678a1c4d67ba7bb966a2ee7dae41ac51c068dd\": container with ID starting with 16d878d405111033c03b9fa922678a1c4d67ba7bb966a2ee7dae41ac51c068dd not found: ID does not exist" containerID="16d878d405111033c03b9fa922678a1c4d67ba7bb966a2ee7dae41ac51c068dd"
Jan 29 08:19:24 crc kubenswrapper[4826]: I0129 08:19:24.531412 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16d878d405111033c03b9fa922678a1c4d67ba7bb966a2ee7dae41ac51c068dd"} err="failed to get container status \"16d878d405111033c03b9fa922678a1c4d67ba7bb966a2ee7dae41ac51c068dd\": rpc error: code = NotFound desc = could not find container \"16d878d405111033c03b9fa922678a1c4d67ba7bb966a2ee7dae41ac51c068dd\": container with ID starting with 16d878d405111033c03b9fa922678a1c4d67ba7bb966a2ee7dae41ac51c068dd not found: ID does not exist"
Jan 29 08:19:24 crc kubenswrapper[4826]: I0129 08:19:24.827694 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 29 08:19:24 crc kubenswrapper[4826]: I0129 08:19:24.836374 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 29 08:19:24 crc kubenswrapper[4826]: I0129 08:19:24.860276 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 29 08:19:24 crc kubenswrapper[4826]: E0129 08:19:24.860841 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca74ca5b-bc95-440c-9d55-370533d21e38" containerName="nova-api-api"
Jan 29 08:19:24 crc kubenswrapper[4826]: I0129 08:19:24.860868 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca74ca5b-bc95-440c-9d55-370533d21e38" containerName="nova-api-api"
Jan 29 08:19:24 crc kubenswrapper[4826]: E0129 08:19:24.860892 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca74ca5b-bc95-440c-9d55-370533d21e38" containerName="nova-api-log"
Jan 29 08:19:24 crc kubenswrapper[4826]: I0129 08:19:24.860901 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca74ca5b-bc95-440c-9d55-370533d21e38" containerName="nova-api-log"
Jan 29 08:19:24 crc kubenswrapper[4826]: I0129 08:19:24.861122 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca74ca5b-bc95-440c-9d55-370533d21e38" containerName="nova-api-api"
Jan 29 08:19:24 crc kubenswrapper[4826]: I0129 08:19:24.861166 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca74ca5b-bc95-440c-9d55-370533d21e38" containerName="nova-api-log"
Jan 29 08:19:24 crc kubenswrapper[4826]: I0129 08:19:24.862238 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 29 08:19:24 crc kubenswrapper[4826]: I0129 08:19:24.864497 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Jan 29 08:19:24 crc kubenswrapper[4826]: I0129 08:19:24.864705 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 29 08:19:24 crc kubenswrapper[4826]: I0129 08:19:24.864827 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 29 08:19:24 crc kubenswrapper[4826]: I0129 08:19:24.869826 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 29 08:19:25 crc kubenswrapper[4826]: I0129 08:19:25.001455 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8hgt\" (UniqueName: \"kubernetes.io/projected/536b9f40-3e07-4874-857f-d71b27b1bdc7-kube-api-access-s8hgt\") pod \"nova-api-0\" (UID: \"536b9f40-3e07-4874-857f-d71b27b1bdc7\") " pod="openstack/nova-api-0"
Jan 29 08:19:25 crc kubenswrapper[4826]: I0129 08:19:25.001532 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/536b9f40-3e07-4874-857f-d71b27b1bdc7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"536b9f40-3e07-4874-857f-d71b27b1bdc7\") " pod="openstack/nova-api-0"
Jan 29 08:19:25 crc kubenswrapper[4826]: I0129 08:19:25.001690 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/536b9f40-3e07-4874-857f-d71b27b1bdc7-config-data\") pod \"nova-api-0\" (UID: \"536b9f40-3e07-4874-857f-d71b27b1bdc7\") " pod="openstack/nova-api-0"
Jan 29 08:19:25 crc kubenswrapper[4826]: I0129 08:19:25.001772 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/536b9f40-3e07-4874-857f-d71b27b1bdc7-public-tls-certs\") pod \"nova-api-0\" (UID: \"536b9f40-3e07-4874-857f-d71b27b1bdc7\") " pod="openstack/nova-api-0"
Jan 29 08:19:25 crc kubenswrapper[4826]: I0129 08:19:25.001886 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/536b9f40-3e07-4874-857f-d71b27b1bdc7-logs\") pod \"nova-api-0\" (UID: \"536b9f40-3e07-4874-857f-d71b27b1bdc7\") " pod="openstack/nova-api-0"
Jan 29 08:19:25 crc kubenswrapper[4826]: I0129 08:19:25.001941 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/536b9f40-3e07-4874-857f-d71b27b1bdc7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"536b9f40-3e07-4874-857f-d71b27b1bdc7\") " pod="openstack/nova-api-0"
Jan 29 08:19:25 crc kubenswrapper[4826]: I0129 08:19:25.103897 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/536b9f40-3e07-4874-857f-d71b27b1bdc7-config-data\") pod \"nova-api-0\" (UID: \"536b9f40-3e07-4874-857f-d71b27b1bdc7\") " pod="openstack/nova-api-0"
Jan 29 08:19:25 crc kubenswrapper[4826]: I0129 08:19:25.103941 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/536b9f40-3e07-4874-857f-d71b27b1bdc7-public-tls-certs\") pod \"nova-api-0\" (UID: \"536b9f40-3e07-4874-857f-d71b27b1bdc7\") " pod="openstack/nova-api-0"
Jan 29 08:19:25 crc kubenswrapper[4826]: I0129 08:19:25.104006 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/536b9f40-3e07-4874-857f-d71b27b1bdc7-logs\") pod \"nova-api-0\" (UID: \"536b9f40-3e07-4874-857f-d71b27b1bdc7\") " pod="openstack/nova-api-0"
Jan 29 08:19:25 crc kubenswrapper[4826]: I0129 08:19:25.104047 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/536b9f40-3e07-4874-857f-d71b27b1bdc7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"536b9f40-3e07-4874-857f-d71b27b1bdc7\") " pod="openstack/nova-api-0"
Jan 29 08:19:25 crc kubenswrapper[4826]: I0129 08:19:25.104155 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8hgt\" (UniqueName: \"kubernetes.io/projected/536b9f40-3e07-4874-857f-d71b27b1bdc7-kube-api-access-s8hgt\") pod \"nova-api-0\" (UID: \"536b9f40-3e07-4874-857f-d71b27b1bdc7\") " pod="openstack/nova-api-0"
Jan 29 08:19:25 crc kubenswrapper[4826]: I0129 08:19:25.104182 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/536b9f40-3e07-4874-857f-d71b27b1bdc7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"536b9f40-3e07-4874-857f-d71b27b1bdc7\") " pod="openstack/nova-api-0"
Jan 29 08:19:25 crc kubenswrapper[4826]: I0129 08:19:25.104729 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/536b9f40-3e07-4874-857f-d71b27b1bdc7-logs\") pod \"nova-api-0\" (UID: \"536b9f40-3e07-4874-857f-d71b27b1bdc7\") " pod="openstack/nova-api-0"
Jan 29 08:19:25 crc kubenswrapper[4826]: I0129 08:19:25.109230 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/536b9f40-3e07-4874-857f-d71b27b1bdc7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"536b9f40-3e07-4874-857f-d71b27b1bdc7\") " pod="openstack/nova-api-0"
Jan 29 08:19:25 crc kubenswrapper[4826]: I0129 08:19:25.109257 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/536b9f40-3e07-4874-857f-d71b27b1bdc7-public-tls-certs\") pod \"nova-api-0\" (UID: \"536b9f40-3e07-4874-857f-d71b27b1bdc7\") " pod="openstack/nova-api-0"
Jan 29 08:19:25 crc kubenswrapper[4826]: I0129 08:19:25.111624 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/536b9f40-3e07-4874-857f-d71b27b1bdc7-config-data\") pod \"nova-api-0\" (UID: \"536b9f40-3e07-4874-857f-d71b27b1bdc7\") " pod="openstack/nova-api-0"
Jan 29 08:19:25 crc kubenswrapper[4826]: I0129 08:19:25.123501 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/536b9f40-3e07-4874-857f-d71b27b1bdc7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"536b9f40-3e07-4874-857f-d71b27b1bdc7\") " pod="openstack/nova-api-0"
Jan 29 08:19:25 crc kubenswrapper[4826]: I0129 08:19:25.123972 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8hgt\" (UniqueName: \"kubernetes.io/projected/536b9f40-3e07-4874-857f-d71b27b1bdc7-kube-api-access-s8hgt\") pod \"nova-api-0\" (UID: \"536b9f40-3e07-4874-857f-d71b27b1bdc7\") " pod="openstack/nova-api-0"
Jan 29 08:19:25 crc kubenswrapper[4826]: I0129 08:19:25.176614 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 29 08:19:25 crc kubenswrapper[4826]: I0129 08:19:25.637578 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 29 08:19:26 crc kubenswrapper[4826]: I0129 08:19:26.471779 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"536b9f40-3e07-4874-857f-d71b27b1bdc7","Type":"ContainerStarted","Data":"2479a3585910eba55c2897c3b2d13822bc63d58b9f9f577654d2b606f8026b58"}
Jan 29 08:19:26 crc kubenswrapper[4826]: I0129 08:19:26.472162 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"536b9f40-3e07-4874-857f-d71b27b1bdc7","Type":"ContainerStarted","Data":"930969aabf8466e4ab40d9f8ad4c752e7cb54ef66faefc03f028223f2c2f944b"}
Jan 29 08:19:26 crc kubenswrapper[4826]: I0129 08:19:26.472176 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"536b9f40-3e07-4874-857f-d71b27b1bdc7","Type":"ContainerStarted","Data":"101d25fdae8f359d1b9c9921681e8c7675b59888c9c199a2a715bf5957c039c7"}
Jan 29 08:19:26 crc kubenswrapper[4826]: I0129 08:19:26.496986 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.496957946 podStartE2EDuration="2.496957946s" podCreationTimestamp="2026-01-29 08:19:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:19:26.490767843 +0000 UTC m=+5750.352560952" watchObservedRunningTime="2026-01-29 08:19:26.496957946 +0000 UTC m=+5750.358751055"
Jan 29 08:19:26 crc kubenswrapper[4826]: I0129 08:19:26.822375 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca74ca5b-bc95-440c-9d55-370533d21e38" path="/var/lib/kubelet/pods/ca74ca5b-bc95-440c-9d55-370533d21e38/volumes"
Jan 29 08:19:28 crc kubenswrapper[4826]: I0129 08:19:28.070245 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74d5b89cc-k22tf"
Jan 29 08:19:28 crc kubenswrapper[4826]: I0129 08:19:28.145632 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d6c5877d9-p6q7z"]
Jan 29 08:19:28 crc kubenswrapper[4826]: I0129 08:19:28.145836 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d6c5877d9-p6q7z" podUID="2dfdaf43-55a9-45c0-9550-5d191df8ccc7" containerName="dnsmasq-dns" containerID="cri-o://954a77c43a953a94ee25b7f656fe5d6a95bfc9558b03f7c4e7e55f2a4638f754" gracePeriod=10
Jan 29 08:19:28 crc kubenswrapper[4826]: I0129 08:19:28.522225 4826 generic.go:334] "Generic (PLEG): container finished" podID="2dfdaf43-55a9-45c0-9550-5d191df8ccc7" containerID="954a77c43a953a94ee25b7f656fe5d6a95bfc9558b03f7c4e7e55f2a4638f754" exitCode=0
Jan 29 08:19:28 crc kubenswrapper[4826]: I0129 08:19:28.522287 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d6c5877d9-p6q7z" event={"ID":"2dfdaf43-55a9-45c0-9550-5d191df8ccc7","Type":"ContainerDied","Data":"954a77c43a953a94ee25b7f656fe5d6a95bfc9558b03f7c4e7e55f2a4638f754"}
Jan 29 08:19:28 crc kubenswrapper[4826]: I0129 08:19:28.638887 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d6c5877d9-p6q7z"
Jan 29 08:19:28 crc kubenswrapper[4826]: I0129 08:19:28.774328 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2dfdaf43-55a9-45c0-9550-5d191df8ccc7-dns-svc\") pod \"2dfdaf43-55a9-45c0-9550-5d191df8ccc7\" (UID: \"2dfdaf43-55a9-45c0-9550-5d191df8ccc7\") "
Jan 29 08:19:28 crc kubenswrapper[4826]: I0129 08:19:28.774377 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dfdaf43-55a9-45c0-9550-5d191df8ccc7-config\") pod \"2dfdaf43-55a9-45c0-9550-5d191df8ccc7\" (UID: \"2dfdaf43-55a9-45c0-9550-5d191df8ccc7\") "
Jan 29 08:19:28 crc kubenswrapper[4826]: I0129 08:19:28.774477 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2dfdaf43-55a9-45c0-9550-5d191df8ccc7-ovsdbserver-nb\") pod \"2dfdaf43-55a9-45c0-9550-5d191df8ccc7\" (UID: \"2dfdaf43-55a9-45c0-9550-5d191df8ccc7\") "
Jan 29 08:19:28 crc kubenswrapper[4826]: I0129 08:19:28.774594 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzjtr\" (UniqueName: \"kubernetes.io/projected/2dfdaf43-55a9-45c0-9550-5d191df8ccc7-kube-api-access-lzjtr\") pod \"2dfdaf43-55a9-45c0-9550-5d191df8ccc7\" (UID: \"2dfdaf43-55a9-45c0-9550-5d191df8ccc7\") "
Jan 29 08:19:28 crc kubenswrapper[4826]: I0129 08:19:28.775321 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2dfdaf43-55a9-45c0-9550-5d191df8ccc7-ovsdbserver-sb\") pod \"2dfdaf43-55a9-45c0-9550-5d191df8ccc7\" (UID: \"2dfdaf43-55a9-45c0-9550-5d191df8ccc7\") "
Jan 29 08:19:28 crc kubenswrapper[4826]: I0129 08:19:28.782711 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dfdaf43-55a9-45c0-9550-5d191df8ccc7-kube-api-access-lzjtr" (OuterVolumeSpecName: "kube-api-access-lzjtr") pod "2dfdaf43-55a9-45c0-9550-5d191df8ccc7" (UID: "2dfdaf43-55a9-45c0-9550-5d191df8ccc7"). InnerVolumeSpecName "kube-api-access-lzjtr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:19:28 crc kubenswrapper[4826]: I0129 08:19:28.829456 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dfdaf43-55a9-45c0-9550-5d191df8ccc7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2dfdaf43-55a9-45c0-9550-5d191df8ccc7" (UID: "2dfdaf43-55a9-45c0-9550-5d191df8ccc7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 08:19:28 crc kubenswrapper[4826]: I0129 08:19:28.830309 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dfdaf43-55a9-45c0-9550-5d191df8ccc7-config" (OuterVolumeSpecName: "config") pod "2dfdaf43-55a9-45c0-9550-5d191df8ccc7" (UID: "2dfdaf43-55a9-45c0-9550-5d191df8ccc7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 08:19:28 crc kubenswrapper[4826]: I0129 08:19:28.843573 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dfdaf43-55a9-45c0-9550-5d191df8ccc7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2dfdaf43-55a9-45c0-9550-5d191df8ccc7" (UID: "2dfdaf43-55a9-45c0-9550-5d191df8ccc7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 08:19:28 crc kubenswrapper[4826]: I0129 08:19:28.861427 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dfdaf43-55a9-45c0-9550-5d191df8ccc7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2dfdaf43-55a9-45c0-9550-5d191df8ccc7" (UID: "2dfdaf43-55a9-45c0-9550-5d191df8ccc7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 08:19:28 crc kubenswrapper[4826]: I0129 08:19:28.877604 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2dfdaf43-55a9-45c0-9550-5d191df8ccc7-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 29 08:19:28 crc kubenswrapper[4826]: I0129 08:19:28.877658 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dfdaf43-55a9-45c0-9550-5d191df8ccc7-config\") on node \"crc\" DevicePath \"\""
Jan 29 08:19:28 crc kubenswrapper[4826]: I0129 08:19:28.877671 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2dfdaf43-55a9-45c0-9550-5d191df8ccc7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 29 08:19:28 crc kubenswrapper[4826]: I0129 08:19:28.877687 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzjtr\" (UniqueName: \"kubernetes.io/projected/2dfdaf43-55a9-45c0-9550-5d191df8ccc7-kube-api-access-lzjtr\") on node \"crc\" DevicePath \"\""
Jan 29 08:19:28 crc kubenswrapper[4826]: I0129 08:19:28.877698 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2dfdaf43-55a9-45c0-9550-5d191df8ccc7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 29 08:19:29 crc kubenswrapper[4826]: I0129 08:19:29.536398 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d6c5877d9-p6q7z" event={"ID":"2dfdaf43-55a9-45c0-9550-5d191df8ccc7","Type":"ContainerDied","Data":"6f41c433b7d36fb91ec0a6873c57c03d8abf84f4ad222c81cd5911a5f9bf7b48"}
Jan 29 08:19:29 crc kubenswrapper[4826]: I0129 08:19:29.536704 4826 scope.go:117] "RemoveContainer" containerID="954a77c43a953a94ee25b7f656fe5d6a95bfc9558b03f7c4e7e55f2a4638f754"
Jan 29 08:19:29 crc kubenswrapper[4826]: I0129 08:19:29.536637 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d6c5877d9-p6q7z"
Jan 29 08:19:29 crc kubenswrapper[4826]: I0129 08:19:29.564627 4826 scope.go:117] "RemoveContainer" containerID="5fc9608d5135ec98187593936ee76ffcaab7710ba8a2e12320f0603057da1315"
Jan 29 08:19:29 crc kubenswrapper[4826]: I0129 08:19:29.576756 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d6c5877d9-p6q7z"]
Jan 29 08:19:29 crc kubenswrapper[4826]: I0129 08:19:29.587462 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d6c5877d9-p6q7z"]
Jan 29 08:19:30 crc kubenswrapper[4826]: I0129 08:19:30.828060 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dfdaf43-55a9-45c0-9550-5d191df8ccc7" path="/var/lib/kubelet/pods/2dfdaf43-55a9-45c0-9550-5d191df8ccc7/volumes"
Jan 29 08:19:35 crc kubenswrapper[4826]: I0129 08:19:35.177421 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 29 08:19:35 crc kubenswrapper[4826]: I0129 08:19:35.177998 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 29 08:19:36 crc kubenswrapper[4826]: I0129 08:19:36.193513 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="536b9f40-3e07-4874-857f-d71b27b1bdc7" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.96:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 29 08:19:36 crc kubenswrapper[4826]: I0129 08:19:36.194097 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="536b9f40-3e07-4874-857f-d71b27b1bdc7" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.96:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 29 08:19:45 crc kubenswrapper[4826]: I0129 08:19:45.185750 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 29 08:19:45 crc kubenswrapper[4826]: I0129 08:19:45.186495 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 29 08:19:45 crc kubenswrapper[4826]: I0129 08:19:45.186897 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 29 08:19:45 crc kubenswrapper[4826]: I0129 08:19:45.186948 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 29 08:19:45 crc kubenswrapper[4826]: I0129 08:19:45.193705 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 29 08:19:45 crc kubenswrapper[4826]: I0129 08:19:45.194999 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 29 08:19:56 crc kubenswrapper[4826]: I0129 08:19:56.878427 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-8664647f6f-9x6qj"]
Jan 29 08:19:56 crc kubenswrapper[4826]: E0129 08:19:56.879580 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dfdaf43-55a9-45c0-9550-5d191df8ccc7" containerName="init"
Jan 29 08:19:56 crc kubenswrapper[4826]: I0129 08:19:56.879594 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dfdaf43-55a9-45c0-9550-5d191df8ccc7" containerName="init"
Jan 29 08:19:56 crc kubenswrapper[4826]: E0129 08:19:56.879634 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dfdaf43-55a9-45c0-9550-5d191df8ccc7" containerName="dnsmasq-dns"
Jan 29 08:19:56 crc kubenswrapper[4826]: I0129 08:19:56.879639 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dfdaf43-55a9-45c0-9550-5d191df8ccc7" containerName="dnsmasq-dns"
Jan 29 08:19:56 crc kubenswrapper[4826]: I0129 08:19:56.885269 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dfdaf43-55a9-45c0-9550-5d191df8ccc7" containerName="dnsmasq-dns"
Jan 29 08:19:56 crc kubenswrapper[4826]: I0129 08:19:56.886288 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8664647f6f-9x6qj"
Jan 29 08:19:56 crc kubenswrapper[4826]: I0129 08:19:56.891965 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-h72bj"
Jan 29 08:19:56 crc kubenswrapper[4826]: I0129 08:19:56.892107 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon"
Jan 29 08:19:56 crc kubenswrapper[4826]: I0129 08:19:56.892235 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts"
Jan 29 08:19:56 crc kubenswrapper[4826]: I0129 08:19:56.892377 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data"
Jan 29 08:19:56 crc kubenswrapper[4826]: I0129 08:19:56.900804 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8664647f6f-9x6qj"]
Jan 29 08:19:56 crc kubenswrapper[4826]: I0129 08:19:56.915491 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d0693f93-d04f-4620-8cb2-cd679f0166dc-config-data\") pod \"horizon-8664647f6f-9x6qj\" (UID: \"d0693f93-d04f-4620-8cb2-cd679f0166dc\") " pod="openstack/horizon-8664647f6f-9x6qj"
Jan 29 08:19:56 crc kubenswrapper[4826]: I0129 08:19:56.915543 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d0693f93-d04f-4620-8cb2-cd679f0166dc-horizon-secret-key\") pod \"horizon-8664647f6f-9x6qj\" (UID: \"d0693f93-d04f-4620-8cb2-cd679f0166dc\") " pod="openstack/horizon-8664647f6f-9x6qj"
Jan 29 08:19:56 crc kubenswrapper[4826]: I0129 08:19:56.915566 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0693f93-d04f-4620-8cb2-cd679f0166dc-logs\") pod \"horizon-8664647f6f-9x6qj\" (UID: \"d0693f93-d04f-4620-8cb2-cd679f0166dc\") " pod="openstack/horizon-8664647f6f-9x6qj"
Jan 29 08:19:56 crc kubenswrapper[4826]: I0129 08:19:56.915604 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsqns\" (UniqueName: \"kubernetes.io/projected/d0693f93-d04f-4620-8cb2-cd679f0166dc-kube-api-access-wsqns\") pod \"horizon-8664647f6f-9x6qj\" (UID: \"d0693f93-d04f-4620-8cb2-cd679f0166dc\") " pod="openstack/horizon-8664647f6f-9x6qj"
Jan 29 08:19:56 crc kubenswrapper[4826]: I0129 08:19:56.915650 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d0693f93-d04f-4620-8cb2-cd679f0166dc-scripts\") pod \"horizon-8664647f6f-9x6qj\" (UID: \"d0693f93-d04f-4620-8cb2-cd679f0166dc\") " pod="openstack/horizon-8664647f6f-9x6qj"
Jan 29 08:19:56 crc kubenswrapper[4826]: I0129 08:19:56.935090 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 08:19:56 crc kubenswrapper[4826]: I0129 08:19:56.935748 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6107bcc9-cfe4-45d2-a776-f3633688ae3e" containerName="glance-httpd" containerID="cri-o://a82e87fef18e2a222452a365ff82b5a79b141a384906bab83ae39e7fbc039b3b" gracePeriod=30
Jan 29 08:19:56 crc kubenswrapper[4826]: I0129 08:19:56.935348 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6107bcc9-cfe4-45d2-a776-f3633688ae3e" containerName="glance-log" containerID="cri-o://70efcc075e7671c1480a5e9c11bc6dfe2f862a918bef19def5d664776e4893bb" gracePeriod=30
Jan 29 08:19:56 crc kubenswrapper[4826]: I0129 08:19:56.988390 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-74498b466f-q8jkt"]
Jan 29 08:19:56 crc kubenswrapper[4826]: I0129 08:19:56.989840 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-74498b466f-q8jkt"
Jan 29 08:19:57 crc kubenswrapper[4826]: I0129 08:19:57.005719 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-74498b466f-q8jkt"]
Jan 29 08:19:57 crc kubenswrapper[4826]: I0129 08:19:57.017051 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsqns\" (UniqueName: \"kubernetes.io/projected/d0693f93-d04f-4620-8cb2-cd679f0166dc-kube-api-access-wsqns\") pod \"horizon-8664647f6f-9x6qj\" (UID: \"d0693f93-d04f-4620-8cb2-cd679f0166dc\") " pod="openstack/horizon-8664647f6f-9x6qj"
Jan 29 08:19:57 crc kubenswrapper[4826]: I0129 08:19:57.017108 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/845b497c-75b9-4648-97e4-2a79a6c8bd2d-logs\") pod \"horizon-74498b466f-q8jkt\" (UID: \"845b497c-75b9-4648-97e4-2a79a6c8bd2d\") " pod="openstack/horizon-74498b466f-q8jkt"
Jan 29 08:19:57 crc kubenswrapper[4826]: I0129 08:19:57.017141 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d0693f93-d04f-4620-8cb2-cd679f0166dc-scripts\") pod \"horizon-8664647f6f-9x6qj\" (UID: \"d0693f93-d04f-4620-8cb2-cd679f0166dc\") " pod="openstack/horizon-8664647f6f-9x6qj"
Jan 29 08:19:57 crc kubenswrapper[4826]: I0129 08:19:57.017189 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/845b497c-75b9-4648-97e4-2a79a6c8bd2d-config-data\") pod \"horizon-74498b466f-q8jkt\" (UID: \"845b497c-75b9-4648-97e4-2a79a6c8bd2d\") " pod="openstack/horizon-74498b466f-q8jkt"
Jan 29 08:19:57 crc kubenswrapper[4826]: I0129 08:19:57.017239 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97twd\" (UniqueName: \"kubernetes.io/projected/845b497c-75b9-4648-97e4-2a79a6c8bd2d-kube-api-access-97twd\") pod \"horizon-74498b466f-q8jkt\" (UID: \"845b497c-75b9-4648-97e4-2a79a6c8bd2d\") " pod="openstack/horizon-74498b466f-q8jkt"
Jan 29 08:19:57 crc kubenswrapper[4826]: I0129 08:19:57.017260 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/845b497c-75b9-4648-97e4-2a79a6c8bd2d-horizon-secret-key\") pod \"horizon-74498b466f-q8jkt\" (UID: \"845b497c-75b9-4648-97e4-2a79a6c8bd2d\") " pod="openstack/horizon-74498b466f-q8jkt"
Jan 29 08:19:57 crc kubenswrapper[4826]: I0129 08:19:57.017320 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d0693f93-d04f-4620-8cb2-cd679f0166dc-config-data\") pod \"horizon-8664647f6f-9x6qj\" (UID: \"d0693f93-d04f-4620-8cb2-cd679f0166dc\") " pod="openstack/horizon-8664647f6f-9x6qj"
Jan 29 08:19:57 crc kubenswrapper[4826]: I0129 08:19:57.017336 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/845b497c-75b9-4648-97e4-2a79a6c8bd2d-scripts\") pod \"horizon-74498b466f-q8jkt\" (UID: \"845b497c-75b9-4648-97e4-2a79a6c8bd2d\") " pod="openstack/horizon-74498b466f-q8jkt"
Jan 29 08:19:57 crc kubenswrapper[4826]: I0129 08:19:57.017370 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d0693f93-d04f-4620-8cb2-cd679f0166dc-horizon-secret-key\") pod \"horizon-8664647f6f-9x6qj\" (UID: \"d0693f93-d04f-4620-8cb2-cd679f0166dc\") " pod="openstack/horizon-8664647f6f-9x6qj"
Jan 29 08:19:57 crc kubenswrapper[4826]: I0129 08:19:57.017385 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0693f93-d04f-4620-8cb2-cd679f0166dc-logs\") pod \"horizon-8664647f6f-9x6qj\" (UID: \"d0693f93-d04f-4620-8cb2-cd679f0166dc\") " pod="openstack/horizon-8664647f6f-9x6qj"
Jan 29 08:19:57 crc kubenswrapper[4826]: I0129 08:19:57.017762 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0693f93-d04f-4620-8cb2-cd679f0166dc-logs\") pod \"horizon-8664647f6f-9x6qj\" (UID: \"d0693f93-d04f-4620-8cb2-cd679f0166dc\") " pod="openstack/horizon-8664647f6f-9x6qj"
Jan 29 08:19:57 crc kubenswrapper[4826]: I0129 08:19:57.018555 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d0693f93-d04f-4620-8cb2-cd679f0166dc-scripts\") pod \"horizon-8664647f6f-9x6qj\" (UID: \"d0693f93-d04f-4620-8cb2-cd679f0166dc\") " pod="openstack/horizon-8664647f6f-9x6qj"
Jan 29 08:19:57 crc kubenswrapper[4826]: I0129 08:19:57.022075 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d0693f93-d04f-4620-8cb2-cd679f0166dc-config-data\") pod \"horizon-8664647f6f-9x6qj\" (UID: \"d0693f93-d04f-4620-8cb2-cd679f0166dc\") " pod="openstack/horizon-8664647f6f-9x6qj"
Jan 29 08:19:57 crc kubenswrapper[4826]: I0129 08:19:57.029043 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d0693f93-d04f-4620-8cb2-cd679f0166dc-horizon-secret-key\") pod \"horizon-8664647f6f-9x6qj\" (UID: \"d0693f93-d04f-4620-8cb2-cd679f0166dc\") " pod="openstack/horizon-8664647f6f-9x6qj"
Jan 29 08:19:57 crc kubenswrapper[4826]: I0129 08:19:57.040509 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsqns\" (UniqueName: \"kubernetes.io/projected/d0693f93-d04f-4620-8cb2-cd679f0166dc-kube-api-access-wsqns\") pod \"horizon-8664647f6f-9x6qj\" (UID: \"d0693f93-d04f-4620-8cb2-cd679f0166dc\") " pod="openstack/horizon-8664647f6f-9x6qj"
Jan 29 08:19:57 crc kubenswrapper[4826]: I0129 08:19:57.044247 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 29 08:19:57 crc kubenswrapper[4826]: I0129 08:19:57.044481 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="cb93d33e-df3f-4a16-b0c4-8e422146a2f9" containerName="glance-log" containerID="cri-o://a07837ea56a1b161e2d92732fef846dfc042ffef2170ec26a8b1cde01ac6c2d4" gracePeriod=30
Jan 29 08:19:57 crc kubenswrapper[4826]: I0129 08:19:57.044903 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="cb93d33e-df3f-4a16-b0c4-8e422146a2f9" containerName="glance-httpd" containerID="cri-o://deb10a6c627085733a33312d760ab4b9a740e5aae8b591a4e004914000ea21b2" gracePeriod=30
Jan 29 08:19:57 crc kubenswrapper[4826]: I0129 08:19:57.119125 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/845b497c-75b9-4648-97e4-2a79a6c8bd2d-config-data\") pod \"horizon-74498b466f-q8jkt\" (UID: \"845b497c-75b9-4648-97e4-2a79a6c8bd2d\") " pod="openstack/horizon-74498b466f-q8jkt"
Jan 29 08:19:57 crc kubenswrapper[4826]: I0129 08:19:57.120178 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/845b497c-75b9-4648-97e4-2a79a6c8bd2d-config-data\") pod \"horizon-74498b466f-q8jkt\" (UID: \"845b497c-75b9-4648-97e4-2a79a6c8bd2d\") " pod="openstack/horizon-74498b466f-q8jkt"
Jan 29 08:19:57 crc kubenswrapper[4826]: I0129 08:19:57.120567 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97twd\" (UniqueName:
\"kubernetes.io/projected/845b497c-75b9-4648-97e4-2a79a6c8bd2d-kube-api-access-97twd\") pod \"horizon-74498b466f-q8jkt\" (UID: \"845b497c-75b9-4648-97e4-2a79a6c8bd2d\") " pod="openstack/horizon-74498b466f-q8jkt" Jan 29 08:19:57 crc kubenswrapper[4826]: I0129 08:19:57.120609 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/845b497c-75b9-4648-97e4-2a79a6c8bd2d-horizon-secret-key\") pod \"horizon-74498b466f-q8jkt\" (UID: \"845b497c-75b9-4648-97e4-2a79a6c8bd2d\") " pod="openstack/horizon-74498b466f-q8jkt" Jan 29 08:19:57 crc kubenswrapper[4826]: I0129 08:19:57.120975 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/845b497c-75b9-4648-97e4-2a79a6c8bd2d-scripts\") pod \"horizon-74498b466f-q8jkt\" (UID: \"845b497c-75b9-4648-97e4-2a79a6c8bd2d\") " pod="openstack/horizon-74498b466f-q8jkt" Jan 29 08:19:57 crc kubenswrapper[4826]: I0129 08:19:57.121057 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/845b497c-75b9-4648-97e4-2a79a6c8bd2d-logs\") pod \"horizon-74498b466f-q8jkt\" (UID: \"845b497c-75b9-4648-97e4-2a79a6c8bd2d\") " pod="openstack/horizon-74498b466f-q8jkt" Jan 29 08:19:57 crc kubenswrapper[4826]: I0129 08:19:57.121380 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/845b497c-75b9-4648-97e4-2a79a6c8bd2d-logs\") pod \"horizon-74498b466f-q8jkt\" (UID: \"845b497c-75b9-4648-97e4-2a79a6c8bd2d\") " pod="openstack/horizon-74498b466f-q8jkt" Jan 29 08:19:57 crc kubenswrapper[4826]: I0129 08:19:57.121576 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/845b497c-75b9-4648-97e4-2a79a6c8bd2d-scripts\") pod \"horizon-74498b466f-q8jkt\" (UID: \"845b497c-75b9-4648-97e4-2a79a6c8bd2d\") " 
pod="openstack/horizon-74498b466f-q8jkt" Jan 29 08:19:57 crc kubenswrapper[4826]: I0129 08:19:57.123083 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/845b497c-75b9-4648-97e4-2a79a6c8bd2d-horizon-secret-key\") pod \"horizon-74498b466f-q8jkt\" (UID: \"845b497c-75b9-4648-97e4-2a79a6c8bd2d\") " pod="openstack/horizon-74498b466f-q8jkt" Jan 29 08:19:57 crc kubenswrapper[4826]: I0129 08:19:57.134004 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97twd\" (UniqueName: \"kubernetes.io/projected/845b497c-75b9-4648-97e4-2a79a6c8bd2d-kube-api-access-97twd\") pod \"horizon-74498b466f-q8jkt\" (UID: \"845b497c-75b9-4648-97e4-2a79a6c8bd2d\") " pod="openstack/horizon-74498b466f-q8jkt" Jan 29 08:19:57 crc kubenswrapper[4826]: I0129 08:19:57.220463 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8664647f6f-9x6qj" Jan 29 08:19:57 crc kubenswrapper[4826]: I0129 08:19:57.309480 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-74498b466f-q8jkt" Jan 29 08:19:57 crc kubenswrapper[4826]: I0129 08:19:57.676322 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8664647f6f-9x6qj"] Jan 29 08:19:57 crc kubenswrapper[4826]: I0129 08:19:57.809637 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-74498b466f-q8jkt"] Jan 29 08:19:57 crc kubenswrapper[4826]: W0129 08:19:57.811501 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod845b497c_75b9_4648_97e4_2a79a6c8bd2d.slice/crio-d2b3c98c3ac7ecc4dc29741da765964bd1a6bce67e3ce46d568ba4cc50d86d48 WatchSource:0}: Error finding container d2b3c98c3ac7ecc4dc29741da765964bd1a6bce67e3ce46d568ba4cc50d86d48: Status 404 returned error can't find the container with id d2b3c98c3ac7ecc4dc29741da765964bd1a6bce67e3ce46d568ba4cc50d86d48 Jan 29 08:19:57 crc kubenswrapper[4826]: I0129 08:19:57.885953 4826 generic.go:334] "Generic (PLEG): container finished" podID="6107bcc9-cfe4-45d2-a776-f3633688ae3e" containerID="70efcc075e7671c1480a5e9c11bc6dfe2f862a918bef19def5d664776e4893bb" exitCode=143 Jan 29 08:19:57 crc kubenswrapper[4826]: I0129 08:19:57.886016 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6107bcc9-cfe4-45d2-a776-f3633688ae3e","Type":"ContainerDied","Data":"70efcc075e7671c1480a5e9c11bc6dfe2f862a918bef19def5d664776e4893bb"} Jan 29 08:19:57 crc kubenswrapper[4826]: I0129 08:19:57.887608 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74498b466f-q8jkt" event={"ID":"845b497c-75b9-4648-97e4-2a79a6c8bd2d","Type":"ContainerStarted","Data":"d2b3c98c3ac7ecc4dc29741da765964bd1a6bce67e3ce46d568ba4cc50d86d48"} Jan 29 08:19:57 crc kubenswrapper[4826]: I0129 08:19:57.888615 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8664647f6f-9x6qj" 
event={"ID":"d0693f93-d04f-4620-8cb2-cd679f0166dc","Type":"ContainerStarted","Data":"64d40e38ee4063556e17fbb15557dcb4bae97ba62a4fadb10cf15e34b96fc66b"} Jan 29 08:19:57 crc kubenswrapper[4826]: I0129 08:19:57.892587 4826 generic.go:334] "Generic (PLEG): container finished" podID="cb93d33e-df3f-4a16-b0c4-8e422146a2f9" containerID="a07837ea56a1b161e2d92732fef846dfc042ffef2170ec26a8b1cde01ac6c2d4" exitCode=143 Jan 29 08:19:57 crc kubenswrapper[4826]: I0129 08:19:57.892622 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cb93d33e-df3f-4a16-b0c4-8e422146a2f9","Type":"ContainerDied","Data":"a07837ea56a1b161e2d92732fef846dfc042ffef2170ec26a8b1cde01ac6c2d4"} Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.190747 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-74498b466f-q8jkt"] Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.228011 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6fdb98f6d-cnddf"] Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.229907 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6fdb98f6d-cnddf" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.233519 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.242744 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6fdb98f6d-cnddf"] Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.263903 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27cc8676-afa1-421c-a9fb-f4857de61a71-logs\") pod \"horizon-6fdb98f6d-cnddf\" (UID: \"27cc8676-afa1-421c-a9fb-f4857de61a71\") " pod="openstack/horizon-6fdb98f6d-cnddf" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.264031 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/27cc8676-afa1-421c-a9fb-f4857de61a71-config-data\") pod \"horizon-6fdb98f6d-cnddf\" (UID: \"27cc8676-afa1-421c-a9fb-f4857de61a71\") " pod="openstack/horizon-6fdb98f6d-cnddf" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.264048 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/27cc8676-afa1-421c-a9fb-f4857de61a71-horizon-tls-certs\") pod \"horizon-6fdb98f6d-cnddf\" (UID: \"27cc8676-afa1-421c-a9fb-f4857de61a71\") " pod="openstack/horizon-6fdb98f6d-cnddf" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.264068 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27cc8676-afa1-421c-a9fb-f4857de61a71-combined-ca-bundle\") pod \"horizon-6fdb98f6d-cnddf\" (UID: \"27cc8676-afa1-421c-a9fb-f4857de61a71\") " pod="openstack/horizon-6fdb98f6d-cnddf" Jan 29 08:19:59 crc kubenswrapper[4826]: 
I0129 08:19:59.264102 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzg42\" (UniqueName: \"kubernetes.io/projected/27cc8676-afa1-421c-a9fb-f4857de61a71-kube-api-access-bzg42\") pod \"horizon-6fdb98f6d-cnddf\" (UID: \"27cc8676-afa1-421c-a9fb-f4857de61a71\") " pod="openstack/horizon-6fdb98f6d-cnddf" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.264174 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/27cc8676-afa1-421c-a9fb-f4857de61a71-horizon-secret-key\") pod \"horizon-6fdb98f6d-cnddf\" (UID: \"27cc8676-afa1-421c-a9fb-f4857de61a71\") " pod="openstack/horizon-6fdb98f6d-cnddf" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.264280 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/27cc8676-afa1-421c-a9fb-f4857de61a71-scripts\") pod \"horizon-6fdb98f6d-cnddf\" (UID: \"27cc8676-afa1-421c-a9fb-f4857de61a71\") " pod="openstack/horizon-6fdb98f6d-cnddf" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.297241 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-8664647f6f-9x6qj"] Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.331142 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-58dcf5df6-kbrdh"] Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.332590 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-58dcf5df6-kbrdh" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.358190 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-58dcf5df6-kbrdh"] Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.366493 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/27cc8676-afa1-421c-a9fb-f4857de61a71-horizon-secret-key\") pod \"horizon-6fdb98f6d-cnddf\" (UID: \"27cc8676-afa1-421c-a9fb-f4857de61a71\") " pod="openstack/horizon-6fdb98f6d-cnddf" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.366553 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/27cc8676-afa1-421c-a9fb-f4857de61a71-scripts\") pod \"horizon-6fdb98f6d-cnddf\" (UID: \"27cc8676-afa1-421c-a9fb-f4857de61a71\") " pod="openstack/horizon-6fdb98f6d-cnddf" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.366594 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1e011d3-9762-44e1-8137-2432569a561e-horizon-tls-certs\") pod \"horizon-58dcf5df6-kbrdh\" (UID: \"c1e011d3-9762-44e1-8137-2432569a561e\") " pod="openstack/horizon-58dcf5df6-kbrdh" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.366626 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nscff\" (UniqueName: \"kubernetes.io/projected/c1e011d3-9762-44e1-8137-2432569a561e-kube-api-access-nscff\") pod \"horizon-58dcf5df6-kbrdh\" (UID: \"c1e011d3-9762-44e1-8137-2432569a561e\") " pod="openstack/horizon-58dcf5df6-kbrdh" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.366652 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/c1e011d3-9762-44e1-8137-2432569a561e-config-data\") pod \"horizon-58dcf5df6-kbrdh\" (UID: \"c1e011d3-9762-44e1-8137-2432569a561e\") " pod="openstack/horizon-58dcf5df6-kbrdh" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.366687 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27cc8676-afa1-421c-a9fb-f4857de61a71-logs\") pod \"horizon-6fdb98f6d-cnddf\" (UID: \"27cc8676-afa1-421c-a9fb-f4857de61a71\") " pod="openstack/horizon-6fdb98f6d-cnddf" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.366741 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1e011d3-9762-44e1-8137-2432569a561e-logs\") pod \"horizon-58dcf5df6-kbrdh\" (UID: \"c1e011d3-9762-44e1-8137-2432569a561e\") " pod="openstack/horizon-58dcf5df6-kbrdh" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.366782 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c1e011d3-9762-44e1-8137-2432569a561e-horizon-secret-key\") pod \"horizon-58dcf5df6-kbrdh\" (UID: \"c1e011d3-9762-44e1-8137-2432569a561e\") " pod="openstack/horizon-58dcf5df6-kbrdh" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.366812 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e011d3-9762-44e1-8137-2432569a561e-combined-ca-bundle\") pod \"horizon-58dcf5df6-kbrdh\" (UID: \"c1e011d3-9762-44e1-8137-2432569a561e\") " pod="openstack/horizon-58dcf5df6-kbrdh" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.366849 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/27cc8676-afa1-421c-a9fb-f4857de61a71-config-data\") pod 
\"horizon-6fdb98f6d-cnddf\" (UID: \"27cc8676-afa1-421c-a9fb-f4857de61a71\") " pod="openstack/horizon-6fdb98f6d-cnddf" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.366874 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/27cc8676-afa1-421c-a9fb-f4857de61a71-horizon-tls-certs\") pod \"horizon-6fdb98f6d-cnddf\" (UID: \"27cc8676-afa1-421c-a9fb-f4857de61a71\") " pod="openstack/horizon-6fdb98f6d-cnddf" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.366900 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27cc8676-afa1-421c-a9fb-f4857de61a71-combined-ca-bundle\") pod \"horizon-6fdb98f6d-cnddf\" (UID: \"27cc8676-afa1-421c-a9fb-f4857de61a71\") " pod="openstack/horizon-6fdb98f6d-cnddf" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.366926 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzg42\" (UniqueName: \"kubernetes.io/projected/27cc8676-afa1-421c-a9fb-f4857de61a71-kube-api-access-bzg42\") pod \"horizon-6fdb98f6d-cnddf\" (UID: \"27cc8676-afa1-421c-a9fb-f4857de61a71\") " pod="openstack/horizon-6fdb98f6d-cnddf" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.366958 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1e011d3-9762-44e1-8137-2432569a561e-scripts\") pod \"horizon-58dcf5df6-kbrdh\" (UID: \"c1e011d3-9762-44e1-8137-2432569a561e\") " pod="openstack/horizon-58dcf5df6-kbrdh" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.368411 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/27cc8676-afa1-421c-a9fb-f4857de61a71-scripts\") pod \"horizon-6fdb98f6d-cnddf\" (UID: \"27cc8676-afa1-421c-a9fb-f4857de61a71\") " 
pod="openstack/horizon-6fdb98f6d-cnddf" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.369977 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27cc8676-afa1-421c-a9fb-f4857de61a71-logs\") pod \"horizon-6fdb98f6d-cnddf\" (UID: \"27cc8676-afa1-421c-a9fb-f4857de61a71\") " pod="openstack/horizon-6fdb98f6d-cnddf" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.371356 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/27cc8676-afa1-421c-a9fb-f4857de61a71-config-data\") pod \"horizon-6fdb98f6d-cnddf\" (UID: \"27cc8676-afa1-421c-a9fb-f4857de61a71\") " pod="openstack/horizon-6fdb98f6d-cnddf" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.373027 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/27cc8676-afa1-421c-a9fb-f4857de61a71-horizon-tls-certs\") pod \"horizon-6fdb98f6d-cnddf\" (UID: \"27cc8676-afa1-421c-a9fb-f4857de61a71\") " pod="openstack/horizon-6fdb98f6d-cnddf" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.373312 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/27cc8676-afa1-421c-a9fb-f4857de61a71-horizon-secret-key\") pod \"horizon-6fdb98f6d-cnddf\" (UID: \"27cc8676-afa1-421c-a9fb-f4857de61a71\") " pod="openstack/horizon-6fdb98f6d-cnddf" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.392498 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27cc8676-afa1-421c-a9fb-f4857de61a71-combined-ca-bundle\") pod \"horizon-6fdb98f6d-cnddf\" (UID: \"27cc8676-afa1-421c-a9fb-f4857de61a71\") " pod="openstack/horizon-6fdb98f6d-cnddf" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.400952 4826 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-bzg42\" (UniqueName: \"kubernetes.io/projected/27cc8676-afa1-421c-a9fb-f4857de61a71-kube-api-access-bzg42\") pod \"horizon-6fdb98f6d-cnddf\" (UID: \"27cc8676-afa1-421c-a9fb-f4857de61a71\") " pod="openstack/horizon-6fdb98f6d-cnddf" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.468479 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1e011d3-9762-44e1-8137-2432569a561e-logs\") pod \"horizon-58dcf5df6-kbrdh\" (UID: \"c1e011d3-9762-44e1-8137-2432569a561e\") " pod="openstack/horizon-58dcf5df6-kbrdh" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.468532 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c1e011d3-9762-44e1-8137-2432569a561e-horizon-secret-key\") pod \"horizon-58dcf5df6-kbrdh\" (UID: \"c1e011d3-9762-44e1-8137-2432569a561e\") " pod="openstack/horizon-58dcf5df6-kbrdh" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.468552 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e011d3-9762-44e1-8137-2432569a561e-combined-ca-bundle\") pod \"horizon-58dcf5df6-kbrdh\" (UID: \"c1e011d3-9762-44e1-8137-2432569a561e\") " pod="openstack/horizon-58dcf5df6-kbrdh" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.468593 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1e011d3-9762-44e1-8137-2432569a561e-scripts\") pod \"horizon-58dcf5df6-kbrdh\" (UID: \"c1e011d3-9762-44e1-8137-2432569a561e\") " pod="openstack/horizon-58dcf5df6-kbrdh" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.468656 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c1e011d3-9762-44e1-8137-2432569a561e-horizon-tls-certs\") pod \"horizon-58dcf5df6-kbrdh\" (UID: \"c1e011d3-9762-44e1-8137-2432569a561e\") " pod="openstack/horizon-58dcf5df6-kbrdh" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.468678 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nscff\" (UniqueName: \"kubernetes.io/projected/c1e011d3-9762-44e1-8137-2432569a561e-kube-api-access-nscff\") pod \"horizon-58dcf5df6-kbrdh\" (UID: \"c1e011d3-9762-44e1-8137-2432569a561e\") " pod="openstack/horizon-58dcf5df6-kbrdh" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.468697 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1e011d3-9762-44e1-8137-2432569a561e-config-data\") pod \"horizon-58dcf5df6-kbrdh\" (UID: \"c1e011d3-9762-44e1-8137-2432569a561e\") " pod="openstack/horizon-58dcf5df6-kbrdh" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.469762 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1e011d3-9762-44e1-8137-2432569a561e-config-data\") pod \"horizon-58dcf5df6-kbrdh\" (UID: \"c1e011d3-9762-44e1-8137-2432569a561e\") " pod="openstack/horizon-58dcf5df6-kbrdh" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.469978 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1e011d3-9762-44e1-8137-2432569a561e-logs\") pod \"horizon-58dcf5df6-kbrdh\" (UID: \"c1e011d3-9762-44e1-8137-2432569a561e\") " pod="openstack/horizon-58dcf5df6-kbrdh" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.472432 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c1e011d3-9762-44e1-8137-2432569a561e-horizon-secret-key\") pod \"horizon-58dcf5df6-kbrdh\" (UID: 
\"c1e011d3-9762-44e1-8137-2432569a561e\") " pod="openstack/horizon-58dcf5df6-kbrdh" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.473592 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1e011d3-9762-44e1-8137-2432569a561e-scripts\") pod \"horizon-58dcf5df6-kbrdh\" (UID: \"c1e011d3-9762-44e1-8137-2432569a561e\") " pod="openstack/horizon-58dcf5df6-kbrdh" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.475972 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e011d3-9762-44e1-8137-2432569a561e-combined-ca-bundle\") pod \"horizon-58dcf5df6-kbrdh\" (UID: \"c1e011d3-9762-44e1-8137-2432569a561e\") " pod="openstack/horizon-58dcf5df6-kbrdh" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.480500 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1e011d3-9762-44e1-8137-2432569a561e-horizon-tls-certs\") pod \"horizon-58dcf5df6-kbrdh\" (UID: \"c1e011d3-9762-44e1-8137-2432569a561e\") " pod="openstack/horizon-58dcf5df6-kbrdh" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.488186 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nscff\" (UniqueName: \"kubernetes.io/projected/c1e011d3-9762-44e1-8137-2432569a561e-kube-api-access-nscff\") pod \"horizon-58dcf5df6-kbrdh\" (UID: \"c1e011d3-9762-44e1-8137-2432569a561e\") " pod="openstack/horizon-58dcf5df6-kbrdh" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.564074 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6fdb98f6d-cnddf" Jan 29 08:19:59 crc kubenswrapper[4826]: I0129 08:19:59.653248 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-58dcf5df6-kbrdh" Jan 29 08:20:00 crc kubenswrapper[4826]: I0129 08:20:00.102961 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6fdb98f6d-cnddf"] Jan 29 08:20:00 crc kubenswrapper[4826]: W0129 08:20:00.113796 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27cc8676_afa1_421c_a9fb_f4857de61a71.slice/crio-6780af812e8fe28c72192f27770f83dcbfb58d0f9ae3424941c2feaa142498a3 WatchSource:0}: Error finding container 6780af812e8fe28c72192f27770f83dcbfb58d0f9ae3424941c2feaa142498a3: Status 404 returned error can't find the container with id 6780af812e8fe28c72192f27770f83dcbfb58d0f9ae3424941c2feaa142498a3 Jan 29 08:20:00 crc kubenswrapper[4826]: I0129 08:20:00.188225 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-58dcf5df6-kbrdh"] Jan 29 08:20:00 crc kubenswrapper[4826]: W0129 08:20:00.230473 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1e011d3_9762_44e1_8137_2432569a561e.slice/crio-14028152ea29de61ef14c755255dd798f970190acab07278934bed61b24a9376 WatchSource:0}: Error finding container 14028152ea29de61ef14c755255dd798f970190acab07278934bed61b24a9376: Status 404 returned error can't find the container with id 14028152ea29de61ef14c755255dd798f970190acab07278934bed61b24a9376 Jan 29 08:20:00 crc kubenswrapper[4826]: I0129 08:20:00.902852 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 08:20:00 crc kubenswrapper[4826]: I0129 08:20:00.980570 4826 generic.go:334] "Generic (PLEG): container finished" podID="cb93d33e-df3f-4a16-b0c4-8e422146a2f9" containerID="deb10a6c627085733a33312d760ab4b9a740e5aae8b591a4e004914000ea21b2" exitCode=0 Jan 29 08:20:00 crc kubenswrapper[4826]: I0129 08:20:00.980617 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 08:20:00 crc kubenswrapper[4826]: I0129 08:20:00.980676 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cb93d33e-df3f-4a16-b0c4-8e422146a2f9","Type":"ContainerDied","Data":"deb10a6c627085733a33312d760ab4b9a740e5aae8b591a4e004914000ea21b2"} Jan 29 08:20:00 crc kubenswrapper[4826]: I0129 08:20:00.980713 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cb93d33e-df3f-4a16-b0c4-8e422146a2f9","Type":"ContainerDied","Data":"1cb2fa7c586d203a6e9955ba7bd1997b7f6c99097d81dc9e8d16b7eee008fb56"} Jan 29 08:20:00 crc kubenswrapper[4826]: I0129 08:20:00.980730 4826 scope.go:117] "RemoveContainer" containerID="deb10a6c627085733a33312d760ab4b9a740e5aae8b591a4e004914000ea21b2" Jan 29 08:20:00 crc kubenswrapper[4826]: I0129 08:20:00.982758 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6fdb98f6d-cnddf" event={"ID":"27cc8676-afa1-421c-a9fb-f4857de61a71","Type":"ContainerStarted","Data":"6780af812e8fe28c72192f27770f83dcbfb58d0f9ae3424941c2feaa142498a3"} Jan 29 08:20:00 crc kubenswrapper[4826]: I0129 08:20:00.984385 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58dcf5df6-kbrdh" event={"ID":"c1e011d3-9762-44e1-8137-2432569a561e","Type":"ContainerStarted","Data":"14028152ea29de61ef14c755255dd798f970190acab07278934bed61b24a9376"} Jan 29 08:20:00 crc kubenswrapper[4826]: I0129 
08:20:00.990666 4826 generic.go:334] "Generic (PLEG): container finished" podID="6107bcc9-cfe4-45d2-a776-f3633688ae3e" containerID="a82e87fef18e2a222452a365ff82b5a79b141a384906bab83ae39e7fbc039b3b" exitCode=0 Jan 29 08:20:00 crc kubenswrapper[4826]: I0129 08:20:00.990731 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6107bcc9-cfe4-45d2-a776-f3633688ae3e","Type":"ContainerDied","Data":"a82e87fef18e2a222452a365ff82b5a79b141a384906bab83ae39e7fbc039b3b"} Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.008470 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-scripts\") pod \"cb93d33e-df3f-4a16-b0c4-8e422146a2f9\" (UID: \"cb93d33e-df3f-4a16-b0c4-8e422146a2f9\") " Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.008533 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6s6d6\" (UniqueName: \"kubernetes.io/projected/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-kube-api-access-6s6d6\") pod \"cb93d33e-df3f-4a16-b0c4-8e422146a2f9\" (UID: \"cb93d33e-df3f-4a16-b0c4-8e422146a2f9\") " Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.008600 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-internal-tls-certs\") pod \"cb93d33e-df3f-4a16-b0c4-8e422146a2f9\" (UID: \"cb93d33e-df3f-4a16-b0c4-8e422146a2f9\") " Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.008636 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-config-data\") pod \"cb93d33e-df3f-4a16-b0c4-8e422146a2f9\" (UID: \"cb93d33e-df3f-4a16-b0c4-8e422146a2f9\") " Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.008656 4826 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-logs\") pod \"cb93d33e-df3f-4a16-b0c4-8e422146a2f9\" (UID: \"cb93d33e-df3f-4a16-b0c4-8e422146a2f9\") " Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.008807 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-httpd-run\") pod \"cb93d33e-df3f-4a16-b0c4-8e422146a2f9\" (UID: \"cb93d33e-df3f-4a16-b0c4-8e422146a2f9\") " Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.008862 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-combined-ca-bundle\") pod \"cb93d33e-df3f-4a16-b0c4-8e422146a2f9\" (UID: \"cb93d33e-df3f-4a16-b0c4-8e422146a2f9\") " Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.010391 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-logs" (OuterVolumeSpecName: "logs") pod "cb93d33e-df3f-4a16-b0c4-8e422146a2f9" (UID: "cb93d33e-df3f-4a16-b0c4-8e422146a2f9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.010914 4826 scope.go:117] "RemoveContainer" containerID="a07837ea56a1b161e2d92732fef846dfc042ffef2170ec26a8b1cde01ac6c2d4" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.011493 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "cb93d33e-df3f-4a16-b0c4-8e422146a2f9" (UID: "cb93d33e-df3f-4a16-b0c4-8e422146a2f9"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.029962 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-scripts" (OuterVolumeSpecName: "scripts") pod "cb93d33e-df3f-4a16-b0c4-8e422146a2f9" (UID: "cb93d33e-df3f-4a16-b0c4-8e422146a2f9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.034075 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-kube-api-access-6s6d6" (OuterVolumeSpecName: "kube-api-access-6s6d6") pod "cb93d33e-df3f-4a16-b0c4-8e422146a2f9" (UID: "cb93d33e-df3f-4a16-b0c4-8e422146a2f9"). InnerVolumeSpecName "kube-api-access-6s6d6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.075209 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cb93d33e-df3f-4a16-b0c4-8e422146a2f9" (UID: "cb93d33e-df3f-4a16-b0c4-8e422146a2f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.076556 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "cb93d33e-df3f-4a16-b0c4-8e422146a2f9" (UID: "cb93d33e-df3f-4a16-b0c4-8e422146a2f9"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.088504 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-config-data" (OuterVolumeSpecName: "config-data") pod "cb93d33e-df3f-4a16-b0c4-8e422146a2f9" (UID: "cb93d33e-df3f-4a16-b0c4-8e422146a2f9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.111051 4826 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.111090 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.111106 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.111119 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6s6d6\" (UniqueName: \"kubernetes.io/projected/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-kube-api-access-6s6d6\") on node \"crc\" DevicePath \"\"" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.111156 4826 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.111166 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.111178 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb93d33e-df3f-4a16-b0c4-8e422146a2f9-logs\") on node \"crc\" DevicePath \"\"" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.122216 4826 scope.go:117] "RemoveContainer" containerID="deb10a6c627085733a33312d760ab4b9a740e5aae8b591a4e004914000ea21b2" Jan 29 08:20:01 crc kubenswrapper[4826]: E0129 08:20:01.123212 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"deb10a6c627085733a33312d760ab4b9a740e5aae8b591a4e004914000ea21b2\": container with ID starting with deb10a6c627085733a33312d760ab4b9a740e5aae8b591a4e004914000ea21b2 not found: ID does not exist" containerID="deb10a6c627085733a33312d760ab4b9a740e5aae8b591a4e004914000ea21b2" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.123237 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"deb10a6c627085733a33312d760ab4b9a740e5aae8b591a4e004914000ea21b2"} err="failed to get container status \"deb10a6c627085733a33312d760ab4b9a740e5aae8b591a4e004914000ea21b2\": rpc error: code = NotFound desc = could not find container \"deb10a6c627085733a33312d760ab4b9a740e5aae8b591a4e004914000ea21b2\": container with ID starting with deb10a6c627085733a33312d760ab4b9a740e5aae8b591a4e004914000ea21b2 not found: ID does not exist" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.123256 4826 scope.go:117] "RemoveContainer" containerID="a07837ea56a1b161e2d92732fef846dfc042ffef2170ec26a8b1cde01ac6c2d4" Jan 29 08:20:01 crc kubenswrapper[4826]: E0129 08:20:01.123594 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a07837ea56a1b161e2d92732fef846dfc042ffef2170ec26a8b1cde01ac6c2d4\": container with ID starting with a07837ea56a1b161e2d92732fef846dfc042ffef2170ec26a8b1cde01ac6c2d4 not found: ID does not exist" containerID="a07837ea56a1b161e2d92732fef846dfc042ffef2170ec26a8b1cde01ac6c2d4" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.123616 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a07837ea56a1b161e2d92732fef846dfc042ffef2170ec26a8b1cde01ac6c2d4"} err="failed to get container status \"a07837ea56a1b161e2d92732fef846dfc042ffef2170ec26a8b1cde01ac6c2d4\": rpc error: code = NotFound desc = could not find container \"a07837ea56a1b161e2d92732fef846dfc042ffef2170ec26a8b1cde01ac6c2d4\": container with ID starting with a07837ea56a1b161e2d92732fef846dfc042ffef2170ec26a8b1cde01ac6c2d4 not found: ID does not exist" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.314910 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.323317 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.341557 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 08:20:01 crc kubenswrapper[4826]: E0129 08:20:01.342049 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb93d33e-df3f-4a16-b0c4-8e422146a2f9" containerName="glance-httpd" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.342065 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb93d33e-df3f-4a16-b0c4-8e422146a2f9" containerName="glance-httpd" Jan 29 08:20:01 crc kubenswrapper[4826]: E0129 08:20:01.342101 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb93d33e-df3f-4a16-b0c4-8e422146a2f9" containerName="glance-log" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 
08:20:01.342109 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb93d33e-df3f-4a16-b0c4-8e422146a2f9" containerName="glance-log" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.342363 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb93d33e-df3f-4a16-b0c4-8e422146a2f9" containerName="glance-httpd" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.342397 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb93d33e-df3f-4a16-b0c4-8e422146a2f9" containerName="glance-log" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.350686 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.353004 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.353125 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.353263 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.416606 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c43019dd-767e-444a-92b8-a9c78adbcb43-logs\") pod \"glance-default-internal-api-0\" (UID: \"c43019dd-767e-444a-92b8-a9c78adbcb43\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.416693 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c43019dd-767e-444a-92b8-a9c78adbcb43-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"c43019dd-767e-444a-92b8-a9c78adbcb43\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.416992 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c43019dd-767e-444a-92b8-a9c78adbcb43-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c43019dd-767e-444a-92b8-a9c78adbcb43\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.417087 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkxgl\" (UniqueName: \"kubernetes.io/projected/c43019dd-767e-444a-92b8-a9c78adbcb43-kube-api-access-xkxgl\") pod \"glance-default-internal-api-0\" (UID: \"c43019dd-767e-444a-92b8-a9c78adbcb43\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.417135 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c43019dd-767e-444a-92b8-a9c78adbcb43-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c43019dd-767e-444a-92b8-a9c78adbcb43\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.417237 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c43019dd-767e-444a-92b8-a9c78adbcb43-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c43019dd-767e-444a-92b8-a9c78adbcb43\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.417379 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c43019dd-767e-444a-92b8-a9c78adbcb43-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"c43019dd-767e-444a-92b8-a9c78adbcb43\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.519221 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c43019dd-767e-444a-92b8-a9c78adbcb43-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c43019dd-767e-444a-92b8-a9c78adbcb43\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.519279 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkxgl\" (UniqueName: \"kubernetes.io/projected/c43019dd-767e-444a-92b8-a9c78adbcb43-kube-api-access-xkxgl\") pod \"glance-default-internal-api-0\" (UID: \"c43019dd-767e-444a-92b8-a9c78adbcb43\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.519365 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c43019dd-767e-444a-92b8-a9c78adbcb43-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c43019dd-767e-444a-92b8-a9c78adbcb43\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.519388 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c43019dd-767e-444a-92b8-a9c78adbcb43-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c43019dd-767e-444a-92b8-a9c78adbcb43\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.519420 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c43019dd-767e-444a-92b8-a9c78adbcb43-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"c43019dd-767e-444a-92b8-a9c78adbcb43\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.519462 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c43019dd-767e-444a-92b8-a9c78adbcb43-logs\") pod \"glance-default-internal-api-0\" (UID: \"c43019dd-767e-444a-92b8-a9c78adbcb43\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.519478 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c43019dd-767e-444a-92b8-a9c78adbcb43-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c43019dd-767e-444a-92b8-a9c78adbcb43\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.520283 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c43019dd-767e-444a-92b8-a9c78adbcb43-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c43019dd-767e-444a-92b8-a9c78adbcb43\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.522313 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c43019dd-767e-444a-92b8-a9c78adbcb43-logs\") pod \"glance-default-internal-api-0\" (UID: \"c43019dd-767e-444a-92b8-a9c78adbcb43\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.526599 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c43019dd-767e-444a-92b8-a9c78adbcb43-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c43019dd-767e-444a-92b8-a9c78adbcb43\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:20:01 
crc kubenswrapper[4826]: I0129 08:20:01.526638 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c43019dd-767e-444a-92b8-a9c78adbcb43-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c43019dd-767e-444a-92b8-a9c78adbcb43\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.528890 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c43019dd-767e-444a-92b8-a9c78adbcb43-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c43019dd-767e-444a-92b8-a9c78adbcb43\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.536899 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c43019dd-767e-444a-92b8-a9c78adbcb43-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c43019dd-767e-444a-92b8-a9c78adbcb43\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.544890 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkxgl\" (UniqueName: \"kubernetes.io/projected/c43019dd-767e-444a-92b8-a9c78adbcb43-kube-api-access-xkxgl\") pod \"glance-default-internal-api-0\" (UID: \"c43019dd-767e-444a-92b8-a9c78adbcb43\") " pod="openstack/glance-default-internal-api-0" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.740038 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.880553 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.925078 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6107bcc9-cfe4-45d2-a776-f3633688ae3e-public-tls-certs\") pod \"6107bcc9-cfe4-45d2-a776-f3633688ae3e\" (UID: \"6107bcc9-cfe4-45d2-a776-f3633688ae3e\") " Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.925170 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6107bcc9-cfe4-45d2-a776-f3633688ae3e-httpd-run\") pod \"6107bcc9-cfe4-45d2-a776-f3633688ae3e\" (UID: \"6107bcc9-cfe4-45d2-a776-f3633688ae3e\") " Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.925226 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6107bcc9-cfe4-45d2-a776-f3633688ae3e-config-data\") pod \"6107bcc9-cfe4-45d2-a776-f3633688ae3e\" (UID: \"6107bcc9-cfe4-45d2-a776-f3633688ae3e\") " Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.925281 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t87bn\" (UniqueName: \"kubernetes.io/projected/6107bcc9-cfe4-45d2-a776-f3633688ae3e-kube-api-access-t87bn\") pod \"6107bcc9-cfe4-45d2-a776-f3633688ae3e\" (UID: \"6107bcc9-cfe4-45d2-a776-f3633688ae3e\") " Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.925364 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6107bcc9-cfe4-45d2-a776-f3633688ae3e-combined-ca-bundle\") pod \"6107bcc9-cfe4-45d2-a776-f3633688ae3e\" (UID: \"6107bcc9-cfe4-45d2-a776-f3633688ae3e\") " Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.925418 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/6107bcc9-cfe4-45d2-a776-f3633688ae3e-logs\") pod \"6107bcc9-cfe4-45d2-a776-f3633688ae3e\" (UID: \"6107bcc9-cfe4-45d2-a776-f3633688ae3e\") " Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.925494 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6107bcc9-cfe4-45d2-a776-f3633688ae3e-scripts\") pod \"6107bcc9-cfe4-45d2-a776-f3633688ae3e\" (UID: \"6107bcc9-cfe4-45d2-a776-f3633688ae3e\") " Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.925784 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6107bcc9-cfe4-45d2-a776-f3633688ae3e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6107bcc9-cfe4-45d2-a776-f3633688ae3e" (UID: "6107bcc9-cfe4-45d2-a776-f3633688ae3e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.926105 4826 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6107bcc9-cfe4-45d2-a776-f3633688ae3e-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.926664 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6107bcc9-cfe4-45d2-a776-f3633688ae3e-logs" (OuterVolumeSpecName: "logs") pod "6107bcc9-cfe4-45d2-a776-f3633688ae3e" (UID: "6107bcc9-cfe4-45d2-a776-f3633688ae3e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.931602 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6107bcc9-cfe4-45d2-a776-f3633688ae3e-scripts" (OuterVolumeSpecName: "scripts") pod "6107bcc9-cfe4-45d2-a776-f3633688ae3e" (UID: "6107bcc9-cfe4-45d2-a776-f3633688ae3e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.934452 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6107bcc9-cfe4-45d2-a776-f3633688ae3e-kube-api-access-t87bn" (OuterVolumeSpecName: "kube-api-access-t87bn") pod "6107bcc9-cfe4-45d2-a776-f3633688ae3e" (UID: "6107bcc9-cfe4-45d2-a776-f3633688ae3e"). InnerVolumeSpecName "kube-api-access-t87bn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.972708 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6107bcc9-cfe4-45d2-a776-f3633688ae3e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6107bcc9-cfe4-45d2-a776-f3633688ae3e" (UID: "6107bcc9-cfe4-45d2-a776-f3633688ae3e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:20:01 crc kubenswrapper[4826]: I0129 08:20:01.993376 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6107bcc9-cfe4-45d2-a776-f3633688ae3e-config-data" (OuterVolumeSpecName: "config-data") pod "6107bcc9-cfe4-45d2-a776-f3633688ae3e" (UID: "6107bcc9-cfe4-45d2-a776-f3633688ae3e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.019318 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6107bcc9-cfe4-45d2-a776-f3633688ae3e","Type":"ContainerDied","Data":"1668373bb5c59544e235378472f44e345dab45db75055c7afcc7fa3533d64925"} Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.019394 4826 scope.go:117] "RemoveContainer" containerID="a82e87fef18e2a222452a365ff82b5a79b141a384906bab83ae39e7fbc039b3b" Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.020040 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.028877 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6107bcc9-cfe4-45d2-a776-f3633688ae3e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6107bcc9-cfe4-45d2-a776-f3633688ae3e" (UID: "6107bcc9-cfe4-45d2-a776-f3633688ae3e"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.031120 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6107bcc9-cfe4-45d2-a776-f3633688ae3e-public-tls-certs\") pod \"6107bcc9-cfe4-45d2-a776-f3633688ae3e\" (UID: \"6107bcc9-cfe4-45d2-a776-f3633688ae3e\") " Jan 29 08:20:02 crc kubenswrapper[4826]: W0129 08:20:02.031248 4826 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/6107bcc9-cfe4-45d2-a776-f3633688ae3e/volumes/kubernetes.io~secret/public-tls-certs Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.031268 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6107bcc9-cfe4-45d2-a776-f3633688ae3e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6107bcc9-cfe4-45d2-a776-f3633688ae3e" (UID: "6107bcc9-cfe4-45d2-a776-f3633688ae3e"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.035613 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6107bcc9-cfe4-45d2-a776-f3633688ae3e-logs\") on node \"crc\" DevicePath \"\"" Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.035632 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6107bcc9-cfe4-45d2-a776-f3633688ae3e-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.038530 4826 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6107bcc9-cfe4-45d2-a776-f3633688ae3e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.038554 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6107bcc9-cfe4-45d2-a776-f3633688ae3e-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.038565 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t87bn\" (UniqueName: \"kubernetes.io/projected/6107bcc9-cfe4-45d2-a776-f3633688ae3e-kube-api-access-t87bn\") on node \"crc\" DevicePath \"\"" Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.038575 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6107bcc9-cfe4-45d2-a776-f3633688ae3e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.067520 4826 scope.go:117] "RemoveContainer" containerID="70efcc075e7671c1480a5e9c11bc6dfe2f862a918bef19def5d664776e4893bb" Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.331922 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 08:20:02 crc 
kubenswrapper[4826]: I0129 08:20:02.357065 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.368851 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.379197 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 08:20:02 crc kubenswrapper[4826]: E0129 08:20:02.379659 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6107bcc9-cfe4-45d2-a776-f3633688ae3e" containerName="glance-log"
Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.379672 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="6107bcc9-cfe4-45d2-a776-f3633688ae3e" containerName="glance-log"
Jan 29 08:20:02 crc kubenswrapper[4826]: E0129 08:20:02.379742 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6107bcc9-cfe4-45d2-a776-f3633688ae3e" containerName="glance-httpd"
Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.379749 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="6107bcc9-cfe4-45d2-a776-f3633688ae3e" containerName="glance-httpd"
Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.379933 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="6107bcc9-cfe4-45d2-a776-f3633688ae3e" containerName="glance-httpd"
Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.379948 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="6107bcc9-cfe4-45d2-a776-f3633688ae3e" containerName="glance-log"
Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.381107 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.383726 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.383967 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.390795 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.453773 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.453859 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkrbh\" (UniqueName: \"kubernetes.io/projected/85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3-kube-api-access-jkrbh\") pod \"glance-default-external-api-0\" (UID: \"85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.453889 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3-logs\") pod \"glance-default-external-api-0\" (UID: \"85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.453912 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.454041 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3-config-data\") pod \"glance-default-external-api-0\" (UID: \"85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.454074 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3-scripts\") pod \"glance-default-external-api-0\" (UID: \"85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.454124 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.557553 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3-config-data\") pod \"glance-default-external-api-0\" (UID: \"85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.557611 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3-scripts\") pod \"glance-default-external-api-0\" (UID: \"85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.557660 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.557687 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.557727 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkrbh\" (UniqueName: \"kubernetes.io/projected/85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3-kube-api-access-jkrbh\") pod \"glance-default-external-api-0\" (UID: \"85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.557749 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3-logs\") pod \"glance-default-external-api-0\" (UID: \"85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.557767 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.560069 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3-logs\") pod \"glance-default-external-api-0\" (UID: \"85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.560111 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.562465 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.563284 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3-scripts\") pod \"glance-default-external-api-0\" (UID: \"85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.563336 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3-config-data\") pod \"glance-default-external-api-0\" (UID: \"85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.566169 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.576500 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkrbh\" (UniqueName: \"kubernetes.io/projected/85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3-kube-api-access-jkrbh\") pod \"glance-default-external-api-0\" (UID: \"85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3\") " pod="openstack/glance-default-external-api-0"
Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.771176 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.828693 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6107bcc9-cfe4-45d2-a776-f3633688ae3e" path="/var/lib/kubelet/pods/6107bcc9-cfe4-45d2-a776-f3633688ae3e/volumes"
Jan 29 08:20:02 crc kubenswrapper[4826]: I0129 08:20:02.829476 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb93d33e-df3f-4a16-b0c4-8e422146a2f9" path="/var/lib/kubelet/pods/cb93d33e-df3f-4a16-b0c4-8e422146a2f9/volumes"
Jan 29 08:20:03 crc kubenswrapper[4826]: I0129 08:20:03.034608 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c43019dd-767e-444a-92b8-a9c78adbcb43","Type":"ContainerStarted","Data":"2507a65670dfc968c4bd9107c8ad30f4167d6d7589a505f7fa16d29ca87b33a3"}
Jan 29 08:20:03 crc kubenswrapper[4826]: I0129 08:20:03.034651 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c43019dd-767e-444a-92b8-a9c78adbcb43","Type":"ContainerStarted","Data":"08774f8b3249cf053770df40fd419fc850adfaee4ccb9aa7b0d4c22c84d4bf4d"}
Jan 29 08:20:03 crc kubenswrapper[4826]: I0129 08:20:03.340996 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 08:20:03 crc kubenswrapper[4826]: W0129 08:20:03.346812 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85d6d46d_c3aa_4a0e_8e2c_e6fb0a83e4b3.slice/crio-4176a3e6d7a7de81250286f7a360acf8bfe116c2d0803f10627e12e7f355ed1c WatchSource:0}: Error finding container 4176a3e6d7a7de81250286f7a360acf8bfe116c2d0803f10627e12e7f355ed1c: Status 404 returned error can't find the container with id 4176a3e6d7a7de81250286f7a360acf8bfe116c2d0803f10627e12e7f355ed1c
Jan 29 08:20:04 crc kubenswrapper[4826]: I0129 08:20:04.052795 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c43019dd-767e-444a-92b8-a9c78adbcb43","Type":"ContainerStarted","Data":"b5138a70115946574dd76e5407467ca62a05414a79e8a4dfc5639316c7c58866"}
Jan 29 08:20:04 crc kubenswrapper[4826]: I0129 08:20:04.055092 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3","Type":"ContainerStarted","Data":"6d38cddbc58c6bb6980811c187088e7fd3f666ab12ae1c0c2ad63f0323bae727"}
Jan 29 08:20:04 crc kubenswrapper[4826]: I0129 08:20:04.055235 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3","Type":"ContainerStarted","Data":"4176a3e6d7a7de81250286f7a360acf8bfe116c2d0803f10627e12e7f355ed1c"}
Jan 29 08:20:04 crc kubenswrapper[4826]: I0129 08:20:04.086019 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.085619596 podStartE2EDuration="3.085619596s" podCreationTimestamp="2026-01-29 08:20:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:20:04.074598146 +0000 UTC m=+5787.936391215" watchObservedRunningTime="2026-01-29 08:20:04.085619596 +0000 UTC m=+5787.947412665"
Jan 29 08:20:09 crc kubenswrapper[4826]: I0129 08:20:09.110730 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74498b466f-q8jkt" event={"ID":"845b497c-75b9-4648-97e4-2a79a6c8bd2d","Type":"ContainerStarted","Data":"70c0a8fd3a3a47d8fc26d06b75cfa07bd1549460c8926ec2d3c7ef64d9faf3ff"}
Jan 29 08:20:09 crc kubenswrapper[4826]: I0129 08:20:09.111175 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74498b466f-q8jkt" event={"ID":"845b497c-75b9-4648-97e4-2a79a6c8bd2d","Type":"ContainerStarted","Data":"5ad1308f59a2f52e08ae717f4fc230bc919093d72e2e580e5c0df6d87ef105ea"}
Jan 29 08:20:09 crc kubenswrapper[4826]: I0129 08:20:09.111287 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-74498b466f-q8jkt" podUID="845b497c-75b9-4648-97e4-2a79a6c8bd2d" containerName="horizon-log" containerID="cri-o://5ad1308f59a2f52e08ae717f4fc230bc919093d72e2e580e5c0df6d87ef105ea" gracePeriod=30
Jan 29 08:20:09 crc kubenswrapper[4826]: I0129 08:20:09.111458 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-74498b466f-q8jkt" podUID="845b497c-75b9-4648-97e4-2a79a6c8bd2d" containerName="horizon" containerID="cri-o://70c0a8fd3a3a47d8fc26d06b75cfa07bd1549460c8926ec2d3c7ef64d9faf3ff" gracePeriod=30
Jan 29 08:20:09 crc kubenswrapper[4826]: I0129 08:20:09.117457 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8664647f6f-9x6qj" event={"ID":"d0693f93-d04f-4620-8cb2-cd679f0166dc","Type":"ContainerStarted","Data":"140d17fb92c2375ce2d2d3da947d13e2c39ba2bbec1a920958396a852c8b7eba"}
Jan 29 08:20:09 crc kubenswrapper[4826]: I0129 08:20:09.117490 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8664647f6f-9x6qj" event={"ID":"d0693f93-d04f-4620-8cb2-cd679f0166dc","Type":"ContainerStarted","Data":"54d934ac6df69f9949a45cab7f908ff4be6bb304082434929eb23f26b7ab6694"}
Jan 29 08:20:09 crc kubenswrapper[4826]: I0129 08:20:09.117549 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-8664647f6f-9x6qj" podUID="d0693f93-d04f-4620-8cb2-cd679f0166dc" containerName="horizon-log" containerID="cri-o://54d934ac6df69f9949a45cab7f908ff4be6bb304082434929eb23f26b7ab6694" gracePeriod=30
Jan 29 08:20:09 crc kubenswrapper[4826]: I0129 08:20:09.117689 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-8664647f6f-9x6qj" podUID="d0693f93-d04f-4620-8cb2-cd679f0166dc" containerName="horizon" containerID="cri-o://140d17fb92c2375ce2d2d3da947d13e2c39ba2bbec1a920958396a852c8b7eba" gracePeriod=30
Jan 29 08:20:09 crc kubenswrapper[4826]: I0129 08:20:09.121456 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6fdb98f6d-cnddf" event={"ID":"27cc8676-afa1-421c-a9fb-f4857de61a71","Type":"ContainerStarted","Data":"5f1bbe4eb9bccf7c1408cda431a930c7fea9eeb0a0dbd537035a735345b5946d"}
Jan 29 08:20:09 crc kubenswrapper[4826]: I0129 08:20:09.121509 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6fdb98f6d-cnddf" event={"ID":"27cc8676-afa1-421c-a9fb-f4857de61a71","Type":"ContainerStarted","Data":"9704a6c824e22b43c86c4227d4c795b69c0842f61fd97452b5315c3889ec99f0"}
Jan 29 08:20:09 crc kubenswrapper[4826]: I0129 08:20:09.125974 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3","Type":"ContainerStarted","Data":"7cae00426f02165a6f4401a5aafce9eec593ac16223e775630e46366d66a382d"}
Jan 29 08:20:09 crc kubenswrapper[4826]: I0129 08:20:09.135481 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-74498b466f-q8jkt" podStartSLOduration=2.525022111 podStartE2EDuration="13.135464582s" podCreationTimestamp="2026-01-29 08:19:56 +0000 UTC" firstStartedPulling="2026-01-29 08:19:57.813767431 +0000 UTC m=+5781.675560500" lastFinishedPulling="2026-01-29 08:20:08.424209862 +0000 UTC m=+5792.286002971" observedRunningTime="2026-01-29 08:20:09.13237416 +0000 UTC m=+5792.994167259" watchObservedRunningTime="2026-01-29 08:20:09.135464582 +0000 UTC m=+5792.997257651"
Jan 29 08:20:09 crc kubenswrapper[4826]: I0129 08:20:09.135820 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58dcf5df6-kbrdh" event={"ID":"c1e011d3-9762-44e1-8137-2432569a561e","Type":"ContainerStarted","Data":"4ca5abf2dc71f9e1fa8cf8c6a06a22c2fa47f5037111388105ce6cdef610d6a2"}
Jan 29 08:20:09 crc kubenswrapper[4826]: I0129 08:20:09.135876 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58dcf5df6-kbrdh" event={"ID":"c1e011d3-9762-44e1-8137-2432569a561e","Type":"ContainerStarted","Data":"2db8c1b361235872c80df05f35d4eb1e837fe0e7db7c8c165b46d313440e896c"}
Jan 29 08:20:09 crc kubenswrapper[4826]: I0129 08:20:09.150044 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.150031375 podStartE2EDuration="7.150031375s" podCreationTimestamp="2026-01-29 08:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:20:09.14907373 +0000 UTC m=+5793.010866799" watchObservedRunningTime="2026-01-29 08:20:09.150031375 +0000 UTC m=+5793.011824444"
Jan 29 08:20:09 crc kubenswrapper[4826]: I0129 08:20:09.173461 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-8664647f6f-9x6qj" podStartSLOduration=2.435191355 podStartE2EDuration="13.173441602s" podCreationTimestamp="2026-01-29 08:19:56 +0000 UTC" firstStartedPulling="2026-01-29 08:19:57.683659185 +0000 UTC m=+5781.545452254" lastFinishedPulling="2026-01-29 08:20:08.421909432 +0000 UTC m=+5792.283702501" observedRunningTime="2026-01-29 08:20:09.172717893 +0000 UTC m=+5793.034510962" watchObservedRunningTime="2026-01-29 08:20:09.173441602 +0000 UTC m=+5793.035234671"
Jan 29 08:20:09 crc kubenswrapper[4826]: I0129 08:20:09.195442 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6fdb98f6d-cnddf" podStartSLOduration=1.886258179 podStartE2EDuration="10.195421461s" podCreationTimestamp="2026-01-29 08:19:59 +0000 UTC" firstStartedPulling="2026-01-29 08:20:00.11618362 +0000 UTC m=+5783.977976689" lastFinishedPulling="2026-01-29 08:20:08.425346902 +0000 UTC m=+5792.287139971" observedRunningTime="2026-01-29 08:20:09.187999555 +0000 UTC m=+5793.049792644" watchObservedRunningTime="2026-01-29 08:20:09.195421461 +0000 UTC m=+5793.057214530"
Jan 29 08:20:09 crc kubenswrapper[4826]: I0129 08:20:09.214090 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-58dcf5df6-kbrdh" podStartSLOduration=1.9447140680000001 podStartE2EDuration="10.214068282s" podCreationTimestamp="2026-01-29 08:19:59 +0000 UTC" firstStartedPulling="2026-01-29 08:20:00.23314092 +0000 UTC m=+5784.094933989" lastFinishedPulling="2026-01-29 08:20:08.502495124 +0000 UTC m=+5792.364288203" observedRunningTime="2026-01-29 08:20:09.206489592 +0000 UTC m=+5793.068282681" watchObservedRunningTime="2026-01-29 08:20:09.214068282 +0000 UTC m=+5793.075861351"
Jan 29 08:20:09 crc kubenswrapper[4826]: I0129 08:20:09.564515 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6fdb98f6d-cnddf"
Jan 29 08:20:09 crc kubenswrapper[4826]: I0129 08:20:09.564834 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6fdb98f6d-cnddf"
Jan 29 08:20:09 crc kubenswrapper[4826]: I0129 08:20:09.653910 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-58dcf5df6-kbrdh"
Jan 29 08:20:09 crc kubenswrapper[4826]: I0129 08:20:09.654117 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-58dcf5df6-kbrdh"
Jan 29 08:20:11 crc kubenswrapper[4826]: I0129 08:20:11.741436 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 29 08:20:11 crc kubenswrapper[4826]: I0129 08:20:11.743682 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 29 08:20:11 crc kubenswrapper[4826]: I0129 08:20:11.794192 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 29 08:20:11 crc kubenswrapper[4826]: I0129 08:20:11.805879 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 29 08:20:12 crc kubenswrapper[4826]: I0129 08:20:12.174601 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 29 08:20:12 crc kubenswrapper[4826]: I0129 08:20:12.174678 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 29 08:20:12 crc kubenswrapper[4826]: I0129 08:20:12.771649 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 29 08:20:12 crc kubenswrapper[4826]: I0129 08:20:12.771707 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 29 08:20:12 crc kubenswrapper[4826]: I0129 08:20:12.823391 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 29 08:20:12 crc kubenswrapper[4826]: I0129 08:20:12.823838 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 29 08:20:13 crc kubenswrapper[4826]: I0129 08:20:13.181440 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 29 08:20:13 crc kubenswrapper[4826]: I0129 08:20:13.181840 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 29 08:20:14 crc kubenswrapper[4826]: I0129 08:20:14.044485 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 29 08:20:14 crc kubenswrapper[4826]: I0129 08:20:14.045042 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 29 08:20:15 crc kubenswrapper[4826]: I0129 08:20:15.035166 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 29 08:20:16 crc kubenswrapper[4826]: I0129 08:20:16.048560 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 29 08:20:17 crc kubenswrapper[4826]: I0129 08:20:17.221231 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-8664647f6f-9x6qj"
Jan 29 08:20:17 crc kubenswrapper[4826]: I0129 08:20:17.310096 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-74498b466f-q8jkt"
Jan 29 08:20:19 crc kubenswrapper[4826]: I0129 08:20:19.566989 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6fdb98f6d-cnddf" podUID="27cc8676-afa1-421c-a9fb-f4857de61a71" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.99:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.99:8443: connect: connection refused"
Jan 29 08:20:19 crc kubenswrapper[4826]: I0129 08:20:19.656353 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-58dcf5df6-kbrdh" podUID="c1e011d3-9762-44e1-8137-2432569a561e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.100:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.100:8443: connect: connection refused"
Jan 29 08:20:21 crc kubenswrapper[4826]: I0129 08:20:21.075148 4826 scope.go:117] "RemoveContainer" containerID="a23b11b5195ecb2c16d1a75f0e0331c18390b977388899c99a5794552e1c33ce"
Jan 29 08:20:21 crc kubenswrapper[4826]: I0129 08:20:21.244697 4826 scope.go:117] "RemoveContainer" containerID="dbf3c8d5b2831a9b7498341b24066fce99e156430d18a3d622e2c7a484ef26d5"
Jan 29 08:20:21 crc kubenswrapper[4826]: I0129 08:20:21.287312 4826 scope.go:117] "RemoveContainer" containerID="142ca1786f2190dfe2ac3f2f6fa81375822cdcd8e32ea92635b1af7915b9109d"
Jan 29 08:20:31 crc kubenswrapper[4826]: I0129 08:20:31.316917 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6fdb98f6d-cnddf"
Jan 29 08:20:31 crc kubenswrapper[4826]: I0129 08:20:31.408678 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-58dcf5df6-kbrdh"
Jan 29 08:20:32 crc kubenswrapper[4826]: I0129 08:20:32.965665 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-58dcf5df6-kbrdh"
Jan 29 08:20:33 crc kubenswrapper[4826]: I0129 08:20:33.051583 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6fdb98f6d-cnddf"]
Jan 29 08:20:33 crc kubenswrapper[4826]: I0129 08:20:33.051775 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6fdb98f6d-cnddf" podUID="27cc8676-afa1-421c-a9fb-f4857de61a71" containerName="horizon-log" containerID="cri-o://9704a6c824e22b43c86c4227d4c795b69c0842f61fd97452b5315c3889ec99f0" gracePeriod=30
Jan 29 08:20:33 crc kubenswrapper[4826]: I0129 08:20:33.053618 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6fdb98f6d-cnddf" podUID="27cc8676-afa1-421c-a9fb-f4857de61a71" containerName="horizon" containerID="cri-o://5f1bbe4eb9bccf7c1408cda431a930c7fea9eeb0a0dbd537035a735345b5946d" gracePeriod=30
Jan 29 08:20:33 crc kubenswrapper[4826]: I0129 08:20:33.066006 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6fdb98f6d-cnddf" podUID="27cc8676-afa1-421c-a9fb-f4857de61a71" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.99:8443/dashboard/auth/login/?next=/dashboard/\": EOF"
Jan 29 08:20:35 crc kubenswrapper[4826]: I0129 08:20:35.656622 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 08:20:35 crc kubenswrapper[4826]: I0129 08:20:35.657008 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 08:20:36 crc kubenswrapper[4826]: I0129 08:20:36.474883 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6fdb98f6d-cnddf" podUID="27cc8676-afa1-421c-a9fb-f4857de61a71" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.99:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:37646->10.217.1.99:8443: read: connection reset by peer"
Jan 29 08:20:37 crc kubenswrapper[4826]: I0129 08:20:37.482478 4826 generic.go:334] "Generic (PLEG): container finished" podID="27cc8676-afa1-421c-a9fb-f4857de61a71" containerID="5f1bbe4eb9bccf7c1408cda431a930c7fea9eeb0a0dbd537035a735345b5946d" exitCode=0
Jan 29 08:20:37 crc kubenswrapper[4826]: I0129 08:20:37.482565 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6fdb98f6d-cnddf" event={"ID":"27cc8676-afa1-421c-a9fb-f4857de61a71","Type":"ContainerDied","Data":"5f1bbe4eb9bccf7c1408cda431a930c7fea9eeb0a0dbd537035a735345b5946d"}
Jan 29 08:20:39 crc kubenswrapper[4826]: I0129 08:20:39.538229 4826 generic.go:334] "Generic (PLEG): container finished" podID="d0693f93-d04f-4620-8cb2-cd679f0166dc" containerID="140d17fb92c2375ce2d2d3da947d13e2c39ba2bbec1a920958396a852c8b7eba" exitCode=137
Jan 29 08:20:39 crc kubenswrapper[4826]: I0129 08:20:39.539256 4826 generic.go:334] "Generic (PLEG): container finished" podID="d0693f93-d04f-4620-8cb2-cd679f0166dc" containerID="54d934ac6df69f9949a45cab7f908ff4be6bb304082434929eb23f26b7ab6694" exitCode=137
Jan 29 08:20:39 crc kubenswrapper[4826]: I0129 08:20:39.538412 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8664647f6f-9x6qj" event={"ID":"d0693f93-d04f-4620-8cb2-cd679f0166dc","Type":"ContainerDied","Data":"140d17fb92c2375ce2d2d3da947d13e2c39ba2bbec1a920958396a852c8b7eba"}
Jan 29 08:20:39 crc kubenswrapper[4826]: I0129 08:20:39.539444 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8664647f6f-9x6qj" event={"ID":"d0693f93-d04f-4620-8cb2-cd679f0166dc","Type":"ContainerDied","Data":"54d934ac6df69f9949a45cab7f908ff4be6bb304082434929eb23f26b7ab6694"}
Jan 29 08:20:39 crc kubenswrapper[4826]: I0129 08:20:39.543329 4826 generic.go:334] "Generic (PLEG): container finished" podID="845b497c-75b9-4648-97e4-2a79a6c8bd2d" containerID="70c0a8fd3a3a47d8fc26d06b75cfa07bd1549460c8926ec2d3c7ef64d9faf3ff" exitCode=137
Jan 29 08:20:39 crc kubenswrapper[4826]: I0129 08:20:39.543369 4826 generic.go:334] "Generic (PLEG): container finished" podID="845b497c-75b9-4648-97e4-2a79a6c8bd2d" containerID="5ad1308f59a2f52e08ae717f4fc230bc919093d72e2e580e5c0df6d87ef105ea" exitCode=137
Jan 29 08:20:39 crc kubenswrapper[4826]: I0129 08:20:39.543442 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74498b466f-q8jkt" event={"ID":"845b497c-75b9-4648-97e4-2a79a6c8bd2d","Type":"ContainerDied","Data":"70c0a8fd3a3a47d8fc26d06b75cfa07bd1549460c8926ec2d3c7ef64d9faf3ff"}
Jan 29 08:20:39 crc kubenswrapper[4826]: I0129 08:20:39.543481 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74498b466f-q8jkt" event={"ID":"845b497c-75b9-4648-97e4-2a79a6c8bd2d","Type":"ContainerDied","Data":"5ad1308f59a2f52e08ae717f4fc230bc919093d72e2e580e5c0df6d87ef105ea"}
Jan 29 08:20:39 crc kubenswrapper[4826]: I0129 08:20:39.564865 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6fdb98f6d-cnddf" podUID="27cc8676-afa1-421c-a9fb-f4857de61a71" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.99:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.99:8443: connect: connection refused"
Jan 29 08:20:39 crc kubenswrapper[4826]: I0129 08:20:39.669333 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8664647f6f-9x6qj"
Jan 29 08:20:39 crc kubenswrapper[4826]: I0129 08:20:39.677502 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-74498b466f-q8jkt"
Jan 29 08:20:39 crc kubenswrapper[4826]: I0129 08:20:39.832090 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/845b497c-75b9-4648-97e4-2a79a6c8bd2d-scripts\") pod \"845b497c-75b9-4648-97e4-2a79a6c8bd2d\" (UID: \"845b497c-75b9-4648-97e4-2a79a6c8bd2d\") "
Jan 29 08:20:39 crc kubenswrapper[4826]: I0129 08:20:39.832163 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d0693f93-d04f-4620-8cb2-cd679f0166dc-config-data\") pod \"d0693f93-d04f-4620-8cb2-cd679f0166dc\" (UID: \"d0693f93-d04f-4620-8cb2-cd679f0166dc\") "
Jan 29 08:20:39 crc kubenswrapper[4826]: I0129 08:20:39.832204 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/845b497c-75b9-4648-97e4-2a79a6c8bd2d-config-data\") pod \"845b497c-75b9-4648-97e4-2a79a6c8bd2d\" (UID: \"845b497c-75b9-4648-97e4-2a79a6c8bd2d\") "
Jan 29 08:20:39 crc kubenswrapper[4826]: I0129 08:20:39.832227 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d0693f93-d04f-4620-8cb2-cd679f0166dc-scripts\") pod \"d0693f93-d04f-4620-8cb2-cd679f0166dc\" (UID: \"d0693f93-d04f-4620-8cb2-cd679f0166dc\") "
Jan 29 08:20:39 crc kubenswrapper[4826]: I0129 08:20:39.832288 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/845b497c-75b9-4648-97e4-2a79a6c8bd2d-logs\") pod \"845b497c-75b9-4648-97e4-2a79a6c8bd2d\" (UID: \"845b497c-75b9-4648-97e4-2a79a6c8bd2d\") "
Jan 29 08:20:39 crc kubenswrapper[4826]: I0129 08:20:39.832341 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97twd\" (UniqueName: \"kubernetes.io/projected/845b497c-75b9-4648-97e4-2a79a6c8bd2d-kube-api-access-97twd\") pod \"845b497c-75b9-4648-97e4-2a79a6c8bd2d\" (UID: \"845b497c-75b9-4648-97e4-2a79a6c8bd2d\") "
Jan 29 08:20:39 crc kubenswrapper[4826]: I0129 08:20:39.832367 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/845b497c-75b9-4648-97e4-2a79a6c8bd2d-horizon-secret-key\") pod \"845b497c-75b9-4648-97e4-2a79a6c8bd2d\" (UID: \"845b497c-75b9-4648-97e4-2a79a6c8bd2d\") "
Jan 29 08:20:39 crc kubenswrapper[4826]: I0129 08:20:39.832405 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d0693f93-d04f-4620-8cb2-cd679f0166dc-horizon-secret-key\") pod \"d0693f93-d04f-4620-8cb2-cd679f0166dc\" (UID: \"d0693f93-d04f-4620-8cb2-cd679f0166dc\") "
Jan 29 08:20:39 crc kubenswrapper[4826]: I0129 08:20:39.832441 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsqns\" (UniqueName: \"kubernetes.io/projected/d0693f93-d04f-4620-8cb2-cd679f0166dc-kube-api-access-wsqns\") pod \"d0693f93-d04f-4620-8cb2-cd679f0166dc\" (UID: \"d0693f93-d04f-4620-8cb2-cd679f0166dc\") "
Jan 29 08:20:39 crc kubenswrapper[4826]: I0129 08:20:39.832538 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0693f93-d04f-4620-8cb2-cd679f0166dc-logs\") pod \"d0693f93-d04f-4620-8cb2-cd679f0166dc\" (UID: \"d0693f93-d04f-4620-8cb2-cd679f0166dc\") "
Jan 29 08:20:39 crc kubenswrapper[4826]: I0129 08:20:39.833225 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0693f93-d04f-4620-8cb2-cd679f0166dc-logs" (OuterVolumeSpecName: "logs") pod "d0693f93-d04f-4620-8cb2-cd679f0166dc" (UID: "d0693f93-d04f-4620-8cb2-cd679f0166dc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 08:20:39 crc kubenswrapper[4826]: I0129 08:20:39.839003 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/845b497c-75b9-4648-97e4-2a79a6c8bd2d-logs" (OuterVolumeSpecName: "logs") pod "845b497c-75b9-4648-97e4-2a79a6c8bd2d" (UID: "845b497c-75b9-4648-97e4-2a79a6c8bd2d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 08:20:39 crc kubenswrapper[4826]: I0129 08:20:39.862463 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/845b497c-75b9-4648-97e4-2a79a6c8bd2d-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "845b497c-75b9-4648-97e4-2a79a6c8bd2d" (UID: "845b497c-75b9-4648-97e4-2a79a6c8bd2d"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:20:39 crc kubenswrapper[4826]: I0129 08:20:39.862593 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0693f93-d04f-4620-8cb2-cd679f0166dc-kube-api-access-wsqns" (OuterVolumeSpecName: "kube-api-access-wsqns") pod "d0693f93-d04f-4620-8cb2-cd679f0166dc" (UID: "d0693f93-d04f-4620-8cb2-cd679f0166dc"). InnerVolumeSpecName "kube-api-access-wsqns". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:20:39 crc kubenswrapper[4826]: I0129 08:20:39.867799 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/845b497c-75b9-4648-97e4-2a79a6c8bd2d-kube-api-access-97twd" (OuterVolumeSpecName: "kube-api-access-97twd") pod "845b497c-75b9-4648-97e4-2a79a6c8bd2d" (UID: "845b497c-75b9-4648-97e4-2a79a6c8bd2d"). InnerVolumeSpecName "kube-api-access-97twd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:20:39 crc kubenswrapper[4826]: I0129 08:20:39.874614 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0693f93-d04f-4620-8cb2-cd679f0166dc-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d0693f93-d04f-4620-8cb2-cd679f0166dc" (UID: "d0693f93-d04f-4620-8cb2-cd679f0166dc"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:20:39 crc kubenswrapper[4826]: I0129 08:20:39.901114 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0693f93-d04f-4620-8cb2-cd679f0166dc-config-data" (OuterVolumeSpecName: "config-data") pod "d0693f93-d04f-4620-8cb2-cd679f0166dc" (UID: "d0693f93-d04f-4620-8cb2-cd679f0166dc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 08:20:39 crc kubenswrapper[4826]: I0129 08:20:39.901755 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/845b497c-75b9-4648-97e4-2a79a6c8bd2d-scripts" (OuterVolumeSpecName: "scripts") pod "845b497c-75b9-4648-97e4-2a79a6c8bd2d" (UID: "845b497c-75b9-4648-97e4-2a79a6c8bd2d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 08:20:39 crc kubenswrapper[4826]: I0129 08:20:39.906627 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0693f93-d04f-4620-8cb2-cd679f0166dc-scripts" (OuterVolumeSpecName: "scripts") pod "d0693f93-d04f-4620-8cb2-cd679f0166dc" (UID: "d0693f93-d04f-4620-8cb2-cd679f0166dc"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:20:39 crc kubenswrapper[4826]: I0129 08:20:39.911149 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/845b497c-75b9-4648-97e4-2a79a6c8bd2d-config-data" (OuterVolumeSpecName: "config-data") pod "845b497c-75b9-4648-97e4-2a79a6c8bd2d" (UID: "845b497c-75b9-4648-97e4-2a79a6c8bd2d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:20:39 crc kubenswrapper[4826]: I0129 08:20:39.937075 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/845b497c-75b9-4648-97e4-2a79a6c8bd2d-logs\") on node \"crc\" DevicePath \"\"" Jan 29 08:20:39 crc kubenswrapper[4826]: I0129 08:20:39.937108 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97twd\" (UniqueName: \"kubernetes.io/projected/845b497c-75b9-4648-97e4-2a79a6c8bd2d-kube-api-access-97twd\") on node \"crc\" DevicePath \"\"" Jan 29 08:20:39 crc kubenswrapper[4826]: I0129 08:20:39.937120 4826 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/845b497c-75b9-4648-97e4-2a79a6c8bd2d-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 29 08:20:39 crc kubenswrapper[4826]: I0129 08:20:39.937129 4826 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d0693f93-d04f-4620-8cb2-cd679f0166dc-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 29 08:20:39 crc kubenswrapper[4826]: I0129 08:20:39.937137 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsqns\" (UniqueName: \"kubernetes.io/projected/d0693f93-d04f-4620-8cb2-cd679f0166dc-kube-api-access-wsqns\") on node \"crc\" DevicePath \"\"" Jan 29 08:20:39 crc kubenswrapper[4826]: I0129 08:20:39.937147 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d0693f93-d04f-4620-8cb2-cd679f0166dc-logs\") on node \"crc\" DevicePath \"\"" Jan 29 08:20:39 crc kubenswrapper[4826]: I0129 08:20:39.937155 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/845b497c-75b9-4648-97e4-2a79a6c8bd2d-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:20:39 crc kubenswrapper[4826]: I0129 08:20:39.937163 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d0693f93-d04f-4620-8cb2-cd679f0166dc-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:20:39 crc kubenswrapper[4826]: I0129 08:20:39.937171 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/845b497c-75b9-4648-97e4-2a79a6c8bd2d-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:20:39 crc kubenswrapper[4826]: I0129 08:20:39.937180 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d0693f93-d04f-4620-8cb2-cd679f0166dc-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:20:40 crc kubenswrapper[4826]: I0129 08:20:40.562408 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74498b466f-q8jkt" event={"ID":"845b497c-75b9-4648-97e4-2a79a6c8bd2d","Type":"ContainerDied","Data":"d2b3c98c3ac7ecc4dc29741da765964bd1a6bce67e3ce46d568ba4cc50d86d48"} Jan 29 08:20:40 crc kubenswrapper[4826]: I0129 08:20:40.562483 4826 scope.go:117] "RemoveContainer" containerID="70c0a8fd3a3a47d8fc26d06b75cfa07bd1549460c8926ec2d3c7ef64d9faf3ff" Jan 29 08:20:40 crc kubenswrapper[4826]: I0129 08:20:40.562669 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-74498b466f-q8jkt" Jan 29 08:20:40 crc kubenswrapper[4826]: I0129 08:20:40.578818 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8664647f6f-9x6qj" event={"ID":"d0693f93-d04f-4620-8cb2-cd679f0166dc","Type":"ContainerDied","Data":"64d40e38ee4063556e17fbb15557dcb4bae97ba62a4fadb10cf15e34b96fc66b"} Jan 29 08:20:40 crc kubenswrapper[4826]: I0129 08:20:40.578947 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8664647f6f-9x6qj" Jan 29 08:20:40 crc kubenswrapper[4826]: I0129 08:20:40.620153 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-74498b466f-q8jkt"] Jan 29 08:20:40 crc kubenswrapper[4826]: I0129 08:20:40.636440 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-74498b466f-q8jkt"] Jan 29 08:20:40 crc kubenswrapper[4826]: I0129 08:20:40.647791 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-8664647f6f-9x6qj"] Jan 29 08:20:40 crc kubenswrapper[4826]: I0129 08:20:40.656255 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-8664647f6f-9x6qj"] Jan 29 08:20:40 crc kubenswrapper[4826]: I0129 08:20:40.787726 4826 scope.go:117] "RemoveContainer" containerID="5ad1308f59a2f52e08ae717f4fc230bc919093d72e2e580e5c0df6d87ef105ea" Jan 29 08:20:40 crc kubenswrapper[4826]: I0129 08:20:40.817668 4826 scope.go:117] "RemoveContainer" containerID="140d17fb92c2375ce2d2d3da947d13e2c39ba2bbec1a920958396a852c8b7eba" Jan 29 08:20:40 crc kubenswrapper[4826]: I0129 08:20:40.822623 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="845b497c-75b9-4648-97e4-2a79a6c8bd2d" path="/var/lib/kubelet/pods/845b497c-75b9-4648-97e4-2a79a6c8bd2d/volumes" Jan 29 08:20:40 crc kubenswrapper[4826]: I0129 08:20:40.823647 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0693f93-d04f-4620-8cb2-cd679f0166dc" 
path="/var/lib/kubelet/pods/d0693f93-d04f-4620-8cb2-cd679f0166dc/volumes" Jan 29 08:20:41 crc kubenswrapper[4826]: I0129 08:20:41.007454 4826 scope.go:117] "RemoveContainer" containerID="54d934ac6df69f9949a45cab7f908ff4be6bb304082434929eb23f26b7ab6694" Jan 29 08:20:49 crc kubenswrapper[4826]: I0129 08:20:49.565215 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6fdb98f6d-cnddf" podUID="27cc8676-afa1-421c-a9fb-f4857de61a71" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.99:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.99:8443: connect: connection refused" Jan 29 08:20:59 crc kubenswrapper[4826]: I0129 08:20:59.565384 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6fdb98f6d-cnddf" podUID="27cc8676-afa1-421c-a9fb-f4857de61a71" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.99:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.99:8443: connect: connection refused" Jan 29 08:21:03 crc kubenswrapper[4826]: E0129 08:21:03.309869 4826 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27cc8676_afa1_421c_a9fb_f4857de61a71.slice/crio-conmon-9704a6c824e22b43c86c4227d4c795b69c0842f61fd97452b5315c3889ec99f0.scope\": RecentStats: unable to find data in memory cache]" Jan 29 08:21:03 crc kubenswrapper[4826]: I0129 08:21:03.413479 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6fdb98f6d-cnddf" Jan 29 08:21:03 crc kubenswrapper[4826]: I0129 08:21:03.541444 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27cc8676-afa1-421c-a9fb-f4857de61a71-logs\") pod \"27cc8676-afa1-421c-a9fb-f4857de61a71\" (UID: \"27cc8676-afa1-421c-a9fb-f4857de61a71\") " Jan 29 08:21:03 crc kubenswrapper[4826]: I0129 08:21:03.541641 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/27cc8676-afa1-421c-a9fb-f4857de61a71-horizon-secret-key\") pod \"27cc8676-afa1-421c-a9fb-f4857de61a71\" (UID: \"27cc8676-afa1-421c-a9fb-f4857de61a71\") " Jan 29 08:21:03 crc kubenswrapper[4826]: I0129 08:21:03.541692 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/27cc8676-afa1-421c-a9fb-f4857de61a71-scripts\") pod \"27cc8676-afa1-421c-a9fb-f4857de61a71\" (UID: \"27cc8676-afa1-421c-a9fb-f4857de61a71\") " Jan 29 08:21:03 crc kubenswrapper[4826]: I0129 08:21:03.541730 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/27cc8676-afa1-421c-a9fb-f4857de61a71-config-data\") pod \"27cc8676-afa1-421c-a9fb-f4857de61a71\" (UID: \"27cc8676-afa1-421c-a9fb-f4857de61a71\") " Jan 29 08:21:03 crc kubenswrapper[4826]: I0129 08:21:03.541821 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27cc8676-afa1-421c-a9fb-f4857de61a71-combined-ca-bundle\") pod \"27cc8676-afa1-421c-a9fb-f4857de61a71\" (UID: \"27cc8676-afa1-421c-a9fb-f4857de61a71\") " Jan 29 08:21:03 crc kubenswrapper[4826]: I0129 08:21:03.541881 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzg42\" (UniqueName: 
\"kubernetes.io/projected/27cc8676-afa1-421c-a9fb-f4857de61a71-kube-api-access-bzg42\") pod \"27cc8676-afa1-421c-a9fb-f4857de61a71\" (UID: \"27cc8676-afa1-421c-a9fb-f4857de61a71\") " Jan 29 08:21:03 crc kubenswrapper[4826]: I0129 08:21:03.542018 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27cc8676-afa1-421c-a9fb-f4857de61a71-logs" (OuterVolumeSpecName: "logs") pod "27cc8676-afa1-421c-a9fb-f4857de61a71" (UID: "27cc8676-afa1-421c-a9fb-f4857de61a71"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:21:03 crc kubenswrapper[4826]: I0129 08:21:03.542504 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/27cc8676-afa1-421c-a9fb-f4857de61a71-horizon-tls-certs\") pod \"27cc8676-afa1-421c-a9fb-f4857de61a71\" (UID: \"27cc8676-afa1-421c-a9fb-f4857de61a71\") " Jan 29 08:21:03 crc kubenswrapper[4826]: I0129 08:21:03.542915 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27cc8676-afa1-421c-a9fb-f4857de61a71-logs\") on node \"crc\" DevicePath \"\"" Jan 29 08:21:03 crc kubenswrapper[4826]: I0129 08:21:03.547204 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27cc8676-afa1-421c-a9fb-f4857de61a71-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "27cc8676-afa1-421c-a9fb-f4857de61a71" (UID: "27cc8676-afa1-421c-a9fb-f4857de61a71"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:21:03 crc kubenswrapper[4826]: I0129 08:21:03.547381 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27cc8676-afa1-421c-a9fb-f4857de61a71-kube-api-access-bzg42" (OuterVolumeSpecName: "kube-api-access-bzg42") pod "27cc8676-afa1-421c-a9fb-f4857de61a71" (UID: "27cc8676-afa1-421c-a9fb-f4857de61a71"). InnerVolumeSpecName "kube-api-access-bzg42". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:21:03 crc kubenswrapper[4826]: I0129 08:21:03.568882 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27cc8676-afa1-421c-a9fb-f4857de61a71-scripts" (OuterVolumeSpecName: "scripts") pod "27cc8676-afa1-421c-a9fb-f4857de61a71" (UID: "27cc8676-afa1-421c-a9fb-f4857de61a71"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:21:03 crc kubenswrapper[4826]: I0129 08:21:03.584965 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27cc8676-afa1-421c-a9fb-f4857de61a71-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27cc8676-afa1-421c-a9fb-f4857de61a71" (UID: "27cc8676-afa1-421c-a9fb-f4857de61a71"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:21:03 crc kubenswrapper[4826]: I0129 08:21:03.587360 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27cc8676-afa1-421c-a9fb-f4857de61a71-config-data" (OuterVolumeSpecName: "config-data") pod "27cc8676-afa1-421c-a9fb-f4857de61a71" (UID: "27cc8676-afa1-421c-a9fb-f4857de61a71"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:21:03 crc kubenswrapper[4826]: I0129 08:21:03.609363 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27cc8676-afa1-421c-a9fb-f4857de61a71-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "27cc8676-afa1-421c-a9fb-f4857de61a71" (UID: "27cc8676-afa1-421c-a9fb-f4857de61a71"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:21:03 crc kubenswrapper[4826]: I0129 08:21:03.644758 4826 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/27cc8676-afa1-421c-a9fb-f4857de61a71-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 29 08:21:03 crc kubenswrapper[4826]: I0129 08:21:03.644801 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/27cc8676-afa1-421c-a9fb-f4857de61a71-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:21:03 crc kubenswrapper[4826]: I0129 08:21:03.644814 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/27cc8676-afa1-421c-a9fb-f4857de61a71-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:21:03 crc kubenswrapper[4826]: I0129 08:21:03.644825 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27cc8676-afa1-421c-a9fb-f4857de61a71-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:21:03 crc kubenswrapper[4826]: I0129 08:21:03.644838 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzg42\" (UniqueName: \"kubernetes.io/projected/27cc8676-afa1-421c-a9fb-f4857de61a71-kube-api-access-bzg42\") on node \"crc\" DevicePath \"\"" Jan 29 08:21:03 crc kubenswrapper[4826]: I0129 08:21:03.644851 4826 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/27cc8676-afa1-421c-a9fb-f4857de61a71-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 08:21:03 crc kubenswrapper[4826]: I0129 08:21:03.822572 4826 generic.go:334] "Generic (PLEG): container finished" podID="27cc8676-afa1-421c-a9fb-f4857de61a71" containerID="9704a6c824e22b43c86c4227d4c795b69c0842f61fd97452b5315c3889ec99f0" exitCode=137 Jan 29 08:21:03 crc kubenswrapper[4826]: I0129 08:21:03.822651 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6fdb98f6d-cnddf" event={"ID":"27cc8676-afa1-421c-a9fb-f4857de61a71","Type":"ContainerDied","Data":"9704a6c824e22b43c86c4227d4c795b69c0842f61fd97452b5315c3889ec99f0"} Jan 29 08:21:03 crc kubenswrapper[4826]: I0129 08:21:03.822700 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6fdb98f6d-cnddf" event={"ID":"27cc8676-afa1-421c-a9fb-f4857de61a71","Type":"ContainerDied","Data":"6780af812e8fe28c72192f27770f83dcbfb58d0f9ae3424941c2feaa142498a3"} Jan 29 08:21:03 crc kubenswrapper[4826]: I0129 08:21:03.822738 4826 scope.go:117] "RemoveContainer" containerID="5f1bbe4eb9bccf7c1408cda431a930c7fea9eeb0a0dbd537035a735345b5946d" Jan 29 08:21:03 crc kubenswrapper[4826]: I0129 08:21:03.822957 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6fdb98f6d-cnddf" Jan 29 08:21:03 crc kubenswrapper[4826]: I0129 08:21:03.882118 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6fdb98f6d-cnddf"] Jan 29 08:21:03 crc kubenswrapper[4826]: I0129 08:21:03.894615 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6fdb98f6d-cnddf"] Jan 29 08:21:04 crc kubenswrapper[4826]: I0129 08:21:04.037166 4826 scope.go:117] "RemoveContainer" containerID="9704a6c824e22b43c86c4227d4c795b69c0842f61fd97452b5315c3889ec99f0" Jan 29 08:21:04 crc kubenswrapper[4826]: I0129 08:21:04.055349 4826 scope.go:117] "RemoveContainer" containerID="5f1bbe4eb9bccf7c1408cda431a930c7fea9eeb0a0dbd537035a735345b5946d" Jan 29 08:21:04 crc kubenswrapper[4826]: E0129 08:21:04.055783 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f1bbe4eb9bccf7c1408cda431a930c7fea9eeb0a0dbd537035a735345b5946d\": container with ID starting with 5f1bbe4eb9bccf7c1408cda431a930c7fea9eeb0a0dbd537035a735345b5946d not found: ID does not exist" containerID="5f1bbe4eb9bccf7c1408cda431a930c7fea9eeb0a0dbd537035a735345b5946d" Jan 29 08:21:04 crc kubenswrapper[4826]: I0129 08:21:04.055830 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f1bbe4eb9bccf7c1408cda431a930c7fea9eeb0a0dbd537035a735345b5946d"} err="failed to get container status \"5f1bbe4eb9bccf7c1408cda431a930c7fea9eeb0a0dbd537035a735345b5946d\": rpc error: code = NotFound desc = could not find container \"5f1bbe4eb9bccf7c1408cda431a930c7fea9eeb0a0dbd537035a735345b5946d\": container with ID starting with 5f1bbe4eb9bccf7c1408cda431a930c7fea9eeb0a0dbd537035a735345b5946d not found: ID does not exist" Jan 29 08:21:04 crc kubenswrapper[4826]: I0129 08:21:04.055862 4826 scope.go:117] "RemoveContainer" containerID="9704a6c824e22b43c86c4227d4c795b69c0842f61fd97452b5315c3889ec99f0" Jan 29 08:21:04 crc 
kubenswrapper[4826]: E0129 08:21:04.056135 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9704a6c824e22b43c86c4227d4c795b69c0842f61fd97452b5315c3889ec99f0\": container with ID starting with 9704a6c824e22b43c86c4227d4c795b69c0842f61fd97452b5315c3889ec99f0 not found: ID does not exist" containerID="9704a6c824e22b43c86c4227d4c795b69c0842f61fd97452b5315c3889ec99f0" Jan 29 08:21:04 crc kubenswrapper[4826]: I0129 08:21:04.056175 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9704a6c824e22b43c86c4227d4c795b69c0842f61fd97452b5315c3889ec99f0"} err="failed to get container status \"9704a6c824e22b43c86c4227d4c795b69c0842f61fd97452b5315c3889ec99f0\": rpc error: code = NotFound desc = could not find container \"9704a6c824e22b43c86c4227d4c795b69c0842f61fd97452b5315c3889ec99f0\": container with ID starting with 9704a6c824e22b43c86c4227d4c795b69c0842f61fd97452b5315c3889ec99f0 not found: ID does not exist" Jan 29 08:21:04 crc kubenswrapper[4826]: I0129 08:21:04.827487 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27cc8676-afa1-421c-a9fb-f4857de61a71" path="/var/lib/kubelet/pods/27cc8676-afa1-421c-a9fb-f4857de61a71/volumes" Jan 29 08:21:05 crc kubenswrapper[4826]: I0129 08:21:05.656993 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:21:05 crc kubenswrapper[4826]: I0129 08:21:05.657095 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 29 08:21:13 crc kubenswrapper[4826]: I0129 08:21:13.799515 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6fc96bc4d8-55gpp"] Jan 29 08:21:13 crc kubenswrapper[4826]: E0129 08:21:13.800333 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27cc8676-afa1-421c-a9fb-f4857de61a71" containerName="horizon-log" Jan 29 08:21:13 crc kubenswrapper[4826]: I0129 08:21:13.800345 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="27cc8676-afa1-421c-a9fb-f4857de61a71" containerName="horizon-log" Jan 29 08:21:13 crc kubenswrapper[4826]: E0129 08:21:13.800371 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27cc8676-afa1-421c-a9fb-f4857de61a71" containerName="horizon" Jan 29 08:21:13 crc kubenswrapper[4826]: I0129 08:21:13.800378 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="27cc8676-afa1-421c-a9fb-f4857de61a71" containerName="horizon" Jan 29 08:21:13 crc kubenswrapper[4826]: E0129 08:21:13.800386 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0693f93-d04f-4620-8cb2-cd679f0166dc" containerName="horizon" Jan 29 08:21:13 crc kubenswrapper[4826]: I0129 08:21:13.800393 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0693f93-d04f-4620-8cb2-cd679f0166dc" containerName="horizon" Jan 29 08:21:13 crc kubenswrapper[4826]: E0129 08:21:13.800400 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="845b497c-75b9-4648-97e4-2a79a6c8bd2d" containerName="horizon-log" Jan 29 08:21:13 crc kubenswrapper[4826]: I0129 08:21:13.800406 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="845b497c-75b9-4648-97e4-2a79a6c8bd2d" containerName="horizon-log" Jan 29 08:21:13 crc kubenswrapper[4826]: E0129 08:21:13.800414 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="845b497c-75b9-4648-97e4-2a79a6c8bd2d" containerName="horizon" Jan 29 08:21:13 crc kubenswrapper[4826]: I0129 
08:21:13.800420 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="845b497c-75b9-4648-97e4-2a79a6c8bd2d" containerName="horizon" Jan 29 08:21:13 crc kubenswrapper[4826]: E0129 08:21:13.800435 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0693f93-d04f-4620-8cb2-cd679f0166dc" containerName="horizon-log" Jan 29 08:21:13 crc kubenswrapper[4826]: I0129 08:21:13.800442 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0693f93-d04f-4620-8cb2-cd679f0166dc" containerName="horizon-log" Jan 29 08:21:13 crc kubenswrapper[4826]: I0129 08:21:13.800603 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="27cc8676-afa1-421c-a9fb-f4857de61a71" containerName="horizon-log" Jan 29 08:21:13 crc kubenswrapper[4826]: I0129 08:21:13.800617 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="845b497c-75b9-4648-97e4-2a79a6c8bd2d" containerName="horizon-log" Jan 29 08:21:13 crc kubenswrapper[4826]: I0129 08:21:13.800634 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="845b497c-75b9-4648-97e4-2a79a6c8bd2d" containerName="horizon" Jan 29 08:21:13 crc kubenswrapper[4826]: I0129 08:21:13.800641 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0693f93-d04f-4620-8cb2-cd679f0166dc" containerName="horizon-log" Jan 29 08:21:13 crc kubenswrapper[4826]: I0129 08:21:13.800653 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="27cc8676-afa1-421c-a9fb-f4857de61a71" containerName="horizon" Jan 29 08:21:13 crc kubenswrapper[4826]: I0129 08:21:13.800661 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0693f93-d04f-4620-8cb2-cd679f0166dc" containerName="horizon" Jan 29 08:21:13 crc kubenswrapper[4826]: I0129 08:21:13.801628 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6fc96bc4d8-55gpp" Jan 29 08:21:13 crc kubenswrapper[4826]: I0129 08:21:13.820025 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6fc96bc4d8-55gpp"] Jan 29 08:21:13 crc kubenswrapper[4826]: I0129 08:21:13.861285 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e716106f-e229-4e00-9ee5-278b120741a6-config-data\") pod \"horizon-6fc96bc4d8-55gpp\" (UID: \"e716106f-e229-4e00-9ee5-278b120741a6\") " pod="openstack/horizon-6fc96bc4d8-55gpp" Jan 29 08:21:13 crc kubenswrapper[4826]: I0129 08:21:13.861553 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e716106f-e229-4e00-9ee5-278b120741a6-combined-ca-bundle\") pod \"horizon-6fc96bc4d8-55gpp\" (UID: \"e716106f-e229-4e00-9ee5-278b120741a6\") " pod="openstack/horizon-6fc96bc4d8-55gpp" Jan 29 08:21:13 crc kubenswrapper[4826]: I0129 08:21:13.861571 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e716106f-e229-4e00-9ee5-278b120741a6-logs\") pod \"horizon-6fc96bc4d8-55gpp\" (UID: \"e716106f-e229-4e00-9ee5-278b120741a6\") " pod="openstack/horizon-6fc96bc4d8-55gpp" Jan 29 08:21:13 crc kubenswrapper[4826]: I0129 08:21:13.861606 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e716106f-e229-4e00-9ee5-278b120741a6-horizon-secret-key\") pod \"horizon-6fc96bc4d8-55gpp\" (UID: \"e716106f-e229-4e00-9ee5-278b120741a6\") " pod="openstack/horizon-6fc96bc4d8-55gpp" Jan 29 08:21:13 crc kubenswrapper[4826]: I0129 08:21:13.861621 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/e716106f-e229-4e00-9ee5-278b120741a6-scripts\") pod \"horizon-6fc96bc4d8-55gpp\" (UID: \"e716106f-e229-4e00-9ee5-278b120741a6\") " pod="openstack/horizon-6fc96bc4d8-55gpp" Jan 29 08:21:13 crc kubenswrapper[4826]: I0129 08:21:13.861642 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e716106f-e229-4e00-9ee5-278b120741a6-horizon-tls-certs\") pod \"horizon-6fc96bc4d8-55gpp\" (UID: \"e716106f-e229-4e00-9ee5-278b120741a6\") " pod="openstack/horizon-6fc96bc4d8-55gpp" Jan 29 08:21:13 crc kubenswrapper[4826]: I0129 08:21:13.862017 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8swwh\" (UniqueName: \"kubernetes.io/projected/e716106f-e229-4e00-9ee5-278b120741a6-kube-api-access-8swwh\") pod \"horizon-6fc96bc4d8-55gpp\" (UID: \"e716106f-e229-4e00-9ee5-278b120741a6\") " pod="openstack/horizon-6fc96bc4d8-55gpp" Jan 29 08:21:13 crc kubenswrapper[4826]: I0129 08:21:13.964263 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8swwh\" (UniqueName: \"kubernetes.io/projected/e716106f-e229-4e00-9ee5-278b120741a6-kube-api-access-8swwh\") pod \"horizon-6fc96bc4d8-55gpp\" (UID: \"e716106f-e229-4e00-9ee5-278b120741a6\") " pod="openstack/horizon-6fc96bc4d8-55gpp" Jan 29 08:21:13 crc kubenswrapper[4826]: I0129 08:21:13.964356 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e716106f-e229-4e00-9ee5-278b120741a6-config-data\") pod \"horizon-6fc96bc4d8-55gpp\" (UID: \"e716106f-e229-4e00-9ee5-278b120741a6\") " pod="openstack/horizon-6fc96bc4d8-55gpp" Jan 29 08:21:13 crc kubenswrapper[4826]: I0129 08:21:13.964380 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e716106f-e229-4e00-9ee5-278b120741a6-logs\") pod \"horizon-6fc96bc4d8-55gpp\" (UID: \"e716106f-e229-4e00-9ee5-278b120741a6\") " pod="openstack/horizon-6fc96bc4d8-55gpp" Jan 29 08:21:13 crc kubenswrapper[4826]: I0129 08:21:13.964396 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e716106f-e229-4e00-9ee5-278b120741a6-combined-ca-bundle\") pod \"horizon-6fc96bc4d8-55gpp\" (UID: \"e716106f-e229-4e00-9ee5-278b120741a6\") " pod="openstack/horizon-6fc96bc4d8-55gpp" Jan 29 08:21:13 crc kubenswrapper[4826]: I0129 08:21:13.964425 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e716106f-e229-4e00-9ee5-278b120741a6-horizon-secret-key\") pod \"horizon-6fc96bc4d8-55gpp\" (UID: \"e716106f-e229-4e00-9ee5-278b120741a6\") " pod="openstack/horizon-6fc96bc4d8-55gpp" Jan 29 08:21:13 crc kubenswrapper[4826]: I0129 08:21:13.964439 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e716106f-e229-4e00-9ee5-278b120741a6-scripts\") pod \"horizon-6fc96bc4d8-55gpp\" (UID: \"e716106f-e229-4e00-9ee5-278b120741a6\") " pod="openstack/horizon-6fc96bc4d8-55gpp" Jan 29 08:21:13 crc kubenswrapper[4826]: I0129 08:21:13.964461 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e716106f-e229-4e00-9ee5-278b120741a6-horizon-tls-certs\") pod \"horizon-6fc96bc4d8-55gpp\" (UID: \"e716106f-e229-4e00-9ee5-278b120741a6\") " pod="openstack/horizon-6fc96bc4d8-55gpp" Jan 29 08:21:13 crc kubenswrapper[4826]: I0129 08:21:13.964967 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e716106f-e229-4e00-9ee5-278b120741a6-logs\") pod \"horizon-6fc96bc4d8-55gpp\" (UID: 
\"e716106f-e229-4e00-9ee5-278b120741a6\") " pod="openstack/horizon-6fc96bc4d8-55gpp" Jan 29 08:21:13 crc kubenswrapper[4826]: I0129 08:21:13.965777 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e716106f-e229-4e00-9ee5-278b120741a6-scripts\") pod \"horizon-6fc96bc4d8-55gpp\" (UID: \"e716106f-e229-4e00-9ee5-278b120741a6\") " pod="openstack/horizon-6fc96bc4d8-55gpp" Jan 29 08:21:13 crc kubenswrapper[4826]: I0129 08:21:13.966160 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e716106f-e229-4e00-9ee5-278b120741a6-config-data\") pod \"horizon-6fc96bc4d8-55gpp\" (UID: \"e716106f-e229-4e00-9ee5-278b120741a6\") " pod="openstack/horizon-6fc96bc4d8-55gpp" Jan 29 08:21:13 crc kubenswrapper[4826]: I0129 08:21:13.970383 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e716106f-e229-4e00-9ee5-278b120741a6-combined-ca-bundle\") pod \"horizon-6fc96bc4d8-55gpp\" (UID: \"e716106f-e229-4e00-9ee5-278b120741a6\") " pod="openstack/horizon-6fc96bc4d8-55gpp" Jan 29 08:21:13 crc kubenswrapper[4826]: I0129 08:21:13.970571 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e716106f-e229-4e00-9ee5-278b120741a6-horizon-secret-key\") pod \"horizon-6fc96bc4d8-55gpp\" (UID: \"e716106f-e229-4e00-9ee5-278b120741a6\") " pod="openstack/horizon-6fc96bc4d8-55gpp" Jan 29 08:21:13 crc kubenswrapper[4826]: I0129 08:21:13.971992 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e716106f-e229-4e00-9ee5-278b120741a6-horizon-tls-certs\") pod \"horizon-6fc96bc4d8-55gpp\" (UID: \"e716106f-e229-4e00-9ee5-278b120741a6\") " pod="openstack/horizon-6fc96bc4d8-55gpp" Jan 29 08:21:13 crc kubenswrapper[4826]: I0129 
08:21:13.995236 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8swwh\" (UniqueName: \"kubernetes.io/projected/e716106f-e229-4e00-9ee5-278b120741a6-kube-api-access-8swwh\") pod \"horizon-6fc96bc4d8-55gpp\" (UID: \"e716106f-e229-4e00-9ee5-278b120741a6\") " pod="openstack/horizon-6fc96bc4d8-55gpp" Jan 29 08:21:14 crc kubenswrapper[4826]: I0129 08:21:14.121413 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6fc96bc4d8-55gpp" Jan 29 08:21:14 crc kubenswrapper[4826]: I0129 08:21:14.727200 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6fc96bc4d8-55gpp"] Jan 29 08:21:14 crc kubenswrapper[4826]: I0129 08:21:14.942287 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6fc96bc4d8-55gpp" event={"ID":"e716106f-e229-4e00-9ee5-278b120741a6","Type":"ContainerStarted","Data":"6d537633a06305bd02304e84496d79a1b0fd8bad94ff4c8e880c5eab776140eb"} Jan 29 08:21:14 crc kubenswrapper[4826]: I0129 08:21:14.942592 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6fc96bc4d8-55gpp" event={"ID":"e716106f-e229-4e00-9ee5-278b120741a6","Type":"ContainerStarted","Data":"23a9fc39c1c96bcbe27d9775dc9dbce7189c88f05b2be9449157e1e5b15cc41d"} Jan 29 08:21:15 crc kubenswrapper[4826]: I0129 08:21:15.461172 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-7smxw"] Jan 29 08:21:15 crc kubenswrapper[4826]: I0129 08:21:15.462726 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-7smxw" Jan 29 08:21:15 crc kubenswrapper[4826]: I0129 08:21:15.481489 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-7smxw"] Jan 29 08:21:15 crc kubenswrapper[4826]: I0129 08:21:15.494142 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbfbbfe5-1f19-48f1-8e0b-2602c2e9c756-operator-scripts\") pod \"heat-db-create-7smxw\" (UID: \"bbfbbfe5-1f19-48f1-8e0b-2602c2e9c756\") " pod="openstack/heat-db-create-7smxw" Jan 29 08:21:15 crc kubenswrapper[4826]: I0129 08:21:15.494411 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96j84\" (UniqueName: \"kubernetes.io/projected/bbfbbfe5-1f19-48f1-8e0b-2602c2e9c756-kube-api-access-96j84\") pod \"heat-db-create-7smxw\" (UID: \"bbfbbfe5-1f19-48f1-8e0b-2602c2e9c756\") " pod="openstack/heat-db-create-7smxw" Jan 29 08:21:15 crc kubenswrapper[4826]: I0129 08:21:15.496434 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-4d83-account-create-update-wmdrs"] Jan 29 08:21:15 crc kubenswrapper[4826]: I0129 08:21:15.497924 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-4d83-account-create-update-wmdrs" Jan 29 08:21:15 crc kubenswrapper[4826]: I0129 08:21:15.501598 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Jan 29 08:21:15 crc kubenswrapper[4826]: I0129 08:21:15.514498 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-4d83-account-create-update-wmdrs"] Jan 29 08:21:15 crc kubenswrapper[4826]: I0129 08:21:15.596122 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ab6460e-b058-40f7-acda-fc4f6302f922-operator-scripts\") pod \"heat-4d83-account-create-update-wmdrs\" (UID: \"8ab6460e-b058-40f7-acda-fc4f6302f922\") " pod="openstack/heat-4d83-account-create-update-wmdrs" Jan 29 08:21:15 crc kubenswrapper[4826]: I0129 08:21:15.596210 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96j84\" (UniqueName: \"kubernetes.io/projected/bbfbbfe5-1f19-48f1-8e0b-2602c2e9c756-kube-api-access-96j84\") pod \"heat-db-create-7smxw\" (UID: \"bbfbbfe5-1f19-48f1-8e0b-2602c2e9c756\") " pod="openstack/heat-db-create-7smxw" Jan 29 08:21:15 crc kubenswrapper[4826]: I0129 08:21:15.596317 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zxxt\" (UniqueName: \"kubernetes.io/projected/8ab6460e-b058-40f7-acda-fc4f6302f922-kube-api-access-4zxxt\") pod \"heat-4d83-account-create-update-wmdrs\" (UID: \"8ab6460e-b058-40f7-acda-fc4f6302f922\") " pod="openstack/heat-4d83-account-create-update-wmdrs" Jan 29 08:21:15 crc kubenswrapper[4826]: I0129 08:21:15.596455 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbfbbfe5-1f19-48f1-8e0b-2602c2e9c756-operator-scripts\") pod \"heat-db-create-7smxw\" (UID: 
\"bbfbbfe5-1f19-48f1-8e0b-2602c2e9c756\") " pod="openstack/heat-db-create-7smxw" Jan 29 08:21:15 crc kubenswrapper[4826]: I0129 08:21:15.597319 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbfbbfe5-1f19-48f1-8e0b-2602c2e9c756-operator-scripts\") pod \"heat-db-create-7smxw\" (UID: \"bbfbbfe5-1f19-48f1-8e0b-2602c2e9c756\") " pod="openstack/heat-db-create-7smxw" Jan 29 08:21:15 crc kubenswrapper[4826]: I0129 08:21:15.620190 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96j84\" (UniqueName: \"kubernetes.io/projected/bbfbbfe5-1f19-48f1-8e0b-2602c2e9c756-kube-api-access-96j84\") pod \"heat-db-create-7smxw\" (UID: \"bbfbbfe5-1f19-48f1-8e0b-2602c2e9c756\") " pod="openstack/heat-db-create-7smxw" Jan 29 08:21:15 crc kubenswrapper[4826]: I0129 08:21:15.698382 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zxxt\" (UniqueName: \"kubernetes.io/projected/8ab6460e-b058-40f7-acda-fc4f6302f922-kube-api-access-4zxxt\") pod \"heat-4d83-account-create-update-wmdrs\" (UID: \"8ab6460e-b058-40f7-acda-fc4f6302f922\") " pod="openstack/heat-4d83-account-create-update-wmdrs" Jan 29 08:21:15 crc kubenswrapper[4826]: I0129 08:21:15.698585 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ab6460e-b058-40f7-acda-fc4f6302f922-operator-scripts\") pod \"heat-4d83-account-create-update-wmdrs\" (UID: \"8ab6460e-b058-40f7-acda-fc4f6302f922\") " pod="openstack/heat-4d83-account-create-update-wmdrs" Jan 29 08:21:15 crc kubenswrapper[4826]: I0129 08:21:15.699384 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ab6460e-b058-40f7-acda-fc4f6302f922-operator-scripts\") pod \"heat-4d83-account-create-update-wmdrs\" (UID: 
\"8ab6460e-b058-40f7-acda-fc4f6302f922\") " pod="openstack/heat-4d83-account-create-update-wmdrs" Jan 29 08:21:15 crc kubenswrapper[4826]: I0129 08:21:15.718095 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zxxt\" (UniqueName: \"kubernetes.io/projected/8ab6460e-b058-40f7-acda-fc4f6302f922-kube-api-access-4zxxt\") pod \"heat-4d83-account-create-update-wmdrs\" (UID: \"8ab6460e-b058-40f7-acda-fc4f6302f922\") " pod="openstack/heat-4d83-account-create-update-wmdrs" Jan 29 08:21:15 crc kubenswrapper[4826]: I0129 08:21:15.794592 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-7smxw" Jan 29 08:21:15 crc kubenswrapper[4826]: I0129 08:21:15.814506 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-4d83-account-create-update-wmdrs" Jan 29 08:21:15 crc kubenswrapper[4826]: I0129 08:21:15.956432 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6fc96bc4d8-55gpp" event={"ID":"e716106f-e229-4e00-9ee5-278b120741a6","Type":"ContainerStarted","Data":"3f59a4eba953f56aac869e44c09991bff799b4ad86425d3e2934b9ddacda20e1"} Jan 29 08:21:15 crc kubenswrapper[4826]: I0129 08:21:15.986734 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6fc96bc4d8-55gpp" podStartSLOduration=2.986715314 podStartE2EDuration="2.986715314s" podCreationTimestamp="2026-01-29 08:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:21:15.98462589 +0000 UTC m=+5859.846418959" watchObservedRunningTime="2026-01-29 08:21:15.986715314 +0000 UTC m=+5859.848508383" Jan 29 08:21:16 crc kubenswrapper[4826]: I0129 08:21:16.313968 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-7smxw"] Jan 29 08:21:16 crc kubenswrapper[4826]: I0129 08:21:16.431140 4826 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/heat-4d83-account-create-update-wmdrs"] Jan 29 08:21:16 crc kubenswrapper[4826]: I0129 08:21:16.976151 4826 generic.go:334] "Generic (PLEG): container finished" podID="8ab6460e-b058-40f7-acda-fc4f6302f922" containerID="5d508854bb6d61318f765e6daf31d7364bc882911143b03420aaa6b9d37b531d" exitCode=0 Jan 29 08:21:16 crc kubenswrapper[4826]: I0129 08:21:16.976211 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-4d83-account-create-update-wmdrs" event={"ID":"8ab6460e-b058-40f7-acda-fc4f6302f922","Type":"ContainerDied","Data":"5d508854bb6d61318f765e6daf31d7364bc882911143b03420aaa6b9d37b531d"} Jan 29 08:21:16 crc kubenswrapper[4826]: I0129 08:21:16.976235 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-4d83-account-create-update-wmdrs" event={"ID":"8ab6460e-b058-40f7-acda-fc4f6302f922","Type":"ContainerStarted","Data":"f65d356ec343517ba3402382d02184581ea68923b919a587f6a888901b3b15a0"} Jan 29 08:21:16 crc kubenswrapper[4826]: I0129 08:21:16.978264 4826 generic.go:334] "Generic (PLEG): container finished" podID="bbfbbfe5-1f19-48f1-8e0b-2602c2e9c756" containerID="f1163b3372d444c05f120979d2c9b4406feefb1396311e9c1a40a2c7ea6fa08e" exitCode=0 Jan 29 08:21:16 crc kubenswrapper[4826]: I0129 08:21:16.978352 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-7smxw" event={"ID":"bbfbbfe5-1f19-48f1-8e0b-2602c2e9c756","Type":"ContainerDied","Data":"f1163b3372d444c05f120979d2c9b4406feefb1396311e9c1a40a2c7ea6fa08e"} Jan 29 08:21:16 crc kubenswrapper[4826]: I0129 08:21:16.978388 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-7smxw" event={"ID":"bbfbbfe5-1f19-48f1-8e0b-2602c2e9c756","Type":"ContainerStarted","Data":"a2f252278792ee4e1f0549d5cb59f1ed1045bc7561272d33fc283827bcd55718"} Jan 29 08:21:18 crc kubenswrapper[4826]: I0129 08:21:18.436915 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-4d83-account-create-update-wmdrs" Jan 29 08:21:18 crc kubenswrapper[4826]: I0129 08:21:18.448506 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-7smxw" Jan 29 08:21:18 crc kubenswrapper[4826]: I0129 08:21:18.478961 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zxxt\" (UniqueName: \"kubernetes.io/projected/8ab6460e-b058-40f7-acda-fc4f6302f922-kube-api-access-4zxxt\") pod \"8ab6460e-b058-40f7-acda-fc4f6302f922\" (UID: \"8ab6460e-b058-40f7-acda-fc4f6302f922\") " Jan 29 08:21:18 crc kubenswrapper[4826]: I0129 08:21:18.479114 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ab6460e-b058-40f7-acda-fc4f6302f922-operator-scripts\") pod \"8ab6460e-b058-40f7-acda-fc4f6302f922\" (UID: \"8ab6460e-b058-40f7-acda-fc4f6302f922\") " Jan 29 08:21:18 crc kubenswrapper[4826]: I0129 08:21:18.480249 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ab6460e-b058-40f7-acda-fc4f6302f922-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8ab6460e-b058-40f7-acda-fc4f6302f922" (UID: "8ab6460e-b058-40f7-acda-fc4f6302f922"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:21:18 crc kubenswrapper[4826]: I0129 08:21:18.492358 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ab6460e-b058-40f7-acda-fc4f6302f922-kube-api-access-4zxxt" (OuterVolumeSpecName: "kube-api-access-4zxxt") pod "8ab6460e-b058-40f7-acda-fc4f6302f922" (UID: "8ab6460e-b058-40f7-acda-fc4f6302f922"). InnerVolumeSpecName "kube-api-access-4zxxt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:21:18 crc kubenswrapper[4826]: I0129 08:21:18.581127 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96j84\" (UniqueName: \"kubernetes.io/projected/bbfbbfe5-1f19-48f1-8e0b-2602c2e9c756-kube-api-access-96j84\") pod \"bbfbbfe5-1f19-48f1-8e0b-2602c2e9c756\" (UID: \"bbfbbfe5-1f19-48f1-8e0b-2602c2e9c756\") " Jan 29 08:21:18 crc kubenswrapper[4826]: I0129 08:21:18.581400 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbfbbfe5-1f19-48f1-8e0b-2602c2e9c756-operator-scripts\") pod \"bbfbbfe5-1f19-48f1-8e0b-2602c2e9c756\" (UID: \"bbfbbfe5-1f19-48f1-8e0b-2602c2e9c756\") " Jan 29 08:21:18 crc kubenswrapper[4826]: I0129 08:21:18.581844 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbfbbfe5-1f19-48f1-8e0b-2602c2e9c756-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bbfbbfe5-1f19-48f1-8e0b-2602c2e9c756" (UID: "bbfbbfe5-1f19-48f1-8e0b-2602c2e9c756"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:21:18 crc kubenswrapper[4826]: I0129 08:21:18.582404 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zxxt\" (UniqueName: \"kubernetes.io/projected/8ab6460e-b058-40f7-acda-fc4f6302f922-kube-api-access-4zxxt\") on node \"crc\" DevicePath \"\"" Jan 29 08:21:18 crc kubenswrapper[4826]: I0129 08:21:18.582525 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbfbbfe5-1f19-48f1-8e0b-2602c2e9c756-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:21:18 crc kubenswrapper[4826]: I0129 08:21:18.582733 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ab6460e-b058-40f7-acda-fc4f6302f922-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:21:18 crc kubenswrapper[4826]: I0129 08:21:18.584317 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbfbbfe5-1f19-48f1-8e0b-2602c2e9c756-kube-api-access-96j84" (OuterVolumeSpecName: "kube-api-access-96j84") pod "bbfbbfe5-1f19-48f1-8e0b-2602c2e9c756" (UID: "bbfbbfe5-1f19-48f1-8e0b-2602c2e9c756"). InnerVolumeSpecName "kube-api-access-96j84". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:21:18 crc kubenswrapper[4826]: I0129 08:21:18.685091 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96j84\" (UniqueName: \"kubernetes.io/projected/bbfbbfe5-1f19-48f1-8e0b-2602c2e9c756-kube-api-access-96j84\") on node \"crc\" DevicePath \"\"" Jan 29 08:21:19 crc kubenswrapper[4826]: I0129 08:21:19.002606 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-4d83-account-create-update-wmdrs" event={"ID":"8ab6460e-b058-40f7-acda-fc4f6302f922","Type":"ContainerDied","Data":"f65d356ec343517ba3402382d02184581ea68923b919a587f6a888901b3b15a0"} Jan 29 08:21:19 crc kubenswrapper[4826]: I0129 08:21:19.002656 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f65d356ec343517ba3402382d02184581ea68923b919a587f6a888901b3b15a0" Jan 29 08:21:19 crc kubenswrapper[4826]: I0129 08:21:19.002668 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-4d83-account-create-update-wmdrs" Jan 29 08:21:19 crc kubenswrapper[4826]: I0129 08:21:19.008943 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-7smxw" event={"ID":"bbfbbfe5-1f19-48f1-8e0b-2602c2e9c756","Type":"ContainerDied","Data":"a2f252278792ee4e1f0549d5cb59f1ed1045bc7561272d33fc283827bcd55718"} Jan 29 08:21:19 crc kubenswrapper[4826]: I0129 08:21:19.009335 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2f252278792ee4e1f0549d5cb59f1ed1045bc7561272d33fc283827bcd55718" Jan 29 08:21:19 crc kubenswrapper[4826]: I0129 08:21:19.008985 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-7smxw" Jan 29 08:21:20 crc kubenswrapper[4826]: I0129 08:21:20.658360 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-djgr7"] Jan 29 08:21:20 crc kubenswrapper[4826]: E0129 08:21:20.658755 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ab6460e-b058-40f7-acda-fc4f6302f922" containerName="mariadb-account-create-update" Jan 29 08:21:20 crc kubenswrapper[4826]: I0129 08:21:20.658769 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ab6460e-b058-40f7-acda-fc4f6302f922" containerName="mariadb-account-create-update" Jan 29 08:21:20 crc kubenswrapper[4826]: E0129 08:21:20.658804 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbfbbfe5-1f19-48f1-8e0b-2602c2e9c756" containerName="mariadb-database-create" Jan 29 08:21:20 crc kubenswrapper[4826]: I0129 08:21:20.658810 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbfbbfe5-1f19-48f1-8e0b-2602c2e9c756" containerName="mariadb-database-create" Jan 29 08:21:20 crc kubenswrapper[4826]: I0129 08:21:20.658983 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbfbbfe5-1f19-48f1-8e0b-2602c2e9c756" containerName="mariadb-database-create" Jan 29 08:21:20 crc kubenswrapper[4826]: I0129 08:21:20.659003 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ab6460e-b058-40f7-acda-fc4f6302f922" containerName="mariadb-account-create-update" Jan 29 08:21:20 crc kubenswrapper[4826]: I0129 08:21:20.659642 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-djgr7" Jan 29 08:21:20 crc kubenswrapper[4826]: I0129 08:21:20.666983 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 29 08:21:20 crc kubenswrapper[4826]: I0129 08:21:20.667017 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-q2x57" Jan 29 08:21:20 crc kubenswrapper[4826]: I0129 08:21:20.676503 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-djgr7"] Jan 29 08:21:20 crc kubenswrapper[4826]: I0129 08:21:20.729914 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46-combined-ca-bundle\") pod \"heat-db-sync-djgr7\" (UID: \"c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46\") " pod="openstack/heat-db-sync-djgr7" Jan 29 08:21:20 crc kubenswrapper[4826]: I0129 08:21:20.730011 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s89w\" (UniqueName: \"kubernetes.io/projected/c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46-kube-api-access-9s89w\") pod \"heat-db-sync-djgr7\" (UID: \"c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46\") " pod="openstack/heat-db-sync-djgr7" Jan 29 08:21:20 crc kubenswrapper[4826]: I0129 08:21:20.730141 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46-config-data\") pod \"heat-db-sync-djgr7\" (UID: \"c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46\") " pod="openstack/heat-db-sync-djgr7" Jan 29 08:21:20 crc kubenswrapper[4826]: I0129 08:21:20.832274 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s89w\" (UniqueName: \"kubernetes.io/projected/c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46-kube-api-access-9s89w\") pod 
\"heat-db-sync-djgr7\" (UID: \"c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46\") " pod="openstack/heat-db-sync-djgr7" Jan 29 08:21:20 crc kubenswrapper[4826]: I0129 08:21:20.832614 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46-config-data\") pod \"heat-db-sync-djgr7\" (UID: \"c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46\") " pod="openstack/heat-db-sync-djgr7" Jan 29 08:21:20 crc kubenswrapper[4826]: I0129 08:21:20.832687 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46-combined-ca-bundle\") pod \"heat-db-sync-djgr7\" (UID: \"c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46\") " pod="openstack/heat-db-sync-djgr7" Jan 29 08:21:20 crc kubenswrapper[4826]: I0129 08:21:20.841527 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46-config-data\") pod \"heat-db-sync-djgr7\" (UID: \"c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46\") " pod="openstack/heat-db-sync-djgr7" Jan 29 08:21:20 crc kubenswrapper[4826]: I0129 08:21:20.856884 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46-combined-ca-bundle\") pod \"heat-db-sync-djgr7\" (UID: \"c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46\") " pod="openstack/heat-db-sync-djgr7" Jan 29 08:21:20 crc kubenswrapper[4826]: I0129 08:21:20.910867 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s89w\" (UniqueName: \"kubernetes.io/projected/c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46-kube-api-access-9s89w\") pod \"heat-db-sync-djgr7\" (UID: \"c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46\") " pod="openstack/heat-db-sync-djgr7" Jan 29 08:21:21 crc kubenswrapper[4826]: I0129 08:21:21.000794 
4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-djgr7" Jan 29 08:21:21 crc kubenswrapper[4826]: I0129 08:21:21.527131 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-djgr7"] Jan 29 08:21:21 crc kubenswrapper[4826]: W0129 08:21:21.538376 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc52d7b11_c4b1_4dc3_9ac7_ae8bf7131e46.slice/crio-c5f81b36e0a9592d6f2472c212095264f9ebe5fe3c6d4df8cdaea1693cb70ca6 WatchSource:0}: Error finding container c5f81b36e0a9592d6f2472c212095264f9ebe5fe3c6d4df8cdaea1693cb70ca6: Status 404 returned error can't find the container with id c5f81b36e0a9592d6f2472c212095264f9ebe5fe3c6d4df8cdaea1693cb70ca6 Jan 29 08:21:21 crc kubenswrapper[4826]: I0129 08:21:21.541023 4826 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 08:21:22 crc kubenswrapper[4826]: I0129 08:21:22.036383 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-djgr7" event={"ID":"c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46","Type":"ContainerStarted","Data":"c5f81b36e0a9592d6f2472c212095264f9ebe5fe3c6d4df8cdaea1693cb70ca6"} Jan 29 08:21:24 crc kubenswrapper[4826]: I0129 08:21:24.122593 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6fc96bc4d8-55gpp" Jan 29 08:21:24 crc kubenswrapper[4826]: I0129 08:21:24.124608 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6fc96bc4d8-55gpp" Jan 29 08:21:26 crc kubenswrapper[4826]: I0129 08:21:26.079922 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-275b-account-create-update-r6hnm"] Jan 29 08:21:26 crc kubenswrapper[4826]: I0129 08:21:26.089866 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-m9xx7"] Jan 29 08:21:26 crc 
kubenswrapper[4826]: I0129 08:21:26.098862 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-m9xx7"] Jan 29 08:21:26 crc kubenswrapper[4826]: I0129 08:21:26.107937 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-275b-account-create-update-r6hnm"] Jan 29 08:21:26 crc kubenswrapper[4826]: I0129 08:21:26.822662 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="415b6ea2-bd07-49eb-adc3-832640e78058" path="/var/lib/kubelet/pods/415b6ea2-bd07-49eb-adc3-832640e78058/volumes" Jan 29 08:21:26 crc kubenswrapper[4826]: I0129 08:21:26.824081 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6785afcf-73dd-42b9-9984-0c73d4883b53" path="/var/lib/kubelet/pods/6785afcf-73dd-42b9-9984-0c73d4883b53/volumes" Jan 29 08:21:31 crc kubenswrapper[4826]: I0129 08:21:31.130253 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-djgr7" event={"ID":"c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46","Type":"ContainerStarted","Data":"7ff02862014c3127f16f72177272d257717af626a4254ea5161357961db2d9cb"} Jan 29 08:21:33 crc kubenswrapper[4826]: I0129 08:21:33.164264 4826 generic.go:334] "Generic (PLEG): container finished" podID="c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46" containerID="7ff02862014c3127f16f72177272d257717af626a4254ea5161357961db2d9cb" exitCode=0 Jan 29 08:21:33 crc kubenswrapper[4826]: I0129 08:21:33.164327 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-djgr7" event={"ID":"c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46","Type":"ContainerDied","Data":"7ff02862014c3127f16f72177272d257717af626a4254ea5161357961db2d9cb"} Jan 29 08:21:34 crc kubenswrapper[4826]: I0129 08:21:34.587844 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-djgr7" Jan 29 08:21:34 crc kubenswrapper[4826]: I0129 08:21:34.687608 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46-config-data\") pod \"c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46\" (UID: \"c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46\") " Jan 29 08:21:34 crc kubenswrapper[4826]: I0129 08:21:34.687704 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9s89w\" (UniqueName: \"kubernetes.io/projected/c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46-kube-api-access-9s89w\") pod \"c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46\" (UID: \"c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46\") " Jan 29 08:21:34 crc kubenswrapper[4826]: I0129 08:21:34.687825 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46-combined-ca-bundle\") pod \"c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46\" (UID: \"c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46\") " Jan 29 08:21:34 crc kubenswrapper[4826]: I0129 08:21:34.695318 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46-kube-api-access-9s89w" (OuterVolumeSpecName: "kube-api-access-9s89w") pod "c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46" (UID: "c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46"). InnerVolumeSpecName "kube-api-access-9s89w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:21:34 crc kubenswrapper[4826]: I0129 08:21:34.763332 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46" (UID: "c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:21:34 crc kubenswrapper[4826]: I0129 08:21:34.790592 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9s89w\" (UniqueName: \"kubernetes.io/projected/c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46-kube-api-access-9s89w\") on node \"crc\" DevicePath \"\"" Jan 29 08:21:34 crc kubenswrapper[4826]: I0129 08:21:34.790628 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:21:34 crc kubenswrapper[4826]: I0129 08:21:34.793463 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46-config-data" (OuterVolumeSpecName: "config-data") pod "c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46" (UID: "c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:21:34 crc kubenswrapper[4826]: I0129 08:21:34.891835 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:21:35 crc kubenswrapper[4826]: I0129 08:21:35.189601 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-djgr7" event={"ID":"c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46","Type":"ContainerDied","Data":"c5f81b36e0a9592d6f2472c212095264f9ebe5fe3c6d4df8cdaea1693cb70ca6"} Jan 29 08:21:35 crc kubenswrapper[4826]: I0129 08:21:35.189954 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5f81b36e0a9592d6f2472c212095264f9ebe5fe3c6d4df8cdaea1693cb70ca6" Jan 29 08:21:35 crc kubenswrapper[4826]: I0129 08:21:35.190057 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-djgr7" Jan 29 08:21:35 crc kubenswrapper[4826]: I0129 08:21:35.656279 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:21:35 crc kubenswrapper[4826]: I0129 08:21:35.656388 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:21:35 crc kubenswrapper[4826]: I0129 08:21:35.656455 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" Jan 29 08:21:35 crc kubenswrapper[4826]: I0129 08:21:35.657632 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"952b43d896912a3cda5e612a6c0f46d793202eff6352f27fa018ee360258c570"} pod="openshift-machine-config-operator/machine-config-daemon-llzmh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 08:21:35 crc kubenswrapper[4826]: I0129 08:21:35.657754 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" containerID="cri-o://952b43d896912a3cda5e612a6c0f46d793202eff6352f27fa018ee360258c570" gracePeriod=600 Jan 29 08:21:35 crc kubenswrapper[4826]: E0129 08:21:35.791445 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:21:35 crc kubenswrapper[4826]: I0129 08:21:35.814512 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6fc96bc4d8-55gpp" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.038232 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-v87n9"] Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.053211 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-v87n9"] Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.202677 4826 generic.go:334] "Generic (PLEG): container finished" podID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerID="952b43d896912a3cda5e612a6c0f46d793202eff6352f27fa018ee360258c570" exitCode=0 Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.202759 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerDied","Data":"952b43d896912a3cda5e612a6c0f46d793202eff6352f27fa018ee360258c570"} Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.202823 4826 scope.go:117] "RemoveContainer" containerID="fa58273e324a815b3443a21ebdc2b51cbff96c021a9b9a81653f9dc2de446316" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.204061 4826 scope.go:117] "RemoveContainer" containerID="952b43d896912a3cda5e612a6c0f46d793202eff6352f27fa018ee360258c570" Jan 29 08:21:36 crc kubenswrapper[4826]: E0129 08:21:36.204705 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.366169 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-98bbfb9d-vk4s6"] Jan 29 08:21:36 crc kubenswrapper[4826]: E0129 08:21:36.371420 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46" containerName="heat-db-sync" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.371632 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46" containerName="heat-db-sync" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.372048 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46" containerName="heat-db-sync" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.372793 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-98bbfb9d-vk4s6" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.385145 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.385393 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-q2x57" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.385512 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.391153 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-98bbfb9d-vk4s6"] Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.489772 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-54948dbc74-zblvv"] Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.491409 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-54948dbc74-zblvv" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.497068 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.518994 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-54948dbc74-zblvv"] Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.529683 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb6w8\" (UniqueName: \"kubernetes.io/projected/b24cad90-178f-44c8-ac99-398d08215783-kube-api-access-bb6w8\") pod \"heat-engine-98bbfb9d-vk4s6\" (UID: \"b24cad90-178f-44c8-ac99-398d08215783\") " pod="openstack/heat-engine-98bbfb9d-vk4s6" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.529892 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b24cad90-178f-44c8-ac99-398d08215783-config-data\") pod \"heat-engine-98bbfb9d-vk4s6\" (UID: \"b24cad90-178f-44c8-ac99-398d08215783\") " pod="openstack/heat-engine-98bbfb9d-vk4s6" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.529966 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4824add4-25e9-4822-81fb-090d9ea91628-config-data\") pod \"heat-cfnapi-54948dbc74-zblvv\" (UID: \"4824add4-25e9-4822-81fb-090d9ea91628\") " pod="openstack/heat-cfnapi-54948dbc74-zblvv" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.530057 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kfmw\" (UniqueName: \"kubernetes.io/projected/4824add4-25e9-4822-81fb-090d9ea91628-kube-api-access-2kfmw\") pod \"heat-cfnapi-54948dbc74-zblvv\" (UID: \"4824add4-25e9-4822-81fb-090d9ea91628\") " 
pod="openstack/heat-cfnapi-54948dbc74-zblvv" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.530190 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b24cad90-178f-44c8-ac99-398d08215783-combined-ca-bundle\") pod \"heat-engine-98bbfb9d-vk4s6\" (UID: \"b24cad90-178f-44c8-ac99-398d08215783\") " pod="openstack/heat-engine-98bbfb9d-vk4s6" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.530291 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b24cad90-178f-44c8-ac99-398d08215783-config-data-custom\") pod \"heat-engine-98bbfb9d-vk4s6\" (UID: \"b24cad90-178f-44c8-ac99-398d08215783\") " pod="openstack/heat-engine-98bbfb9d-vk4s6" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.530541 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4824add4-25e9-4822-81fb-090d9ea91628-combined-ca-bundle\") pod \"heat-cfnapi-54948dbc74-zblvv\" (UID: \"4824add4-25e9-4822-81fb-090d9ea91628\") " pod="openstack/heat-cfnapi-54948dbc74-zblvv" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.530619 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4824add4-25e9-4822-81fb-090d9ea91628-config-data-custom\") pod \"heat-cfnapi-54948dbc74-zblvv\" (UID: \"4824add4-25e9-4822-81fb-090d9ea91628\") " pod="openstack/heat-cfnapi-54948dbc74-zblvv" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.536146 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-74496849b9-d2xs7"] Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.537761 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-74496849b9-d2xs7" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.544562 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.551140 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-74496849b9-d2xs7"] Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.632689 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4824add4-25e9-4822-81fb-090d9ea91628-combined-ca-bundle\") pod \"heat-cfnapi-54948dbc74-zblvv\" (UID: \"4824add4-25e9-4822-81fb-090d9ea91628\") " pod="openstack/heat-cfnapi-54948dbc74-zblvv" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.632735 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4824add4-25e9-4822-81fb-090d9ea91628-config-data-custom\") pod \"heat-cfnapi-54948dbc74-zblvv\" (UID: \"4824add4-25e9-4822-81fb-090d9ea91628\") " pod="openstack/heat-cfnapi-54948dbc74-zblvv" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.632768 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0baeb655-f0ce-41c8-a318-ab09ef75c097-config-data-custom\") pod \"heat-api-74496849b9-d2xs7\" (UID: \"0baeb655-f0ce-41c8-a318-ab09ef75c097\") " pod="openstack/heat-api-74496849b9-d2xs7" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.632795 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb6w8\" (UniqueName: \"kubernetes.io/projected/b24cad90-178f-44c8-ac99-398d08215783-kube-api-access-bb6w8\") pod \"heat-engine-98bbfb9d-vk4s6\" (UID: \"b24cad90-178f-44c8-ac99-398d08215783\") " pod="openstack/heat-engine-98bbfb9d-vk4s6" Jan 29 
08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.632819 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b24cad90-178f-44c8-ac99-398d08215783-config-data\") pod \"heat-engine-98bbfb9d-vk4s6\" (UID: \"b24cad90-178f-44c8-ac99-398d08215783\") " pod="openstack/heat-engine-98bbfb9d-vk4s6" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.632836 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0baeb655-f0ce-41c8-a318-ab09ef75c097-combined-ca-bundle\") pod \"heat-api-74496849b9-d2xs7\" (UID: \"0baeb655-f0ce-41c8-a318-ab09ef75c097\") " pod="openstack/heat-api-74496849b9-d2xs7" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.632859 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4824add4-25e9-4822-81fb-090d9ea91628-config-data\") pod \"heat-cfnapi-54948dbc74-zblvv\" (UID: \"4824add4-25e9-4822-81fb-090d9ea91628\") " pod="openstack/heat-cfnapi-54948dbc74-zblvv" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.632872 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kfmw\" (UniqueName: \"kubernetes.io/projected/4824add4-25e9-4822-81fb-090d9ea91628-kube-api-access-2kfmw\") pod \"heat-cfnapi-54948dbc74-zblvv\" (UID: \"4824add4-25e9-4822-81fb-090d9ea91628\") " pod="openstack/heat-cfnapi-54948dbc74-zblvv" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.632915 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0baeb655-f0ce-41c8-a318-ab09ef75c097-config-data\") pod \"heat-api-74496849b9-d2xs7\" (UID: \"0baeb655-f0ce-41c8-a318-ab09ef75c097\") " pod="openstack/heat-api-74496849b9-d2xs7" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 
08:21:36.632965 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b24cad90-178f-44c8-ac99-398d08215783-combined-ca-bundle\") pod \"heat-engine-98bbfb9d-vk4s6\" (UID: \"b24cad90-178f-44c8-ac99-398d08215783\") " pod="openstack/heat-engine-98bbfb9d-vk4s6" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.633006 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b24cad90-178f-44c8-ac99-398d08215783-config-data-custom\") pod \"heat-engine-98bbfb9d-vk4s6\" (UID: \"b24cad90-178f-44c8-ac99-398d08215783\") " pod="openstack/heat-engine-98bbfb9d-vk4s6" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.633026 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwgwk\" (UniqueName: \"kubernetes.io/projected/0baeb655-f0ce-41c8-a318-ab09ef75c097-kube-api-access-vwgwk\") pod \"heat-api-74496849b9-d2xs7\" (UID: \"0baeb655-f0ce-41c8-a318-ab09ef75c097\") " pod="openstack/heat-api-74496849b9-d2xs7" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.640614 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4824add4-25e9-4822-81fb-090d9ea91628-config-data\") pod \"heat-cfnapi-54948dbc74-zblvv\" (UID: \"4824add4-25e9-4822-81fb-090d9ea91628\") " pod="openstack/heat-cfnapi-54948dbc74-zblvv" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.642156 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4824add4-25e9-4822-81fb-090d9ea91628-combined-ca-bundle\") pod \"heat-cfnapi-54948dbc74-zblvv\" (UID: \"4824add4-25e9-4822-81fb-090d9ea91628\") " pod="openstack/heat-cfnapi-54948dbc74-zblvv" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.642271 4826 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4824add4-25e9-4822-81fb-090d9ea91628-config-data-custom\") pod \"heat-cfnapi-54948dbc74-zblvv\" (UID: \"4824add4-25e9-4822-81fb-090d9ea91628\") " pod="openstack/heat-cfnapi-54948dbc74-zblvv" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.643429 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b24cad90-178f-44c8-ac99-398d08215783-config-data\") pod \"heat-engine-98bbfb9d-vk4s6\" (UID: \"b24cad90-178f-44c8-ac99-398d08215783\") " pod="openstack/heat-engine-98bbfb9d-vk4s6" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.646846 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b24cad90-178f-44c8-ac99-398d08215783-combined-ca-bundle\") pod \"heat-engine-98bbfb9d-vk4s6\" (UID: \"b24cad90-178f-44c8-ac99-398d08215783\") " pod="openstack/heat-engine-98bbfb9d-vk4s6" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.651544 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb6w8\" (UniqueName: \"kubernetes.io/projected/b24cad90-178f-44c8-ac99-398d08215783-kube-api-access-bb6w8\") pod \"heat-engine-98bbfb9d-vk4s6\" (UID: \"b24cad90-178f-44c8-ac99-398d08215783\") " pod="openstack/heat-engine-98bbfb9d-vk4s6" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.656910 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kfmw\" (UniqueName: \"kubernetes.io/projected/4824add4-25e9-4822-81fb-090d9ea91628-kube-api-access-2kfmw\") pod \"heat-cfnapi-54948dbc74-zblvv\" (UID: \"4824add4-25e9-4822-81fb-090d9ea91628\") " pod="openstack/heat-cfnapi-54948dbc74-zblvv" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.659369 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b24cad90-178f-44c8-ac99-398d08215783-config-data-custom\") pod \"heat-engine-98bbfb9d-vk4s6\" (UID: \"b24cad90-178f-44c8-ac99-398d08215783\") " pod="openstack/heat-engine-98bbfb9d-vk4s6" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.727601 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-98bbfb9d-vk4s6" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.734198 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0baeb655-f0ce-41c8-a318-ab09ef75c097-config-data-custom\") pod \"heat-api-74496849b9-d2xs7\" (UID: \"0baeb655-f0ce-41c8-a318-ab09ef75c097\") " pod="openstack/heat-api-74496849b9-d2xs7" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.734267 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0baeb655-f0ce-41c8-a318-ab09ef75c097-combined-ca-bundle\") pod \"heat-api-74496849b9-d2xs7\" (UID: \"0baeb655-f0ce-41c8-a318-ab09ef75c097\") " pod="openstack/heat-api-74496849b9-d2xs7" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.734352 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0baeb655-f0ce-41c8-a318-ab09ef75c097-config-data\") pod \"heat-api-74496849b9-d2xs7\" (UID: \"0baeb655-f0ce-41c8-a318-ab09ef75c097\") " pod="openstack/heat-api-74496849b9-d2xs7" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.734456 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwgwk\" (UniqueName: \"kubernetes.io/projected/0baeb655-f0ce-41c8-a318-ab09ef75c097-kube-api-access-vwgwk\") pod \"heat-api-74496849b9-d2xs7\" (UID: \"0baeb655-f0ce-41c8-a318-ab09ef75c097\") " pod="openstack/heat-api-74496849b9-d2xs7" Jan 29 08:21:36 crc 
kubenswrapper[4826]: I0129 08:21:36.738921 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0baeb655-f0ce-41c8-a318-ab09ef75c097-config-data-custom\") pod \"heat-api-74496849b9-d2xs7\" (UID: \"0baeb655-f0ce-41c8-a318-ab09ef75c097\") " pod="openstack/heat-api-74496849b9-d2xs7" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.740449 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0baeb655-f0ce-41c8-a318-ab09ef75c097-combined-ca-bundle\") pod \"heat-api-74496849b9-d2xs7\" (UID: \"0baeb655-f0ce-41c8-a318-ab09ef75c097\") " pod="openstack/heat-api-74496849b9-d2xs7" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.743410 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0baeb655-f0ce-41c8-a318-ab09ef75c097-config-data\") pod \"heat-api-74496849b9-d2xs7\" (UID: \"0baeb655-f0ce-41c8-a318-ab09ef75c097\") " pod="openstack/heat-api-74496849b9-d2xs7" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.753109 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwgwk\" (UniqueName: \"kubernetes.io/projected/0baeb655-f0ce-41c8-a318-ab09ef75c097-kube-api-access-vwgwk\") pod \"heat-api-74496849b9-d2xs7\" (UID: \"0baeb655-f0ce-41c8-a318-ab09ef75c097\") " pod="openstack/heat-api-74496849b9-d2xs7" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.820795 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-54948dbc74-zblvv" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.867859 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6023eba8-a02a-4d6d-8975-41e2a8c3e771" path="/var/lib/kubelet/pods/6023eba8-a02a-4d6d-8975-41e2a8c3e771/volumes" Jan 29 08:21:36 crc kubenswrapper[4826]: I0129 08:21:36.871636 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-74496849b9-d2xs7" Jan 29 08:21:37 crc kubenswrapper[4826]: I0129 08:21:37.439998 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-98bbfb9d-vk4s6"] Jan 29 08:21:37 crc kubenswrapper[4826]: W0129 08:21:37.524040 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0baeb655_f0ce_41c8_a318_ab09ef75c097.slice/crio-00b1013f0dca57516bbbbe82400212f2171cfcc8e2fb649e4a0d7e204ea17f5e WatchSource:0}: Error finding container 00b1013f0dca57516bbbbe82400212f2171cfcc8e2fb649e4a0d7e204ea17f5e: Status 404 returned error can't find the container with id 00b1013f0dca57516bbbbe82400212f2171cfcc8e2fb649e4a0d7e204ea17f5e Jan 29 08:21:37 crc kubenswrapper[4826]: I0129 08:21:37.531444 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-74496849b9-d2xs7"] Jan 29 08:21:37 crc kubenswrapper[4826]: I0129 08:21:37.614347 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-54948dbc74-zblvv"] Jan 29 08:21:37 crc kubenswrapper[4826]: W0129 08:21:37.621084 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4824add4_25e9_4822_81fb_090d9ea91628.slice/crio-ea8c005cc9aa2d1721f9eec6a54bb92d88eeee295e2d696767ed759271684082 WatchSource:0}: Error finding container ea8c005cc9aa2d1721f9eec6a54bb92d88eeee295e2d696767ed759271684082: Status 404 returned error can't find the container 
with id ea8c005cc9aa2d1721f9eec6a54bb92d88eeee295e2d696767ed759271684082 Jan 29 08:21:38 crc kubenswrapper[4826]: I0129 08:21:38.011404 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6fc96bc4d8-55gpp" Jan 29 08:21:38 crc kubenswrapper[4826]: I0129 08:21:38.083704 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-58dcf5df6-kbrdh"] Jan 29 08:21:38 crc kubenswrapper[4826]: I0129 08:21:38.083932 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-58dcf5df6-kbrdh" podUID="c1e011d3-9762-44e1-8137-2432569a561e" containerName="horizon-log" containerID="cri-o://2db8c1b361235872c80df05f35d4eb1e837fe0e7db7c8c165b46d313440e896c" gracePeriod=30 Jan 29 08:21:38 crc kubenswrapper[4826]: I0129 08:21:38.084326 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-58dcf5df6-kbrdh" podUID="c1e011d3-9762-44e1-8137-2432569a561e" containerName="horizon" containerID="cri-o://4ca5abf2dc71f9e1fa8cf8c6a06a22c2fa47f5037111388105ce6cdef610d6a2" gracePeriod=30 Jan 29 08:21:38 crc kubenswrapper[4826]: I0129 08:21:38.235379 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-54948dbc74-zblvv" event={"ID":"4824add4-25e9-4822-81fb-090d9ea91628","Type":"ContainerStarted","Data":"ea8c005cc9aa2d1721f9eec6a54bb92d88eeee295e2d696767ed759271684082"} Jan 29 08:21:38 crc kubenswrapper[4826]: I0129 08:21:38.237920 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-98bbfb9d-vk4s6" event={"ID":"b24cad90-178f-44c8-ac99-398d08215783","Type":"ContainerStarted","Data":"06220657fe4a1c8b3765cd2347b67f70f26231feea13c7222b8bd5147e3f8f57"} Jan 29 08:21:38 crc kubenswrapper[4826]: I0129 08:21:38.237944 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-98bbfb9d-vk4s6" 
event={"ID":"b24cad90-178f-44c8-ac99-398d08215783","Type":"ContainerStarted","Data":"1ef3e52743df5a3ff942d8da86928546c4acd2e3be03d15c0617fc7c696a59a4"} Jan 29 08:21:38 crc kubenswrapper[4826]: I0129 08:21:38.237973 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-98bbfb9d-vk4s6" Jan 29 08:21:38 crc kubenswrapper[4826]: I0129 08:21:38.239651 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-74496849b9-d2xs7" event={"ID":"0baeb655-f0ce-41c8-a318-ab09ef75c097","Type":"ContainerStarted","Data":"00b1013f0dca57516bbbbe82400212f2171cfcc8e2fb649e4a0d7e204ea17f5e"} Jan 29 08:21:38 crc kubenswrapper[4826]: I0129 08:21:38.254778 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-98bbfb9d-vk4s6" podStartSLOduration=2.254759671 podStartE2EDuration="2.254759671s" podCreationTimestamp="2026-01-29 08:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:21:38.251248398 +0000 UTC m=+5882.113041467" watchObservedRunningTime="2026-01-29 08:21:38.254759671 +0000 UTC m=+5882.116552740" Jan 29 08:21:40 crc kubenswrapper[4826]: I0129 08:21:40.255937 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-74496849b9-d2xs7" event={"ID":"0baeb655-f0ce-41c8-a318-ab09ef75c097","Type":"ContainerStarted","Data":"41b07b57079261f6d210e4ceae3845d246486f2cf359f120fc7f45e8ce2b4251"} Jan 29 08:21:40 crc kubenswrapper[4826]: I0129 08:21:40.256393 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-74496849b9-d2xs7" Jan 29 08:21:40 crc kubenswrapper[4826]: I0129 08:21:40.257216 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-54948dbc74-zblvv" 
event={"ID":"4824add4-25e9-4822-81fb-090d9ea91628","Type":"ContainerStarted","Data":"0597ad867550fd950c0ebf8966da743cc270d7a7ca88e69c23d836c8d3a87f63"} Jan 29 08:21:40 crc kubenswrapper[4826]: I0129 08:21:40.273630 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-74496849b9-d2xs7" podStartSLOduration=2.579580815 podStartE2EDuration="4.273612173s" podCreationTimestamp="2026-01-29 08:21:36 +0000 UTC" firstStartedPulling="2026-01-29 08:21:37.529229916 +0000 UTC m=+5881.391022985" lastFinishedPulling="2026-01-29 08:21:39.223261274 +0000 UTC m=+5883.085054343" observedRunningTime="2026-01-29 08:21:40.270787289 +0000 UTC m=+5884.132580348" watchObservedRunningTime="2026-01-29 08:21:40.273612173 +0000 UTC m=+5884.135405242" Jan 29 08:21:40 crc kubenswrapper[4826]: I0129 08:21:40.322229 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-54948dbc74-zblvv" podStartSLOduration=2.719670614 podStartE2EDuration="4.322212303s" podCreationTimestamp="2026-01-29 08:21:36 +0000 UTC" firstStartedPulling="2026-01-29 08:21:37.623484638 +0000 UTC m=+5881.485277707" lastFinishedPulling="2026-01-29 08:21:39.226026327 +0000 UTC m=+5883.087819396" observedRunningTime="2026-01-29 08:21:40.293860576 +0000 UTC m=+5884.155653645" watchObservedRunningTime="2026-01-29 08:21:40.322212303 +0000 UTC m=+5884.184005372" Jan 29 08:21:40 crc kubenswrapper[4826]: I0129 08:21:40.324525 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vmq5z"] Jan 29 08:21:40 crc kubenswrapper[4826]: I0129 08:21:40.326544 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vmq5z" Jan 29 08:21:40 crc kubenswrapper[4826]: I0129 08:21:40.365442 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vmq5z"] Jan 29 08:21:40 crc kubenswrapper[4826]: I0129 08:21:40.530146 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dea41b4-345f-4a40-94a3-b10ac12aa343-utilities\") pod \"community-operators-vmq5z\" (UID: \"6dea41b4-345f-4a40-94a3-b10ac12aa343\") " pod="openshift-marketplace/community-operators-vmq5z" Jan 29 08:21:40 crc kubenswrapper[4826]: I0129 08:21:40.530624 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5b6l\" (UniqueName: \"kubernetes.io/projected/6dea41b4-345f-4a40-94a3-b10ac12aa343-kube-api-access-h5b6l\") pod \"community-operators-vmq5z\" (UID: \"6dea41b4-345f-4a40-94a3-b10ac12aa343\") " pod="openshift-marketplace/community-operators-vmq5z" Jan 29 08:21:40 crc kubenswrapper[4826]: I0129 08:21:40.530691 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dea41b4-345f-4a40-94a3-b10ac12aa343-catalog-content\") pod \"community-operators-vmq5z\" (UID: \"6dea41b4-345f-4a40-94a3-b10ac12aa343\") " pod="openshift-marketplace/community-operators-vmq5z" Jan 29 08:21:40 crc kubenswrapper[4826]: I0129 08:21:40.631914 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dea41b4-345f-4a40-94a3-b10ac12aa343-utilities\") pod \"community-operators-vmq5z\" (UID: \"6dea41b4-345f-4a40-94a3-b10ac12aa343\") " pod="openshift-marketplace/community-operators-vmq5z" Jan 29 08:21:40 crc kubenswrapper[4826]: I0129 08:21:40.632045 4826 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-h5b6l\" (UniqueName: \"kubernetes.io/projected/6dea41b4-345f-4a40-94a3-b10ac12aa343-kube-api-access-h5b6l\") pod \"community-operators-vmq5z\" (UID: \"6dea41b4-345f-4a40-94a3-b10ac12aa343\") " pod="openshift-marketplace/community-operators-vmq5z" Jan 29 08:21:40 crc kubenswrapper[4826]: I0129 08:21:40.632095 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dea41b4-345f-4a40-94a3-b10ac12aa343-catalog-content\") pod \"community-operators-vmq5z\" (UID: \"6dea41b4-345f-4a40-94a3-b10ac12aa343\") " pod="openshift-marketplace/community-operators-vmq5z" Jan 29 08:21:40 crc kubenswrapper[4826]: I0129 08:21:40.632506 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dea41b4-345f-4a40-94a3-b10ac12aa343-utilities\") pod \"community-operators-vmq5z\" (UID: \"6dea41b4-345f-4a40-94a3-b10ac12aa343\") " pod="openshift-marketplace/community-operators-vmq5z" Jan 29 08:21:40 crc kubenswrapper[4826]: I0129 08:21:40.632527 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dea41b4-345f-4a40-94a3-b10ac12aa343-catalog-content\") pod \"community-operators-vmq5z\" (UID: \"6dea41b4-345f-4a40-94a3-b10ac12aa343\") " pod="openshift-marketplace/community-operators-vmq5z" Jan 29 08:21:40 crc kubenswrapper[4826]: I0129 08:21:40.652988 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5b6l\" (UniqueName: \"kubernetes.io/projected/6dea41b4-345f-4a40-94a3-b10ac12aa343-kube-api-access-h5b6l\") pod \"community-operators-vmq5z\" (UID: \"6dea41b4-345f-4a40-94a3-b10ac12aa343\") " pod="openshift-marketplace/community-operators-vmq5z" Jan 29 08:21:40 crc kubenswrapper[4826]: I0129 08:21:40.943143 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vmq5z" Jan 29 08:21:41 crc kubenswrapper[4826]: I0129 08:21:41.233194 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-58dcf5df6-kbrdh" podUID="c1e011d3-9762-44e1-8137-2432569a561e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.100:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:52646->10.217.1.100:8443: read: connection reset by peer" Jan 29 08:21:41 crc kubenswrapper[4826]: I0129 08:21:41.277860 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-54948dbc74-zblvv" Jan 29 08:21:41 crc kubenswrapper[4826]: I0129 08:21:41.381931 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vmq5z"] Jan 29 08:21:41 crc kubenswrapper[4826]: W0129 08:21:41.384462 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6dea41b4_345f_4a40_94a3_b10ac12aa343.slice/crio-cf20a1e23d43b29359d13a9dba7ab1af5d338c11bdc1d978e43b7dd6806b0f30 WatchSource:0}: Error finding container cf20a1e23d43b29359d13a9dba7ab1af5d338c11bdc1d978e43b7dd6806b0f30: Status 404 returned error can't find the container with id cf20a1e23d43b29359d13a9dba7ab1af5d338c11bdc1d978e43b7dd6806b0f30 Jan 29 08:21:42 crc kubenswrapper[4826]: I0129 08:21:42.285482 4826 generic.go:334] "Generic (PLEG): container finished" podID="6dea41b4-345f-4a40-94a3-b10ac12aa343" containerID="fe62af21907dda08cf858e76076a6bb0397ed1410bdacad2bd6170d3c5c298a2" exitCode=0 Jan 29 08:21:42 crc kubenswrapper[4826]: I0129 08:21:42.285546 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmq5z" event={"ID":"6dea41b4-345f-4a40-94a3-b10ac12aa343","Type":"ContainerDied","Data":"fe62af21907dda08cf858e76076a6bb0397ed1410bdacad2bd6170d3c5c298a2"} Jan 29 08:21:42 crc kubenswrapper[4826]: 
I0129 08:21:42.285822 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmq5z" event={"ID":"6dea41b4-345f-4a40-94a3-b10ac12aa343","Type":"ContainerStarted","Data":"cf20a1e23d43b29359d13a9dba7ab1af5d338c11bdc1d978e43b7dd6806b0f30"} Jan 29 08:21:42 crc kubenswrapper[4826]: I0129 08:21:42.287915 4826 generic.go:334] "Generic (PLEG): container finished" podID="c1e011d3-9762-44e1-8137-2432569a561e" containerID="4ca5abf2dc71f9e1fa8cf8c6a06a22c2fa47f5037111388105ce6cdef610d6a2" exitCode=0 Jan 29 08:21:42 crc kubenswrapper[4826]: I0129 08:21:42.288001 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58dcf5df6-kbrdh" event={"ID":"c1e011d3-9762-44e1-8137-2432569a561e","Type":"ContainerDied","Data":"4ca5abf2dc71f9e1fa8cf8c6a06a22c2fa47f5037111388105ce6cdef610d6a2"} Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.314716 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmq5z" event={"ID":"6dea41b4-345f-4a40-94a3-b10ac12aa343","Type":"ContainerStarted","Data":"cab2edb2b84981011d8653c564ef43d1ca05299d2b889ad9a8dd6a1e4ace4dcd"} Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.620139 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-745776875c-xh5h5"] Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.621886 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-745776875c-xh5h5" Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.633166 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7b4cbb95d6-p5lg5"] Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.635040 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7b4cbb95d6-p5lg5" Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.645903 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7b4cbb95d6-p5lg5"] Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.663758 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-745776875c-xh5h5"] Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.670902 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7c4bc4dbfc-r9gdr"] Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.672378 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7c4bc4dbfc-r9gdr" Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.692074 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7c4bc4dbfc-r9gdr"] Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.800726 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edf33c20-4fec-4155-8a2f-6f9dd59b12ab-combined-ca-bundle\") pod \"heat-engine-745776875c-xh5h5\" (UID: \"edf33c20-4fec-4155-8a2f-6f9dd59b12ab\") " pod="openstack/heat-engine-745776875c-xh5h5" Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.800780 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/edf33c20-4fec-4155-8a2f-6f9dd59b12ab-config-data-custom\") pod \"heat-engine-745776875c-xh5h5\" (UID: \"edf33c20-4fec-4155-8a2f-6f9dd59b12ab\") " pod="openstack/heat-engine-745776875c-xh5h5" Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.800797 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfc4a5ba-ae41-49da-a38f-3e2cb6afe124-config-data\") pod 
\"heat-cfnapi-7b4cbb95d6-p5lg5\" (UID: \"cfc4a5ba-ae41-49da-a38f-3e2cb6afe124\") " pod="openstack/heat-cfnapi-7b4cbb95d6-p5lg5" Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.801507 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cfc4a5ba-ae41-49da-a38f-3e2cb6afe124-config-data-custom\") pod \"heat-cfnapi-7b4cbb95d6-p5lg5\" (UID: \"cfc4a5ba-ae41-49da-a38f-3e2cb6afe124\") " pod="openstack/heat-cfnapi-7b4cbb95d6-p5lg5" Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.801887 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxxlj\" (UniqueName: \"kubernetes.io/projected/cfc4a5ba-ae41-49da-a38f-3e2cb6afe124-kube-api-access-rxxlj\") pod \"heat-cfnapi-7b4cbb95d6-p5lg5\" (UID: \"cfc4a5ba-ae41-49da-a38f-3e2cb6afe124\") " pod="openstack/heat-cfnapi-7b4cbb95d6-p5lg5" Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.801943 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfnl9\" (UniqueName: \"kubernetes.io/projected/edf33c20-4fec-4155-8a2f-6f9dd59b12ab-kube-api-access-xfnl9\") pod \"heat-engine-745776875c-xh5h5\" (UID: \"edf33c20-4fec-4155-8a2f-6f9dd59b12ab\") " pod="openstack/heat-engine-745776875c-xh5h5" Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.802018 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8stfr\" (UniqueName: \"kubernetes.io/projected/f98753e2-8d12-459d-a3ce-c9f2582225a5-kube-api-access-8stfr\") pod \"heat-api-7c4bc4dbfc-r9gdr\" (UID: \"f98753e2-8d12-459d-a3ce-c9f2582225a5\") " pod="openstack/heat-api-7c4bc4dbfc-r9gdr" Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.802079 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/f98753e2-8d12-459d-a3ce-c9f2582225a5-config-data-custom\") pod \"heat-api-7c4bc4dbfc-r9gdr\" (UID: \"f98753e2-8d12-459d-a3ce-c9f2582225a5\") " pod="openstack/heat-api-7c4bc4dbfc-r9gdr" Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.802114 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f98753e2-8d12-459d-a3ce-c9f2582225a5-config-data\") pod \"heat-api-7c4bc4dbfc-r9gdr\" (UID: \"f98753e2-8d12-459d-a3ce-c9f2582225a5\") " pod="openstack/heat-api-7c4bc4dbfc-r9gdr" Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.802159 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f98753e2-8d12-459d-a3ce-c9f2582225a5-combined-ca-bundle\") pod \"heat-api-7c4bc4dbfc-r9gdr\" (UID: \"f98753e2-8d12-459d-a3ce-c9f2582225a5\") " pod="openstack/heat-api-7c4bc4dbfc-r9gdr" Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.802249 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc4a5ba-ae41-49da-a38f-3e2cb6afe124-combined-ca-bundle\") pod \"heat-cfnapi-7b4cbb95d6-p5lg5\" (UID: \"cfc4a5ba-ae41-49da-a38f-3e2cb6afe124\") " pod="openstack/heat-cfnapi-7b4cbb95d6-p5lg5" Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.802348 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edf33c20-4fec-4155-8a2f-6f9dd59b12ab-config-data\") pod \"heat-engine-745776875c-xh5h5\" (UID: \"edf33c20-4fec-4155-8a2f-6f9dd59b12ab\") " pod="openstack/heat-engine-745776875c-xh5h5" Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.904012 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxxlj\" 
(UniqueName: \"kubernetes.io/projected/cfc4a5ba-ae41-49da-a38f-3e2cb6afe124-kube-api-access-rxxlj\") pod \"heat-cfnapi-7b4cbb95d6-p5lg5\" (UID: \"cfc4a5ba-ae41-49da-a38f-3e2cb6afe124\") " pod="openstack/heat-cfnapi-7b4cbb95d6-p5lg5" Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.904076 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfnl9\" (UniqueName: \"kubernetes.io/projected/edf33c20-4fec-4155-8a2f-6f9dd59b12ab-kube-api-access-xfnl9\") pod \"heat-engine-745776875c-xh5h5\" (UID: \"edf33c20-4fec-4155-8a2f-6f9dd59b12ab\") " pod="openstack/heat-engine-745776875c-xh5h5" Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.904551 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8stfr\" (UniqueName: \"kubernetes.io/projected/f98753e2-8d12-459d-a3ce-c9f2582225a5-kube-api-access-8stfr\") pod \"heat-api-7c4bc4dbfc-r9gdr\" (UID: \"f98753e2-8d12-459d-a3ce-c9f2582225a5\") " pod="openstack/heat-api-7c4bc4dbfc-r9gdr" Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.904617 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f98753e2-8d12-459d-a3ce-c9f2582225a5-config-data-custom\") pod \"heat-api-7c4bc4dbfc-r9gdr\" (UID: \"f98753e2-8d12-459d-a3ce-c9f2582225a5\") " pod="openstack/heat-api-7c4bc4dbfc-r9gdr" Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.904650 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f98753e2-8d12-459d-a3ce-c9f2582225a5-config-data\") pod \"heat-api-7c4bc4dbfc-r9gdr\" (UID: \"f98753e2-8d12-459d-a3ce-c9f2582225a5\") " pod="openstack/heat-api-7c4bc4dbfc-r9gdr" Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.904704 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f98753e2-8d12-459d-a3ce-c9f2582225a5-combined-ca-bundle\") pod \"heat-api-7c4bc4dbfc-r9gdr\" (UID: \"f98753e2-8d12-459d-a3ce-c9f2582225a5\") " pod="openstack/heat-api-7c4bc4dbfc-r9gdr" Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.904782 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc4a5ba-ae41-49da-a38f-3e2cb6afe124-combined-ca-bundle\") pod \"heat-cfnapi-7b4cbb95d6-p5lg5\" (UID: \"cfc4a5ba-ae41-49da-a38f-3e2cb6afe124\") " pod="openstack/heat-cfnapi-7b4cbb95d6-p5lg5" Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.904850 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edf33c20-4fec-4155-8a2f-6f9dd59b12ab-config-data\") pod \"heat-engine-745776875c-xh5h5\" (UID: \"edf33c20-4fec-4155-8a2f-6f9dd59b12ab\") " pod="openstack/heat-engine-745776875c-xh5h5" Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.904976 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edf33c20-4fec-4155-8a2f-6f9dd59b12ab-combined-ca-bundle\") pod \"heat-engine-745776875c-xh5h5\" (UID: \"edf33c20-4fec-4155-8a2f-6f9dd59b12ab\") " pod="openstack/heat-engine-745776875c-xh5h5" Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.905009 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfc4a5ba-ae41-49da-a38f-3e2cb6afe124-config-data\") pod \"heat-cfnapi-7b4cbb95d6-p5lg5\" (UID: \"cfc4a5ba-ae41-49da-a38f-3e2cb6afe124\") " pod="openstack/heat-cfnapi-7b4cbb95d6-p5lg5" Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.905045 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/edf33c20-4fec-4155-8a2f-6f9dd59b12ab-config-data-custom\") pod \"heat-engine-745776875c-xh5h5\" (UID: \"edf33c20-4fec-4155-8a2f-6f9dd59b12ab\") " pod="openstack/heat-engine-745776875c-xh5h5" Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.905089 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cfc4a5ba-ae41-49da-a38f-3e2cb6afe124-config-data-custom\") pod \"heat-cfnapi-7b4cbb95d6-p5lg5\" (UID: \"cfc4a5ba-ae41-49da-a38f-3e2cb6afe124\") " pod="openstack/heat-cfnapi-7b4cbb95d6-p5lg5" Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.911416 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f98753e2-8d12-459d-a3ce-c9f2582225a5-combined-ca-bundle\") pod \"heat-api-7c4bc4dbfc-r9gdr\" (UID: \"f98753e2-8d12-459d-a3ce-c9f2582225a5\") " pod="openstack/heat-api-7c4bc4dbfc-r9gdr" Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.912464 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/edf33c20-4fec-4155-8a2f-6f9dd59b12ab-config-data-custom\") pod \"heat-engine-745776875c-xh5h5\" (UID: \"edf33c20-4fec-4155-8a2f-6f9dd59b12ab\") " pod="openstack/heat-engine-745776875c-xh5h5" Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.912646 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f98753e2-8d12-459d-a3ce-c9f2582225a5-config-data-custom\") pod \"heat-api-7c4bc4dbfc-r9gdr\" (UID: \"f98753e2-8d12-459d-a3ce-c9f2582225a5\") " pod="openstack/heat-api-7c4bc4dbfc-r9gdr" Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.913193 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/edf33c20-4fec-4155-8a2f-6f9dd59b12ab-combined-ca-bundle\") pod \"heat-engine-745776875c-xh5h5\" (UID: \"edf33c20-4fec-4155-8a2f-6f9dd59b12ab\") " pod="openstack/heat-engine-745776875c-xh5h5" Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.917048 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfc4a5ba-ae41-49da-a38f-3e2cb6afe124-config-data\") pod \"heat-cfnapi-7b4cbb95d6-p5lg5\" (UID: \"cfc4a5ba-ae41-49da-a38f-3e2cb6afe124\") " pod="openstack/heat-cfnapi-7b4cbb95d6-p5lg5" Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.919042 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc4a5ba-ae41-49da-a38f-3e2cb6afe124-combined-ca-bundle\") pod \"heat-cfnapi-7b4cbb95d6-p5lg5\" (UID: \"cfc4a5ba-ae41-49da-a38f-3e2cb6afe124\") " pod="openstack/heat-cfnapi-7b4cbb95d6-p5lg5" Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.921337 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cfc4a5ba-ae41-49da-a38f-3e2cb6afe124-config-data-custom\") pod \"heat-cfnapi-7b4cbb95d6-p5lg5\" (UID: \"cfc4a5ba-ae41-49da-a38f-3e2cb6afe124\") " pod="openstack/heat-cfnapi-7b4cbb95d6-p5lg5" Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.928064 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxxlj\" (UniqueName: \"kubernetes.io/projected/cfc4a5ba-ae41-49da-a38f-3e2cb6afe124-kube-api-access-rxxlj\") pod \"heat-cfnapi-7b4cbb95d6-p5lg5\" (UID: \"cfc4a5ba-ae41-49da-a38f-3e2cb6afe124\") " pod="openstack/heat-cfnapi-7b4cbb95d6-p5lg5" Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.928518 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8stfr\" (UniqueName: 
\"kubernetes.io/projected/f98753e2-8d12-459d-a3ce-c9f2582225a5-kube-api-access-8stfr\") pod \"heat-api-7c4bc4dbfc-r9gdr\" (UID: \"f98753e2-8d12-459d-a3ce-c9f2582225a5\") " pod="openstack/heat-api-7c4bc4dbfc-r9gdr" Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.928668 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f98753e2-8d12-459d-a3ce-c9f2582225a5-config-data\") pod \"heat-api-7c4bc4dbfc-r9gdr\" (UID: \"f98753e2-8d12-459d-a3ce-c9f2582225a5\") " pod="openstack/heat-api-7c4bc4dbfc-r9gdr" Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.940882 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edf33c20-4fec-4155-8a2f-6f9dd59b12ab-config-data\") pod \"heat-engine-745776875c-xh5h5\" (UID: \"edf33c20-4fec-4155-8a2f-6f9dd59b12ab\") " pod="openstack/heat-engine-745776875c-xh5h5" Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.941067 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfnl9\" (UniqueName: \"kubernetes.io/projected/edf33c20-4fec-4155-8a2f-6f9dd59b12ab-kube-api-access-xfnl9\") pod \"heat-engine-745776875c-xh5h5\" (UID: \"edf33c20-4fec-4155-8a2f-6f9dd59b12ab\") " pod="openstack/heat-engine-745776875c-xh5h5" Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.945571 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-745776875c-xh5h5" Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.956281 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7b4cbb95d6-p5lg5" Jan 29 08:21:43 crc kubenswrapper[4826]: I0129 08:21:43.994108 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7c4bc4dbfc-r9gdr" Jan 29 08:21:44 crc kubenswrapper[4826]: I0129 08:21:44.360682 4826 generic.go:334] "Generic (PLEG): container finished" podID="6dea41b4-345f-4a40-94a3-b10ac12aa343" containerID="cab2edb2b84981011d8653c564ef43d1ca05299d2b889ad9a8dd6a1e4ace4dcd" exitCode=0 Jan 29 08:21:44 crc kubenswrapper[4826]: I0129 08:21:44.360966 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmq5z" event={"ID":"6dea41b4-345f-4a40-94a3-b10ac12aa343","Type":"ContainerDied","Data":"cab2edb2b84981011d8653c564ef43d1ca05299d2b889ad9a8dd6a1e4ace4dcd"} Jan 29 08:21:44 crc kubenswrapper[4826]: I0129 08:21:44.528170 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-745776875c-xh5h5"] Jan 29 08:21:44 crc kubenswrapper[4826]: W0129 08:21:44.533407 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedf33c20_4fec_4155_8a2f_6f9dd59b12ab.slice/crio-3f46b5b7dc5dc02b28be389165f618a56c7e77efcb9fac29bb7d1b2232e4eba5 WatchSource:0}: Error finding container 3f46b5b7dc5dc02b28be389165f618a56c7e77efcb9fac29bb7d1b2232e4eba5: Status 404 returned error can't find the container with id 3f46b5b7dc5dc02b28be389165f618a56c7e77efcb9fac29bb7d1b2232e4eba5 Jan 29 08:21:44 crc kubenswrapper[4826]: I0129 08:21:44.649711 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7b4cbb95d6-p5lg5"] Jan 29 08:21:44 crc kubenswrapper[4826]: I0129 08:21:44.660624 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7c4bc4dbfc-r9gdr"] Jan 29 08:21:44 crc kubenswrapper[4826]: W0129 08:21:44.668530 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcfc4a5ba_ae41_49da_a38f_3e2cb6afe124.slice/crio-22971a235719452f2ca12902a25998e2d4ebce372f567b0350d3c41786278955 
WatchSource:0}: Error finding container 22971a235719452f2ca12902a25998e2d4ebce372f567b0350d3c41786278955: Status 404 returned error can't find the container with id 22971a235719452f2ca12902a25998e2d4ebce372f567b0350d3c41786278955 Jan 29 08:21:44 crc kubenswrapper[4826]: W0129 08:21:44.669445 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf98753e2_8d12_459d_a3ce_c9f2582225a5.slice/crio-490b217f51379ce86998540d50951d024086970dacf6f32e23b0473e9aeaa899 WatchSource:0}: Error finding container 490b217f51379ce86998540d50951d024086970dacf6f32e23b0473e9aeaa899: Status 404 returned error can't find the container with id 490b217f51379ce86998540d50951d024086970dacf6f32e23b0473e9aeaa899 Jan 29 08:21:44 crc kubenswrapper[4826]: I0129 08:21:44.929838 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-74496849b9-d2xs7"] Jan 29 08:21:44 crc kubenswrapper[4826]: I0129 08:21:44.930311 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-74496849b9-d2xs7" podUID="0baeb655-f0ce-41c8-a318-ab09ef75c097" containerName="heat-api" containerID="cri-o://41b07b57079261f6d210e4ceae3845d246486f2cf359f120fc7f45e8ce2b4251" gracePeriod=60 Jan 29 08:21:44 crc kubenswrapper[4826]: I0129 08:21:44.942040 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-74496849b9-d2xs7" podUID="0baeb655-f0ce-41c8-a318-ab09ef75c097" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.1.109:8004/healthcheck\": EOF" Jan 29 08:21:44 crc kubenswrapper[4826]: I0129 08:21:44.947268 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-54948dbc74-zblvv"] Jan 29 08:21:44 crc kubenswrapper[4826]: I0129 08:21:44.947537 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-54948dbc74-zblvv" podUID="4824add4-25e9-4822-81fb-090d9ea91628" 
containerName="heat-cfnapi" containerID="cri-o://0597ad867550fd950c0ebf8966da743cc270d7a7ca88e69c23d836c8d3a87f63" gracePeriod=60 Jan 29 08:21:44 crc kubenswrapper[4826]: I0129 08:21:44.965748 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7f4c4fc79c-mthpn"] Jan 29 08:21:44 crc kubenswrapper[4826]: I0129 08:21:44.967076 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7f4c4fc79c-mthpn" Jan 29 08:21:44 crc kubenswrapper[4826]: I0129 08:21:44.975071 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-54948dbc74-zblvv" podUID="4824add4-25e9-4822-81fb-090d9ea91628" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.1.108:8000/healthcheck\": EOF" Jan 29 08:21:44 crc kubenswrapper[4826]: I0129 08:21:44.987699 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Jan 29 08:21:44 crc kubenswrapper[4826]: I0129 08:21:44.987883 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Jan 29 08:21:44 crc kubenswrapper[4826]: I0129 08:21:44.996508 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-5db8584866-xvf8s"] Jan 29 08:21:44 crc kubenswrapper[4826]: I0129 08:21:44.998378 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5db8584866-xvf8s" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.002722 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.003007 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.038211 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7f4c4fc79c-mthpn"] Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.064062 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cfd816e-cd1a-4072-9c0c-d25633a5bcf1-internal-tls-certs\") pod \"heat-api-7f4c4fc79c-mthpn\" (UID: \"0cfd816e-cd1a-4072-9c0c-d25633a5bcf1\") " pod="openstack/heat-api-7f4c4fc79c-mthpn" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.064161 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cfd816e-cd1a-4072-9c0c-d25633a5bcf1-config-data\") pod \"heat-api-7f4c4fc79c-mthpn\" (UID: \"0cfd816e-cd1a-4072-9c0c-d25633a5bcf1\") " pod="openstack/heat-api-7f4c4fc79c-mthpn" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.064213 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsk55\" (UniqueName: \"kubernetes.io/projected/0cfd816e-cd1a-4072-9c0c-d25633a5bcf1-kube-api-access-fsk55\") pod \"heat-api-7f4c4fc79c-mthpn\" (UID: \"0cfd816e-cd1a-4072-9c0c-d25633a5bcf1\") " pod="openstack/heat-api-7f4c4fc79c-mthpn" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.064279 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0cfd816e-cd1a-4072-9c0c-d25633a5bcf1-combined-ca-bundle\") pod \"heat-api-7f4c4fc79c-mthpn\" (UID: \"0cfd816e-cd1a-4072-9c0c-d25633a5bcf1\") " pod="openstack/heat-api-7f4c4fc79c-mthpn" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.064364 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cfd816e-cd1a-4072-9c0c-d25633a5bcf1-public-tls-certs\") pod \"heat-api-7f4c4fc79c-mthpn\" (UID: \"0cfd816e-cd1a-4072-9c0c-d25633a5bcf1\") " pod="openstack/heat-api-7f4c4fc79c-mthpn" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.064465 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0cfd816e-cd1a-4072-9c0c-d25633a5bcf1-config-data-custom\") pod \"heat-api-7f4c4fc79c-mthpn\" (UID: \"0cfd816e-cd1a-4072-9c0c-d25633a5bcf1\") " pod="openstack/heat-api-7f4c4fc79c-mthpn" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.080364 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5db8584866-xvf8s"] Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.166226 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqbdk\" (UniqueName: \"kubernetes.io/projected/8f9f671c-8f28-4bfa-b31b-05f06322cbb4-kube-api-access-bqbdk\") pod \"heat-cfnapi-5db8584866-xvf8s\" (UID: \"8f9f671c-8f28-4bfa-b31b-05f06322cbb4\") " pod="openstack/heat-cfnapi-5db8584866-xvf8s" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.166286 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f9f671c-8f28-4bfa-b31b-05f06322cbb4-internal-tls-certs\") pod \"heat-cfnapi-5db8584866-xvf8s\" (UID: \"8f9f671c-8f28-4bfa-b31b-05f06322cbb4\") " 
pod="openstack/heat-cfnapi-5db8584866-xvf8s" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.166400 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0cfd816e-cd1a-4072-9c0c-d25633a5bcf1-config-data-custom\") pod \"heat-api-7f4c4fc79c-mthpn\" (UID: \"0cfd816e-cd1a-4072-9c0c-d25633a5bcf1\") " pod="openstack/heat-api-7f4c4fc79c-mthpn" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.166426 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f9f671c-8f28-4bfa-b31b-05f06322cbb4-config-data\") pod \"heat-cfnapi-5db8584866-xvf8s\" (UID: \"8f9f671c-8f28-4bfa-b31b-05f06322cbb4\") " pod="openstack/heat-cfnapi-5db8584866-xvf8s" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.166453 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f9f671c-8f28-4bfa-b31b-05f06322cbb4-public-tls-certs\") pod \"heat-cfnapi-5db8584866-xvf8s\" (UID: \"8f9f671c-8f28-4bfa-b31b-05f06322cbb4\") " pod="openstack/heat-cfnapi-5db8584866-xvf8s" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.166559 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cfd816e-cd1a-4072-9c0c-d25633a5bcf1-internal-tls-certs\") pod \"heat-api-7f4c4fc79c-mthpn\" (UID: \"0cfd816e-cd1a-4072-9c0c-d25633a5bcf1\") " pod="openstack/heat-api-7f4c4fc79c-mthpn" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.166716 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cfd816e-cd1a-4072-9c0c-d25633a5bcf1-config-data\") pod \"heat-api-7f4c4fc79c-mthpn\" (UID: \"0cfd816e-cd1a-4072-9c0c-d25633a5bcf1\") " 
pod="openstack/heat-api-7f4c4fc79c-mthpn" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.166751 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f9f671c-8f28-4bfa-b31b-05f06322cbb4-config-data-custom\") pod \"heat-cfnapi-5db8584866-xvf8s\" (UID: \"8f9f671c-8f28-4bfa-b31b-05f06322cbb4\") " pod="openstack/heat-cfnapi-5db8584866-xvf8s" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.166833 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsk55\" (UniqueName: \"kubernetes.io/projected/0cfd816e-cd1a-4072-9c0c-d25633a5bcf1-kube-api-access-fsk55\") pod \"heat-api-7f4c4fc79c-mthpn\" (UID: \"0cfd816e-cd1a-4072-9c0c-d25633a5bcf1\") " pod="openstack/heat-api-7f4c4fc79c-mthpn" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.167056 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cfd816e-cd1a-4072-9c0c-d25633a5bcf1-combined-ca-bundle\") pod \"heat-api-7f4c4fc79c-mthpn\" (UID: \"0cfd816e-cd1a-4072-9c0c-d25633a5bcf1\") " pod="openstack/heat-api-7f4c4fc79c-mthpn" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.167086 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f9f671c-8f28-4bfa-b31b-05f06322cbb4-combined-ca-bundle\") pod \"heat-cfnapi-5db8584866-xvf8s\" (UID: \"8f9f671c-8f28-4bfa-b31b-05f06322cbb4\") " pod="openstack/heat-cfnapi-5db8584866-xvf8s" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.167228 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cfd816e-cd1a-4072-9c0c-d25633a5bcf1-public-tls-certs\") pod \"heat-api-7f4c4fc79c-mthpn\" (UID: \"0cfd816e-cd1a-4072-9c0c-d25633a5bcf1\") " 
pod="openstack/heat-api-7f4c4fc79c-mthpn" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.178158 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0cfd816e-cd1a-4072-9c0c-d25633a5bcf1-config-data-custom\") pod \"heat-api-7f4c4fc79c-mthpn\" (UID: \"0cfd816e-cd1a-4072-9c0c-d25633a5bcf1\") " pod="openstack/heat-api-7f4c4fc79c-mthpn" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.179829 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cfd816e-cd1a-4072-9c0c-d25633a5bcf1-internal-tls-certs\") pod \"heat-api-7f4c4fc79c-mthpn\" (UID: \"0cfd816e-cd1a-4072-9c0c-d25633a5bcf1\") " pod="openstack/heat-api-7f4c4fc79c-mthpn" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.180370 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cfd816e-cd1a-4072-9c0c-d25633a5bcf1-combined-ca-bundle\") pod \"heat-api-7f4c4fc79c-mthpn\" (UID: \"0cfd816e-cd1a-4072-9c0c-d25633a5bcf1\") " pod="openstack/heat-api-7f4c4fc79c-mthpn" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.189988 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cfd816e-cd1a-4072-9c0c-d25633a5bcf1-public-tls-certs\") pod \"heat-api-7f4c4fc79c-mthpn\" (UID: \"0cfd816e-cd1a-4072-9c0c-d25633a5bcf1\") " pod="openstack/heat-api-7f4c4fc79c-mthpn" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.195417 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cfd816e-cd1a-4072-9c0c-d25633a5bcf1-config-data\") pod \"heat-api-7f4c4fc79c-mthpn\" (UID: \"0cfd816e-cd1a-4072-9c0c-d25633a5bcf1\") " pod="openstack/heat-api-7f4c4fc79c-mthpn" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.199849 4826 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsk55\" (UniqueName: \"kubernetes.io/projected/0cfd816e-cd1a-4072-9c0c-d25633a5bcf1-kube-api-access-fsk55\") pod \"heat-api-7f4c4fc79c-mthpn\" (UID: \"0cfd816e-cd1a-4072-9c0c-d25633a5bcf1\") " pod="openstack/heat-api-7f4c4fc79c-mthpn" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.271540 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqbdk\" (UniqueName: \"kubernetes.io/projected/8f9f671c-8f28-4bfa-b31b-05f06322cbb4-kube-api-access-bqbdk\") pod \"heat-cfnapi-5db8584866-xvf8s\" (UID: \"8f9f671c-8f28-4bfa-b31b-05f06322cbb4\") " pod="openstack/heat-cfnapi-5db8584866-xvf8s" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.271607 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f9f671c-8f28-4bfa-b31b-05f06322cbb4-internal-tls-certs\") pod \"heat-cfnapi-5db8584866-xvf8s\" (UID: \"8f9f671c-8f28-4bfa-b31b-05f06322cbb4\") " pod="openstack/heat-cfnapi-5db8584866-xvf8s" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.271647 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f9f671c-8f28-4bfa-b31b-05f06322cbb4-config-data\") pod \"heat-cfnapi-5db8584866-xvf8s\" (UID: \"8f9f671c-8f28-4bfa-b31b-05f06322cbb4\") " pod="openstack/heat-cfnapi-5db8584866-xvf8s" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.271687 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f9f671c-8f28-4bfa-b31b-05f06322cbb4-public-tls-certs\") pod \"heat-cfnapi-5db8584866-xvf8s\" (UID: \"8f9f671c-8f28-4bfa-b31b-05f06322cbb4\") " pod="openstack/heat-cfnapi-5db8584866-xvf8s" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.271739 4826 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f9f671c-8f28-4bfa-b31b-05f06322cbb4-config-data-custom\") pod \"heat-cfnapi-5db8584866-xvf8s\" (UID: \"8f9f671c-8f28-4bfa-b31b-05f06322cbb4\") " pod="openstack/heat-cfnapi-5db8584866-xvf8s" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.271824 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f9f671c-8f28-4bfa-b31b-05f06322cbb4-combined-ca-bundle\") pod \"heat-cfnapi-5db8584866-xvf8s\" (UID: \"8f9f671c-8f28-4bfa-b31b-05f06322cbb4\") " pod="openstack/heat-cfnapi-5db8584866-xvf8s" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.275950 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f9f671c-8f28-4bfa-b31b-05f06322cbb4-combined-ca-bundle\") pod \"heat-cfnapi-5db8584866-xvf8s\" (UID: \"8f9f671c-8f28-4bfa-b31b-05f06322cbb4\") " pod="openstack/heat-cfnapi-5db8584866-xvf8s" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.286159 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f9f671c-8f28-4bfa-b31b-05f06322cbb4-config-data-custom\") pod \"heat-cfnapi-5db8584866-xvf8s\" (UID: \"8f9f671c-8f28-4bfa-b31b-05f06322cbb4\") " pod="openstack/heat-cfnapi-5db8584866-xvf8s" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.290974 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f9f671c-8f28-4bfa-b31b-05f06322cbb4-public-tls-certs\") pod \"heat-cfnapi-5db8584866-xvf8s\" (UID: \"8f9f671c-8f28-4bfa-b31b-05f06322cbb4\") " pod="openstack/heat-cfnapi-5db8584866-xvf8s" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.291288 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/8f9f671c-8f28-4bfa-b31b-05f06322cbb4-config-data\") pod \"heat-cfnapi-5db8584866-xvf8s\" (UID: \"8f9f671c-8f28-4bfa-b31b-05f06322cbb4\") " pod="openstack/heat-cfnapi-5db8584866-xvf8s" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.295484 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f9f671c-8f28-4bfa-b31b-05f06322cbb4-internal-tls-certs\") pod \"heat-cfnapi-5db8584866-xvf8s\" (UID: \"8f9f671c-8f28-4bfa-b31b-05f06322cbb4\") " pod="openstack/heat-cfnapi-5db8584866-xvf8s" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.309944 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqbdk\" (UniqueName: \"kubernetes.io/projected/8f9f671c-8f28-4bfa-b31b-05f06322cbb4-kube-api-access-bqbdk\") pod \"heat-cfnapi-5db8584866-xvf8s\" (UID: \"8f9f671c-8f28-4bfa-b31b-05f06322cbb4\") " pod="openstack/heat-cfnapi-5db8584866-xvf8s" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.358704 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7f4c4fc79c-mthpn" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.377559 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5db8584866-xvf8s" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.414666 4826 generic.go:334] "Generic (PLEG): container finished" podID="f98753e2-8d12-459d-a3ce-c9f2582225a5" containerID="5bdf90f1cf90373253bc44ba99810477b3215e73fd71ee15c296ab52afa7e0cf" exitCode=1 Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.414744 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7c4bc4dbfc-r9gdr" event={"ID":"f98753e2-8d12-459d-a3ce-c9f2582225a5","Type":"ContainerDied","Data":"5bdf90f1cf90373253bc44ba99810477b3215e73fd71ee15c296ab52afa7e0cf"} Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.414775 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7c4bc4dbfc-r9gdr" event={"ID":"f98753e2-8d12-459d-a3ce-c9f2582225a5","Type":"ContainerStarted","Data":"490b217f51379ce86998540d50951d024086970dacf6f32e23b0473e9aeaa899"} Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.415346 4826 scope.go:117] "RemoveContainer" containerID="5bdf90f1cf90373253bc44ba99810477b3215e73fd71ee15c296ab52afa7e0cf" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.425687 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-745776875c-xh5h5" event={"ID":"edf33c20-4fec-4155-8a2f-6f9dd59b12ab","Type":"ContainerStarted","Data":"b34658e3c69bb9ccbfa843a6f69b5ef3c9bb3f3bfbf5405b3f6d95d7b71dd569"} Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.425731 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-745776875c-xh5h5" event={"ID":"edf33c20-4fec-4155-8a2f-6f9dd59b12ab","Type":"ContainerStarted","Data":"3f46b5b7dc5dc02b28be389165f618a56c7e77efcb9fac29bb7d1b2232e4eba5"} Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.426218 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-745776875c-xh5h5" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.431704 
4826 generic.go:334] "Generic (PLEG): container finished" podID="cfc4a5ba-ae41-49da-a38f-3e2cb6afe124" containerID="0c4410bf0b8629f71aff2bc972ec16d29b7d82693f9265d004e189e995006011" exitCode=1 Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.431750 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b4cbb95d6-p5lg5" event={"ID":"cfc4a5ba-ae41-49da-a38f-3e2cb6afe124","Type":"ContainerDied","Data":"0c4410bf0b8629f71aff2bc972ec16d29b7d82693f9265d004e189e995006011"} Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.431769 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b4cbb95d6-p5lg5" event={"ID":"cfc4a5ba-ae41-49da-a38f-3e2cb6afe124","Type":"ContainerStarted","Data":"22971a235719452f2ca12902a25998e2d4ebce372f567b0350d3c41786278955"} Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.432375 4826 scope.go:117] "RemoveContainer" containerID="0c4410bf0b8629f71aff2bc972ec16d29b7d82693f9265d004e189e995006011" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.450710 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmq5z" event={"ID":"6dea41b4-345f-4a40-94a3-b10ac12aa343","Type":"ContainerStarted","Data":"fc09b54557f446d9ecd229bb6e2f245098ab11e55d277076b00a87345d448cad"} Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.570253 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-745776875c-xh5h5" podStartSLOduration=2.570234657 podStartE2EDuration="2.570234657s" podCreationTimestamp="2026-01-29 08:21:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:21:45.480585576 +0000 UTC m=+5889.342378635" watchObservedRunningTime="2026-01-29 08:21:45.570234657 +0000 UTC m=+5889.432027726" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.572385 4826 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/community-operators-vmq5z" podStartSLOduration=3.031233457 podStartE2EDuration="5.572377923s" podCreationTimestamp="2026-01-29 08:21:40 +0000 UTC" firstStartedPulling="2026-01-29 08:21:42.287885924 +0000 UTC m=+5886.149678983" lastFinishedPulling="2026-01-29 08:21:44.82903039 +0000 UTC m=+5888.690823449" observedRunningTime="2026-01-29 08:21:45.523498796 +0000 UTC m=+5889.385291865" watchObservedRunningTime="2026-01-29 08:21:45.572377923 +0000 UTC m=+5889.434170992" Jan 29 08:21:45 crc kubenswrapper[4826]: I0129 08:21:45.982638 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7f4c4fc79c-mthpn"] Jan 29 08:21:46 crc kubenswrapper[4826]: I0129 08:21:46.161861 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5db8584866-xvf8s"] Jan 29 08:21:46 crc kubenswrapper[4826]: I0129 08:21:46.464429 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7c4bc4dbfc-r9gdr" event={"ID":"f98753e2-8d12-459d-a3ce-c9f2582225a5","Type":"ContainerStarted","Data":"4b88d75a76d988458772a0e572fcc0f1841cc1c1424bce3a7af61a8d8ac206fe"} Jan 29 08:21:46 crc kubenswrapper[4826]: I0129 08:21:46.464743 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7c4bc4dbfc-r9gdr" Jan 29 08:21:46 crc kubenswrapper[4826]: I0129 08:21:46.466934 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7f4c4fc79c-mthpn" event={"ID":"0cfd816e-cd1a-4072-9c0c-d25633a5bcf1","Type":"ContainerStarted","Data":"0460752aa2243b3345dc568c7e5c7f43bfc51654c4b8beb2f0709454e3ff7611"} Jan 29 08:21:46 crc kubenswrapper[4826]: I0129 08:21:46.468223 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5db8584866-xvf8s" event={"ID":"8f9f671c-8f28-4bfa-b31b-05f06322cbb4","Type":"ContainerStarted","Data":"d1cf837582b47a1c99a72f2cfed633fe218ed9e5464e54bfdbe71e75f574ef90"} Jan 29 08:21:46 crc 
kubenswrapper[4826]: I0129 08:21:46.470834 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b4cbb95d6-p5lg5" event={"ID":"cfc4a5ba-ae41-49da-a38f-3e2cb6afe124","Type":"ContainerStarted","Data":"79433b422dfa7a20c73c7f222c07e76f1f834132f2c484a5ecc4822e73e17e44"} Jan 29 08:21:46 crc kubenswrapper[4826]: I0129 08:21:46.471448 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7b4cbb95d6-p5lg5" Jan 29 08:21:46 crc kubenswrapper[4826]: I0129 08:21:46.490616 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-7c4bc4dbfc-r9gdr" podStartSLOduration=3.490599693 podStartE2EDuration="3.490599693s" podCreationTimestamp="2026-01-29 08:21:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:21:46.483934217 +0000 UTC m=+5890.345727286" watchObservedRunningTime="2026-01-29 08:21:46.490599693 +0000 UTC m=+5890.352392762" Jan 29 08:21:46 crc kubenswrapper[4826]: I0129 08:21:46.522876 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7b4cbb95d6-p5lg5" podStartSLOduration=3.522853882 podStartE2EDuration="3.522853882s" podCreationTimestamp="2026-01-29 08:21:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:21:46.515289753 +0000 UTC m=+5890.377082822" watchObservedRunningTime="2026-01-29 08:21:46.522853882 +0000 UTC m=+5890.384646951" Jan 29 08:21:46 crc kubenswrapper[4826]: I0129 08:21:46.816029 4826 scope.go:117] "RemoveContainer" containerID="952b43d896912a3cda5e612a6c0f46d793202eff6352f27fa018ee360258c570" Jan 29 08:21:46 crc kubenswrapper[4826]: E0129 08:21:46.816238 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:21:47 crc kubenswrapper[4826]: I0129 08:21:47.482363 4826 generic.go:334] "Generic (PLEG): container finished" podID="f98753e2-8d12-459d-a3ce-c9f2582225a5" containerID="4b88d75a76d988458772a0e572fcc0f1841cc1c1424bce3a7af61a8d8ac206fe" exitCode=1 Jan 29 08:21:47 crc kubenswrapper[4826]: I0129 08:21:47.482478 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7c4bc4dbfc-r9gdr" event={"ID":"f98753e2-8d12-459d-a3ce-c9f2582225a5","Type":"ContainerDied","Data":"4b88d75a76d988458772a0e572fcc0f1841cc1c1424bce3a7af61a8d8ac206fe"} Jan 29 08:21:47 crc kubenswrapper[4826]: I0129 08:21:47.482787 4826 scope.go:117] "RemoveContainer" containerID="5bdf90f1cf90373253bc44ba99810477b3215e73fd71ee15c296ab52afa7e0cf" Jan 29 08:21:47 crc kubenswrapper[4826]: I0129 08:21:47.483151 4826 scope.go:117] "RemoveContainer" containerID="4b88d75a76d988458772a0e572fcc0f1841cc1c1424bce3a7af61a8d8ac206fe" Jan 29 08:21:47 crc kubenswrapper[4826]: E0129 08:21:47.483511 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-7c4bc4dbfc-r9gdr_openstack(f98753e2-8d12-459d-a3ce-c9f2582225a5)\"" pod="openstack/heat-api-7c4bc4dbfc-r9gdr" podUID="f98753e2-8d12-459d-a3ce-c9f2582225a5" Jan 29 08:21:47 crc kubenswrapper[4826]: I0129 08:21:47.484813 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7f4c4fc79c-mthpn" event={"ID":"0cfd816e-cd1a-4072-9c0c-d25633a5bcf1","Type":"ContainerStarted","Data":"120910e71f9ff649acd338323976ae00c5e72248c16b9ced5428343823b7c8bc"} Jan 29 08:21:47 crc kubenswrapper[4826]: I0129 08:21:47.485544 
4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7f4c4fc79c-mthpn" Jan 29 08:21:47 crc kubenswrapper[4826]: I0129 08:21:47.511836 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5db8584866-xvf8s" event={"ID":"8f9f671c-8f28-4bfa-b31b-05f06322cbb4","Type":"ContainerStarted","Data":"f78ed810bd2c368227a30352574441ff7dc84b6963c0cfa7d9516a263658feb3"} Jan 29 08:21:47 crc kubenswrapper[4826]: I0129 08:21:47.512046 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-5db8584866-xvf8s" Jan 29 08:21:47 crc kubenswrapper[4826]: I0129 08:21:47.532068 4826 generic.go:334] "Generic (PLEG): container finished" podID="cfc4a5ba-ae41-49da-a38f-3e2cb6afe124" containerID="79433b422dfa7a20c73c7f222c07e76f1f834132f2c484a5ecc4822e73e17e44" exitCode=1 Jan 29 08:21:47 crc kubenswrapper[4826]: I0129 08:21:47.532201 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b4cbb95d6-p5lg5" event={"ID":"cfc4a5ba-ae41-49da-a38f-3e2cb6afe124","Type":"ContainerDied","Data":"79433b422dfa7a20c73c7f222c07e76f1f834132f2c484a5ecc4822e73e17e44"} Jan 29 08:21:47 crc kubenswrapper[4826]: I0129 08:21:47.532894 4826 scope.go:117] "RemoveContainer" containerID="79433b422dfa7a20c73c7f222c07e76f1f834132f2c484a5ecc4822e73e17e44" Jan 29 08:21:47 crc kubenswrapper[4826]: E0129 08:21:47.533196 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7b4cbb95d6-p5lg5_openstack(cfc4a5ba-ae41-49da-a38f-3e2cb6afe124)\"" pod="openstack/heat-cfnapi-7b4cbb95d6-p5lg5" podUID="cfc4a5ba-ae41-49da-a38f-3e2cb6afe124" Jan 29 08:21:47 crc kubenswrapper[4826]: I0129 08:21:47.546639 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-5db8584866-xvf8s" podStartSLOduration=3.546624891 
podStartE2EDuration="3.546624891s" podCreationTimestamp="2026-01-29 08:21:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:21:47.54163631 +0000 UTC m=+5891.403429379" watchObservedRunningTime="2026-01-29 08:21:47.546624891 +0000 UTC m=+5891.408417950" Jan 29 08:21:47 crc kubenswrapper[4826]: I0129 08:21:47.548840 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-7f4c4fc79c-mthpn" podStartSLOduration=3.548835289 podStartE2EDuration="3.548835289s" podCreationTimestamp="2026-01-29 08:21:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:21:47.524641392 +0000 UTC m=+5891.386434501" watchObservedRunningTime="2026-01-29 08:21:47.548835289 +0000 UTC m=+5891.410628358" Jan 29 08:21:47 crc kubenswrapper[4826]: I0129 08:21:47.565279 4826 scope.go:117] "RemoveContainer" containerID="0c4410bf0b8629f71aff2bc972ec16d29b7d82693f9265d004e189e995006011" Jan 29 08:21:48 crc kubenswrapper[4826]: I0129 08:21:48.541361 4826 scope.go:117] "RemoveContainer" containerID="79433b422dfa7a20c73c7f222c07e76f1f834132f2c484a5ecc4822e73e17e44" Jan 29 08:21:48 crc kubenswrapper[4826]: E0129 08:21:48.541928 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7b4cbb95d6-p5lg5_openstack(cfc4a5ba-ae41-49da-a38f-3e2cb6afe124)\"" pod="openstack/heat-cfnapi-7b4cbb95d6-p5lg5" podUID="cfc4a5ba-ae41-49da-a38f-3e2cb6afe124" Jan 29 08:21:48 crc kubenswrapper[4826]: I0129 08:21:48.543065 4826 scope.go:117] "RemoveContainer" containerID="4b88d75a76d988458772a0e572fcc0f1841cc1c1424bce3a7af61a8d8ac206fe" Jan 29 08:21:48 crc kubenswrapper[4826]: E0129 08:21:48.543557 4826 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-7c4bc4dbfc-r9gdr_openstack(f98753e2-8d12-459d-a3ce-c9f2582225a5)\"" pod="openstack/heat-api-7c4bc4dbfc-r9gdr" podUID="f98753e2-8d12-459d-a3ce-c9f2582225a5" Jan 29 08:21:48 crc kubenswrapper[4826]: I0129 08:21:48.957325 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-7b4cbb95d6-p5lg5" Jan 29 08:21:48 crc kubenswrapper[4826]: I0129 08:21:48.995041 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-7c4bc4dbfc-r9gdr" Jan 29 08:21:49 crc kubenswrapper[4826]: I0129 08:21:49.551908 4826 scope.go:117] "RemoveContainer" containerID="79433b422dfa7a20c73c7f222c07e76f1f834132f2c484a5ecc4822e73e17e44" Jan 29 08:21:49 crc kubenswrapper[4826]: I0129 08:21:49.552036 4826 scope.go:117] "RemoveContainer" containerID="4b88d75a76d988458772a0e572fcc0f1841cc1c1424bce3a7af61a8d8ac206fe" Jan 29 08:21:49 crc kubenswrapper[4826]: E0129 08:21:49.552234 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7b4cbb95d6-p5lg5_openstack(cfc4a5ba-ae41-49da-a38f-3e2cb6afe124)\"" pod="openstack/heat-cfnapi-7b4cbb95d6-p5lg5" podUID="cfc4a5ba-ae41-49da-a38f-3e2cb6afe124" Jan 29 08:21:49 crc kubenswrapper[4826]: E0129 08:21:49.552275 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-7c4bc4dbfc-r9gdr_openstack(f98753e2-8d12-459d-a3ce-c9f2582225a5)\"" pod="openstack/heat-api-7c4bc4dbfc-r9gdr" podUID="f98753e2-8d12-459d-a3ce-c9f2582225a5" Jan 29 08:21:49 crc kubenswrapper[4826]: I0129 08:21:49.655037 4826 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/horizon-58dcf5df6-kbrdh" podUID="c1e011d3-9762-44e1-8137-2432569a561e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.100:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.100:8443: connect: connection refused" Jan 29 08:21:50 crc kubenswrapper[4826]: I0129 08:21:50.408372 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-54948dbc74-zblvv" podUID="4824add4-25e9-4822-81fb-090d9ea91628" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.1.108:8000/healthcheck\": read tcp 10.217.0.2:50584->10.217.1.108:8000: read: connection reset by peer" Jan 29 08:21:50 crc kubenswrapper[4826]: I0129 08:21:50.431518 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-74496849b9-d2xs7" podUID="0baeb655-f0ce-41c8-a318-ab09ef75c097" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.1.109:8004/healthcheck\": read tcp 10.217.0.2:40550->10.217.1.109:8004: read: connection reset by peer" Jan 29 08:21:50 crc kubenswrapper[4826]: I0129 08:21:50.566260 4826 generic.go:334] "Generic (PLEG): container finished" podID="0baeb655-f0ce-41c8-a318-ab09ef75c097" containerID="41b07b57079261f6d210e4ceae3845d246486f2cf359f120fc7f45e8ce2b4251" exitCode=0 Jan 29 08:21:50 crc kubenswrapper[4826]: I0129 08:21:50.566405 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-74496849b9-d2xs7" event={"ID":"0baeb655-f0ce-41c8-a318-ab09ef75c097","Type":"ContainerDied","Data":"41b07b57079261f6d210e4ceae3845d246486f2cf359f120fc7f45e8ce2b4251"} Jan 29 08:21:50 crc kubenswrapper[4826]: I0129 08:21:50.568831 4826 generic.go:334] "Generic (PLEG): container finished" podID="4824add4-25e9-4822-81fb-090d9ea91628" containerID="0597ad867550fd950c0ebf8966da743cc270d7a7ca88e69c23d836c8d3a87f63" exitCode=0 Jan 29 08:21:50 crc kubenswrapper[4826]: I0129 08:21:50.568860 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/heat-cfnapi-54948dbc74-zblvv" event={"ID":"4824add4-25e9-4822-81fb-090d9ea91628","Type":"ContainerDied","Data":"0597ad867550fd950c0ebf8966da743cc270d7a7ca88e69c23d836c8d3a87f63"} Jan 29 08:21:50 crc kubenswrapper[4826]: I0129 08:21:50.943671 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vmq5z" Jan 29 08:21:50 crc kubenswrapper[4826]: I0129 08:21:50.943834 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vmq5z" Jan 29 08:21:51 crc kubenswrapper[4826]: I0129 08:21:51.010730 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vmq5z" Jan 29 08:21:51 crc kubenswrapper[4826]: I0129 08:21:51.058131 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-54948dbc74-zblvv" Jan 29 08:21:51 crc kubenswrapper[4826]: I0129 08:21:51.066232 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-74496849b9-d2xs7" Jan 29 08:21:51 crc kubenswrapper[4826]: I0129 08:21:51.222986 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kfmw\" (UniqueName: \"kubernetes.io/projected/4824add4-25e9-4822-81fb-090d9ea91628-kube-api-access-2kfmw\") pod \"4824add4-25e9-4822-81fb-090d9ea91628\" (UID: \"4824add4-25e9-4822-81fb-090d9ea91628\") " Jan 29 08:21:51 crc kubenswrapper[4826]: I0129 08:21:51.223365 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0baeb655-f0ce-41c8-a318-ab09ef75c097-config-data\") pod \"0baeb655-f0ce-41c8-a318-ab09ef75c097\" (UID: \"0baeb655-f0ce-41c8-a318-ab09ef75c097\") " Jan 29 08:21:51 crc kubenswrapper[4826]: I0129 08:21:51.223546 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4824add4-25e9-4822-81fb-090d9ea91628-config-data\") pod \"4824add4-25e9-4822-81fb-090d9ea91628\" (UID: \"4824add4-25e9-4822-81fb-090d9ea91628\") " Jan 29 08:21:51 crc kubenswrapper[4826]: I0129 08:21:51.223683 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0baeb655-f0ce-41c8-a318-ab09ef75c097-combined-ca-bundle\") pod \"0baeb655-f0ce-41c8-a318-ab09ef75c097\" (UID: \"0baeb655-f0ce-41c8-a318-ab09ef75c097\") " Jan 29 08:21:51 crc kubenswrapper[4826]: I0129 08:21:51.223823 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4824add4-25e9-4822-81fb-090d9ea91628-combined-ca-bundle\") pod \"4824add4-25e9-4822-81fb-090d9ea91628\" (UID: \"4824add4-25e9-4822-81fb-090d9ea91628\") " Jan 29 08:21:51 crc kubenswrapper[4826]: I0129 08:21:51.223969 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0baeb655-f0ce-41c8-a318-ab09ef75c097-config-data-custom\") pod \"0baeb655-f0ce-41c8-a318-ab09ef75c097\" (UID: \"0baeb655-f0ce-41c8-a318-ab09ef75c097\") " Jan 29 08:21:51 crc kubenswrapper[4826]: I0129 08:21:51.224162 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4824add4-25e9-4822-81fb-090d9ea91628-config-data-custom\") pod \"4824add4-25e9-4822-81fb-090d9ea91628\" (UID: \"4824add4-25e9-4822-81fb-090d9ea91628\") " Jan 29 08:21:51 crc kubenswrapper[4826]: I0129 08:21:51.224287 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwgwk\" (UniqueName: \"kubernetes.io/projected/0baeb655-f0ce-41c8-a318-ab09ef75c097-kube-api-access-vwgwk\") pod \"0baeb655-f0ce-41c8-a318-ab09ef75c097\" (UID: \"0baeb655-f0ce-41c8-a318-ab09ef75c097\") " Jan 29 08:21:51 crc kubenswrapper[4826]: I0129 08:21:51.228974 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0baeb655-f0ce-41c8-a318-ab09ef75c097-kube-api-access-vwgwk" (OuterVolumeSpecName: "kube-api-access-vwgwk") pod "0baeb655-f0ce-41c8-a318-ab09ef75c097" (UID: "0baeb655-f0ce-41c8-a318-ab09ef75c097"). InnerVolumeSpecName "kube-api-access-vwgwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:21:51 crc kubenswrapper[4826]: I0129 08:21:51.230612 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0baeb655-f0ce-41c8-a318-ab09ef75c097-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0baeb655-f0ce-41c8-a318-ab09ef75c097" (UID: "0baeb655-f0ce-41c8-a318-ab09ef75c097"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:21:51 crc kubenswrapper[4826]: I0129 08:21:51.230702 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4824add4-25e9-4822-81fb-090d9ea91628-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4824add4-25e9-4822-81fb-090d9ea91628" (UID: "4824add4-25e9-4822-81fb-090d9ea91628"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:21:51 crc kubenswrapper[4826]: I0129 08:21:51.231846 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4824add4-25e9-4822-81fb-090d9ea91628-kube-api-access-2kfmw" (OuterVolumeSpecName: "kube-api-access-2kfmw") pod "4824add4-25e9-4822-81fb-090d9ea91628" (UID: "4824add4-25e9-4822-81fb-090d9ea91628"). InnerVolumeSpecName "kube-api-access-2kfmw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:21:51 crc kubenswrapper[4826]: I0129 08:21:51.275953 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0baeb655-f0ce-41c8-a318-ab09ef75c097-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0baeb655-f0ce-41c8-a318-ab09ef75c097" (UID: "0baeb655-f0ce-41c8-a318-ab09ef75c097"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:21:51 crc kubenswrapper[4826]: I0129 08:21:51.277621 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0baeb655-f0ce-41c8-a318-ab09ef75c097-config-data" (OuterVolumeSpecName: "config-data") pod "0baeb655-f0ce-41c8-a318-ab09ef75c097" (UID: "0baeb655-f0ce-41c8-a318-ab09ef75c097"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:21:51 crc kubenswrapper[4826]: I0129 08:21:51.279862 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4824add4-25e9-4822-81fb-090d9ea91628-config-data" (OuterVolumeSpecName: "config-data") pod "4824add4-25e9-4822-81fb-090d9ea91628" (UID: "4824add4-25e9-4822-81fb-090d9ea91628"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:21:51 crc kubenswrapper[4826]: I0129 08:21:51.287238 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4824add4-25e9-4822-81fb-090d9ea91628-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4824add4-25e9-4822-81fb-090d9ea91628" (UID: "4824add4-25e9-4822-81fb-090d9ea91628"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:21:51 crc kubenswrapper[4826]: I0129 08:21:51.326973 4826 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0baeb655-f0ce-41c8-a318-ab09ef75c097-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 29 08:21:51 crc kubenswrapper[4826]: I0129 08:21:51.327013 4826 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4824add4-25e9-4822-81fb-090d9ea91628-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 29 08:21:51 crc kubenswrapper[4826]: I0129 08:21:51.327026 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwgwk\" (UniqueName: \"kubernetes.io/projected/0baeb655-f0ce-41c8-a318-ab09ef75c097-kube-api-access-vwgwk\") on node \"crc\" DevicePath \"\""
Jan 29 08:21:51 crc kubenswrapper[4826]: I0129 08:21:51.327060 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kfmw\" (UniqueName: \"kubernetes.io/projected/4824add4-25e9-4822-81fb-090d9ea91628-kube-api-access-2kfmw\") on node \"crc\" DevicePath \"\""
Jan 29 08:21:51 crc kubenswrapper[4826]: I0129 08:21:51.327074 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0baeb655-f0ce-41c8-a318-ab09ef75c097-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 08:21:51 crc kubenswrapper[4826]: I0129 08:21:51.327085 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4824add4-25e9-4822-81fb-090d9ea91628-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 08:21:51 crc kubenswrapper[4826]: I0129 08:21:51.327096 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0baeb655-f0ce-41c8-a318-ab09ef75c097-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 08:21:51 crc kubenswrapper[4826]: I0129 08:21:51.327107 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4824add4-25e9-4822-81fb-090d9ea91628-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 08:21:51 crc kubenswrapper[4826]: I0129 08:21:51.577795 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-74496849b9-d2xs7" event={"ID":"0baeb655-f0ce-41c8-a318-ab09ef75c097","Type":"ContainerDied","Data":"00b1013f0dca57516bbbbe82400212f2171cfcc8e2fb649e4a0d7e204ea17f5e"}
Jan 29 08:21:51 crc kubenswrapper[4826]: I0129 08:21:51.577882 4826 scope.go:117] "RemoveContainer" containerID="41b07b57079261f6d210e4ceae3845d246486f2cf359f120fc7f45e8ce2b4251"
Jan 29 08:21:51 crc kubenswrapper[4826]: I0129 08:21:51.577872 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-74496849b9-d2xs7"
Jan 29 08:21:51 crc kubenswrapper[4826]: I0129 08:21:51.580524 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-54948dbc74-zblvv" event={"ID":"4824add4-25e9-4822-81fb-090d9ea91628","Type":"ContainerDied","Data":"ea8c005cc9aa2d1721f9eec6a54bb92d88eeee295e2d696767ed759271684082"}
Jan 29 08:21:51 crc kubenswrapper[4826]: I0129 08:21:51.580526 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-54948dbc74-zblvv"
Jan 29 08:21:51 crc kubenswrapper[4826]: I0129 08:21:51.600601 4826 scope.go:117] "RemoveContainer" containerID="0597ad867550fd950c0ebf8966da743cc270d7a7ca88e69c23d836c8d3a87f63"
Jan 29 08:21:51 crc kubenswrapper[4826]: I0129 08:21:51.637364 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-54948dbc74-zblvv"]
Jan 29 08:21:51 crc kubenswrapper[4826]: I0129 08:21:51.637954 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vmq5z"
Jan 29 08:21:51 crc kubenswrapper[4826]: I0129 08:21:51.651212 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-54948dbc74-zblvv"]
Jan 29 08:21:51 crc kubenswrapper[4826]: I0129 08:21:51.661343 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-74496849b9-d2xs7"]
Jan 29 08:21:51 crc kubenswrapper[4826]: I0129 08:21:51.672692 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-74496849b9-d2xs7"]
Jan 29 08:21:51 crc kubenswrapper[4826]: I0129 08:21:51.691539 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vmq5z"]
Jan 29 08:21:52 crc kubenswrapper[4826]: I0129 08:21:52.831193 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0baeb655-f0ce-41c8-a318-ab09ef75c097" path="/var/lib/kubelet/pods/0baeb655-f0ce-41c8-a318-ab09ef75c097/volumes"
Jan 29 08:21:52 crc kubenswrapper[4826]: I0129 08:21:52.832370 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4824add4-25e9-4822-81fb-090d9ea91628" path="/var/lib/kubelet/pods/4824add4-25e9-4822-81fb-090d9ea91628/volumes"
Jan 29 08:21:53 crc kubenswrapper[4826]: I0129 08:21:53.605125 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vmq5z" podUID="6dea41b4-345f-4a40-94a3-b10ac12aa343" containerName="registry-server" containerID="cri-o://fc09b54557f446d9ecd229bb6e2f245098ab11e55d277076b00a87345d448cad" gracePeriod=2
Jan 29 08:21:54 crc kubenswrapper[4826]: I0129 08:21:54.144857 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vmq5z"
Jan 29 08:21:54 crc kubenswrapper[4826]: I0129 08:21:54.290140 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dea41b4-345f-4a40-94a3-b10ac12aa343-catalog-content\") pod \"6dea41b4-345f-4a40-94a3-b10ac12aa343\" (UID: \"6dea41b4-345f-4a40-94a3-b10ac12aa343\") "
Jan 29 08:21:54 crc kubenswrapper[4826]: I0129 08:21:54.290240 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dea41b4-345f-4a40-94a3-b10ac12aa343-utilities\") pod \"6dea41b4-345f-4a40-94a3-b10ac12aa343\" (UID: \"6dea41b4-345f-4a40-94a3-b10ac12aa343\") "
Jan 29 08:21:54 crc kubenswrapper[4826]: I0129 08:21:54.290321 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5b6l\" (UniqueName: \"kubernetes.io/projected/6dea41b4-345f-4a40-94a3-b10ac12aa343-kube-api-access-h5b6l\") pod \"6dea41b4-345f-4a40-94a3-b10ac12aa343\" (UID: \"6dea41b4-345f-4a40-94a3-b10ac12aa343\") "
Jan 29 08:21:54 crc kubenswrapper[4826]: I0129 08:21:54.291375 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dea41b4-345f-4a40-94a3-b10ac12aa343-utilities" (OuterVolumeSpecName: "utilities") pod "6dea41b4-345f-4a40-94a3-b10ac12aa343" (UID: "6dea41b4-345f-4a40-94a3-b10ac12aa343"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 08:21:54 crc kubenswrapper[4826]: I0129 08:21:54.316707 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dea41b4-345f-4a40-94a3-b10ac12aa343-kube-api-access-h5b6l" (OuterVolumeSpecName: "kube-api-access-h5b6l") pod "6dea41b4-345f-4a40-94a3-b10ac12aa343" (UID: "6dea41b4-345f-4a40-94a3-b10ac12aa343"). InnerVolumeSpecName "kube-api-access-h5b6l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:21:54 crc kubenswrapper[4826]: I0129 08:21:54.367259 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dea41b4-345f-4a40-94a3-b10ac12aa343-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6dea41b4-345f-4a40-94a3-b10ac12aa343" (UID: "6dea41b4-345f-4a40-94a3-b10ac12aa343"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 08:21:54 crc kubenswrapper[4826]: I0129 08:21:54.398733 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dea41b4-345f-4a40-94a3-b10ac12aa343-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 08:21:54 crc kubenswrapper[4826]: I0129 08:21:54.398756 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5b6l\" (UniqueName: \"kubernetes.io/projected/6dea41b4-345f-4a40-94a3-b10ac12aa343-kube-api-access-h5b6l\") on node \"crc\" DevicePath \"\""
Jan 29 08:21:54 crc kubenswrapper[4826]: I0129 08:21:54.398771 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dea41b4-345f-4a40-94a3-b10ac12aa343-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 08:21:54 crc kubenswrapper[4826]: I0129 08:21:54.626192 4826 generic.go:334] "Generic (PLEG): container finished" podID="6dea41b4-345f-4a40-94a3-b10ac12aa343" containerID="fc09b54557f446d9ecd229bb6e2f245098ab11e55d277076b00a87345d448cad" exitCode=0
Jan 29 08:21:54 crc kubenswrapper[4826]: I0129 08:21:54.626237 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vmq5z"
Jan 29 08:21:54 crc kubenswrapper[4826]: I0129 08:21:54.626231 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmq5z" event={"ID":"6dea41b4-345f-4a40-94a3-b10ac12aa343","Type":"ContainerDied","Data":"fc09b54557f446d9ecd229bb6e2f245098ab11e55d277076b00a87345d448cad"}
Jan 29 08:21:54 crc kubenswrapper[4826]: I0129 08:21:54.626346 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmq5z" event={"ID":"6dea41b4-345f-4a40-94a3-b10ac12aa343","Type":"ContainerDied","Data":"cf20a1e23d43b29359d13a9dba7ab1af5d338c11bdc1d978e43b7dd6806b0f30"}
Jan 29 08:21:54 crc kubenswrapper[4826]: I0129 08:21:54.626366 4826 scope.go:117] "RemoveContainer" containerID="fc09b54557f446d9ecd229bb6e2f245098ab11e55d277076b00a87345d448cad"
Jan 29 08:21:54 crc kubenswrapper[4826]: I0129 08:21:54.657982 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vmq5z"]
Jan 29 08:21:54 crc kubenswrapper[4826]: I0129 08:21:54.658972 4826 scope.go:117] "RemoveContainer" containerID="cab2edb2b84981011d8653c564ef43d1ca05299d2b889ad9a8dd6a1e4ace4dcd"
Jan 29 08:21:54 crc kubenswrapper[4826]: I0129 08:21:54.672535 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vmq5z"]
Jan 29 08:21:54 crc kubenswrapper[4826]: I0129 08:21:54.686884 4826 scope.go:117] "RemoveContainer" containerID="fe62af21907dda08cf858e76076a6bb0397ed1410bdacad2bd6170d3c5c298a2"
Jan 29 08:21:54 crc kubenswrapper[4826]: I0129 08:21:54.717352 4826 scope.go:117] "RemoveContainer" containerID="fc09b54557f446d9ecd229bb6e2f245098ab11e55d277076b00a87345d448cad"
Jan 29 08:21:54 crc kubenswrapper[4826]: E0129 08:21:54.717693 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc09b54557f446d9ecd229bb6e2f245098ab11e55d277076b00a87345d448cad\": container with ID starting with fc09b54557f446d9ecd229bb6e2f245098ab11e55d277076b00a87345d448cad not found: ID does not exist" containerID="fc09b54557f446d9ecd229bb6e2f245098ab11e55d277076b00a87345d448cad"
Jan 29 08:21:54 crc kubenswrapper[4826]: I0129 08:21:54.717730 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc09b54557f446d9ecd229bb6e2f245098ab11e55d277076b00a87345d448cad"} err="failed to get container status \"fc09b54557f446d9ecd229bb6e2f245098ab11e55d277076b00a87345d448cad\": rpc error: code = NotFound desc = could not find container \"fc09b54557f446d9ecd229bb6e2f245098ab11e55d277076b00a87345d448cad\": container with ID starting with fc09b54557f446d9ecd229bb6e2f245098ab11e55d277076b00a87345d448cad not found: ID does not exist"
Jan 29 08:21:54 crc kubenswrapper[4826]: I0129 08:21:54.717755 4826 scope.go:117] "RemoveContainer" containerID="cab2edb2b84981011d8653c564ef43d1ca05299d2b889ad9a8dd6a1e4ace4dcd"
Jan 29 08:21:54 crc kubenswrapper[4826]: E0129 08:21:54.717958 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cab2edb2b84981011d8653c564ef43d1ca05299d2b889ad9a8dd6a1e4ace4dcd\": container with ID starting with cab2edb2b84981011d8653c564ef43d1ca05299d2b889ad9a8dd6a1e4ace4dcd not found: ID does not exist" containerID="cab2edb2b84981011d8653c564ef43d1ca05299d2b889ad9a8dd6a1e4ace4dcd"
Jan 29 08:21:54 crc kubenswrapper[4826]: I0129 08:21:54.717981 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cab2edb2b84981011d8653c564ef43d1ca05299d2b889ad9a8dd6a1e4ace4dcd"} err="failed to get container status \"cab2edb2b84981011d8653c564ef43d1ca05299d2b889ad9a8dd6a1e4ace4dcd\": rpc error: code = NotFound desc = could not find container \"cab2edb2b84981011d8653c564ef43d1ca05299d2b889ad9a8dd6a1e4ace4dcd\": container with ID starting with cab2edb2b84981011d8653c564ef43d1ca05299d2b889ad9a8dd6a1e4ace4dcd not found: ID does not exist"
Jan 29 08:21:54 crc kubenswrapper[4826]: I0129 08:21:54.717992 4826 scope.go:117] "RemoveContainer" containerID="fe62af21907dda08cf858e76076a6bb0397ed1410bdacad2bd6170d3c5c298a2"
Jan 29 08:21:54 crc kubenswrapper[4826]: E0129 08:21:54.718243 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe62af21907dda08cf858e76076a6bb0397ed1410bdacad2bd6170d3c5c298a2\": container with ID starting with fe62af21907dda08cf858e76076a6bb0397ed1410bdacad2bd6170d3c5c298a2 not found: ID does not exist" containerID="fe62af21907dda08cf858e76076a6bb0397ed1410bdacad2bd6170d3c5c298a2"
Jan 29 08:21:54 crc kubenswrapper[4826]: I0129 08:21:54.718264 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe62af21907dda08cf858e76076a6bb0397ed1410bdacad2bd6170d3c5c298a2"} err="failed to get container status \"fe62af21907dda08cf858e76076a6bb0397ed1410bdacad2bd6170d3c5c298a2\": rpc error: code = NotFound desc = could not find container \"fe62af21907dda08cf858e76076a6bb0397ed1410bdacad2bd6170d3c5c298a2\": container with ID starting with fe62af21907dda08cf858e76076a6bb0397ed1410bdacad2bd6170d3c5c298a2 not found: ID does not exist"
Jan 29 08:21:54 crc kubenswrapper[4826]: I0129 08:21:54.820794 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dea41b4-345f-4a40-94a3-b10ac12aa343" path="/var/lib/kubelet/pods/6dea41b4-345f-4a40-94a3-b10ac12aa343/volumes"
Jan 29 08:21:56 crc kubenswrapper[4826]: I0129 08:21:56.552973 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-5db8584866-xvf8s"
Jan 29 08:21:56 crc kubenswrapper[4826]: I0129 08:21:56.603441 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-7f4c4fc79c-mthpn"
Jan 29 08:21:56 crc kubenswrapper[4826]: I0129 08:21:56.612224 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7b4cbb95d6-p5lg5"]
Jan 29 08:21:56 crc kubenswrapper[4826]: I0129 08:21:56.718310 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7c4bc4dbfc-r9gdr"]
Jan 29 08:21:56 crc kubenswrapper[4826]: I0129 08:21:56.781523 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-98bbfb9d-vk4s6"
Jan 29 08:21:57 crc kubenswrapper[4826]: I0129 08:21:57.133640 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7b4cbb95d6-p5lg5"
Jan 29 08:21:57 crc kubenswrapper[4826]: I0129 08:21:57.138988 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7c4bc4dbfc-r9gdr"
Jan 29 08:21:57 crc kubenswrapper[4826]: I0129 08:21:57.272870 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f98753e2-8d12-459d-a3ce-c9f2582225a5-combined-ca-bundle\") pod \"f98753e2-8d12-459d-a3ce-c9f2582225a5\" (UID: \"f98753e2-8d12-459d-a3ce-c9f2582225a5\") "
Jan 29 08:21:57 crc kubenswrapper[4826]: I0129 08:21:57.272948 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cfc4a5ba-ae41-49da-a38f-3e2cb6afe124-config-data-custom\") pod \"cfc4a5ba-ae41-49da-a38f-3e2cb6afe124\" (UID: \"cfc4a5ba-ae41-49da-a38f-3e2cb6afe124\") "
Jan 29 08:21:57 crc kubenswrapper[4826]: I0129 08:21:57.273013 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfc4a5ba-ae41-49da-a38f-3e2cb6afe124-config-data\") pod \"cfc4a5ba-ae41-49da-a38f-3e2cb6afe124\" (UID: \"cfc4a5ba-ae41-49da-a38f-3e2cb6afe124\") "
Jan 29 08:21:57 crc kubenswrapper[4826]: I0129 08:21:57.273092 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc4a5ba-ae41-49da-a38f-3e2cb6afe124-combined-ca-bundle\") pod \"cfc4a5ba-ae41-49da-a38f-3e2cb6afe124\" (UID: \"cfc4a5ba-ae41-49da-a38f-3e2cb6afe124\") "
Jan 29 08:21:57 crc kubenswrapper[4826]: I0129 08:21:57.273125 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8stfr\" (UniqueName: \"kubernetes.io/projected/f98753e2-8d12-459d-a3ce-c9f2582225a5-kube-api-access-8stfr\") pod \"f98753e2-8d12-459d-a3ce-c9f2582225a5\" (UID: \"f98753e2-8d12-459d-a3ce-c9f2582225a5\") "
Jan 29 08:21:57 crc kubenswrapper[4826]: I0129 08:21:57.273162 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f98753e2-8d12-459d-a3ce-c9f2582225a5-config-data-custom\") pod \"f98753e2-8d12-459d-a3ce-c9f2582225a5\" (UID: \"f98753e2-8d12-459d-a3ce-c9f2582225a5\") "
Jan 29 08:21:57 crc kubenswrapper[4826]: I0129 08:21:57.273180 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f98753e2-8d12-459d-a3ce-c9f2582225a5-config-data\") pod \"f98753e2-8d12-459d-a3ce-c9f2582225a5\" (UID: \"f98753e2-8d12-459d-a3ce-c9f2582225a5\") "
Jan 29 08:21:57 crc kubenswrapper[4826]: I0129 08:21:57.273243 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxxlj\" (UniqueName: \"kubernetes.io/projected/cfc4a5ba-ae41-49da-a38f-3e2cb6afe124-kube-api-access-rxxlj\") pod \"cfc4a5ba-ae41-49da-a38f-3e2cb6afe124\" (UID: \"cfc4a5ba-ae41-49da-a38f-3e2cb6afe124\") "
Jan 29 08:21:57 crc kubenswrapper[4826]: I0129 08:21:57.278882 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfc4a5ba-ae41-49da-a38f-3e2cb6afe124-kube-api-access-rxxlj" (OuterVolumeSpecName: "kube-api-access-rxxlj") pod "cfc4a5ba-ae41-49da-a38f-3e2cb6afe124" (UID: "cfc4a5ba-ae41-49da-a38f-3e2cb6afe124"). InnerVolumeSpecName "kube-api-access-rxxlj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:21:57 crc kubenswrapper[4826]: I0129 08:21:57.280953 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f98753e2-8d12-459d-a3ce-c9f2582225a5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f98753e2-8d12-459d-a3ce-c9f2582225a5" (UID: "f98753e2-8d12-459d-a3ce-c9f2582225a5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:21:57 crc kubenswrapper[4826]: I0129 08:21:57.281635 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfc4a5ba-ae41-49da-a38f-3e2cb6afe124-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cfc4a5ba-ae41-49da-a38f-3e2cb6afe124" (UID: "cfc4a5ba-ae41-49da-a38f-3e2cb6afe124"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:21:57 crc kubenswrapper[4826]: I0129 08:21:57.291582 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f98753e2-8d12-459d-a3ce-c9f2582225a5-kube-api-access-8stfr" (OuterVolumeSpecName: "kube-api-access-8stfr") pod "f98753e2-8d12-459d-a3ce-c9f2582225a5" (UID: "f98753e2-8d12-459d-a3ce-c9f2582225a5"). InnerVolumeSpecName "kube-api-access-8stfr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:21:57 crc kubenswrapper[4826]: I0129 08:21:57.303611 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfc4a5ba-ae41-49da-a38f-3e2cb6afe124-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cfc4a5ba-ae41-49da-a38f-3e2cb6afe124" (UID: "cfc4a5ba-ae41-49da-a38f-3e2cb6afe124"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:21:57 crc kubenswrapper[4826]: I0129 08:21:57.304809 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f98753e2-8d12-459d-a3ce-c9f2582225a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f98753e2-8d12-459d-a3ce-c9f2582225a5" (UID: "f98753e2-8d12-459d-a3ce-c9f2582225a5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:21:57 crc kubenswrapper[4826]: I0129 08:21:57.332922 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f98753e2-8d12-459d-a3ce-c9f2582225a5-config-data" (OuterVolumeSpecName: "config-data") pod "f98753e2-8d12-459d-a3ce-c9f2582225a5" (UID: "f98753e2-8d12-459d-a3ce-c9f2582225a5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:21:57 crc kubenswrapper[4826]: I0129 08:21:57.351954 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfc4a5ba-ae41-49da-a38f-3e2cb6afe124-config-data" (OuterVolumeSpecName: "config-data") pod "cfc4a5ba-ae41-49da-a38f-3e2cb6afe124" (UID: "cfc4a5ba-ae41-49da-a38f-3e2cb6afe124"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:21:57 crc kubenswrapper[4826]: I0129 08:21:57.377689 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxxlj\" (UniqueName: \"kubernetes.io/projected/cfc4a5ba-ae41-49da-a38f-3e2cb6afe124-kube-api-access-rxxlj\") on node \"crc\" DevicePath \"\""
Jan 29 08:21:57 crc kubenswrapper[4826]: I0129 08:21:57.377727 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f98753e2-8d12-459d-a3ce-c9f2582225a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 08:21:57 crc kubenswrapper[4826]: I0129 08:21:57.377738 4826 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cfc4a5ba-ae41-49da-a38f-3e2cb6afe124-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 29 08:21:57 crc kubenswrapper[4826]: I0129 08:21:57.377750 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfc4a5ba-ae41-49da-a38f-3e2cb6afe124-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 08:21:57 crc kubenswrapper[4826]: I0129 08:21:57.377761 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc4a5ba-ae41-49da-a38f-3e2cb6afe124-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 08:21:57 crc kubenswrapper[4826]: I0129 08:21:57.377771 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8stfr\" (UniqueName: \"kubernetes.io/projected/f98753e2-8d12-459d-a3ce-c9f2582225a5-kube-api-access-8stfr\") on node \"crc\" DevicePath \"\""
Jan 29 08:21:57 crc kubenswrapper[4826]: I0129 08:21:57.377782 4826 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f98753e2-8d12-459d-a3ce-c9f2582225a5-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 29 08:21:57 crc kubenswrapper[4826]: I0129 08:21:57.377793 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f98753e2-8d12-459d-a3ce-c9f2582225a5-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 08:21:57 crc kubenswrapper[4826]: I0129 08:21:57.724912 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b4cbb95d6-p5lg5" event={"ID":"cfc4a5ba-ae41-49da-a38f-3e2cb6afe124","Type":"ContainerDied","Data":"22971a235719452f2ca12902a25998e2d4ebce372f567b0350d3c41786278955"}
Jan 29 08:21:57 crc kubenswrapper[4826]: I0129 08:21:57.724960 4826 scope.go:117] "RemoveContainer" containerID="79433b422dfa7a20c73c7f222c07e76f1f834132f2c484a5ecc4822e73e17e44"
Jan 29 08:21:57 crc kubenswrapper[4826]: I0129 08:21:57.725037 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7b4cbb95d6-p5lg5"
Jan 29 08:21:57 crc kubenswrapper[4826]: I0129 08:21:57.737530 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7c4bc4dbfc-r9gdr" event={"ID":"f98753e2-8d12-459d-a3ce-c9f2582225a5","Type":"ContainerDied","Data":"490b217f51379ce86998540d50951d024086970dacf6f32e23b0473e9aeaa899"}
Jan 29 08:21:57 crc kubenswrapper[4826]: I0129 08:21:57.737603 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7c4bc4dbfc-r9gdr"
Jan 29 08:21:57 crc kubenswrapper[4826]: I0129 08:21:57.780751 4826 scope.go:117] "RemoveContainer" containerID="4b88d75a76d988458772a0e572fcc0f1841cc1c1424bce3a7af61a8d8ac206fe"
Jan 29 08:21:57 crc kubenswrapper[4826]: I0129 08:21:57.782555 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7b4cbb95d6-p5lg5"]
Jan 29 08:21:57 crc kubenswrapper[4826]: I0129 08:21:57.790369 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-7b4cbb95d6-p5lg5"]
Jan 29 08:21:57 crc kubenswrapper[4826]: I0129 08:21:57.799379 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7c4bc4dbfc-r9gdr"]
Jan 29 08:21:57 crc kubenswrapper[4826]: I0129 08:21:57.817189 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-7c4bc4dbfc-r9gdr"]
Jan 29 08:21:58 crc kubenswrapper[4826]: I0129 08:21:58.824120 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfc4a5ba-ae41-49da-a38f-3e2cb6afe124" path="/var/lib/kubelet/pods/cfc4a5ba-ae41-49da-a38f-3e2cb6afe124/volumes"
Jan 29 08:21:58 crc kubenswrapper[4826]: I0129 08:21:58.824887 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f98753e2-8d12-459d-a3ce-c9f2582225a5" path="/var/lib/kubelet/pods/f98753e2-8d12-459d-a3ce-c9f2582225a5/volumes"
Jan 29 08:21:59 crc kubenswrapper[4826]: I0129 08:21:59.654920 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-58dcf5df6-kbrdh" podUID="c1e011d3-9762-44e1-8137-2432569a561e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.100:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.100:8443: connect: connection refused"
Jan 29 08:21:59 crc kubenswrapper[4826]: I0129 08:21:59.655065 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-58dcf5df6-kbrdh"
Jan 29 08:22:00 crc kubenswrapper[4826]: I0129 08:22:00.808914 4826 scope.go:117] "RemoveContainer" containerID="952b43d896912a3cda5e612a6c0f46d793202eff6352f27fa018ee360258c570"
Jan 29 08:22:00 crc kubenswrapper[4826]: E0129 08:22:00.809537 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 08:22:02 crc kubenswrapper[4826]: I0129 08:22:02.070854 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-krk5b"]
Jan 29 08:22:02 crc kubenswrapper[4826]: I0129 08:22:02.087847 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-dbd0-account-create-update-qbplg"]
Jan 29 08:22:02 crc kubenswrapper[4826]: I0129 08:22:02.098233 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-dbd0-account-create-update-qbplg"]
Jan 29 08:22:02 crc kubenswrapper[4826]: I0129 08:22:02.107742 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-krk5b"]
Jan 29 08:22:02 crc kubenswrapper[4826]: I0129 08:22:02.830036 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12acfe9d-3c59-4b09-bc73-511763da0f97" path="/var/lib/kubelet/pods/12acfe9d-3c59-4b09-bc73-511763da0f97/volumes"
Jan 29 08:22:02 crc kubenswrapper[4826]: I0129 08:22:02.831570 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86b70399-c3a4-4543-b284-fad06b2dff50" path="/var/lib/kubelet/pods/86b70399-c3a4-4543-b284-fad06b2dff50/volumes"
Jan 29 08:22:03 crc kubenswrapper[4826]: I0129 08:22:03.998706 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-745776875c-xh5h5"
Jan 29 08:22:04 crc kubenswrapper[4826]: I0129 08:22:04.073680 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-98bbfb9d-vk4s6"]
Jan 29 08:22:04 crc kubenswrapper[4826]: I0129 08:22:04.073935 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-98bbfb9d-vk4s6" podUID="b24cad90-178f-44c8-ac99-398d08215783" containerName="heat-engine" containerID="cri-o://06220657fe4a1c8b3765cd2347b67f70f26231feea13c7222b8bd5147e3f8f57" gracePeriod=60
Jan 29 08:22:06 crc kubenswrapper[4826]: E0129 08:22:06.731879 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="06220657fe4a1c8b3765cd2347b67f70f26231feea13c7222b8bd5147e3f8f57" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Jan 29 08:22:06 crc kubenswrapper[4826]: E0129 08:22:06.734231 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="06220657fe4a1c8b3765cd2347b67f70f26231feea13c7222b8bd5147e3f8f57" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Jan 29 08:22:06 crc kubenswrapper[4826]: E0129 08:22:06.736215 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="06220657fe4a1c8b3765cd2347b67f70f26231feea13c7222b8bd5147e3f8f57" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Jan 29 08:22:06 crc kubenswrapper[4826]: E0129 08:22:06.736314 4826 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-98bbfb9d-vk4s6" podUID="b24cad90-178f-44c8-ac99-398d08215783" containerName="heat-engine"
Jan 29 08:22:08 crc kubenswrapper[4826]: I0129 08:22:08.498268 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-58dcf5df6-kbrdh"
Jan 29 08:22:08 crc kubenswrapper[4826]: I0129 08:22:08.630242 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c1e011d3-9762-44e1-8137-2432569a561e-horizon-secret-key\") pod \"c1e011d3-9762-44e1-8137-2432569a561e\" (UID: \"c1e011d3-9762-44e1-8137-2432569a561e\") "
Jan 29 08:22:08 crc kubenswrapper[4826]: I0129 08:22:08.630344 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1e011d3-9762-44e1-8137-2432569a561e-horizon-tls-certs\") pod \"c1e011d3-9762-44e1-8137-2432569a561e\" (UID: \"c1e011d3-9762-44e1-8137-2432569a561e\") "
Jan 29 08:22:08 crc kubenswrapper[4826]: I0129 08:22:08.630399 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1e011d3-9762-44e1-8137-2432569a561e-logs\") pod \"c1e011d3-9762-44e1-8137-2432569a561e\" (UID: \"c1e011d3-9762-44e1-8137-2432569a561e\") "
Jan 29 08:22:08 crc kubenswrapper[4826]: I0129 08:22:08.630515 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nscff\" (UniqueName: \"kubernetes.io/projected/c1e011d3-9762-44e1-8137-2432569a561e-kube-api-access-nscff\") pod \"c1e011d3-9762-44e1-8137-2432569a561e\" (UID: \"c1e011d3-9762-44e1-8137-2432569a561e\") "
Jan 29 08:22:08 crc kubenswrapper[4826]: I0129 08:22:08.630573 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e011d3-9762-44e1-8137-2432569a561e-combined-ca-bundle\") pod \"c1e011d3-9762-44e1-8137-2432569a561e\" (UID: \"c1e011d3-9762-44e1-8137-2432569a561e\") "
Jan 29 08:22:08 crc kubenswrapper[4826]: I0129 08:22:08.630596 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1e011d3-9762-44e1-8137-2432569a561e-scripts\") pod \"c1e011d3-9762-44e1-8137-2432569a561e\" (UID: \"c1e011d3-9762-44e1-8137-2432569a561e\") "
Jan 29 08:22:08 crc kubenswrapper[4826]: I0129 08:22:08.630622 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1e011d3-9762-44e1-8137-2432569a561e-config-data\") pod \"c1e011d3-9762-44e1-8137-2432569a561e\" (UID: \"c1e011d3-9762-44e1-8137-2432569a561e\") "
Jan 29 08:22:08 crc kubenswrapper[4826]: I0129 08:22:08.631468 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1e011d3-9762-44e1-8137-2432569a561e-logs" (OuterVolumeSpecName: "logs") pod "c1e011d3-9762-44e1-8137-2432569a561e" (UID: "c1e011d3-9762-44e1-8137-2432569a561e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 08:22:08 crc kubenswrapper[4826]: I0129 08:22:08.631703 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1e011d3-9762-44e1-8137-2432569a561e-logs\") on node \"crc\" DevicePath \"\""
Jan 29 08:22:08 crc kubenswrapper[4826]: I0129 08:22:08.636383 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1e011d3-9762-44e1-8137-2432569a561e-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c1e011d3-9762-44e1-8137-2432569a561e" (UID: "c1e011d3-9762-44e1-8137-2432569a561e"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:22:08 crc kubenswrapper[4826]: I0129 08:22:08.638192 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1e011d3-9762-44e1-8137-2432569a561e-kube-api-access-nscff" (OuterVolumeSpecName: "kube-api-access-nscff") pod "c1e011d3-9762-44e1-8137-2432569a561e" (UID: "c1e011d3-9762-44e1-8137-2432569a561e"). InnerVolumeSpecName "kube-api-access-nscff". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:22:08 crc kubenswrapper[4826]: I0129 08:22:08.659050 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1e011d3-9762-44e1-8137-2432569a561e-config-data" (OuterVolumeSpecName: "config-data") pod "c1e011d3-9762-44e1-8137-2432569a561e" (UID: "c1e011d3-9762-44e1-8137-2432569a561e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 08:22:08 crc kubenswrapper[4826]: I0129 08:22:08.662795 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1e011d3-9762-44e1-8137-2432569a561e-scripts" (OuterVolumeSpecName: "scripts") pod "c1e011d3-9762-44e1-8137-2432569a561e" (UID: "c1e011d3-9762-44e1-8137-2432569a561e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 08:22:08 crc kubenswrapper[4826]: I0129 08:22:08.668999 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1e011d3-9762-44e1-8137-2432569a561e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1e011d3-9762-44e1-8137-2432569a561e" (UID: "c1e011d3-9762-44e1-8137-2432569a561e"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:22:08 crc kubenswrapper[4826]: I0129 08:22:08.699432 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1e011d3-9762-44e1-8137-2432569a561e-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "c1e011d3-9762-44e1-8137-2432569a561e" (UID: "c1e011d3-9762-44e1-8137-2432569a561e"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:22:08 crc kubenswrapper[4826]: I0129 08:22:08.733832 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nscff\" (UniqueName: \"kubernetes.io/projected/c1e011d3-9762-44e1-8137-2432569a561e-kube-api-access-nscff\") on node \"crc\" DevicePath \"\"" Jan 29 08:22:08 crc kubenswrapper[4826]: I0129 08:22:08.733867 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e011d3-9762-44e1-8137-2432569a561e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:22:08 crc kubenswrapper[4826]: I0129 08:22:08.733877 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1e011d3-9762-44e1-8137-2432569a561e-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:22:08 crc kubenswrapper[4826]: I0129 08:22:08.733888 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1e011d3-9762-44e1-8137-2432569a561e-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:22:08 crc kubenswrapper[4826]: I0129 08:22:08.733897 4826 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c1e011d3-9762-44e1-8137-2432569a561e-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 29 08:22:08 crc kubenswrapper[4826]: I0129 08:22:08.733904 4826 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c1e011d3-9762-44e1-8137-2432569a561e-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 08:22:08 crc kubenswrapper[4826]: I0129 08:22:08.868433 4826 generic.go:334] "Generic (PLEG): container finished" podID="c1e011d3-9762-44e1-8137-2432569a561e" containerID="2db8c1b361235872c80df05f35d4eb1e837fe0e7db7c8c165b46d313440e896c" exitCode=137 Jan 29 08:22:08 crc kubenswrapper[4826]: I0129 08:22:08.868482 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58dcf5df6-kbrdh" event={"ID":"c1e011d3-9762-44e1-8137-2432569a561e","Type":"ContainerDied","Data":"2db8c1b361235872c80df05f35d4eb1e837fe0e7db7c8c165b46d313440e896c"} Jan 29 08:22:08 crc kubenswrapper[4826]: I0129 08:22:08.868516 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58dcf5df6-kbrdh" event={"ID":"c1e011d3-9762-44e1-8137-2432569a561e","Type":"ContainerDied","Data":"14028152ea29de61ef14c755255dd798f970190acab07278934bed61b24a9376"} Jan 29 08:22:08 crc kubenswrapper[4826]: I0129 08:22:08.868537 4826 scope.go:117] "RemoveContainer" containerID="4ca5abf2dc71f9e1fa8cf8c6a06a22c2fa47f5037111388105ce6cdef610d6a2" Jan 29 08:22:08 crc kubenswrapper[4826]: I0129 08:22:08.868695 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-58dcf5df6-kbrdh" Jan 29 08:22:08 crc kubenswrapper[4826]: I0129 08:22:08.896411 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-58dcf5df6-kbrdh"] Jan 29 08:22:08 crc kubenswrapper[4826]: I0129 08:22:08.910981 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-58dcf5df6-kbrdh"] Jan 29 08:22:09 crc kubenswrapper[4826]: I0129 08:22:09.106237 4826 scope.go:117] "RemoveContainer" containerID="2db8c1b361235872c80df05f35d4eb1e837fe0e7db7c8c165b46d313440e896c" Jan 29 08:22:09 crc kubenswrapper[4826]: I0129 08:22:09.132755 4826 scope.go:117] "RemoveContainer" containerID="4ca5abf2dc71f9e1fa8cf8c6a06a22c2fa47f5037111388105ce6cdef610d6a2" Jan 29 08:22:09 crc kubenswrapper[4826]: E0129 08:22:09.133426 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ca5abf2dc71f9e1fa8cf8c6a06a22c2fa47f5037111388105ce6cdef610d6a2\": container with ID starting with 4ca5abf2dc71f9e1fa8cf8c6a06a22c2fa47f5037111388105ce6cdef610d6a2 not found: ID does not exist" containerID="4ca5abf2dc71f9e1fa8cf8c6a06a22c2fa47f5037111388105ce6cdef610d6a2" Jan 29 08:22:09 crc kubenswrapper[4826]: I0129 08:22:09.133485 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ca5abf2dc71f9e1fa8cf8c6a06a22c2fa47f5037111388105ce6cdef610d6a2"} err="failed to get container status \"4ca5abf2dc71f9e1fa8cf8c6a06a22c2fa47f5037111388105ce6cdef610d6a2\": rpc error: code = NotFound desc = could not find container \"4ca5abf2dc71f9e1fa8cf8c6a06a22c2fa47f5037111388105ce6cdef610d6a2\": container with ID starting with 4ca5abf2dc71f9e1fa8cf8c6a06a22c2fa47f5037111388105ce6cdef610d6a2 not found: ID does not exist" Jan 29 08:22:09 crc kubenswrapper[4826]: I0129 08:22:09.133518 4826 scope.go:117] "RemoveContainer" containerID="2db8c1b361235872c80df05f35d4eb1e837fe0e7db7c8c165b46d313440e896c" Jan 29 08:22:09 crc 
kubenswrapper[4826]: E0129 08:22:09.134039 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2db8c1b361235872c80df05f35d4eb1e837fe0e7db7c8c165b46d313440e896c\": container with ID starting with 2db8c1b361235872c80df05f35d4eb1e837fe0e7db7c8c165b46d313440e896c not found: ID does not exist" containerID="2db8c1b361235872c80df05f35d4eb1e837fe0e7db7c8c165b46d313440e896c" Jan 29 08:22:09 crc kubenswrapper[4826]: I0129 08:22:09.134109 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2db8c1b361235872c80df05f35d4eb1e837fe0e7db7c8c165b46d313440e896c"} err="failed to get container status \"2db8c1b361235872c80df05f35d4eb1e837fe0e7db7c8c165b46d313440e896c\": rpc error: code = NotFound desc = could not find container \"2db8c1b361235872c80df05f35d4eb1e837fe0e7db7c8c165b46d313440e896c\": container with ID starting with 2db8c1b361235872c80df05f35d4eb1e837fe0e7db7c8c165b46d313440e896c not found: ID does not exist" Jan 29 08:22:10 crc kubenswrapper[4826]: I0129 08:22:10.041506 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-qhcwq"] Jan 29 08:22:10 crc kubenswrapper[4826]: I0129 08:22:10.052181 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-qhcwq"] Jan 29 08:22:10 crc kubenswrapper[4826]: I0129 08:22:10.824047 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1e011d3-9762-44e1-8137-2432569a561e" path="/var/lib/kubelet/pods/c1e011d3-9762-44e1-8137-2432569a561e/volumes" Jan 29 08:22:10 crc kubenswrapper[4826]: I0129 08:22:10.825437 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed258084-deca-4148-bc0a-5a56182f4a7e" path="/var/lib/kubelet/pods/ed258084-deca-4148-bc0a-5a56182f4a7e/volumes" Jan 29 08:22:14 crc kubenswrapper[4826]: I0129 08:22:14.808812 4826 scope.go:117] "RemoveContainer" 
containerID="952b43d896912a3cda5e612a6c0f46d793202eff6352f27fa018ee360258c570" Jan 29 08:22:14 crc kubenswrapper[4826]: E0129 08:22:14.809581 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:22:16 crc kubenswrapper[4826]: E0129 08:22:16.731589 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="06220657fe4a1c8b3765cd2347b67f70f26231feea13c7222b8bd5147e3f8f57" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 29 08:22:16 crc kubenswrapper[4826]: E0129 08:22:16.734194 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="06220657fe4a1c8b3765cd2347b67f70f26231feea13c7222b8bd5147e3f8f57" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 29 08:22:16 crc kubenswrapper[4826]: E0129 08:22:16.736510 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="06220657fe4a1c8b3765cd2347b67f70f26231feea13c7222b8bd5147e3f8f57" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 29 08:22:16 crc kubenswrapper[4826]: E0129 08:22:16.736679 4826 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" 
pod="openstack/heat-engine-98bbfb9d-vk4s6" podUID="b24cad90-178f-44c8-ac99-398d08215783" containerName="heat-engine" Jan 29 08:22:21 crc kubenswrapper[4826]: I0129 08:22:21.506534 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-98bbfb9d-vk4s6" Jan 29 08:22:21 crc kubenswrapper[4826]: I0129 08:22:21.544760 4826 scope.go:117] "RemoveContainer" containerID="6a7463cdc73974e7ead8520ecca2579f50d1ada3808b30335d7f16bfb17ac155" Jan 29 08:22:21 crc kubenswrapper[4826]: I0129 08:22:21.569156 4826 scope.go:117] "RemoveContainer" containerID="98d355213920248f787fd4090c2ab02a4988299129c22853b1b19f36f0dba021" Jan 29 08:22:21 crc kubenswrapper[4826]: I0129 08:22:21.612682 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b24cad90-178f-44c8-ac99-398d08215783-config-data-custom\") pod \"b24cad90-178f-44c8-ac99-398d08215783\" (UID: \"b24cad90-178f-44c8-ac99-398d08215783\") " Jan 29 08:22:21 crc kubenswrapper[4826]: I0129 08:22:21.612822 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b24cad90-178f-44c8-ac99-398d08215783-config-data\") pod \"b24cad90-178f-44c8-ac99-398d08215783\" (UID: \"b24cad90-178f-44c8-ac99-398d08215783\") " Jan 29 08:22:21 crc kubenswrapper[4826]: I0129 08:22:21.612865 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b24cad90-178f-44c8-ac99-398d08215783-combined-ca-bundle\") pod \"b24cad90-178f-44c8-ac99-398d08215783\" (UID: \"b24cad90-178f-44c8-ac99-398d08215783\") " Jan 29 08:22:21 crc kubenswrapper[4826]: I0129 08:22:21.612969 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bb6w8\" (UniqueName: \"kubernetes.io/projected/b24cad90-178f-44c8-ac99-398d08215783-kube-api-access-bb6w8\") 
pod \"b24cad90-178f-44c8-ac99-398d08215783\" (UID: \"b24cad90-178f-44c8-ac99-398d08215783\") " Jan 29 08:22:21 crc kubenswrapper[4826]: I0129 08:22:21.619032 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b24cad90-178f-44c8-ac99-398d08215783-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b24cad90-178f-44c8-ac99-398d08215783" (UID: "b24cad90-178f-44c8-ac99-398d08215783"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:22:21 crc kubenswrapper[4826]: I0129 08:22:21.619455 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b24cad90-178f-44c8-ac99-398d08215783-kube-api-access-bb6w8" (OuterVolumeSpecName: "kube-api-access-bb6w8") pod "b24cad90-178f-44c8-ac99-398d08215783" (UID: "b24cad90-178f-44c8-ac99-398d08215783"). InnerVolumeSpecName "kube-api-access-bb6w8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:22:21 crc kubenswrapper[4826]: I0129 08:22:21.649427 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b24cad90-178f-44c8-ac99-398d08215783-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b24cad90-178f-44c8-ac99-398d08215783" (UID: "b24cad90-178f-44c8-ac99-398d08215783"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:22:21 crc kubenswrapper[4826]: I0129 08:22:21.667118 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b24cad90-178f-44c8-ac99-398d08215783-config-data" (OuterVolumeSpecName: "config-data") pod "b24cad90-178f-44c8-ac99-398d08215783" (UID: "b24cad90-178f-44c8-ac99-398d08215783"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:22:21 crc kubenswrapper[4826]: I0129 08:22:21.678645 4826 scope.go:117] "RemoveContainer" containerID="bb0484b1cb95cf1207588598d902edf20d859084ec3bb2584317baf8e27a8c1a" Jan 29 08:22:21 crc kubenswrapper[4826]: I0129 08:22:21.715692 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b24cad90-178f-44c8-ac99-398d08215783-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:22:21 crc kubenswrapper[4826]: I0129 08:22:21.715723 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bb6w8\" (UniqueName: \"kubernetes.io/projected/b24cad90-178f-44c8-ac99-398d08215783-kube-api-access-bb6w8\") on node \"crc\" DevicePath \"\"" Jan 29 08:22:21 crc kubenswrapper[4826]: I0129 08:22:21.715732 4826 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b24cad90-178f-44c8-ac99-398d08215783-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 08:22:21 crc kubenswrapper[4826]: I0129 08:22:21.715743 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b24cad90-178f-44c8-ac99-398d08215783-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:22:21 crc kubenswrapper[4826]: I0129 08:22:21.716853 4826 scope.go:117] "RemoveContainer" containerID="721c203f56bc7b697ba97fa4ded2f30d628d034c75837ea8f22e5af5b24fb9a2" Jan 29 08:22:21 crc kubenswrapper[4826]: I0129 08:22:21.741704 4826 scope.go:117] "RemoveContainer" containerID="d5a6de88ebe70994f81bb40c776b10407595642ef804a4c446bafefa7b6bef49" Jan 29 08:22:21 crc kubenswrapper[4826]: I0129 08:22:21.763848 4826 scope.go:117] "RemoveContainer" containerID="7a4523570412e58db500b5b4a18db5fc4f9b3c5156e6dfc98ec2e63057d9e15d" Jan 29 08:22:21 crc kubenswrapper[4826]: I0129 08:22:21.785966 4826 scope.go:117] "RemoveContainer" 
containerID="0726c8bc8e1c1165698bc5a921b4b7c82a090dc6f3bb6842f46a017255eb995e" Jan 29 08:22:21 crc kubenswrapper[4826]: I0129 08:22:21.806006 4826 scope.go:117] "RemoveContainer" containerID="44cf12dd79ca811ab09dbcfc6aaa2a06a2bad41207f7297e6a40ae2ce208af38" Jan 29 08:22:22 crc kubenswrapper[4826]: I0129 08:22:22.032381 4826 generic.go:334] "Generic (PLEG): container finished" podID="b24cad90-178f-44c8-ac99-398d08215783" containerID="06220657fe4a1c8b3765cd2347b67f70f26231feea13c7222b8bd5147e3f8f57" exitCode=0 Jan 29 08:22:22 crc kubenswrapper[4826]: I0129 08:22:22.032465 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-98bbfb9d-vk4s6" Jan 29 08:22:22 crc kubenswrapper[4826]: I0129 08:22:22.032484 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-98bbfb9d-vk4s6" event={"ID":"b24cad90-178f-44c8-ac99-398d08215783","Type":"ContainerDied","Data":"06220657fe4a1c8b3765cd2347b67f70f26231feea13c7222b8bd5147e3f8f57"} Jan 29 08:22:22 crc kubenswrapper[4826]: I0129 08:22:22.032522 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-98bbfb9d-vk4s6" event={"ID":"b24cad90-178f-44c8-ac99-398d08215783","Type":"ContainerDied","Data":"1ef3e52743df5a3ff942d8da86928546c4acd2e3be03d15c0617fc7c696a59a4"} Jan 29 08:22:22 crc kubenswrapper[4826]: I0129 08:22:22.032556 4826 scope.go:117] "RemoveContainer" containerID="06220657fe4a1c8b3765cd2347b67f70f26231feea13c7222b8bd5147e3f8f57" Jan 29 08:22:22 crc kubenswrapper[4826]: I0129 08:22:22.077319 4826 scope.go:117] "RemoveContainer" containerID="06220657fe4a1c8b3765cd2347b67f70f26231feea13c7222b8bd5147e3f8f57" Jan 29 08:22:22 crc kubenswrapper[4826]: E0129 08:22:22.078439 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06220657fe4a1c8b3765cd2347b67f70f26231feea13c7222b8bd5147e3f8f57\": container with ID starting with 
06220657fe4a1c8b3765cd2347b67f70f26231feea13c7222b8bd5147e3f8f57 not found: ID does not exist" containerID="06220657fe4a1c8b3765cd2347b67f70f26231feea13c7222b8bd5147e3f8f57" Jan 29 08:22:22 crc kubenswrapper[4826]: I0129 08:22:22.078492 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06220657fe4a1c8b3765cd2347b67f70f26231feea13c7222b8bd5147e3f8f57"} err="failed to get container status \"06220657fe4a1c8b3765cd2347b67f70f26231feea13c7222b8bd5147e3f8f57\": rpc error: code = NotFound desc = could not find container \"06220657fe4a1c8b3765cd2347b67f70f26231feea13c7222b8bd5147e3f8f57\": container with ID starting with 06220657fe4a1c8b3765cd2347b67f70f26231feea13c7222b8bd5147e3f8f57 not found: ID does not exist" Jan 29 08:22:22 crc kubenswrapper[4826]: I0129 08:22:22.102755 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-98bbfb9d-vk4s6"] Jan 29 08:22:22 crc kubenswrapper[4826]: I0129 08:22:22.112213 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-98bbfb9d-vk4s6"] Jan 29 08:22:22 crc kubenswrapper[4826]: I0129 08:22:22.838985 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b24cad90-178f-44c8-ac99-398d08215783" path="/var/lib/kubelet/pods/b24cad90-178f-44c8-ac99-398d08215783/volumes" Jan 29 08:22:25 crc kubenswrapper[4826]: I0129 08:22:25.736170 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gggjr"] Jan 29 08:22:25 crc kubenswrapper[4826]: E0129 08:22:25.737822 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b24cad90-178f-44c8-ac99-398d08215783" containerName="heat-engine" Jan 29 08:22:25 crc kubenswrapper[4826]: I0129 08:22:25.737853 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="b24cad90-178f-44c8-ac99-398d08215783" containerName="heat-engine" Jan 29 08:22:25 crc kubenswrapper[4826]: E0129 
08:22:25.737896 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dea41b4-345f-4a40-94a3-b10ac12aa343" containerName="registry-server" Jan 29 08:22:25 crc kubenswrapper[4826]: I0129 08:22:25.737914 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dea41b4-345f-4a40-94a3-b10ac12aa343" containerName="registry-server" Jan 29 08:22:25 crc kubenswrapper[4826]: E0129 08:22:25.737944 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0baeb655-f0ce-41c8-a318-ab09ef75c097" containerName="heat-api" Jan 29 08:22:25 crc kubenswrapper[4826]: I0129 08:22:25.737961 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="0baeb655-f0ce-41c8-a318-ab09ef75c097" containerName="heat-api" Jan 29 08:22:25 crc kubenswrapper[4826]: E0129 08:22:25.737998 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfc4a5ba-ae41-49da-a38f-3e2cb6afe124" containerName="heat-cfnapi" Jan 29 08:22:25 crc kubenswrapper[4826]: I0129 08:22:25.738013 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc4a5ba-ae41-49da-a38f-3e2cb6afe124" containerName="heat-cfnapi" Jan 29 08:22:25 crc kubenswrapper[4826]: E0129 08:22:25.738030 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f98753e2-8d12-459d-a3ce-c9f2582225a5" containerName="heat-api" Jan 29 08:22:25 crc kubenswrapper[4826]: I0129 08:22:25.738045 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f98753e2-8d12-459d-a3ce-c9f2582225a5" containerName="heat-api" Jan 29 08:22:25 crc kubenswrapper[4826]: E0129 08:22:25.738068 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1e011d3-9762-44e1-8137-2432569a561e" containerName="horizon" Jan 29 08:22:25 crc kubenswrapper[4826]: I0129 08:22:25.738083 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1e011d3-9762-44e1-8137-2432569a561e" containerName="horizon" Jan 29 08:22:25 crc kubenswrapper[4826]: E0129 08:22:25.738109 4826 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="4824add4-25e9-4822-81fb-090d9ea91628" containerName="heat-cfnapi" Jan 29 08:22:25 crc kubenswrapper[4826]: I0129 08:22:25.738123 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="4824add4-25e9-4822-81fb-090d9ea91628" containerName="heat-cfnapi" Jan 29 08:22:25 crc kubenswrapper[4826]: E0129 08:22:25.738147 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dea41b4-345f-4a40-94a3-b10ac12aa343" containerName="extract-content" Jan 29 08:22:25 crc kubenswrapper[4826]: I0129 08:22:25.738163 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dea41b4-345f-4a40-94a3-b10ac12aa343" containerName="extract-content" Jan 29 08:22:25 crc kubenswrapper[4826]: E0129 08:22:25.738208 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dea41b4-345f-4a40-94a3-b10ac12aa343" containerName="extract-utilities" Jan 29 08:22:25 crc kubenswrapper[4826]: I0129 08:22:25.738224 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dea41b4-345f-4a40-94a3-b10ac12aa343" containerName="extract-utilities" Jan 29 08:22:25 crc kubenswrapper[4826]: E0129 08:22:25.738325 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1e011d3-9762-44e1-8137-2432569a561e" containerName="horizon-log" Jan 29 08:22:25 crc kubenswrapper[4826]: I0129 08:22:25.738343 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1e011d3-9762-44e1-8137-2432569a561e" containerName="horizon-log" Jan 29 08:22:25 crc kubenswrapper[4826]: I0129 08:22:25.738746 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="0baeb655-f0ce-41c8-a318-ab09ef75c097" containerName="heat-api" Jan 29 08:22:25 crc kubenswrapper[4826]: I0129 08:22:25.738794 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="b24cad90-178f-44c8-ac99-398d08215783" containerName="heat-engine" Jan 29 08:22:25 crc kubenswrapper[4826]: I0129 08:22:25.738821 4826 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f98753e2-8d12-459d-a3ce-c9f2582225a5" containerName="heat-api" Jan 29 08:22:25 crc kubenswrapper[4826]: I0129 08:22:25.738847 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dea41b4-345f-4a40-94a3-b10ac12aa343" containerName="registry-server" Jan 29 08:22:25 crc kubenswrapper[4826]: I0129 08:22:25.738866 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1e011d3-9762-44e1-8137-2432569a561e" containerName="horizon-log" Jan 29 08:22:25 crc kubenswrapper[4826]: I0129 08:22:25.738885 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfc4a5ba-ae41-49da-a38f-3e2cb6afe124" containerName="heat-cfnapi" Jan 29 08:22:25 crc kubenswrapper[4826]: I0129 08:22:25.738912 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1e011d3-9762-44e1-8137-2432569a561e" containerName="horizon" Jan 29 08:22:25 crc kubenswrapper[4826]: I0129 08:22:25.738943 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f98753e2-8d12-459d-a3ce-c9f2582225a5" containerName="heat-api" Jan 29 08:22:25 crc kubenswrapper[4826]: I0129 08:22:25.738971 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="4824add4-25e9-4822-81fb-090d9ea91628" containerName="heat-cfnapi" Jan 29 08:22:25 crc kubenswrapper[4826]: E0129 08:22:25.739429 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfc4a5ba-ae41-49da-a38f-3e2cb6afe124" containerName="heat-cfnapi" Jan 29 08:22:25 crc kubenswrapper[4826]: I0129 08:22:25.739450 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc4a5ba-ae41-49da-a38f-3e2cb6afe124" containerName="heat-cfnapi" Jan 29 08:22:25 crc kubenswrapper[4826]: E0129 08:22:25.739467 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f98753e2-8d12-459d-a3ce-c9f2582225a5" containerName="heat-api" Jan 29 08:22:25 crc kubenswrapper[4826]: I0129 08:22:25.739484 4826 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f98753e2-8d12-459d-a3ce-c9f2582225a5" containerName="heat-api" Jan 29 08:22:25 crc kubenswrapper[4826]: I0129 08:22:25.739845 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfc4a5ba-ae41-49da-a38f-3e2cb6afe124" containerName="heat-cfnapi" Jan 29 08:22:25 crc kubenswrapper[4826]: I0129 08:22:25.741849 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gggjr" Jan 29 08:22:25 crc kubenswrapper[4826]: I0129 08:22:25.744456 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 08:22:25 crc kubenswrapper[4826]: I0129 08:22:25.746936 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gggjr"] Jan 29 08:22:25 crc kubenswrapper[4826]: I0129 08:22:25.938431 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/357050f5-02bf-4697-9e17-3d7389a90a6d-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gggjr\" (UID: \"357050f5-02bf-4697-9e17-3d7389a90a6d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gggjr" Jan 29 08:22:25 crc kubenswrapper[4826]: I0129 08:22:25.938517 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/357050f5-02bf-4697-9e17-3d7389a90a6d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gggjr\" (UID: \"357050f5-02bf-4697-9e17-3d7389a90a6d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gggjr" Jan 29 08:22:25 crc kubenswrapper[4826]: I0129 08:22:25.938754 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-jmlwl\" (UniqueName: \"kubernetes.io/projected/357050f5-02bf-4697-9e17-3d7389a90a6d-kube-api-access-jmlwl\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gggjr\" (UID: \"357050f5-02bf-4697-9e17-3d7389a90a6d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gggjr" Jan 29 08:22:26 crc kubenswrapper[4826]: I0129 08:22:26.040779 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmlwl\" (UniqueName: \"kubernetes.io/projected/357050f5-02bf-4697-9e17-3d7389a90a6d-kube-api-access-jmlwl\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gggjr\" (UID: \"357050f5-02bf-4697-9e17-3d7389a90a6d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gggjr" Jan 29 08:22:26 crc kubenswrapper[4826]: I0129 08:22:26.040885 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/357050f5-02bf-4697-9e17-3d7389a90a6d-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gggjr\" (UID: \"357050f5-02bf-4697-9e17-3d7389a90a6d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gggjr" Jan 29 08:22:26 crc kubenswrapper[4826]: I0129 08:22:26.040939 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/357050f5-02bf-4697-9e17-3d7389a90a6d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gggjr\" (UID: \"357050f5-02bf-4697-9e17-3d7389a90a6d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gggjr" Jan 29 08:22:26 crc kubenswrapper[4826]: I0129 08:22:26.041664 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/357050f5-02bf-4697-9e17-3d7389a90a6d-bundle\") pod 
\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gggjr\" (UID: \"357050f5-02bf-4697-9e17-3d7389a90a6d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gggjr" Jan 29 08:22:26 crc kubenswrapper[4826]: I0129 08:22:26.041950 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/357050f5-02bf-4697-9e17-3d7389a90a6d-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gggjr\" (UID: \"357050f5-02bf-4697-9e17-3d7389a90a6d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gggjr" Jan 29 08:22:26 crc kubenswrapper[4826]: I0129 08:22:26.071772 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmlwl\" (UniqueName: \"kubernetes.io/projected/357050f5-02bf-4697-9e17-3d7389a90a6d-kube-api-access-jmlwl\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gggjr\" (UID: \"357050f5-02bf-4697-9e17-3d7389a90a6d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gggjr" Jan 29 08:22:26 crc kubenswrapper[4826]: I0129 08:22:26.077729 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gggjr" Jan 29 08:22:26 crc kubenswrapper[4826]: I0129 08:22:26.382372 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gggjr"] Jan 29 08:22:26 crc kubenswrapper[4826]: I0129 08:22:26.817502 4826 scope.go:117] "RemoveContainer" containerID="952b43d896912a3cda5e612a6c0f46d793202eff6352f27fa018ee360258c570" Jan 29 08:22:26 crc kubenswrapper[4826]: E0129 08:22:26.817832 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:22:27 crc kubenswrapper[4826]: I0129 08:22:27.091993 4826 generic.go:334] "Generic (PLEG): container finished" podID="357050f5-02bf-4697-9e17-3d7389a90a6d" containerID="3738aac064327f1feed693d3f0a4604d9c893ec5483224f762e69ee89fe9d190" exitCode=0 Jan 29 08:22:27 crc kubenswrapper[4826]: I0129 08:22:27.092065 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gggjr" event={"ID":"357050f5-02bf-4697-9e17-3d7389a90a6d","Type":"ContainerDied","Data":"3738aac064327f1feed693d3f0a4604d9c893ec5483224f762e69ee89fe9d190"} Jan 29 08:22:27 crc kubenswrapper[4826]: I0129 08:22:27.092333 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gggjr" event={"ID":"357050f5-02bf-4697-9e17-3d7389a90a6d","Type":"ContainerStarted","Data":"ed2696a7953b39f8c177221bc6bc989f7b5a5d078c73821cc532e5fc533082c4"} Jan 29 08:22:29 crc 
kubenswrapper[4826]: I0129 08:22:29.118100 4826 generic.go:334] "Generic (PLEG): container finished" podID="357050f5-02bf-4697-9e17-3d7389a90a6d" containerID="0da022f81eef571aff1a6a8e08ddcd378769869be5a82a53ef3e3c1f36670fe0" exitCode=0 Jan 29 08:22:29 crc kubenswrapper[4826]: I0129 08:22:29.118185 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gggjr" event={"ID":"357050f5-02bf-4697-9e17-3d7389a90a6d","Type":"ContainerDied","Data":"0da022f81eef571aff1a6a8e08ddcd378769869be5a82a53ef3e3c1f36670fe0"} Jan 29 08:22:30 crc kubenswrapper[4826]: I0129 08:22:30.134083 4826 generic.go:334] "Generic (PLEG): container finished" podID="357050f5-02bf-4697-9e17-3d7389a90a6d" containerID="c44d4850ef376b3c1b2dd46ed2fa29427b0726648046ebdb8c5b20a84e2ca006" exitCode=0 Jan 29 08:22:30 crc kubenswrapper[4826]: I0129 08:22:30.134179 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gggjr" event={"ID":"357050f5-02bf-4697-9e17-3d7389a90a6d","Type":"ContainerDied","Data":"c44d4850ef376b3c1b2dd46ed2fa29427b0726648046ebdb8c5b20a84e2ca006"} Jan 29 08:22:31 crc kubenswrapper[4826]: I0129 08:22:31.610094 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gggjr" Jan 29 08:22:31 crc kubenswrapper[4826]: I0129 08:22:31.778429 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/357050f5-02bf-4697-9e17-3d7389a90a6d-util\") pod \"357050f5-02bf-4697-9e17-3d7389a90a6d\" (UID: \"357050f5-02bf-4697-9e17-3d7389a90a6d\") " Jan 29 08:22:31 crc kubenswrapper[4826]: I0129 08:22:31.778603 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmlwl\" (UniqueName: \"kubernetes.io/projected/357050f5-02bf-4697-9e17-3d7389a90a6d-kube-api-access-jmlwl\") pod \"357050f5-02bf-4697-9e17-3d7389a90a6d\" (UID: \"357050f5-02bf-4697-9e17-3d7389a90a6d\") " Jan 29 08:22:31 crc kubenswrapper[4826]: I0129 08:22:31.778794 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/357050f5-02bf-4697-9e17-3d7389a90a6d-bundle\") pod \"357050f5-02bf-4697-9e17-3d7389a90a6d\" (UID: \"357050f5-02bf-4697-9e17-3d7389a90a6d\") " Jan 29 08:22:31 crc kubenswrapper[4826]: I0129 08:22:31.782764 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/357050f5-02bf-4697-9e17-3d7389a90a6d-bundle" (OuterVolumeSpecName: "bundle") pod "357050f5-02bf-4697-9e17-3d7389a90a6d" (UID: "357050f5-02bf-4697-9e17-3d7389a90a6d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:22:31 crc kubenswrapper[4826]: I0129 08:22:31.783706 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/357050f5-02bf-4697-9e17-3d7389a90a6d-kube-api-access-jmlwl" (OuterVolumeSpecName: "kube-api-access-jmlwl") pod "357050f5-02bf-4697-9e17-3d7389a90a6d" (UID: "357050f5-02bf-4697-9e17-3d7389a90a6d"). InnerVolumeSpecName "kube-api-access-jmlwl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:22:31 crc kubenswrapper[4826]: I0129 08:22:31.792996 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/357050f5-02bf-4697-9e17-3d7389a90a6d-util" (OuterVolumeSpecName: "util") pod "357050f5-02bf-4697-9e17-3d7389a90a6d" (UID: "357050f5-02bf-4697-9e17-3d7389a90a6d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:22:31 crc kubenswrapper[4826]: I0129 08:22:31.882265 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmlwl\" (UniqueName: \"kubernetes.io/projected/357050f5-02bf-4697-9e17-3d7389a90a6d-kube-api-access-jmlwl\") on node \"crc\" DevicePath \"\"" Jan 29 08:22:31 crc kubenswrapper[4826]: I0129 08:22:31.882352 4826 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/357050f5-02bf-4697-9e17-3d7389a90a6d-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:22:31 crc kubenswrapper[4826]: I0129 08:22:31.882376 4826 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/357050f5-02bf-4697-9e17-3d7389a90a6d-util\") on node \"crc\" DevicePath \"\"" Jan 29 08:22:32 crc kubenswrapper[4826]: I0129 08:22:32.177252 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gggjr" event={"ID":"357050f5-02bf-4697-9e17-3d7389a90a6d","Type":"ContainerDied","Data":"ed2696a7953b39f8c177221bc6bc989f7b5a5d078c73821cc532e5fc533082c4"} Jan 29 08:22:32 crc kubenswrapper[4826]: I0129 08:22:32.177346 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed2696a7953b39f8c177221bc6bc989f7b5a5d078c73821cc532e5fc533082c4" Jan 29 08:22:32 crc kubenswrapper[4826]: I0129 08:22:32.177372 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gggjr" Jan 29 08:22:40 crc kubenswrapper[4826]: I0129 08:22:40.808728 4826 scope.go:117] "RemoveContainer" containerID="952b43d896912a3cda5e612a6c0f46d793202eff6352f27fa018ee360258c570" Jan 29 08:22:40 crc kubenswrapper[4826]: E0129 08:22:40.809455 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:22:43 crc kubenswrapper[4826]: I0129 08:22:43.706971 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-k97cs"] Jan 29 08:22:43 crc kubenswrapper[4826]: E0129 08:22:43.707618 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="357050f5-02bf-4697-9e17-3d7389a90a6d" containerName="extract" Jan 29 08:22:43 crc kubenswrapper[4826]: I0129 08:22:43.707630 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="357050f5-02bf-4697-9e17-3d7389a90a6d" containerName="extract" Jan 29 08:22:43 crc kubenswrapper[4826]: E0129 08:22:43.707652 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="357050f5-02bf-4697-9e17-3d7389a90a6d" containerName="util" Jan 29 08:22:43 crc kubenswrapper[4826]: I0129 08:22:43.707658 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="357050f5-02bf-4697-9e17-3d7389a90a6d" containerName="util" Jan 29 08:22:43 crc kubenswrapper[4826]: E0129 08:22:43.707683 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="357050f5-02bf-4697-9e17-3d7389a90a6d" containerName="pull" Jan 29 08:22:43 crc kubenswrapper[4826]: I0129 08:22:43.707688 4826 
state_mem.go:107] "Deleted CPUSet assignment" podUID="357050f5-02bf-4697-9e17-3d7389a90a6d" containerName="pull" Jan 29 08:22:43 crc kubenswrapper[4826]: I0129 08:22:43.707865 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="357050f5-02bf-4697-9e17-3d7389a90a6d" containerName="extract" Jan 29 08:22:43 crc kubenswrapper[4826]: I0129 08:22:43.708492 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-k97cs" Jan 29 08:22:43 crc kubenswrapper[4826]: I0129 08:22:43.712044 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-vtzvt" Jan 29 08:22:43 crc kubenswrapper[4826]: I0129 08:22:43.713146 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 29 08:22:43 crc kubenswrapper[4826]: I0129 08:22:43.724688 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-k97cs"] Jan 29 08:22:43 crc kubenswrapper[4826]: I0129 08:22:43.729585 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 29 08:22:43 crc kubenswrapper[4826]: I0129 08:22:43.741215 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dzmq\" (UniqueName: \"kubernetes.io/projected/5fc63d8d-d2f6-48da-95e1-47ce4038127e-kube-api-access-5dzmq\") pod \"obo-prometheus-operator-68bc856cb9-k97cs\" (UID: \"5fc63d8d-d2f6-48da-95e1-47ce4038127e\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-k97cs" Jan 29 08:22:43 crc kubenswrapper[4826]: I0129 08:22:43.829397 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56696b5f9-vkvkh"] Jan 29 08:22:43 crc kubenswrapper[4826]: I0129 08:22:43.830626 4826 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56696b5f9-vkvkh" Jan 29 08:22:43 crc kubenswrapper[4826]: I0129 08:22:43.832918 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-qtn4v" Jan 29 08:22:43 crc kubenswrapper[4826]: I0129 08:22:43.835452 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 29 08:22:43 crc kubenswrapper[4826]: I0129 08:22:43.842765 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dzmq\" (UniqueName: \"kubernetes.io/projected/5fc63d8d-d2f6-48da-95e1-47ce4038127e-kube-api-access-5dzmq\") pod \"obo-prometheus-operator-68bc856cb9-k97cs\" (UID: \"5fc63d8d-d2f6-48da-95e1-47ce4038127e\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-k97cs" Jan 29 08:22:43 crc kubenswrapper[4826]: I0129 08:22:43.854261 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56696b5f9-8r95k"] Jan 29 08:22:43 crc kubenswrapper[4826]: I0129 08:22:43.855917 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56696b5f9-8r95k" Jan 29 08:22:43 crc kubenswrapper[4826]: I0129 08:22:43.864607 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56696b5f9-vkvkh"] Jan 29 08:22:43 crc kubenswrapper[4826]: I0129 08:22:43.881990 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dzmq\" (UniqueName: \"kubernetes.io/projected/5fc63d8d-d2f6-48da-95e1-47ce4038127e-kube-api-access-5dzmq\") pod \"obo-prometheus-operator-68bc856cb9-k97cs\" (UID: \"5fc63d8d-d2f6-48da-95e1-47ce4038127e\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-k97cs" Jan 29 08:22:43 crc kubenswrapper[4826]: I0129 08:22:43.910206 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56696b5f9-8r95k"] Jan 29 08:22:43 crc kubenswrapper[4826]: I0129 08:22:43.945076 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a45663bb-b6c8-40cd-8f5c-9e3c08b2480a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56696b5f9-8r95k\" (UID: \"a45663bb-b6c8-40cd-8f5c-9e3c08b2480a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56696b5f9-8r95k" Jan 29 08:22:43 crc kubenswrapper[4826]: I0129 08:22:43.945312 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a45663bb-b6c8-40cd-8f5c-9e3c08b2480a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56696b5f9-8r95k\" (UID: \"a45663bb-b6c8-40cd-8f5c-9e3c08b2480a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56696b5f9-8r95k" Jan 29 08:22:43 crc kubenswrapper[4826]: I0129 08:22:43.945368 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/15a35e48-2620-4093-a986-0d7b1ecde3c5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56696b5f9-vkvkh\" (UID: \"15a35e48-2620-4093-a986-0d7b1ecde3c5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56696b5f9-vkvkh" Jan 29 08:22:43 crc kubenswrapper[4826]: I0129 08:22:43.945666 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/15a35e48-2620-4093-a986-0d7b1ecde3c5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56696b5f9-vkvkh\" (UID: \"15a35e48-2620-4093-a986-0d7b1ecde3c5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56696b5f9-vkvkh" Jan 29 08:22:44 crc kubenswrapper[4826]: I0129 08:22:44.030383 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-7r5fl"] Jan 29 08:22:44 crc kubenswrapper[4826]: I0129 08:22:44.031884 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7r5fl" Jan 29 08:22:44 crc kubenswrapper[4826]: I0129 08:22:44.036389 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-hwqkg" Jan 29 08:22:44 crc kubenswrapper[4826]: I0129 08:22:44.036898 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-k97cs" Jan 29 08:22:44 crc kubenswrapper[4826]: I0129 08:22:44.038028 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 29 08:22:44 crc kubenswrapper[4826]: I0129 08:22:44.047013 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-7r5fl"] Jan 29 08:22:44 crc kubenswrapper[4826]: I0129 08:22:44.047433 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/15a35e48-2620-4093-a986-0d7b1ecde3c5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56696b5f9-vkvkh\" (UID: \"15a35e48-2620-4093-a986-0d7b1ecde3c5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56696b5f9-vkvkh" Jan 29 08:22:44 crc kubenswrapper[4826]: I0129 08:22:44.047654 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a45663bb-b6c8-40cd-8f5c-9e3c08b2480a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56696b5f9-8r95k\" (UID: \"a45663bb-b6c8-40cd-8f5c-9e3c08b2480a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56696b5f9-8r95k" Jan 29 08:22:44 crc kubenswrapper[4826]: I0129 08:22:44.047846 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a45663bb-b6c8-40cd-8f5c-9e3c08b2480a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56696b5f9-8r95k\" (UID: \"a45663bb-b6c8-40cd-8f5c-9e3c08b2480a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56696b5f9-8r95k" Jan 29 08:22:44 crc kubenswrapper[4826]: I0129 08:22:44.047958 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/15a35e48-2620-4093-a986-0d7b1ecde3c5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56696b5f9-vkvkh\" (UID: \"15a35e48-2620-4093-a986-0d7b1ecde3c5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56696b5f9-vkvkh" Jan 29 08:22:44 crc kubenswrapper[4826]: I0129 08:22:44.052336 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/15a35e48-2620-4093-a986-0d7b1ecde3c5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56696b5f9-vkvkh\" (UID: \"15a35e48-2620-4093-a986-0d7b1ecde3c5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56696b5f9-vkvkh" Jan 29 08:22:44 crc kubenswrapper[4826]: I0129 08:22:44.054693 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a45663bb-b6c8-40cd-8f5c-9e3c08b2480a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56696b5f9-8r95k\" (UID: \"a45663bb-b6c8-40cd-8f5c-9e3c08b2480a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56696b5f9-8r95k" Jan 29 08:22:44 crc kubenswrapper[4826]: I0129 08:22:44.056441 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a45663bb-b6c8-40cd-8f5c-9e3c08b2480a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56696b5f9-8r95k\" (UID: \"a45663bb-b6c8-40cd-8f5c-9e3c08b2480a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56696b5f9-8r95k" Jan 29 08:22:44 crc kubenswrapper[4826]: I0129 08:22:44.070147 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/15a35e48-2620-4093-a986-0d7b1ecde3c5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56696b5f9-vkvkh\" (UID: \"15a35e48-2620-4093-a986-0d7b1ecde3c5\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-56696b5f9-vkvkh" Jan 29 08:22:44 crc kubenswrapper[4826]: I0129 08:22:44.146141 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56696b5f9-vkvkh" Jan 29 08:22:44 crc kubenswrapper[4826]: I0129 08:22:44.149462 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/04f8a571-0326-413e-92ca-d9d706b91187-observability-operator-tls\") pod \"observability-operator-59bdc8b94-7r5fl\" (UID: \"04f8a571-0326-413e-92ca-d9d706b91187\") " pod="openshift-operators/observability-operator-59bdc8b94-7r5fl" Jan 29 08:22:44 crc kubenswrapper[4826]: I0129 08:22:44.149555 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb4gv\" (UniqueName: \"kubernetes.io/projected/04f8a571-0326-413e-92ca-d9d706b91187-kube-api-access-jb4gv\") pod \"observability-operator-59bdc8b94-7r5fl\" (UID: \"04f8a571-0326-413e-92ca-d9d706b91187\") " pod="openshift-operators/observability-operator-59bdc8b94-7r5fl" Jan 29 08:22:44 crc kubenswrapper[4826]: I0129 08:22:44.180774 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56696b5f9-8r95k" Jan 29 08:22:44 crc kubenswrapper[4826]: I0129 08:22:44.248374 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-snp7s"] Jan 29 08:22:44 crc kubenswrapper[4826]: I0129 08:22:44.251177 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-snp7s" Jan 29 08:22:44 crc kubenswrapper[4826]: I0129 08:22:44.252915 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb4gv\" (UniqueName: \"kubernetes.io/projected/04f8a571-0326-413e-92ca-d9d706b91187-kube-api-access-jb4gv\") pod \"observability-operator-59bdc8b94-7r5fl\" (UID: \"04f8a571-0326-413e-92ca-d9d706b91187\") " pod="openshift-operators/observability-operator-59bdc8b94-7r5fl" Jan 29 08:22:44 crc kubenswrapper[4826]: I0129 08:22:44.253039 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/04f8a571-0326-413e-92ca-d9d706b91187-observability-operator-tls\") pod \"observability-operator-59bdc8b94-7r5fl\" (UID: \"04f8a571-0326-413e-92ca-d9d706b91187\") " pod="openshift-operators/observability-operator-59bdc8b94-7r5fl" Jan 29 08:22:44 crc kubenswrapper[4826]: I0129 08:22:44.258868 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/04f8a571-0326-413e-92ca-d9d706b91187-observability-operator-tls\") pod \"observability-operator-59bdc8b94-7r5fl\" (UID: \"04f8a571-0326-413e-92ca-d9d706b91187\") " pod="openshift-operators/observability-operator-59bdc8b94-7r5fl" Jan 29 08:22:44 crc kubenswrapper[4826]: I0129 08:22:44.259108 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-2rgbk" Jan 29 08:22:44 crc kubenswrapper[4826]: I0129 08:22:44.268168 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-snp7s"] Jan 29 08:22:44 crc kubenswrapper[4826]: I0129 08:22:44.296996 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb4gv\" (UniqueName: 
\"kubernetes.io/projected/04f8a571-0326-413e-92ca-d9d706b91187-kube-api-access-jb4gv\") pod \"observability-operator-59bdc8b94-7r5fl\" (UID: \"04f8a571-0326-413e-92ca-d9d706b91187\") " pod="openshift-operators/observability-operator-59bdc8b94-7r5fl" Jan 29 08:22:44 crc kubenswrapper[4826]: I0129 08:22:44.354355 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-489pp\" (UniqueName: \"kubernetes.io/projected/c4081b90-9d2f-4aa8-b1f9-f16819f5f36b-kube-api-access-489pp\") pod \"perses-operator-5bf474d74f-snp7s\" (UID: \"c4081b90-9d2f-4aa8-b1f9-f16819f5f36b\") " pod="openshift-operators/perses-operator-5bf474d74f-snp7s" Jan 29 08:22:44 crc kubenswrapper[4826]: I0129 08:22:44.354418 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c4081b90-9d2f-4aa8-b1f9-f16819f5f36b-openshift-service-ca\") pod \"perses-operator-5bf474d74f-snp7s\" (UID: \"c4081b90-9d2f-4aa8-b1f9-f16819f5f36b\") " pod="openshift-operators/perses-operator-5bf474d74f-snp7s" Jan 29 08:22:44 crc kubenswrapper[4826]: I0129 08:22:44.444476 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7r5fl" Jan 29 08:22:44 crc kubenswrapper[4826]: I0129 08:22:44.457730 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-489pp\" (UniqueName: \"kubernetes.io/projected/c4081b90-9d2f-4aa8-b1f9-f16819f5f36b-kube-api-access-489pp\") pod \"perses-operator-5bf474d74f-snp7s\" (UID: \"c4081b90-9d2f-4aa8-b1f9-f16819f5f36b\") " pod="openshift-operators/perses-operator-5bf474d74f-snp7s" Jan 29 08:22:44 crc kubenswrapper[4826]: I0129 08:22:44.457776 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c4081b90-9d2f-4aa8-b1f9-f16819f5f36b-openshift-service-ca\") pod \"perses-operator-5bf474d74f-snp7s\" (UID: \"c4081b90-9d2f-4aa8-b1f9-f16819f5f36b\") " pod="openshift-operators/perses-operator-5bf474d74f-snp7s" Jan 29 08:22:44 crc kubenswrapper[4826]: I0129 08:22:44.461220 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c4081b90-9d2f-4aa8-b1f9-f16819f5f36b-openshift-service-ca\") pod \"perses-operator-5bf474d74f-snp7s\" (UID: \"c4081b90-9d2f-4aa8-b1f9-f16819f5f36b\") " pod="openshift-operators/perses-operator-5bf474d74f-snp7s" Jan 29 08:22:44 crc kubenswrapper[4826]: I0129 08:22:44.479586 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-489pp\" (UniqueName: \"kubernetes.io/projected/c4081b90-9d2f-4aa8-b1f9-f16819f5f36b-kube-api-access-489pp\") pod \"perses-operator-5bf474d74f-snp7s\" (UID: \"c4081b90-9d2f-4aa8-b1f9-f16819f5f36b\") " pod="openshift-operators/perses-operator-5bf474d74f-snp7s" Jan 29 08:22:44 crc kubenswrapper[4826]: I0129 08:22:44.630654 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-snp7s" Jan 29 08:22:44 crc kubenswrapper[4826]: I0129 08:22:44.959386 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f8xqm"] Jan 29 08:22:44 crc kubenswrapper[4826]: I0129 08:22:44.969800 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f8xqm" Jan 29 08:22:44 crc kubenswrapper[4826]: I0129 08:22:44.979279 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f8xqm"] Jan 29 08:22:44 crc kubenswrapper[4826]: I0129 08:22:44.992747 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56696b5f9-8r95k"] Jan 29 08:22:45 crc kubenswrapper[4826]: I0129 08:22:45.000619 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56696b5f9-vkvkh"] Jan 29 08:22:45 crc kubenswrapper[4826]: I0129 08:22:45.028581 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/929f70d3-5278-4653-9216-b0aa3a555b48-utilities\") pod \"certified-operators-f8xqm\" (UID: \"929f70d3-5278-4653-9216-b0aa3a555b48\") " pod="openshift-marketplace/certified-operators-f8xqm" Jan 29 08:22:45 crc kubenswrapper[4826]: I0129 08:22:45.028640 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z86n9\" (UniqueName: \"kubernetes.io/projected/929f70d3-5278-4653-9216-b0aa3a555b48-kube-api-access-z86n9\") pod \"certified-operators-f8xqm\" (UID: \"929f70d3-5278-4653-9216-b0aa3a555b48\") " pod="openshift-marketplace/certified-operators-f8xqm" Jan 29 08:22:45 crc kubenswrapper[4826]: I0129 08:22:45.028737 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/929f70d3-5278-4653-9216-b0aa3a555b48-catalog-content\") pod \"certified-operators-f8xqm\" (UID: \"929f70d3-5278-4653-9216-b0aa3a555b48\") " pod="openshift-marketplace/certified-operators-f8xqm" Jan 29 08:22:45 crc kubenswrapper[4826]: I0129 08:22:45.130755 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/929f70d3-5278-4653-9216-b0aa3a555b48-utilities\") pod \"certified-operators-f8xqm\" (UID: \"929f70d3-5278-4653-9216-b0aa3a555b48\") " pod="openshift-marketplace/certified-operators-f8xqm" Jan 29 08:22:45 crc kubenswrapper[4826]: I0129 08:22:45.130796 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z86n9\" (UniqueName: \"kubernetes.io/projected/929f70d3-5278-4653-9216-b0aa3a555b48-kube-api-access-z86n9\") pod \"certified-operators-f8xqm\" (UID: \"929f70d3-5278-4653-9216-b0aa3a555b48\") " pod="openshift-marketplace/certified-operators-f8xqm" Jan 29 08:22:45 crc kubenswrapper[4826]: I0129 08:22:45.130864 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/929f70d3-5278-4653-9216-b0aa3a555b48-catalog-content\") pod \"certified-operators-f8xqm\" (UID: \"929f70d3-5278-4653-9216-b0aa3a555b48\") " pod="openshift-marketplace/certified-operators-f8xqm" Jan 29 08:22:45 crc kubenswrapper[4826]: I0129 08:22:45.131510 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/929f70d3-5278-4653-9216-b0aa3a555b48-utilities\") pod \"certified-operators-f8xqm\" (UID: \"929f70d3-5278-4653-9216-b0aa3a555b48\") " pod="openshift-marketplace/certified-operators-f8xqm" Jan 29 08:22:45 crc kubenswrapper[4826]: I0129 08:22:45.131681 4826 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/929f70d3-5278-4653-9216-b0aa3a555b48-catalog-content\") pod \"certified-operators-f8xqm\" (UID: \"929f70d3-5278-4653-9216-b0aa3a555b48\") " pod="openshift-marketplace/certified-operators-f8xqm" Jan 29 08:22:45 crc kubenswrapper[4826]: I0129 08:22:45.148879 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z86n9\" (UniqueName: \"kubernetes.io/projected/929f70d3-5278-4653-9216-b0aa3a555b48-kube-api-access-z86n9\") pod \"certified-operators-f8xqm\" (UID: \"929f70d3-5278-4653-9216-b0aa3a555b48\") " pod="openshift-marketplace/certified-operators-f8xqm" Jan 29 08:22:45 crc kubenswrapper[4826]: W0129 08:22:45.225994 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fc63d8d_d2f6_48da_95e1_47ce4038127e.slice/crio-7f13d1e73eb8abfeef51e1f07977f106d4ce99d28ee7f7819aafa058dbb154b1 WatchSource:0}: Error finding container 7f13d1e73eb8abfeef51e1f07977f106d4ce99d28ee7f7819aafa058dbb154b1: Status 404 returned error can't find the container with id 7f13d1e73eb8abfeef51e1f07977f106d4ce99d28ee7f7819aafa058dbb154b1 Jan 29 08:22:45 crc kubenswrapper[4826]: I0129 08:22:45.226569 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-k97cs"] Jan 29 08:22:45 crc kubenswrapper[4826]: I0129 08:22:45.301538 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-7r5fl"] Jan 29 08:22:45 crc kubenswrapper[4826]: I0129 08:22:45.322717 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56696b5f9-8r95k" event={"ID":"a45663bb-b6c8-40cd-8f5c-9e3c08b2480a","Type":"ContainerStarted","Data":"08119d534c6cef3d910601a0ccd8a1c8a541b7375c8b6135f0c8133c0d64cd74"} Jan 29 08:22:45 crc kubenswrapper[4826]: 
I0129 08:22:45.335138 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56696b5f9-vkvkh" event={"ID":"15a35e48-2620-4093-a986-0d7b1ecde3c5","Type":"ContainerStarted","Data":"fd1f655184264586d136d6a63224795c6531c0e70f32edcf21d1475c92d6d256"} Jan 29 08:22:45 crc kubenswrapper[4826]: I0129 08:22:45.340546 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-k97cs" event={"ID":"5fc63d8d-d2f6-48da-95e1-47ce4038127e","Type":"ContainerStarted","Data":"7f13d1e73eb8abfeef51e1f07977f106d4ce99d28ee7f7819aafa058dbb154b1"} Jan 29 08:22:45 crc kubenswrapper[4826]: I0129 08:22:45.372091 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f8xqm" Jan 29 08:22:45 crc kubenswrapper[4826]: I0129 08:22:45.470006 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-snp7s"] Jan 29 08:22:45 crc kubenswrapper[4826]: W0129 08:22:45.484190 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4081b90_9d2f_4aa8_b1f9_f16819f5f36b.slice/crio-38850ca9a3c85bee3db49e2b449823ea4801b2df084c57a7c7539930810a124e WatchSource:0}: Error finding container 38850ca9a3c85bee3db49e2b449823ea4801b2df084c57a7c7539930810a124e: Status 404 returned error can't find the container with id 38850ca9a3c85bee3db49e2b449823ea4801b2df084c57a7c7539930810a124e Jan 29 08:22:45 crc kubenswrapper[4826]: I0129 08:22:45.972205 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f8xqm"] Jan 29 08:22:46 crc kubenswrapper[4826]: I0129 08:22:46.416905 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f8xqm" 
event={"ID":"929f70d3-5278-4653-9216-b0aa3a555b48","Type":"ContainerStarted","Data":"a307442daa1b61b7cbcfb793d9c627a7757c8edf956d4b10106ea95d56e51c5a"} Jan 29 08:22:46 crc kubenswrapper[4826]: I0129 08:22:46.462545 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-snp7s" event={"ID":"c4081b90-9d2f-4aa8-b1f9-f16819f5f36b","Type":"ContainerStarted","Data":"38850ca9a3c85bee3db49e2b449823ea4801b2df084c57a7c7539930810a124e"} Jan 29 08:22:46 crc kubenswrapper[4826]: I0129 08:22:46.473690 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-7r5fl" event={"ID":"04f8a571-0326-413e-92ca-d9d706b91187","Type":"ContainerStarted","Data":"a28a7ff323ff659096efdbed32a60f521940144ff68194365507c79bc7980de9"} Jan 29 08:22:47 crc kubenswrapper[4826]: I0129 08:22:47.484242 4826 generic.go:334] "Generic (PLEG): container finished" podID="929f70d3-5278-4653-9216-b0aa3a555b48" containerID="5d4ae163e5f9906b714234e819702c4ee5b811d28f61ca11f92e52370c56270d" exitCode=0 Jan 29 08:22:47 crc kubenswrapper[4826]: I0129 08:22:47.484289 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f8xqm" event={"ID":"929f70d3-5278-4653-9216-b0aa3a555b48","Type":"ContainerDied","Data":"5d4ae163e5f9906b714234e819702c4ee5b811d28f61ca11f92e52370c56270d"} Jan 29 08:22:48 crc kubenswrapper[4826]: I0129 08:22:48.518553 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f8xqm" event={"ID":"929f70d3-5278-4653-9216-b0aa3a555b48","Type":"ContainerStarted","Data":"b6c65828ab123eb28d60dc09c17f83a5986cf3802d00d671ab4aea7413b61039"} Jan 29 08:22:49 crc kubenswrapper[4826]: I0129 08:22:49.544480 4826 generic.go:334] "Generic (PLEG): container finished" podID="929f70d3-5278-4653-9216-b0aa3a555b48" containerID="b6c65828ab123eb28d60dc09c17f83a5986cf3802d00d671ab4aea7413b61039" exitCode=0 Jan 29 08:22:49 crc 
kubenswrapper[4826]: I0129 08:22:49.544813 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f8xqm" event={"ID":"929f70d3-5278-4653-9216-b0aa3a555b48","Type":"ContainerDied","Data":"b6c65828ab123eb28d60dc09c17f83a5986cf3802d00d671ab4aea7413b61039"} Jan 29 08:22:51 crc kubenswrapper[4826]: I0129 08:22:51.809407 4826 scope.go:117] "RemoveContainer" containerID="952b43d896912a3cda5e612a6c0f46d793202eff6352f27fa018ee360258c570" Jan 29 08:22:51 crc kubenswrapper[4826]: E0129 08:22:51.809978 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:22:58 crc kubenswrapper[4826]: I0129 08:22:58.642137 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-7r5fl" event={"ID":"04f8a571-0326-413e-92ca-d9d706b91187","Type":"ContainerStarted","Data":"fb321656db10ce07bc07899fcb541928e0ddb20f50b9177bc138889151a86d3f"} Jan 29 08:22:58 crc kubenswrapper[4826]: I0129 08:22:58.642737 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-7r5fl" Jan 29 08:22:58 crc kubenswrapper[4826]: I0129 08:22:58.643934 4826 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-7r5fl container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.1.120:8081/healthz\": dial tcp 10.217.1.120:8081: connect: connection refused" start-of-body= Jan 29 08:22:58 crc kubenswrapper[4826]: I0129 08:22:58.643987 4826 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operators/observability-operator-59bdc8b94-7r5fl" podUID="04f8a571-0326-413e-92ca-d9d706b91187" containerName="operator" probeResult="failure" output="Get \"http://10.217.1.120:8081/healthz\": dial tcp 10.217.1.120:8081: connect: connection refused" Jan 29 08:22:58 crc kubenswrapper[4826]: I0129 08:22:58.655012 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f8xqm" event={"ID":"929f70d3-5278-4653-9216-b0aa3a555b48","Type":"ContainerStarted","Data":"2e39193588093c38511439187105e2e9b02a2de15ca37f42e66dd467f2f29f45"} Jan 29 08:22:58 crc kubenswrapper[4826]: I0129 08:22:58.659133 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56696b5f9-8r95k" event={"ID":"a45663bb-b6c8-40cd-8f5c-9e3c08b2480a","Type":"ContainerStarted","Data":"eee02280e7987b77dbba1b46cbf4340800105ef28f2cbd6033be2f9d5462e076"} Jan 29 08:22:58 crc kubenswrapper[4826]: I0129 08:22:58.661410 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-snp7s" event={"ID":"c4081b90-9d2f-4aa8-b1f9-f16819f5f36b","Type":"ContainerStarted","Data":"56de2188677ca6dd9c97805d3be71750e74f9284cba67b3e664e3944fe8d7dc3"} Jan 29 08:22:58 crc kubenswrapper[4826]: I0129 08:22:58.661983 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-snp7s" Jan 29 08:22:58 crc kubenswrapper[4826]: I0129 08:22:58.665441 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56696b5f9-vkvkh" event={"ID":"15a35e48-2620-4093-a986-0d7b1ecde3c5","Type":"ContainerStarted","Data":"92bc1561f258a87cdcecbcaa288b2f5ec9795c841c993fe7bb7de708e4020a5c"} Jan 29 08:22:58 crc kubenswrapper[4826]: I0129 08:22:58.678820 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operators/observability-operator-59bdc8b94-7r5fl" podStartSLOduration=1.923457419 podStartE2EDuration="14.67880185s" podCreationTimestamp="2026-01-29 08:22:44 +0000 UTC" firstStartedPulling="2026-01-29 08:22:45.329502798 +0000 UTC m=+5949.191295867" lastFinishedPulling="2026-01-29 08:22:58.084847229 +0000 UTC m=+5961.946640298" observedRunningTime="2026-01-29 08:22:58.66967866 +0000 UTC m=+5962.531471749" watchObservedRunningTime="2026-01-29 08:22:58.67880185 +0000 UTC m=+5962.540594919" Jan 29 08:22:58 crc kubenswrapper[4826]: I0129 08:22:58.706837 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56696b5f9-vkvkh" podStartSLOduration=2.687273379 podStartE2EDuration="15.706815828s" podCreationTimestamp="2026-01-29 08:22:43 +0000 UTC" firstStartedPulling="2026-01-29 08:22:45.026014136 +0000 UTC m=+5948.887807205" lastFinishedPulling="2026-01-29 08:22:58.045556585 +0000 UTC m=+5961.907349654" observedRunningTime="2026-01-29 08:22:58.699740161 +0000 UTC m=+5962.561533230" watchObservedRunningTime="2026-01-29 08:22:58.706815828 +0000 UTC m=+5962.568608897" Jan 29 08:22:58 crc kubenswrapper[4826]: I0129 08:22:58.745705 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f8xqm" podStartSLOduration=4.150658516 podStartE2EDuration="14.745690171s" podCreationTimestamp="2026-01-29 08:22:44 +0000 UTC" firstStartedPulling="2026-01-29 08:22:47.48662071 +0000 UTC m=+5951.348413779" lastFinishedPulling="2026-01-29 08:22:58.081652365 +0000 UTC m=+5961.943445434" observedRunningTime="2026-01-29 08:22:58.74071985 +0000 UTC m=+5962.602512919" watchObservedRunningTime="2026-01-29 08:22:58.745690171 +0000 UTC m=+5962.607483240" Jan 29 08:22:58 crc kubenswrapper[4826]: I0129 08:22:58.826390 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operators/perses-operator-5bf474d74f-snp7s" podStartSLOduration=2.228197532 podStartE2EDuration="14.824591169s" podCreationTimestamp="2026-01-29 08:22:44 +0000 UTC" firstStartedPulling="2026-01-29 08:22:45.488421792 +0000 UTC m=+5949.350214861" lastFinishedPulling="2026-01-29 08:22:58.084815429 +0000 UTC m=+5961.946608498" observedRunningTime="2026-01-29 08:22:58.782124731 +0000 UTC m=+5962.643917810" watchObservedRunningTime="2026-01-29 08:22:58.824591169 +0000 UTC m=+5962.686384238" Jan 29 08:22:58 crc kubenswrapper[4826]: I0129 08:22:58.840509 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56696b5f9-8r95k" podStartSLOduration=2.761543285 podStartE2EDuration="15.840485958s" podCreationTimestamp="2026-01-29 08:22:43 +0000 UTC" firstStartedPulling="2026-01-29 08:22:44.97261579 +0000 UTC m=+5948.834408849" lastFinishedPulling="2026-01-29 08:22:58.051558463 +0000 UTC m=+5961.913351522" observedRunningTime="2026-01-29 08:22:58.820094111 +0000 UTC m=+5962.681887180" watchObservedRunningTime="2026-01-29 08:22:58.840485958 +0000 UTC m=+5962.702279027" Jan 29 08:22:59 crc kubenswrapper[4826]: I0129 08:22:59.675604 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-k97cs" event={"ID":"5fc63d8d-d2f6-48da-95e1-47ce4038127e","Type":"ContainerStarted","Data":"f90ab7d05108721f945b1a97b573cff189c20a6d57b9cdbcc00433a856f30d9e"} Jan 29 08:22:59 crc kubenswrapper[4826]: I0129 08:22:59.694488 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-7r5fl" Jan 29 08:22:59 crc kubenswrapper[4826]: I0129 08:22:59.697446 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-k97cs" podStartSLOduration=3.847535152 podStartE2EDuration="16.697423533s" 
podCreationTimestamp="2026-01-29 08:22:43 +0000 UTC" firstStartedPulling="2026-01-29 08:22:45.22856794 +0000 UTC m=+5949.090361009" lastFinishedPulling="2026-01-29 08:22:58.078456321 +0000 UTC m=+5961.940249390" observedRunningTime="2026-01-29 08:22:59.69426095 +0000 UTC m=+5963.556054019" watchObservedRunningTime="2026-01-29 08:22:59.697423533 +0000 UTC m=+5963.559216602" Jan 29 08:23:04 crc kubenswrapper[4826]: I0129 08:23:04.635801 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-snp7s" Jan 29 08:23:05 crc kubenswrapper[4826]: I0129 08:23:05.372334 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-f8xqm" Jan 29 08:23:05 crc kubenswrapper[4826]: I0129 08:23:05.372406 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f8xqm" Jan 29 08:23:05 crc kubenswrapper[4826]: I0129 08:23:05.485087 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f8xqm" Jan 29 08:23:05 crc kubenswrapper[4826]: I0129 08:23:05.820743 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f8xqm" Jan 29 08:23:05 crc kubenswrapper[4826]: I0129 08:23:05.866756 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f8xqm"] Jan 29 08:23:06 crc kubenswrapper[4826]: I0129 08:23:06.814695 4826 scope.go:117] "RemoveContainer" containerID="952b43d896912a3cda5e612a6c0f46d793202eff6352f27fa018ee360258c570" Jan 29 08:23:06 crc kubenswrapper[4826]: E0129 08:23:06.815197 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:23:07 crc kubenswrapper[4826]: I0129 08:23:07.113763 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 29 08:23:07 crc kubenswrapper[4826]: I0129 08:23:07.114001 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="dffeb5ae-2969-47db-841f-cfe964097f10" containerName="openstackclient" containerID="cri-o://d0aaa1bc60096e5255d8993f7e980e990f1ac40aa3b8edb801d49c1c630fdf08" gracePeriod=2 Jan 29 08:23:07 crc kubenswrapper[4826]: I0129 08:23:07.149440 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 29 08:23:07 crc kubenswrapper[4826]: I0129 08:23:07.206899 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 29 08:23:07 crc kubenswrapper[4826]: E0129 08:23:07.207384 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dffeb5ae-2969-47db-841f-cfe964097f10" containerName="openstackclient" Jan 29 08:23:07 crc kubenswrapper[4826]: I0129 08:23:07.207397 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="dffeb5ae-2969-47db-841f-cfe964097f10" containerName="openstackclient" Jan 29 08:23:07 crc kubenswrapper[4826]: I0129 08:23:07.207585 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="dffeb5ae-2969-47db-841f-cfe964097f10" containerName="openstackclient" Jan 29 08:23:07 crc kubenswrapper[4826]: I0129 08:23:07.208233 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 29 08:23:07 crc kubenswrapper[4826]: I0129 08:23:07.230630 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 29 08:23:07 crc kubenswrapper[4826]: I0129 08:23:07.241080 4826 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="dffeb5ae-2969-47db-841f-cfe964097f10" podUID="0a7f33ac-d626-4959-b993-480af4a5bb66" Jan 29 08:23:07 crc kubenswrapper[4826]: I0129 08:23:07.342044 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0a7f33ac-d626-4959-b993-480af4a5bb66-openstack-config-secret\") pod \"openstackclient\" (UID: \"0a7f33ac-d626-4959-b993-480af4a5bb66\") " pod="openstack/openstackclient" Jan 29 08:23:07 crc kubenswrapper[4826]: I0129 08:23:07.342096 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0a7f33ac-d626-4959-b993-480af4a5bb66-openstack-config\") pod \"openstackclient\" (UID: \"0a7f33ac-d626-4959-b993-480af4a5bb66\") " pod="openstack/openstackclient" Jan 29 08:23:07 crc kubenswrapper[4826]: I0129 08:23:07.342131 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a7f33ac-d626-4959-b993-480af4a5bb66-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0a7f33ac-d626-4959-b993-480af4a5bb66\") " pod="openstack/openstackclient" Jan 29 08:23:07 crc kubenswrapper[4826]: I0129 08:23:07.342148 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4l6k\" (UniqueName: \"kubernetes.io/projected/0a7f33ac-d626-4959-b993-480af4a5bb66-kube-api-access-l4l6k\") pod \"openstackclient\" (UID: 
\"0a7f33ac-d626-4959-b993-480af4a5bb66\") " pod="openstack/openstackclient" Jan 29 08:23:07 crc kubenswrapper[4826]: I0129 08:23:07.356626 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 08:23:07 crc kubenswrapper[4826]: I0129 08:23:07.361627 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 08:23:07 crc kubenswrapper[4826]: I0129 08:23:07.363896 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-22dpg" Jan 29 08:23:07 crc kubenswrapper[4826]: I0129 08:23:07.370658 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 08:23:07 crc kubenswrapper[4826]: I0129 08:23:07.444321 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0a7f33ac-d626-4959-b993-480af4a5bb66-openstack-config-secret\") pod \"openstackclient\" (UID: \"0a7f33ac-d626-4959-b993-480af4a5bb66\") " pod="openstack/openstackclient" Jan 29 08:23:07 crc kubenswrapper[4826]: I0129 08:23:07.444380 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0a7f33ac-d626-4959-b993-480af4a5bb66-openstack-config\") pod \"openstackclient\" (UID: \"0a7f33ac-d626-4959-b993-480af4a5bb66\") " pod="openstack/openstackclient" Jan 29 08:23:07 crc kubenswrapper[4826]: I0129 08:23:07.444405 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a7f33ac-d626-4959-b993-480af4a5bb66-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0a7f33ac-d626-4959-b993-480af4a5bb66\") " pod="openstack/openstackclient" Jan 29 08:23:07 crc kubenswrapper[4826]: I0129 08:23:07.444420 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-l4l6k\" (UniqueName: \"kubernetes.io/projected/0a7f33ac-d626-4959-b993-480af4a5bb66-kube-api-access-l4l6k\") pod \"openstackclient\" (UID: \"0a7f33ac-d626-4959-b993-480af4a5bb66\") " pod="openstack/openstackclient" Jan 29 08:23:07 crc kubenswrapper[4826]: I0129 08:23:07.445517 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0a7f33ac-d626-4959-b993-480af4a5bb66-openstack-config\") pod \"openstackclient\" (UID: \"0a7f33ac-d626-4959-b993-480af4a5bb66\") " pod="openstack/openstackclient" Jan 29 08:23:07 crc kubenswrapper[4826]: I0129 08:23:07.458989 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0a7f33ac-d626-4959-b993-480af4a5bb66-openstack-config-secret\") pod \"openstackclient\" (UID: \"0a7f33ac-d626-4959-b993-480af4a5bb66\") " pod="openstack/openstackclient" Jan 29 08:23:07 crc kubenswrapper[4826]: I0129 08:23:07.463765 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a7f33ac-d626-4959-b993-480af4a5bb66-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0a7f33ac-d626-4959-b993-480af4a5bb66\") " pod="openstack/openstackclient" Jan 29 08:23:07 crc kubenswrapper[4826]: I0129 08:23:07.473605 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4l6k\" (UniqueName: \"kubernetes.io/projected/0a7f33ac-d626-4959-b993-480af4a5bb66-kube-api-access-l4l6k\") pod \"openstackclient\" (UID: \"0a7f33ac-d626-4959-b993-480af4a5bb66\") " pod="openstack/openstackclient" Jan 29 08:23:07 crc kubenswrapper[4826]: I0129 08:23:07.546190 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5flsw\" (UniqueName: \"kubernetes.io/projected/856cc4ca-f8ee-4dd3-a31f-626eac71a6a1-kube-api-access-5flsw\") pod 
\"kube-state-metrics-0\" (UID: \"856cc4ca-f8ee-4dd3-a31f-626eac71a6a1\") " pod="openstack/kube-state-metrics-0" Jan 29 08:23:07 crc kubenswrapper[4826]: I0129 08:23:07.551454 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 29 08:23:07 crc kubenswrapper[4826]: I0129 08:23:07.649097 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5flsw\" (UniqueName: \"kubernetes.io/projected/856cc4ca-f8ee-4dd3-a31f-626eac71a6a1-kube-api-access-5flsw\") pod \"kube-state-metrics-0\" (UID: \"856cc4ca-f8ee-4dd3-a31f-626eac71a6a1\") " pod="openstack/kube-state-metrics-0" Jan 29 08:23:07 crc kubenswrapper[4826]: I0129 08:23:07.678849 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5flsw\" (UniqueName: \"kubernetes.io/projected/856cc4ca-f8ee-4dd3-a31f-626eac71a6a1-kube-api-access-5flsw\") pod \"kube-state-metrics-0\" (UID: \"856cc4ca-f8ee-4dd3-a31f-626eac71a6a1\") " pod="openstack/kube-state-metrics-0" Jan 29 08:23:07 crc kubenswrapper[4826]: I0129 08:23:07.679541 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 08:23:07 crc kubenswrapper[4826]: I0129 08:23:07.846695 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f8xqm" podUID="929f70d3-5278-4653-9216-b0aa3a555b48" containerName="registry-server" containerID="cri-o://2e39193588093c38511439187105e2e9b02a2de15ca37f42e66dd467f2f29f45" gracePeriod=2 Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.202622 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.205461 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.212343 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.212612 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-k6zgl" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.213104 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.213280 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.219773 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.237127 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.376388 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/74d65e3a-57e3-4134-a934-f41a91ddaf16-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"74d65e3a-57e3-4134-a934-f41a91ddaf16\") " pod="openstack/alertmanager-metric-storage-0" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.379858 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/74d65e3a-57e3-4134-a934-f41a91ddaf16-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"74d65e3a-57e3-4134-a934-f41a91ddaf16\") " pod="openstack/alertmanager-metric-storage-0" Jan 29 08:23:08 crc 
kubenswrapper[4826]: I0129 08:23:08.379944 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgjfv\" (UniqueName: \"kubernetes.io/projected/74d65e3a-57e3-4134-a934-f41a91ddaf16-kube-api-access-wgjfv\") pod \"alertmanager-metric-storage-0\" (UID: \"74d65e3a-57e3-4134-a934-f41a91ddaf16\") " pod="openstack/alertmanager-metric-storage-0" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.379988 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/74d65e3a-57e3-4134-a934-f41a91ddaf16-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"74d65e3a-57e3-4134-a934-f41a91ddaf16\") " pod="openstack/alertmanager-metric-storage-0" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.380035 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/74d65e3a-57e3-4134-a934-f41a91ddaf16-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"74d65e3a-57e3-4134-a934-f41a91ddaf16\") " pod="openstack/alertmanager-metric-storage-0" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.380119 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/74d65e3a-57e3-4134-a934-f41a91ddaf16-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"74d65e3a-57e3-4134-a934-f41a91ddaf16\") " pod="openstack/alertmanager-metric-storage-0" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.380140 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/74d65e3a-57e3-4134-a934-f41a91ddaf16-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: 
\"74d65e3a-57e3-4134-a934-f41a91ddaf16\") " pod="openstack/alertmanager-metric-storage-0" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.486977 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/74d65e3a-57e3-4134-a934-f41a91ddaf16-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"74d65e3a-57e3-4134-a934-f41a91ddaf16\") " pod="openstack/alertmanager-metric-storage-0" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.487089 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/74d65e3a-57e3-4134-a934-f41a91ddaf16-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"74d65e3a-57e3-4134-a934-f41a91ddaf16\") " pod="openstack/alertmanager-metric-storage-0" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.487122 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/74d65e3a-57e3-4134-a934-f41a91ddaf16-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"74d65e3a-57e3-4134-a934-f41a91ddaf16\") " pod="openstack/alertmanager-metric-storage-0" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.487215 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/74d65e3a-57e3-4134-a934-f41a91ddaf16-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"74d65e3a-57e3-4134-a934-f41a91ddaf16\") " pod="openstack/alertmanager-metric-storage-0" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.487276 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/74d65e3a-57e3-4134-a934-f41a91ddaf16-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"74d65e3a-57e3-4134-a934-f41a91ddaf16\") " 
pod="openstack/alertmanager-metric-storage-0" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.487404 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgjfv\" (UniqueName: \"kubernetes.io/projected/74d65e3a-57e3-4134-a934-f41a91ddaf16-kube-api-access-wgjfv\") pod \"alertmanager-metric-storage-0\" (UID: \"74d65e3a-57e3-4134-a934-f41a91ddaf16\") " pod="openstack/alertmanager-metric-storage-0" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.487452 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/74d65e3a-57e3-4134-a934-f41a91ddaf16-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"74d65e3a-57e3-4134-a934-f41a91ddaf16\") " pod="openstack/alertmanager-metric-storage-0" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.515076 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/74d65e3a-57e3-4134-a934-f41a91ddaf16-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"74d65e3a-57e3-4134-a934-f41a91ddaf16\") " pod="openstack/alertmanager-metric-storage-0" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.516075 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/74d65e3a-57e3-4134-a934-f41a91ddaf16-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"74d65e3a-57e3-4134-a934-f41a91ddaf16\") " pod="openstack/alertmanager-metric-storage-0" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.531766 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/74d65e3a-57e3-4134-a934-f41a91ddaf16-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"74d65e3a-57e3-4134-a934-f41a91ddaf16\") " 
pod="openstack/alertmanager-metric-storage-0" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.551977 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/74d65e3a-57e3-4134-a934-f41a91ddaf16-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"74d65e3a-57e3-4134-a934-f41a91ddaf16\") " pod="openstack/alertmanager-metric-storage-0" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.552186 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/74d65e3a-57e3-4134-a934-f41a91ddaf16-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"74d65e3a-57e3-4134-a934-f41a91ddaf16\") " pod="openstack/alertmanager-metric-storage-0" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.554560 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/74d65e3a-57e3-4134-a934-f41a91ddaf16-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"74d65e3a-57e3-4134-a934-f41a91ddaf16\") " pod="openstack/alertmanager-metric-storage-0" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.566132 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgjfv\" (UniqueName: \"kubernetes.io/projected/74d65e3a-57e3-4134-a934-f41a91ddaf16-kube-api-access-wgjfv\") pod \"alertmanager-metric-storage-0\" (UID: \"74d65e3a-57e3-4134-a934-f41a91ddaf16\") " pod="openstack/alertmanager-metric-storage-0" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.656649 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.795361 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.797617 4826 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.801978 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.813669 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.813702 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.813841 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.813672 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.813979 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.814134 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.814201 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-m6prc" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.829168 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.842366 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.912401 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/11d855ca-3051-4d20-9f8d-d37ed9e4625b-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.912484 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/11d855ca-3051-4d20-9f8d-d37ed9e4625b-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.912515 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/11d855ca-3051-4d20-9f8d-d37ed9e4625b-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.912549 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phpdl\" (UniqueName: \"kubernetes.io/projected/11d855ca-3051-4d20-9f8d-d37ed9e4625b-kube-api-access-phpdl\") pod \"prometheus-metric-storage-0\" (UID: \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:08 crc kubenswrapper[4826]: 
I0129 08:23:08.912591 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/11d855ca-3051-4d20-9f8d-d37ed9e4625b-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.912611 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/11d855ca-3051-4d20-9f8d-d37ed9e4625b-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.912629 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/11d855ca-3051-4d20-9f8d-d37ed9e4625b-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.912647 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/11d855ca-3051-4d20-9f8d-d37ed9e4625b-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.912670 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2a194a78-1c85-4c21-8388-f45fb09b16b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a194a78-1c85-4c21-8388-f45fb09b16b9\") pod 
\"prometheus-metric-storage-0\" (UID: \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.912732 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/11d855ca-3051-4d20-9f8d-d37ed9e4625b-config\") pod \"prometheus-metric-storage-0\" (UID: \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.912986 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.954693 4826 generic.go:334] "Generic (PLEG): container finished" podID="929f70d3-5278-4653-9216-b0aa3a555b48" containerID="2e39193588093c38511439187105e2e9b02a2de15ca37f42e66dd467f2f29f45" exitCode=0 Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.954795 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f8xqm" event={"ID":"929f70d3-5278-4653-9216-b0aa3a555b48","Type":"ContainerDied","Data":"2e39193588093c38511439187105e2e9b02a2de15ca37f42e66dd467f2f29f45"} Jan 29 08:23:08 crc kubenswrapper[4826]: I0129 08:23:08.985233 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"856cc4ca-f8ee-4dd3-a31f-626eac71a6a1","Type":"ContainerStarted","Data":"faa6322fe29436380ebe79ef16cbf8d81140a64c7eadfe9ec2c5297d0d7a0044"} Jan 29 08:23:09 crc kubenswrapper[4826]: I0129 08:23:09.014088 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/11d855ca-3051-4d20-9f8d-d37ed9e4625b-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:09 
crc kubenswrapper[4826]: I0129 08:23:09.014146 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/11d855ca-3051-4d20-9f8d-d37ed9e4625b-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:09 crc kubenswrapper[4826]: I0129 08:23:09.014163 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/11d855ca-3051-4d20-9f8d-d37ed9e4625b-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:09 crc kubenswrapper[4826]: I0129 08:23:09.014198 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2a194a78-1c85-4c21-8388-f45fb09b16b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a194a78-1c85-4c21-8388-f45fb09b16b9\") pod \"prometheus-metric-storage-0\" (UID: \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:09 crc kubenswrapper[4826]: I0129 08:23:09.014273 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/11d855ca-3051-4d20-9f8d-d37ed9e4625b-config\") pod \"prometheus-metric-storage-0\" (UID: \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:09 crc kubenswrapper[4826]: I0129 08:23:09.014326 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/11d855ca-3051-4d20-9f8d-d37ed9e4625b-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:09 crc 
kubenswrapper[4826]: I0129 08:23:09.014373 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/11d855ca-3051-4d20-9f8d-d37ed9e4625b-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:09 crc kubenswrapper[4826]: I0129 08:23:09.014404 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/11d855ca-3051-4d20-9f8d-d37ed9e4625b-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:09 crc kubenswrapper[4826]: I0129 08:23:09.014434 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phpdl\" (UniqueName: \"kubernetes.io/projected/11d855ca-3051-4d20-9f8d-d37ed9e4625b-kube-api-access-phpdl\") pod \"prometheus-metric-storage-0\" (UID: \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:09 crc kubenswrapper[4826]: I0129 08:23:09.014475 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/11d855ca-3051-4d20-9f8d-d37ed9e4625b-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:09 crc kubenswrapper[4826]: I0129 08:23:09.015736 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/11d855ca-3051-4d20-9f8d-d37ed9e4625b-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: 
\"11d855ca-3051-4d20-9f8d-d37ed9e4625b\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:09 crc kubenswrapper[4826]: I0129 08:23:09.016973 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/11d855ca-3051-4d20-9f8d-d37ed9e4625b-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:09 crc kubenswrapper[4826]: I0129 08:23:09.019220 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/11d855ca-3051-4d20-9f8d-d37ed9e4625b-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:09 crc kubenswrapper[4826]: I0129 08:23:09.022151 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/11d855ca-3051-4d20-9f8d-d37ed9e4625b-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:09 crc kubenswrapper[4826]: I0129 08:23:09.027231 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/11d855ca-3051-4d20-9f8d-d37ed9e4625b-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:09 crc kubenswrapper[4826]: I0129 08:23:09.033839 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/11d855ca-3051-4d20-9f8d-d37ed9e4625b-config\") pod \"prometheus-metric-storage-0\" (UID: \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\") " 
pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:09 crc kubenswrapper[4826]: I0129 08:23:09.037504 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/11d855ca-3051-4d20-9f8d-d37ed9e4625b-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:09 crc kubenswrapper[4826]: I0129 08:23:09.042914 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/11d855ca-3051-4d20-9f8d-d37ed9e4625b-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:09 crc kubenswrapper[4826]: I0129 08:23:09.099538 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phpdl\" (UniqueName: \"kubernetes.io/projected/11d855ca-3051-4d20-9f8d-d37ed9e4625b-kube-api-access-phpdl\") pod \"prometheus-metric-storage-0\" (UID: \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:09 crc kubenswrapper[4826]: I0129 08:23:09.182497 4826 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 29 08:23:09 crc kubenswrapper[4826]: I0129 08:23:09.182557 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2a194a78-1c85-4c21-8388-f45fb09b16b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a194a78-1c85-4c21-8388-f45fb09b16b9\") pod \"prometheus-metric-storage-0\" (UID: \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2f0922ad209e8831e6be213ef5b49bdb241892dd0a1d9521e73a47a97d5044c0/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:09 crc kubenswrapper[4826]: I0129 08:23:09.230814 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f8xqm" Jan 29 08:23:09 crc kubenswrapper[4826]: I0129 08:23:09.338001 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z86n9\" (UniqueName: \"kubernetes.io/projected/929f70d3-5278-4653-9216-b0aa3a555b48-kube-api-access-z86n9\") pod \"929f70d3-5278-4653-9216-b0aa3a555b48\" (UID: \"929f70d3-5278-4653-9216-b0aa3a555b48\") " Jan 29 08:23:09 crc kubenswrapper[4826]: I0129 08:23:09.338086 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/929f70d3-5278-4653-9216-b0aa3a555b48-catalog-content\") pod \"929f70d3-5278-4653-9216-b0aa3a555b48\" (UID: \"929f70d3-5278-4653-9216-b0aa3a555b48\") " Jan 29 08:23:09 crc kubenswrapper[4826]: I0129 08:23:09.338130 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/929f70d3-5278-4653-9216-b0aa3a555b48-utilities\") pod \"929f70d3-5278-4653-9216-b0aa3a555b48\" (UID: \"929f70d3-5278-4653-9216-b0aa3a555b48\") " Jan 29 08:23:09 crc kubenswrapper[4826]: I0129 08:23:09.353819 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/projected/929f70d3-5278-4653-9216-b0aa3a555b48-kube-api-access-z86n9" (OuterVolumeSpecName: "kube-api-access-z86n9") pod "929f70d3-5278-4653-9216-b0aa3a555b48" (UID: "929f70d3-5278-4653-9216-b0aa3a555b48"). InnerVolumeSpecName "kube-api-access-z86n9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:23:09 crc kubenswrapper[4826]: I0129 08:23:09.360619 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/929f70d3-5278-4653-9216-b0aa3a555b48-utilities" (OuterVolumeSpecName: "utilities") pod "929f70d3-5278-4653-9216-b0aa3a555b48" (UID: "929f70d3-5278-4653-9216-b0aa3a555b48"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:23:09 crc kubenswrapper[4826]: I0129 08:23:09.440599 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/929f70d3-5278-4653-9216-b0aa3a555b48-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:23:09 crc kubenswrapper[4826]: I0129 08:23:09.440847 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z86n9\" (UniqueName: \"kubernetes.io/projected/929f70d3-5278-4653-9216-b0aa3a555b48-kube-api-access-z86n9\") on node \"crc\" DevicePath \"\"" Jan 29 08:23:09 crc kubenswrapper[4826]: I0129 08:23:09.488458 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/929f70d3-5278-4653-9216-b0aa3a555b48-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "929f70d3-5278-4653-9216-b0aa3a555b48" (UID: "929f70d3-5278-4653-9216-b0aa3a555b48"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:23:09 crc kubenswrapper[4826]: I0129 08:23:09.542832 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/929f70d3-5278-4653-9216-b0aa3a555b48-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:23:09 crc kubenswrapper[4826]: I0129 08:23:09.582849 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2a194a78-1c85-4c21-8388-f45fb09b16b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a194a78-1c85-4c21-8388-f45fb09b16b9\") pod \"prometheus-metric-storage-0\" (UID: \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:09 crc kubenswrapper[4826]: I0129 08:23:09.593071 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 29 08:23:09 crc kubenswrapper[4826]: I0129 08:23:09.844046 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:09 crc kubenswrapper[4826]: I0129 08:23:09.849071 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 29 08:23:09 crc kubenswrapper[4826]: I0129 08:23:09.956653 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dffeb5ae-2969-47db-841f-cfe964097f10-openstack-config-secret\") pod \"dffeb5ae-2969-47db-841f-cfe964097f10\" (UID: \"dffeb5ae-2969-47db-841f-cfe964097f10\") " Jan 29 08:23:09 crc kubenswrapper[4826]: I0129 08:23:09.956802 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpvvd\" (UniqueName: \"kubernetes.io/projected/dffeb5ae-2969-47db-841f-cfe964097f10-kube-api-access-xpvvd\") pod \"dffeb5ae-2969-47db-841f-cfe964097f10\" (UID: \"dffeb5ae-2969-47db-841f-cfe964097f10\") " Jan 29 08:23:09 crc kubenswrapper[4826]: I0129 08:23:09.956821 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dffeb5ae-2969-47db-841f-cfe964097f10-combined-ca-bundle\") pod \"dffeb5ae-2969-47db-841f-cfe964097f10\" (UID: \"dffeb5ae-2969-47db-841f-cfe964097f10\") " Jan 29 08:23:09 crc kubenswrapper[4826]: I0129 08:23:09.956952 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/dffeb5ae-2969-47db-841f-cfe964097f10-openstack-config\") pod \"dffeb5ae-2969-47db-841f-cfe964097f10\" (UID: \"dffeb5ae-2969-47db-841f-cfe964097f10\") " Jan 29 08:23:09 crc kubenswrapper[4826]: I0129 08:23:09.981940 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dffeb5ae-2969-47db-841f-cfe964097f10-kube-api-access-xpvvd" (OuterVolumeSpecName: "kube-api-access-xpvvd") pod "dffeb5ae-2969-47db-841f-cfe964097f10" (UID: "dffeb5ae-2969-47db-841f-cfe964097f10"). InnerVolumeSpecName "kube-api-access-xpvvd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:23:09 crc kubenswrapper[4826]: I0129 08:23:09.985892 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dffeb5ae-2969-47db-841f-cfe964097f10-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "dffeb5ae-2969-47db-841f-cfe964097f10" (UID: "dffeb5ae-2969-47db-841f-cfe964097f10"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:23:10 crc kubenswrapper[4826]: I0129 08:23:10.010886 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dffeb5ae-2969-47db-841f-cfe964097f10-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dffeb5ae-2969-47db-841f-cfe964097f10" (UID: "dffeb5ae-2969-47db-841f-cfe964097f10"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:23:10 crc kubenswrapper[4826]: I0129 08:23:10.011163 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f8xqm" event={"ID":"929f70d3-5278-4653-9216-b0aa3a555b48","Type":"ContainerDied","Data":"a307442daa1b61b7cbcfb793d9c627a7757c8edf956d4b10106ea95d56e51c5a"} Jan 29 08:23:10 crc kubenswrapper[4826]: I0129 08:23:10.011211 4826 scope.go:117] "RemoveContainer" containerID="2e39193588093c38511439187105e2e9b02a2de15ca37f42e66dd467f2f29f45" Jan 29 08:23:10 crc kubenswrapper[4826]: I0129 08:23:10.011545 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f8xqm" Jan 29 08:23:10 crc kubenswrapper[4826]: I0129 08:23:10.022345 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"74d65e3a-57e3-4134-a934-f41a91ddaf16","Type":"ContainerStarted","Data":"7c603481a431b018507372c0d8807435c6f8e398fa1f096d35bbd31b18e9b813"} Jan 29 08:23:10 crc kubenswrapper[4826]: I0129 08:23:10.024616 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"0a7f33ac-d626-4959-b993-480af4a5bb66","Type":"ContainerStarted","Data":"b08db1e3e8b0e382d1b0ca1b789bfe72a65e708846f4523cfb5e9656740219e9"} Jan 29 08:23:10 crc kubenswrapper[4826]: I0129 08:23:10.024636 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"0a7f33ac-d626-4959-b993-480af4a5bb66","Type":"ContainerStarted","Data":"89591c27cf76d57353ed8f088df118645b08ac32394f6ee91e4a4e0721871c13"} Jan 29 08:23:10 crc kubenswrapper[4826]: I0129 08:23:10.032968 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 29 08:23:10 crc kubenswrapper[4826]: I0129 08:23:10.046523 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.046508702 podStartE2EDuration="3.046508702s" podCreationTimestamp="2026-01-29 08:23:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:23:10.045062414 +0000 UTC m=+5973.906855483" watchObservedRunningTime="2026-01-29 08:23:10.046508702 +0000 UTC m=+5973.908301771" Jan 29 08:23:10 crc kubenswrapper[4826]: I0129 08:23:10.055968 4826 generic.go:334] "Generic (PLEG): container finished" podID="dffeb5ae-2969-47db-841f-cfe964097f10" containerID="d0aaa1bc60096e5255d8993f7e980e990f1ac40aa3b8edb801d49c1c630fdf08" exitCode=137 Jan 29 08:23:10 
crc kubenswrapper[4826]: I0129 08:23:10.056132 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 29 08:23:10 crc kubenswrapper[4826]: I0129 08:23:10.059931 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpvvd\" (UniqueName: \"kubernetes.io/projected/dffeb5ae-2969-47db-841f-cfe964097f10-kube-api-access-xpvvd\") on node \"crc\" DevicePath \"\"" Jan 29 08:23:10 crc kubenswrapper[4826]: I0129 08:23:10.059959 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dffeb5ae-2969-47db-841f-cfe964097f10-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:23:10 crc kubenswrapper[4826]: I0129 08:23:10.059969 4826 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/dffeb5ae-2969-47db-841f-cfe964097f10-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:23:10 crc kubenswrapper[4826]: I0129 08:23:10.061338 4826 scope.go:117] "RemoveContainer" containerID="b6c65828ab123eb28d60dc09c17f83a5986cf3802d00d671ab4aea7413b61039" Jan 29 08:23:10 crc kubenswrapper[4826]: I0129 08:23:10.063328 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dffeb5ae-2969-47db-841f-cfe964097f10-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "dffeb5ae-2969-47db-841f-cfe964097f10" (UID: "dffeb5ae-2969-47db-841f-cfe964097f10"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:23:10 crc kubenswrapper[4826]: I0129 08:23:10.085131 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.3770590240000002 podStartE2EDuration="3.084917724s" podCreationTimestamp="2026-01-29 08:23:07 +0000 UTC" firstStartedPulling="2026-01-29 08:23:08.669795239 +0000 UTC m=+5972.531588308" lastFinishedPulling="2026-01-29 08:23:09.377653939 +0000 UTC m=+5973.239447008" observedRunningTime="2026-01-29 08:23:10.066905159 +0000 UTC m=+5973.928698228" watchObservedRunningTime="2026-01-29 08:23:10.084917724 +0000 UTC m=+5973.946710793"
Jan 29 08:23:10 crc kubenswrapper[4826]: I0129 08:23:10.103507 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f8xqm"]
Jan 29 08:23:10 crc kubenswrapper[4826]: I0129 08:23:10.109736 4826 scope.go:117] "RemoveContainer" containerID="5d4ae163e5f9906b714234e819702c4ee5b811d28f61ca11f92e52370c56270d"
Jan 29 08:23:10 crc kubenswrapper[4826]: I0129 08:23:10.114908 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f8xqm"]
Jan 29 08:23:10 crc kubenswrapper[4826]: I0129 08:23:10.163811 4826 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dffeb5ae-2969-47db-841f-cfe964097f10-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Jan 29 08:23:10 crc kubenswrapper[4826]: I0129 08:23:10.175046 4826 scope.go:117] "RemoveContainer" containerID="d0aaa1bc60096e5255d8993f7e980e990f1ac40aa3b8edb801d49c1c630fdf08"
Jan 29 08:23:10 crc kubenswrapper[4826]: I0129 08:23:10.221543 4826 scope.go:117] "RemoveContainer" containerID="d0aaa1bc60096e5255d8993f7e980e990f1ac40aa3b8edb801d49c1c630fdf08"
Jan 29 08:23:10 crc kubenswrapper[4826]: E0129 08:23:10.225458 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0aaa1bc60096e5255d8993f7e980e990f1ac40aa3b8edb801d49c1c630fdf08\": container with ID starting with d0aaa1bc60096e5255d8993f7e980e990f1ac40aa3b8edb801d49c1c630fdf08 not found: ID does not exist" containerID="d0aaa1bc60096e5255d8993f7e980e990f1ac40aa3b8edb801d49c1c630fdf08"
Jan 29 08:23:10 crc kubenswrapper[4826]: I0129 08:23:10.225507 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0aaa1bc60096e5255d8993f7e980e990f1ac40aa3b8edb801d49c1c630fdf08"} err="failed to get container status \"d0aaa1bc60096e5255d8993f7e980e990f1ac40aa3b8edb801d49c1c630fdf08\": rpc error: code = NotFound desc = could not find container \"d0aaa1bc60096e5255d8993f7e980e990f1ac40aa3b8edb801d49c1c630fdf08\": container with ID starting with d0aaa1bc60096e5255d8993f7e980e990f1ac40aa3b8edb801d49c1c630fdf08 not found: ID does not exist"
Jan 29 08:23:10 crc kubenswrapper[4826]: I0129 08:23:10.372085 4826 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="dffeb5ae-2969-47db-841f-cfe964097f10" podUID="0a7f33ac-d626-4959-b993-480af4a5bb66"
Jan 29 08:23:10 crc kubenswrapper[4826]: I0129 08:23:10.428116 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 29 08:23:10 crc kubenswrapper[4826]: W0129 08:23:10.442739 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11d855ca_3051_4d20_9f8d_d37ed9e4625b.slice/crio-fa968a0651de3b7a4df8e41a469b6e644125738aede96509c9f5561e205e1017 WatchSource:0}: Error finding container fa968a0651de3b7a4df8e41a469b6e644125738aede96509c9f5561e205e1017: Status 404 returned error can't find the container with id fa968a0651de3b7a4df8e41a469b6e644125738aede96509c9f5561e205e1017
Jan 29 08:23:10 crc kubenswrapper[4826]: I0129 08:23:10.822684 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="929f70d3-5278-4653-9216-b0aa3a555b48" path="/var/lib/kubelet/pods/929f70d3-5278-4653-9216-b0aa3a555b48/volumes"
Jan 29 08:23:10 crc kubenswrapper[4826]: I0129 08:23:10.824741 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dffeb5ae-2969-47db-841f-cfe964097f10" path="/var/lib/kubelet/pods/dffeb5ae-2969-47db-841f-cfe964097f10/volumes"
Jan 29 08:23:11 crc kubenswrapper[4826]: I0129 08:23:11.065694 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"856cc4ca-f8ee-4dd3-a31f-626eac71a6a1","Type":"ContainerStarted","Data":"f390d0e5d6b3ed3316e8220240fe74bd1c2a55ab096c295995e4177e34fb40d8"}
Jan 29 08:23:11 crc kubenswrapper[4826]: I0129 08:23:11.067092 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"11d855ca-3051-4d20-9f8d-d37ed9e4625b","Type":"ContainerStarted","Data":"fa968a0651de3b7a4df8e41a469b6e644125738aede96509c9f5561e205e1017"}
Jan 29 08:23:17 crc kubenswrapper[4826]: I0129 08:23:17.141507 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"74d65e3a-57e3-4134-a934-f41a91ddaf16","Type":"ContainerStarted","Data":"4cbe2c1063b1e18c1f024d21595573eefee500bd6244a9ca07686831af8acb81"}
Jan 29 08:23:17 crc kubenswrapper[4826]: I0129 08:23:17.144375 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"11d855ca-3051-4d20-9f8d-d37ed9e4625b","Type":"ContainerStarted","Data":"20ec3c213befb25b3aa19d14eb1dfe78b13bbd05217af0240c11834a6af887fd"}
Jan 29 08:23:17 crc kubenswrapper[4826]: I0129 08:23:17.687038 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 29 08:23:21 crc kubenswrapper[4826]: I0129 08:23:21.810671 4826 scope.go:117] "RemoveContainer" containerID="952b43d896912a3cda5e612a6c0f46d793202eff6352f27fa018ee360258c570"
Jan 29 08:23:21 crc kubenswrapper[4826]: E0129 08:23:21.814421 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 08:23:24 crc kubenswrapper[4826]: I0129 08:23:24.216152 4826 generic.go:334] "Generic (PLEG): container finished" podID="11d855ca-3051-4d20-9f8d-d37ed9e4625b" containerID="20ec3c213befb25b3aa19d14eb1dfe78b13bbd05217af0240c11834a6af887fd" exitCode=0
Jan 29 08:23:24 crc kubenswrapper[4826]: I0129 08:23:24.216224 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"11d855ca-3051-4d20-9f8d-d37ed9e4625b","Type":"ContainerDied","Data":"20ec3c213befb25b3aa19d14eb1dfe78b13bbd05217af0240c11834a6af887fd"}
Jan 29 08:23:24 crc kubenswrapper[4826]: I0129 08:23:24.218835 4826 generic.go:334] "Generic (PLEG): container finished" podID="74d65e3a-57e3-4134-a934-f41a91ddaf16" containerID="4cbe2c1063b1e18c1f024d21595573eefee500bd6244a9ca07686831af8acb81" exitCode=0
Jan 29 08:23:24 crc kubenswrapper[4826]: I0129 08:23:24.218884 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"74d65e3a-57e3-4134-a934-f41a91ddaf16","Type":"ContainerDied","Data":"4cbe2c1063b1e18c1f024d21595573eefee500bd6244a9ca07686831af8acb81"}
Jan 29 08:23:27 crc kubenswrapper[4826]: I0129 08:23:27.250895 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"74d65e3a-57e3-4134-a934-f41a91ddaf16","Type":"ContainerStarted","Data":"0e89cfe0407caebd521c8d31575f122afbbadd087cf34bca021ebf763d6d1193"}
Jan 29 08:23:29 crc kubenswrapper[4826]: I0129 08:23:29.274263 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"74d65e3a-57e3-4134-a934-f41a91ddaf16","Type":"ContainerStarted","Data":"e763d63e12294f447324e32a08d22637d1155e1f6770d05c0899abfb55ce1391"}
Jan 29 08:23:29 crc kubenswrapper[4826]: I0129 08:23:29.274591 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0"
Jan 29 08:23:29 crc kubenswrapper[4826]: I0129 08:23:29.277717 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0"
Jan 29 08:23:29 crc kubenswrapper[4826]: I0129 08:23:29.308611 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=4.685983646 podStartE2EDuration="21.308591554s" podCreationTimestamp="2026-01-29 08:23:08 +0000 UTC" firstStartedPulling="2026-01-29 08:23:09.626775249 +0000 UTC m=+5973.488568318" lastFinishedPulling="2026-01-29 08:23:26.249383157 +0000 UTC m=+5990.111176226" observedRunningTime="2026-01-29 08:23:29.29970088 +0000 UTC m=+5993.161493949" watchObservedRunningTime="2026-01-29 08:23:29.308591554 +0000 UTC m=+5993.170384623"
Jan 29 08:23:31 crc kubenswrapper[4826]: I0129 08:23:31.294954 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"11d855ca-3051-4d20-9f8d-d37ed9e4625b","Type":"ContainerStarted","Data":"45415862e9479412acbc42002b602af8a4f5a62b1be6cdc5ef72416accdf224f"}
Jan 29 08:23:32 crc kubenswrapper[4826]: I0129 08:23:32.067187 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-64b9-account-create-update-vlrvd"]
Jan 29 08:23:32 crc kubenswrapper[4826]: I0129 08:23:32.082692 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-f8lj8"]
Jan 29 08:23:32 crc kubenswrapper[4826]: I0129 08:23:32.092470 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-64b9-account-create-update-vlrvd"]
Jan 29 08:23:32 crc kubenswrapper[4826]: I0129 08:23:32.102274 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-f8lj8"]
Jan 29 08:23:32 crc kubenswrapper[4826]: I0129 08:23:32.818632 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a4e318f-fe31-41b8-91d3-2b595c03914a" path="/var/lib/kubelet/pods/6a4e318f-fe31-41b8-91d3-2b595c03914a/volumes"
Jan 29 08:23:32 crc kubenswrapper[4826]: I0129 08:23:32.819524 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6b74f71-c8e0-4b71-91f3-84a55f48a8e6" path="/var/lib/kubelet/pods/f6b74f71-c8e0-4b71-91f3-84a55f48a8e6/volumes"
Jan 29 08:23:35 crc kubenswrapper[4826]: I0129 08:23:35.345807 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"11d855ca-3051-4d20-9f8d-d37ed9e4625b","Type":"ContainerStarted","Data":"aa8d3be93cdc464aa3055aa57b72694d9c81d085e044a68f81ec713b5e288431"}
Jan 29 08:23:35 crc kubenswrapper[4826]: I0129 08:23:35.810057 4826 scope.go:117] "RemoveContainer" containerID="952b43d896912a3cda5e612a6c0f46d793202eff6352f27fa018ee360258c570"
Jan 29 08:23:35 crc kubenswrapper[4826]: E0129 08:23:35.810433 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 08:23:38 crc kubenswrapper[4826]: I0129 08:23:38.383723 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"11d855ca-3051-4d20-9f8d-d37ed9e4625b","Type":"ContainerStarted","Data":"ad9f3a3524d2684b5b1d2fb252780e4bbf8a954dab97e9d9ff12fef90098b28e"}
Jan 29 08:23:38 crc kubenswrapper[4826]: I0129 08:23:38.447192 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=3.7615731930000003 podStartE2EDuration="31.447172338s" podCreationTimestamp="2026-01-29 08:23:07 +0000 UTC" firstStartedPulling="2026-01-29 08:23:10.445651243 +0000 UTC m=+5974.307444312" lastFinishedPulling="2026-01-29 08:23:38.131250388 +0000 UTC m=+6001.993043457" observedRunningTime="2026-01-29 08:23:38.431739331 +0000 UTC m=+6002.293532420" watchObservedRunningTime="2026-01-29 08:23:38.447172338 +0000 UTC m=+6002.308965417"
Jan 29 08:23:39 crc kubenswrapper[4826]: I0129 08:23:39.846034 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Jan 29 08:23:39 crc kubenswrapper[4826]: I0129 08:23:39.847851 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Jan 29 08:23:39 crc kubenswrapper[4826]: I0129 08:23:39.848586 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Jan 29 08:23:40 crc kubenswrapper[4826]: I0129 08:23:40.410780 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Jan 29 08:23:41 crc kubenswrapper[4826]: I0129 08:23:41.755144 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 29 08:23:41 crc kubenswrapper[4826]: E0129 08:23:41.755772 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="929f70d3-5278-4653-9216-b0aa3a555b48" containerName="extract-content"
Jan 29 08:23:41 crc kubenswrapper[4826]: I0129 08:23:41.755786 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="929f70d3-5278-4653-9216-b0aa3a555b48" containerName="extract-content"
Jan 29 08:23:41 crc kubenswrapper[4826]: E0129 08:23:41.755807 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="929f70d3-5278-4653-9216-b0aa3a555b48" containerName="extract-utilities"
Jan 29 08:23:41 crc kubenswrapper[4826]: I0129 08:23:41.755814 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="929f70d3-5278-4653-9216-b0aa3a555b48" containerName="extract-utilities"
Jan 29 08:23:41 crc kubenswrapper[4826]: E0129 08:23:41.755825 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="929f70d3-5278-4653-9216-b0aa3a555b48" containerName="registry-server"
Jan 29 08:23:41 crc kubenswrapper[4826]: I0129 08:23:41.755832 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="929f70d3-5278-4653-9216-b0aa3a555b48" containerName="registry-server"
Jan 29 08:23:41 crc kubenswrapper[4826]: I0129 08:23:41.756039 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="929f70d3-5278-4653-9216-b0aa3a555b48" containerName="registry-server"
Jan 29 08:23:41 crc kubenswrapper[4826]: I0129 08:23:41.757739 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 08:23:41 crc kubenswrapper[4826]: I0129 08:23:41.769966 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 08:23:41 crc kubenswrapper[4826]: I0129 08:23:41.770601 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 29 08:23:41 crc kubenswrapper[4826]: I0129 08:23:41.770837 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 29 08:23:41 crc kubenswrapper[4826]: I0129 08:23:41.873937 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2a9f639-f310-41f3-8b53-c713d66b71be-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2a9f639-f310-41f3-8b53-c713d66b71be\") " pod="openstack/ceilometer-0"
Jan 29 08:23:41 crc kubenswrapper[4826]: I0129 08:23:41.873995 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2a9f639-f310-41f3-8b53-c713d66b71be-run-httpd\") pod \"ceilometer-0\" (UID: \"f2a9f639-f310-41f3-8b53-c713d66b71be\") " pod="openstack/ceilometer-0"
Jan 29 08:23:41 crc kubenswrapper[4826]: I0129 08:23:41.874703 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2a9f639-f310-41f3-8b53-c713d66b71be-log-httpd\") pod \"ceilometer-0\" (UID: \"f2a9f639-f310-41f3-8b53-c713d66b71be\") " pod="openstack/ceilometer-0"
Jan 29 08:23:41 crc kubenswrapper[4826]: I0129 08:23:41.874792 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2a9f639-f310-41f3-8b53-c713d66b71be-config-data\") pod \"ceilometer-0\" (UID: \"f2a9f639-f310-41f3-8b53-c713d66b71be\") " pod="openstack/ceilometer-0"
Jan 29 08:23:41 crc kubenswrapper[4826]: I0129 08:23:41.875012 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxlw2\" (UniqueName: \"kubernetes.io/projected/f2a9f639-f310-41f3-8b53-c713d66b71be-kube-api-access-mxlw2\") pod \"ceilometer-0\" (UID: \"f2a9f639-f310-41f3-8b53-c713d66b71be\") " pod="openstack/ceilometer-0"
Jan 29 08:23:41 crc kubenswrapper[4826]: I0129 08:23:41.875317 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2a9f639-f310-41f3-8b53-c713d66b71be-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2a9f639-f310-41f3-8b53-c713d66b71be\") " pod="openstack/ceilometer-0"
Jan 29 08:23:41 crc kubenswrapper[4826]: I0129 08:23:41.875462 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2a9f639-f310-41f3-8b53-c713d66b71be-scripts\") pod \"ceilometer-0\" (UID: \"f2a9f639-f310-41f3-8b53-c713d66b71be\") " pod="openstack/ceilometer-0"
Jan 29 08:23:41 crc kubenswrapper[4826]: I0129 08:23:41.976747 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2a9f639-f310-41f3-8b53-c713d66b71be-config-data\") pod \"ceilometer-0\" (UID: \"f2a9f639-f310-41f3-8b53-c713d66b71be\") " pod="openstack/ceilometer-0"
Jan 29 08:23:41 crc kubenswrapper[4826]: I0129 08:23:41.976857 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxlw2\" (UniqueName: \"kubernetes.io/projected/f2a9f639-f310-41f3-8b53-c713d66b71be-kube-api-access-mxlw2\") pod \"ceilometer-0\" (UID: \"f2a9f639-f310-41f3-8b53-c713d66b71be\") " pod="openstack/ceilometer-0"
Jan 29 08:23:41 crc kubenswrapper[4826]: I0129 08:23:41.976932 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2a9f639-f310-41f3-8b53-c713d66b71be-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2a9f639-f310-41f3-8b53-c713d66b71be\") " pod="openstack/ceilometer-0"
Jan 29 08:23:41 crc kubenswrapper[4826]: I0129 08:23:41.976963 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2a9f639-f310-41f3-8b53-c713d66b71be-scripts\") pod \"ceilometer-0\" (UID: \"f2a9f639-f310-41f3-8b53-c713d66b71be\") " pod="openstack/ceilometer-0"
Jan 29 08:23:41 crc kubenswrapper[4826]: I0129 08:23:41.977010 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2a9f639-f310-41f3-8b53-c713d66b71be-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2a9f639-f310-41f3-8b53-c713d66b71be\") " pod="openstack/ceilometer-0"
Jan 29 08:23:41 crc kubenswrapper[4826]: I0129 08:23:41.977041 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2a9f639-f310-41f3-8b53-c713d66b71be-run-httpd\") pod \"ceilometer-0\" (UID: \"f2a9f639-f310-41f3-8b53-c713d66b71be\") " pod="openstack/ceilometer-0"
Jan 29 08:23:41 crc kubenswrapper[4826]: I0129 08:23:41.977073 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2a9f639-f310-41f3-8b53-c713d66b71be-log-httpd\") pod \"ceilometer-0\" (UID: \"f2a9f639-f310-41f3-8b53-c713d66b71be\") " pod="openstack/ceilometer-0"
Jan 29 08:23:41 crc kubenswrapper[4826]: I0129 08:23:41.977569 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2a9f639-f310-41f3-8b53-c713d66b71be-run-httpd\") pod \"ceilometer-0\" (UID: \"f2a9f639-f310-41f3-8b53-c713d66b71be\") " pod="openstack/ceilometer-0"
Jan 29 08:23:41 crc kubenswrapper[4826]: I0129 08:23:41.977623 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2a9f639-f310-41f3-8b53-c713d66b71be-log-httpd\") pod \"ceilometer-0\" (UID: \"f2a9f639-f310-41f3-8b53-c713d66b71be\") " pod="openstack/ceilometer-0"
Jan 29 08:23:41 crc kubenswrapper[4826]: I0129 08:23:41.982957 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2a9f639-f310-41f3-8b53-c713d66b71be-scripts\") pod \"ceilometer-0\" (UID: \"f2a9f639-f310-41f3-8b53-c713d66b71be\") " pod="openstack/ceilometer-0"
Jan 29 08:23:41 crc kubenswrapper[4826]: I0129 08:23:41.983652 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2a9f639-f310-41f3-8b53-c713d66b71be-config-data\") pod \"ceilometer-0\" (UID: \"f2a9f639-f310-41f3-8b53-c713d66b71be\") " pod="openstack/ceilometer-0"
Jan 29 08:23:41 crc kubenswrapper[4826]: I0129 08:23:41.984101 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2a9f639-f310-41f3-8b53-c713d66b71be-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2a9f639-f310-41f3-8b53-c713d66b71be\") " pod="openstack/ceilometer-0"
Jan 29 08:23:41 crc kubenswrapper[4826]: I0129 08:23:41.989184 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2a9f639-f310-41f3-8b53-c713d66b71be-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2a9f639-f310-41f3-8b53-c713d66b71be\") " pod="openstack/ceilometer-0"
Jan 29 08:23:42 crc kubenswrapper[4826]: I0129 08:23:42.004480 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxlw2\" (UniqueName: \"kubernetes.io/projected/f2a9f639-f310-41f3-8b53-c713d66b71be-kube-api-access-mxlw2\") pod \"ceilometer-0\" (UID: \"f2a9f639-f310-41f3-8b53-c713d66b71be\") " pod="openstack/ceilometer-0"
Jan 29 08:23:42 crc kubenswrapper[4826]: I0129 08:23:42.046334 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"]
Jan 29 08:23:42 crc kubenswrapper[4826]: I0129 08:23:42.046575 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="0a7f33ac-d626-4959-b993-480af4a5bb66" containerName="openstackclient" containerID="cri-o://b08db1e3e8b0e382d1b0ca1b789bfe72a65e708846f4523cfb5e9656740219e9" gracePeriod=2
Jan 29 08:23:42 crc kubenswrapper[4826]: I0129 08:23:42.059089 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"]
Jan 29 08:23:42 crc kubenswrapper[4826]: I0129 08:23:42.086117 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Jan 29 08:23:42 crc kubenswrapper[4826]: E0129 08:23:42.086626 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a7f33ac-d626-4959-b993-480af4a5bb66" containerName="openstackclient"
Jan 29 08:23:42 crc kubenswrapper[4826]: I0129 08:23:42.086641 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a7f33ac-d626-4959-b993-480af4a5bb66" containerName="openstackclient"
Jan 29 08:23:42 crc kubenswrapper[4826]: I0129 08:23:42.086852 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a7f33ac-d626-4959-b993-480af4a5bb66" containerName="openstackclient"
Jan 29 08:23:42 crc kubenswrapper[4826]: I0129 08:23:42.087549 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 29 08:23:42 crc kubenswrapper[4826]: I0129 08:23:42.091041 4826 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="0a7f33ac-d626-4959-b993-480af4a5bb66" podUID="5f1a4fb9-7b51-4a6c-b594-8d8d98666063"
Jan 29 08:23:42 crc kubenswrapper[4826]: I0129 08:23:42.114219 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 08:23:42 crc kubenswrapper[4826]: I0129 08:23:42.128570 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 29 08:23:42 crc kubenswrapper[4826]: I0129 08:23:42.190753 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5f1a4fb9-7b51-4a6c-b594-8d8d98666063-openstack-config-secret\") pod \"openstackclient\" (UID: \"5f1a4fb9-7b51-4a6c-b594-8d8d98666063\") " pod="openstack/openstackclient"
Jan 29 08:23:42 crc kubenswrapper[4826]: I0129 08:23:42.190922 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5f1a4fb9-7b51-4a6c-b594-8d8d98666063-openstack-config\") pod \"openstackclient\" (UID: \"5f1a4fb9-7b51-4a6c-b594-8d8d98666063\") " pod="openstack/openstackclient"
Jan 29 08:23:42 crc kubenswrapper[4826]: I0129 08:23:42.190965 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlwgz\" (UniqueName: \"kubernetes.io/projected/5f1a4fb9-7b51-4a6c-b594-8d8d98666063-kube-api-access-jlwgz\") pod \"openstackclient\" (UID: \"5f1a4fb9-7b51-4a6c-b594-8d8d98666063\") " pod="openstack/openstackclient"
Jan 29 08:23:42 crc kubenswrapper[4826]: I0129 08:23:42.191014 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f1a4fb9-7b51-4a6c-b594-8d8d98666063-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5f1a4fb9-7b51-4a6c-b594-8d8d98666063\") " pod="openstack/openstackclient"
Jan 29 08:23:42 crc kubenswrapper[4826]: I0129 08:23:42.295620 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlwgz\" (UniqueName: \"kubernetes.io/projected/5f1a4fb9-7b51-4a6c-b594-8d8d98666063-kube-api-access-jlwgz\") pod \"openstackclient\" (UID: \"5f1a4fb9-7b51-4a6c-b594-8d8d98666063\") " pod="openstack/openstackclient"
Jan 29 08:23:42 crc kubenswrapper[4826]: I0129 08:23:42.296140 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f1a4fb9-7b51-4a6c-b594-8d8d98666063-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5f1a4fb9-7b51-4a6c-b594-8d8d98666063\") " pod="openstack/openstackclient"
Jan 29 08:23:42 crc kubenswrapper[4826]: I0129 08:23:42.296267 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5f1a4fb9-7b51-4a6c-b594-8d8d98666063-openstack-config-secret\") pod \"openstackclient\" (UID: \"5f1a4fb9-7b51-4a6c-b594-8d8d98666063\") " pod="openstack/openstackclient"
Jan 29 08:23:42 crc kubenswrapper[4826]: I0129 08:23:42.296354 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5f1a4fb9-7b51-4a6c-b594-8d8d98666063-openstack-config\") pod \"openstackclient\" (UID: \"5f1a4fb9-7b51-4a6c-b594-8d8d98666063\") " pod="openstack/openstackclient"
Jan 29 08:23:42 crc kubenswrapper[4826]: I0129 08:23:42.297692 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5f1a4fb9-7b51-4a6c-b594-8d8d98666063-openstack-config\") pod \"openstackclient\" (UID: \"5f1a4fb9-7b51-4a6c-b594-8d8d98666063\") " pod="openstack/openstackclient"
Jan 29 08:23:42 crc kubenswrapper[4826]: I0129 08:23:42.316355 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5f1a4fb9-7b51-4a6c-b594-8d8d98666063-openstack-config-secret\") pod \"openstackclient\" (UID: \"5f1a4fb9-7b51-4a6c-b594-8d8d98666063\") " pod="openstack/openstackclient"
Jan 29 08:23:42 crc kubenswrapper[4826]: I0129 08:23:42.322394 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f1a4fb9-7b51-4a6c-b594-8d8d98666063-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5f1a4fb9-7b51-4a6c-b594-8d8d98666063\") " pod="openstack/openstackclient"
Jan 29 08:23:42 crc kubenswrapper[4826]: I0129 08:23:42.322777 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlwgz\" (UniqueName: \"kubernetes.io/projected/5f1a4fb9-7b51-4a6c-b594-8d8d98666063-kube-api-access-jlwgz\") pod \"openstackclient\" (UID: \"5f1a4fb9-7b51-4a6c-b594-8d8d98666063\") " pod="openstack/openstackclient"
Jan 29 08:23:42 crc kubenswrapper[4826]: I0129 08:23:42.438789 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 29 08:23:42 crc kubenswrapper[4826]: I0129 08:23:42.652883 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 08:23:42 crc kubenswrapper[4826]: W0129 08:23:42.664258 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2a9f639_f310_41f3_8b53_c713d66b71be.slice/crio-1ab926fef41040774410fb2b79a48e3ecf91f4ac7c570bf3c8995612e9f379f3 WatchSource:0}: Error finding container 1ab926fef41040774410fb2b79a48e3ecf91f4ac7c570bf3c8995612e9f379f3: Status 404 returned error can't find the container with id 1ab926fef41040774410fb2b79a48e3ecf91f4ac7c570bf3c8995612e9f379f3
Jan 29 08:23:43 crc kubenswrapper[4826]: I0129 08:23:43.016991 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 29 08:23:43 crc kubenswrapper[4826]: I0129 08:23:43.441775 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2a9f639-f310-41f3-8b53-c713d66b71be","Type":"ContainerStarted","Data":"1ab926fef41040774410fb2b79a48e3ecf91f4ac7c570bf3c8995612e9f379f3"}
Jan 29 08:23:43 crc kubenswrapper[4826]: I0129 08:23:43.443678 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5f1a4fb9-7b51-4a6c-b594-8d8d98666063","Type":"ContainerStarted","Data":"26c654bd0ecd128588bc3b35d5447727528aa9fe5b814c6c077a6120ab67e528"}
Jan 29 08:23:43 crc kubenswrapper[4826]: I0129 08:23:43.443714 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5f1a4fb9-7b51-4a6c-b594-8d8d98666063","Type":"ContainerStarted","Data":"990fc7c7a0857cec7f521f5afd62487e99d3831361ce10a7392c523ea93fc4fa"}
Jan 29 08:23:43 crc kubenswrapper[4826]: I0129 08:23:43.468836 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.46881875 podStartE2EDuration="1.46881875s" podCreationTimestamp="2026-01-29 08:23:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:23:43.455956952 +0000 UTC m=+6007.317750021" watchObservedRunningTime="2026-01-29 08:23:43.46881875 +0000 UTC m=+6007.330611819"
Jan 29 08:23:43 crc kubenswrapper[4826]: I0129 08:23:43.658500 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 29 08:23:43 crc kubenswrapper[4826]: I0129 08:23:43.659134 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="11d855ca-3051-4d20-9f8d-d37ed9e4625b" containerName="prometheus" containerID="cri-o://45415862e9479412acbc42002b602af8a4f5a62b1be6cdc5ef72416accdf224f" gracePeriod=600
Jan 29 08:23:43 crc kubenswrapper[4826]: I0129 08:23:43.659426 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="11d855ca-3051-4d20-9f8d-d37ed9e4625b" containerName="thanos-sidecar" containerID="cri-o://ad9f3a3524d2684b5b1d2fb252780e4bbf8a954dab97e9d9ff12fef90098b28e" gracePeriod=600
Jan 29 08:23:43 crc kubenswrapper[4826]: I0129 08:23:43.659437 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="11d855ca-3051-4d20-9f8d-d37ed9e4625b" containerName="config-reloader" containerID="cri-o://aa8d3be93cdc464aa3055aa57b72694d9c81d085e044a68f81ec713b5e288431" gracePeriod=600
Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.324546 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.328929 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.454803 4826 generic.go:334] "Generic (PLEG): container finished" podID="0a7f33ac-d626-4959-b993-480af4a5bb66" containerID="b08db1e3e8b0e382d1b0ca1b789bfe72a65e708846f4523cfb5e9656740219e9" exitCode=137
Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.454893 4826 scope.go:117] "RemoveContainer" containerID="b08db1e3e8b0e382d1b0ca1b789bfe72a65e708846f4523cfb5e9656740219e9"
Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.455025 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.459332 4826 generic.go:334] "Generic (PLEG): container finished" podID="11d855ca-3051-4d20-9f8d-d37ed9e4625b" containerID="ad9f3a3524d2684b5b1d2fb252780e4bbf8a954dab97e9d9ff12fef90098b28e" exitCode=0
Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.459356 4826 generic.go:334] "Generic (PLEG): container finished" podID="11d855ca-3051-4d20-9f8d-d37ed9e4625b" containerID="aa8d3be93cdc464aa3055aa57b72694d9c81d085e044a68f81ec713b5e288431" exitCode=0
Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.459368 4826 generic.go:334] "Generic (PLEG): container finished" podID="11d855ca-3051-4d20-9f8d-d37ed9e4625b" containerID="45415862e9479412acbc42002b602af8a4f5a62b1be6cdc5ef72416accdf224f" exitCode=0
Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.459461 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.459471 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"11d855ca-3051-4d20-9f8d-d37ed9e4625b","Type":"ContainerDied","Data":"ad9f3a3524d2684b5b1d2fb252780e4bbf8a954dab97e9d9ff12fef90098b28e"}
Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.459557 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"11d855ca-3051-4d20-9f8d-d37ed9e4625b","Type":"ContainerDied","Data":"aa8d3be93cdc464aa3055aa57b72694d9c81d085e044a68f81ec713b5e288431"}
Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.459573 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"11d855ca-3051-4d20-9f8d-d37ed9e4625b","Type":"ContainerDied","Data":"45415862e9479412acbc42002b602af8a4f5a62b1be6cdc5ef72416accdf224f"}
Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.459585 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"11d855ca-3051-4d20-9f8d-d37ed9e4625b","Type":"ContainerDied","Data":"fa968a0651de3b7a4df8e41a469b6e644125738aede96509c9f5561e205e1017"}
Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.460188 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/11d855ca-3051-4d20-9f8d-d37ed9e4625b-prometheus-metric-storage-rulefiles-1\") pod \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\" (UID: \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\") "
Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.460437 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0a7f33ac-d626-4959-b993-480af4a5bb66-openstack-config-secret\") pod \"0a7f33ac-d626-4959-b993-480af4a5bb66\" (UID: \"0a7f33ac-d626-4959-b993-480af4a5bb66\") "
Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.460525 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/11d855ca-3051-4d20-9f8d-d37ed9e4625b-config\") pod \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\" (UID: \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\") "
Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.460888 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a7f33ac-d626-4959-b993-480af4a5bb66-combined-ca-bundle\") pod \"0a7f33ac-d626-4959-b993-480af4a5bb66\" (UID: \"0a7f33ac-d626-4959-b993-480af4a5bb66\") "
Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.460947 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4l6k\" (UniqueName: \"kubernetes.io/projected/0a7f33ac-d626-4959-b993-480af4a5bb66-kube-api-access-l4l6k\") pod \"0a7f33ac-d626-4959-b993-480af4a5bb66\" (UID: \"0a7f33ac-d626-4959-b993-480af4a5bb66\") "
Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.461036 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11d855ca-3051-4d20-9f8d-d37ed9e4625b-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "11d855ca-3051-4d20-9f8d-d37ed9e4625b" (UID: "11d855ca-3051-4d20-9f8d-d37ed9e4625b"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.461083 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a194a78-1c85-4c21-8388-f45fb09b16b9\") pod \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\" (UID: \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\") "
Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.461133 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/11d855ca-3051-4d20-9f8d-d37ed9e4625b-config-out\") pod \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\" (UID: \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\") "
Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.461168 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phpdl\" (UniqueName: \"kubernetes.io/projected/11d855ca-3051-4d20-9f8d-d37ed9e4625b-kube-api-access-phpdl\") pod \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\" (UID: \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\") "
Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.461200 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/11d855ca-3051-4d20-9f8d-d37ed9e4625b-prometheus-metric-storage-rulefiles-0\") pod \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\" (UID: \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\") "
Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.461269 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/11d855ca-3051-4d20-9f8d-d37ed9e4625b-prometheus-metric-storage-rulefiles-2\") pod \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\" (UID: \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\") "
Jan 29 08:23:44 crc kubenswrapper[4826]:
I0129 08:23:44.461293 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/11d855ca-3051-4d20-9f8d-d37ed9e4625b-thanos-prometheus-http-client-file\") pod \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\" (UID: \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\") " Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.461362 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/11d855ca-3051-4d20-9f8d-d37ed9e4625b-web-config\") pod \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\" (UID: \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\") " Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.461384 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0a7f33ac-d626-4959-b993-480af4a5bb66-openstack-config\") pod \"0a7f33ac-d626-4959-b993-480af4a5bb66\" (UID: \"0a7f33ac-d626-4959-b993-480af4a5bb66\") " Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.461405 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/11d855ca-3051-4d20-9f8d-d37ed9e4625b-tls-assets\") pod \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\" (UID: \"11d855ca-3051-4d20-9f8d-d37ed9e4625b\") " Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.462048 4826 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/11d855ca-3051-4d20-9f8d-d37ed9e4625b-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.465695 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11d855ca-3051-4d20-9f8d-d37ed9e4625b-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: 
"prometheus-metric-storage-rulefiles-2") pod "11d855ca-3051-4d20-9f8d-d37ed9e4625b" (UID: "11d855ca-3051-4d20-9f8d-d37ed9e4625b"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.465754 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11d855ca-3051-4d20-9f8d-d37ed9e4625b-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "11d855ca-3051-4d20-9f8d-d37ed9e4625b" (UID: "11d855ca-3051-4d20-9f8d-d37ed9e4625b"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.466358 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a7f33ac-d626-4959-b993-480af4a5bb66-kube-api-access-l4l6k" (OuterVolumeSpecName: "kube-api-access-l4l6k") pod "0a7f33ac-d626-4959-b993-480af4a5bb66" (UID: "0a7f33ac-d626-4959-b993-480af4a5bb66"). InnerVolumeSpecName "kube-api-access-l4l6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.466495 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11d855ca-3051-4d20-9f8d-d37ed9e4625b-kube-api-access-phpdl" (OuterVolumeSpecName: "kube-api-access-phpdl") pod "11d855ca-3051-4d20-9f8d-d37ed9e4625b" (UID: "11d855ca-3051-4d20-9f8d-d37ed9e4625b"). InnerVolumeSpecName "kube-api-access-phpdl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.466498 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11d855ca-3051-4d20-9f8d-d37ed9e4625b-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "11d855ca-3051-4d20-9f8d-d37ed9e4625b" (UID: "11d855ca-3051-4d20-9f8d-d37ed9e4625b"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.468469 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11d855ca-3051-4d20-9f8d-d37ed9e4625b-config" (OuterVolumeSpecName: "config") pod "11d855ca-3051-4d20-9f8d-d37ed9e4625b" (UID: "11d855ca-3051-4d20-9f8d-d37ed9e4625b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.474605 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11d855ca-3051-4d20-9f8d-d37ed9e4625b-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "11d855ca-3051-4d20-9f8d-d37ed9e4625b" (UID: "11d855ca-3051-4d20-9f8d-d37ed9e4625b"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.477754 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11d855ca-3051-4d20-9f8d-d37ed9e4625b-config-out" (OuterVolumeSpecName: "config-out") pod "11d855ca-3051-4d20-9f8d-d37ed9e4625b" (UID: "11d855ca-3051-4d20-9f8d-d37ed9e4625b"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.487212 4826 scope.go:117] "RemoveContainer" containerID="b08db1e3e8b0e382d1b0ca1b789bfe72a65e708846f4523cfb5e9656740219e9" Jan 29 08:23:44 crc kubenswrapper[4826]: E0129 08:23:44.487690 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b08db1e3e8b0e382d1b0ca1b789bfe72a65e708846f4523cfb5e9656740219e9\": container with ID starting with b08db1e3e8b0e382d1b0ca1b789bfe72a65e708846f4523cfb5e9656740219e9 not found: ID does not exist" containerID="b08db1e3e8b0e382d1b0ca1b789bfe72a65e708846f4523cfb5e9656740219e9" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.487722 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b08db1e3e8b0e382d1b0ca1b789bfe72a65e708846f4523cfb5e9656740219e9"} err="failed to get container status \"b08db1e3e8b0e382d1b0ca1b789bfe72a65e708846f4523cfb5e9656740219e9\": rpc error: code = NotFound desc = could not find container \"b08db1e3e8b0e382d1b0ca1b789bfe72a65e708846f4523cfb5e9656740219e9\": container with ID starting with b08db1e3e8b0e382d1b0ca1b789bfe72a65e708846f4523cfb5e9656740219e9 not found: ID does not exist" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.487739 4826 scope.go:117] "RemoveContainer" containerID="ad9f3a3524d2684b5b1d2fb252780e4bbf8a954dab97e9d9ff12fef90098b28e" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.497241 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a7f33ac-d626-4959-b993-480af4a5bb66-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "0a7f33ac-d626-4959-b993-480af4a5bb66" (UID: "0a7f33ac-d626-4959-b993-480af4a5bb66"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.499617 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a194a78-1c85-4c21-8388-f45fb09b16b9" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "11d855ca-3051-4d20-9f8d-d37ed9e4625b" (UID: "11d855ca-3051-4d20-9f8d-d37ed9e4625b"). InnerVolumeSpecName "pvc-2a194a78-1c85-4c21-8388-f45fb09b16b9". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.513614 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a7f33ac-d626-4959-b993-480af4a5bb66-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a7f33ac-d626-4959-b993-480af4a5bb66" (UID: "0a7f33ac-d626-4959-b993-480af4a5bb66"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.517237 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11d855ca-3051-4d20-9f8d-d37ed9e4625b-web-config" (OuterVolumeSpecName: "web-config") pod "11d855ca-3051-4d20-9f8d-d37ed9e4625b" (UID: "11d855ca-3051-4d20-9f8d-d37ed9e4625b"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.525736 4826 scope.go:117] "RemoveContainer" containerID="aa8d3be93cdc464aa3055aa57b72694d9c81d085e044a68f81ec713b5e288431" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.551565 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a7f33ac-d626-4959-b993-480af4a5bb66-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "0a7f33ac-d626-4959-b993-480af4a5bb66" (UID: "0a7f33ac-d626-4959-b993-480af4a5bb66"). 
InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.560596 4826 scope.go:117] "RemoveContainer" containerID="45415862e9479412acbc42002b602af8a4f5a62b1be6cdc5ef72416accdf224f" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.563567 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4l6k\" (UniqueName: \"kubernetes.io/projected/0a7f33ac-d626-4959-b993-480af4a5bb66-kube-api-access-l4l6k\") on node \"crc\" DevicePath \"\"" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.563618 4826 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2a194a78-1c85-4c21-8388-f45fb09b16b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a194a78-1c85-4c21-8388-f45fb09b16b9\") on node \"crc\" " Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.563631 4826 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/11d855ca-3051-4d20-9f8d-d37ed9e4625b-config-out\") on node \"crc\" DevicePath \"\"" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.563642 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phpdl\" (UniqueName: \"kubernetes.io/projected/11d855ca-3051-4d20-9f8d-d37ed9e4625b-kube-api-access-phpdl\") on node \"crc\" DevicePath \"\"" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.563653 4826 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/11d855ca-3051-4d20-9f8d-d37ed9e4625b-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.563664 4826 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/11d855ca-3051-4d20-9f8d-d37ed9e4625b-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.563682 4826 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/11d855ca-3051-4d20-9f8d-d37ed9e4625b-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.563692 4826 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/11d855ca-3051-4d20-9f8d-d37ed9e4625b-web-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.563703 4826 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0a7f33ac-d626-4959-b993-480af4a5bb66-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.563712 4826 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/11d855ca-3051-4d20-9f8d-d37ed9e4625b-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.563720 4826 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0a7f33ac-d626-4959-b993-480af4a5bb66-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.563730 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/11d855ca-3051-4d20-9f8d-d37ed9e4625b-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.563739 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0a7f33ac-d626-4959-b993-480af4a5bb66-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.588462 4826 scope.go:117] "RemoveContainer" containerID="20ec3c213befb25b3aa19d14eb1dfe78b13bbd05217af0240c11834a6af887fd" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.592951 4826 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.593108 4826 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2a194a78-1c85-4c21-8388-f45fb09b16b9" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a194a78-1c85-4c21-8388-f45fb09b16b9") on node "crc" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.616327 4826 scope.go:117] "RemoveContainer" containerID="ad9f3a3524d2684b5b1d2fb252780e4bbf8a954dab97e9d9ff12fef90098b28e" Jan 29 08:23:44 crc kubenswrapper[4826]: E0129 08:23:44.617822 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad9f3a3524d2684b5b1d2fb252780e4bbf8a954dab97e9d9ff12fef90098b28e\": container with ID starting with ad9f3a3524d2684b5b1d2fb252780e4bbf8a954dab97e9d9ff12fef90098b28e not found: ID does not exist" containerID="ad9f3a3524d2684b5b1d2fb252780e4bbf8a954dab97e9d9ff12fef90098b28e" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.617929 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad9f3a3524d2684b5b1d2fb252780e4bbf8a954dab97e9d9ff12fef90098b28e"} err="failed to get container status \"ad9f3a3524d2684b5b1d2fb252780e4bbf8a954dab97e9d9ff12fef90098b28e\": rpc error: code = NotFound desc = could not find container \"ad9f3a3524d2684b5b1d2fb252780e4bbf8a954dab97e9d9ff12fef90098b28e\": container with ID starting with ad9f3a3524d2684b5b1d2fb252780e4bbf8a954dab97e9d9ff12fef90098b28e not 
found: ID does not exist" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.617989 4826 scope.go:117] "RemoveContainer" containerID="aa8d3be93cdc464aa3055aa57b72694d9c81d085e044a68f81ec713b5e288431" Jan 29 08:23:44 crc kubenswrapper[4826]: E0129 08:23:44.618484 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa8d3be93cdc464aa3055aa57b72694d9c81d085e044a68f81ec713b5e288431\": container with ID starting with aa8d3be93cdc464aa3055aa57b72694d9c81d085e044a68f81ec713b5e288431 not found: ID does not exist" containerID="aa8d3be93cdc464aa3055aa57b72694d9c81d085e044a68f81ec713b5e288431" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.618524 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa8d3be93cdc464aa3055aa57b72694d9c81d085e044a68f81ec713b5e288431"} err="failed to get container status \"aa8d3be93cdc464aa3055aa57b72694d9c81d085e044a68f81ec713b5e288431\": rpc error: code = NotFound desc = could not find container \"aa8d3be93cdc464aa3055aa57b72694d9c81d085e044a68f81ec713b5e288431\": container with ID starting with aa8d3be93cdc464aa3055aa57b72694d9c81d085e044a68f81ec713b5e288431 not found: ID does not exist" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.618552 4826 scope.go:117] "RemoveContainer" containerID="45415862e9479412acbc42002b602af8a4f5a62b1be6cdc5ef72416accdf224f" Jan 29 08:23:44 crc kubenswrapper[4826]: E0129 08:23:44.619248 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45415862e9479412acbc42002b602af8a4f5a62b1be6cdc5ef72416accdf224f\": container with ID starting with 45415862e9479412acbc42002b602af8a4f5a62b1be6cdc5ef72416accdf224f not found: ID does not exist" containerID="45415862e9479412acbc42002b602af8a4f5a62b1be6cdc5ef72416accdf224f" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.619281 4826 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45415862e9479412acbc42002b602af8a4f5a62b1be6cdc5ef72416accdf224f"} err="failed to get container status \"45415862e9479412acbc42002b602af8a4f5a62b1be6cdc5ef72416accdf224f\": rpc error: code = NotFound desc = could not find container \"45415862e9479412acbc42002b602af8a4f5a62b1be6cdc5ef72416accdf224f\": container with ID starting with 45415862e9479412acbc42002b602af8a4f5a62b1be6cdc5ef72416accdf224f not found: ID does not exist" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.619373 4826 scope.go:117] "RemoveContainer" containerID="20ec3c213befb25b3aa19d14eb1dfe78b13bbd05217af0240c11834a6af887fd" Jan 29 08:23:44 crc kubenswrapper[4826]: E0129 08:23:44.619992 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20ec3c213befb25b3aa19d14eb1dfe78b13bbd05217af0240c11834a6af887fd\": container with ID starting with 20ec3c213befb25b3aa19d14eb1dfe78b13bbd05217af0240c11834a6af887fd not found: ID does not exist" containerID="20ec3c213befb25b3aa19d14eb1dfe78b13bbd05217af0240c11834a6af887fd" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.620025 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20ec3c213befb25b3aa19d14eb1dfe78b13bbd05217af0240c11834a6af887fd"} err="failed to get container status \"20ec3c213befb25b3aa19d14eb1dfe78b13bbd05217af0240c11834a6af887fd\": rpc error: code = NotFound desc = could not find container \"20ec3c213befb25b3aa19d14eb1dfe78b13bbd05217af0240c11834a6af887fd\": container with ID starting with 20ec3c213befb25b3aa19d14eb1dfe78b13bbd05217af0240c11834a6af887fd not found: ID does not exist" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.620079 4826 scope.go:117] "RemoveContainer" containerID="ad9f3a3524d2684b5b1d2fb252780e4bbf8a954dab97e9d9ff12fef90098b28e" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 
08:23:44.620388 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad9f3a3524d2684b5b1d2fb252780e4bbf8a954dab97e9d9ff12fef90098b28e"} err="failed to get container status \"ad9f3a3524d2684b5b1d2fb252780e4bbf8a954dab97e9d9ff12fef90098b28e\": rpc error: code = NotFound desc = could not find container \"ad9f3a3524d2684b5b1d2fb252780e4bbf8a954dab97e9d9ff12fef90098b28e\": container with ID starting with ad9f3a3524d2684b5b1d2fb252780e4bbf8a954dab97e9d9ff12fef90098b28e not found: ID does not exist" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.620410 4826 scope.go:117] "RemoveContainer" containerID="aa8d3be93cdc464aa3055aa57b72694d9c81d085e044a68f81ec713b5e288431" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.620706 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa8d3be93cdc464aa3055aa57b72694d9c81d085e044a68f81ec713b5e288431"} err="failed to get container status \"aa8d3be93cdc464aa3055aa57b72694d9c81d085e044a68f81ec713b5e288431\": rpc error: code = NotFound desc = could not find container \"aa8d3be93cdc464aa3055aa57b72694d9c81d085e044a68f81ec713b5e288431\": container with ID starting with aa8d3be93cdc464aa3055aa57b72694d9c81d085e044a68f81ec713b5e288431 not found: ID does not exist" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.620745 4826 scope.go:117] "RemoveContainer" containerID="45415862e9479412acbc42002b602af8a4f5a62b1be6cdc5ef72416accdf224f" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.621194 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45415862e9479412acbc42002b602af8a4f5a62b1be6cdc5ef72416accdf224f"} err="failed to get container status \"45415862e9479412acbc42002b602af8a4f5a62b1be6cdc5ef72416accdf224f\": rpc error: code = NotFound desc = could not find container \"45415862e9479412acbc42002b602af8a4f5a62b1be6cdc5ef72416accdf224f\": container with ID starting with 
45415862e9479412acbc42002b602af8a4f5a62b1be6cdc5ef72416accdf224f not found: ID does not exist" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.621251 4826 scope.go:117] "RemoveContainer" containerID="20ec3c213befb25b3aa19d14eb1dfe78b13bbd05217af0240c11834a6af887fd" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.621561 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20ec3c213befb25b3aa19d14eb1dfe78b13bbd05217af0240c11834a6af887fd"} err="failed to get container status \"20ec3c213befb25b3aa19d14eb1dfe78b13bbd05217af0240c11834a6af887fd\": rpc error: code = NotFound desc = could not find container \"20ec3c213befb25b3aa19d14eb1dfe78b13bbd05217af0240c11834a6af887fd\": container with ID starting with 20ec3c213befb25b3aa19d14eb1dfe78b13bbd05217af0240c11834a6af887fd not found: ID does not exist" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.621587 4826 scope.go:117] "RemoveContainer" containerID="ad9f3a3524d2684b5b1d2fb252780e4bbf8a954dab97e9d9ff12fef90098b28e" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.621952 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad9f3a3524d2684b5b1d2fb252780e4bbf8a954dab97e9d9ff12fef90098b28e"} err="failed to get container status \"ad9f3a3524d2684b5b1d2fb252780e4bbf8a954dab97e9d9ff12fef90098b28e\": rpc error: code = NotFound desc = could not find container \"ad9f3a3524d2684b5b1d2fb252780e4bbf8a954dab97e9d9ff12fef90098b28e\": container with ID starting with ad9f3a3524d2684b5b1d2fb252780e4bbf8a954dab97e9d9ff12fef90098b28e not found: ID does not exist" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.621975 4826 scope.go:117] "RemoveContainer" containerID="aa8d3be93cdc464aa3055aa57b72694d9c81d085e044a68f81ec713b5e288431" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.622760 4826 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"aa8d3be93cdc464aa3055aa57b72694d9c81d085e044a68f81ec713b5e288431"} err="failed to get container status \"aa8d3be93cdc464aa3055aa57b72694d9c81d085e044a68f81ec713b5e288431\": rpc error: code = NotFound desc = could not find container \"aa8d3be93cdc464aa3055aa57b72694d9c81d085e044a68f81ec713b5e288431\": container with ID starting with aa8d3be93cdc464aa3055aa57b72694d9c81d085e044a68f81ec713b5e288431 not found: ID does not exist" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.622789 4826 scope.go:117] "RemoveContainer" containerID="45415862e9479412acbc42002b602af8a4f5a62b1be6cdc5ef72416accdf224f" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.623098 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45415862e9479412acbc42002b602af8a4f5a62b1be6cdc5ef72416accdf224f"} err="failed to get container status \"45415862e9479412acbc42002b602af8a4f5a62b1be6cdc5ef72416accdf224f\": rpc error: code = NotFound desc = could not find container \"45415862e9479412acbc42002b602af8a4f5a62b1be6cdc5ef72416accdf224f\": container with ID starting with 45415862e9479412acbc42002b602af8a4f5a62b1be6cdc5ef72416accdf224f not found: ID does not exist" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.623122 4826 scope.go:117] "RemoveContainer" containerID="20ec3c213befb25b3aa19d14eb1dfe78b13bbd05217af0240c11834a6af887fd" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.623814 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20ec3c213befb25b3aa19d14eb1dfe78b13bbd05217af0240c11834a6af887fd"} err="failed to get container status \"20ec3c213befb25b3aa19d14eb1dfe78b13bbd05217af0240c11834a6af887fd\": rpc error: code = NotFound desc = could not find container \"20ec3c213befb25b3aa19d14eb1dfe78b13bbd05217af0240c11834a6af887fd\": container with ID starting with 20ec3c213befb25b3aa19d14eb1dfe78b13bbd05217af0240c11834a6af887fd not found: ID does not 
exist" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.665248 4826 reconciler_common.go:293] "Volume detached for volume \"pvc-2a194a78-1c85-4c21-8388-f45fb09b16b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a194a78-1c85-4c21-8388-f45fb09b16b9\") on node \"crc\" DevicePath \"\"" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.804056 4826 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="0a7f33ac-d626-4959-b993-480af4a5bb66" podUID="5f1a4fb9-7b51-4a6c-b594-8d8d98666063" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.859028 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a7f33ac-d626-4959-b993-480af4a5bb66" path="/var/lib/kubelet/pods/0a7f33ac-d626-4959-b993-480af4a5bb66/volumes" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.859716 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.878602 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.901345 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 29 08:23:44 crc kubenswrapper[4826]: E0129 08:23:44.901885 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11d855ca-3051-4d20-9f8d-d37ed9e4625b" containerName="thanos-sidecar" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.901902 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="11d855ca-3051-4d20-9f8d-d37ed9e4625b" containerName="thanos-sidecar" Jan 29 08:23:44 crc kubenswrapper[4826]: E0129 08:23:44.901921 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11d855ca-3051-4d20-9f8d-d37ed9e4625b" containerName="init-config-reloader" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.901946 
4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="11d855ca-3051-4d20-9f8d-d37ed9e4625b" containerName="init-config-reloader" Jan 29 08:23:44 crc kubenswrapper[4826]: E0129 08:23:44.901958 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11d855ca-3051-4d20-9f8d-d37ed9e4625b" containerName="config-reloader" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.901964 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="11d855ca-3051-4d20-9f8d-d37ed9e4625b" containerName="config-reloader" Jan 29 08:23:44 crc kubenswrapper[4826]: E0129 08:23:44.901977 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11d855ca-3051-4d20-9f8d-d37ed9e4625b" containerName="prometheus" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.901984 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="11d855ca-3051-4d20-9f8d-d37ed9e4625b" containerName="prometheus" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.902281 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="11d855ca-3051-4d20-9f8d-d37ed9e4625b" containerName="config-reloader" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.902343 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="11d855ca-3051-4d20-9f8d-d37ed9e4625b" containerName="prometheus" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.902354 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="11d855ca-3051-4d20-9f8d-d37ed9e4625b" containerName="thanos-sidecar" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.904914 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.911815 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.913122 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.913986 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.918843 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.919147 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.919381 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.921859 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.922004 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-m6prc" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.922674 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 29 08:23:44 crc kubenswrapper[4826]: I0129 08:23:44.927944 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 29 08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.076826 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6faa35a9-8847-44ab-b28a-abbbd186ca7c-config\") pod \"prometheus-metric-storage-0\" (UID: \"6faa35a9-8847-44ab-b28a-abbbd186ca7c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.076864 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6faa35a9-8847-44ab-b28a-abbbd186ca7c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"6faa35a9-8847-44ab-b28a-abbbd186ca7c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.076884 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6faa35a9-8847-44ab-b28a-abbbd186ca7c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"6faa35a9-8847-44ab-b28a-abbbd186ca7c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.076902 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/6faa35a9-8847-44ab-b28a-abbbd186ca7c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"6faa35a9-8847-44ab-b28a-abbbd186ca7c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.076987 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6faa35a9-8847-44ab-b28a-abbbd186ca7c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"6faa35a9-8847-44ab-b28a-abbbd186ca7c\") " 
pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.077011 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6faa35a9-8847-44ab-b28a-abbbd186ca7c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"6faa35a9-8847-44ab-b28a-abbbd186ca7c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.077040 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f774l\" (UniqueName: \"kubernetes.io/projected/6faa35a9-8847-44ab-b28a-abbbd186ca7c-kube-api-access-f774l\") pod \"prometheus-metric-storage-0\" (UID: \"6faa35a9-8847-44ab-b28a-abbbd186ca7c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.077067 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6faa35a9-8847-44ab-b28a-abbbd186ca7c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"6faa35a9-8847-44ab-b28a-abbbd186ca7c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.077087 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6faa35a9-8847-44ab-b28a-abbbd186ca7c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"6faa35a9-8847-44ab-b28a-abbbd186ca7c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.077104 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/6faa35a9-8847-44ab-b28a-abbbd186ca7c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"6faa35a9-8847-44ab-b28a-abbbd186ca7c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.077290 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6faa35a9-8847-44ab-b28a-abbbd186ca7c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"6faa35a9-8847-44ab-b28a-abbbd186ca7c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.077422 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6faa35a9-8847-44ab-b28a-abbbd186ca7c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"6faa35a9-8847-44ab-b28a-abbbd186ca7c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.077481 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2a194a78-1c85-4c21-8388-f45fb09b16b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a194a78-1c85-4c21-8388-f45fb09b16b9\") pod \"prometheus-metric-storage-0\" (UID: \"6faa35a9-8847-44ab-b28a-abbbd186ca7c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.178884 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6faa35a9-8847-44ab-b28a-abbbd186ca7c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"6faa35a9-8847-44ab-b28a-abbbd186ca7c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 
08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.179263 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/6faa35a9-8847-44ab-b28a-abbbd186ca7c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"6faa35a9-8847-44ab-b28a-abbbd186ca7c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.179405 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6faa35a9-8847-44ab-b28a-abbbd186ca7c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"6faa35a9-8847-44ab-b28a-abbbd186ca7c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.179432 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6faa35a9-8847-44ab-b28a-abbbd186ca7c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"6faa35a9-8847-44ab-b28a-abbbd186ca7c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.179981 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/6faa35a9-8847-44ab-b28a-abbbd186ca7c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"6faa35a9-8847-44ab-b28a-abbbd186ca7c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.180139 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/6faa35a9-8847-44ab-b28a-abbbd186ca7c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"6faa35a9-8847-44ab-b28a-abbbd186ca7c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.180185 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f774l\" (UniqueName: \"kubernetes.io/projected/6faa35a9-8847-44ab-b28a-abbbd186ca7c-kube-api-access-f774l\") pod \"prometheus-metric-storage-0\" (UID: \"6faa35a9-8847-44ab-b28a-abbbd186ca7c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.180256 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6faa35a9-8847-44ab-b28a-abbbd186ca7c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"6faa35a9-8847-44ab-b28a-abbbd186ca7c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.180289 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6faa35a9-8847-44ab-b28a-abbbd186ca7c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"6faa35a9-8847-44ab-b28a-abbbd186ca7c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.180973 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/6faa35a9-8847-44ab-b28a-abbbd186ca7c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"6faa35a9-8847-44ab-b28a-abbbd186ca7c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.181095 4826 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6faa35a9-8847-44ab-b28a-abbbd186ca7c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"6faa35a9-8847-44ab-b28a-abbbd186ca7c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.181185 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6faa35a9-8847-44ab-b28a-abbbd186ca7c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"6faa35a9-8847-44ab-b28a-abbbd186ca7c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.181335 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2a194a78-1c85-4c21-8388-f45fb09b16b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a194a78-1c85-4c21-8388-f45fb09b16b9\") pod \"prometheus-metric-storage-0\" (UID: \"6faa35a9-8847-44ab-b28a-abbbd186ca7c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.181423 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6faa35a9-8847-44ab-b28a-abbbd186ca7c-config\") pod \"prometheus-metric-storage-0\" (UID: \"6faa35a9-8847-44ab-b28a-abbbd186ca7c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.181454 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6faa35a9-8847-44ab-b28a-abbbd186ca7c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"6faa35a9-8847-44ab-b28a-abbbd186ca7c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.181924 
4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/6faa35a9-8847-44ab-b28a-abbbd186ca7c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"6faa35a9-8847-44ab-b28a-abbbd186ca7c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.185216 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6faa35a9-8847-44ab-b28a-abbbd186ca7c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"6faa35a9-8847-44ab-b28a-abbbd186ca7c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.185292 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6faa35a9-8847-44ab-b28a-abbbd186ca7c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"6faa35a9-8847-44ab-b28a-abbbd186ca7c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.185535 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/6faa35a9-8847-44ab-b28a-abbbd186ca7c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"6faa35a9-8847-44ab-b28a-abbbd186ca7c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.185724 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6faa35a9-8847-44ab-b28a-abbbd186ca7c-config-out\") pod \"prometheus-metric-storage-0\" (UID: 
\"6faa35a9-8847-44ab-b28a-abbbd186ca7c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.186033 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6faa35a9-8847-44ab-b28a-abbbd186ca7c-config\") pod \"prometheus-metric-storage-0\" (UID: \"6faa35a9-8847-44ab-b28a-abbbd186ca7c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.200154 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6faa35a9-8847-44ab-b28a-abbbd186ca7c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"6faa35a9-8847-44ab-b28a-abbbd186ca7c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.200534 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f774l\" (UniqueName: \"kubernetes.io/projected/6faa35a9-8847-44ab-b28a-abbbd186ca7c-kube-api-access-f774l\") pod \"prometheus-metric-storage-0\" (UID: \"6faa35a9-8847-44ab-b28a-abbbd186ca7c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.201279 4826 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 29 08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.201333 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2a194a78-1c85-4c21-8388-f45fb09b16b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a194a78-1c85-4c21-8388-f45fb09b16b9\") pod \"prometheus-metric-storage-0\" (UID: \"6faa35a9-8847-44ab-b28a-abbbd186ca7c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2f0922ad209e8831e6be213ef5b49bdb241892dd0a1d9521e73a47a97d5044c0/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.226550 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6faa35a9-8847-44ab-b28a-abbbd186ca7c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"6faa35a9-8847-44ab-b28a-abbbd186ca7c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.232153 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6faa35a9-8847-44ab-b28a-abbbd186ca7c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"6faa35a9-8847-44ab-b28a-abbbd186ca7c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.261212 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2a194a78-1c85-4c21-8388-f45fb09b16b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a194a78-1c85-4c21-8388-f45fb09b16b9\") pod \"prometheus-metric-storage-0\" (UID: \"6faa35a9-8847-44ab-b28a-abbbd186ca7c\") " pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:45 crc kubenswrapper[4826]: I0129 08:23:45.523970 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 29 08:23:46 crc kubenswrapper[4826]: I0129 08:23:46.824524 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11d855ca-3051-4d20-9f8d-d37ed9e4625b" path="/var/lib/kubelet/pods/11d855ca-3051-4d20-9f8d-d37ed9e4625b/volumes" Jan 29 08:23:47 crc kubenswrapper[4826]: I0129 08:23:47.492275 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2a9f639-f310-41f3-8b53-c713d66b71be","Type":"ContainerStarted","Data":"b470ecdc91338a6f117f41269050c5e514ed54939a60ce5065798c0645a29ab1"} Jan 29 08:23:47 crc kubenswrapper[4826]: W0129 08:23:47.606406 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6faa35a9_8847_44ab_b28a_abbbd186ca7c.slice/crio-b7237bc24e81afda74e5d75e30fc1118c2935b4c5986b6a5b6acbbff71638070 WatchSource:0}: Error finding container b7237bc24e81afda74e5d75e30fc1118c2935b4c5986b6a5b6acbbff71638070: Status 404 returned error can't find the container with id b7237bc24e81afda74e5d75e30fc1118c2935b4c5986b6a5b6acbbff71638070 Jan 29 08:23:47 crc kubenswrapper[4826]: I0129 08:23:47.618964 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 29 08:23:47 crc kubenswrapper[4826]: I0129 08:23:47.808926 4826 scope.go:117] "RemoveContainer" containerID="952b43d896912a3cda5e612a6c0f46d793202eff6352f27fa018ee360258c570" Jan 29 08:23:47 crc kubenswrapper[4826]: E0129 08:23:47.809244 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 
08:23:48 crc kubenswrapper[4826]: I0129 08:23:48.501550 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6faa35a9-8847-44ab-b28a-abbbd186ca7c","Type":"ContainerStarted","Data":"b7237bc24e81afda74e5d75e30fc1118c2935b4c5986b6a5b6acbbff71638070"} Jan 29 08:23:49 crc kubenswrapper[4826]: I0129 08:23:49.515366 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2a9f639-f310-41f3-8b53-c713d66b71be","Type":"ContainerStarted","Data":"c818bc03625d015f49708a8af5c5fb9f6a59d40ebc6004a45bc481d3dfc7f1e9"} Jan 29 08:23:49 crc kubenswrapper[4826]: I0129 08:23:49.515693 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2a9f639-f310-41f3-8b53-c713d66b71be","Type":"ContainerStarted","Data":"05eaf60dd946b722c85e4c18a229dd52f904e39ef0dd6d6d3d61bff6b6846edc"} Jan 29 08:23:51 crc kubenswrapper[4826]: I0129 08:23:51.540043 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6faa35a9-8847-44ab-b28a-abbbd186ca7c","Type":"ContainerStarted","Data":"d8deda3a60fdc20924364b099da742568cbd389295784df5b5cb1dd0c84ceafc"} Jan 29 08:23:51 crc kubenswrapper[4826]: I0129 08:23:51.560642 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2a9f639-f310-41f3-8b53-c713d66b71be","Type":"ContainerStarted","Data":"b650b23a09fa6cfd50d21b53f9defe86eb629ddb6cc9f966daa7f15fc509ec9c"} Jan 29 08:23:51 crc kubenswrapper[4826]: I0129 08:23:51.561070 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 08:23:51 crc kubenswrapper[4826]: I0129 08:23:51.614525 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.366322442 podStartE2EDuration="10.614495708s" podCreationTimestamp="2026-01-29 08:23:41 +0000 UTC" firstStartedPulling="2026-01-29 
08:23:42.666891884 +0000 UTC m=+6006.528684953" lastFinishedPulling="2026-01-29 08:23:50.91506515 +0000 UTC m=+6014.776858219" observedRunningTime="2026-01-29 08:23:51.603336684 +0000 UTC m=+6015.465129793" watchObservedRunningTime="2026-01-29 08:23:51.614495708 +0000 UTC m=+6015.476288807" Jan 29 08:23:57 crc kubenswrapper[4826]: I0129 08:23:57.978559 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-54q62"] Jan 29 08:23:57 crc kubenswrapper[4826]: I0129 08:23:57.980688 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-54q62" Jan 29 08:23:58 crc kubenswrapper[4826]: I0129 08:23:58.009550 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-02a3-account-create-update-zq6qg"] Jan 29 08:23:58 crc kubenswrapper[4826]: I0129 08:23:58.011099 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-02a3-account-create-update-zq6qg" Jan 29 08:23:58 crc kubenswrapper[4826]: I0129 08:23:58.016176 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Jan 29 08:23:58 crc kubenswrapper[4826]: I0129 08:23:58.016391 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-54q62"] Jan 29 08:23:58 crc kubenswrapper[4826]: I0129 08:23:58.031112 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-02a3-account-create-update-zq6qg"] Jan 29 08:23:58 crc kubenswrapper[4826]: I0129 08:23:58.091873 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6khtf\" (UniqueName: \"kubernetes.io/projected/95b6faea-69ff-4fbb-9c6c-d5201b093b6c-kube-api-access-6khtf\") pod \"aodh-db-create-54q62\" (UID: \"95b6faea-69ff-4fbb-9c6c-d5201b093b6c\") " pod="openstack/aodh-db-create-54q62" Jan 29 08:23:58 crc kubenswrapper[4826]: I0129 08:23:58.092021 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95b6faea-69ff-4fbb-9c6c-d5201b093b6c-operator-scripts\") pod \"aodh-db-create-54q62\" (UID: \"95b6faea-69ff-4fbb-9c6c-d5201b093b6c\") " pod="openstack/aodh-db-create-54q62" Jan 29 08:23:58 crc kubenswrapper[4826]: I0129 08:23:58.193353 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzhfc\" (UniqueName: \"kubernetes.io/projected/92c9832e-da91-4ee0-b54b-0efbb5f5571c-kube-api-access-fzhfc\") pod \"aodh-02a3-account-create-update-zq6qg\" (UID: \"92c9832e-da91-4ee0-b54b-0efbb5f5571c\") " pod="openstack/aodh-02a3-account-create-update-zq6qg" Jan 29 08:23:58 crc kubenswrapper[4826]: I0129 08:23:58.193412 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95b6faea-69ff-4fbb-9c6c-d5201b093b6c-operator-scripts\") pod \"aodh-db-create-54q62\" (UID: \"95b6faea-69ff-4fbb-9c6c-d5201b093b6c\") " pod="openstack/aodh-db-create-54q62" Jan 29 08:23:58 crc kubenswrapper[4826]: I0129 08:23:58.193602 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92c9832e-da91-4ee0-b54b-0efbb5f5571c-operator-scripts\") pod \"aodh-02a3-account-create-update-zq6qg\" (UID: \"92c9832e-da91-4ee0-b54b-0efbb5f5571c\") " pod="openstack/aodh-02a3-account-create-update-zq6qg" Jan 29 08:23:58 crc kubenswrapper[4826]: I0129 08:23:58.193910 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6khtf\" (UniqueName: \"kubernetes.io/projected/95b6faea-69ff-4fbb-9c6c-d5201b093b6c-kube-api-access-6khtf\") pod \"aodh-db-create-54q62\" (UID: \"95b6faea-69ff-4fbb-9c6c-d5201b093b6c\") " pod="openstack/aodh-db-create-54q62" Jan 29 08:23:58 crc kubenswrapper[4826]: I0129 08:23:58.194333 
4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95b6faea-69ff-4fbb-9c6c-d5201b093b6c-operator-scripts\") pod \"aodh-db-create-54q62\" (UID: \"95b6faea-69ff-4fbb-9c6c-d5201b093b6c\") " pod="openstack/aodh-db-create-54q62" Jan 29 08:23:58 crc kubenswrapper[4826]: I0129 08:23:58.226159 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6khtf\" (UniqueName: \"kubernetes.io/projected/95b6faea-69ff-4fbb-9c6c-d5201b093b6c-kube-api-access-6khtf\") pod \"aodh-db-create-54q62\" (UID: \"95b6faea-69ff-4fbb-9c6c-d5201b093b6c\") " pod="openstack/aodh-db-create-54q62" Jan 29 08:23:58 crc kubenswrapper[4826]: I0129 08:23:58.298033 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzhfc\" (UniqueName: \"kubernetes.io/projected/92c9832e-da91-4ee0-b54b-0efbb5f5571c-kube-api-access-fzhfc\") pod \"aodh-02a3-account-create-update-zq6qg\" (UID: \"92c9832e-da91-4ee0-b54b-0efbb5f5571c\") " pod="openstack/aodh-02a3-account-create-update-zq6qg" Jan 29 08:23:58 crc kubenswrapper[4826]: I0129 08:23:58.298205 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92c9832e-da91-4ee0-b54b-0efbb5f5571c-operator-scripts\") pod \"aodh-02a3-account-create-update-zq6qg\" (UID: \"92c9832e-da91-4ee0-b54b-0efbb5f5571c\") " pod="openstack/aodh-02a3-account-create-update-zq6qg" Jan 29 08:23:58 crc kubenswrapper[4826]: I0129 08:23:58.299492 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92c9832e-da91-4ee0-b54b-0efbb5f5571c-operator-scripts\") pod \"aodh-02a3-account-create-update-zq6qg\" (UID: \"92c9832e-da91-4ee0-b54b-0efbb5f5571c\") " pod="openstack/aodh-02a3-account-create-update-zq6qg" Jan 29 08:23:58 crc kubenswrapper[4826]: I0129 08:23:58.300974 4826 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-54q62" Jan 29 08:23:58 crc kubenswrapper[4826]: I0129 08:23:58.315840 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzhfc\" (UniqueName: \"kubernetes.io/projected/92c9832e-da91-4ee0-b54b-0efbb5f5571c-kube-api-access-fzhfc\") pod \"aodh-02a3-account-create-update-zq6qg\" (UID: \"92c9832e-da91-4ee0-b54b-0efbb5f5571c\") " pod="openstack/aodh-02a3-account-create-update-zq6qg" Jan 29 08:23:58 crc kubenswrapper[4826]: I0129 08:23:58.363863 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-02a3-account-create-update-zq6qg" Jan 29 08:23:58 crc kubenswrapper[4826]: I0129 08:23:58.908985 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-02a3-account-create-update-zq6qg"] Jan 29 08:23:58 crc kubenswrapper[4826]: I0129 08:23:58.936261 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-54q62"] Jan 29 08:23:58 crc kubenswrapper[4826]: W0129 08:23:58.941928 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95b6faea_69ff_4fbb_9c6c_d5201b093b6c.slice/crio-a3bf8173ea43ace2a546ac54c8fd6f543cc054a96d0c86159da45b964177ac6f WatchSource:0}: Error finding container a3bf8173ea43ace2a546ac54c8fd6f543cc054a96d0c86159da45b964177ac6f: Status 404 returned error can't find the container with id a3bf8173ea43ace2a546ac54c8fd6f543cc054a96d0c86159da45b964177ac6f Jan 29 08:23:59 crc kubenswrapper[4826]: I0129 08:23:59.658068 4826 generic.go:334] "Generic (PLEG): container finished" podID="6faa35a9-8847-44ab-b28a-abbbd186ca7c" containerID="d8deda3a60fdc20924364b099da742568cbd389295784df5b5cb1dd0c84ceafc" exitCode=0 Jan 29 08:23:59 crc kubenswrapper[4826]: I0129 08:23:59.658254 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"6faa35a9-8847-44ab-b28a-abbbd186ca7c","Type":"ContainerDied","Data":"d8deda3a60fdc20924364b099da742568cbd389295784df5b5cb1dd0c84ceafc"} Jan 29 08:23:59 crc kubenswrapper[4826]: I0129 08:23:59.664552 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-02a3-account-create-update-zq6qg" event={"ID":"92c9832e-da91-4ee0-b54b-0efbb5f5571c","Type":"ContainerStarted","Data":"e2ec23a8caf9d0091e8c02ee2ea19174f09e40b0f7a9b2cf82e51dce1606fec3"} Jan 29 08:23:59 crc kubenswrapper[4826]: I0129 08:23:59.666559 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-54q62" event={"ID":"95b6faea-69ff-4fbb-9c6c-d5201b093b6c","Type":"ContainerStarted","Data":"a3bf8173ea43ace2a546ac54c8fd6f543cc054a96d0c86159da45b964177ac6f"} Jan 29 08:24:00 crc kubenswrapper[4826]: I0129 08:24:00.679442 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-54q62" event={"ID":"95b6faea-69ff-4fbb-9c6c-d5201b093b6c","Type":"ContainerStarted","Data":"f11d95c6a3506bac102d28e4d1d6c3d642ea919ea581c730f086e7490d4d3262"} Jan 29 08:24:01 crc kubenswrapper[4826]: I0129 08:24:01.693421 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6faa35a9-8847-44ab-b28a-abbbd186ca7c","Type":"ContainerStarted","Data":"b90f2475db9f3c080deedd104465aed482e5a28800aa4fd4bb26b5bd708c27c3"} Jan 29 08:24:01 crc kubenswrapper[4826]: I0129 08:24:01.697263 4826 generic.go:334] "Generic (PLEG): container finished" podID="92c9832e-da91-4ee0-b54b-0efbb5f5571c" containerID="27954068fd2f68787c91edab1814b6b0b82a761c0118ded0dfacb05f666fce25" exitCode=0 Jan 29 08:24:01 crc kubenswrapper[4826]: I0129 08:24:01.697373 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-02a3-account-create-update-zq6qg" event={"ID":"92c9832e-da91-4ee0-b54b-0efbb5f5571c","Type":"ContainerDied","Data":"27954068fd2f68787c91edab1814b6b0b82a761c0118ded0dfacb05f666fce25"} Jan 29 08:24:01 crc 
kubenswrapper[4826]: I0129 08:24:01.700239 4826 generic.go:334] "Generic (PLEG): container finished" podID="95b6faea-69ff-4fbb-9c6c-d5201b093b6c" containerID="f11d95c6a3506bac102d28e4d1d6c3d642ea919ea581c730f086e7490d4d3262" exitCode=0 Jan 29 08:24:01 crc kubenswrapper[4826]: I0129 08:24:01.700314 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-54q62" event={"ID":"95b6faea-69ff-4fbb-9c6c-d5201b093b6c","Type":"ContainerDied","Data":"f11d95c6a3506bac102d28e4d1d6c3d642ea919ea581c730f086e7490d4d3262"} Jan 29 08:24:01 crc kubenswrapper[4826]: I0129 08:24:01.808644 4826 scope.go:117] "RemoveContainer" containerID="952b43d896912a3cda5e612a6c0f46d793202eff6352f27fa018ee360258c570" Jan 29 08:24:01 crc kubenswrapper[4826]: E0129 08:24:01.809079 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:24:03 crc kubenswrapper[4826]: I0129 08:24:03.333653 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-02a3-account-create-update-zq6qg" Jan 29 08:24:03 crc kubenswrapper[4826]: I0129 08:24:03.340687 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-54q62" Jan 29 08:24:03 crc kubenswrapper[4826]: I0129 08:24:03.496540 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6khtf\" (UniqueName: \"kubernetes.io/projected/95b6faea-69ff-4fbb-9c6c-d5201b093b6c-kube-api-access-6khtf\") pod \"95b6faea-69ff-4fbb-9c6c-d5201b093b6c\" (UID: \"95b6faea-69ff-4fbb-9c6c-d5201b093b6c\") " Jan 29 08:24:03 crc kubenswrapper[4826]: I0129 08:24:03.496836 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92c9832e-da91-4ee0-b54b-0efbb5f5571c-operator-scripts\") pod \"92c9832e-da91-4ee0-b54b-0efbb5f5571c\" (UID: \"92c9832e-da91-4ee0-b54b-0efbb5f5571c\") " Jan 29 08:24:03 crc kubenswrapper[4826]: I0129 08:24:03.496918 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95b6faea-69ff-4fbb-9c6c-d5201b093b6c-operator-scripts\") pod \"95b6faea-69ff-4fbb-9c6c-d5201b093b6c\" (UID: \"95b6faea-69ff-4fbb-9c6c-d5201b093b6c\") " Jan 29 08:24:03 crc kubenswrapper[4826]: I0129 08:24:03.496943 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzhfc\" (UniqueName: \"kubernetes.io/projected/92c9832e-da91-4ee0-b54b-0efbb5f5571c-kube-api-access-fzhfc\") pod \"92c9832e-da91-4ee0-b54b-0efbb5f5571c\" (UID: \"92c9832e-da91-4ee0-b54b-0efbb5f5571c\") " Jan 29 08:24:03 crc kubenswrapper[4826]: I0129 08:24:03.497653 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92c9832e-da91-4ee0-b54b-0efbb5f5571c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "92c9832e-da91-4ee0-b54b-0efbb5f5571c" (UID: "92c9832e-da91-4ee0-b54b-0efbb5f5571c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:24:03 crc kubenswrapper[4826]: I0129 08:24:03.497668 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95b6faea-69ff-4fbb-9c6c-d5201b093b6c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "95b6faea-69ff-4fbb-9c6c-d5201b093b6c" (UID: "95b6faea-69ff-4fbb-9c6c-d5201b093b6c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:24:03 crc kubenswrapper[4826]: I0129 08:24:03.594192 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92c9832e-da91-4ee0-b54b-0efbb5f5571c-kube-api-access-fzhfc" (OuterVolumeSpecName: "kube-api-access-fzhfc") pod "92c9832e-da91-4ee0-b54b-0efbb5f5571c" (UID: "92c9832e-da91-4ee0-b54b-0efbb5f5571c"). InnerVolumeSpecName "kube-api-access-fzhfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:24:03 crc kubenswrapper[4826]: I0129 08:24:03.594736 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95b6faea-69ff-4fbb-9c6c-d5201b093b6c-kube-api-access-6khtf" (OuterVolumeSpecName: "kube-api-access-6khtf") pod "95b6faea-69ff-4fbb-9c6c-d5201b093b6c" (UID: "95b6faea-69ff-4fbb-9c6c-d5201b093b6c"). InnerVolumeSpecName "kube-api-access-6khtf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:24:03 crc kubenswrapper[4826]: I0129 08:24:03.598012 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92c9832e-da91-4ee0-b54b-0efbb5f5571c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:24:03 crc kubenswrapper[4826]: I0129 08:24:03.598034 4826 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95b6faea-69ff-4fbb-9c6c-d5201b093b6c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:24:03 crc kubenswrapper[4826]: I0129 08:24:03.598046 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzhfc\" (UniqueName: \"kubernetes.io/projected/92c9832e-da91-4ee0-b54b-0efbb5f5571c-kube-api-access-fzhfc\") on node \"crc\" DevicePath \"\"" Jan 29 08:24:03 crc kubenswrapper[4826]: I0129 08:24:03.598056 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6khtf\" (UniqueName: \"kubernetes.io/projected/95b6faea-69ff-4fbb-9c6c-d5201b093b6c-kube-api-access-6khtf\") on node \"crc\" DevicePath \"\"" Jan 29 08:24:03 crc kubenswrapper[4826]: I0129 08:24:03.728416 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-02a3-account-create-update-zq6qg" event={"ID":"92c9832e-da91-4ee0-b54b-0efbb5f5571c","Type":"ContainerDied","Data":"e2ec23a8caf9d0091e8c02ee2ea19174f09e40b0f7a9b2cf82e51dce1606fec3"} Jan 29 08:24:03 crc kubenswrapper[4826]: I0129 08:24:03.728525 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2ec23a8caf9d0091e8c02ee2ea19174f09e40b0f7a9b2cf82e51dce1606fec3" Jan 29 08:24:03 crc kubenswrapper[4826]: I0129 08:24:03.728651 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-02a3-account-create-update-zq6qg" Jan 29 08:24:03 crc kubenswrapper[4826]: I0129 08:24:03.732022 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-54q62" event={"ID":"95b6faea-69ff-4fbb-9c6c-d5201b093b6c","Type":"ContainerDied","Data":"a3bf8173ea43ace2a546ac54c8fd6f543cc054a96d0c86159da45b964177ac6f"} Jan 29 08:24:03 crc kubenswrapper[4826]: I0129 08:24:03.732075 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3bf8173ea43ace2a546ac54c8fd6f543cc054a96d0c86159da45b964177ac6f" Jan 29 08:24:03 crc kubenswrapper[4826]: I0129 08:24:03.732142 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-54q62" Jan 29 08:24:05 crc kubenswrapper[4826]: I0129 08:24:05.804339 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6faa35a9-8847-44ab-b28a-abbbd186ca7c","Type":"ContainerStarted","Data":"e51ee13f2c6a90d46e7b454b78307d5512faa863adf5d09393b6018899179f65"} Jan 29 08:24:05 crc kubenswrapper[4826]: I0129 08:24:05.804773 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6faa35a9-8847-44ab-b28a-abbbd186ca7c","Type":"ContainerStarted","Data":"d06ed044d93a14efc773d211a03a56a67730a85aba6bbc71ffeea5edce85de9c"} Jan 29 08:24:08 crc kubenswrapper[4826]: I0129 08:24:08.336931 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=24.336911423 podStartE2EDuration="24.336911423s" podCreationTimestamp="2026-01-29 08:23:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:24:05.850762247 +0000 UTC m=+6029.712555326" watchObservedRunningTime="2026-01-29 08:24:08.336911423 +0000 UTC m=+6032.198704492" Jan 29 08:24:08 
crc kubenswrapper[4826]: I0129 08:24:08.341269 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-q46qd"] Jan 29 08:24:08 crc kubenswrapper[4826]: E0129 08:24:08.341903 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92c9832e-da91-4ee0-b54b-0efbb5f5571c" containerName="mariadb-account-create-update" Jan 29 08:24:08 crc kubenswrapper[4826]: I0129 08:24:08.342002 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="92c9832e-da91-4ee0-b54b-0efbb5f5571c" containerName="mariadb-account-create-update" Jan 29 08:24:08 crc kubenswrapper[4826]: E0129 08:24:08.342113 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95b6faea-69ff-4fbb-9c6c-d5201b093b6c" containerName="mariadb-database-create" Jan 29 08:24:08 crc kubenswrapper[4826]: I0129 08:24:08.342188 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="95b6faea-69ff-4fbb-9c6c-d5201b093b6c" containerName="mariadb-database-create" Jan 29 08:24:08 crc kubenswrapper[4826]: I0129 08:24:08.342519 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="92c9832e-da91-4ee0-b54b-0efbb5f5571c" containerName="mariadb-account-create-update" Jan 29 08:24:08 crc kubenswrapper[4826]: I0129 08:24:08.342599 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="95b6faea-69ff-4fbb-9c6c-d5201b093b6c" containerName="mariadb-database-create" Jan 29 08:24:08 crc kubenswrapper[4826]: I0129 08:24:08.343280 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-q46qd" Jan 29 08:24:08 crc kubenswrapper[4826]: I0129 08:24:08.347133 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 29 08:24:08 crc kubenswrapper[4826]: I0129 08:24:08.347179 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 29 08:24:08 crc kubenswrapper[4826]: I0129 08:24:08.347595 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-gw9tj" Jan 29 08:24:08 crc kubenswrapper[4826]: I0129 08:24:08.347830 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 29 08:24:08 crc kubenswrapper[4826]: I0129 08:24:08.355924 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-q46qd"] Jan 29 08:24:08 crc kubenswrapper[4826]: I0129 08:24:08.517202 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h67z\" (UniqueName: \"kubernetes.io/projected/dbce1eaf-5632-4daf-b860-e8bc1199ed0f-kube-api-access-2h67z\") pod \"aodh-db-sync-q46qd\" (UID: \"dbce1eaf-5632-4daf-b860-e8bc1199ed0f\") " pod="openstack/aodh-db-sync-q46qd" Jan 29 08:24:08 crc kubenswrapper[4826]: I0129 08:24:08.517286 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbce1eaf-5632-4daf-b860-e8bc1199ed0f-scripts\") pod \"aodh-db-sync-q46qd\" (UID: \"dbce1eaf-5632-4daf-b860-e8bc1199ed0f\") " pod="openstack/aodh-db-sync-q46qd" Jan 29 08:24:08 crc kubenswrapper[4826]: I0129 08:24:08.517699 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbce1eaf-5632-4daf-b860-e8bc1199ed0f-config-data\") pod \"aodh-db-sync-q46qd\" (UID: \"dbce1eaf-5632-4daf-b860-e8bc1199ed0f\") " 
pod="openstack/aodh-db-sync-q46qd" Jan 29 08:24:08 crc kubenswrapper[4826]: I0129 08:24:08.517781 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbce1eaf-5632-4daf-b860-e8bc1199ed0f-combined-ca-bundle\") pod \"aodh-db-sync-q46qd\" (UID: \"dbce1eaf-5632-4daf-b860-e8bc1199ed0f\") " pod="openstack/aodh-db-sync-q46qd" Jan 29 08:24:08 crc kubenswrapper[4826]: I0129 08:24:08.619226 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbce1eaf-5632-4daf-b860-e8bc1199ed0f-config-data\") pod \"aodh-db-sync-q46qd\" (UID: \"dbce1eaf-5632-4daf-b860-e8bc1199ed0f\") " pod="openstack/aodh-db-sync-q46qd" Jan 29 08:24:08 crc kubenswrapper[4826]: I0129 08:24:08.619589 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbce1eaf-5632-4daf-b860-e8bc1199ed0f-combined-ca-bundle\") pod \"aodh-db-sync-q46qd\" (UID: \"dbce1eaf-5632-4daf-b860-e8bc1199ed0f\") " pod="openstack/aodh-db-sync-q46qd" Jan 29 08:24:08 crc kubenswrapper[4826]: I0129 08:24:08.619747 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h67z\" (UniqueName: \"kubernetes.io/projected/dbce1eaf-5632-4daf-b860-e8bc1199ed0f-kube-api-access-2h67z\") pod \"aodh-db-sync-q46qd\" (UID: \"dbce1eaf-5632-4daf-b860-e8bc1199ed0f\") " pod="openstack/aodh-db-sync-q46qd" Jan 29 08:24:08 crc kubenswrapper[4826]: I0129 08:24:08.619879 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbce1eaf-5632-4daf-b860-e8bc1199ed0f-scripts\") pod \"aodh-db-sync-q46qd\" (UID: \"dbce1eaf-5632-4daf-b860-e8bc1199ed0f\") " pod="openstack/aodh-db-sync-q46qd" Jan 29 08:24:08 crc kubenswrapper[4826]: I0129 08:24:08.627568 4826 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbce1eaf-5632-4daf-b860-e8bc1199ed0f-config-data\") pod \"aodh-db-sync-q46qd\" (UID: \"dbce1eaf-5632-4daf-b860-e8bc1199ed0f\") " pod="openstack/aodh-db-sync-q46qd" Jan 29 08:24:08 crc kubenswrapper[4826]: I0129 08:24:08.628713 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbce1eaf-5632-4daf-b860-e8bc1199ed0f-combined-ca-bundle\") pod \"aodh-db-sync-q46qd\" (UID: \"dbce1eaf-5632-4daf-b860-e8bc1199ed0f\") " pod="openstack/aodh-db-sync-q46qd" Jan 29 08:24:08 crc kubenswrapper[4826]: I0129 08:24:08.628963 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbce1eaf-5632-4daf-b860-e8bc1199ed0f-scripts\") pod \"aodh-db-sync-q46qd\" (UID: \"dbce1eaf-5632-4daf-b860-e8bc1199ed0f\") " pod="openstack/aodh-db-sync-q46qd" Jan 29 08:24:08 crc kubenswrapper[4826]: I0129 08:24:08.639116 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h67z\" (UniqueName: \"kubernetes.io/projected/dbce1eaf-5632-4daf-b860-e8bc1199ed0f-kube-api-access-2h67z\") pod \"aodh-db-sync-q46qd\" (UID: \"dbce1eaf-5632-4daf-b860-e8bc1199ed0f\") " pod="openstack/aodh-db-sync-q46qd" Jan 29 08:24:08 crc kubenswrapper[4826]: I0129 08:24:08.676836 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-q46qd" Jan 29 08:24:09 crc kubenswrapper[4826]: I0129 08:24:09.152910 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-q46qd"] Jan 29 08:24:09 crc kubenswrapper[4826]: W0129 08:24:09.154543 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbce1eaf_5632_4daf_b860_e8bc1199ed0f.slice/crio-ff15cfe1d57b83fc970251349f9901ec422c4858a63b0f59996cc99290c44c06 WatchSource:0}: Error finding container ff15cfe1d57b83fc970251349f9901ec422c4858a63b0f59996cc99290c44c06: Status 404 returned error can't find the container with id ff15cfe1d57b83fc970251349f9901ec422c4858a63b0f59996cc99290c44c06 Jan 29 08:24:09 crc kubenswrapper[4826]: I0129 08:24:09.852187 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-q46qd" event={"ID":"dbce1eaf-5632-4daf-b860-e8bc1199ed0f","Type":"ContainerStarted","Data":"ff15cfe1d57b83fc970251349f9901ec422c4858a63b0f59996cc99290c44c06"} Jan 29 08:24:10 crc kubenswrapper[4826]: I0129 08:24:10.524265 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 29 08:24:12 crc kubenswrapper[4826]: I0129 08:24:12.131263 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 29 08:24:12 crc kubenswrapper[4826]: I0129 08:24:12.809498 4826 scope.go:117] "RemoveContainer" containerID="952b43d896912a3cda5e612a6c0f46d793202eff6352f27fa018ee360258c570" Jan 29 08:24:12 crc kubenswrapper[4826]: E0129 08:24:12.810009 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:24:14 crc kubenswrapper[4826]: I0129 08:24:14.064503 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-wjkh5"] Jan 29 08:24:14 crc kubenswrapper[4826]: I0129 08:24:14.080082 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-wjkh5"] Jan 29 08:24:14 crc kubenswrapper[4826]: I0129 08:24:14.823623 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db1c0417-3f99-43a6-b55c-8639e55cb922" path="/var/lib/kubelet/pods/db1c0417-3f99-43a6-b55c-8639e55cb922/volumes" Jan 29 08:24:15 crc kubenswrapper[4826]: I0129 08:24:15.524367 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 29 08:24:15 crc kubenswrapper[4826]: I0129 08:24:15.532276 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 29 08:24:15 crc kubenswrapper[4826]: I0129 08:24:15.919325 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 29 08:24:16 crc kubenswrapper[4826]: I0129 08:24:16.086853 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 08:24:16 crc kubenswrapper[4826]: I0129 08:24:16.087989 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="856cc4ca-f8ee-4dd3-a31f-626eac71a6a1" containerName="kube-state-metrics" containerID="cri-o://f390d0e5d6b3ed3316e8220240fe74bd1c2a55ab096c295995e4177e34fb40d8" gracePeriod=30 Jan 29 08:24:17 crc kubenswrapper[4826]: I0129 08:24:17.680948 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="856cc4ca-f8ee-4dd3-a31f-626eac71a6a1" containerName="kube-state-metrics" probeResult="failure" 
output="Get \"http://10.217.1.124:8081/readyz\": dial tcp 10.217.1.124:8081: connect: connection refused" Jan 29 08:24:17 crc kubenswrapper[4826]: I0129 08:24:17.952880 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 08:24:17 crc kubenswrapper[4826]: I0129 08:24:17.953216 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2a9f639-f310-41f3-8b53-c713d66b71be" containerName="ceilometer-central-agent" containerID="cri-o://b470ecdc91338a6f117f41269050c5e514ed54939a60ce5065798c0645a29ab1" gracePeriod=30 Jan 29 08:24:17 crc kubenswrapper[4826]: I0129 08:24:17.953248 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2a9f639-f310-41f3-8b53-c713d66b71be" containerName="ceilometer-notification-agent" containerID="cri-o://05eaf60dd946b722c85e4c18a229dd52f904e39ef0dd6d6d3d61bff6b6846edc" gracePeriod=30 Jan 29 08:24:17 crc kubenswrapper[4826]: I0129 08:24:17.953223 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2a9f639-f310-41f3-8b53-c713d66b71be" containerName="proxy-httpd" containerID="cri-o://b650b23a09fa6cfd50d21b53f9defe86eb629ddb6cc9f966daa7f15fc509ec9c" gracePeriod=30 Jan 29 08:24:17 crc kubenswrapper[4826]: I0129 08:24:17.953355 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2a9f639-f310-41f3-8b53-c713d66b71be" containerName="sg-core" containerID="cri-o://c818bc03625d015f49708a8af5c5fb9f6a59d40ebc6004a45bc481d3dfc7f1e9" gracePeriod=30 Jan 29 08:24:18 crc kubenswrapper[4826]: I0129 08:24:18.945216 4826 generic.go:334] "Generic (PLEG): container finished" podID="f2a9f639-f310-41f3-8b53-c713d66b71be" containerID="c818bc03625d015f49708a8af5c5fb9f6a59d40ebc6004a45bc481d3dfc7f1e9" exitCode=2 Jan 29 08:24:18 crc kubenswrapper[4826]: I0129 08:24:18.945359 4826 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2a9f639-f310-41f3-8b53-c713d66b71be","Type":"ContainerDied","Data":"c818bc03625d015f49708a8af5c5fb9f6a59d40ebc6004a45bc481d3dfc7f1e9"} Jan 29 08:24:18 crc kubenswrapper[4826]: I0129 08:24:18.948240 4826 generic.go:334] "Generic (PLEG): container finished" podID="856cc4ca-f8ee-4dd3-a31f-626eac71a6a1" containerID="f390d0e5d6b3ed3316e8220240fe74bd1c2a55ab096c295995e4177e34fb40d8" exitCode=2 Jan 29 08:24:18 crc kubenswrapper[4826]: I0129 08:24:18.948343 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"856cc4ca-f8ee-4dd3-a31f-626eac71a6a1","Type":"ContainerDied","Data":"f390d0e5d6b3ed3316e8220240fe74bd1c2a55ab096c295995e4177e34fb40d8"} Jan 29 08:24:19 crc kubenswrapper[4826]: I0129 08:24:19.977267 4826 generic.go:334] "Generic (PLEG): container finished" podID="f2a9f639-f310-41f3-8b53-c713d66b71be" containerID="b650b23a09fa6cfd50d21b53f9defe86eb629ddb6cc9f966daa7f15fc509ec9c" exitCode=0 Jan 29 08:24:19 crc kubenswrapper[4826]: I0129 08:24:19.978141 4826 generic.go:334] "Generic (PLEG): container finished" podID="f2a9f639-f310-41f3-8b53-c713d66b71be" containerID="b470ecdc91338a6f117f41269050c5e514ed54939a60ce5065798c0645a29ab1" exitCode=0 Jan 29 08:24:19 crc kubenswrapper[4826]: I0129 08:24:19.977561 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2a9f639-f310-41f3-8b53-c713d66b71be","Type":"ContainerDied","Data":"b650b23a09fa6cfd50d21b53f9defe86eb629ddb6cc9f966daa7f15fc509ec9c"} Jan 29 08:24:19 crc kubenswrapper[4826]: I0129 08:24:19.979117 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2a9f639-f310-41f3-8b53-c713d66b71be","Type":"ContainerDied","Data":"b470ecdc91338a6f117f41269050c5e514ed54939a60ce5065798c0645a29ab1"} Jan 29 08:24:20 crc kubenswrapper[4826]: I0129 08:24:20.557742 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 08:24:20 crc kubenswrapper[4826]: I0129 08:24:20.685911 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5flsw\" (UniqueName: \"kubernetes.io/projected/856cc4ca-f8ee-4dd3-a31f-626eac71a6a1-kube-api-access-5flsw\") pod \"856cc4ca-f8ee-4dd3-a31f-626eac71a6a1\" (UID: \"856cc4ca-f8ee-4dd3-a31f-626eac71a6a1\") " Jan 29 08:24:20 crc kubenswrapper[4826]: I0129 08:24:20.691131 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/856cc4ca-f8ee-4dd3-a31f-626eac71a6a1-kube-api-access-5flsw" (OuterVolumeSpecName: "kube-api-access-5flsw") pod "856cc4ca-f8ee-4dd3-a31f-626eac71a6a1" (UID: "856cc4ca-f8ee-4dd3-a31f-626eac71a6a1"). InnerVolumeSpecName "kube-api-access-5flsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:24:20 crc kubenswrapper[4826]: I0129 08:24:20.773459 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 08:24:20 crc kubenswrapper[4826]: I0129 08:24:20.788012 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5flsw\" (UniqueName: \"kubernetes.io/projected/856cc4ca-f8ee-4dd3-a31f-626eac71a6a1-kube-api-access-5flsw\") on node \"crc\" DevicePath \"\"" Jan 29 08:24:20 crc kubenswrapper[4826]: I0129 08:24:20.889743 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2a9f639-f310-41f3-8b53-c713d66b71be-sg-core-conf-yaml\") pod \"f2a9f639-f310-41f3-8b53-c713d66b71be\" (UID: \"f2a9f639-f310-41f3-8b53-c713d66b71be\") " Jan 29 08:24:20 crc kubenswrapper[4826]: I0129 08:24:20.890079 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2a9f639-f310-41f3-8b53-c713d66b71be-log-httpd\") pod \"f2a9f639-f310-41f3-8b53-c713d66b71be\" (UID: \"f2a9f639-f310-41f3-8b53-c713d66b71be\") " Jan 29 08:24:20 crc kubenswrapper[4826]: I0129 08:24:20.890125 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2a9f639-f310-41f3-8b53-c713d66b71be-config-data\") pod \"f2a9f639-f310-41f3-8b53-c713d66b71be\" (UID: \"f2a9f639-f310-41f3-8b53-c713d66b71be\") " Jan 29 08:24:20 crc kubenswrapper[4826]: I0129 08:24:20.890280 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2a9f639-f310-41f3-8b53-c713d66b71be-run-httpd\") pod \"f2a9f639-f310-41f3-8b53-c713d66b71be\" (UID: \"f2a9f639-f310-41f3-8b53-c713d66b71be\") " Jan 29 08:24:20 crc kubenswrapper[4826]: I0129 08:24:20.890356 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxlw2\" (UniqueName: \"kubernetes.io/projected/f2a9f639-f310-41f3-8b53-c713d66b71be-kube-api-access-mxlw2\") 
pod \"f2a9f639-f310-41f3-8b53-c713d66b71be\" (UID: \"f2a9f639-f310-41f3-8b53-c713d66b71be\") " Jan 29 08:24:20 crc kubenswrapper[4826]: I0129 08:24:20.890422 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2a9f639-f310-41f3-8b53-c713d66b71be-combined-ca-bundle\") pod \"f2a9f639-f310-41f3-8b53-c713d66b71be\" (UID: \"f2a9f639-f310-41f3-8b53-c713d66b71be\") " Jan 29 08:24:20 crc kubenswrapper[4826]: I0129 08:24:20.890484 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2a9f639-f310-41f3-8b53-c713d66b71be-scripts\") pod \"f2a9f639-f310-41f3-8b53-c713d66b71be\" (UID: \"f2a9f639-f310-41f3-8b53-c713d66b71be\") " Jan 29 08:24:20 crc kubenswrapper[4826]: I0129 08:24:20.890775 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2a9f639-f310-41f3-8b53-c713d66b71be-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f2a9f639-f310-41f3-8b53-c713d66b71be" (UID: "f2a9f639-f310-41f3-8b53-c713d66b71be"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:24:20 crc kubenswrapper[4826]: I0129 08:24:20.891064 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2a9f639-f310-41f3-8b53-c713d66b71be-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f2a9f639-f310-41f3-8b53-c713d66b71be" (UID: "f2a9f639-f310-41f3-8b53-c713d66b71be"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:24:20 crc kubenswrapper[4826]: I0129 08:24:20.891803 4826 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2a9f639-f310-41f3-8b53-c713d66b71be-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 08:24:20 crc kubenswrapper[4826]: I0129 08:24:20.891857 4826 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2a9f639-f310-41f3-8b53-c713d66b71be-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 08:24:20 crc kubenswrapper[4826]: I0129 08:24:20.896552 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2a9f639-f310-41f3-8b53-c713d66b71be-scripts" (OuterVolumeSpecName: "scripts") pod "f2a9f639-f310-41f3-8b53-c713d66b71be" (UID: "f2a9f639-f310-41f3-8b53-c713d66b71be"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:24:20 crc kubenswrapper[4826]: I0129 08:24:20.896668 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2a9f639-f310-41f3-8b53-c713d66b71be-kube-api-access-mxlw2" (OuterVolumeSpecName: "kube-api-access-mxlw2") pod "f2a9f639-f310-41f3-8b53-c713d66b71be" (UID: "f2a9f639-f310-41f3-8b53-c713d66b71be"). InnerVolumeSpecName "kube-api-access-mxlw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:24:20 crc kubenswrapper[4826]: I0129 08:24:20.919926 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2a9f639-f310-41f3-8b53-c713d66b71be-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f2a9f639-f310-41f3-8b53-c713d66b71be" (UID: "f2a9f639-f310-41f3-8b53-c713d66b71be"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:24:20 crc kubenswrapper[4826]: I0129 08:24:20.968556 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2a9f639-f310-41f3-8b53-c713d66b71be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2a9f639-f310-41f3-8b53-c713d66b71be" (UID: "f2a9f639-f310-41f3-8b53-c713d66b71be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:24:20 crc kubenswrapper[4826]: I0129 08:24:20.981852 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2a9f639-f310-41f3-8b53-c713d66b71be-config-data" (OuterVolumeSpecName: "config-data") pod "f2a9f639-f310-41f3-8b53-c713d66b71be" (UID: "f2a9f639-f310-41f3-8b53-c713d66b71be"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:24:20 crc kubenswrapper[4826]: I0129 08:24:20.989241 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-q46qd" event={"ID":"dbce1eaf-5632-4daf-b860-e8bc1199ed0f","Type":"ContainerStarted","Data":"a74f087529465eb08354bc0a8d22d14761ad4939e60d21fe424e3a779d980f03"} Jan 29 08:24:20 crc kubenswrapper[4826]: I0129 08:24:20.993565 4826 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2a9f639-f310-41f3-8b53-c713d66b71be-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 08:24:20 crc kubenswrapper[4826]: I0129 08:24:20.993595 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2a9f639-f310-41f3-8b53-c713d66b71be-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:24:20 crc kubenswrapper[4826]: I0129 08:24:20.993605 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxlw2\" (UniqueName: 
\"kubernetes.io/projected/f2a9f639-f310-41f3-8b53-c713d66b71be-kube-api-access-mxlw2\") on node \"crc\" DevicePath \"\"" Jan 29 08:24:20 crc kubenswrapper[4826]: I0129 08:24:20.993617 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2a9f639-f310-41f3-8b53-c713d66b71be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:24:20 crc kubenswrapper[4826]: I0129 08:24:20.993625 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2a9f639-f310-41f3-8b53-c713d66b71be-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.001202 4826 generic.go:334] "Generic (PLEG): container finished" podID="f2a9f639-f310-41f3-8b53-c713d66b71be" containerID="05eaf60dd946b722c85e4c18a229dd52f904e39ef0dd6d6d3d61bff6b6846edc" exitCode=0 Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.001308 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2a9f639-f310-41f3-8b53-c713d66b71be","Type":"ContainerDied","Data":"05eaf60dd946b722c85e4c18a229dd52f904e39ef0dd6d6d3d61bff6b6846edc"} Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.001360 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2a9f639-f310-41f3-8b53-c713d66b71be","Type":"ContainerDied","Data":"1ab926fef41040774410fb2b79a48e3ecf91f4ac7c570bf3c8995612e9f379f3"} Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.001381 4826 scope.go:117] "RemoveContainer" containerID="b650b23a09fa6cfd50d21b53f9defe86eb629ddb6cc9f966daa7f15fc509ec9c" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.001695 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.004647 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"856cc4ca-f8ee-4dd3-a31f-626eac71a6a1","Type":"ContainerDied","Data":"faa6322fe29436380ebe79ef16cbf8d81140a64c7eadfe9ec2c5297d0d7a0044"} Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.004723 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.005832 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-q46qd" podStartSLOduration=1.677561015 podStartE2EDuration="13.005811339s" podCreationTimestamp="2026-01-29 08:24:08 +0000 UTC" firstStartedPulling="2026-01-29 08:24:09.15707853 +0000 UTC m=+6033.018871589" lastFinishedPulling="2026-01-29 08:24:20.485328844 +0000 UTC m=+6044.347121913" observedRunningTime="2026-01-29 08:24:21.005662035 +0000 UTC m=+6044.867455104" watchObservedRunningTime="2026-01-29 08:24:21.005811339 +0000 UTC m=+6044.867604408" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.090513 4826 scope.go:117] "RemoveContainer" containerID="c818bc03625d015f49708a8af5c5fb9f6a59d40ebc6004a45bc481d3dfc7f1e9" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.118305 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.134801 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.138435 4826 scope.go:117] "RemoveContainer" containerID="05eaf60dd946b722c85e4c18a229dd52f904e39ef0dd6d6d3d61bff6b6846edc" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.165003 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 08:24:21 crc 
kubenswrapper[4826]: I0129 08:24:21.172282 4826 scope.go:117] "RemoveContainer" containerID="b470ecdc91338a6f117f41269050c5e514ed54939a60ce5065798c0645a29ab1" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.180468 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.189203 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 08:24:21 crc kubenswrapper[4826]: E0129 08:24:21.189800 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2a9f639-f310-41f3-8b53-c713d66b71be" containerName="proxy-httpd" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.189822 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2a9f639-f310-41f3-8b53-c713d66b71be" containerName="proxy-httpd" Jan 29 08:24:21 crc kubenswrapper[4826]: E0129 08:24:21.189846 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="856cc4ca-f8ee-4dd3-a31f-626eac71a6a1" containerName="kube-state-metrics" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.189853 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="856cc4ca-f8ee-4dd3-a31f-626eac71a6a1" containerName="kube-state-metrics" Jan 29 08:24:21 crc kubenswrapper[4826]: E0129 08:24:21.189868 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2a9f639-f310-41f3-8b53-c713d66b71be" containerName="ceilometer-notification-agent" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.189875 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2a9f639-f310-41f3-8b53-c713d66b71be" containerName="ceilometer-notification-agent" Jan 29 08:24:21 crc kubenswrapper[4826]: E0129 08:24:21.189890 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2a9f639-f310-41f3-8b53-c713d66b71be" containerName="ceilometer-central-agent" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.189896 4826 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="f2a9f639-f310-41f3-8b53-c713d66b71be" containerName="ceilometer-central-agent" Jan 29 08:24:21 crc kubenswrapper[4826]: E0129 08:24:21.189910 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2a9f639-f310-41f3-8b53-c713d66b71be" containerName="sg-core" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.189916 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2a9f639-f310-41f3-8b53-c713d66b71be" containerName="sg-core" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.190081 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2a9f639-f310-41f3-8b53-c713d66b71be" containerName="sg-core" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.190094 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2a9f639-f310-41f3-8b53-c713d66b71be" containerName="ceilometer-central-agent" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.190102 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2a9f639-f310-41f3-8b53-c713d66b71be" containerName="ceilometer-notification-agent" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.190113 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="856cc4ca-f8ee-4dd3-a31f-626eac71a6a1" containerName="kube-state-metrics" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.190127 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2a9f639-f310-41f3-8b53-c713d66b71be" containerName="proxy-httpd" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.191229 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.193231 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.194604 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-22dpg" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.195203 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.196744 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.198131 4826 scope.go:117] "RemoveContainer" containerID="b650b23a09fa6cfd50d21b53f9defe86eb629ddb6cc9f966daa7f15fc509ec9c" Jan 29 08:24:21 crc kubenswrapper[4826]: E0129 08:24:21.198735 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b650b23a09fa6cfd50d21b53f9defe86eb629ddb6cc9f966daa7f15fc509ec9c\": container with ID starting with b650b23a09fa6cfd50d21b53f9defe86eb629ddb6cc9f966daa7f15fc509ec9c not found: ID does not exist" containerID="b650b23a09fa6cfd50d21b53f9defe86eb629ddb6cc9f966daa7f15fc509ec9c" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.198776 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b650b23a09fa6cfd50d21b53f9defe86eb629ddb6cc9f966daa7f15fc509ec9c"} err="failed to get container status \"b650b23a09fa6cfd50d21b53f9defe86eb629ddb6cc9f966daa7f15fc509ec9c\": rpc error: code = NotFound desc = could not find container \"b650b23a09fa6cfd50d21b53f9defe86eb629ddb6cc9f966daa7f15fc509ec9c\": container with ID starting with b650b23a09fa6cfd50d21b53f9defe86eb629ddb6cc9f966daa7f15fc509ec9c not found: ID does not exist" Jan 29 08:24:21 
crc kubenswrapper[4826]: I0129 08:24:21.198809 4826 scope.go:117] "RemoveContainer" containerID="c818bc03625d015f49708a8af5c5fb9f6a59d40ebc6004a45bc481d3dfc7f1e9" Jan 29 08:24:21 crc kubenswrapper[4826]: E0129 08:24:21.199347 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c818bc03625d015f49708a8af5c5fb9f6a59d40ebc6004a45bc481d3dfc7f1e9\": container with ID starting with c818bc03625d015f49708a8af5c5fb9f6a59d40ebc6004a45bc481d3dfc7f1e9 not found: ID does not exist" containerID="c818bc03625d015f49708a8af5c5fb9f6a59d40ebc6004a45bc481d3dfc7f1e9" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.199386 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c818bc03625d015f49708a8af5c5fb9f6a59d40ebc6004a45bc481d3dfc7f1e9"} err="failed to get container status \"c818bc03625d015f49708a8af5c5fb9f6a59d40ebc6004a45bc481d3dfc7f1e9\": rpc error: code = NotFound desc = could not find container \"c818bc03625d015f49708a8af5c5fb9f6a59d40ebc6004a45bc481d3dfc7f1e9\": container with ID starting with c818bc03625d015f49708a8af5c5fb9f6a59d40ebc6004a45bc481d3dfc7f1e9 not found: ID does not exist" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.199412 4826 scope.go:117] "RemoveContainer" containerID="05eaf60dd946b722c85e4c18a229dd52f904e39ef0dd6d6d3d61bff6b6846edc" Jan 29 08:24:21 crc kubenswrapper[4826]: E0129 08:24:21.199952 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05eaf60dd946b722c85e4c18a229dd52f904e39ef0dd6d6d3d61bff6b6846edc\": container with ID starting with 05eaf60dd946b722c85e4c18a229dd52f904e39ef0dd6d6d3d61bff6b6846edc not found: ID does not exist" containerID="05eaf60dd946b722c85e4c18a229dd52f904e39ef0dd6d6d3d61bff6b6846edc" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.200072 4826 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"05eaf60dd946b722c85e4c18a229dd52f904e39ef0dd6d6d3d61bff6b6846edc"} err="failed to get container status \"05eaf60dd946b722c85e4c18a229dd52f904e39ef0dd6d6d3d61bff6b6846edc\": rpc error: code = NotFound desc = could not find container \"05eaf60dd946b722c85e4c18a229dd52f904e39ef0dd6d6d3d61bff6b6846edc\": container with ID starting with 05eaf60dd946b722c85e4c18a229dd52f904e39ef0dd6d6d3d61bff6b6846edc not found: ID does not exist" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.200162 4826 scope.go:117] "RemoveContainer" containerID="b470ecdc91338a6f117f41269050c5e514ed54939a60ce5065798c0645a29ab1" Jan 29 08:24:21 crc kubenswrapper[4826]: E0129 08:24:21.201456 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b470ecdc91338a6f117f41269050c5e514ed54939a60ce5065798c0645a29ab1\": container with ID starting with b470ecdc91338a6f117f41269050c5e514ed54939a60ce5065798c0645a29ab1 not found: ID does not exist" containerID="b470ecdc91338a6f117f41269050c5e514ed54939a60ce5065798c0645a29ab1" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.201507 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b470ecdc91338a6f117f41269050c5e514ed54939a60ce5065798c0645a29ab1"} err="failed to get container status \"b470ecdc91338a6f117f41269050c5e514ed54939a60ce5065798c0645a29ab1\": rpc error: code = NotFound desc = could not find container \"b470ecdc91338a6f117f41269050c5e514ed54939a60ce5065798c0645a29ab1\": container with ID starting with b470ecdc91338a6f117f41269050c5e514ed54939a60ce5065798c0645a29ab1 not found: ID does not exist" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.201538 4826 scope.go:117] "RemoveContainer" containerID="f390d0e5d6b3ed3316e8220240fe74bd1c2a55ab096c295995e4177e34fb40d8" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.207101 4826 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ceilometer-0"] Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.224625 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.228726 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.228897 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.229000 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.240158 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.299194 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eccb5925-3653-4316-9687-3e20a5fd1caa-config-data\") pod \"ceilometer-0\" (UID: \"eccb5925-3653-4316-9687-3e20a5fd1caa\") " pod="openstack/ceilometer-0" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.299248 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqmx5\" (UniqueName: \"kubernetes.io/projected/22ad018d-99ad-4eb2-bd3c-ef284341d2d1-kube-api-access-zqmx5\") pod \"kube-state-metrics-0\" (UID: \"22ad018d-99ad-4eb2-bd3c-ef284341d2d1\") " pod="openstack/kube-state-metrics-0" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.299267 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eccb5925-3653-4316-9687-3e20a5fd1caa-scripts\") pod \"ceilometer-0\" (UID: \"eccb5925-3653-4316-9687-3e20a5fd1caa\") " pod="openstack/ceilometer-0" Jan 29 08:24:21 
crc kubenswrapper[4826]: I0129 08:24:21.299355 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmq29\" (UniqueName: \"kubernetes.io/projected/eccb5925-3653-4316-9687-3e20a5fd1caa-kube-api-access-wmq29\") pod \"ceilometer-0\" (UID: \"eccb5925-3653-4316-9687-3e20a5fd1caa\") " pod="openstack/ceilometer-0" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.299395 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eccb5925-3653-4316-9687-3e20a5fd1caa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eccb5925-3653-4316-9687-3e20a5fd1caa\") " pod="openstack/ceilometer-0" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.299412 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22ad018d-99ad-4eb2-bd3c-ef284341d2d1-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"22ad018d-99ad-4eb2-bd3c-ef284341d2d1\") " pod="openstack/kube-state-metrics-0" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.299441 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/22ad018d-99ad-4eb2-bd3c-ef284341d2d1-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"22ad018d-99ad-4eb2-bd3c-ef284341d2d1\") " pod="openstack/kube-state-metrics-0" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.299460 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/eccb5925-3653-4316-9687-3e20a5fd1caa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"eccb5925-3653-4316-9687-3e20a5fd1caa\") " pod="openstack/ceilometer-0" Jan 29 08:24:21 crc kubenswrapper[4826]: 
I0129 08:24:21.299475 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eccb5925-3653-4316-9687-3e20a5fd1caa-run-httpd\") pod \"ceilometer-0\" (UID: \"eccb5925-3653-4316-9687-3e20a5fd1caa\") " pod="openstack/ceilometer-0" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.299497 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/22ad018d-99ad-4eb2-bd3c-ef284341d2d1-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"22ad018d-99ad-4eb2-bd3c-ef284341d2d1\") " pod="openstack/kube-state-metrics-0" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.299522 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eccb5925-3653-4316-9687-3e20a5fd1caa-log-httpd\") pod \"ceilometer-0\" (UID: \"eccb5925-3653-4316-9687-3e20a5fd1caa\") " pod="openstack/ceilometer-0" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.299602 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eccb5925-3653-4316-9687-3e20a5fd1caa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eccb5925-3653-4316-9687-3e20a5fd1caa\") " pod="openstack/ceilometer-0" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.400807 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eccb5925-3653-4316-9687-3e20a5fd1caa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eccb5925-3653-4316-9687-3e20a5fd1caa\") " pod="openstack/ceilometer-0" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.401081 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/eccb5925-3653-4316-9687-3e20a5fd1caa-config-data\") pod \"ceilometer-0\" (UID: \"eccb5925-3653-4316-9687-3e20a5fd1caa\") " pod="openstack/ceilometer-0" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.401235 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqmx5\" (UniqueName: \"kubernetes.io/projected/22ad018d-99ad-4eb2-bd3c-ef284341d2d1-kube-api-access-zqmx5\") pod \"kube-state-metrics-0\" (UID: \"22ad018d-99ad-4eb2-bd3c-ef284341d2d1\") " pod="openstack/kube-state-metrics-0" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.401661 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eccb5925-3653-4316-9687-3e20a5fd1caa-scripts\") pod \"ceilometer-0\" (UID: \"eccb5925-3653-4316-9687-3e20a5fd1caa\") " pod="openstack/ceilometer-0" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.402054 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmq29\" (UniqueName: \"kubernetes.io/projected/eccb5925-3653-4316-9687-3e20a5fd1caa-kube-api-access-wmq29\") pod \"ceilometer-0\" (UID: \"eccb5925-3653-4316-9687-3e20a5fd1caa\") " pod="openstack/ceilometer-0" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.402194 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eccb5925-3653-4316-9687-3e20a5fd1caa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eccb5925-3653-4316-9687-3e20a5fd1caa\") " pod="openstack/ceilometer-0" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.402339 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22ad018d-99ad-4eb2-bd3c-ef284341d2d1-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: 
\"22ad018d-99ad-4eb2-bd3c-ef284341d2d1\") " pod="openstack/kube-state-metrics-0" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.402443 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/22ad018d-99ad-4eb2-bd3c-ef284341d2d1-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"22ad018d-99ad-4eb2-bd3c-ef284341d2d1\") " pod="openstack/kube-state-metrics-0" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.402519 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/eccb5925-3653-4316-9687-3e20a5fd1caa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"eccb5925-3653-4316-9687-3e20a5fd1caa\") " pod="openstack/ceilometer-0" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.402596 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eccb5925-3653-4316-9687-3e20a5fd1caa-run-httpd\") pod \"ceilometer-0\" (UID: \"eccb5925-3653-4316-9687-3e20a5fd1caa\") " pod="openstack/ceilometer-0" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.402682 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/22ad018d-99ad-4eb2-bd3c-ef284341d2d1-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"22ad018d-99ad-4eb2-bd3c-ef284341d2d1\") " pod="openstack/kube-state-metrics-0" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.402770 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eccb5925-3653-4316-9687-3e20a5fd1caa-log-httpd\") pod \"ceilometer-0\" (UID: \"eccb5925-3653-4316-9687-3e20a5fd1caa\") " pod="openstack/ceilometer-0" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 
08:24:21.403188 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eccb5925-3653-4316-9687-3e20a5fd1caa-log-httpd\") pod \"ceilometer-0\" (UID: \"eccb5925-3653-4316-9687-3e20a5fd1caa\") " pod="openstack/ceilometer-0" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.403577 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eccb5925-3653-4316-9687-3e20a5fd1caa-run-httpd\") pod \"ceilometer-0\" (UID: \"eccb5925-3653-4316-9687-3e20a5fd1caa\") " pod="openstack/ceilometer-0" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.406006 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eccb5925-3653-4316-9687-3e20a5fd1caa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eccb5925-3653-4316-9687-3e20a5fd1caa\") " pod="openstack/ceilometer-0" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.406753 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eccb5925-3653-4316-9687-3e20a5fd1caa-scripts\") pod \"ceilometer-0\" (UID: \"eccb5925-3653-4316-9687-3e20a5fd1caa\") " pod="openstack/ceilometer-0" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.408794 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/eccb5925-3653-4316-9687-3e20a5fd1caa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"eccb5925-3653-4316-9687-3e20a5fd1caa\") " pod="openstack/ceilometer-0" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.409435 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eccb5925-3653-4316-9687-3e20a5fd1caa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eccb5925-3653-4316-9687-3e20a5fd1caa\") " 
pod="openstack/ceilometer-0" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.412881 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22ad018d-99ad-4eb2-bd3c-ef284341d2d1-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"22ad018d-99ad-4eb2-bd3c-ef284341d2d1\") " pod="openstack/kube-state-metrics-0" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.416503 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eccb5925-3653-4316-9687-3e20a5fd1caa-config-data\") pod \"ceilometer-0\" (UID: \"eccb5925-3653-4316-9687-3e20a5fd1caa\") " pod="openstack/ceilometer-0" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.417543 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqmx5\" (UniqueName: \"kubernetes.io/projected/22ad018d-99ad-4eb2-bd3c-ef284341d2d1-kube-api-access-zqmx5\") pod \"kube-state-metrics-0\" (UID: \"22ad018d-99ad-4eb2-bd3c-ef284341d2d1\") " pod="openstack/kube-state-metrics-0" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.430780 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/22ad018d-99ad-4eb2-bd3c-ef284341d2d1-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"22ad018d-99ad-4eb2-bd3c-ef284341d2d1\") " pod="openstack/kube-state-metrics-0" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.430837 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/22ad018d-99ad-4eb2-bd3c-ef284341d2d1-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"22ad018d-99ad-4eb2-bd3c-ef284341d2d1\") " pod="openstack/kube-state-metrics-0" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.431107 4826 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wmq29\" (UniqueName: \"kubernetes.io/projected/eccb5925-3653-4316-9687-3e20a5fd1caa-kube-api-access-wmq29\") pod \"ceilometer-0\" (UID: \"eccb5925-3653-4316-9687-3e20a5fd1caa\") " pod="openstack/ceilometer-0" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.507233 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 08:24:21 crc kubenswrapper[4826]: I0129 08:24:21.545656 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 08:24:22 crc kubenswrapper[4826]: I0129 08:24:22.059272 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 08:24:22 crc kubenswrapper[4826]: W0129 08:24:22.066709 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22ad018d_99ad_4eb2_bd3c_ef284341d2d1.slice/crio-35cc964e5385afdb2e42e3cdc621374187804ff570fbbcbc5537054eb37f4dbe WatchSource:0}: Error finding container 35cc964e5385afdb2e42e3cdc621374187804ff570fbbcbc5537054eb37f4dbe: Status 404 returned error can't find the container with id 35cc964e5385afdb2e42e3cdc621374187804ff570fbbcbc5537054eb37f4dbe Jan 29 08:24:22 crc kubenswrapper[4826]: I0129 08:24:22.137005 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 08:24:22 crc kubenswrapper[4826]: I0129 08:24:22.189132 4826 scope.go:117] "RemoveContainer" containerID="1372d89b9e69b5bb1e19da846f4269d76f9f29d756db4d2666a887e2ea98e35b" Jan 29 08:24:22 crc kubenswrapper[4826]: I0129 08:24:22.209493 4826 scope.go:117] "RemoveContainer" containerID="ee46d04244e736678a3b64e10294d4119a68321ffba82764eee8ea1a70adcea2" Jan 29 08:24:22 crc kubenswrapper[4826]: I0129 08:24:22.242628 4826 scope.go:117] "RemoveContainer" containerID="c94a81169418473cd450d5b7c9db0f7d0ece2c5d56f648b9867b4bd0ff9fec4d" Jan 29 
08:24:22 crc kubenswrapper[4826]: I0129 08:24:22.278933 4826 scope.go:117] "RemoveContainer" containerID="cbc371368313da977ac4c53783b4d6c96b6f9227f1a38ce4dc3e21176828a3f1" Jan 29 08:24:22 crc kubenswrapper[4826]: I0129 08:24:22.304183 4826 scope.go:117] "RemoveContainer" containerID="208de6e5593814423dfcb7997eb95aeb19dc937cb05fed570adf40052d492353" Jan 29 08:24:22 crc kubenswrapper[4826]: I0129 08:24:22.356873 4826 scope.go:117] "RemoveContainer" containerID="cc4cf9868c809eb79fea3d9df7b5fae3c1a0bd4a32970bb9e8dd16deb3d0ed93" Jan 29 08:24:22 crc kubenswrapper[4826]: I0129 08:24:22.392604 4826 scope.go:117] "RemoveContainer" containerID="8e4e9f9864983adcbca300b38d09172f56ed0779104f88eb86d472af9cfb06c4" Jan 29 08:24:22 crc kubenswrapper[4826]: I0129 08:24:22.827956 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="856cc4ca-f8ee-4dd3-a31f-626eac71a6a1" path="/var/lib/kubelet/pods/856cc4ca-f8ee-4dd3-a31f-626eac71a6a1/volumes" Jan 29 08:24:22 crc kubenswrapper[4826]: I0129 08:24:22.829087 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2a9f639-f310-41f3-8b53-c713d66b71be" path="/var/lib/kubelet/pods/f2a9f639-f310-41f3-8b53-c713d66b71be/volumes" Jan 29 08:24:23 crc kubenswrapper[4826]: I0129 08:24:23.036999 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eccb5925-3653-4316-9687-3e20a5fd1caa","Type":"ContainerStarted","Data":"6ad5783aac4f5841bd265e6b15d8040e484770ae64ad55cb3b94f5a45150ba74"} Jan 29 08:24:23 crc kubenswrapper[4826]: I0129 08:24:23.037232 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eccb5925-3653-4316-9687-3e20a5fd1caa","Type":"ContainerStarted","Data":"1ff0d0a53c8199ec5a308e69e3b363b9936ba7d209b004e29e04d4fbed1dcd73"} Jan 29 08:24:23 crc kubenswrapper[4826]: I0129 08:24:23.041677 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"22ad018d-99ad-4eb2-bd3c-ef284341d2d1","Type":"ContainerStarted","Data":"51cb01fafb34aff0d80dc00f8c5ee7c77f5ae7b4fabf33515f095c29dcb5d9c9"}
Jan 29 08:24:23 crc kubenswrapper[4826]: I0129 08:24:23.041760    4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 29 08:24:23 crc kubenswrapper[4826]: I0129 08:24:23.041787    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"22ad018d-99ad-4eb2-bd3c-ef284341d2d1","Type":"ContainerStarted","Data":"35cc964e5385afdb2e42e3cdc621374187804ff570fbbcbc5537054eb37f4dbe"}
Jan 29 08:24:23 crc kubenswrapper[4826]: I0129 08:24:23.043802    4826 generic.go:334] "Generic (PLEG): container finished" podID="dbce1eaf-5632-4daf-b860-e8bc1199ed0f" containerID="a74f087529465eb08354bc0a8d22d14761ad4939e60d21fe424e3a779d980f03" exitCode=0
Jan 29 08:24:23 crc kubenswrapper[4826]: I0129 08:24:23.043907    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-q46qd" event={"ID":"dbce1eaf-5632-4daf-b860-e8bc1199ed0f","Type":"ContainerDied","Data":"a74f087529465eb08354bc0a8d22d14761ad4939e60d21fe424e3a779d980f03"}
Jan 29 08:24:23 crc kubenswrapper[4826]: I0129 08:24:23.077139    4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.597561674 podStartE2EDuration="2.077108291s" podCreationTimestamp="2026-01-29 08:24:21 +0000 UTC" firstStartedPulling="2026-01-29 08:24:22.072327863 +0000 UTC m=+6045.934120932" lastFinishedPulling="2026-01-29 08:24:22.55187448 +0000 UTC m=+6046.413667549" observedRunningTime="2026-01-29 08:24:23.060397381 +0000 UTC m=+6046.922190470" watchObservedRunningTime="2026-01-29 08:24:23.077108291 +0000 UTC m=+6046.938901390"
Jan 29 08:24:23 crc kubenswrapper[4826]: I0129 08:24:23.809138    4826 scope.go:117] "RemoveContainer" containerID="952b43d896912a3cda5e612a6c0f46d793202eff6352f27fa018ee360258c570"
Jan 29 08:24:23 crc kubenswrapper[4826]: E0129 08:24:23.809822    4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 08:24:24 crc kubenswrapper[4826]: I0129 08:24:24.071803    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eccb5925-3653-4316-9687-3e20a5fd1caa","Type":"ContainerStarted","Data":"bad59b23d9202ce8c9028d033ba47261fb4905443796225c09175feeabbcbbb6"}
Jan 29 08:24:24 crc kubenswrapper[4826]: I0129 08:24:24.071867    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eccb5925-3653-4316-9687-3e20a5fd1caa","Type":"ContainerStarted","Data":"270dd2941fa56edba923e13a8b0ef7e72f649a3873adce59151520f3538e370d"}
Jan 29 08:24:25 crc kubenswrapper[4826]: I0129 08:24:24.440723    4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-q46qd"
Jan 29 08:24:25 crc kubenswrapper[4826]: I0129 08:24:24.472784    4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2h67z\" (UniqueName: \"kubernetes.io/projected/dbce1eaf-5632-4daf-b860-e8bc1199ed0f-kube-api-access-2h67z\") pod \"dbce1eaf-5632-4daf-b860-e8bc1199ed0f\" (UID: \"dbce1eaf-5632-4daf-b860-e8bc1199ed0f\") "
Jan 29 08:24:25 crc kubenswrapper[4826]: I0129 08:24:24.472899    4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbce1eaf-5632-4daf-b860-e8bc1199ed0f-config-data\") pod \"dbce1eaf-5632-4daf-b860-e8bc1199ed0f\" (UID: \"dbce1eaf-5632-4daf-b860-e8bc1199ed0f\") "
Jan 29 08:24:25 crc kubenswrapper[4826]: I0129 08:24:24.472977    4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbce1eaf-5632-4daf-b860-e8bc1199ed0f-combined-ca-bundle\") pod \"dbce1eaf-5632-4daf-b860-e8bc1199ed0f\" (UID: \"dbce1eaf-5632-4daf-b860-e8bc1199ed0f\") "
Jan 29 08:24:25 crc kubenswrapper[4826]: I0129 08:24:24.473050    4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbce1eaf-5632-4daf-b860-e8bc1199ed0f-scripts\") pod \"dbce1eaf-5632-4daf-b860-e8bc1199ed0f\" (UID: \"dbce1eaf-5632-4daf-b860-e8bc1199ed0f\") "
Jan 29 08:24:25 crc kubenswrapper[4826]: I0129 08:24:24.478162    4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbce1eaf-5632-4daf-b860-e8bc1199ed0f-scripts" (OuterVolumeSpecName: "scripts") pod "dbce1eaf-5632-4daf-b860-e8bc1199ed0f" (UID: "dbce1eaf-5632-4daf-b860-e8bc1199ed0f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:24:25 crc kubenswrapper[4826]: I0129 08:24:24.478213    4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbce1eaf-5632-4daf-b860-e8bc1199ed0f-kube-api-access-2h67z" (OuterVolumeSpecName: "kube-api-access-2h67z") pod "dbce1eaf-5632-4daf-b860-e8bc1199ed0f" (UID: "dbce1eaf-5632-4daf-b860-e8bc1199ed0f"). InnerVolumeSpecName "kube-api-access-2h67z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:24:25 crc kubenswrapper[4826]: I0129 08:24:24.503018    4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbce1eaf-5632-4daf-b860-e8bc1199ed0f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dbce1eaf-5632-4daf-b860-e8bc1199ed0f" (UID: "dbce1eaf-5632-4daf-b860-e8bc1199ed0f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:24:25 crc kubenswrapper[4826]: I0129 08:24:24.503384    4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbce1eaf-5632-4daf-b860-e8bc1199ed0f-config-data" (OuterVolumeSpecName: "config-data") pod "dbce1eaf-5632-4daf-b860-e8bc1199ed0f" (UID: "dbce1eaf-5632-4daf-b860-e8bc1199ed0f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:24:25 crc kubenswrapper[4826]: I0129 08:24:24.575387    4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbce1eaf-5632-4daf-b860-e8bc1199ed0f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 08:24:25 crc kubenswrapper[4826]: I0129 08:24:24.575418    4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbce1eaf-5632-4daf-b860-e8bc1199ed0f-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 08:24:25 crc kubenswrapper[4826]: I0129 08:24:24.575428    4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2h67z\" (UniqueName: \"kubernetes.io/projected/dbce1eaf-5632-4daf-b860-e8bc1199ed0f-kube-api-access-2h67z\") on node \"crc\" DevicePath \"\""
Jan 29 08:24:25 crc kubenswrapper[4826]: I0129 08:24:24.575437    4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbce1eaf-5632-4daf-b860-e8bc1199ed0f-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 08:24:25 crc kubenswrapper[4826]: I0129 08:24:25.084971    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-q46qd" event={"ID":"dbce1eaf-5632-4daf-b860-e8bc1199ed0f","Type":"ContainerDied","Data":"ff15cfe1d57b83fc970251349f9901ec422c4858a63b0f59996cc99290c44c06"}
Jan 29 08:24:25 crc kubenswrapper[4826]: I0129 08:24:25.085035    4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-q46qd"
Jan 29 08:24:25 crc kubenswrapper[4826]: I0129 08:24:25.085062    4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff15cfe1d57b83fc970251349f9901ec422c4858a63b0f59996cc99290c44c06"
Jan 29 08:24:26 crc kubenswrapper[4826]: I0129 08:24:26.096737    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eccb5925-3653-4316-9687-3e20a5fd1caa","Type":"ContainerStarted","Data":"bf6de774e733166be4573ca1a1327bcaec2050513850c0a77d69345072d19695"}
Jan 29 08:24:26 crc kubenswrapper[4826]: I0129 08:24:26.097119    4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 29 08:24:26 crc kubenswrapper[4826]: I0129 08:24:26.125806    4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.884797007 podStartE2EDuration="5.125787401s" podCreationTimestamp="2026-01-29 08:24:21 +0000 UTC" firstStartedPulling="2026-01-29 08:24:22.141464184 +0000 UTC m=+6046.003257243" lastFinishedPulling="2026-01-29 08:24:25.382454558 +0000 UTC m=+6049.244247637" observedRunningTime="2026-01-29 08:24:26.116141617 +0000 UTC m=+6049.977934686" watchObservedRunningTime="2026-01-29 08:24:26.125787401 +0000 UTC m=+6049.987580470"
Jan 29 08:24:28 crc kubenswrapper[4826]: I0129 08:24:28.578434    4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"]
Jan 29 08:24:28 crc kubenswrapper[4826]: E0129 08:24:28.579423    4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbce1eaf-5632-4daf-b860-e8bc1199ed0f" containerName="aodh-db-sync"
Jan 29 08:24:28 crc kubenswrapper[4826]: I0129 08:24:28.579441    4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbce1eaf-5632-4daf-b860-e8bc1199ed0f" containerName="aodh-db-sync"
Jan 29 08:24:28 crc kubenswrapper[4826]: I0129 08:24:28.579691    4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbce1eaf-5632-4daf-b860-e8bc1199ed0f" containerName="aodh-db-sync"
Jan 29 08:24:28 crc kubenswrapper[4826]: I0129 08:24:28.582047    4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Jan 29 08:24:28 crc kubenswrapper[4826]: I0129 08:24:28.594602    4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts"
Jan 29 08:24:28 crc kubenswrapper[4826]: I0129 08:24:28.599650    4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-gw9tj"
Jan 29 08:24:28 crc kubenswrapper[4826]: I0129 08:24:28.599985    4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data"
Jan 29 08:24:28 crc kubenswrapper[4826]: I0129 08:24:28.616958    4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Jan 29 08:24:28 crc kubenswrapper[4826]: I0129 08:24:28.676494    4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0fa18f7-5def-4287-9822-0fdab879c042-scripts\") pod \"aodh-0\" (UID: \"b0fa18f7-5def-4287-9822-0fdab879c042\") " pod="openstack/aodh-0"
Jan 29 08:24:28 crc kubenswrapper[4826]: I0129 08:24:28.676703    4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0fa18f7-5def-4287-9822-0fdab879c042-config-data\") pod \"aodh-0\" (UID: \"b0fa18f7-5def-4287-9822-0fdab879c042\") " pod="openstack/aodh-0"
Jan 29 08:24:28 crc kubenswrapper[4826]: I0129 08:24:28.677002    4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh5tw\" (UniqueName: \"kubernetes.io/projected/b0fa18f7-5def-4287-9822-0fdab879c042-kube-api-access-sh5tw\") pod \"aodh-0\" (UID: \"b0fa18f7-5def-4287-9822-0fdab879c042\") " pod="openstack/aodh-0"
Jan 29 08:24:28 crc kubenswrapper[4826]: I0129 08:24:28.677241    4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0fa18f7-5def-4287-9822-0fdab879c042-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b0fa18f7-5def-4287-9822-0fdab879c042\") " pod="openstack/aodh-0"
Jan 29 08:24:28 crc kubenswrapper[4826]: I0129 08:24:28.784008    4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0fa18f7-5def-4287-9822-0fdab879c042-config-data\") pod \"aodh-0\" (UID: \"b0fa18f7-5def-4287-9822-0fdab879c042\") " pod="openstack/aodh-0"
Jan 29 08:24:28 crc kubenswrapper[4826]: I0129 08:24:28.784246    4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh5tw\" (UniqueName: \"kubernetes.io/projected/b0fa18f7-5def-4287-9822-0fdab879c042-kube-api-access-sh5tw\") pod \"aodh-0\" (UID: \"b0fa18f7-5def-4287-9822-0fdab879c042\") " pod="openstack/aodh-0"
Jan 29 08:24:28 crc kubenswrapper[4826]: I0129 08:24:28.784392    4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0fa18f7-5def-4287-9822-0fdab879c042-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b0fa18f7-5def-4287-9822-0fdab879c042\") " pod="openstack/aodh-0"
Jan 29 08:24:28 crc kubenswrapper[4826]: I0129 08:24:28.784432    4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0fa18f7-5def-4287-9822-0fdab879c042-scripts\") pod \"aodh-0\" (UID: \"b0fa18f7-5def-4287-9822-0fdab879c042\") " pod="openstack/aodh-0"
Jan 29 08:24:28 crc kubenswrapper[4826]: I0129 08:24:28.790885    4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0fa18f7-5def-4287-9822-0fdab879c042-scripts\") pod \"aodh-0\" (UID: \"b0fa18f7-5def-4287-9822-0fdab879c042\") " pod="openstack/aodh-0"
Jan 29 08:24:28 crc kubenswrapper[4826]: I0129 08:24:28.798828    4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0fa18f7-5def-4287-9822-0fdab879c042-config-data\") pod \"aodh-0\" (UID: \"b0fa18f7-5def-4287-9822-0fdab879c042\") " pod="openstack/aodh-0"
Jan 29 08:24:28 crc kubenswrapper[4826]: I0129 08:24:28.801823    4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh5tw\" (UniqueName: \"kubernetes.io/projected/b0fa18f7-5def-4287-9822-0fdab879c042-kube-api-access-sh5tw\") pod \"aodh-0\" (UID: \"b0fa18f7-5def-4287-9822-0fdab879c042\") " pod="openstack/aodh-0"
Jan 29 08:24:28 crc kubenswrapper[4826]: I0129 08:24:28.809856    4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0fa18f7-5def-4287-9822-0fdab879c042-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b0fa18f7-5def-4287-9822-0fdab879c042\") " pod="openstack/aodh-0"
Jan 29 08:24:28 crc kubenswrapper[4826]: I0129 08:24:28.909164    4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Jan 29 08:24:29 crc kubenswrapper[4826]: I0129 08:24:29.379416    4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Jan 29 08:24:30 crc kubenswrapper[4826]: I0129 08:24:30.140531    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b0fa18f7-5def-4287-9822-0fdab879c042","Type":"ContainerStarted","Data":"1564e2a19bc3c0529ba9a5a1eb92272a91e604c2327db007130e3c35834f0bd4"}
Jan 29 08:24:30 crc kubenswrapper[4826]: I0129 08:24:30.141935    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b0fa18f7-5def-4287-9822-0fdab879c042","Type":"ContainerStarted","Data":"377c516bceef54a570a55c03e3d4b51565a4742882c4d5d2f86a13e4d3e2c6cb"}
Jan 29 08:24:30 crc kubenswrapper[4826]: I0129 08:24:30.805116    4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 08:24:30 crc kubenswrapper[4826]: I0129 08:24:30.805753    4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eccb5925-3653-4316-9687-3e20a5fd1caa" containerName="ceilometer-central-agent" containerID="cri-o://6ad5783aac4f5841bd265e6b15d8040e484770ae64ad55cb3b94f5a45150ba74" gracePeriod=30
Jan 29 08:24:30 crc kubenswrapper[4826]: I0129 08:24:30.805810    4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eccb5925-3653-4316-9687-3e20a5fd1caa" containerName="proxy-httpd" containerID="cri-o://bf6de774e733166be4573ca1a1327bcaec2050513850c0a77d69345072d19695" gracePeriod=30
Jan 29 08:24:30 crc kubenswrapper[4826]: I0129 08:24:30.805874    4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eccb5925-3653-4316-9687-3e20a5fd1caa" containerName="ceilometer-notification-agent" containerID="cri-o://270dd2941fa56edba923e13a8b0ef7e72f649a3873adce59151520f3538e370d" gracePeriod=30
Jan 29 08:24:30 crc kubenswrapper[4826]: I0129 08:24:30.805874    4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eccb5925-3653-4316-9687-3e20a5fd1caa" containerName="sg-core" containerID="cri-o://bad59b23d9202ce8c9028d033ba47261fb4905443796225c09175feeabbcbbb6" gracePeriod=30
Jan 29 08:24:31 crc kubenswrapper[4826]: I0129 08:24:31.155765    4826 generic.go:334] "Generic (PLEG): container finished" podID="eccb5925-3653-4316-9687-3e20a5fd1caa" containerID="bf6de774e733166be4573ca1a1327bcaec2050513850c0a77d69345072d19695" exitCode=0
Jan 29 08:24:31 crc kubenswrapper[4826]: I0129 08:24:31.155797    4826 generic.go:334] "Generic (PLEG): container finished" podID="eccb5925-3653-4316-9687-3e20a5fd1caa" containerID="bad59b23d9202ce8c9028d033ba47261fb4905443796225c09175feeabbcbbb6" exitCode=2
Jan 29 08:24:31 crc kubenswrapper[4826]: I0129 08:24:31.155805    4826 generic.go:334] "Generic (PLEG): container finished" podID="eccb5925-3653-4316-9687-3e20a5fd1caa" containerID="6ad5783aac4f5841bd265e6b15d8040e484770ae64ad55cb3b94f5a45150ba74" exitCode=0
Jan 29 08:24:31 crc kubenswrapper[4826]: I0129 08:24:31.155823    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eccb5925-3653-4316-9687-3e20a5fd1caa","Type":"ContainerDied","Data":"bf6de774e733166be4573ca1a1327bcaec2050513850c0a77d69345072d19695"}
Jan 29 08:24:31 crc kubenswrapper[4826]: I0129 08:24:31.155848    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eccb5925-3653-4316-9687-3e20a5fd1caa","Type":"ContainerDied","Data":"bad59b23d9202ce8c9028d033ba47261fb4905443796225c09175feeabbcbbb6"}
Jan 29 08:24:31 crc kubenswrapper[4826]: I0129 08:24:31.155858    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eccb5925-3653-4316-9687-3e20a5fd1caa","Type":"ContainerDied","Data":"6ad5783aac4f5841bd265e6b15d8040e484770ae64ad55cb3b94f5a45150ba74"}
Jan 29 08:24:31 crc kubenswrapper[4826]: I0129 08:24:31.519354    4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 29 08:24:31 crc kubenswrapper[4826]: I0129 08:24:31.608558    4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"]
Jan 29 08:24:31 crc kubenswrapper[4826]: I0129 08:24:31.976323    4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.057167    4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eccb5925-3653-4316-9687-3e20a5fd1caa-combined-ca-bundle\") pod \"eccb5925-3653-4316-9687-3e20a5fd1caa\" (UID: \"eccb5925-3653-4316-9687-3e20a5fd1caa\") "
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.057265    4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eccb5925-3653-4316-9687-3e20a5fd1caa-config-data\") pod \"eccb5925-3653-4316-9687-3e20a5fd1caa\" (UID: \"eccb5925-3653-4316-9687-3e20a5fd1caa\") "
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.057357    4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eccb5925-3653-4316-9687-3e20a5fd1caa-scripts\") pod \"eccb5925-3653-4316-9687-3e20a5fd1caa\" (UID: \"eccb5925-3653-4316-9687-3e20a5fd1caa\") "
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.057388    4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eccb5925-3653-4316-9687-3e20a5fd1caa-sg-core-conf-yaml\") pod \"eccb5925-3653-4316-9687-3e20a5fd1caa\" (UID: \"eccb5925-3653-4316-9687-3e20a5fd1caa\") "
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.057475    4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmq29\" (UniqueName: \"kubernetes.io/projected/eccb5925-3653-4316-9687-3e20a5fd1caa-kube-api-access-wmq29\") pod \"eccb5925-3653-4316-9687-3e20a5fd1caa\" (UID: \"eccb5925-3653-4316-9687-3e20a5fd1caa\") "
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.057561    4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eccb5925-3653-4316-9687-3e20a5fd1caa-run-httpd\") pod \"eccb5925-3653-4316-9687-3e20a5fd1caa\" (UID: \"eccb5925-3653-4316-9687-3e20a5fd1caa\") "
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.057682    4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eccb5925-3653-4316-9687-3e20a5fd1caa-log-httpd\") pod \"eccb5925-3653-4316-9687-3e20a5fd1caa\" (UID: \"eccb5925-3653-4316-9687-3e20a5fd1caa\") "
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.057772    4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/eccb5925-3653-4316-9687-3e20a5fd1caa-ceilometer-tls-certs\") pod \"eccb5925-3653-4316-9687-3e20a5fd1caa\" (UID: \"eccb5925-3653-4316-9687-3e20a5fd1caa\") "
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.061616    4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eccb5925-3653-4316-9687-3e20a5fd1caa-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "eccb5925-3653-4316-9687-3e20a5fd1caa" (UID: "eccb5925-3653-4316-9687-3e20a5fd1caa"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.061815    4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eccb5925-3653-4316-9687-3e20a5fd1caa-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "eccb5925-3653-4316-9687-3e20a5fd1caa" (UID: "eccb5925-3653-4316-9687-3e20a5fd1caa"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.065881    4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eccb5925-3653-4316-9687-3e20a5fd1caa-kube-api-access-wmq29" (OuterVolumeSpecName: "kube-api-access-wmq29") pod "eccb5925-3653-4316-9687-3e20a5fd1caa" (UID: "eccb5925-3653-4316-9687-3e20a5fd1caa"). InnerVolumeSpecName "kube-api-access-wmq29". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.068548    4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eccb5925-3653-4316-9687-3e20a5fd1caa-scripts" (OuterVolumeSpecName: "scripts") pod "eccb5925-3653-4316-9687-3e20a5fd1caa" (UID: "eccb5925-3653-4316-9687-3e20a5fd1caa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.139983    4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eccb5925-3653-4316-9687-3e20a5fd1caa-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "eccb5925-3653-4316-9687-3e20a5fd1caa" (UID: "eccb5925-3653-4316-9687-3e20a5fd1caa"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.160528    4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eccb5925-3653-4316-9687-3e20a5fd1caa-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.160566    4826 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eccb5925-3653-4316-9687-3e20a5fd1caa-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.160582    4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmq29\" (UniqueName: \"kubernetes.io/projected/eccb5925-3653-4316-9687-3e20a5fd1caa-kube-api-access-wmq29\") on node \"crc\" DevicePath \"\""
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.160593    4826 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eccb5925-3653-4316-9687-3e20a5fd1caa-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.160603    4826 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eccb5925-3653-4316-9687-3e20a5fd1caa-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.171832    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b0fa18f7-5def-4287-9822-0fdab879c042","Type":"ContainerStarted","Data":"b729e02fd48aeb14131d866e4a4f5a5029b0d8a6ccd7fe764790b8b474410347"}
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.181989    4826 generic.go:334] "Generic (PLEG): container finished" podID="eccb5925-3653-4316-9687-3e20a5fd1caa" containerID="270dd2941fa56edba923e13a8b0ef7e72f649a3873adce59151520f3538e370d" exitCode=0
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.182028    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eccb5925-3653-4316-9687-3e20a5fd1caa","Type":"ContainerDied","Data":"270dd2941fa56edba923e13a8b0ef7e72f649a3873adce59151520f3538e370d"}
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.182054    4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eccb5925-3653-4316-9687-3e20a5fd1caa","Type":"ContainerDied","Data":"1ff0d0a53c8199ec5a308e69e3b363b9936ba7d209b004e29e04d4fbed1dcd73"}
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.182072    4826 scope.go:117] "RemoveContainer" containerID="bf6de774e733166be4573ca1a1327bcaec2050513850c0a77d69345072d19695"
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.182071    4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.189883    4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eccb5925-3653-4316-9687-3e20a5fd1caa-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "eccb5925-3653-4316-9687-3e20a5fd1caa" (UID: "eccb5925-3653-4316-9687-3e20a5fd1caa"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.192775    4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eccb5925-3653-4316-9687-3e20a5fd1caa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eccb5925-3653-4316-9687-3e20a5fd1caa" (UID: "eccb5925-3653-4316-9687-3e20a5fd1caa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.263578    4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eccb5925-3653-4316-9687-3e20a5fd1caa-config-data" (OuterVolumeSpecName: "config-data") pod "eccb5925-3653-4316-9687-3e20a5fd1caa" (UID: "eccb5925-3653-4316-9687-3e20a5fd1caa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.266933    4826 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/eccb5925-3653-4316-9687-3e20a5fd1caa-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.266964    4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eccb5925-3653-4316-9687-3e20a5fd1caa-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.266993    4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eccb5925-3653-4316-9687-3e20a5fd1caa-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.517746    4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.526168    4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.548129    4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 29 08:24:32 crc kubenswrapper[4826]: E0129 08:24:32.548627    4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eccb5925-3653-4316-9687-3e20a5fd1caa" containerName="sg-core"
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.548642    4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="eccb5925-3653-4316-9687-3e20a5fd1caa" containerName="sg-core"
Jan 29 08:24:32 crc kubenswrapper[4826]: E0129 08:24:32.548663    4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eccb5925-3653-4316-9687-3e20a5fd1caa" containerName="ceilometer-notification-agent"
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.548671    4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="eccb5925-3653-4316-9687-3e20a5fd1caa" containerName="ceilometer-notification-agent"
Jan 29 08:24:32 crc kubenswrapper[4826]: E0129 08:24:32.548688    4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eccb5925-3653-4316-9687-3e20a5fd1caa" containerName="ceilometer-central-agent"
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.548699    4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="eccb5925-3653-4316-9687-3e20a5fd1caa" containerName="ceilometer-central-agent"
Jan 29 08:24:32 crc kubenswrapper[4826]: E0129 08:24:32.548716    4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eccb5925-3653-4316-9687-3e20a5fd1caa" containerName="proxy-httpd"
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.548725    4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="eccb5925-3653-4316-9687-3e20a5fd1caa" containerName="proxy-httpd"
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.548942    4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="eccb5925-3653-4316-9687-3e20a5fd1caa" containerName="proxy-httpd"
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.548961    4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="eccb5925-3653-4316-9687-3e20a5fd1caa" containerName="sg-core"
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.548975    4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="eccb5925-3653-4316-9687-3e20a5fd1caa" containerName="ceilometer-notification-agent"
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.548988    4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="eccb5925-3653-4316-9687-3e20a5fd1caa" containerName="ceilometer-central-agent"
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.550841    4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.555567    4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.555753    4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.555792    4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.561156    4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.571747    4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acd1bd03-d686-4605-beff-889cd9a93fb3-scripts\") pod \"ceilometer-0\" (UID: \"acd1bd03-d686-4605-beff-889cd9a93fb3\") " pod="openstack/ceilometer-0"
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.571811    4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acd1bd03-d686-4605-beff-889cd9a93fb3-run-httpd\") pod \"ceilometer-0\" (UID: \"acd1bd03-d686-4605-beff-889cd9a93fb3\") " pod="openstack/ceilometer-0"
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.571847    4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acd1bd03-d686-4605-beff-889cd9a93fb3-log-httpd\") pod \"ceilometer-0\" (UID: \"acd1bd03-d686-4605-beff-889cd9a93fb3\") " pod="openstack/ceilometer-0"
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.571874    4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s549\" (UniqueName: \"kubernetes.io/projected/acd1bd03-d686-4605-beff-889cd9a93fb3-kube-api-access-6s549\") pod \"ceilometer-0\" (UID: \"acd1bd03-d686-4605-beff-889cd9a93fb3\") " pod="openstack/ceilometer-0"
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.571917    4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/acd1bd03-d686-4605-beff-889cd9a93fb3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"acd1bd03-d686-4605-beff-889cd9a93fb3\") " pod="openstack/ceilometer-0"
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.571947    4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acd1bd03-d686-4605-beff-889cd9a93fb3-config-data\") pod \"ceilometer-0\" (UID: \"acd1bd03-d686-4605-beff-889cd9a93fb3\") " pod="openstack/ceilometer-0"
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.571973    4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/acd1bd03-d686-4605-beff-889cd9a93fb3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"acd1bd03-d686-4605-beff-889cd9a93fb3\") " pod="openstack/ceilometer-0"
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.571992    4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acd1bd03-d686-4605-beff-889cd9a93fb3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"acd1bd03-d686-4605-beff-889cd9a93fb3\") " pod="openstack/ceilometer-0"
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.673672    4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acd1bd03-d686-4605-beff-889cd9a93fb3-run-httpd\") pod \"ceilometer-0\" (UID: \"acd1bd03-d686-4605-beff-889cd9a93fb3\") " pod="openstack/ceilometer-0"
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.673749    4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acd1bd03-d686-4605-beff-889cd9a93fb3-log-httpd\") pod \"ceilometer-0\" (UID: \"acd1bd03-d686-4605-beff-889cd9a93fb3\") " pod="openstack/ceilometer-0"
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.673788    4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s549\" (UniqueName: \"kubernetes.io/projected/acd1bd03-d686-4605-beff-889cd9a93fb3-kube-api-access-6s549\") pod \"ceilometer-0\" (UID: \"acd1bd03-d686-4605-beff-889cd9a93fb3\") " pod="openstack/ceilometer-0"
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.673842    4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/acd1bd03-d686-4605-beff-889cd9a93fb3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"acd1bd03-d686-4605-beff-889cd9a93fb3\") " pod="openstack/ceilometer-0"
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.673875    4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acd1bd03-d686-4605-beff-889cd9a93fb3-config-data\") pod \"ceilometer-0\" (UID: \"acd1bd03-d686-4605-beff-889cd9a93fb3\") " pod="openstack/ceilometer-0"
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.673906    4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/acd1bd03-d686-4605-beff-889cd9a93fb3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"acd1bd03-d686-4605-beff-889cd9a93fb3\") " pod="openstack/ceilometer-0"
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.673928    4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acd1bd03-d686-4605-beff-889cd9a93fb3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"acd1bd03-d686-4605-beff-889cd9a93fb3\") " pod="openstack/ceilometer-0"
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.673999    4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acd1bd03-d686-4605-beff-889cd9a93fb3-scripts\") pod \"ceilometer-0\" (UID: \"acd1bd03-d686-4605-beff-889cd9a93fb3\") " pod="openstack/ceilometer-0"
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.674333    4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acd1bd03-d686-4605-beff-889cd9a93fb3-run-httpd\") pod \"ceilometer-0\" (UID: \"acd1bd03-d686-4605-beff-889cd9a93fb3\") " pod="openstack/ceilometer-0"
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.674353    4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acd1bd03-d686-4605-beff-889cd9a93fb3-log-httpd\") pod \"ceilometer-0\" (UID: \"acd1bd03-d686-4605-beff-889cd9a93fb3\") " pod="openstack/ceilometer-0"
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.679411    4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acd1bd03-d686-4605-beff-889cd9a93fb3-scripts\") pod \"ceilometer-0\" (UID: \"acd1bd03-d686-4605-beff-889cd9a93fb3\") " pod="openstack/ceilometer-0"
Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.679451    4826 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acd1bd03-d686-4605-beff-889cd9a93fb3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"acd1bd03-d686-4605-beff-889cd9a93fb3\") " pod="openstack/ceilometer-0" Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.681698 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/acd1bd03-d686-4605-beff-889cd9a93fb3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"acd1bd03-d686-4605-beff-889cd9a93fb3\") " pod="openstack/ceilometer-0" Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.683393 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/acd1bd03-d686-4605-beff-889cd9a93fb3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"acd1bd03-d686-4605-beff-889cd9a93fb3\") " pod="openstack/ceilometer-0" Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.692519 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s549\" (UniqueName: \"kubernetes.io/projected/acd1bd03-d686-4605-beff-889cd9a93fb3-kube-api-access-6s549\") pod \"ceilometer-0\" (UID: \"acd1bd03-d686-4605-beff-889cd9a93fb3\") " pod="openstack/ceilometer-0" Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.696370 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acd1bd03-d686-4605-beff-889cd9a93fb3-config-data\") pod \"ceilometer-0\" (UID: \"acd1bd03-d686-4605-beff-889cd9a93fb3\") " pod="openstack/ceilometer-0" Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.757942 4826 scope.go:117] "RemoveContainer" containerID="bad59b23d9202ce8c9028d033ba47261fb4905443796225c09175feeabbcbbb6" Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.807830 4826 scope.go:117] "RemoveContainer" 
containerID="270dd2941fa56edba923e13a8b0ef7e72f649a3873adce59151520f3538e370d" Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.823747 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eccb5925-3653-4316-9687-3e20a5fd1caa" path="/var/lib/kubelet/pods/eccb5925-3653-4316-9687-3e20a5fd1caa/volumes" Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.871572 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.876464 4826 scope.go:117] "RemoveContainer" containerID="6ad5783aac4f5841bd265e6b15d8040e484770ae64ad55cb3b94f5a45150ba74" Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.925083 4826 scope.go:117] "RemoveContainer" containerID="bf6de774e733166be4573ca1a1327bcaec2050513850c0a77d69345072d19695" Jan 29 08:24:32 crc kubenswrapper[4826]: E0129 08:24:32.926857 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf6de774e733166be4573ca1a1327bcaec2050513850c0a77d69345072d19695\": container with ID starting with bf6de774e733166be4573ca1a1327bcaec2050513850c0a77d69345072d19695 not found: ID does not exist" containerID="bf6de774e733166be4573ca1a1327bcaec2050513850c0a77d69345072d19695" Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.926917 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf6de774e733166be4573ca1a1327bcaec2050513850c0a77d69345072d19695"} err="failed to get container status \"bf6de774e733166be4573ca1a1327bcaec2050513850c0a77d69345072d19695\": rpc error: code = NotFound desc = could not find container \"bf6de774e733166be4573ca1a1327bcaec2050513850c0a77d69345072d19695\": container with ID starting with bf6de774e733166be4573ca1a1327bcaec2050513850c0a77d69345072d19695 not found: ID does not exist" Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.926947 4826 scope.go:117] 
"RemoveContainer" containerID="bad59b23d9202ce8c9028d033ba47261fb4905443796225c09175feeabbcbbb6" Jan 29 08:24:32 crc kubenswrapper[4826]: E0129 08:24:32.927551 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bad59b23d9202ce8c9028d033ba47261fb4905443796225c09175feeabbcbbb6\": container with ID starting with bad59b23d9202ce8c9028d033ba47261fb4905443796225c09175feeabbcbbb6 not found: ID does not exist" containerID="bad59b23d9202ce8c9028d033ba47261fb4905443796225c09175feeabbcbbb6" Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.927635 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bad59b23d9202ce8c9028d033ba47261fb4905443796225c09175feeabbcbbb6"} err="failed to get container status \"bad59b23d9202ce8c9028d033ba47261fb4905443796225c09175feeabbcbbb6\": rpc error: code = NotFound desc = could not find container \"bad59b23d9202ce8c9028d033ba47261fb4905443796225c09175feeabbcbbb6\": container with ID starting with bad59b23d9202ce8c9028d033ba47261fb4905443796225c09175feeabbcbbb6 not found: ID does not exist" Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.927669 4826 scope.go:117] "RemoveContainer" containerID="270dd2941fa56edba923e13a8b0ef7e72f649a3873adce59151520f3538e370d" Jan 29 08:24:32 crc kubenswrapper[4826]: E0129 08:24:32.928175 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"270dd2941fa56edba923e13a8b0ef7e72f649a3873adce59151520f3538e370d\": container with ID starting with 270dd2941fa56edba923e13a8b0ef7e72f649a3873adce59151520f3538e370d not found: ID does not exist" containerID="270dd2941fa56edba923e13a8b0ef7e72f649a3873adce59151520f3538e370d" Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.928242 4826 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"270dd2941fa56edba923e13a8b0ef7e72f649a3873adce59151520f3538e370d"} err="failed to get container status \"270dd2941fa56edba923e13a8b0ef7e72f649a3873adce59151520f3538e370d\": rpc error: code = NotFound desc = could not find container \"270dd2941fa56edba923e13a8b0ef7e72f649a3873adce59151520f3538e370d\": container with ID starting with 270dd2941fa56edba923e13a8b0ef7e72f649a3873adce59151520f3538e370d not found: ID does not exist" Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.928278 4826 scope.go:117] "RemoveContainer" containerID="6ad5783aac4f5841bd265e6b15d8040e484770ae64ad55cb3b94f5a45150ba74" Jan 29 08:24:32 crc kubenswrapper[4826]: E0129 08:24:32.930973 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ad5783aac4f5841bd265e6b15d8040e484770ae64ad55cb3b94f5a45150ba74\": container with ID starting with 6ad5783aac4f5841bd265e6b15d8040e484770ae64ad55cb3b94f5a45150ba74 not found: ID does not exist" containerID="6ad5783aac4f5841bd265e6b15d8040e484770ae64ad55cb3b94f5a45150ba74" Jan 29 08:24:32 crc kubenswrapper[4826]: I0129 08:24:32.931035 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ad5783aac4f5841bd265e6b15d8040e484770ae64ad55cb3b94f5a45150ba74"} err="failed to get container status \"6ad5783aac4f5841bd265e6b15d8040e484770ae64ad55cb3b94f5a45150ba74\": rpc error: code = NotFound desc = could not find container \"6ad5783aac4f5841bd265e6b15d8040e484770ae64ad55cb3b94f5a45150ba74\": container with ID starting with 6ad5783aac4f5841bd265e6b15d8040e484770ae64ad55cb3b94f5a45150ba74 not found: ID does not exist" Jan 29 08:24:33 crc kubenswrapper[4826]: I0129 08:24:33.208024 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b0fa18f7-5def-4287-9822-0fdab879c042","Type":"ContainerStarted","Data":"71b44edcb5bb108e8f31e9f6d7cf0e50ca597393650331e132337e524ae121aa"} Jan 29 
08:24:33 crc kubenswrapper[4826]: I0129 08:24:33.218262 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 08:24:33 crc kubenswrapper[4826]: I0129 08:24:33.397042 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 08:24:33 crc kubenswrapper[4826]: W0129 08:24:33.411589 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podacd1bd03_d686_4605_beff_889cd9a93fb3.slice/crio-b589dbb6187aee73b0d00004d0effa375b638cba59e408cbe9693a270515737c WatchSource:0}: Error finding container b589dbb6187aee73b0d00004d0effa375b638cba59e408cbe9693a270515737c: Status 404 returned error can't find the container with id b589dbb6187aee73b0d00004d0effa375b638cba59e408cbe9693a270515737c Jan 29 08:24:34 crc kubenswrapper[4826]: I0129 08:24:34.232391 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acd1bd03-d686-4605-beff-889cd9a93fb3","Type":"ContainerStarted","Data":"570185b32cb6b145e1c9e99e999f1b5bc089994e2b2832f0ce7c59980b008092"} Jan 29 08:24:34 crc kubenswrapper[4826]: I0129 08:24:34.233548 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acd1bd03-d686-4605-beff-889cd9a93fb3","Type":"ContainerStarted","Data":"b589dbb6187aee73b0d00004d0effa375b638cba59e408cbe9693a270515737c"} Jan 29 08:24:35 crc kubenswrapper[4826]: I0129 08:24:35.243900 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acd1bd03-d686-4605-beff-889cd9a93fb3","Type":"ContainerStarted","Data":"7a51adbbb44efccbfc358f30a1f2f74d4fe3d724bbb14d12c68ce7777410ae9a"} Jan 29 08:24:35 crc kubenswrapper[4826]: I0129 08:24:35.247812 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"b0fa18f7-5def-4287-9822-0fdab879c042","Type":"ContainerStarted","Data":"c6880bac666712180c24145f7eaa6cbe5c9551f466fcb4011c440e9dc42f2b14"} Jan 29 08:24:35 crc kubenswrapper[4826]: I0129 08:24:35.247949 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="b0fa18f7-5def-4287-9822-0fdab879c042" containerName="aodh-listener" containerID="cri-o://c6880bac666712180c24145f7eaa6cbe5c9551f466fcb4011c440e9dc42f2b14" gracePeriod=30 Jan 29 08:24:35 crc kubenswrapper[4826]: I0129 08:24:35.248066 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="b0fa18f7-5def-4287-9822-0fdab879c042" containerName="aodh-evaluator" containerID="cri-o://b729e02fd48aeb14131d866e4a4f5a5029b0d8a6ccd7fe764790b8b474410347" gracePeriod=30 Jan 29 08:24:35 crc kubenswrapper[4826]: I0129 08:24:35.247951 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="b0fa18f7-5def-4287-9822-0fdab879c042" containerName="aodh-api" containerID="cri-o://1564e2a19bc3c0529ba9a5a1eb92272a91e604c2327db007130e3c35834f0bd4" gracePeriod=30 Jan 29 08:24:35 crc kubenswrapper[4826]: I0129 08:24:35.248096 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="b0fa18f7-5def-4287-9822-0fdab879c042" containerName="aodh-notifier" containerID="cri-o://71b44edcb5bb108e8f31e9f6d7cf0e50ca597393650331e132337e524ae121aa" gracePeriod=30 Jan 29 08:24:35 crc kubenswrapper[4826]: I0129 08:24:35.273601 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.548218447 podStartE2EDuration="7.273580297s" podCreationTimestamp="2026-01-29 08:24:28 +0000 UTC" firstStartedPulling="2026-01-29 08:24:29.387851791 +0000 UTC m=+6053.249644860" lastFinishedPulling="2026-01-29 08:24:34.113213641 +0000 UTC m=+6057.975006710" observedRunningTime="2026-01-29 08:24:35.265680799 +0000 
UTC m=+6059.127473868" watchObservedRunningTime="2026-01-29 08:24:35.273580297 +0000 UTC m=+6059.135373366" Jan 29 08:24:35 crc kubenswrapper[4826]: I0129 08:24:35.809086 4826 scope.go:117] "RemoveContainer" containerID="952b43d896912a3cda5e612a6c0f46d793202eff6352f27fa018ee360258c570" Jan 29 08:24:35 crc kubenswrapper[4826]: E0129 08:24:35.809689 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:24:36 crc kubenswrapper[4826]: I0129 08:24:36.259787 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acd1bd03-d686-4605-beff-889cd9a93fb3","Type":"ContainerStarted","Data":"c58c2e5d5d8e33fc837d87cdaf367c8d0c22cf10ecf50d69c08a4a5d20587f62"} Jan 29 08:24:36 crc kubenswrapper[4826]: I0129 08:24:36.278051 4826 generic.go:334] "Generic (PLEG): container finished" podID="b0fa18f7-5def-4287-9822-0fdab879c042" containerID="71b44edcb5bb108e8f31e9f6d7cf0e50ca597393650331e132337e524ae121aa" exitCode=0 Jan 29 08:24:36 crc kubenswrapper[4826]: I0129 08:24:36.278081 4826 generic.go:334] "Generic (PLEG): container finished" podID="b0fa18f7-5def-4287-9822-0fdab879c042" containerID="b729e02fd48aeb14131d866e4a4f5a5029b0d8a6ccd7fe764790b8b474410347" exitCode=0 Jan 29 08:24:36 crc kubenswrapper[4826]: I0129 08:24:36.278090 4826 generic.go:334] "Generic (PLEG): container finished" podID="b0fa18f7-5def-4287-9822-0fdab879c042" containerID="1564e2a19bc3c0529ba9a5a1eb92272a91e604c2327db007130e3c35834f0bd4" exitCode=0 Jan 29 08:24:36 crc kubenswrapper[4826]: I0129 08:24:36.278111 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"b0fa18f7-5def-4287-9822-0fdab879c042","Type":"ContainerDied","Data":"71b44edcb5bb108e8f31e9f6d7cf0e50ca597393650331e132337e524ae121aa"} Jan 29 08:24:36 crc kubenswrapper[4826]: I0129 08:24:36.278136 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b0fa18f7-5def-4287-9822-0fdab879c042","Type":"ContainerDied","Data":"b729e02fd48aeb14131d866e4a4f5a5029b0d8a6ccd7fe764790b8b474410347"} Jan 29 08:24:36 crc kubenswrapper[4826]: I0129 08:24:36.278145 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b0fa18f7-5def-4287-9822-0fdab879c042","Type":"ContainerDied","Data":"1564e2a19bc3c0529ba9a5a1eb92272a91e604c2327db007130e3c35834f0bd4"} Jan 29 08:24:38 crc kubenswrapper[4826]: I0129 08:24:38.306982 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acd1bd03-d686-4605-beff-889cd9a93fb3","Type":"ContainerStarted","Data":"57e421d20331a76579cacb4ab59e7507e61135a55762d67861924aa0e3eff4dd"} Jan 29 08:24:38 crc kubenswrapper[4826]: I0129 08:24:38.307121 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="acd1bd03-d686-4605-beff-889cd9a93fb3" containerName="ceilometer-central-agent" containerID="cri-o://570185b32cb6b145e1c9e99e999f1b5bc089994e2b2832f0ce7c59980b008092" gracePeriod=30 Jan 29 08:24:38 crc kubenswrapper[4826]: I0129 08:24:38.307327 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 08:24:38 crc kubenswrapper[4826]: I0129 08:24:38.307419 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="acd1bd03-d686-4605-beff-889cd9a93fb3" containerName="proxy-httpd" containerID="cri-o://57e421d20331a76579cacb4ab59e7507e61135a55762d67861924aa0e3eff4dd" gracePeriod=30 Jan 29 08:24:38 crc kubenswrapper[4826]: I0129 08:24:38.307553 4826 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/ceilometer-0" podUID="acd1bd03-d686-4605-beff-889cd9a93fb3" containerName="ceilometer-notification-agent" containerID="cri-o://7a51adbbb44efccbfc358f30a1f2f74d4fe3d724bbb14d12c68ce7777410ae9a" gracePeriod=30 Jan 29 08:24:38 crc kubenswrapper[4826]: I0129 08:24:38.307585 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="acd1bd03-d686-4605-beff-889cd9a93fb3" containerName="sg-core" containerID="cri-o://c58c2e5d5d8e33fc837d87cdaf367c8d0c22cf10ecf50d69c08a4a5d20587f62" gracePeriod=30 Jan 29 08:24:38 crc kubenswrapper[4826]: I0129 08:24:38.342941 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.642701975 podStartE2EDuration="6.342924701s" podCreationTimestamp="2026-01-29 08:24:32 +0000 UTC" firstStartedPulling="2026-01-29 08:24:33.413568908 +0000 UTC m=+6057.275361977" lastFinishedPulling="2026-01-29 08:24:37.113791634 +0000 UTC m=+6060.975584703" observedRunningTime="2026-01-29 08:24:38.333645657 +0000 UTC m=+6062.195438726" watchObservedRunningTime="2026-01-29 08:24:38.342924701 +0000 UTC m=+6062.204717770" Jan 29 08:24:39 crc kubenswrapper[4826]: I0129 08:24:39.322730 4826 generic.go:334] "Generic (PLEG): container finished" podID="acd1bd03-d686-4605-beff-889cd9a93fb3" containerID="57e421d20331a76579cacb4ab59e7507e61135a55762d67861924aa0e3eff4dd" exitCode=0 Jan 29 08:24:39 crc kubenswrapper[4826]: I0129 08:24:39.323338 4826 generic.go:334] "Generic (PLEG): container finished" podID="acd1bd03-d686-4605-beff-889cd9a93fb3" containerID="c58c2e5d5d8e33fc837d87cdaf367c8d0c22cf10ecf50d69c08a4a5d20587f62" exitCode=2 Jan 29 08:24:39 crc kubenswrapper[4826]: I0129 08:24:39.323351 4826 generic.go:334] "Generic (PLEG): container finished" podID="acd1bd03-d686-4605-beff-889cd9a93fb3" containerID="7a51adbbb44efccbfc358f30a1f2f74d4fe3d724bbb14d12c68ce7777410ae9a" exitCode=0 Jan 29 08:24:39 crc 
kubenswrapper[4826]: I0129 08:24:39.322793 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acd1bd03-d686-4605-beff-889cd9a93fb3","Type":"ContainerDied","Data":"57e421d20331a76579cacb4ab59e7507e61135a55762d67861924aa0e3eff4dd"} Jan 29 08:24:39 crc kubenswrapper[4826]: I0129 08:24:39.323434 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acd1bd03-d686-4605-beff-889cd9a93fb3","Type":"ContainerDied","Data":"c58c2e5d5d8e33fc837d87cdaf367c8d0c22cf10ecf50d69c08a4a5d20587f62"} Jan 29 08:24:39 crc kubenswrapper[4826]: I0129 08:24:39.323473 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acd1bd03-d686-4605-beff-889cd9a93fb3","Type":"ContainerDied","Data":"7a51adbbb44efccbfc358f30a1f2f74d4fe3d724bbb14d12c68ce7777410ae9a"} Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.156605 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9crlp"] Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.162777 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9crlp" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.169011 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9crlp"] Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.197662 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.335698 4826 generic.go:334] "Generic (PLEG): container finished" podID="acd1bd03-d686-4605-beff-889cd9a93fb3" containerID="570185b32cb6b145e1c9e99e999f1b5bc089994e2b2832f0ce7c59980b008092" exitCode=0 Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.335737 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acd1bd03-d686-4605-beff-889cd9a93fb3","Type":"ContainerDied","Data":"570185b32cb6b145e1c9e99e999f1b5bc089994e2b2832f0ce7c59980b008092"} Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.335764 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"acd1bd03-d686-4605-beff-889cd9a93fb3","Type":"ContainerDied","Data":"b589dbb6187aee73b0d00004d0effa375b638cba59e408cbe9693a270515737c"} Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.335764 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.335780 4826 scope.go:117] "RemoveContainer" containerID="57e421d20331a76579cacb4ab59e7507e61135a55762d67861924aa0e3eff4dd" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.348228 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acd1bd03-d686-4605-beff-889cd9a93fb3-run-httpd\") pod \"acd1bd03-d686-4605-beff-889cd9a93fb3\" (UID: \"acd1bd03-d686-4605-beff-889cd9a93fb3\") " Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.348270 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acd1bd03-d686-4605-beff-889cd9a93fb3-log-httpd\") pod \"acd1bd03-d686-4605-beff-889cd9a93fb3\" (UID: \"acd1bd03-d686-4605-beff-889cd9a93fb3\") " Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.348353 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/acd1bd03-d686-4605-beff-889cd9a93fb3-sg-core-conf-yaml\") pod \"acd1bd03-d686-4605-beff-889cd9a93fb3\" (UID: \"acd1bd03-d686-4605-beff-889cd9a93fb3\") " Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.348370 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acd1bd03-d686-4605-beff-889cd9a93fb3-config-data\") pod \"acd1bd03-d686-4605-beff-889cd9a93fb3\" (UID: \"acd1bd03-d686-4605-beff-889cd9a93fb3\") " Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.348415 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acd1bd03-d686-4605-beff-889cd9a93fb3-combined-ca-bundle\") pod \"acd1bd03-d686-4605-beff-889cd9a93fb3\" (UID: \"acd1bd03-d686-4605-beff-889cd9a93fb3\") " Jan 29 08:24:40 
crc kubenswrapper[4826]: I0129 08:24:40.348431 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/acd1bd03-d686-4605-beff-889cd9a93fb3-ceilometer-tls-certs\") pod \"acd1bd03-d686-4605-beff-889cd9a93fb3\" (UID: \"acd1bd03-d686-4605-beff-889cd9a93fb3\") " Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.348512 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acd1bd03-d686-4605-beff-889cd9a93fb3-scripts\") pod \"acd1bd03-d686-4605-beff-889cd9a93fb3\" (UID: \"acd1bd03-d686-4605-beff-889cd9a93fb3\") " Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.348564 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6s549\" (UniqueName: \"kubernetes.io/projected/acd1bd03-d686-4605-beff-889cd9a93fb3-kube-api-access-6s549\") pod \"acd1bd03-d686-4605-beff-889cd9a93fb3\" (UID: \"acd1bd03-d686-4605-beff-889cd9a93fb3\") " Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.348769 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/415db082-8ac8-4151-8163-99d6789bb22c-catalog-content\") pod \"redhat-operators-9crlp\" (UID: \"415db082-8ac8-4151-8163-99d6789bb22c\") " pod="openshift-marketplace/redhat-operators-9crlp" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.348811 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/415db082-8ac8-4151-8163-99d6789bb22c-utilities\") pod \"redhat-operators-9crlp\" (UID: \"415db082-8ac8-4151-8163-99d6789bb22c\") " pod="openshift-marketplace/redhat-operators-9crlp" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.348959 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62kqz\" (UniqueName: \"kubernetes.io/projected/415db082-8ac8-4151-8163-99d6789bb22c-kube-api-access-62kqz\") pod \"redhat-operators-9crlp\" (UID: \"415db082-8ac8-4151-8163-99d6789bb22c\") " pod="openshift-marketplace/redhat-operators-9crlp" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.349079 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acd1bd03-d686-4605-beff-889cd9a93fb3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "acd1bd03-d686-4605-beff-889cd9a93fb3" (UID: "acd1bd03-d686-4605-beff-889cd9a93fb3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.349265 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acd1bd03-d686-4605-beff-889cd9a93fb3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "acd1bd03-d686-4605-beff-889cd9a93fb3" (UID: "acd1bd03-d686-4605-beff-889cd9a93fb3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.353512 4826 scope.go:117] "RemoveContainer" containerID="c58c2e5d5d8e33fc837d87cdaf367c8d0c22cf10ecf50d69c08a4a5d20587f62" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.354636 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acd1bd03-d686-4605-beff-889cd9a93fb3-scripts" (OuterVolumeSpecName: "scripts") pod "acd1bd03-d686-4605-beff-889cd9a93fb3" (UID: "acd1bd03-d686-4605-beff-889cd9a93fb3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.354801 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acd1bd03-d686-4605-beff-889cd9a93fb3-kube-api-access-6s549" (OuterVolumeSpecName: "kube-api-access-6s549") pod "acd1bd03-d686-4605-beff-889cd9a93fb3" (UID: "acd1bd03-d686-4605-beff-889cd9a93fb3"). InnerVolumeSpecName "kube-api-access-6s549". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.379878 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acd1bd03-d686-4605-beff-889cd9a93fb3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "acd1bd03-d686-4605-beff-889cd9a93fb3" (UID: "acd1bd03-d686-4605-beff-889cd9a93fb3"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.418600 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acd1bd03-d686-4605-beff-889cd9a93fb3-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "acd1bd03-d686-4605-beff-889cd9a93fb3" (UID: "acd1bd03-d686-4605-beff-889cd9a93fb3"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.451481 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acd1bd03-d686-4605-beff-889cd9a93fb3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "acd1bd03-d686-4605-beff-889cd9a93fb3" (UID: "acd1bd03-d686-4605-beff-889cd9a93fb3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.452342 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acd1bd03-d686-4605-beff-889cd9a93fb3-combined-ca-bundle\") pod \"acd1bd03-d686-4605-beff-889cd9a93fb3\" (UID: \"acd1bd03-d686-4605-beff-889cd9a93fb3\") " Jan 29 08:24:40 crc kubenswrapper[4826]: W0129 08:24:40.452552 4826 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/acd1bd03-d686-4605-beff-889cd9a93fb3/volumes/kubernetes.io~secret/combined-ca-bundle Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.452584 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acd1bd03-d686-4605-beff-889cd9a93fb3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "acd1bd03-d686-4605-beff-889cd9a93fb3" (UID: "acd1bd03-d686-4605-beff-889cd9a93fb3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.452991 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62kqz\" (UniqueName: \"kubernetes.io/projected/415db082-8ac8-4151-8163-99d6789bb22c-kube-api-access-62kqz\") pod \"redhat-operators-9crlp\" (UID: \"415db082-8ac8-4151-8163-99d6789bb22c\") " pod="openshift-marketplace/redhat-operators-9crlp" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.453059 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/415db082-8ac8-4151-8163-99d6789bb22c-catalog-content\") pod \"redhat-operators-9crlp\" (UID: \"415db082-8ac8-4151-8163-99d6789bb22c\") " pod="openshift-marketplace/redhat-operators-9crlp" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.453098 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/415db082-8ac8-4151-8163-99d6789bb22c-utilities\") pod \"redhat-operators-9crlp\" (UID: \"415db082-8ac8-4151-8163-99d6789bb22c\") " pod="openshift-marketplace/redhat-operators-9crlp" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.453177 4826 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acd1bd03-d686-4605-beff-889cd9a93fb3-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.453197 4826 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/acd1bd03-d686-4605-beff-889cd9a93fb3-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.453207 4826 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/acd1bd03-d686-4605-beff-889cd9a93fb3-sg-core-conf-yaml\") on node \"crc\" 
DevicePath \"\"" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.453218 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acd1bd03-d686-4605-beff-889cd9a93fb3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.453229 4826 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/acd1bd03-d686-4605-beff-889cd9a93fb3-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.453244 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acd1bd03-d686-4605-beff-889cd9a93fb3-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.453255 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6s549\" (UniqueName: \"kubernetes.io/projected/acd1bd03-d686-4605-beff-889cd9a93fb3-kube-api-access-6s549\") on node \"crc\" DevicePath \"\"" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.453700 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/415db082-8ac8-4151-8163-99d6789bb22c-catalog-content\") pod \"redhat-operators-9crlp\" (UID: \"415db082-8ac8-4151-8163-99d6789bb22c\") " pod="openshift-marketplace/redhat-operators-9crlp" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.453824 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/415db082-8ac8-4151-8163-99d6789bb22c-utilities\") pod \"redhat-operators-9crlp\" (UID: \"415db082-8ac8-4151-8163-99d6789bb22c\") " pod="openshift-marketplace/redhat-operators-9crlp" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.461192 4826 scope.go:117] "RemoveContainer" 
containerID="7a51adbbb44efccbfc358f30a1f2f74d4fe3d724bbb14d12c68ce7777410ae9a" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.469612 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62kqz\" (UniqueName: \"kubernetes.io/projected/415db082-8ac8-4151-8163-99d6789bb22c-kube-api-access-62kqz\") pod \"redhat-operators-9crlp\" (UID: \"415db082-8ac8-4151-8163-99d6789bb22c\") " pod="openshift-marketplace/redhat-operators-9crlp" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.480188 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acd1bd03-d686-4605-beff-889cd9a93fb3-config-data" (OuterVolumeSpecName: "config-data") pod "acd1bd03-d686-4605-beff-889cd9a93fb3" (UID: "acd1bd03-d686-4605-beff-889cd9a93fb3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.491252 4826 scope.go:117] "RemoveContainer" containerID="570185b32cb6b145e1c9e99e999f1b5bc089994e2b2832f0ce7c59980b008092" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.508367 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9crlp" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.515680 4826 scope.go:117] "RemoveContainer" containerID="57e421d20331a76579cacb4ab59e7507e61135a55762d67861924aa0e3eff4dd" Jan 29 08:24:40 crc kubenswrapper[4826]: E0129 08:24:40.517796 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57e421d20331a76579cacb4ab59e7507e61135a55762d67861924aa0e3eff4dd\": container with ID starting with 57e421d20331a76579cacb4ab59e7507e61135a55762d67861924aa0e3eff4dd not found: ID does not exist" containerID="57e421d20331a76579cacb4ab59e7507e61135a55762d67861924aa0e3eff4dd" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.517854 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57e421d20331a76579cacb4ab59e7507e61135a55762d67861924aa0e3eff4dd"} err="failed to get container status \"57e421d20331a76579cacb4ab59e7507e61135a55762d67861924aa0e3eff4dd\": rpc error: code = NotFound desc = could not find container \"57e421d20331a76579cacb4ab59e7507e61135a55762d67861924aa0e3eff4dd\": container with ID starting with 57e421d20331a76579cacb4ab59e7507e61135a55762d67861924aa0e3eff4dd not found: ID does not exist" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.517890 4826 scope.go:117] "RemoveContainer" containerID="c58c2e5d5d8e33fc837d87cdaf367c8d0c22cf10ecf50d69c08a4a5d20587f62" Jan 29 08:24:40 crc kubenswrapper[4826]: E0129 08:24:40.518395 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c58c2e5d5d8e33fc837d87cdaf367c8d0c22cf10ecf50d69c08a4a5d20587f62\": container with ID starting with c58c2e5d5d8e33fc837d87cdaf367c8d0c22cf10ecf50d69c08a4a5d20587f62 not found: ID does not exist" containerID="c58c2e5d5d8e33fc837d87cdaf367c8d0c22cf10ecf50d69c08a4a5d20587f62" Jan 29 08:24:40 crc kubenswrapper[4826]: 
I0129 08:24:40.518496 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c58c2e5d5d8e33fc837d87cdaf367c8d0c22cf10ecf50d69c08a4a5d20587f62"} err="failed to get container status \"c58c2e5d5d8e33fc837d87cdaf367c8d0c22cf10ecf50d69c08a4a5d20587f62\": rpc error: code = NotFound desc = could not find container \"c58c2e5d5d8e33fc837d87cdaf367c8d0c22cf10ecf50d69c08a4a5d20587f62\": container with ID starting with c58c2e5d5d8e33fc837d87cdaf367c8d0c22cf10ecf50d69c08a4a5d20587f62 not found: ID does not exist" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.518597 4826 scope.go:117] "RemoveContainer" containerID="7a51adbbb44efccbfc358f30a1f2f74d4fe3d724bbb14d12c68ce7777410ae9a" Jan 29 08:24:40 crc kubenswrapper[4826]: E0129 08:24:40.518990 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a51adbbb44efccbfc358f30a1f2f74d4fe3d724bbb14d12c68ce7777410ae9a\": container with ID starting with 7a51adbbb44efccbfc358f30a1f2f74d4fe3d724bbb14d12c68ce7777410ae9a not found: ID does not exist" containerID="7a51adbbb44efccbfc358f30a1f2f74d4fe3d724bbb14d12c68ce7777410ae9a" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.519053 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a51adbbb44efccbfc358f30a1f2f74d4fe3d724bbb14d12c68ce7777410ae9a"} err="failed to get container status \"7a51adbbb44efccbfc358f30a1f2f74d4fe3d724bbb14d12c68ce7777410ae9a\": rpc error: code = NotFound desc = could not find container \"7a51adbbb44efccbfc358f30a1f2f74d4fe3d724bbb14d12c68ce7777410ae9a\": container with ID starting with 7a51adbbb44efccbfc358f30a1f2f74d4fe3d724bbb14d12c68ce7777410ae9a not found: ID does not exist" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.519081 4826 scope.go:117] "RemoveContainer" containerID="570185b32cb6b145e1c9e99e999f1b5bc089994e2b2832f0ce7c59980b008092" Jan 29 08:24:40 crc 
kubenswrapper[4826]: E0129 08:24:40.520072 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"570185b32cb6b145e1c9e99e999f1b5bc089994e2b2832f0ce7c59980b008092\": container with ID starting with 570185b32cb6b145e1c9e99e999f1b5bc089994e2b2832f0ce7c59980b008092 not found: ID does not exist" containerID="570185b32cb6b145e1c9e99e999f1b5bc089994e2b2832f0ce7c59980b008092" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.520107 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"570185b32cb6b145e1c9e99e999f1b5bc089994e2b2832f0ce7c59980b008092"} err="failed to get container status \"570185b32cb6b145e1c9e99e999f1b5bc089994e2b2832f0ce7c59980b008092\": rpc error: code = NotFound desc = could not find container \"570185b32cb6b145e1c9e99e999f1b5bc089994e2b2832f0ce7c59980b008092\": container with ID starting with 570185b32cb6b145e1c9e99e999f1b5bc089994e2b2832f0ce7c59980b008092 not found: ID does not exist" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.554893 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acd1bd03-d686-4605-beff-889cd9a93fb3-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.695886 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.716897 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.755399 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 08:24:40 crc kubenswrapper[4826]: E0129 08:24:40.755855 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acd1bd03-d686-4605-beff-889cd9a93fb3" containerName="sg-core" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.755873 
4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="acd1bd03-d686-4605-beff-889cd9a93fb3" containerName="sg-core" Jan 29 08:24:40 crc kubenswrapper[4826]: E0129 08:24:40.755886 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acd1bd03-d686-4605-beff-889cd9a93fb3" containerName="proxy-httpd" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.755894 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="acd1bd03-d686-4605-beff-889cd9a93fb3" containerName="proxy-httpd" Jan 29 08:24:40 crc kubenswrapper[4826]: E0129 08:24:40.755910 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acd1bd03-d686-4605-beff-889cd9a93fb3" containerName="ceilometer-notification-agent" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.755918 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="acd1bd03-d686-4605-beff-889cd9a93fb3" containerName="ceilometer-notification-agent" Jan 29 08:24:40 crc kubenswrapper[4826]: E0129 08:24:40.755932 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acd1bd03-d686-4605-beff-889cd9a93fb3" containerName="ceilometer-central-agent" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.755938 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="acd1bd03-d686-4605-beff-889cd9a93fb3" containerName="ceilometer-central-agent" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.756115 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="acd1bd03-d686-4605-beff-889cd9a93fb3" containerName="proxy-httpd" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.756129 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="acd1bd03-d686-4605-beff-889cd9a93fb3" containerName="sg-core" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.756141 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="acd1bd03-d686-4605-beff-889cd9a93fb3" containerName="ceilometer-central-agent" Jan 29 08:24:40 crc kubenswrapper[4826]: 
I0129 08:24:40.756159 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="acd1bd03-d686-4605-beff-889cd9a93fb3" containerName="ceilometer-notification-agent" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.758209 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.760228 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.760950 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.761102 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.765717 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.819440 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acd1bd03-d686-4605-beff-889cd9a93fb3" path="/var/lib/kubelet/pods/acd1bd03-d686-4605-beff-889cd9a93fb3/volumes" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.860494 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36963e88-f319-40c6-94c1-b4c7aba80b44-config-data\") pod \"ceilometer-0\" (UID: \"36963e88-f319-40c6-94c1-b4c7aba80b44\") " pod="openstack/ceilometer-0" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.860598 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36963e88-f319-40c6-94c1-b4c7aba80b44-run-httpd\") pod \"ceilometer-0\" (UID: \"36963e88-f319-40c6-94c1-b4c7aba80b44\") " pod="openstack/ceilometer-0" Jan 29 08:24:40 crc 
kubenswrapper[4826]: I0129 08:24:40.860624 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/36963e88-f319-40c6-94c1-b4c7aba80b44-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"36963e88-f319-40c6-94c1-b4c7aba80b44\") " pod="openstack/ceilometer-0" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.860646 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36963e88-f319-40c6-94c1-b4c7aba80b44-log-httpd\") pod \"ceilometer-0\" (UID: \"36963e88-f319-40c6-94c1-b4c7aba80b44\") " pod="openstack/ceilometer-0" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.860663 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36963e88-f319-40c6-94c1-b4c7aba80b44-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"36963e88-f319-40c6-94c1-b4c7aba80b44\") " pod="openstack/ceilometer-0" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.860713 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r4wp\" (UniqueName: \"kubernetes.io/projected/36963e88-f319-40c6-94c1-b4c7aba80b44-kube-api-access-7r4wp\") pod \"ceilometer-0\" (UID: \"36963e88-f319-40c6-94c1-b4c7aba80b44\") " pod="openstack/ceilometer-0" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.860768 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36963e88-f319-40c6-94c1-b4c7aba80b44-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"36963e88-f319-40c6-94c1-b4c7aba80b44\") " pod="openstack/ceilometer-0" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.860802 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36963e88-f319-40c6-94c1-b4c7aba80b44-scripts\") pod \"ceilometer-0\" (UID: \"36963e88-f319-40c6-94c1-b4c7aba80b44\") " pod="openstack/ceilometer-0" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.962778 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36963e88-f319-40c6-94c1-b4c7aba80b44-config-data\") pod \"ceilometer-0\" (UID: \"36963e88-f319-40c6-94c1-b4c7aba80b44\") " pod="openstack/ceilometer-0" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.962865 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36963e88-f319-40c6-94c1-b4c7aba80b44-run-httpd\") pod \"ceilometer-0\" (UID: \"36963e88-f319-40c6-94c1-b4c7aba80b44\") " pod="openstack/ceilometer-0" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.962898 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/36963e88-f319-40c6-94c1-b4c7aba80b44-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"36963e88-f319-40c6-94c1-b4c7aba80b44\") " pod="openstack/ceilometer-0" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.962926 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36963e88-f319-40c6-94c1-b4c7aba80b44-log-httpd\") pod \"ceilometer-0\" (UID: \"36963e88-f319-40c6-94c1-b4c7aba80b44\") " pod="openstack/ceilometer-0" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.962955 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/36963e88-f319-40c6-94c1-b4c7aba80b44-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"36963e88-f319-40c6-94c1-b4c7aba80b44\") " pod="openstack/ceilometer-0" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.963034 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7r4wp\" (UniqueName: \"kubernetes.io/projected/36963e88-f319-40c6-94c1-b4c7aba80b44-kube-api-access-7r4wp\") pod \"ceilometer-0\" (UID: \"36963e88-f319-40c6-94c1-b4c7aba80b44\") " pod="openstack/ceilometer-0" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.963142 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36963e88-f319-40c6-94c1-b4c7aba80b44-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"36963e88-f319-40c6-94c1-b4c7aba80b44\") " pod="openstack/ceilometer-0" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.963204 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36963e88-f319-40c6-94c1-b4c7aba80b44-scripts\") pod \"ceilometer-0\" (UID: \"36963e88-f319-40c6-94c1-b4c7aba80b44\") " pod="openstack/ceilometer-0" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.963549 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36963e88-f319-40c6-94c1-b4c7aba80b44-run-httpd\") pod \"ceilometer-0\" (UID: \"36963e88-f319-40c6-94c1-b4c7aba80b44\") " pod="openstack/ceilometer-0" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.963697 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/36963e88-f319-40c6-94c1-b4c7aba80b44-log-httpd\") pod \"ceilometer-0\" (UID: \"36963e88-f319-40c6-94c1-b4c7aba80b44\") " pod="openstack/ceilometer-0" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.968462 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/36963e88-f319-40c6-94c1-b4c7aba80b44-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"36963e88-f319-40c6-94c1-b4c7aba80b44\") " pod="openstack/ceilometer-0" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.969874 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/36963e88-f319-40c6-94c1-b4c7aba80b44-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"36963e88-f319-40c6-94c1-b4c7aba80b44\") " pod="openstack/ceilometer-0" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.970291 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36963e88-f319-40c6-94c1-b4c7aba80b44-config-data\") pod \"ceilometer-0\" (UID: \"36963e88-f319-40c6-94c1-b4c7aba80b44\") " pod="openstack/ceilometer-0" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.972395 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36963e88-f319-40c6-94c1-b4c7aba80b44-scripts\") pod \"ceilometer-0\" (UID: \"36963e88-f319-40c6-94c1-b4c7aba80b44\") " pod="openstack/ceilometer-0" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.983310 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36963e88-f319-40c6-94c1-b4c7aba80b44-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"36963e88-f319-40c6-94c1-b4c7aba80b44\") " pod="openstack/ceilometer-0" Jan 29 08:24:40 crc kubenswrapper[4826]: I0129 08:24:40.989871 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r4wp\" (UniqueName: \"kubernetes.io/projected/36963e88-f319-40c6-94c1-b4c7aba80b44-kube-api-access-7r4wp\") pod \"ceilometer-0\" (UID: \"36963e88-f319-40c6-94c1-b4c7aba80b44\") " pod="openstack/ceilometer-0" Jan 29 08:24:41 crc kubenswrapper[4826]: I0129 08:24:41.013955 4826 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9crlp"] Jan 29 08:24:41 crc kubenswrapper[4826]: I0129 08:24:41.109848 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 08:24:41 crc kubenswrapper[4826]: I0129 08:24:41.346264 4826 generic.go:334] "Generic (PLEG): container finished" podID="415db082-8ac8-4151-8163-99d6789bb22c" containerID="3e553e70f650d382e0fc34831065981e006573f9ce69705cfb6655f11a2bfd2a" exitCode=0 Jan 29 08:24:41 crc kubenswrapper[4826]: I0129 08:24:41.346346 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9crlp" event={"ID":"415db082-8ac8-4151-8163-99d6789bb22c","Type":"ContainerDied","Data":"3e553e70f650d382e0fc34831065981e006573f9ce69705cfb6655f11a2bfd2a"} Jan 29 08:24:41 crc kubenswrapper[4826]: I0129 08:24:41.347785 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9crlp" event={"ID":"415db082-8ac8-4151-8163-99d6789bb22c","Type":"ContainerStarted","Data":"3a8df7e36f2a824207a5f66c5e9f7a97fd2cd780cd45f928c83c829686e2de62"} Jan 29 08:24:41 crc kubenswrapper[4826]: I0129 08:24:41.571530 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 08:24:42 crc kubenswrapper[4826]: I0129 08:24:42.376729 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9crlp" event={"ID":"415db082-8ac8-4151-8163-99d6789bb22c","Type":"ContainerStarted","Data":"bf66e66dcdc3ddd3e09851f97a61840ba50c16c8d2091cb059337f705d759f8b"} Jan 29 08:24:42 crc kubenswrapper[4826]: I0129 08:24:42.386121 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36963e88-f319-40c6-94c1-b4c7aba80b44","Type":"ContainerStarted","Data":"53cf88de423557aee94c45be19b09bf22241b2edd8edb21a782df9a305c30c3c"} Jan 29 08:24:42 crc kubenswrapper[4826]: I0129 08:24:42.386159 
4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36963e88-f319-40c6-94c1-b4c7aba80b44","Type":"ContainerStarted","Data":"05e801cef9af9377619b4b67607a082bff1f474eff2a06810a1f375fe483e6d6"} Jan 29 08:24:43 crc kubenswrapper[4826]: I0129 08:24:43.401646 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36963e88-f319-40c6-94c1-b4c7aba80b44","Type":"ContainerStarted","Data":"4704208f7039be321384cfa95ec730f62853475885ccecb9f1056dd2b8177074"} Jan 29 08:24:43 crc kubenswrapper[4826]: I0129 08:24:43.401994 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36963e88-f319-40c6-94c1-b4c7aba80b44","Type":"ContainerStarted","Data":"1c41550e9fce9a3594dbabaa9da93eb19879f2f288a338625e4dce98c634d5d4"} Jan 29 08:24:45 crc kubenswrapper[4826]: I0129 08:24:45.423880 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"36963e88-f319-40c6-94c1-b4c7aba80b44","Type":"ContainerStarted","Data":"94fe4a675698be40edb7356538d800093a73691f2a4cd094c4c0f77baf209297"} Jan 29 08:24:45 crc kubenswrapper[4826]: I0129 08:24:45.424550 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 08:24:45 crc kubenswrapper[4826]: I0129 08:24:45.459521 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.938102372 podStartE2EDuration="5.4595018s" podCreationTimestamp="2026-01-29 08:24:40 +0000 UTC" firstStartedPulling="2026-01-29 08:24:41.589767229 +0000 UTC m=+6065.451560298" lastFinishedPulling="2026-01-29 08:24:45.111166657 +0000 UTC m=+6068.972959726" observedRunningTime="2026-01-29 08:24:45.457537248 +0000 UTC m=+6069.319330327" watchObservedRunningTime="2026-01-29 08:24:45.4595018 +0000 UTC m=+6069.321294869" Jan 29 08:24:46 crc kubenswrapper[4826]: I0129 08:24:46.808449 4826 scope.go:117] "RemoveContainer" 
containerID="952b43d896912a3cda5e612a6c0f46d793202eff6352f27fa018ee360258c570" Jan 29 08:24:46 crc kubenswrapper[4826]: E0129 08:24:46.809071 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:24:48 crc kubenswrapper[4826]: I0129 08:24:48.504998 4826 generic.go:334] "Generic (PLEG): container finished" podID="415db082-8ac8-4151-8163-99d6789bb22c" containerID="bf66e66dcdc3ddd3e09851f97a61840ba50c16c8d2091cb059337f705d759f8b" exitCode=0 Jan 29 08:24:48 crc kubenswrapper[4826]: I0129 08:24:48.505349 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9crlp" event={"ID":"415db082-8ac8-4151-8163-99d6789bb22c","Type":"ContainerDied","Data":"bf66e66dcdc3ddd3e09851f97a61840ba50c16c8d2091cb059337f705d759f8b"} Jan 29 08:24:49 crc kubenswrapper[4826]: I0129 08:24:49.517796 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9crlp" event={"ID":"415db082-8ac8-4151-8163-99d6789bb22c","Type":"ContainerStarted","Data":"dcec4360d8e4f3affb4cf67707e2fb20bad9b6df34fc0288f1338f9a7d0ab2d4"} Jan 29 08:24:49 crc kubenswrapper[4826]: I0129 08:24:49.554938 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9crlp" podStartSLOduration=1.955381327 podStartE2EDuration="9.554916453s" podCreationTimestamp="2026-01-29 08:24:40 +0000 UTC" firstStartedPulling="2026-01-29 08:24:41.347934131 +0000 UTC m=+6065.209727210" lastFinishedPulling="2026-01-29 08:24:48.947469237 +0000 UTC m=+6072.809262336" observedRunningTime="2026-01-29 08:24:49.548523215 +0000 UTC 
m=+6073.410316284" watchObservedRunningTime="2026-01-29 08:24:49.554916453 +0000 UTC m=+6073.416709522" Jan 29 08:24:50 crc kubenswrapper[4826]: I0129 08:24:50.508852 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9crlp" Jan 29 08:24:50 crc kubenswrapper[4826]: I0129 08:24:50.509129 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9crlp" Jan 29 08:24:51 crc kubenswrapper[4826]: I0129 08:24:51.559847 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9crlp" podUID="415db082-8ac8-4151-8163-99d6789bb22c" containerName="registry-server" probeResult="failure" output=< Jan 29 08:24:51 crc kubenswrapper[4826]: timeout: failed to connect service ":50051" within 1s Jan 29 08:24:51 crc kubenswrapper[4826]: > Jan 29 08:24:58 crc kubenswrapper[4826]: I0129 08:24:58.808619 4826 scope.go:117] "RemoveContainer" containerID="952b43d896912a3cda5e612a6c0f46d793202eff6352f27fa018ee360258c570" Jan 29 08:24:58 crc kubenswrapper[4826]: E0129 08:24:58.809327 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:25:00 crc kubenswrapper[4826]: I0129 08:25:00.559408 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9crlp" Jan 29 08:25:00 crc kubenswrapper[4826]: I0129 08:25:00.610352 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9crlp" Jan 29 08:25:00 crc kubenswrapper[4826]: I0129 
08:25:00.820949 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9crlp"] Jan 29 08:25:01 crc kubenswrapper[4826]: I0129 08:25:01.655118 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9crlp" podUID="415db082-8ac8-4151-8163-99d6789bb22c" containerName="registry-server" containerID="cri-o://dcec4360d8e4f3affb4cf67707e2fb20bad9b6df34fc0288f1338f9a7d0ab2d4" gracePeriod=2 Jan 29 08:25:02 crc kubenswrapper[4826]: I0129 08:25:02.311451 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9crlp" Jan 29 08:25:02 crc kubenswrapper[4826]: I0129 08:25:02.411865 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/415db082-8ac8-4151-8163-99d6789bb22c-utilities\") pod \"415db082-8ac8-4151-8163-99d6789bb22c\" (UID: \"415db082-8ac8-4151-8163-99d6789bb22c\") " Jan 29 08:25:02 crc kubenswrapper[4826]: I0129 08:25:02.412161 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/415db082-8ac8-4151-8163-99d6789bb22c-catalog-content\") pod \"415db082-8ac8-4151-8163-99d6789bb22c\" (UID: \"415db082-8ac8-4151-8163-99d6789bb22c\") " Jan 29 08:25:02 crc kubenswrapper[4826]: I0129 08:25:02.412215 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62kqz\" (UniqueName: \"kubernetes.io/projected/415db082-8ac8-4151-8163-99d6789bb22c-kube-api-access-62kqz\") pod \"415db082-8ac8-4151-8163-99d6789bb22c\" (UID: \"415db082-8ac8-4151-8163-99d6789bb22c\") " Jan 29 08:25:02 crc kubenswrapper[4826]: I0129 08:25:02.412670 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/415db082-8ac8-4151-8163-99d6789bb22c-utilities" (OuterVolumeSpecName: 
"utilities") pod "415db082-8ac8-4151-8163-99d6789bb22c" (UID: "415db082-8ac8-4151-8163-99d6789bb22c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:25:02 crc kubenswrapper[4826]: I0129 08:25:02.412812 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/415db082-8ac8-4151-8163-99d6789bb22c-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:25:02 crc kubenswrapper[4826]: I0129 08:25:02.420143 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/415db082-8ac8-4151-8163-99d6789bb22c-kube-api-access-62kqz" (OuterVolumeSpecName: "kube-api-access-62kqz") pod "415db082-8ac8-4151-8163-99d6789bb22c" (UID: "415db082-8ac8-4151-8163-99d6789bb22c"). InnerVolumeSpecName "kube-api-access-62kqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:25:02 crc kubenswrapper[4826]: I0129 08:25:02.516256 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62kqz\" (UniqueName: \"kubernetes.io/projected/415db082-8ac8-4151-8163-99d6789bb22c-kube-api-access-62kqz\") on node \"crc\" DevicePath \"\"" Jan 29 08:25:02 crc kubenswrapper[4826]: I0129 08:25:02.558773 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/415db082-8ac8-4151-8163-99d6789bb22c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "415db082-8ac8-4151-8163-99d6789bb22c" (UID: "415db082-8ac8-4151-8163-99d6789bb22c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:25:02 crc kubenswrapper[4826]: I0129 08:25:02.618469 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/415db082-8ac8-4151-8163-99d6789bb22c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:25:02 crc kubenswrapper[4826]: I0129 08:25:02.668049 4826 generic.go:334] "Generic (PLEG): container finished" podID="415db082-8ac8-4151-8163-99d6789bb22c" containerID="dcec4360d8e4f3affb4cf67707e2fb20bad9b6df34fc0288f1338f9a7d0ab2d4" exitCode=0 Jan 29 08:25:02 crc kubenswrapper[4826]: I0129 08:25:02.668106 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9crlp" event={"ID":"415db082-8ac8-4151-8163-99d6789bb22c","Type":"ContainerDied","Data":"dcec4360d8e4f3affb4cf67707e2fb20bad9b6df34fc0288f1338f9a7d0ab2d4"} Jan 29 08:25:02 crc kubenswrapper[4826]: I0129 08:25:02.668144 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9crlp" Jan 29 08:25:02 crc kubenswrapper[4826]: I0129 08:25:02.668173 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9crlp" event={"ID":"415db082-8ac8-4151-8163-99d6789bb22c","Type":"ContainerDied","Data":"3a8df7e36f2a824207a5f66c5e9f7a97fd2cd780cd45f928c83c829686e2de62"} Jan 29 08:25:02 crc kubenswrapper[4826]: I0129 08:25:02.668205 4826 scope.go:117] "RemoveContainer" containerID="dcec4360d8e4f3affb4cf67707e2fb20bad9b6df34fc0288f1338f9a7d0ab2d4" Jan 29 08:25:02 crc kubenswrapper[4826]: I0129 08:25:02.698217 4826 scope.go:117] "RemoveContainer" containerID="bf66e66dcdc3ddd3e09851f97a61840ba50c16c8d2091cb059337f705d759f8b" Jan 29 08:25:02 crc kubenswrapper[4826]: I0129 08:25:02.718254 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9crlp"] Jan 29 08:25:02 crc kubenswrapper[4826]: I0129 08:25:02.732750 4826 scope.go:117] "RemoveContainer" containerID="3e553e70f650d382e0fc34831065981e006573f9ce69705cfb6655f11a2bfd2a" Jan 29 08:25:02 crc kubenswrapper[4826]: I0129 08:25:02.736467 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9crlp"] Jan 29 08:25:02 crc kubenswrapper[4826]: I0129 08:25:02.795323 4826 scope.go:117] "RemoveContainer" containerID="dcec4360d8e4f3affb4cf67707e2fb20bad9b6df34fc0288f1338f9a7d0ab2d4" Jan 29 08:25:02 crc kubenswrapper[4826]: E0129 08:25:02.795811 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcec4360d8e4f3affb4cf67707e2fb20bad9b6df34fc0288f1338f9a7d0ab2d4\": container with ID starting with dcec4360d8e4f3affb4cf67707e2fb20bad9b6df34fc0288f1338f9a7d0ab2d4 not found: ID does not exist" containerID="dcec4360d8e4f3affb4cf67707e2fb20bad9b6df34fc0288f1338f9a7d0ab2d4" Jan 29 08:25:02 crc kubenswrapper[4826]: I0129 08:25:02.795850 4826 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcec4360d8e4f3affb4cf67707e2fb20bad9b6df34fc0288f1338f9a7d0ab2d4"} err="failed to get container status \"dcec4360d8e4f3affb4cf67707e2fb20bad9b6df34fc0288f1338f9a7d0ab2d4\": rpc error: code = NotFound desc = could not find container \"dcec4360d8e4f3affb4cf67707e2fb20bad9b6df34fc0288f1338f9a7d0ab2d4\": container with ID starting with dcec4360d8e4f3affb4cf67707e2fb20bad9b6df34fc0288f1338f9a7d0ab2d4 not found: ID does not exist" Jan 29 08:25:02 crc kubenswrapper[4826]: I0129 08:25:02.795877 4826 scope.go:117] "RemoveContainer" containerID="bf66e66dcdc3ddd3e09851f97a61840ba50c16c8d2091cb059337f705d759f8b" Jan 29 08:25:02 crc kubenswrapper[4826]: E0129 08:25:02.796260 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf66e66dcdc3ddd3e09851f97a61840ba50c16c8d2091cb059337f705d759f8b\": container with ID starting with bf66e66dcdc3ddd3e09851f97a61840ba50c16c8d2091cb059337f705d759f8b not found: ID does not exist" containerID="bf66e66dcdc3ddd3e09851f97a61840ba50c16c8d2091cb059337f705d759f8b" Jan 29 08:25:02 crc kubenswrapper[4826]: I0129 08:25:02.796309 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf66e66dcdc3ddd3e09851f97a61840ba50c16c8d2091cb059337f705d759f8b"} err="failed to get container status \"bf66e66dcdc3ddd3e09851f97a61840ba50c16c8d2091cb059337f705d759f8b\": rpc error: code = NotFound desc = could not find container \"bf66e66dcdc3ddd3e09851f97a61840ba50c16c8d2091cb059337f705d759f8b\": container with ID starting with bf66e66dcdc3ddd3e09851f97a61840ba50c16c8d2091cb059337f705d759f8b not found: ID does not exist" Jan 29 08:25:02 crc kubenswrapper[4826]: I0129 08:25:02.796328 4826 scope.go:117] "RemoveContainer" containerID="3e553e70f650d382e0fc34831065981e006573f9ce69705cfb6655f11a2bfd2a" Jan 29 08:25:02 crc kubenswrapper[4826]: E0129 
08:25:02.796621 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e553e70f650d382e0fc34831065981e006573f9ce69705cfb6655f11a2bfd2a\": container with ID starting with 3e553e70f650d382e0fc34831065981e006573f9ce69705cfb6655f11a2bfd2a not found: ID does not exist" containerID="3e553e70f650d382e0fc34831065981e006573f9ce69705cfb6655f11a2bfd2a" Jan 29 08:25:02 crc kubenswrapper[4826]: I0129 08:25:02.796650 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e553e70f650d382e0fc34831065981e006573f9ce69705cfb6655f11a2bfd2a"} err="failed to get container status \"3e553e70f650d382e0fc34831065981e006573f9ce69705cfb6655f11a2bfd2a\": rpc error: code = NotFound desc = could not find container \"3e553e70f650d382e0fc34831065981e006573f9ce69705cfb6655f11a2bfd2a\": container with ID starting with 3e553e70f650d382e0fc34831065981e006573f9ce69705cfb6655f11a2bfd2a not found: ID does not exist" Jan 29 08:25:02 crc kubenswrapper[4826]: I0129 08:25:02.824080 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="415db082-8ac8-4151-8163-99d6789bb22c" path="/var/lib/kubelet/pods/415db082-8ac8-4151-8163-99d6789bb22c/volumes" Jan 29 08:25:05 crc kubenswrapper[4826]: I0129 08:25:05.702187 4826 generic.go:334] "Generic (PLEG): container finished" podID="b0fa18f7-5def-4287-9822-0fdab879c042" containerID="c6880bac666712180c24145f7eaa6cbe5c9551f466fcb4011c440e9dc42f2b14" exitCode=137 Jan 29 08:25:05 crc kubenswrapper[4826]: I0129 08:25:05.702346 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b0fa18f7-5def-4287-9822-0fdab879c042","Type":"ContainerDied","Data":"c6880bac666712180c24145f7eaa6cbe5c9551f466fcb4011c440e9dc42f2b14"} Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.202594 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.294003 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0fa18f7-5def-4287-9822-0fdab879c042-config-data\") pod \"b0fa18f7-5def-4287-9822-0fdab879c042\" (UID: \"b0fa18f7-5def-4287-9822-0fdab879c042\") " Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.294330 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sh5tw\" (UniqueName: \"kubernetes.io/projected/b0fa18f7-5def-4287-9822-0fdab879c042-kube-api-access-sh5tw\") pod \"b0fa18f7-5def-4287-9822-0fdab879c042\" (UID: \"b0fa18f7-5def-4287-9822-0fdab879c042\") " Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.294437 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0fa18f7-5def-4287-9822-0fdab879c042-scripts\") pod \"b0fa18f7-5def-4287-9822-0fdab879c042\" (UID: \"b0fa18f7-5def-4287-9822-0fdab879c042\") " Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.294545 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0fa18f7-5def-4287-9822-0fdab879c042-combined-ca-bundle\") pod \"b0fa18f7-5def-4287-9822-0fdab879c042\" (UID: \"b0fa18f7-5def-4287-9822-0fdab879c042\") " Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.300127 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0fa18f7-5def-4287-9822-0fdab879c042-kube-api-access-sh5tw" (OuterVolumeSpecName: "kube-api-access-sh5tw") pod "b0fa18f7-5def-4287-9822-0fdab879c042" (UID: "b0fa18f7-5def-4287-9822-0fdab879c042"). InnerVolumeSpecName "kube-api-access-sh5tw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.307935 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0fa18f7-5def-4287-9822-0fdab879c042-scripts" (OuterVolumeSpecName: "scripts") pod "b0fa18f7-5def-4287-9822-0fdab879c042" (UID: "b0fa18f7-5def-4287-9822-0fdab879c042"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.396587 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sh5tw\" (UniqueName: \"kubernetes.io/projected/b0fa18f7-5def-4287-9822-0fdab879c042-kube-api-access-sh5tw\") on node \"crc\" DevicePath \"\"" Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.396622 4826 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0fa18f7-5def-4287-9822-0fdab879c042-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.418176 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0fa18f7-5def-4287-9822-0fdab879c042-config-data" (OuterVolumeSpecName: "config-data") pod "b0fa18f7-5def-4287-9822-0fdab879c042" (UID: "b0fa18f7-5def-4287-9822-0fdab879c042"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.441259 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0fa18f7-5def-4287-9822-0fdab879c042-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0fa18f7-5def-4287-9822-0fdab879c042" (UID: "b0fa18f7-5def-4287-9822-0fdab879c042"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.498382 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0fa18f7-5def-4287-9822-0fdab879c042-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.498419 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0fa18f7-5def-4287-9822-0fdab879c042-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.715987 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b0fa18f7-5def-4287-9822-0fdab879c042","Type":"ContainerDied","Data":"377c516bceef54a570a55c03e3d4b51565a4742882c4d5d2f86a13e4d3e2c6cb"} Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.716404 4826 scope.go:117] "RemoveContainer" containerID="c6880bac666712180c24145f7eaa6cbe5c9551f466fcb4011c440e9dc42f2b14" Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.716083 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.770351 4826 scope.go:117] "RemoveContainer" containerID="71b44edcb5bb108e8f31e9f6d7cf0e50ca597393650331e132337e524ae121aa" Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.770392 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.789502 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.816859 4826 scope.go:117] "RemoveContainer" containerID="b729e02fd48aeb14131d866e4a4f5a5029b0d8a6ccd7fe764790b8b474410347" Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.837722 4826 scope.go:117] "RemoveContainer" containerID="1564e2a19bc3c0529ba9a5a1eb92272a91e604c2327db007130e3c35834f0bd4" Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.838332 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0fa18f7-5def-4287-9822-0fdab879c042" path="/var/lib/kubelet/pods/b0fa18f7-5def-4287-9822-0fdab879c042/volumes" Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.839105 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 29 08:25:06 crc kubenswrapper[4826]: E0129 08:25:06.839429 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="415db082-8ac8-4151-8163-99d6789bb22c" containerName="extract-utilities" Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.839441 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="415db082-8ac8-4151-8163-99d6789bb22c" containerName="extract-utilities" Jan 29 08:25:06 crc kubenswrapper[4826]: E0129 08:25:06.839468 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0fa18f7-5def-4287-9822-0fdab879c042" containerName="aodh-api" Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.839476 4826 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b0fa18f7-5def-4287-9822-0fdab879c042" containerName="aodh-api" Jan 29 08:25:06 crc kubenswrapper[4826]: E0129 08:25:06.839492 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="415db082-8ac8-4151-8163-99d6789bb22c" containerName="registry-server" Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.839498 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="415db082-8ac8-4151-8163-99d6789bb22c" containerName="registry-server" Jan 29 08:25:06 crc kubenswrapper[4826]: E0129 08:25:06.839509 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="415db082-8ac8-4151-8163-99d6789bb22c" containerName="extract-content" Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.839515 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="415db082-8ac8-4151-8163-99d6789bb22c" containerName="extract-content" Jan 29 08:25:06 crc kubenswrapper[4826]: E0129 08:25:06.839531 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0fa18f7-5def-4287-9822-0fdab879c042" containerName="aodh-evaluator" Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.839537 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0fa18f7-5def-4287-9822-0fdab879c042" containerName="aodh-evaluator" Jan 29 08:25:06 crc kubenswrapper[4826]: E0129 08:25:06.839548 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0fa18f7-5def-4287-9822-0fdab879c042" containerName="aodh-notifier" Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.839554 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0fa18f7-5def-4287-9822-0fdab879c042" containerName="aodh-notifier" Jan 29 08:25:06 crc kubenswrapper[4826]: E0129 08:25:06.839565 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0fa18f7-5def-4287-9822-0fdab879c042" containerName="aodh-listener" Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.839571 4826 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b0fa18f7-5def-4287-9822-0fdab879c042" containerName="aodh-listener" Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.839747 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0fa18f7-5def-4287-9822-0fdab879c042" containerName="aodh-notifier" Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.839759 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0fa18f7-5def-4287-9822-0fdab879c042" containerName="aodh-listener" Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.839771 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0fa18f7-5def-4287-9822-0fdab879c042" containerName="aodh-evaluator" Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.839782 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0fa18f7-5def-4287-9822-0fdab879c042" containerName="aodh-api" Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.839792 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="415db082-8ac8-4151-8163-99d6789bb22c" containerName="registry-server" Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.841858 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.841976 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.845822 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.845997 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.846039 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-gw9tj" Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.846086 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 29 08:25:06 crc kubenswrapper[4826]: I0129 08:25:06.846285 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Jan 29 08:25:07 crc kubenswrapper[4826]: I0129 08:25:07.008812 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b502c48d-ff95-44af-a9ad-06dc74aa731e-scripts\") pod \"aodh-0\" (UID: \"b502c48d-ff95-44af-a9ad-06dc74aa731e\") " pod="openstack/aodh-0" Jan 29 08:25:07 crc kubenswrapper[4826]: I0129 08:25:07.008915 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b502c48d-ff95-44af-a9ad-06dc74aa731e-internal-tls-certs\") pod \"aodh-0\" (UID: \"b502c48d-ff95-44af-a9ad-06dc74aa731e\") " pod="openstack/aodh-0" Jan 29 08:25:07 crc kubenswrapper[4826]: I0129 08:25:07.008984 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b502c48d-ff95-44af-a9ad-06dc74aa731e-config-data\") pod \"aodh-0\" (UID: \"b502c48d-ff95-44af-a9ad-06dc74aa731e\") " pod="openstack/aodh-0" Jan 29 08:25:07 crc kubenswrapper[4826]: I0129 
08:25:07.009047 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqmxd\" (UniqueName: \"kubernetes.io/projected/b502c48d-ff95-44af-a9ad-06dc74aa731e-kube-api-access-gqmxd\") pod \"aodh-0\" (UID: \"b502c48d-ff95-44af-a9ad-06dc74aa731e\") " pod="openstack/aodh-0" Jan 29 08:25:07 crc kubenswrapper[4826]: I0129 08:25:07.009108 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b502c48d-ff95-44af-a9ad-06dc74aa731e-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b502c48d-ff95-44af-a9ad-06dc74aa731e\") " pod="openstack/aodh-0" Jan 29 08:25:07 crc kubenswrapper[4826]: I0129 08:25:07.009140 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b502c48d-ff95-44af-a9ad-06dc74aa731e-public-tls-certs\") pod \"aodh-0\" (UID: \"b502c48d-ff95-44af-a9ad-06dc74aa731e\") " pod="openstack/aodh-0" Jan 29 08:25:07 crc kubenswrapper[4826]: I0129 08:25:07.111412 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b502c48d-ff95-44af-a9ad-06dc74aa731e-internal-tls-certs\") pod \"aodh-0\" (UID: \"b502c48d-ff95-44af-a9ad-06dc74aa731e\") " pod="openstack/aodh-0" Jan 29 08:25:07 crc kubenswrapper[4826]: I0129 08:25:07.111504 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b502c48d-ff95-44af-a9ad-06dc74aa731e-config-data\") pod \"aodh-0\" (UID: \"b502c48d-ff95-44af-a9ad-06dc74aa731e\") " pod="openstack/aodh-0" Jan 29 08:25:07 crc kubenswrapper[4826]: I0129 08:25:07.111544 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqmxd\" (UniqueName: 
\"kubernetes.io/projected/b502c48d-ff95-44af-a9ad-06dc74aa731e-kube-api-access-gqmxd\") pod \"aodh-0\" (UID: \"b502c48d-ff95-44af-a9ad-06dc74aa731e\") " pod="openstack/aodh-0" Jan 29 08:25:07 crc kubenswrapper[4826]: I0129 08:25:07.111587 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b502c48d-ff95-44af-a9ad-06dc74aa731e-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b502c48d-ff95-44af-a9ad-06dc74aa731e\") " pod="openstack/aodh-0" Jan 29 08:25:07 crc kubenswrapper[4826]: I0129 08:25:07.112055 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b502c48d-ff95-44af-a9ad-06dc74aa731e-public-tls-certs\") pod \"aodh-0\" (UID: \"b502c48d-ff95-44af-a9ad-06dc74aa731e\") " pod="openstack/aodh-0" Jan 29 08:25:07 crc kubenswrapper[4826]: I0129 08:25:07.112210 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b502c48d-ff95-44af-a9ad-06dc74aa731e-scripts\") pod \"aodh-0\" (UID: \"b502c48d-ff95-44af-a9ad-06dc74aa731e\") " pod="openstack/aodh-0" Jan 29 08:25:07 crc kubenswrapper[4826]: I0129 08:25:07.116017 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b502c48d-ff95-44af-a9ad-06dc74aa731e-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b502c48d-ff95-44af-a9ad-06dc74aa731e\") " pod="openstack/aodh-0" Jan 29 08:25:07 crc kubenswrapper[4826]: I0129 08:25:07.117078 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b502c48d-ff95-44af-a9ad-06dc74aa731e-config-data\") pod \"aodh-0\" (UID: \"b502c48d-ff95-44af-a9ad-06dc74aa731e\") " pod="openstack/aodh-0" Jan 29 08:25:07 crc kubenswrapper[4826]: I0129 08:25:07.122965 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b502c48d-ff95-44af-a9ad-06dc74aa731e-public-tls-certs\") pod \"aodh-0\" (UID: \"b502c48d-ff95-44af-a9ad-06dc74aa731e\") " pod="openstack/aodh-0" Jan 29 08:25:07 crc kubenswrapper[4826]: I0129 08:25:07.126897 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b502c48d-ff95-44af-a9ad-06dc74aa731e-scripts\") pod \"aodh-0\" (UID: \"b502c48d-ff95-44af-a9ad-06dc74aa731e\") " pod="openstack/aodh-0" Jan 29 08:25:07 crc kubenswrapper[4826]: I0129 08:25:07.134582 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b502c48d-ff95-44af-a9ad-06dc74aa731e-internal-tls-certs\") pod \"aodh-0\" (UID: \"b502c48d-ff95-44af-a9ad-06dc74aa731e\") " pod="openstack/aodh-0" Jan 29 08:25:07 crc kubenswrapper[4826]: I0129 08:25:07.139241 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqmxd\" (UniqueName: \"kubernetes.io/projected/b502c48d-ff95-44af-a9ad-06dc74aa731e-kube-api-access-gqmxd\") pod \"aodh-0\" (UID: \"b502c48d-ff95-44af-a9ad-06dc74aa731e\") " pod="openstack/aodh-0" Jan 29 08:25:07 crc kubenswrapper[4826]: I0129 08:25:07.169648 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 29 08:25:07 crc kubenswrapper[4826]: I0129 08:25:07.695343 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 29 08:25:07 crc kubenswrapper[4826]: W0129 08:25:07.699737 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb502c48d_ff95_44af_a9ad_06dc74aa731e.slice/crio-5572a1663fd668a84ca9e7e901e19fb8cdcba5fc87e70c995e1d5b047e74bc05 WatchSource:0}: Error finding container 5572a1663fd668a84ca9e7e901e19fb8cdcba5fc87e70c995e1d5b047e74bc05: Status 404 returned error can't find the container with id 5572a1663fd668a84ca9e7e901e19fb8cdcba5fc87e70c995e1d5b047e74bc05 Jan 29 08:25:07 crc kubenswrapper[4826]: I0129 08:25:07.729508 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b502c48d-ff95-44af-a9ad-06dc74aa731e","Type":"ContainerStarted","Data":"5572a1663fd668a84ca9e7e901e19fb8cdcba5fc87e70c995e1d5b047e74bc05"} Jan 29 08:25:08 crc kubenswrapper[4826]: I0129 08:25:08.742522 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b502c48d-ff95-44af-a9ad-06dc74aa731e","Type":"ContainerStarted","Data":"0d0cb09267a1c385644b47d75029eec1ebea22918861a4487dba254d346a978c"} Jan 29 08:25:09 crc kubenswrapper[4826]: I0129 08:25:09.782940 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b502c48d-ff95-44af-a9ad-06dc74aa731e","Type":"ContainerStarted","Data":"4ff46e517e15d154d8b8594e3bcaf699f65c29e570e35de2f07bb7b697d43453"} Jan 29 08:25:09 crc kubenswrapper[4826]: I0129 08:25:09.783585 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b502c48d-ff95-44af-a9ad-06dc74aa731e","Type":"ContainerStarted","Data":"442022c58f80bd91e106c8a1be8dff4db8140d582ed040f48445f1da3883569f"} Jan 29 08:25:09 crc kubenswrapper[4826]: I0129 08:25:09.824992 4826 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/dnsmasq-dns-5fb45fcfcf-6nf5s"] Jan 29 08:25:09 crc kubenswrapper[4826]: I0129 08:25:09.844382 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fb45fcfcf-6nf5s"] Jan 29 08:25:09 crc kubenswrapper[4826]: I0129 08:25:09.844509 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fb45fcfcf-6nf5s" Jan 29 08:25:09 crc kubenswrapper[4826]: I0129 08:25:09.850208 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1" Jan 29 08:25:09 crc kubenswrapper[4826]: I0129 08:25:09.979981 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f112179f-7da7-4633-b2b6-3f8e56358a0d-config\") pod \"dnsmasq-dns-5fb45fcfcf-6nf5s\" (UID: \"f112179f-7da7-4633-b2b6-3f8e56358a0d\") " pod="openstack/dnsmasq-dns-5fb45fcfcf-6nf5s" Jan 29 08:25:09 crc kubenswrapper[4826]: I0129 08:25:09.980028 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f112179f-7da7-4633-b2b6-3f8e56358a0d-dns-svc\") pod \"dnsmasq-dns-5fb45fcfcf-6nf5s\" (UID: \"f112179f-7da7-4633-b2b6-3f8e56358a0d\") " pod="openstack/dnsmasq-dns-5fb45fcfcf-6nf5s" Jan 29 08:25:09 crc kubenswrapper[4826]: I0129 08:25:09.980062 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f112179f-7da7-4633-b2b6-3f8e56358a0d-ovsdbserver-nb\") pod \"dnsmasq-dns-5fb45fcfcf-6nf5s\" (UID: \"f112179f-7da7-4633-b2b6-3f8e56358a0d\") " pod="openstack/dnsmasq-dns-5fb45fcfcf-6nf5s" Jan 29 08:25:09 crc kubenswrapper[4826]: I0129 08:25:09.980155 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: 
\"kubernetes.io/configmap/f112179f-7da7-4633-b2b6-3f8e56358a0d-openstack-cell1\") pod \"dnsmasq-dns-5fb45fcfcf-6nf5s\" (UID: \"f112179f-7da7-4633-b2b6-3f8e56358a0d\") " pod="openstack/dnsmasq-dns-5fb45fcfcf-6nf5s" Jan 29 08:25:09 crc kubenswrapper[4826]: I0129 08:25:09.980201 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f112179f-7da7-4633-b2b6-3f8e56358a0d-ovsdbserver-sb\") pod \"dnsmasq-dns-5fb45fcfcf-6nf5s\" (UID: \"f112179f-7da7-4633-b2b6-3f8e56358a0d\") " pod="openstack/dnsmasq-dns-5fb45fcfcf-6nf5s" Jan 29 08:25:09 crc kubenswrapper[4826]: I0129 08:25:09.980218 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w6kt\" (UniqueName: \"kubernetes.io/projected/f112179f-7da7-4633-b2b6-3f8e56358a0d-kube-api-access-8w6kt\") pod \"dnsmasq-dns-5fb45fcfcf-6nf5s\" (UID: \"f112179f-7da7-4633-b2b6-3f8e56358a0d\") " pod="openstack/dnsmasq-dns-5fb45fcfcf-6nf5s" Jan 29 08:25:10 crc kubenswrapper[4826]: I0129 08:25:10.081767 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/f112179f-7da7-4633-b2b6-3f8e56358a0d-openstack-cell1\") pod \"dnsmasq-dns-5fb45fcfcf-6nf5s\" (UID: \"f112179f-7da7-4633-b2b6-3f8e56358a0d\") " pod="openstack/dnsmasq-dns-5fb45fcfcf-6nf5s" Jan 29 08:25:10 crc kubenswrapper[4826]: I0129 08:25:10.082178 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f112179f-7da7-4633-b2b6-3f8e56358a0d-ovsdbserver-sb\") pod \"dnsmasq-dns-5fb45fcfcf-6nf5s\" (UID: \"f112179f-7da7-4633-b2b6-3f8e56358a0d\") " pod="openstack/dnsmasq-dns-5fb45fcfcf-6nf5s" Jan 29 08:25:10 crc kubenswrapper[4826]: I0129 08:25:10.082210 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w6kt\" 
(UniqueName: \"kubernetes.io/projected/f112179f-7da7-4633-b2b6-3f8e56358a0d-kube-api-access-8w6kt\") pod \"dnsmasq-dns-5fb45fcfcf-6nf5s\" (UID: \"f112179f-7da7-4633-b2b6-3f8e56358a0d\") " pod="openstack/dnsmasq-dns-5fb45fcfcf-6nf5s" Jan 29 08:25:10 crc kubenswrapper[4826]: I0129 08:25:10.082368 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f112179f-7da7-4633-b2b6-3f8e56358a0d-config\") pod \"dnsmasq-dns-5fb45fcfcf-6nf5s\" (UID: \"f112179f-7da7-4633-b2b6-3f8e56358a0d\") " pod="openstack/dnsmasq-dns-5fb45fcfcf-6nf5s" Jan 29 08:25:10 crc kubenswrapper[4826]: I0129 08:25:10.082462 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f112179f-7da7-4633-b2b6-3f8e56358a0d-dns-svc\") pod \"dnsmasq-dns-5fb45fcfcf-6nf5s\" (UID: \"f112179f-7da7-4633-b2b6-3f8e56358a0d\") " pod="openstack/dnsmasq-dns-5fb45fcfcf-6nf5s" Jan 29 08:25:10 crc kubenswrapper[4826]: I0129 08:25:10.082496 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f112179f-7da7-4633-b2b6-3f8e56358a0d-ovsdbserver-nb\") pod \"dnsmasq-dns-5fb45fcfcf-6nf5s\" (UID: \"f112179f-7da7-4633-b2b6-3f8e56358a0d\") " pod="openstack/dnsmasq-dns-5fb45fcfcf-6nf5s" Jan 29 08:25:10 crc kubenswrapper[4826]: I0129 08:25:10.082894 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/f112179f-7da7-4633-b2b6-3f8e56358a0d-openstack-cell1\") pod \"dnsmasq-dns-5fb45fcfcf-6nf5s\" (UID: \"f112179f-7da7-4633-b2b6-3f8e56358a0d\") " pod="openstack/dnsmasq-dns-5fb45fcfcf-6nf5s" Jan 29 08:25:10 crc kubenswrapper[4826]: I0129 08:25:10.083714 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/f112179f-7da7-4633-b2b6-3f8e56358a0d-ovsdbserver-nb\") pod \"dnsmasq-dns-5fb45fcfcf-6nf5s\" (UID: \"f112179f-7da7-4633-b2b6-3f8e56358a0d\") " pod="openstack/dnsmasq-dns-5fb45fcfcf-6nf5s" Jan 29 08:25:10 crc kubenswrapper[4826]: I0129 08:25:10.083786 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f112179f-7da7-4633-b2b6-3f8e56358a0d-config\") pod \"dnsmasq-dns-5fb45fcfcf-6nf5s\" (UID: \"f112179f-7da7-4633-b2b6-3f8e56358a0d\") " pod="openstack/dnsmasq-dns-5fb45fcfcf-6nf5s" Jan 29 08:25:10 crc kubenswrapper[4826]: I0129 08:25:10.085063 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f112179f-7da7-4633-b2b6-3f8e56358a0d-dns-svc\") pod \"dnsmasq-dns-5fb45fcfcf-6nf5s\" (UID: \"f112179f-7da7-4633-b2b6-3f8e56358a0d\") " pod="openstack/dnsmasq-dns-5fb45fcfcf-6nf5s" Jan 29 08:25:10 crc kubenswrapper[4826]: I0129 08:25:10.086032 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f112179f-7da7-4633-b2b6-3f8e56358a0d-ovsdbserver-sb\") pod \"dnsmasq-dns-5fb45fcfcf-6nf5s\" (UID: \"f112179f-7da7-4633-b2b6-3f8e56358a0d\") " pod="openstack/dnsmasq-dns-5fb45fcfcf-6nf5s" Jan 29 08:25:10 crc kubenswrapper[4826]: I0129 08:25:10.102347 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w6kt\" (UniqueName: \"kubernetes.io/projected/f112179f-7da7-4633-b2b6-3f8e56358a0d-kube-api-access-8w6kt\") pod \"dnsmasq-dns-5fb45fcfcf-6nf5s\" (UID: \"f112179f-7da7-4633-b2b6-3f8e56358a0d\") " pod="openstack/dnsmasq-dns-5fb45fcfcf-6nf5s" Jan 29 08:25:10 crc kubenswrapper[4826]: I0129 08:25:10.168219 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fb45fcfcf-6nf5s" Jan 29 08:25:10 crc kubenswrapper[4826]: I0129 08:25:10.723851 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fb45fcfcf-6nf5s"] Jan 29 08:25:10 crc kubenswrapper[4826]: I0129 08:25:10.801783 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b502c48d-ff95-44af-a9ad-06dc74aa731e","Type":"ContainerStarted","Data":"af844f0b5a1a1dca1ea0727a7f5b0fe801e011d2d397320889b154a1e14597ad"} Jan 29 08:25:10 crc kubenswrapper[4826]: I0129 08:25:10.804703 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fb45fcfcf-6nf5s" event={"ID":"f112179f-7da7-4633-b2b6-3f8e56358a0d","Type":"ContainerStarted","Data":"694053eb29eec952010c0150f86ed4a293b92b1fd235630a35da7afeca5fe67f"} Jan 29 08:25:10 crc kubenswrapper[4826]: I0129 08:25:10.852391 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.9360993520000003 podStartE2EDuration="4.852373002s" podCreationTimestamp="2026-01-29 08:25:06 +0000 UTC" firstStartedPulling="2026-01-29 08:25:07.704518381 +0000 UTC m=+6091.566311450" lastFinishedPulling="2026-01-29 08:25:09.620792031 +0000 UTC m=+6093.482585100" observedRunningTime="2026-01-29 08:25:10.834565713 +0000 UTC m=+6094.696358782" watchObservedRunningTime="2026-01-29 08:25:10.852373002 +0000 UTC m=+6094.714166071" Jan 29 08:25:11 crc kubenswrapper[4826]: I0129 08:25:11.124254 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 29 08:25:11 crc kubenswrapper[4826]: I0129 08:25:11.814070 4826 generic.go:334] "Generic (PLEG): container finished" podID="f112179f-7da7-4633-b2b6-3f8e56358a0d" containerID="0da7f093c7b729647cc01a584a2f686c18ac2d298833ca3682d5c5b5892e626e" exitCode=0 Jan 29 08:25:11 crc kubenswrapper[4826]: I0129 08:25:11.814183 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5fb45fcfcf-6nf5s" event={"ID":"f112179f-7da7-4633-b2b6-3f8e56358a0d","Type":"ContainerDied","Data":"0da7f093c7b729647cc01a584a2f686c18ac2d298833ca3682d5c5b5892e626e"} Jan 29 08:25:12 crc kubenswrapper[4826]: I0129 08:25:12.826785 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fb45fcfcf-6nf5s" event={"ID":"f112179f-7da7-4633-b2b6-3f8e56358a0d","Type":"ContainerStarted","Data":"5db4e44ccaa87c2bc759cd4f1675756db71d040c2f2e186eaee3506f62437e8d"} Jan 29 08:25:12 crc kubenswrapper[4826]: I0129 08:25:12.827435 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5fb45fcfcf-6nf5s" Jan 29 08:25:12 crc kubenswrapper[4826]: I0129 08:25:12.856174 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5fb45fcfcf-6nf5s" podStartSLOduration=3.856148636 podStartE2EDuration="3.856148636s" podCreationTimestamp="2026-01-29 08:25:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:25:12.847114898 +0000 UTC m=+6096.708907987" watchObservedRunningTime="2026-01-29 08:25:12.856148636 +0000 UTC m=+6096.717941725" Jan 29 08:25:14 crc kubenswrapper[4826]: I0129 08:25:14.161049 4826 scope.go:117] "RemoveContainer" containerID="952b43d896912a3cda5e612a6c0f46d793202eff6352f27fa018ee360258c570" Jan 29 08:25:14 crc kubenswrapper[4826]: E0129 08:25:14.162260 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:25:16 crc kubenswrapper[4826]: I0129 08:25:16.046326 4826 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b054-account-create-update-sjkrb"] Jan 29 08:25:16 crc kubenswrapper[4826]: I0129 08:25:16.061508 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-68dff"] Jan 29 08:25:16 crc kubenswrapper[4826]: I0129 08:25:16.072620 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-68dff"] Jan 29 08:25:16 crc kubenswrapper[4826]: I0129 08:25:16.083864 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-b054-account-create-update-sjkrb"] Jan 29 08:25:16 crc kubenswrapper[4826]: I0129 08:25:16.821946 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20e86956-b407-4607-823d-754830701852" path="/var/lib/kubelet/pods/20e86956-b407-4607-823d-754830701852/volumes" Jan 29 08:25:16 crc kubenswrapper[4826]: I0129 08:25:16.823130 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="372c37cb-4f61-42ba-8bd8-d744b414f501" path="/var/lib/kubelet/pods/372c37cb-4f61-42ba-8bd8-d744b414f501/volumes" Jan 29 08:25:20 crc kubenswrapper[4826]: I0129 08:25:20.171470 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5fb45fcfcf-6nf5s" Jan 29 08:25:20 crc kubenswrapper[4826]: I0129 08:25:20.258181 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74d5b89cc-k22tf"] Jan 29 08:25:20 crc kubenswrapper[4826]: I0129 08:25:20.258807 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74d5b89cc-k22tf" podUID="08eec373-3907-4e7f-999e-10bec4fa374d" containerName="dnsmasq-dns" containerID="cri-o://f03fb4d1143aa041ed08af1f079d8cb290d684d0e2910f5000a634d0278b2be2" gracePeriod=10 Jan 29 08:25:20 crc kubenswrapper[4826]: I0129 08:25:20.425173 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74bdfdf6c9-sjbjs"] Jan 29 08:25:20 crc 
kubenswrapper[4826]: I0129 08:25:20.429249 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74bdfdf6c9-sjbjs" Jan 29 08:25:20 crc kubenswrapper[4826]: I0129 08:25:20.441132 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74bdfdf6c9-sjbjs"] Jan 29 08:25:20 crc kubenswrapper[4826]: I0129 08:25:20.493124 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcd5r\" (UniqueName: \"kubernetes.io/projected/ebef110f-fd34-4dc3-b8b4-346cd06b189b-kube-api-access-wcd5r\") pod \"dnsmasq-dns-74bdfdf6c9-sjbjs\" (UID: \"ebef110f-fd34-4dc3-b8b4-346cd06b189b\") " pod="openstack/dnsmasq-dns-74bdfdf6c9-sjbjs" Jan 29 08:25:20 crc kubenswrapper[4826]: I0129 08:25:20.493193 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebef110f-fd34-4dc3-b8b4-346cd06b189b-ovsdbserver-nb\") pod \"dnsmasq-dns-74bdfdf6c9-sjbjs\" (UID: \"ebef110f-fd34-4dc3-b8b4-346cd06b189b\") " pod="openstack/dnsmasq-dns-74bdfdf6c9-sjbjs" Jan 29 08:25:20 crc kubenswrapper[4826]: I0129 08:25:20.493222 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebef110f-fd34-4dc3-b8b4-346cd06b189b-dns-svc\") pod \"dnsmasq-dns-74bdfdf6c9-sjbjs\" (UID: \"ebef110f-fd34-4dc3-b8b4-346cd06b189b\") " pod="openstack/dnsmasq-dns-74bdfdf6c9-sjbjs" Jan 29 08:25:20 crc kubenswrapper[4826]: I0129 08:25:20.493261 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/ebef110f-fd34-4dc3-b8b4-346cd06b189b-openstack-cell1\") pod \"dnsmasq-dns-74bdfdf6c9-sjbjs\" (UID: \"ebef110f-fd34-4dc3-b8b4-346cd06b189b\") " pod="openstack/dnsmasq-dns-74bdfdf6c9-sjbjs" Jan 29 08:25:20 crc 
kubenswrapper[4826]: I0129 08:25:20.493355 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebef110f-fd34-4dc3-b8b4-346cd06b189b-ovsdbserver-sb\") pod \"dnsmasq-dns-74bdfdf6c9-sjbjs\" (UID: \"ebef110f-fd34-4dc3-b8b4-346cd06b189b\") " pod="openstack/dnsmasq-dns-74bdfdf6c9-sjbjs" Jan 29 08:25:20 crc kubenswrapper[4826]: I0129 08:25:20.493483 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebef110f-fd34-4dc3-b8b4-346cd06b189b-config\") pod \"dnsmasq-dns-74bdfdf6c9-sjbjs\" (UID: \"ebef110f-fd34-4dc3-b8b4-346cd06b189b\") " pod="openstack/dnsmasq-dns-74bdfdf6c9-sjbjs" Jan 29 08:25:20 crc kubenswrapper[4826]: I0129 08:25:20.594078 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebef110f-fd34-4dc3-b8b4-346cd06b189b-ovsdbserver-sb\") pod \"dnsmasq-dns-74bdfdf6c9-sjbjs\" (UID: \"ebef110f-fd34-4dc3-b8b4-346cd06b189b\") " pod="openstack/dnsmasq-dns-74bdfdf6c9-sjbjs" Jan 29 08:25:20 crc kubenswrapper[4826]: I0129 08:25:20.594198 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebef110f-fd34-4dc3-b8b4-346cd06b189b-config\") pod \"dnsmasq-dns-74bdfdf6c9-sjbjs\" (UID: \"ebef110f-fd34-4dc3-b8b4-346cd06b189b\") " pod="openstack/dnsmasq-dns-74bdfdf6c9-sjbjs" Jan 29 08:25:20 crc kubenswrapper[4826]: I0129 08:25:20.594236 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcd5r\" (UniqueName: \"kubernetes.io/projected/ebef110f-fd34-4dc3-b8b4-346cd06b189b-kube-api-access-wcd5r\") pod \"dnsmasq-dns-74bdfdf6c9-sjbjs\" (UID: \"ebef110f-fd34-4dc3-b8b4-346cd06b189b\") " pod="openstack/dnsmasq-dns-74bdfdf6c9-sjbjs" Jan 29 08:25:20 crc kubenswrapper[4826]: I0129 
08:25:20.594274 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebef110f-fd34-4dc3-b8b4-346cd06b189b-ovsdbserver-nb\") pod \"dnsmasq-dns-74bdfdf6c9-sjbjs\" (UID: \"ebef110f-fd34-4dc3-b8b4-346cd06b189b\") " pod="openstack/dnsmasq-dns-74bdfdf6c9-sjbjs" Jan 29 08:25:20 crc kubenswrapper[4826]: I0129 08:25:20.594333 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebef110f-fd34-4dc3-b8b4-346cd06b189b-dns-svc\") pod \"dnsmasq-dns-74bdfdf6c9-sjbjs\" (UID: \"ebef110f-fd34-4dc3-b8b4-346cd06b189b\") " pod="openstack/dnsmasq-dns-74bdfdf6c9-sjbjs" Jan 29 08:25:20 crc kubenswrapper[4826]: I0129 08:25:20.594357 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/ebef110f-fd34-4dc3-b8b4-346cd06b189b-openstack-cell1\") pod \"dnsmasq-dns-74bdfdf6c9-sjbjs\" (UID: \"ebef110f-fd34-4dc3-b8b4-346cd06b189b\") " pod="openstack/dnsmasq-dns-74bdfdf6c9-sjbjs" Jan 29 08:25:20 crc kubenswrapper[4826]: I0129 08:25:20.595099 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/ebef110f-fd34-4dc3-b8b4-346cd06b189b-openstack-cell1\") pod \"dnsmasq-dns-74bdfdf6c9-sjbjs\" (UID: \"ebef110f-fd34-4dc3-b8b4-346cd06b189b\") " pod="openstack/dnsmasq-dns-74bdfdf6c9-sjbjs" Jan 29 08:25:20 crc kubenswrapper[4826]: I0129 08:25:20.595641 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebef110f-fd34-4dc3-b8b4-346cd06b189b-ovsdbserver-sb\") pod \"dnsmasq-dns-74bdfdf6c9-sjbjs\" (UID: \"ebef110f-fd34-4dc3-b8b4-346cd06b189b\") " pod="openstack/dnsmasq-dns-74bdfdf6c9-sjbjs" Jan 29 08:25:20 crc kubenswrapper[4826]: I0129 08:25:20.596365 4826 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebef110f-fd34-4dc3-b8b4-346cd06b189b-config\") pod \"dnsmasq-dns-74bdfdf6c9-sjbjs\" (UID: \"ebef110f-fd34-4dc3-b8b4-346cd06b189b\") " pod="openstack/dnsmasq-dns-74bdfdf6c9-sjbjs" Jan 29 08:25:20 crc kubenswrapper[4826]: I0129 08:25:20.596632 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebef110f-fd34-4dc3-b8b4-346cd06b189b-ovsdbserver-nb\") pod \"dnsmasq-dns-74bdfdf6c9-sjbjs\" (UID: \"ebef110f-fd34-4dc3-b8b4-346cd06b189b\") " pod="openstack/dnsmasq-dns-74bdfdf6c9-sjbjs" Jan 29 08:25:20 crc kubenswrapper[4826]: I0129 08:25:20.597026 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebef110f-fd34-4dc3-b8b4-346cd06b189b-dns-svc\") pod \"dnsmasq-dns-74bdfdf6c9-sjbjs\" (UID: \"ebef110f-fd34-4dc3-b8b4-346cd06b189b\") " pod="openstack/dnsmasq-dns-74bdfdf6c9-sjbjs" Jan 29 08:25:20 crc kubenswrapper[4826]: I0129 08:25:20.613925 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcd5r\" (UniqueName: \"kubernetes.io/projected/ebef110f-fd34-4dc3-b8b4-346cd06b189b-kube-api-access-wcd5r\") pod \"dnsmasq-dns-74bdfdf6c9-sjbjs\" (UID: \"ebef110f-fd34-4dc3-b8b4-346cd06b189b\") " pod="openstack/dnsmasq-dns-74bdfdf6c9-sjbjs" Jan 29 08:25:20 crc kubenswrapper[4826]: I0129 08:25:20.768094 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74bdfdf6c9-sjbjs" Jan 29 08:25:20 crc kubenswrapper[4826]: I0129 08:25:20.886081 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74d5b89cc-k22tf" Jan 29 08:25:20 crc kubenswrapper[4826]: I0129 08:25:20.902342 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08eec373-3907-4e7f-999e-10bec4fa374d-ovsdbserver-nb\") pod \"08eec373-3907-4e7f-999e-10bec4fa374d\" (UID: \"08eec373-3907-4e7f-999e-10bec4fa374d\") " Jan 29 08:25:20 crc kubenswrapper[4826]: I0129 08:25:20.902417 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d42nq\" (UniqueName: \"kubernetes.io/projected/08eec373-3907-4e7f-999e-10bec4fa374d-kube-api-access-d42nq\") pod \"08eec373-3907-4e7f-999e-10bec4fa374d\" (UID: \"08eec373-3907-4e7f-999e-10bec4fa374d\") " Jan 29 08:25:20 crc kubenswrapper[4826]: I0129 08:25:20.902490 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08eec373-3907-4e7f-999e-10bec4fa374d-config\") pod \"08eec373-3907-4e7f-999e-10bec4fa374d\" (UID: \"08eec373-3907-4e7f-999e-10bec4fa374d\") " Jan 29 08:25:20 crc kubenswrapper[4826]: I0129 08:25:20.902543 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08eec373-3907-4e7f-999e-10bec4fa374d-dns-svc\") pod \"08eec373-3907-4e7f-999e-10bec4fa374d\" (UID: \"08eec373-3907-4e7f-999e-10bec4fa374d\") " Jan 29 08:25:20 crc kubenswrapper[4826]: I0129 08:25:20.902575 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08eec373-3907-4e7f-999e-10bec4fa374d-ovsdbserver-sb\") pod \"08eec373-3907-4e7f-999e-10bec4fa374d\" (UID: \"08eec373-3907-4e7f-999e-10bec4fa374d\") " Jan 29 08:25:20 crc kubenswrapper[4826]: I0129 08:25:20.908534 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/08eec373-3907-4e7f-999e-10bec4fa374d-kube-api-access-d42nq" (OuterVolumeSpecName: "kube-api-access-d42nq") pod "08eec373-3907-4e7f-999e-10bec4fa374d" (UID: "08eec373-3907-4e7f-999e-10bec4fa374d"). InnerVolumeSpecName "kube-api-access-d42nq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:25:20 crc kubenswrapper[4826]: I0129 08:25:20.969662 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08eec373-3907-4e7f-999e-10bec4fa374d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "08eec373-3907-4e7f-999e-10bec4fa374d" (UID: "08eec373-3907-4e7f-999e-10bec4fa374d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:25:20 crc kubenswrapper[4826]: I0129 08:25:20.970151 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08eec373-3907-4e7f-999e-10bec4fa374d-config" (OuterVolumeSpecName: "config") pod "08eec373-3907-4e7f-999e-10bec4fa374d" (UID: "08eec373-3907-4e7f-999e-10bec4fa374d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:25:20 crc kubenswrapper[4826]: I0129 08:25:20.972765 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08eec373-3907-4e7f-999e-10bec4fa374d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "08eec373-3907-4e7f-999e-10bec4fa374d" (UID: "08eec373-3907-4e7f-999e-10bec4fa374d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:25:20 crc kubenswrapper[4826]: I0129 08:25:20.984095 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08eec373-3907-4e7f-999e-10bec4fa374d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "08eec373-3907-4e7f-999e-10bec4fa374d" (UID: "08eec373-3907-4e7f-999e-10bec4fa374d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:25:21 crc kubenswrapper[4826]: I0129 08:25:21.008140 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08eec373-3907-4e7f-999e-10bec4fa374d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 08:25:21 crc kubenswrapper[4826]: I0129 08:25:21.008176 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d42nq\" (UniqueName: \"kubernetes.io/projected/08eec373-3907-4e7f-999e-10bec4fa374d-kube-api-access-d42nq\") on node \"crc\" DevicePath \"\"" Jan 29 08:25:21 crc kubenswrapper[4826]: I0129 08:25:21.008190 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08eec373-3907-4e7f-999e-10bec4fa374d-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:25:21 crc kubenswrapper[4826]: I0129 08:25:21.008199 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08eec373-3907-4e7f-999e-10bec4fa374d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 08:25:21 crc kubenswrapper[4826]: I0129 08:25:21.008208 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08eec373-3907-4e7f-999e-10bec4fa374d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 08:25:21 crc kubenswrapper[4826]: I0129 08:25:21.230280 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74bdfdf6c9-sjbjs"] Jan 29 08:25:21 crc kubenswrapper[4826]: W0129 08:25:21.235815 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebef110f_fd34_4dc3_b8b4_346cd06b189b.slice/crio-a32f547f1b4423398935839ba2554ba2fa518e38a5ae7143c82d8d0d8b910173 WatchSource:0}: Error finding container a32f547f1b4423398935839ba2554ba2fa518e38a5ae7143c82d8d0d8b910173: Status 404 returned error can't find 
the container with id a32f547f1b4423398935839ba2554ba2fa518e38a5ae7143c82d8d0d8b910173 Jan 29 08:25:21 crc kubenswrapper[4826]: I0129 08:25:21.249365 4826 generic.go:334] "Generic (PLEG): container finished" podID="08eec373-3907-4e7f-999e-10bec4fa374d" containerID="f03fb4d1143aa041ed08af1f079d8cb290d684d0e2910f5000a634d0278b2be2" exitCode=0 Jan 29 08:25:21 crc kubenswrapper[4826]: I0129 08:25:21.249417 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74d5b89cc-k22tf" event={"ID":"08eec373-3907-4e7f-999e-10bec4fa374d","Type":"ContainerDied","Data":"f03fb4d1143aa041ed08af1f079d8cb290d684d0e2910f5000a634d0278b2be2"} Jan 29 08:25:21 crc kubenswrapper[4826]: I0129 08:25:21.249471 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74d5b89cc-k22tf" event={"ID":"08eec373-3907-4e7f-999e-10bec4fa374d","Type":"ContainerDied","Data":"aadba11c8bddfc3bee96e5939fa4ad7393bc9c87ceb349e6eb8e7ca263707582"} Jan 29 08:25:21 crc kubenswrapper[4826]: I0129 08:25:21.249503 4826 scope.go:117] "RemoveContainer" containerID="f03fb4d1143aa041ed08af1f079d8cb290d684d0e2910f5000a634d0278b2be2" Jan 29 08:25:21 crc kubenswrapper[4826]: I0129 08:25:21.249713 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74d5b89cc-k22tf" Jan 29 08:25:21 crc kubenswrapper[4826]: I0129 08:25:21.286132 4826 scope.go:117] "RemoveContainer" containerID="d4a0a239394cfe4d1a436f5f0f6c2e48e1be8011071cfdf07f6b684d831443c9" Jan 29 08:25:21 crc kubenswrapper[4826]: I0129 08:25:21.301830 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74d5b89cc-k22tf"] Jan 29 08:25:21 crc kubenswrapper[4826]: I0129 08:25:21.310827 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74d5b89cc-k22tf"] Jan 29 08:25:21 crc kubenswrapper[4826]: I0129 08:25:21.358063 4826 scope.go:117] "RemoveContainer" containerID="f03fb4d1143aa041ed08af1f079d8cb290d684d0e2910f5000a634d0278b2be2" Jan 29 08:25:21 crc kubenswrapper[4826]: E0129 08:25:21.363202 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f03fb4d1143aa041ed08af1f079d8cb290d684d0e2910f5000a634d0278b2be2\": container with ID starting with f03fb4d1143aa041ed08af1f079d8cb290d684d0e2910f5000a634d0278b2be2 not found: ID does not exist" containerID="f03fb4d1143aa041ed08af1f079d8cb290d684d0e2910f5000a634d0278b2be2" Jan 29 08:25:21 crc kubenswrapper[4826]: I0129 08:25:21.363235 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f03fb4d1143aa041ed08af1f079d8cb290d684d0e2910f5000a634d0278b2be2"} err="failed to get container status \"f03fb4d1143aa041ed08af1f079d8cb290d684d0e2910f5000a634d0278b2be2\": rpc error: code = NotFound desc = could not find container \"f03fb4d1143aa041ed08af1f079d8cb290d684d0e2910f5000a634d0278b2be2\": container with ID starting with f03fb4d1143aa041ed08af1f079d8cb290d684d0e2910f5000a634d0278b2be2 not found: ID does not exist" Jan 29 08:25:21 crc kubenswrapper[4826]: I0129 08:25:21.363255 4826 scope.go:117] "RemoveContainer" containerID="d4a0a239394cfe4d1a436f5f0f6c2e48e1be8011071cfdf07f6b684d831443c9" Jan 29 
08:25:21 crc kubenswrapper[4826]: E0129 08:25:21.363864 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4a0a239394cfe4d1a436f5f0f6c2e48e1be8011071cfdf07f6b684d831443c9\": container with ID starting with d4a0a239394cfe4d1a436f5f0f6c2e48e1be8011071cfdf07f6b684d831443c9 not found: ID does not exist" containerID="d4a0a239394cfe4d1a436f5f0f6c2e48e1be8011071cfdf07f6b684d831443c9" Jan 29 08:25:21 crc kubenswrapper[4826]: I0129 08:25:21.363886 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4a0a239394cfe4d1a436f5f0f6c2e48e1be8011071cfdf07f6b684d831443c9"} err="failed to get container status \"d4a0a239394cfe4d1a436f5f0f6c2e48e1be8011071cfdf07f6b684d831443c9\": rpc error: code = NotFound desc = could not find container \"d4a0a239394cfe4d1a436f5f0f6c2e48e1be8011071cfdf07f6b684d831443c9\": container with ID starting with d4a0a239394cfe4d1a436f5f0f6c2e48e1be8011071cfdf07f6b684d831443c9 not found: ID does not exist" Jan 29 08:25:22 crc kubenswrapper[4826]: I0129 08:25:22.261179 4826 generic.go:334] "Generic (PLEG): container finished" podID="ebef110f-fd34-4dc3-b8b4-346cd06b189b" containerID="cb57175c31a93000fafa026d20187d9c6841a8ff14bda5854dc62e7a28f5657a" exitCode=0 Jan 29 08:25:22 crc kubenswrapper[4826]: I0129 08:25:22.261330 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74bdfdf6c9-sjbjs" event={"ID":"ebef110f-fd34-4dc3-b8b4-346cd06b189b","Type":"ContainerDied","Data":"cb57175c31a93000fafa026d20187d9c6841a8ff14bda5854dc62e7a28f5657a"} Jan 29 08:25:22 crc kubenswrapper[4826]: I0129 08:25:22.261599 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74bdfdf6c9-sjbjs" event={"ID":"ebef110f-fd34-4dc3-b8b4-346cd06b189b","Type":"ContainerStarted","Data":"a32f547f1b4423398935839ba2554ba2fa518e38a5ae7143c82d8d0d8b910173"} Jan 29 08:25:22 crc kubenswrapper[4826]: I0129 08:25:22.651053 
4826 scope.go:117] "RemoveContainer" containerID="586f0b6f5979b804d19efc9c987dba5825e6e00707e61358786c72c77c9beea0" Jan 29 08:25:22 crc kubenswrapper[4826]: I0129 08:25:22.697287 4826 scope.go:117] "RemoveContainer" containerID="777bc48a98c5819b956035a3928bc4b93ab4fd15aee51a6053cc1caec1c06b3f" Jan 29 08:25:22 crc kubenswrapper[4826]: I0129 08:25:22.823476 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08eec373-3907-4e7f-999e-10bec4fa374d" path="/var/lib/kubelet/pods/08eec373-3907-4e7f-999e-10bec4fa374d/volumes" Jan 29 08:25:23 crc kubenswrapper[4826]: I0129 08:25:23.279095 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74bdfdf6c9-sjbjs" event={"ID":"ebef110f-fd34-4dc3-b8b4-346cd06b189b","Type":"ContainerStarted","Data":"f2a5e9dcc697cdaab440d17cef2a2f17daa332345bef47d848dd890388bc8863"} Jan 29 08:25:23 crc kubenswrapper[4826]: I0129 08:25:23.279510 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74bdfdf6c9-sjbjs" Jan 29 08:25:23 crc kubenswrapper[4826]: I0129 08:25:23.307593 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74bdfdf6c9-sjbjs" podStartSLOduration=3.30756965 podStartE2EDuration="3.30756965s" podCreationTimestamp="2026-01-29 08:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:25:23.299806105 +0000 UTC m=+6107.161599194" watchObservedRunningTime="2026-01-29 08:25:23.30756965 +0000 UTC m=+6107.169362719" Jan 29 08:25:27 crc kubenswrapper[4826]: I0129 08:25:27.809675 4826 scope.go:117] "RemoveContainer" containerID="952b43d896912a3cda5e612a6c0f46d793202eff6352f27fa018ee360258c570" Jan 29 08:25:27 crc kubenswrapper[4826]: E0129 08:25:27.810451 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:25:30 crc kubenswrapper[4826]: I0129 08:25:30.769435 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74bdfdf6c9-sjbjs" Jan 29 08:25:30 crc kubenswrapper[4826]: I0129 08:25:30.838462 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fb45fcfcf-6nf5s"] Jan 29 08:25:30 crc kubenswrapper[4826]: I0129 08:25:30.838749 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5fb45fcfcf-6nf5s" podUID="f112179f-7da7-4633-b2b6-3f8e56358a0d" containerName="dnsmasq-dns" containerID="cri-o://5db4e44ccaa87c2bc759cd4f1675756db71d040c2f2e186eaee3506f62437e8d" gracePeriod=10 Jan 29 08:25:31 crc kubenswrapper[4826]: I0129 08:25:31.373225 4826 generic.go:334] "Generic (PLEG): container finished" podID="f112179f-7da7-4633-b2b6-3f8e56358a0d" containerID="5db4e44ccaa87c2bc759cd4f1675756db71d040c2f2e186eaee3506f62437e8d" exitCode=0 Jan 29 08:25:31 crc kubenswrapper[4826]: I0129 08:25:31.373271 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fb45fcfcf-6nf5s" event={"ID":"f112179f-7da7-4633-b2b6-3f8e56358a0d","Type":"ContainerDied","Data":"5db4e44ccaa87c2bc759cd4f1675756db71d040c2f2e186eaee3506f62437e8d"} Jan 29 08:25:31 crc kubenswrapper[4826]: I0129 08:25:31.373491 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fb45fcfcf-6nf5s" event={"ID":"f112179f-7da7-4633-b2b6-3f8e56358a0d","Type":"ContainerDied","Data":"694053eb29eec952010c0150f86ed4a293b92b1fd235630a35da7afeca5fe67f"} Jan 29 08:25:31 crc kubenswrapper[4826]: I0129 08:25:31.373505 4826 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="694053eb29eec952010c0150f86ed4a293b92b1fd235630a35da7afeca5fe67f" Jan 29 08:25:31 crc kubenswrapper[4826]: I0129 08:25:31.384942 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fb45fcfcf-6nf5s" Jan 29 08:25:31 crc kubenswrapper[4826]: I0129 08:25:31.540871 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f112179f-7da7-4633-b2b6-3f8e56358a0d-ovsdbserver-nb\") pod \"f112179f-7da7-4633-b2b6-3f8e56358a0d\" (UID: \"f112179f-7da7-4633-b2b6-3f8e56358a0d\") " Jan 29 08:25:31 crc kubenswrapper[4826]: I0129 08:25:31.541036 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8w6kt\" (UniqueName: \"kubernetes.io/projected/f112179f-7da7-4633-b2b6-3f8e56358a0d-kube-api-access-8w6kt\") pod \"f112179f-7da7-4633-b2b6-3f8e56358a0d\" (UID: \"f112179f-7da7-4633-b2b6-3f8e56358a0d\") " Jan 29 08:25:31 crc kubenswrapper[4826]: I0129 08:25:31.541100 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f112179f-7da7-4633-b2b6-3f8e56358a0d-config\") pod \"f112179f-7da7-4633-b2b6-3f8e56358a0d\" (UID: \"f112179f-7da7-4633-b2b6-3f8e56358a0d\") " Jan 29 08:25:31 crc kubenswrapper[4826]: I0129 08:25:31.541149 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f112179f-7da7-4633-b2b6-3f8e56358a0d-dns-svc\") pod \"f112179f-7da7-4633-b2b6-3f8e56358a0d\" (UID: \"f112179f-7da7-4633-b2b6-3f8e56358a0d\") " Jan 29 08:25:31 crc kubenswrapper[4826]: I0129 08:25:31.541185 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/f112179f-7da7-4633-b2b6-3f8e56358a0d-openstack-cell1\") pod \"f112179f-7da7-4633-b2b6-3f8e56358a0d\" (UID: 
\"f112179f-7da7-4633-b2b6-3f8e56358a0d\") " Jan 29 08:25:31 crc kubenswrapper[4826]: I0129 08:25:31.541388 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f112179f-7da7-4633-b2b6-3f8e56358a0d-ovsdbserver-sb\") pod \"f112179f-7da7-4633-b2b6-3f8e56358a0d\" (UID: \"f112179f-7da7-4633-b2b6-3f8e56358a0d\") " Jan 29 08:25:31 crc kubenswrapper[4826]: I0129 08:25:31.546391 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f112179f-7da7-4633-b2b6-3f8e56358a0d-kube-api-access-8w6kt" (OuterVolumeSpecName: "kube-api-access-8w6kt") pod "f112179f-7da7-4633-b2b6-3f8e56358a0d" (UID: "f112179f-7da7-4633-b2b6-3f8e56358a0d"). InnerVolumeSpecName "kube-api-access-8w6kt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:25:31 crc kubenswrapper[4826]: I0129 08:25:31.587388 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f112179f-7da7-4633-b2b6-3f8e56358a0d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f112179f-7da7-4633-b2b6-3f8e56358a0d" (UID: "f112179f-7da7-4633-b2b6-3f8e56358a0d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:25:31 crc kubenswrapper[4826]: I0129 08:25:31.602103 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f112179f-7da7-4633-b2b6-3f8e56358a0d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f112179f-7da7-4633-b2b6-3f8e56358a0d" (UID: "f112179f-7da7-4633-b2b6-3f8e56358a0d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:25:31 crc kubenswrapper[4826]: I0129 08:25:31.613395 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f112179f-7da7-4633-b2b6-3f8e56358a0d-openstack-cell1" (OuterVolumeSpecName: "openstack-cell1") pod "f112179f-7da7-4633-b2b6-3f8e56358a0d" (UID: "f112179f-7da7-4633-b2b6-3f8e56358a0d"). InnerVolumeSpecName "openstack-cell1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:25:31 crc kubenswrapper[4826]: I0129 08:25:31.622862 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f112179f-7da7-4633-b2b6-3f8e56358a0d-config" (OuterVolumeSpecName: "config") pod "f112179f-7da7-4633-b2b6-3f8e56358a0d" (UID: "f112179f-7da7-4633-b2b6-3f8e56358a0d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:25:31 crc kubenswrapper[4826]: I0129 08:25:31.622949 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f112179f-7da7-4633-b2b6-3f8e56358a0d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f112179f-7da7-4633-b2b6-3f8e56358a0d" (UID: "f112179f-7da7-4633-b2b6-3f8e56358a0d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:25:31 crc kubenswrapper[4826]: I0129 08:25:31.643647 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f112179f-7da7-4633-b2b6-3f8e56358a0d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 08:25:31 crc kubenswrapper[4826]: I0129 08:25:31.643683 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8w6kt\" (UniqueName: \"kubernetes.io/projected/f112179f-7da7-4633-b2b6-3f8e56358a0d-kube-api-access-8w6kt\") on node \"crc\" DevicePath \"\"" Jan 29 08:25:31 crc kubenswrapper[4826]: I0129 08:25:31.643695 4826 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f112179f-7da7-4633-b2b6-3f8e56358a0d-config\") on node \"crc\" DevicePath \"\"" Jan 29 08:25:31 crc kubenswrapper[4826]: I0129 08:25:31.643704 4826 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f112179f-7da7-4633-b2b6-3f8e56358a0d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 08:25:31 crc kubenswrapper[4826]: I0129 08:25:31.643714 4826 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/f112179f-7da7-4633-b2b6-3f8e56358a0d-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 08:25:31 crc kubenswrapper[4826]: I0129 08:25:31.643721 4826 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f112179f-7da7-4633-b2b6-3f8e56358a0d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 08:25:32 crc kubenswrapper[4826]: I0129 08:25:32.381674 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fb45fcfcf-6nf5s" Jan 29 08:25:32 crc kubenswrapper[4826]: I0129 08:25:32.433189 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fb45fcfcf-6nf5s"] Jan 29 08:25:32 crc kubenswrapper[4826]: I0129 08:25:32.453817 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fb45fcfcf-6nf5s"] Jan 29 08:25:32 crc kubenswrapper[4826]: I0129 08:25:32.824430 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f112179f-7da7-4633-b2b6-3f8e56358a0d" path="/var/lib/kubelet/pods/f112179f-7da7-4633-b2b6-3f8e56358a0d/volumes" Jan 29 08:25:40 crc kubenswrapper[4826]: I0129 08:25:40.948419 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjqkcf"] Jan 29 08:25:40 crc kubenswrapper[4826]: E0129 08:25:40.949318 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f112179f-7da7-4633-b2b6-3f8e56358a0d" containerName="dnsmasq-dns" Jan 29 08:25:40 crc kubenswrapper[4826]: I0129 08:25:40.949333 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f112179f-7da7-4633-b2b6-3f8e56358a0d" containerName="dnsmasq-dns" Jan 29 08:25:40 crc kubenswrapper[4826]: E0129 08:25:40.949349 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08eec373-3907-4e7f-999e-10bec4fa374d" containerName="init" Jan 29 08:25:40 crc kubenswrapper[4826]: I0129 08:25:40.949355 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="08eec373-3907-4e7f-999e-10bec4fa374d" containerName="init" Jan 29 08:25:40 crc kubenswrapper[4826]: E0129 08:25:40.949376 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08eec373-3907-4e7f-999e-10bec4fa374d" containerName="dnsmasq-dns" Jan 29 08:25:40 crc kubenswrapper[4826]: I0129 08:25:40.949382 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="08eec373-3907-4e7f-999e-10bec4fa374d" containerName="dnsmasq-dns" 
Jan 29 08:25:40 crc kubenswrapper[4826]: E0129 08:25:40.949402 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f112179f-7da7-4633-b2b6-3f8e56358a0d" containerName="init" Jan 29 08:25:40 crc kubenswrapper[4826]: I0129 08:25:40.949408 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="f112179f-7da7-4633-b2b6-3f8e56358a0d" containerName="init" Jan 29 08:25:40 crc kubenswrapper[4826]: I0129 08:25:40.949577 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="08eec373-3907-4e7f-999e-10bec4fa374d" containerName="dnsmasq-dns" Jan 29 08:25:40 crc kubenswrapper[4826]: I0129 08:25:40.949597 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="f112179f-7da7-4633-b2b6-3f8e56358a0d" containerName="dnsmasq-dns" Jan 29 08:25:40 crc kubenswrapper[4826]: I0129 08:25:40.950244 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjqkcf" Jan 29 08:25:40 crc kubenswrapper[4826]: I0129 08:25:40.953488 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 29 08:25:40 crc kubenswrapper[4826]: I0129 08:25:40.953484 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 08:25:40 crc kubenswrapper[4826]: I0129 08:25:40.954339 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 29 08:25:40 crc kubenswrapper[4826]: I0129 08:25:40.954354 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-bz2p6" Jan 29 08:25:40 crc kubenswrapper[4826]: I0129 08:25:40.977382 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjqkcf"] Jan 29 08:25:41 crc kubenswrapper[4826]: I0129 08:25:41.073085 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cjqkcf\" (UID: \"0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjqkcf" Jan 29 08:25:41 crc kubenswrapper[4826]: I0129 08:25:41.073252 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cjqkcf\" (UID: \"0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjqkcf" Jan 29 08:25:41 crc kubenswrapper[4826]: I0129 08:25:41.073597 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cjqkcf\" (UID: \"0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjqkcf" Jan 29 08:25:41 crc kubenswrapper[4826]: I0129 08:25:41.073641 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc8hz\" (UniqueName: \"kubernetes.io/projected/0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9-kube-api-access-xc8hz\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cjqkcf\" (UID: \"0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjqkcf" Jan 29 08:25:41 crc kubenswrapper[4826]: I0129 08:25:41.076419 4826 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/glance-db-sync-8pgv6"] Jan 29 08:25:41 crc kubenswrapper[4826]: I0129 08:25:41.087847 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-8pgv6"] Jan 29 08:25:41 crc kubenswrapper[4826]: I0129 08:25:41.175111 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cjqkcf\" (UID: \"0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjqkcf" Jan 29 08:25:41 crc kubenswrapper[4826]: I0129 08:25:41.175211 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cjqkcf\" (UID: \"0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjqkcf" Jan 29 08:25:41 crc kubenswrapper[4826]: I0129 08:25:41.175309 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cjqkcf\" (UID: \"0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjqkcf" Jan 29 08:25:41 crc kubenswrapper[4826]: I0129 08:25:41.175338 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc8hz\" (UniqueName: \"kubernetes.io/projected/0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9-kube-api-access-xc8hz\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cjqkcf\" (UID: 
\"0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjqkcf" Jan 29 08:25:41 crc kubenswrapper[4826]: I0129 08:25:41.182928 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cjqkcf\" (UID: \"0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjqkcf" Jan 29 08:25:41 crc kubenswrapper[4826]: I0129 08:25:41.184272 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cjqkcf\" (UID: \"0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjqkcf" Jan 29 08:25:41 crc kubenswrapper[4826]: I0129 08:25:41.186786 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cjqkcf\" (UID: \"0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjqkcf" Jan 29 08:25:41 crc kubenswrapper[4826]: I0129 08:25:41.198471 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc8hz\" (UniqueName: \"kubernetes.io/projected/0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9-kube-api-access-xc8hz\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cjqkcf\" (UID: \"0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjqkcf" 
Jan 29 08:25:41 crc kubenswrapper[4826]: I0129 08:25:41.285966 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjqkcf" Jan 29 08:25:41 crc kubenswrapper[4826]: I0129 08:25:41.809558 4826 scope.go:117] "RemoveContainer" containerID="952b43d896912a3cda5e612a6c0f46d793202eff6352f27fa018ee360258c570" Jan 29 08:25:41 crc kubenswrapper[4826]: E0129 08:25:41.810100 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:25:42 crc kubenswrapper[4826]: I0129 08:25:42.023043 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjqkcf"] Jan 29 08:25:42 crc kubenswrapper[4826]: I0129 08:25:42.506944 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjqkcf" event={"ID":"0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9","Type":"ContainerStarted","Data":"c88c35269654350af94fad37445e99129e7bdad69048d0b20ab19db81bfd7a2b"} Jan 29 08:25:42 crc kubenswrapper[4826]: I0129 08:25:42.824346 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ae68fcf-64c2-4107-b160-7466430edc38" path="/var/lib/kubelet/pods/4ae68fcf-64c2-4107-b160-7466430edc38/volumes" Jan 29 08:25:52 crc kubenswrapper[4826]: I0129 08:25:52.605714 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjqkcf" 
event={"ID":"0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9","Type":"ContainerStarted","Data":"48c9da1bc3d34380911faf96a9b19d377e0474ab95f63d7370d48ccdd9164dae"} Jan 29 08:25:52 crc kubenswrapper[4826]: I0129 08:25:52.636453 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjqkcf" podStartSLOduration=2.831837115 podStartE2EDuration="12.636434637s" podCreationTimestamp="2026-01-29 08:25:40 +0000 UTC" firstStartedPulling="2026-01-29 08:25:42.030879344 +0000 UTC m=+6125.892672433" lastFinishedPulling="2026-01-29 08:25:51.835476886 +0000 UTC m=+6135.697269955" observedRunningTime="2026-01-29 08:25:52.629271668 +0000 UTC m=+6136.491064747" watchObservedRunningTime="2026-01-29 08:25:52.636434637 +0000 UTC m=+6136.498227706" Jan 29 08:25:54 crc kubenswrapper[4826]: I0129 08:25:54.809432 4826 scope.go:117] "RemoveContainer" containerID="952b43d896912a3cda5e612a6c0f46d793202eff6352f27fa018ee360258c570" Jan 29 08:25:54 crc kubenswrapper[4826]: E0129 08:25:54.810494 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:26:05 crc kubenswrapper[4826]: I0129 08:26:05.764363 4826 generic.go:334] "Generic (PLEG): container finished" podID="0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9" containerID="48c9da1bc3d34380911faf96a9b19d377e0474ab95f63d7370d48ccdd9164dae" exitCode=0 Jan 29 08:26:05 crc kubenswrapper[4826]: I0129 08:26:05.764455 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjqkcf" 
event={"ID":"0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9","Type":"ContainerDied","Data":"48c9da1bc3d34380911faf96a9b19d377e0474ab95f63d7370d48ccdd9164dae"} Jan 29 08:26:07 crc kubenswrapper[4826]: I0129 08:26:07.288884 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjqkcf" Jan 29 08:26:07 crc kubenswrapper[4826]: I0129 08:26:07.374333 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9-pre-adoption-validation-combined-ca-bundle\") pod \"0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9\" (UID: \"0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9\") " Jan 29 08:26:07 crc kubenswrapper[4826]: I0129 08:26:07.374532 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9-inventory\") pod \"0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9\" (UID: \"0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9\") " Jan 29 08:26:07 crc kubenswrapper[4826]: I0129 08:26:07.374698 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xc8hz\" (UniqueName: \"kubernetes.io/projected/0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9-kube-api-access-xc8hz\") pod \"0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9\" (UID: \"0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9\") " Jan 29 08:26:07 crc kubenswrapper[4826]: I0129 08:26:07.374806 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9-ssh-key-openstack-cell1\") pod \"0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9\" (UID: \"0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9\") " Jan 29 08:26:07 crc kubenswrapper[4826]: I0129 08:26:07.379684 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9-pre-adoption-validation-combined-ca-bundle" (OuterVolumeSpecName: "pre-adoption-validation-combined-ca-bundle") pod "0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9" (UID: "0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9"). InnerVolumeSpecName "pre-adoption-validation-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:26:07 crc kubenswrapper[4826]: I0129 08:26:07.383763 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9-kube-api-access-xc8hz" (OuterVolumeSpecName: "kube-api-access-xc8hz") pod "0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9" (UID: "0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9"). InnerVolumeSpecName "kube-api-access-xc8hz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:26:07 crc kubenswrapper[4826]: I0129 08:26:07.407629 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9" (UID: "0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:26:07 crc kubenswrapper[4826]: I0129 08:26:07.419503 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9-inventory" (OuterVolumeSpecName: "inventory") pod "0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9" (UID: "0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:26:07 crc kubenswrapper[4826]: I0129 08:26:07.477568 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xc8hz\" (UniqueName: \"kubernetes.io/projected/0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9-kube-api-access-xc8hz\") on node \"crc\" DevicePath \"\"" Jan 29 08:26:07 crc kubenswrapper[4826]: I0129 08:26:07.477607 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 08:26:07 crc kubenswrapper[4826]: I0129 08:26:07.477617 4826 reconciler_common.go:293] "Volume detached for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9-pre-adoption-validation-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:26:07 crc kubenswrapper[4826]: I0129 08:26:07.477628 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 08:26:07 crc kubenswrapper[4826]: I0129 08:26:07.796802 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjqkcf" Jan 29 08:26:07 crc kubenswrapper[4826]: I0129 08:26:07.799922 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cjqkcf" event={"ID":"0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9","Type":"ContainerDied","Data":"c88c35269654350af94fad37445e99129e7bdad69048d0b20ab19db81bfd7a2b"} Jan 29 08:26:07 crc kubenswrapper[4826]: I0129 08:26:07.799987 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c88c35269654350af94fad37445e99129e7bdad69048d0b20ab19db81bfd7a2b" Jan 29 08:26:07 crc kubenswrapper[4826]: I0129 08:26:07.809602 4826 scope.go:117] "RemoveContainer" containerID="952b43d896912a3cda5e612a6c0f46d793202eff6352f27fa018ee360258c570" Jan 29 08:26:07 crc kubenswrapper[4826]: E0129 08:26:07.809969 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:26:12 crc kubenswrapper[4826]: I0129 08:26:12.054043 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6bd0-account-create-update-7mlzp"] Jan 29 08:26:12 crc kubenswrapper[4826]: I0129 08:26:12.063520 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-z6vw2"] Jan 29 08:26:12 crc kubenswrapper[4826]: I0129 08:26:12.076931 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-z6vw2"] Jan 29 08:26:12 crc kubenswrapper[4826]: I0129 08:26:12.087607 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/placement-6bd0-account-create-update-7mlzp"] Jan 29 08:26:12 crc kubenswrapper[4826]: I0129 08:26:12.828875 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6ea4e54-e85e-436c-ba93-ce6c994607be" path="/var/lib/kubelet/pods/a6ea4e54-e85e-436c-ba93-ce6c994607be/volumes" Jan 29 08:26:12 crc kubenswrapper[4826]: I0129 08:26:12.837281 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fac1cc5b-8679-46db-8577-0055cf77495f" path="/var/lib/kubelet/pods/fac1cc5b-8679-46db-8577-0055cf77495f/volumes" Jan 29 08:26:13 crc kubenswrapper[4826]: I0129 08:26:13.697193 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-jcfjp"] Jan 29 08:26:13 crc kubenswrapper[4826]: E0129 08:26:13.697847 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Jan 29 08:26:13 crc kubenswrapper[4826]: I0129 08:26:13.697872 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Jan 29 08:26:13 crc kubenswrapper[4826]: I0129 08:26:13.698207 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Jan 29 08:26:13 crc kubenswrapper[4826]: I0129 08:26:13.699374 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-jcfjp" Jan 29 08:26:13 crc kubenswrapper[4826]: I0129 08:26:13.701676 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 29 08:26:13 crc kubenswrapper[4826]: I0129 08:26:13.701729 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 08:26:13 crc kubenswrapper[4826]: I0129 08:26:13.702102 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-bz2p6" Jan 29 08:26:13 crc kubenswrapper[4826]: I0129 08:26:13.702341 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 29 08:26:13 crc kubenswrapper[4826]: I0129 08:26:13.725895 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-jcfjp"] Jan 29 08:26:13 crc kubenswrapper[4826]: I0129 08:26:13.860657 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7145c04-cf5e-43b3-8934-0c6397272bb2-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-jcfjp\" (UID: \"c7145c04-cf5e-43b3-8934-0c6397272bb2\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-jcfjp" Jan 29 08:26:13 crc kubenswrapper[4826]: I0129 08:26:13.860713 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slnnj\" (UniqueName: \"kubernetes.io/projected/c7145c04-cf5e-43b3-8934-0c6397272bb2-kube-api-access-slnnj\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-jcfjp\" (UID: \"c7145c04-cf5e-43b3-8934-0c6397272bb2\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-jcfjp" Jan 29 08:26:13 crc kubenswrapper[4826]: I0129 08:26:13.860939 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7145c04-cf5e-43b3-8934-0c6397272bb2-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-jcfjp\" (UID: \"c7145c04-cf5e-43b3-8934-0c6397272bb2\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-jcfjp" Jan 29 08:26:13 crc kubenswrapper[4826]: I0129 08:26:13.861123 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c7145c04-cf5e-43b3-8934-0c6397272bb2-ssh-key-openstack-cell1\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-jcfjp\" (UID: \"c7145c04-cf5e-43b3-8934-0c6397272bb2\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-jcfjp" Jan 29 08:26:13 crc kubenswrapper[4826]: I0129 08:26:13.963885 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7145c04-cf5e-43b3-8934-0c6397272bb2-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-jcfjp\" (UID: \"c7145c04-cf5e-43b3-8934-0c6397272bb2\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-jcfjp" Jan 29 08:26:13 crc kubenswrapper[4826]: I0129 08:26:13.963975 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c7145c04-cf5e-43b3-8934-0c6397272bb2-ssh-key-openstack-cell1\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-jcfjp\" (UID: \"c7145c04-cf5e-43b3-8934-0c6397272bb2\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-jcfjp" Jan 29 08:26:13 crc kubenswrapper[4826]: I0129 08:26:13.964168 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/c7145c04-cf5e-43b3-8934-0c6397272bb2-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-jcfjp\" (UID: \"c7145c04-cf5e-43b3-8934-0c6397272bb2\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-jcfjp" Jan 29 08:26:13 crc kubenswrapper[4826]: I0129 08:26:13.964221 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slnnj\" (UniqueName: \"kubernetes.io/projected/c7145c04-cf5e-43b3-8934-0c6397272bb2-kube-api-access-slnnj\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-jcfjp\" (UID: \"c7145c04-cf5e-43b3-8934-0c6397272bb2\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-jcfjp" Jan 29 08:26:13 crc kubenswrapper[4826]: I0129 08:26:13.973993 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c7145c04-cf5e-43b3-8934-0c6397272bb2-ssh-key-openstack-cell1\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-jcfjp\" (UID: \"c7145c04-cf5e-43b3-8934-0c6397272bb2\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-jcfjp" Jan 29 08:26:13 crc kubenswrapper[4826]: I0129 08:26:13.977750 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7145c04-cf5e-43b3-8934-0c6397272bb2-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-jcfjp\" (UID: \"c7145c04-cf5e-43b3-8934-0c6397272bb2\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-jcfjp" Jan 29 08:26:13 crc kubenswrapper[4826]: I0129 08:26:13.984966 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7145c04-cf5e-43b3-8934-0c6397272bb2-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-jcfjp\" (UID: \"c7145c04-cf5e-43b3-8934-0c6397272bb2\") " 
pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-jcfjp" Jan 29 08:26:13 crc kubenswrapper[4826]: I0129 08:26:13.991529 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slnnj\" (UniqueName: \"kubernetes.io/projected/c7145c04-cf5e-43b3-8934-0c6397272bb2-kube-api-access-slnnj\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-jcfjp\" (UID: \"c7145c04-cf5e-43b3-8934-0c6397272bb2\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-jcfjp" Jan 29 08:26:14 crc kubenswrapper[4826]: I0129 08:26:14.016500 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-jcfjp" Jan 29 08:26:14 crc kubenswrapper[4826]: I0129 08:26:14.598050 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-jcfjp"] Jan 29 08:26:14 crc kubenswrapper[4826]: I0129 08:26:14.889266 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-jcfjp" event={"ID":"c7145c04-cf5e-43b3-8934-0c6397272bb2","Type":"ContainerStarted","Data":"0fd1c4ce5d5f64b7dc62e3965e6ee6d800e7e543ce86eacd920e3d8242085758"} Jan 29 08:26:16 crc kubenswrapper[4826]: I0129 08:26:16.945113 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-jcfjp" event={"ID":"c7145c04-cf5e-43b3-8934-0c6397272bb2","Type":"ContainerStarted","Data":"031b307655fb8f425192db521ae4dd064a412e1b223a44b0ea52279e97298f9e"} Jan 29 08:26:16 crc kubenswrapper[4826]: I0129 08:26:16.973631 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-jcfjp" podStartSLOduration=3.3551517029999998 podStartE2EDuration="3.973612859s" podCreationTimestamp="2026-01-29 08:26:13 +0000 UTC" firstStartedPulling="2026-01-29 08:26:14.607053672 +0000 UTC 
m=+6158.468846741" lastFinishedPulling="2026-01-29 08:26:15.225514808 +0000 UTC m=+6159.087307897" observedRunningTime="2026-01-29 08:26:16.963627196 +0000 UTC m=+6160.825420265" watchObservedRunningTime="2026-01-29 08:26:16.973612859 +0000 UTC m=+6160.835405928" Jan 29 08:26:22 crc kubenswrapper[4826]: I0129 08:26:22.809401 4826 scope.go:117] "RemoveContainer" containerID="952b43d896912a3cda5e612a6c0f46d793202eff6352f27fa018ee360258c570" Jan 29 08:26:22 crc kubenswrapper[4826]: E0129 08:26:22.810310 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:26:22 crc kubenswrapper[4826]: I0129 08:26:22.977550 4826 scope.go:117] "RemoveContainer" containerID="c34e3abd6de1089b2c045b3d2fbd5203cdfe9efb3fbd46be08700b5a7867a4ad" Jan 29 08:26:23 crc kubenswrapper[4826]: I0129 08:26:23.011414 4826 scope.go:117] "RemoveContainer" containerID="89a0c2239b7b5bf0d60734d143951cfc17acf51a6bf83924bef20a68706177d1" Jan 29 08:26:23 crc kubenswrapper[4826]: I0129 08:26:23.076785 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-ncbgj"] Jan 29 08:26:23 crc kubenswrapper[4826]: I0129 08:26:23.085184 4826 scope.go:117] "RemoveContainer" containerID="3bc120af49c3b3e7f315fce86c7844d79f69df43166c7937deb247993a4bcf47" Jan 29 08:26:23 crc kubenswrapper[4826]: I0129 08:26:23.090390 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-ncbgj"] Jan 29 08:26:24 crc kubenswrapper[4826]: I0129 08:26:24.822468 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c39a8b1f-3356-4105-96ad-529ee04ca228" 
path="/var/lib/kubelet/pods/c39a8b1f-3356-4105-96ad-529ee04ca228/volumes" Jan 29 08:26:37 crc kubenswrapper[4826]: I0129 08:26:37.859620 4826 scope.go:117] "RemoveContainer" containerID="952b43d896912a3cda5e612a6c0f46d793202eff6352f27fa018ee360258c570" Jan 29 08:26:39 crc kubenswrapper[4826]: I0129 08:26:39.235997 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerStarted","Data":"42007fe8252651c322800cb3362afa885bf869996e754d5e5eded1c4cfbce3ac"} Jan 29 08:27:22 crc kubenswrapper[4826]: I0129 08:27:22.051637 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-sdmv5"] Jan 29 08:27:22 crc kubenswrapper[4826]: I0129 08:27:22.061842 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-be2d-account-create-update-kwqfd"] Jan 29 08:27:22 crc kubenswrapper[4826]: I0129 08:27:22.073859 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-f0a9-account-create-update-xmpzq"] Jan 29 08:27:22 crc kubenswrapper[4826]: I0129 08:27:22.096953 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-16c1-account-create-update-vwfhw"] Jan 29 08:27:22 crc kubenswrapper[4826]: I0129 08:27:22.109715 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-s4sks"] Jan 29 08:27:22 crc kubenswrapper[4826]: I0129 08:27:22.118522 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-nmz88"] Jan 29 08:27:22 crc kubenswrapper[4826]: I0129 08:27:22.126253 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-sdmv5"] Jan 29 08:27:22 crc kubenswrapper[4826]: I0129 08:27:22.138309 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-s4sks"] Jan 29 08:27:22 crc kubenswrapper[4826]: I0129 08:27:22.145957 4826 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-nmz88"] Jan 29 08:27:22 crc kubenswrapper[4826]: I0129 08:27:22.155527 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-be2d-account-create-update-kwqfd"] Jan 29 08:27:22 crc kubenswrapper[4826]: I0129 08:27:22.163638 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-16c1-account-create-update-vwfhw"] Jan 29 08:27:22 crc kubenswrapper[4826]: I0129 08:27:22.170909 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-f0a9-account-create-update-xmpzq"] Jan 29 08:27:22 crc kubenswrapper[4826]: I0129 08:27:22.824629 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="066e5538-568a-4d24-a574-14c40f353fe3" path="/var/lib/kubelet/pods/066e5538-568a-4d24-a574-14c40f353fe3/volumes" Jan 29 08:27:22 crc kubenswrapper[4826]: I0129 08:27:22.826023 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ba56186-f1c3-4c15-ab42-ae984dd43507" path="/var/lib/kubelet/pods/2ba56186-f1c3-4c15-ab42-ae984dd43507/volumes" Jan 29 08:27:22 crc kubenswrapper[4826]: I0129 08:27:22.827351 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="319f3f29-51d6-4821-a56b-9178f703d9a3" path="/var/lib/kubelet/pods/319f3f29-51d6-4821-a56b-9178f703d9a3/volumes" Jan 29 08:27:22 crc kubenswrapper[4826]: I0129 08:27:22.828542 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fc0f8f9-c400-47f2-91b9-9a77da288710" path="/var/lib/kubelet/pods/3fc0f8f9-c400-47f2-91b9-9a77da288710/volumes" Jan 29 08:27:22 crc kubenswrapper[4826]: I0129 08:27:22.830365 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a408905f-ba5e-4280-bf59-6e820f34a8cb" path="/var/lib/kubelet/pods/a408905f-ba5e-4280-bf59-6e820f34a8cb/volumes" Jan 29 08:27:22 crc kubenswrapper[4826]: I0129 08:27:22.831557 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="d8da5547-d767-4dc6-96bd-985b44f4743d" path="/var/lib/kubelet/pods/d8da5547-d767-4dc6-96bd-985b44f4743d/volumes" Jan 29 08:27:23 crc kubenswrapper[4826]: I0129 08:27:23.195249 4826 scope.go:117] "RemoveContainer" containerID="e4f9d0f86ec6defef588b5c84d44ba9afce5bcecf9ea76dbb9be45ff0b723cfb" Jan 29 08:27:23 crc kubenswrapper[4826]: I0129 08:27:23.219105 4826 scope.go:117] "RemoveContainer" containerID="fb752a604a67acd118576aa608891f3d94978c22b0300712d4025c7496fcd2df" Jan 29 08:27:23 crc kubenswrapper[4826]: I0129 08:27:23.294277 4826 scope.go:117] "RemoveContainer" containerID="67f9c55a291f27069cc16fe5085407ab17430634982c82d41d4a8a98b32a3ee3" Jan 29 08:27:23 crc kubenswrapper[4826]: I0129 08:27:23.346790 4826 scope.go:117] "RemoveContainer" containerID="980d041bb735f9039019e759638d426de8239aa7447a01ccf767cf2a5058daa8" Jan 29 08:27:23 crc kubenswrapper[4826]: I0129 08:27:23.383723 4826 scope.go:117] "RemoveContainer" containerID="641eb95a5e74910a1ff4d369cbdb325fe9a7393dd0f037fa7619dd878b9547d0" Jan 29 08:27:23 crc kubenswrapper[4826]: I0129 08:27:23.431519 4826 scope.go:117] "RemoveContainer" containerID="77df29119a7b651a069843831b09b69d6d4e4beb8400878aff1f47cedda2f148" Jan 29 08:27:23 crc kubenswrapper[4826]: I0129 08:27:23.484869 4826 scope.go:117] "RemoveContainer" containerID="ba5249bfe67f8817ad367ba80f43570a10bbdc3502bc4bb4fce5e43abc79165a" Jan 29 08:27:39 crc kubenswrapper[4826]: I0129 08:27:39.036711 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dp85l"] Jan 29 08:27:39 crc kubenswrapper[4826]: I0129 08:27:39.047930 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dp85l"] Jan 29 08:27:40 crc kubenswrapper[4826]: I0129 08:27:40.826811 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="231c03c2-0a51-4052-9d72-3a5c1526181b" path="/var/lib/kubelet/pods/231c03c2-0a51-4052-9d72-3a5c1526181b/volumes" Jan 29 08:27:53 crc 
kubenswrapper[4826]: I0129 08:27:53.057345 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5xxjk"] Jan 29 08:27:53 crc kubenswrapper[4826]: I0129 08:27:53.079019 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5xxjk"] Jan 29 08:27:54 crc kubenswrapper[4826]: I0129 08:27:54.029891 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-7lbbq"] Jan 29 08:27:54 crc kubenswrapper[4826]: I0129 08:27:54.038115 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-7lbbq"] Jan 29 08:27:54 crc kubenswrapper[4826]: I0129 08:27:54.829511 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a397374-869e-4bb1-9432-5d96bb65411a" path="/var/lib/kubelet/pods/8a397374-869e-4bb1-9432-5d96bb65411a/volumes" Jan 29 08:27:54 crc kubenswrapper[4826]: I0129 08:27:54.832403 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c22f0edc-d56e-4aaa-b9f4-ec8394a3d880" path="/var/lib/kubelet/pods/c22f0edc-d56e-4aaa-b9f4-ec8394a3d880/volumes" Jan 29 08:28:23 crc kubenswrapper[4826]: I0129 08:28:23.617880 4826 scope.go:117] "RemoveContainer" containerID="dfced1e3782e9d65b1311526b5792aa211e33e29061a920f972cb37dcd9c47ac" Jan 29 08:28:23 crc kubenswrapper[4826]: I0129 08:28:23.673574 4826 scope.go:117] "RemoveContainer" containerID="e313c31e9558789bce757f84d738a188c40a80f3634f1374e57eae6ff170da1e" Jan 29 08:28:23 crc kubenswrapper[4826]: I0129 08:28:23.745579 4826 scope.go:117] "RemoveContainer" containerID="7c8e90e4b7f41076b58b56a42de98d31922925f762b84a4367852a3f63a6ae11" Jan 29 08:28:41 crc kubenswrapper[4826]: I0129 08:28:41.049422 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-zw526"] Jan 29 08:28:41 crc kubenswrapper[4826]: I0129 08:28:41.058532 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell1-cell-mapping-zw526"] Jan 29 08:28:42 crc kubenswrapper[4826]: I0129 08:28:42.821878 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d64cc12e-e9ea-48e4-9f37-525fe27b1b6f" path="/var/lib/kubelet/pods/d64cc12e-e9ea-48e4-9f37-525fe27b1b6f/volumes" Jan 29 08:29:05 crc kubenswrapper[4826]: I0129 08:29:05.656389 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:29:05 crc kubenswrapper[4826]: I0129 08:29:05.657020 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:29:23 crc kubenswrapper[4826]: I0129 08:29:23.875935 4826 scope.go:117] "RemoveContainer" containerID="8bea7c921d13e8f6e379a3ad4ea2542f1fb47be5f9c620480efe08546287f6eb" Jan 29 08:29:35 crc kubenswrapper[4826]: I0129 08:29:35.658237 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:29:35 crc kubenswrapper[4826]: I0129 08:29:35.658784 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:30:00 crc 
kubenswrapper[4826]: I0129 08:30:00.178432 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494590-99cmn"] Jan 29 08:30:00 crc kubenswrapper[4826]: I0129 08:30:00.180635 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-99cmn" Jan 29 08:30:00 crc kubenswrapper[4826]: I0129 08:30:00.183783 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 08:30:00 crc kubenswrapper[4826]: I0129 08:30:00.183984 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 08:30:00 crc kubenswrapper[4826]: I0129 08:30:00.196031 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494590-99cmn"] Jan 29 08:30:00 crc kubenswrapper[4826]: I0129 08:30:00.242641 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773-secret-volume\") pod \"collect-profiles-29494590-99cmn\" (UID: \"1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-99cmn" Jan 29 08:30:00 crc kubenswrapper[4826]: I0129 08:30:00.242796 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773-config-volume\") pod \"collect-profiles-29494590-99cmn\" (UID: \"1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-99cmn" Jan 29 08:30:00 crc kubenswrapper[4826]: I0129 08:30:00.242915 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-4jckx\" (UniqueName: \"kubernetes.io/projected/1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773-kube-api-access-4jckx\") pod \"collect-profiles-29494590-99cmn\" (UID: \"1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-99cmn" Jan 29 08:30:00 crc kubenswrapper[4826]: I0129 08:30:00.345114 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jckx\" (UniqueName: \"kubernetes.io/projected/1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773-kube-api-access-4jckx\") pod \"collect-profiles-29494590-99cmn\" (UID: \"1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-99cmn" Jan 29 08:30:00 crc kubenswrapper[4826]: I0129 08:30:00.345224 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773-secret-volume\") pod \"collect-profiles-29494590-99cmn\" (UID: \"1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-99cmn" Jan 29 08:30:00 crc kubenswrapper[4826]: I0129 08:30:00.345353 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773-config-volume\") pod \"collect-profiles-29494590-99cmn\" (UID: \"1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-99cmn" Jan 29 08:30:00 crc kubenswrapper[4826]: I0129 08:30:00.346705 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773-config-volume\") pod \"collect-profiles-29494590-99cmn\" (UID: \"1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-99cmn" Jan 
29 08:30:00 crc kubenswrapper[4826]: I0129 08:30:00.366732 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773-secret-volume\") pod \"collect-profiles-29494590-99cmn\" (UID: \"1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-99cmn" Jan 29 08:30:00 crc kubenswrapper[4826]: I0129 08:30:00.368882 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jckx\" (UniqueName: \"kubernetes.io/projected/1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773-kube-api-access-4jckx\") pod \"collect-profiles-29494590-99cmn\" (UID: \"1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-99cmn" Jan 29 08:30:00 crc kubenswrapper[4826]: I0129 08:30:00.510882 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-99cmn" Jan 29 08:30:01 crc kubenswrapper[4826]: I0129 08:30:01.010331 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494590-99cmn"] Jan 29 08:30:01 crc kubenswrapper[4826]: I0129 08:30:01.525947 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-99cmn" event={"ID":"1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773","Type":"ContainerStarted","Data":"4092bae3e04c42d6237216faa6b767404ae55a635d96baba88ed833c299c5a98"} Jan 29 08:30:01 crc kubenswrapper[4826]: I0129 08:30:01.526196 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-99cmn" event={"ID":"1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773","Type":"ContainerStarted","Data":"15927c26e4136b60ecd1243d8f7327e4b50afa548bcc4c62a9fe3c5111809d48"} Jan 29 08:30:01 crc kubenswrapper[4826]: I0129 08:30:01.543941 4826 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-99cmn" podStartSLOduration=1.5439268400000001 podStartE2EDuration="1.54392684s" podCreationTimestamp="2026-01-29 08:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:30:01.541205398 +0000 UTC m=+6385.402998467" watchObservedRunningTime="2026-01-29 08:30:01.54392684 +0000 UTC m=+6385.405719909" Jan 29 08:30:02 crc kubenswrapper[4826]: I0129 08:30:02.551087 4826 generic.go:334] "Generic (PLEG): container finished" podID="1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773" containerID="4092bae3e04c42d6237216faa6b767404ae55a635d96baba88ed833c299c5a98" exitCode=0 Jan 29 08:30:02 crc kubenswrapper[4826]: I0129 08:30:02.551165 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-99cmn" event={"ID":"1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773","Type":"ContainerDied","Data":"4092bae3e04c42d6237216faa6b767404ae55a635d96baba88ed833c299c5a98"} Jan 29 08:30:04 crc kubenswrapper[4826]: I0129 08:30:04.004355 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-99cmn" Jan 29 08:30:04 crc kubenswrapper[4826]: I0129 08:30:04.140734 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jckx\" (UniqueName: \"kubernetes.io/projected/1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773-kube-api-access-4jckx\") pod \"1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773\" (UID: \"1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773\") " Jan 29 08:30:04 crc kubenswrapper[4826]: I0129 08:30:04.140899 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773-secret-volume\") pod \"1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773\" (UID: \"1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773\") " Jan 29 08:30:04 crc kubenswrapper[4826]: I0129 08:30:04.141125 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773-config-volume\") pod \"1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773\" (UID: \"1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773\") " Jan 29 08:30:04 crc kubenswrapper[4826]: I0129 08:30:04.141890 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773-config-volume" (OuterVolumeSpecName: "config-volume") pod "1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773" (UID: "1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:30:04 crc kubenswrapper[4826]: I0129 08:30:04.148133 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773-kube-api-access-4jckx" (OuterVolumeSpecName: "kube-api-access-4jckx") pod "1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773" (UID: "1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773"). 
InnerVolumeSpecName "kube-api-access-4jckx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:30:04 crc kubenswrapper[4826]: I0129 08:30:04.148521 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773" (UID: "1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:30:04 crc kubenswrapper[4826]: I0129 08:30:04.244397 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jckx\" (UniqueName: \"kubernetes.io/projected/1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773-kube-api-access-4jckx\") on node \"crc\" DevicePath \"\"" Jan 29 08:30:04 crc kubenswrapper[4826]: I0129 08:30:04.244452 4826 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 08:30:04 crc kubenswrapper[4826]: I0129 08:30:04.244472 4826 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 08:30:04 crc kubenswrapper[4826]: I0129 08:30:04.580984 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-99cmn" event={"ID":"1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773","Type":"ContainerDied","Data":"15927c26e4136b60ecd1243d8f7327e4b50afa548bcc4c62a9fe3c5111809d48"} Jan 29 08:30:04 crc kubenswrapper[4826]: I0129 08:30:04.581030 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15927c26e4136b60ecd1243d8f7327e4b50afa548bcc4c62a9fe3c5111809d48" Jan 29 08:30:04 crc kubenswrapper[4826]: I0129 08:30:04.581091 4826 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494590-99cmn" Jan 29 08:30:04 crc kubenswrapper[4826]: I0129 08:30:04.635947 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494545-wk5wl"] Jan 29 08:30:04 crc kubenswrapper[4826]: I0129 08:30:04.646503 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494545-wk5wl"] Jan 29 08:30:04 crc kubenswrapper[4826]: I0129 08:30:04.822540 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a9d9073-69e4-4d4b-92cc-da505d00c9f8" path="/var/lib/kubelet/pods/7a9d9073-69e4-4d4b-92cc-da505d00c9f8/volumes" Jan 29 08:30:05 crc kubenswrapper[4826]: I0129 08:30:05.658986 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:30:05 crc kubenswrapper[4826]: I0129 08:30:05.659412 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:30:05 crc kubenswrapper[4826]: I0129 08:30:05.659473 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" Jan 29 08:30:05 crc kubenswrapper[4826]: I0129 08:30:05.660657 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"42007fe8252651c322800cb3362afa885bf869996e754d5e5eded1c4cfbce3ac"} 
pod="openshift-machine-config-operator/machine-config-daemon-llzmh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 08:30:05 crc kubenswrapper[4826]: I0129 08:30:05.660799 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" containerID="cri-o://42007fe8252651c322800cb3362afa885bf869996e754d5e5eded1c4cfbce3ac" gracePeriod=600 Jan 29 08:30:06 crc kubenswrapper[4826]: I0129 08:30:06.608434 4826 generic.go:334] "Generic (PLEG): container finished" podID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerID="42007fe8252651c322800cb3362afa885bf869996e754d5e5eded1c4cfbce3ac" exitCode=0 Jan 29 08:30:06 crc kubenswrapper[4826]: I0129 08:30:06.608503 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerDied","Data":"42007fe8252651c322800cb3362afa885bf869996e754d5e5eded1c4cfbce3ac"} Jan 29 08:30:06 crc kubenswrapper[4826]: I0129 08:30:06.608551 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerStarted","Data":"725f4aa4f9ef4129cc44e12f5f1026f8860669230135655f272ac1666d528dcf"} Jan 29 08:30:06 crc kubenswrapper[4826]: I0129 08:30:06.608581 4826 scope.go:117] "RemoveContainer" containerID="952b43d896912a3cda5e612a6c0f46d793202eff6352f27fa018ee360258c570" Jan 29 08:30:23 crc kubenswrapper[4826]: I0129 08:30:23.979798 4826 scope.go:117] "RemoveContainer" containerID="0dfe868933abf2f50e60ac2d60fd97f504e8848de44102585ebb7bd454da9c19" Jan 29 08:31:19 crc kubenswrapper[4826]: I0129 08:31:19.040247 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-7smxw"] Jan 
29 08:31:19 crc kubenswrapper[4826]: I0129 08:31:19.050634 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-4d83-account-create-update-wmdrs"]
Jan 29 08:31:19 crc kubenswrapper[4826]: I0129 08:31:19.059753 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-4d83-account-create-update-wmdrs"]
Jan 29 08:31:19 crc kubenswrapper[4826]: I0129 08:31:19.066641 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-7smxw"]
Jan 29 08:31:20 crc kubenswrapper[4826]: I0129 08:31:20.820925 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ab6460e-b058-40f7-acda-fc4f6302f922" path="/var/lib/kubelet/pods/8ab6460e-b058-40f7-acda-fc4f6302f922/volumes"
Jan 29 08:31:20 crc kubenswrapper[4826]: I0129 08:31:20.821818 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbfbbfe5-1f19-48f1-8e0b-2602c2e9c756" path="/var/lib/kubelet/pods/bbfbbfe5-1f19-48f1-8e0b-2602c2e9c756/volumes"
Jan 29 08:31:24 crc kubenswrapper[4826]: I0129 08:31:24.066403 4826 scope.go:117] "RemoveContainer" containerID="0da7f093c7b729647cc01a584a2f686c18ac2d298833ca3682d5c5b5892e626e"
Jan 29 08:31:24 crc kubenswrapper[4826]: I0129 08:31:24.118474 4826 scope.go:117] "RemoveContainer" containerID="5db4e44ccaa87c2bc759cd4f1675756db71d040c2f2e186eaee3506f62437e8d"
Jan 29 08:31:24 crc kubenswrapper[4826]: I0129 08:31:24.165799 4826 scope.go:117] "RemoveContainer" containerID="f1163b3372d444c05f120979d2c9b4406feefb1396311e9c1a40a2c7ea6fa08e"
Jan 29 08:31:24 crc kubenswrapper[4826]: I0129 08:31:24.192843 4826 scope.go:117] "RemoveContainer" containerID="5d508854bb6d61318f765e6daf31d7364bc882911143b03420aaa6b9d37b531d"
Jan 29 08:31:35 crc kubenswrapper[4826]: I0129 08:31:35.040967 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-djgr7"]
Jan 29 08:31:35 crc kubenswrapper[4826]: I0129 08:31:35.049594 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-djgr7"]
Jan 29 08:31:36 crc kubenswrapper[4826]: I0129 08:31:36.821723 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46" path="/var/lib/kubelet/pods/c52d7b11-c4b1-4dc3-9ac7-ae8bf7131e46/volumes"
Jan 29 08:31:57 crc kubenswrapper[4826]: I0129 08:31:57.835288 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9hbds"]
Jan 29 08:31:57 crc kubenswrapper[4826]: E0129 08:31:57.837535 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773" containerName="collect-profiles"
Jan 29 08:31:57 crc kubenswrapper[4826]: I0129 08:31:57.837561 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773" containerName="collect-profiles"
Jan 29 08:31:57 crc kubenswrapper[4826]: I0129 08:31:57.837838 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773" containerName="collect-profiles"
Jan 29 08:31:57 crc kubenswrapper[4826]: I0129 08:31:57.843863 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9hbds"
Jan 29 08:31:57 crc kubenswrapper[4826]: I0129 08:31:57.855363 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9hbds"]
Jan 29 08:31:57 crc kubenswrapper[4826]: I0129 08:31:57.991361 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvkgj\" (UniqueName: \"kubernetes.io/projected/26fb50bf-557f-47fc-9e5f-fa5e8078f182-kube-api-access-tvkgj\") pod \"community-operators-9hbds\" (UID: \"26fb50bf-557f-47fc-9e5f-fa5e8078f182\") " pod="openshift-marketplace/community-operators-9hbds"
Jan 29 08:31:57 crc kubenswrapper[4826]: I0129 08:31:57.991827 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26fb50bf-557f-47fc-9e5f-fa5e8078f182-catalog-content\") pod \"community-operators-9hbds\" (UID: \"26fb50bf-557f-47fc-9e5f-fa5e8078f182\") " pod="openshift-marketplace/community-operators-9hbds"
Jan 29 08:31:57 crc kubenswrapper[4826]: I0129 08:31:57.991901 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26fb50bf-557f-47fc-9e5f-fa5e8078f182-utilities\") pod \"community-operators-9hbds\" (UID: \"26fb50bf-557f-47fc-9e5f-fa5e8078f182\") " pod="openshift-marketplace/community-operators-9hbds"
Jan 29 08:31:58 crc kubenswrapper[4826]: I0129 08:31:58.094106 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26fb50bf-557f-47fc-9e5f-fa5e8078f182-catalog-content\") pod \"community-operators-9hbds\" (UID: \"26fb50bf-557f-47fc-9e5f-fa5e8078f182\") " pod="openshift-marketplace/community-operators-9hbds"
Jan 29 08:31:58 crc kubenswrapper[4826]: I0129 08:31:58.094185 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26fb50bf-557f-47fc-9e5f-fa5e8078f182-utilities\") pod \"community-operators-9hbds\" (UID: \"26fb50bf-557f-47fc-9e5f-fa5e8078f182\") " pod="openshift-marketplace/community-operators-9hbds"
Jan 29 08:31:58 crc kubenswrapper[4826]: I0129 08:31:58.094245 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvkgj\" (UniqueName: \"kubernetes.io/projected/26fb50bf-557f-47fc-9e5f-fa5e8078f182-kube-api-access-tvkgj\") pod \"community-operators-9hbds\" (UID: \"26fb50bf-557f-47fc-9e5f-fa5e8078f182\") " pod="openshift-marketplace/community-operators-9hbds"
Jan 29 08:31:58 crc kubenswrapper[4826]: I0129 08:31:58.094967 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26fb50bf-557f-47fc-9e5f-fa5e8078f182-catalog-content\") pod \"community-operators-9hbds\" (UID: \"26fb50bf-557f-47fc-9e5f-fa5e8078f182\") " pod="openshift-marketplace/community-operators-9hbds"
Jan 29 08:31:58 crc kubenswrapper[4826]: I0129 08:31:58.095361 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26fb50bf-557f-47fc-9e5f-fa5e8078f182-utilities\") pod \"community-operators-9hbds\" (UID: \"26fb50bf-557f-47fc-9e5f-fa5e8078f182\") " pod="openshift-marketplace/community-operators-9hbds"
Jan 29 08:31:58 crc kubenswrapper[4826]: I0129 08:31:58.117869 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvkgj\" (UniqueName: \"kubernetes.io/projected/26fb50bf-557f-47fc-9e5f-fa5e8078f182-kube-api-access-tvkgj\") pod \"community-operators-9hbds\" (UID: \"26fb50bf-557f-47fc-9e5f-fa5e8078f182\") " pod="openshift-marketplace/community-operators-9hbds"
Jan 29 08:31:58 crc kubenswrapper[4826]: I0129 08:31:58.178399 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9hbds"
Jan 29 08:31:58 crc kubenswrapper[4826]: I0129 08:31:58.773129 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9hbds"]
Jan 29 08:31:59 crc kubenswrapper[4826]: I0129 08:31:59.748668 4826 generic.go:334] "Generic (PLEG): container finished" podID="26fb50bf-557f-47fc-9e5f-fa5e8078f182" containerID="8701e7e3ae4fa4e0f008fefc61e5b601012ae831edf202acddddd9afc6b6ac48" exitCode=0
Jan 29 08:31:59 crc kubenswrapper[4826]: I0129 08:31:59.748772 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hbds" event={"ID":"26fb50bf-557f-47fc-9e5f-fa5e8078f182","Type":"ContainerDied","Data":"8701e7e3ae4fa4e0f008fefc61e5b601012ae831edf202acddddd9afc6b6ac48"}
Jan 29 08:31:59 crc kubenswrapper[4826]: I0129 08:31:59.749018 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hbds" event={"ID":"26fb50bf-557f-47fc-9e5f-fa5e8078f182","Type":"ContainerStarted","Data":"34a4f0e3539b6d47a3fb7904af0ccb87828b283c17fcadb300927373226ba870"}
Jan 29 08:31:59 crc kubenswrapper[4826]: I0129 08:31:59.751025 4826 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 29 08:32:01 crc kubenswrapper[4826]: I0129 08:32:01.776203 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hbds" event={"ID":"26fb50bf-557f-47fc-9e5f-fa5e8078f182","Type":"ContainerStarted","Data":"848d2aa7383ec856c8ae1b9af3d9de31b8ca00bd6c56fead55a276c49544b0f7"}
Jan 29 08:32:02 crc kubenswrapper[4826]: I0129 08:32:02.795470 4826 generic.go:334] "Generic (PLEG): container finished" podID="26fb50bf-557f-47fc-9e5f-fa5e8078f182" containerID="848d2aa7383ec856c8ae1b9af3d9de31b8ca00bd6c56fead55a276c49544b0f7" exitCode=0
Jan 29 08:32:02 crc kubenswrapper[4826]: I0129 08:32:02.795706 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hbds" event={"ID":"26fb50bf-557f-47fc-9e5f-fa5e8078f182","Type":"ContainerDied","Data":"848d2aa7383ec856c8ae1b9af3d9de31b8ca00bd6c56fead55a276c49544b0f7"}
Jan 29 08:32:03 crc kubenswrapper[4826]: I0129 08:32:03.813813 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hbds" event={"ID":"26fb50bf-557f-47fc-9e5f-fa5e8078f182","Type":"ContainerStarted","Data":"ad277248980add729c08c57c09760d80d32dd5006b6b690e81dc565b8ec410e2"}
Jan 29 08:32:03 crc kubenswrapper[4826]: I0129 08:32:03.835269 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9hbds" podStartSLOduration=3.36601074 podStartE2EDuration="6.835248254s" podCreationTimestamp="2026-01-29 08:31:57 +0000 UTC" firstStartedPulling="2026-01-29 08:31:59.750696547 +0000 UTC m=+6503.612489626" lastFinishedPulling="2026-01-29 08:32:03.219934031 +0000 UTC m=+6507.081727140" observedRunningTime="2026-01-29 08:32:03.83165387 +0000 UTC m=+6507.693446939" watchObservedRunningTime="2026-01-29 08:32:03.835248254 +0000 UTC m=+6507.697041333"
Jan 29 08:32:08 crc kubenswrapper[4826]: I0129 08:32:08.179157 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9hbds"
Jan 29 08:32:08 crc kubenswrapper[4826]: I0129 08:32:08.179516 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9hbds"
Jan 29 08:32:08 crc kubenswrapper[4826]: I0129 08:32:08.242234 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9hbds"
Jan 29 08:32:08 crc kubenswrapper[4826]: I0129 08:32:08.921614 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9hbds"
Jan 29 08:32:08 crc kubenswrapper[4826]: I0129 08:32:08.977453 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9hbds"]
Jan 29 08:32:10 crc kubenswrapper[4826]: I0129 08:32:10.889237 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9hbds" podUID="26fb50bf-557f-47fc-9e5f-fa5e8078f182" containerName="registry-server" containerID="cri-o://ad277248980add729c08c57c09760d80d32dd5006b6b690e81dc565b8ec410e2" gracePeriod=2
Jan 29 08:32:11 crc kubenswrapper[4826]: I0129 08:32:11.450614 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9hbds"
Jan 29 08:32:11 crc kubenswrapper[4826]: I0129 08:32:11.609892 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26fb50bf-557f-47fc-9e5f-fa5e8078f182-utilities\") pod \"26fb50bf-557f-47fc-9e5f-fa5e8078f182\" (UID: \"26fb50bf-557f-47fc-9e5f-fa5e8078f182\") "
Jan 29 08:32:11 crc kubenswrapper[4826]: I0129 08:32:11.610173 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26fb50bf-557f-47fc-9e5f-fa5e8078f182-catalog-content\") pod \"26fb50bf-557f-47fc-9e5f-fa5e8078f182\" (UID: \"26fb50bf-557f-47fc-9e5f-fa5e8078f182\") "
Jan 29 08:32:11 crc kubenswrapper[4826]: I0129 08:32:11.610253 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvkgj\" (UniqueName: \"kubernetes.io/projected/26fb50bf-557f-47fc-9e5f-fa5e8078f182-kube-api-access-tvkgj\") pod \"26fb50bf-557f-47fc-9e5f-fa5e8078f182\" (UID: \"26fb50bf-557f-47fc-9e5f-fa5e8078f182\") "
Jan 29 08:32:11 crc kubenswrapper[4826]: I0129 08:32:11.610877 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26fb50bf-557f-47fc-9e5f-fa5e8078f182-utilities" (OuterVolumeSpecName: "utilities") pod "26fb50bf-557f-47fc-9e5f-fa5e8078f182" (UID: "26fb50bf-557f-47fc-9e5f-fa5e8078f182"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 08:32:11 crc kubenswrapper[4826]: I0129 08:32:11.612240 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26fb50bf-557f-47fc-9e5f-fa5e8078f182-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 08:32:11 crc kubenswrapper[4826]: I0129 08:32:11.623542 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26fb50bf-557f-47fc-9e5f-fa5e8078f182-kube-api-access-tvkgj" (OuterVolumeSpecName: "kube-api-access-tvkgj") pod "26fb50bf-557f-47fc-9e5f-fa5e8078f182" (UID: "26fb50bf-557f-47fc-9e5f-fa5e8078f182"). InnerVolumeSpecName "kube-api-access-tvkgj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:32:11 crc kubenswrapper[4826]: I0129 08:32:11.671349 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26fb50bf-557f-47fc-9e5f-fa5e8078f182-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "26fb50bf-557f-47fc-9e5f-fa5e8078f182" (UID: "26fb50bf-557f-47fc-9e5f-fa5e8078f182"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 08:32:11 crc kubenswrapper[4826]: I0129 08:32:11.714597 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26fb50bf-557f-47fc-9e5f-fa5e8078f182-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 08:32:11 crc kubenswrapper[4826]: I0129 08:32:11.714630 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvkgj\" (UniqueName: \"kubernetes.io/projected/26fb50bf-557f-47fc-9e5f-fa5e8078f182-kube-api-access-tvkgj\") on node \"crc\" DevicePath \"\""
Jan 29 08:32:11 crc kubenswrapper[4826]: I0129 08:32:11.903760 4826 generic.go:334] "Generic (PLEG): container finished" podID="26fb50bf-557f-47fc-9e5f-fa5e8078f182" containerID="ad277248980add729c08c57c09760d80d32dd5006b6b690e81dc565b8ec410e2" exitCode=0
Jan 29 08:32:11 crc kubenswrapper[4826]: I0129 08:32:11.903821 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hbds" event={"ID":"26fb50bf-557f-47fc-9e5f-fa5e8078f182","Type":"ContainerDied","Data":"ad277248980add729c08c57c09760d80d32dd5006b6b690e81dc565b8ec410e2"}
Jan 29 08:32:11 crc kubenswrapper[4826]: I0129 08:32:11.903888 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hbds" event={"ID":"26fb50bf-557f-47fc-9e5f-fa5e8078f182","Type":"ContainerDied","Data":"34a4f0e3539b6d47a3fb7904af0ccb87828b283c17fcadb300927373226ba870"}
Jan 29 08:32:11 crc kubenswrapper[4826]: I0129 08:32:11.903921 4826 scope.go:117] "RemoveContainer" containerID="ad277248980add729c08c57c09760d80d32dd5006b6b690e81dc565b8ec410e2"
Jan 29 08:32:11 crc kubenswrapper[4826]: I0129 08:32:11.903949 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9hbds"
Jan 29 08:32:11 crc kubenswrapper[4826]: I0129 08:32:11.934580 4826 scope.go:117] "RemoveContainer" containerID="848d2aa7383ec856c8ae1b9af3d9de31b8ca00bd6c56fead55a276c49544b0f7"
Jan 29 08:32:11 crc kubenswrapper[4826]: I0129 08:32:11.967878 4826 scope.go:117] "RemoveContainer" containerID="8701e7e3ae4fa4e0f008fefc61e5b601012ae831edf202acddddd9afc6b6ac48"
Jan 29 08:32:11 crc kubenswrapper[4826]: I0129 08:32:11.971431 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9hbds"]
Jan 29 08:32:11 crc kubenswrapper[4826]: I0129 08:32:11.982119 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9hbds"]
Jan 29 08:32:12 crc kubenswrapper[4826]: I0129 08:32:12.025993 4826 scope.go:117] "RemoveContainer" containerID="ad277248980add729c08c57c09760d80d32dd5006b6b690e81dc565b8ec410e2"
Jan 29 08:32:12 crc kubenswrapper[4826]: E0129 08:32:12.026737 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad277248980add729c08c57c09760d80d32dd5006b6b690e81dc565b8ec410e2\": container with ID starting with ad277248980add729c08c57c09760d80d32dd5006b6b690e81dc565b8ec410e2 not found: ID does not exist" containerID="ad277248980add729c08c57c09760d80d32dd5006b6b690e81dc565b8ec410e2"
Jan 29 08:32:12 crc kubenswrapper[4826]: I0129 08:32:12.026790 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad277248980add729c08c57c09760d80d32dd5006b6b690e81dc565b8ec410e2"} err="failed to get container status \"ad277248980add729c08c57c09760d80d32dd5006b6b690e81dc565b8ec410e2\": rpc error: code = NotFound desc = could not find container \"ad277248980add729c08c57c09760d80d32dd5006b6b690e81dc565b8ec410e2\": container with ID starting with ad277248980add729c08c57c09760d80d32dd5006b6b690e81dc565b8ec410e2 not found: ID does not exist"
Jan 29 08:32:12 crc kubenswrapper[4826]: I0129 08:32:12.026821 4826 scope.go:117] "RemoveContainer" containerID="848d2aa7383ec856c8ae1b9af3d9de31b8ca00bd6c56fead55a276c49544b0f7"
Jan 29 08:32:12 crc kubenswrapper[4826]: E0129 08:32:12.027553 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"848d2aa7383ec856c8ae1b9af3d9de31b8ca00bd6c56fead55a276c49544b0f7\": container with ID starting with 848d2aa7383ec856c8ae1b9af3d9de31b8ca00bd6c56fead55a276c49544b0f7 not found: ID does not exist" containerID="848d2aa7383ec856c8ae1b9af3d9de31b8ca00bd6c56fead55a276c49544b0f7"
Jan 29 08:32:12 crc kubenswrapper[4826]: I0129 08:32:12.027628 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"848d2aa7383ec856c8ae1b9af3d9de31b8ca00bd6c56fead55a276c49544b0f7"} err="failed to get container status \"848d2aa7383ec856c8ae1b9af3d9de31b8ca00bd6c56fead55a276c49544b0f7\": rpc error: code = NotFound desc = could not find container \"848d2aa7383ec856c8ae1b9af3d9de31b8ca00bd6c56fead55a276c49544b0f7\": container with ID starting with 848d2aa7383ec856c8ae1b9af3d9de31b8ca00bd6c56fead55a276c49544b0f7 not found: ID does not exist"
Jan 29 08:32:12 crc kubenswrapper[4826]: I0129 08:32:12.027671 4826 scope.go:117] "RemoveContainer" containerID="8701e7e3ae4fa4e0f008fefc61e5b601012ae831edf202acddddd9afc6b6ac48"
Jan 29 08:32:12 crc kubenswrapper[4826]: E0129 08:32:12.028239 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8701e7e3ae4fa4e0f008fefc61e5b601012ae831edf202acddddd9afc6b6ac48\": container with ID starting with 8701e7e3ae4fa4e0f008fefc61e5b601012ae831edf202acddddd9afc6b6ac48 not found: ID does not exist" containerID="8701e7e3ae4fa4e0f008fefc61e5b601012ae831edf202acddddd9afc6b6ac48"
Jan 29 08:32:12 crc kubenswrapper[4826]: I0129 08:32:12.028285 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8701e7e3ae4fa4e0f008fefc61e5b601012ae831edf202acddddd9afc6b6ac48"} err="failed to get container status \"8701e7e3ae4fa4e0f008fefc61e5b601012ae831edf202acddddd9afc6b6ac48\": rpc error: code = NotFound desc = could not find container \"8701e7e3ae4fa4e0f008fefc61e5b601012ae831edf202acddddd9afc6b6ac48\": container with ID starting with 8701e7e3ae4fa4e0f008fefc61e5b601012ae831edf202acddddd9afc6b6ac48 not found: ID does not exist"
Jan 29 08:32:12 crc kubenswrapper[4826]: I0129 08:32:12.828483 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26fb50bf-557f-47fc-9e5f-fa5e8078f182" path="/var/lib/kubelet/pods/26fb50bf-557f-47fc-9e5f-fa5e8078f182/volumes"
Jan 29 08:32:24 crc kubenswrapper[4826]: I0129 08:32:24.322267 4826 scope.go:117] "RemoveContainer" containerID="7ff02862014c3127f16f72177272d257717af626a4254ea5161357961db2d9cb"
Jan 29 08:32:35 crc kubenswrapper[4826]: I0129 08:32:35.664963 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 08:32:35 crc kubenswrapper[4826]: I0129 08:32:35.665589 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 08:32:44 crc kubenswrapper[4826]: I0129 08:32:44.619580 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qxxt5"]
Jan 29 08:32:44 crc kubenswrapper[4826]: E0129 08:32:44.620766 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26fb50bf-557f-47fc-9e5f-fa5e8078f182" containerName="registry-server"
Jan 29 08:32:44 crc kubenswrapper[4826]: I0129 08:32:44.620785 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="26fb50bf-557f-47fc-9e5f-fa5e8078f182" containerName="registry-server"
Jan 29 08:32:44 crc kubenswrapper[4826]: E0129 08:32:44.620831 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26fb50bf-557f-47fc-9e5f-fa5e8078f182" containerName="extract-utilities"
Jan 29 08:32:44 crc kubenswrapper[4826]: I0129 08:32:44.620840 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="26fb50bf-557f-47fc-9e5f-fa5e8078f182" containerName="extract-utilities"
Jan 29 08:32:44 crc kubenswrapper[4826]: E0129 08:32:44.620854 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26fb50bf-557f-47fc-9e5f-fa5e8078f182" containerName="extract-content"
Jan 29 08:32:44 crc kubenswrapper[4826]: I0129 08:32:44.620861 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="26fb50bf-557f-47fc-9e5f-fa5e8078f182" containerName="extract-content"
Jan 29 08:32:44 crc kubenswrapper[4826]: I0129 08:32:44.621099 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="26fb50bf-557f-47fc-9e5f-fa5e8078f182" containerName="registry-server"
Jan 29 08:32:44 crc kubenswrapper[4826]: I0129 08:32:44.662477 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qxxt5"
Jan 29 08:32:44 crc kubenswrapper[4826]: I0129 08:32:44.734933 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qxxt5"]
Jan 29 08:32:44 crc kubenswrapper[4826]: I0129 08:32:44.827033 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkjk6\" (UniqueName: \"kubernetes.io/projected/c5ac813b-c186-47b3-b126-12cdf48471e2-kube-api-access-lkjk6\") pod \"redhat-marketplace-qxxt5\" (UID: \"c5ac813b-c186-47b3-b126-12cdf48471e2\") " pod="openshift-marketplace/redhat-marketplace-qxxt5"
Jan 29 08:32:44 crc kubenswrapper[4826]: I0129 08:32:44.827286 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5ac813b-c186-47b3-b126-12cdf48471e2-utilities\") pod \"redhat-marketplace-qxxt5\" (UID: \"c5ac813b-c186-47b3-b126-12cdf48471e2\") " pod="openshift-marketplace/redhat-marketplace-qxxt5"
Jan 29 08:32:44 crc kubenswrapper[4826]: I0129 08:32:44.827416 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5ac813b-c186-47b3-b126-12cdf48471e2-catalog-content\") pod \"redhat-marketplace-qxxt5\" (UID: \"c5ac813b-c186-47b3-b126-12cdf48471e2\") " pod="openshift-marketplace/redhat-marketplace-qxxt5"
Jan 29 08:32:44 crc kubenswrapper[4826]: I0129 08:32:44.929675 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5ac813b-c186-47b3-b126-12cdf48471e2-utilities\") pod \"redhat-marketplace-qxxt5\" (UID: \"c5ac813b-c186-47b3-b126-12cdf48471e2\") " pod="openshift-marketplace/redhat-marketplace-qxxt5"
Jan 29 08:32:44 crc kubenswrapper[4826]: I0129 08:32:44.929963 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5ac813b-c186-47b3-b126-12cdf48471e2-catalog-content\") pod \"redhat-marketplace-qxxt5\" (UID: \"c5ac813b-c186-47b3-b126-12cdf48471e2\") " pod="openshift-marketplace/redhat-marketplace-qxxt5"
Jan 29 08:32:44 crc kubenswrapper[4826]: I0129 08:32:44.930071 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkjk6\" (UniqueName: \"kubernetes.io/projected/c5ac813b-c186-47b3-b126-12cdf48471e2-kube-api-access-lkjk6\") pod \"redhat-marketplace-qxxt5\" (UID: \"c5ac813b-c186-47b3-b126-12cdf48471e2\") " pod="openshift-marketplace/redhat-marketplace-qxxt5"
Jan 29 08:32:44 crc kubenswrapper[4826]: I0129 08:32:44.930156 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5ac813b-c186-47b3-b126-12cdf48471e2-utilities\") pod \"redhat-marketplace-qxxt5\" (UID: \"c5ac813b-c186-47b3-b126-12cdf48471e2\") " pod="openshift-marketplace/redhat-marketplace-qxxt5"
Jan 29 08:32:44 crc kubenswrapper[4826]: I0129 08:32:44.930410 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5ac813b-c186-47b3-b126-12cdf48471e2-catalog-content\") pod \"redhat-marketplace-qxxt5\" (UID: \"c5ac813b-c186-47b3-b126-12cdf48471e2\") " pod="openshift-marketplace/redhat-marketplace-qxxt5"
Jan 29 08:32:44 crc kubenswrapper[4826]: I0129 08:32:44.961881 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkjk6\" (UniqueName: \"kubernetes.io/projected/c5ac813b-c186-47b3-b126-12cdf48471e2-kube-api-access-lkjk6\") pod \"redhat-marketplace-qxxt5\" (UID: \"c5ac813b-c186-47b3-b126-12cdf48471e2\") " pod="openshift-marketplace/redhat-marketplace-qxxt5"
Jan 29 08:32:45 crc kubenswrapper[4826]: I0129 08:32:45.004314 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qxxt5"
Jan 29 08:32:45 crc kubenswrapper[4826]: I0129 08:32:45.510039 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qxxt5"]
Jan 29 08:32:45 crc kubenswrapper[4826]: W0129 08:32:45.526535 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5ac813b_c186_47b3_b126_12cdf48471e2.slice/crio-49bfe5738965c9e0b07590f5936cf635991b349336699782510583d7734e02a7 WatchSource:0}: Error finding container 49bfe5738965c9e0b07590f5936cf635991b349336699782510583d7734e02a7: Status 404 returned error can't find the container with id 49bfe5738965c9e0b07590f5936cf635991b349336699782510583d7734e02a7
Jan 29 08:32:46 crc kubenswrapper[4826]: I0129 08:32:46.290565 4826 generic.go:334] "Generic (PLEG): container finished" podID="c5ac813b-c186-47b3-b126-12cdf48471e2" containerID="4725a894fa30152fb78f31a5df4790c039e268d47e2d56d00f13030c1ccf8fcf" exitCode=0
Jan 29 08:32:46 crc kubenswrapper[4826]: I0129 08:32:46.290684 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qxxt5" event={"ID":"c5ac813b-c186-47b3-b126-12cdf48471e2","Type":"ContainerDied","Data":"4725a894fa30152fb78f31a5df4790c039e268d47e2d56d00f13030c1ccf8fcf"}
Jan 29 08:32:46 crc kubenswrapper[4826]: I0129 08:32:46.290993 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qxxt5" event={"ID":"c5ac813b-c186-47b3-b126-12cdf48471e2","Type":"ContainerStarted","Data":"49bfe5738965c9e0b07590f5936cf635991b349336699782510583d7734e02a7"}
Jan 29 08:32:47 crc kubenswrapper[4826]: I0129 08:32:47.310840 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qxxt5" event={"ID":"c5ac813b-c186-47b3-b126-12cdf48471e2","Type":"ContainerStarted","Data":"566ce3c490203e2d5bac9c1d08f5bcfbf6b05110eefc80f0c9c923f8b52b08e0"}
Jan 29 08:32:48 crc kubenswrapper[4826]: I0129 08:32:48.322734 4826 generic.go:334] "Generic (PLEG): container finished" podID="c5ac813b-c186-47b3-b126-12cdf48471e2" containerID="566ce3c490203e2d5bac9c1d08f5bcfbf6b05110eefc80f0c9c923f8b52b08e0" exitCode=0
Jan 29 08:32:48 crc kubenswrapper[4826]: I0129 08:32:48.322826 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qxxt5" event={"ID":"c5ac813b-c186-47b3-b126-12cdf48471e2","Type":"ContainerDied","Data":"566ce3c490203e2d5bac9c1d08f5bcfbf6b05110eefc80f0c9c923f8b52b08e0"}
Jan 29 08:32:49 crc kubenswrapper[4826]: I0129 08:32:49.337007 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qxxt5" event={"ID":"c5ac813b-c186-47b3-b126-12cdf48471e2","Type":"ContainerStarted","Data":"a508de8e604e2c71631800bf16321ae5bcf944beb46d65b888894a6fc8c12302"}
Jan 29 08:32:49 crc kubenswrapper[4826]: I0129 08:32:49.364496 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qxxt5" podStartSLOduration=2.912426442 podStartE2EDuration="5.36448149s" podCreationTimestamp="2026-01-29 08:32:44 +0000 UTC" firstStartedPulling="2026-01-29 08:32:46.294729846 +0000 UTC m=+6550.156522925" lastFinishedPulling="2026-01-29 08:32:48.746784904 +0000 UTC m=+6552.608577973" observedRunningTime="2026-01-29 08:32:49.356983803 +0000 UTC m=+6553.218776872" watchObservedRunningTime="2026-01-29 08:32:49.36448149 +0000 UTC m=+6553.226274559"
Jan 29 08:32:55 crc kubenswrapper[4826]: I0129 08:32:55.009206 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qxxt5"
Jan 29 08:32:55 crc kubenswrapper[4826]: I0129 08:32:55.009643 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qxxt5"
Jan 29 08:32:55 crc kubenswrapper[4826]: I0129 08:32:55.069227 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qxxt5"
Jan 29 08:32:55 crc kubenswrapper[4826]: I0129 08:32:55.457897 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qxxt5"
Jan 29 08:32:55 crc kubenswrapper[4826]: I0129 08:32:55.511208 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qxxt5"]
Jan 29 08:32:57 crc kubenswrapper[4826]: I0129 08:32:57.415421 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qxxt5" podUID="c5ac813b-c186-47b3-b126-12cdf48471e2" containerName="registry-server" containerID="cri-o://a508de8e604e2c71631800bf16321ae5bcf944beb46d65b888894a6fc8c12302" gracePeriod=2
Jan 29 08:32:57 crc kubenswrapper[4826]: I0129 08:32:57.959590 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qxxt5"
Jan 29 08:32:58 crc kubenswrapper[4826]: I0129 08:32:58.104628 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5ac813b-c186-47b3-b126-12cdf48471e2-catalog-content\") pod \"c5ac813b-c186-47b3-b126-12cdf48471e2\" (UID: \"c5ac813b-c186-47b3-b126-12cdf48471e2\") "
Jan 29 08:32:58 crc kubenswrapper[4826]: I0129 08:32:58.105082 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkjk6\" (UniqueName: \"kubernetes.io/projected/c5ac813b-c186-47b3-b126-12cdf48471e2-kube-api-access-lkjk6\") pod \"c5ac813b-c186-47b3-b126-12cdf48471e2\" (UID: \"c5ac813b-c186-47b3-b126-12cdf48471e2\") "
Jan 29 08:32:58 crc kubenswrapper[4826]: I0129 08:32:58.105119 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5ac813b-c186-47b3-b126-12cdf48471e2-utilities\") pod \"c5ac813b-c186-47b3-b126-12cdf48471e2\" (UID: \"c5ac813b-c186-47b3-b126-12cdf48471e2\") "
Jan 29 08:32:58 crc kubenswrapper[4826]: I0129 08:32:58.107082 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5ac813b-c186-47b3-b126-12cdf48471e2-utilities" (OuterVolumeSpecName: "utilities") pod "c5ac813b-c186-47b3-b126-12cdf48471e2" (UID: "c5ac813b-c186-47b3-b126-12cdf48471e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 08:32:58 crc kubenswrapper[4826]: I0129 08:32:58.107762 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5ac813b-c186-47b3-b126-12cdf48471e2-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 08:32:58 crc kubenswrapper[4826]: I0129 08:32:58.122694 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5ac813b-c186-47b3-b126-12cdf48471e2-kube-api-access-lkjk6" (OuterVolumeSpecName: "kube-api-access-lkjk6") pod "c5ac813b-c186-47b3-b126-12cdf48471e2" (UID: "c5ac813b-c186-47b3-b126-12cdf48471e2"). InnerVolumeSpecName "kube-api-access-lkjk6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:32:58 crc kubenswrapper[4826]: I0129 08:32:58.129557 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5ac813b-c186-47b3-b126-12cdf48471e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c5ac813b-c186-47b3-b126-12cdf48471e2" (UID: "c5ac813b-c186-47b3-b126-12cdf48471e2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 08:32:58 crc kubenswrapper[4826]: I0129 08:32:58.209093 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5ac813b-c186-47b3-b126-12cdf48471e2-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 08:32:58 crc kubenswrapper[4826]: I0129 08:32:58.209134 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkjk6\" (UniqueName: \"kubernetes.io/projected/c5ac813b-c186-47b3-b126-12cdf48471e2-kube-api-access-lkjk6\") on node \"crc\" DevicePath \"\""
Jan 29 08:32:58 crc kubenswrapper[4826]: I0129 08:32:58.428285 4826 generic.go:334] "Generic (PLEG): container finished" podID="c5ac813b-c186-47b3-b126-12cdf48471e2" containerID="a508de8e604e2c71631800bf16321ae5bcf944beb46d65b888894a6fc8c12302" exitCode=0
Jan 29 08:32:58 crc kubenswrapper[4826]: I0129 08:32:58.428355 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qxxt5" event={"ID":"c5ac813b-c186-47b3-b126-12cdf48471e2","Type":"ContainerDied","Data":"a508de8e604e2c71631800bf16321ae5bcf944beb46d65b888894a6fc8c12302"}
Jan 29 08:32:58 crc kubenswrapper[4826]: I0129 08:32:58.428374 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qxxt5"
Jan 29 08:32:58 crc kubenswrapper[4826]: I0129 08:32:58.428400 4826 scope.go:117] "RemoveContainer" containerID="a508de8e604e2c71631800bf16321ae5bcf944beb46d65b888894a6fc8c12302"
Jan 29 08:32:58 crc kubenswrapper[4826]: I0129 08:32:58.428388 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qxxt5" event={"ID":"c5ac813b-c186-47b3-b126-12cdf48471e2","Type":"ContainerDied","Data":"49bfe5738965c9e0b07590f5936cf635991b349336699782510583d7734e02a7"}
Jan 29 08:32:58 crc kubenswrapper[4826]: I0129 08:32:58.462366 4826 scope.go:117] "RemoveContainer" containerID="566ce3c490203e2d5bac9c1d08f5bcfbf6b05110eefc80f0c9c923f8b52b08e0"
Jan 29 08:32:58 crc kubenswrapper[4826]: I0129 08:32:58.479278 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qxxt5"]
Jan 29 08:32:58 crc kubenswrapper[4826]: I0129 08:32:58.487721 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qxxt5"]
Jan 29 08:32:58 crc kubenswrapper[4826]: I0129 08:32:58.499639 4826 scope.go:117] "RemoveContainer" containerID="4725a894fa30152fb78f31a5df4790c039e268d47e2d56d00f13030c1ccf8fcf"
Jan 29 08:32:58 crc kubenswrapper[4826]: I0129 08:32:58.563329 4826 scope.go:117] "RemoveContainer" containerID="a508de8e604e2c71631800bf16321ae5bcf944beb46d65b888894a6fc8c12302"
Jan 29 08:32:58 crc kubenswrapper[4826]: E0129 08:32:58.563904 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a508de8e604e2c71631800bf16321ae5bcf944beb46d65b888894a6fc8c12302\": container with ID starting with a508de8e604e2c71631800bf16321ae5bcf944beb46d65b888894a6fc8c12302 not found: ID does not exist" containerID="a508de8e604e2c71631800bf16321ae5bcf944beb46d65b888894a6fc8c12302"
Jan 29 08:32:58 crc kubenswrapper[4826]: I0129 08:32:58.563991 4826 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a508de8e604e2c71631800bf16321ae5bcf944beb46d65b888894a6fc8c12302"} err="failed to get container status \"a508de8e604e2c71631800bf16321ae5bcf944beb46d65b888894a6fc8c12302\": rpc error: code = NotFound desc = could not find container \"a508de8e604e2c71631800bf16321ae5bcf944beb46d65b888894a6fc8c12302\": container with ID starting with a508de8e604e2c71631800bf16321ae5bcf944beb46d65b888894a6fc8c12302 not found: ID does not exist" Jan 29 08:32:58 crc kubenswrapper[4826]: I0129 08:32:58.564023 4826 scope.go:117] "RemoveContainer" containerID="566ce3c490203e2d5bac9c1d08f5bcfbf6b05110eefc80f0c9c923f8b52b08e0" Jan 29 08:32:58 crc kubenswrapper[4826]: E0129 08:32:58.564358 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"566ce3c490203e2d5bac9c1d08f5bcfbf6b05110eefc80f0c9c923f8b52b08e0\": container with ID starting with 566ce3c490203e2d5bac9c1d08f5bcfbf6b05110eefc80f0c9c923f8b52b08e0 not found: ID does not exist" containerID="566ce3c490203e2d5bac9c1d08f5bcfbf6b05110eefc80f0c9c923f8b52b08e0" Jan 29 08:32:58 crc kubenswrapper[4826]: I0129 08:32:58.564383 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"566ce3c490203e2d5bac9c1d08f5bcfbf6b05110eefc80f0c9c923f8b52b08e0"} err="failed to get container status \"566ce3c490203e2d5bac9c1d08f5bcfbf6b05110eefc80f0c9c923f8b52b08e0\": rpc error: code = NotFound desc = could not find container \"566ce3c490203e2d5bac9c1d08f5bcfbf6b05110eefc80f0c9c923f8b52b08e0\": container with ID starting with 566ce3c490203e2d5bac9c1d08f5bcfbf6b05110eefc80f0c9c923f8b52b08e0 not found: ID does not exist" Jan 29 08:32:58 crc kubenswrapper[4826]: I0129 08:32:58.564397 4826 scope.go:117] "RemoveContainer" containerID="4725a894fa30152fb78f31a5df4790c039e268d47e2d56d00f13030c1ccf8fcf" Jan 29 08:32:58 crc kubenswrapper[4826]: E0129 
08:32:58.564615 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4725a894fa30152fb78f31a5df4790c039e268d47e2d56d00f13030c1ccf8fcf\": container with ID starting with 4725a894fa30152fb78f31a5df4790c039e268d47e2d56d00f13030c1ccf8fcf not found: ID does not exist" containerID="4725a894fa30152fb78f31a5df4790c039e268d47e2d56d00f13030c1ccf8fcf" Jan 29 08:32:58 crc kubenswrapper[4826]: I0129 08:32:58.564636 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4725a894fa30152fb78f31a5df4790c039e268d47e2d56d00f13030c1ccf8fcf"} err="failed to get container status \"4725a894fa30152fb78f31a5df4790c039e268d47e2d56d00f13030c1ccf8fcf\": rpc error: code = NotFound desc = could not find container \"4725a894fa30152fb78f31a5df4790c039e268d47e2d56d00f13030c1ccf8fcf\": container with ID starting with 4725a894fa30152fb78f31a5df4790c039e268d47e2d56d00f13030c1ccf8fcf not found: ID does not exist" Jan 29 08:32:58 crc kubenswrapper[4826]: I0129 08:32:58.821017 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5ac813b-c186-47b3-b126-12cdf48471e2" path="/var/lib/kubelet/pods/c5ac813b-c186-47b3-b126-12cdf48471e2/volumes" Jan 29 08:33:05 crc kubenswrapper[4826]: I0129 08:33:05.655876 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:33:05 crc kubenswrapper[4826]: I0129 08:33:05.656441 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 29 08:33:35 crc kubenswrapper[4826]: I0129 08:33:35.656952 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:33:35 crc kubenswrapper[4826]: I0129 08:33:35.657689 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:33:35 crc kubenswrapper[4826]: I0129 08:33:35.657762 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" Jan 29 08:33:35 crc kubenswrapper[4826]: I0129 08:33:35.658994 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"725f4aa4f9ef4129cc44e12f5f1026f8860669230135655f272ac1666d528dcf"} pod="openshift-machine-config-operator/machine-config-daemon-llzmh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 08:33:35 crc kubenswrapper[4826]: I0129 08:33:35.659088 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" containerID="cri-o://725f4aa4f9ef4129cc44e12f5f1026f8860669230135655f272ac1666d528dcf" gracePeriod=600 Jan 29 08:33:35 crc kubenswrapper[4826]: I0129 08:33:35.788928 4826 generic.go:334] "Generic (PLEG): container finished" podID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" 
containerID="725f4aa4f9ef4129cc44e12f5f1026f8860669230135655f272ac1666d528dcf" exitCode=0 Jan 29 08:33:35 crc kubenswrapper[4826]: I0129 08:33:35.788972 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerDied","Data":"725f4aa4f9ef4129cc44e12f5f1026f8860669230135655f272ac1666d528dcf"} Jan 29 08:33:35 crc kubenswrapper[4826]: I0129 08:33:35.789004 4826 scope.go:117] "RemoveContainer" containerID="42007fe8252651c322800cb3362afa885bf869996e754d5e5eded1c4cfbce3ac" Jan 29 08:33:35 crc kubenswrapper[4826]: E0129 08:33:35.796832 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:33:36 crc kubenswrapper[4826]: I0129 08:33:36.801820 4826 scope.go:117] "RemoveContainer" containerID="725f4aa4f9ef4129cc44e12f5f1026f8860669230135655f272ac1666d528dcf" Jan 29 08:33:36 crc kubenswrapper[4826]: E0129 08:33:36.802148 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:33:44 crc kubenswrapper[4826]: I0129 08:33:44.122341 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-njwlg"] Jan 29 08:33:44 crc kubenswrapper[4826]: E0129 
08:33:44.123215 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5ac813b-c186-47b3-b126-12cdf48471e2" containerName="extract-utilities" Jan 29 08:33:44 crc kubenswrapper[4826]: I0129 08:33:44.123227 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5ac813b-c186-47b3-b126-12cdf48471e2" containerName="extract-utilities" Jan 29 08:33:44 crc kubenswrapper[4826]: E0129 08:33:44.123238 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5ac813b-c186-47b3-b126-12cdf48471e2" containerName="extract-content" Jan 29 08:33:44 crc kubenswrapper[4826]: I0129 08:33:44.123243 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5ac813b-c186-47b3-b126-12cdf48471e2" containerName="extract-content" Jan 29 08:33:44 crc kubenswrapper[4826]: E0129 08:33:44.123266 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5ac813b-c186-47b3-b126-12cdf48471e2" containerName="registry-server" Jan 29 08:33:44 crc kubenswrapper[4826]: I0129 08:33:44.123271 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5ac813b-c186-47b3-b126-12cdf48471e2" containerName="registry-server" Jan 29 08:33:44 crc kubenswrapper[4826]: I0129 08:33:44.123460 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5ac813b-c186-47b3-b126-12cdf48471e2" containerName="registry-server" Jan 29 08:33:44 crc kubenswrapper[4826]: I0129 08:33:44.124755 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-njwlg" Jan 29 08:33:44 crc kubenswrapper[4826]: I0129 08:33:44.138432 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-njwlg"] Jan 29 08:33:44 crc kubenswrapper[4826]: I0129 08:33:44.176600 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e28e40c2-20a1-4ad2-8790-42712351003d-catalog-content\") pod \"certified-operators-njwlg\" (UID: \"e28e40c2-20a1-4ad2-8790-42712351003d\") " pod="openshift-marketplace/certified-operators-njwlg" Jan 29 08:33:44 crc kubenswrapper[4826]: I0129 08:33:44.176665 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e28e40c2-20a1-4ad2-8790-42712351003d-utilities\") pod \"certified-operators-njwlg\" (UID: \"e28e40c2-20a1-4ad2-8790-42712351003d\") " pod="openshift-marketplace/certified-operators-njwlg" Jan 29 08:33:44 crc kubenswrapper[4826]: I0129 08:33:44.176701 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw57l\" (UniqueName: \"kubernetes.io/projected/e28e40c2-20a1-4ad2-8790-42712351003d-kube-api-access-cw57l\") pod \"certified-operators-njwlg\" (UID: \"e28e40c2-20a1-4ad2-8790-42712351003d\") " pod="openshift-marketplace/certified-operators-njwlg" Jan 29 08:33:44 crc kubenswrapper[4826]: I0129 08:33:44.279320 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e28e40c2-20a1-4ad2-8790-42712351003d-catalog-content\") pod \"certified-operators-njwlg\" (UID: \"e28e40c2-20a1-4ad2-8790-42712351003d\") " pod="openshift-marketplace/certified-operators-njwlg" Jan 29 08:33:44 crc kubenswrapper[4826]: I0129 08:33:44.279409 4826 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e28e40c2-20a1-4ad2-8790-42712351003d-utilities\") pod \"certified-operators-njwlg\" (UID: \"e28e40c2-20a1-4ad2-8790-42712351003d\") " pod="openshift-marketplace/certified-operators-njwlg" Jan 29 08:33:44 crc kubenswrapper[4826]: I0129 08:33:44.279446 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw57l\" (UniqueName: \"kubernetes.io/projected/e28e40c2-20a1-4ad2-8790-42712351003d-kube-api-access-cw57l\") pod \"certified-operators-njwlg\" (UID: \"e28e40c2-20a1-4ad2-8790-42712351003d\") " pod="openshift-marketplace/certified-operators-njwlg" Jan 29 08:33:44 crc kubenswrapper[4826]: I0129 08:33:44.280073 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e28e40c2-20a1-4ad2-8790-42712351003d-catalog-content\") pod \"certified-operators-njwlg\" (UID: \"e28e40c2-20a1-4ad2-8790-42712351003d\") " pod="openshift-marketplace/certified-operators-njwlg" Jan 29 08:33:44 crc kubenswrapper[4826]: I0129 08:33:44.280091 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e28e40c2-20a1-4ad2-8790-42712351003d-utilities\") pod \"certified-operators-njwlg\" (UID: \"e28e40c2-20a1-4ad2-8790-42712351003d\") " pod="openshift-marketplace/certified-operators-njwlg" Jan 29 08:33:44 crc kubenswrapper[4826]: I0129 08:33:44.309510 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw57l\" (UniqueName: \"kubernetes.io/projected/e28e40c2-20a1-4ad2-8790-42712351003d-kube-api-access-cw57l\") pod \"certified-operators-njwlg\" (UID: \"e28e40c2-20a1-4ad2-8790-42712351003d\") " pod="openshift-marketplace/certified-operators-njwlg" Jan 29 08:33:44 crc kubenswrapper[4826]: I0129 08:33:44.490905 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-njwlg" Jan 29 08:33:45 crc kubenswrapper[4826]: I0129 08:33:45.009828 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-njwlg"] Jan 29 08:33:45 crc kubenswrapper[4826]: I0129 08:33:45.883139 4826 generic.go:334] "Generic (PLEG): container finished" podID="e28e40c2-20a1-4ad2-8790-42712351003d" containerID="c44ada76a8a5bac5f110b2c0891e051d6276ca71cb66b5a8cf1d3cbe331bb94a" exitCode=0 Jan 29 08:33:45 crc kubenswrapper[4826]: I0129 08:33:45.883348 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njwlg" event={"ID":"e28e40c2-20a1-4ad2-8790-42712351003d","Type":"ContainerDied","Data":"c44ada76a8a5bac5f110b2c0891e051d6276ca71cb66b5a8cf1d3cbe331bb94a"} Jan 29 08:33:45 crc kubenswrapper[4826]: I0129 08:33:45.886205 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njwlg" event={"ID":"e28e40c2-20a1-4ad2-8790-42712351003d","Type":"ContainerStarted","Data":"fb4e7b5dbf381d94e9809e949daecf8b33242191b80fe74132904efdcbaedec1"} Jan 29 08:33:47 crc kubenswrapper[4826]: I0129 08:33:47.907892 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njwlg" event={"ID":"e28e40c2-20a1-4ad2-8790-42712351003d","Type":"ContainerStarted","Data":"b6e4e2cb1cbf6759d25f7b692401f490e23e1b3c88eb962444a7efe313af8a06"} Jan 29 08:33:49 crc kubenswrapper[4826]: I0129 08:33:49.979107 4826 generic.go:334] "Generic (PLEG): container finished" podID="e28e40c2-20a1-4ad2-8790-42712351003d" containerID="b6e4e2cb1cbf6759d25f7b692401f490e23e1b3c88eb962444a7efe313af8a06" exitCode=0 Jan 29 08:33:49 crc kubenswrapper[4826]: I0129 08:33:49.979215 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njwlg" 
event={"ID":"e28e40c2-20a1-4ad2-8790-42712351003d","Type":"ContainerDied","Data":"b6e4e2cb1cbf6759d25f7b692401f490e23e1b3c88eb962444a7efe313af8a06"} Jan 29 08:33:50 crc kubenswrapper[4826]: I0129 08:33:50.809509 4826 scope.go:117] "RemoveContainer" containerID="725f4aa4f9ef4129cc44e12f5f1026f8860669230135655f272ac1666d528dcf" Jan 29 08:33:50 crc kubenswrapper[4826]: E0129 08:33:50.810024 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:33:50 crc kubenswrapper[4826]: I0129 08:33:50.993652 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njwlg" event={"ID":"e28e40c2-20a1-4ad2-8790-42712351003d","Type":"ContainerStarted","Data":"e6c0cebf00fa24bb2880c958e1304efee9c890a3f7590058305f43a5722acdbe"} Jan 29 08:33:51 crc kubenswrapper[4826]: I0129 08:33:51.019753 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-njwlg" podStartSLOduration=2.535855765 podStartE2EDuration="7.019731078s" podCreationTimestamp="2026-01-29 08:33:44 +0000 UTC" firstStartedPulling="2026-01-29 08:33:45.885579881 +0000 UTC m=+6609.747372950" lastFinishedPulling="2026-01-29 08:33:50.369455194 +0000 UTC m=+6614.231248263" observedRunningTime="2026-01-29 08:33:51.013469783 +0000 UTC m=+6614.875262862" watchObservedRunningTime="2026-01-29 08:33:51.019731078 +0000 UTC m=+6614.881524147" Jan 29 08:33:54 crc kubenswrapper[4826]: I0129 08:33:54.492332 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-njwlg" Jan 29 08:33:54 crc 
kubenswrapper[4826]: I0129 08:33:54.492619 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-njwlg" Jan 29 08:33:54 crc kubenswrapper[4826]: I0129 08:33:54.590396 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-njwlg" Jan 29 08:33:55 crc kubenswrapper[4826]: I0129 08:33:55.095698 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-njwlg" Jan 29 08:33:55 crc kubenswrapper[4826]: I0129 08:33:55.143250 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-njwlg"] Jan 29 08:33:57 crc kubenswrapper[4826]: I0129 08:33:57.055342 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-njwlg" podUID="e28e40c2-20a1-4ad2-8790-42712351003d" containerName="registry-server" containerID="cri-o://e6c0cebf00fa24bb2880c958e1304efee9c890a3f7590058305f43a5722acdbe" gracePeriod=2 Jan 29 08:33:57 crc kubenswrapper[4826]: I0129 08:33:57.519804 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-njwlg" Jan 29 08:33:57 crc kubenswrapper[4826]: I0129 08:33:57.649367 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e28e40c2-20a1-4ad2-8790-42712351003d-utilities\") pod \"e28e40c2-20a1-4ad2-8790-42712351003d\" (UID: \"e28e40c2-20a1-4ad2-8790-42712351003d\") " Jan 29 08:33:57 crc kubenswrapper[4826]: I0129 08:33:57.649488 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e28e40c2-20a1-4ad2-8790-42712351003d-catalog-content\") pod \"e28e40c2-20a1-4ad2-8790-42712351003d\" (UID: \"e28e40c2-20a1-4ad2-8790-42712351003d\") " Jan 29 08:33:57 crc kubenswrapper[4826]: I0129 08:33:57.649735 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cw57l\" (UniqueName: \"kubernetes.io/projected/e28e40c2-20a1-4ad2-8790-42712351003d-kube-api-access-cw57l\") pod \"e28e40c2-20a1-4ad2-8790-42712351003d\" (UID: \"e28e40c2-20a1-4ad2-8790-42712351003d\") " Jan 29 08:33:57 crc kubenswrapper[4826]: I0129 08:33:57.650180 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e28e40c2-20a1-4ad2-8790-42712351003d-utilities" (OuterVolumeSpecName: "utilities") pod "e28e40c2-20a1-4ad2-8790-42712351003d" (UID: "e28e40c2-20a1-4ad2-8790-42712351003d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:33:57 crc kubenswrapper[4826]: I0129 08:33:57.655389 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e28e40c2-20a1-4ad2-8790-42712351003d-kube-api-access-cw57l" (OuterVolumeSpecName: "kube-api-access-cw57l") pod "e28e40c2-20a1-4ad2-8790-42712351003d" (UID: "e28e40c2-20a1-4ad2-8790-42712351003d"). InnerVolumeSpecName "kube-api-access-cw57l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:33:57 crc kubenswrapper[4826]: I0129 08:33:57.695177 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e28e40c2-20a1-4ad2-8790-42712351003d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e28e40c2-20a1-4ad2-8790-42712351003d" (UID: "e28e40c2-20a1-4ad2-8790-42712351003d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:33:57 crc kubenswrapper[4826]: I0129 08:33:57.751836 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cw57l\" (UniqueName: \"kubernetes.io/projected/e28e40c2-20a1-4ad2-8790-42712351003d-kube-api-access-cw57l\") on node \"crc\" DevicePath \"\"" Jan 29 08:33:57 crc kubenswrapper[4826]: I0129 08:33:57.751865 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e28e40c2-20a1-4ad2-8790-42712351003d-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:33:57 crc kubenswrapper[4826]: I0129 08:33:57.751875 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e28e40c2-20a1-4ad2-8790-42712351003d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:33:58 crc kubenswrapper[4826]: I0129 08:33:58.064934 4826 generic.go:334] "Generic (PLEG): container finished" podID="e28e40c2-20a1-4ad2-8790-42712351003d" containerID="e6c0cebf00fa24bb2880c958e1304efee9c890a3f7590058305f43a5722acdbe" exitCode=0 Jan 29 08:33:58 crc kubenswrapper[4826]: I0129 08:33:58.064981 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njwlg" event={"ID":"e28e40c2-20a1-4ad2-8790-42712351003d","Type":"ContainerDied","Data":"e6c0cebf00fa24bb2880c958e1304efee9c890a3f7590058305f43a5722acdbe"} Jan 29 08:33:58 crc kubenswrapper[4826]: I0129 08:33:58.064997 4826 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-njwlg" Jan 29 08:33:58 crc kubenswrapper[4826]: I0129 08:33:58.065019 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njwlg" event={"ID":"e28e40c2-20a1-4ad2-8790-42712351003d","Type":"ContainerDied","Data":"fb4e7b5dbf381d94e9809e949daecf8b33242191b80fe74132904efdcbaedec1"} Jan 29 08:33:58 crc kubenswrapper[4826]: I0129 08:33:58.065041 4826 scope.go:117] "RemoveContainer" containerID="e6c0cebf00fa24bb2880c958e1304efee9c890a3f7590058305f43a5722acdbe" Jan 29 08:33:58 crc kubenswrapper[4826]: I0129 08:33:58.084460 4826 scope.go:117] "RemoveContainer" containerID="b6e4e2cb1cbf6759d25f7b692401f490e23e1b3c88eb962444a7efe313af8a06" Jan 29 08:33:58 crc kubenswrapper[4826]: I0129 08:33:58.109319 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-njwlg"] Jan 29 08:33:58 crc kubenswrapper[4826]: I0129 08:33:58.124251 4826 scope.go:117] "RemoveContainer" containerID="c44ada76a8a5bac5f110b2c0891e051d6276ca71cb66b5a8cf1d3cbe331bb94a" Jan 29 08:33:58 crc kubenswrapper[4826]: I0129 08:33:58.125565 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-njwlg"] Jan 29 08:33:58 crc kubenswrapper[4826]: I0129 08:33:58.155032 4826 scope.go:117] "RemoveContainer" containerID="e6c0cebf00fa24bb2880c958e1304efee9c890a3f7590058305f43a5722acdbe" Jan 29 08:33:58 crc kubenswrapper[4826]: E0129 08:33:58.155572 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6c0cebf00fa24bb2880c958e1304efee9c890a3f7590058305f43a5722acdbe\": container with ID starting with e6c0cebf00fa24bb2880c958e1304efee9c890a3f7590058305f43a5722acdbe not found: ID does not exist" containerID="e6c0cebf00fa24bb2880c958e1304efee9c890a3f7590058305f43a5722acdbe" Jan 29 08:33:58 crc kubenswrapper[4826]: I0129 08:33:58.155614 
4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6c0cebf00fa24bb2880c958e1304efee9c890a3f7590058305f43a5722acdbe"} err="failed to get container status \"e6c0cebf00fa24bb2880c958e1304efee9c890a3f7590058305f43a5722acdbe\": rpc error: code = NotFound desc = could not find container \"e6c0cebf00fa24bb2880c958e1304efee9c890a3f7590058305f43a5722acdbe\": container with ID starting with e6c0cebf00fa24bb2880c958e1304efee9c890a3f7590058305f43a5722acdbe not found: ID does not exist" Jan 29 08:33:58 crc kubenswrapper[4826]: I0129 08:33:58.155640 4826 scope.go:117] "RemoveContainer" containerID="b6e4e2cb1cbf6759d25f7b692401f490e23e1b3c88eb962444a7efe313af8a06" Jan 29 08:33:58 crc kubenswrapper[4826]: E0129 08:33:58.155940 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6e4e2cb1cbf6759d25f7b692401f490e23e1b3c88eb962444a7efe313af8a06\": container with ID starting with b6e4e2cb1cbf6759d25f7b692401f490e23e1b3c88eb962444a7efe313af8a06 not found: ID does not exist" containerID="b6e4e2cb1cbf6759d25f7b692401f490e23e1b3c88eb962444a7efe313af8a06" Jan 29 08:33:58 crc kubenswrapper[4826]: I0129 08:33:58.155991 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6e4e2cb1cbf6759d25f7b692401f490e23e1b3c88eb962444a7efe313af8a06"} err="failed to get container status \"b6e4e2cb1cbf6759d25f7b692401f490e23e1b3c88eb962444a7efe313af8a06\": rpc error: code = NotFound desc = could not find container \"b6e4e2cb1cbf6759d25f7b692401f490e23e1b3c88eb962444a7efe313af8a06\": container with ID starting with b6e4e2cb1cbf6759d25f7b692401f490e23e1b3c88eb962444a7efe313af8a06 not found: ID does not exist" Jan 29 08:33:58 crc kubenswrapper[4826]: I0129 08:33:58.156025 4826 scope.go:117] "RemoveContainer" containerID="c44ada76a8a5bac5f110b2c0891e051d6276ca71cb66b5a8cf1d3cbe331bb94a" Jan 29 08:33:58 crc kubenswrapper[4826]: E0129 
08:33:58.156325 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c44ada76a8a5bac5f110b2c0891e051d6276ca71cb66b5a8cf1d3cbe331bb94a\": container with ID starting with c44ada76a8a5bac5f110b2c0891e051d6276ca71cb66b5a8cf1d3cbe331bb94a not found: ID does not exist" containerID="c44ada76a8a5bac5f110b2c0891e051d6276ca71cb66b5a8cf1d3cbe331bb94a" Jan 29 08:33:58 crc kubenswrapper[4826]: I0129 08:33:58.156360 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c44ada76a8a5bac5f110b2c0891e051d6276ca71cb66b5a8cf1d3cbe331bb94a"} err="failed to get container status \"c44ada76a8a5bac5f110b2c0891e051d6276ca71cb66b5a8cf1d3cbe331bb94a\": rpc error: code = NotFound desc = could not find container \"c44ada76a8a5bac5f110b2c0891e051d6276ca71cb66b5a8cf1d3cbe331bb94a\": container with ID starting with c44ada76a8a5bac5f110b2c0891e051d6276ca71cb66b5a8cf1d3cbe331bb94a not found: ID does not exist" Jan 29 08:33:58 crc kubenswrapper[4826]: I0129 08:33:58.819900 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e28e40c2-20a1-4ad2-8790-42712351003d" path="/var/lib/kubelet/pods/e28e40c2-20a1-4ad2-8790-42712351003d/volumes" Jan 29 08:34:03 crc kubenswrapper[4826]: I0129 08:34:03.809827 4826 scope.go:117] "RemoveContainer" containerID="725f4aa4f9ef4129cc44e12f5f1026f8860669230135655f272ac1666d528dcf" Jan 29 08:34:03 crc kubenswrapper[4826]: E0129 08:34:03.810407 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:34:04 crc kubenswrapper[4826]: I0129 08:34:04.101647 
4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-54q62"] Jan 29 08:34:04 crc kubenswrapper[4826]: I0129 08:34:04.118158 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-02a3-account-create-update-zq6qg"] Jan 29 08:34:04 crc kubenswrapper[4826]: I0129 08:34:04.130249 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-02a3-account-create-update-zq6qg"] Jan 29 08:34:04 crc kubenswrapper[4826]: I0129 08:34:04.141261 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-54q62"] Jan 29 08:34:04 crc kubenswrapper[4826]: I0129 08:34:04.820801 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92c9832e-da91-4ee0-b54b-0efbb5f5571c" path="/var/lib/kubelet/pods/92c9832e-da91-4ee0-b54b-0efbb5f5571c/volumes" Jan 29 08:34:04 crc kubenswrapper[4826]: I0129 08:34:04.821736 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95b6faea-69ff-4fbb-9c6c-d5201b093b6c" path="/var/lib/kubelet/pods/95b6faea-69ff-4fbb-9c6c-d5201b093b6c/volumes" Jan 29 08:34:18 crc kubenswrapper[4826]: I0129 08:34:18.809389 4826 scope.go:117] "RemoveContainer" containerID="725f4aa4f9ef4129cc44e12f5f1026f8860669230135655f272ac1666d528dcf" Jan 29 08:34:18 crc kubenswrapper[4826]: E0129 08:34:18.810336 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:34:24 crc kubenswrapper[4826]: I0129 08:34:24.482789 4826 scope.go:117] "RemoveContainer" containerID="27954068fd2f68787c91edab1814b6b0b82a761c0118ded0dfacb05f666fce25" Jan 29 08:34:24 crc kubenswrapper[4826]: I0129 
08:34:24.510431 4826 scope.go:117] "RemoveContainer" containerID="f11d95c6a3506bac102d28e4d1d6c3d642ea919ea581c730f086e7490d4d3262" Jan 29 08:34:25 crc kubenswrapper[4826]: I0129 08:34:25.051941 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-q46qd"] Jan 29 08:34:25 crc kubenswrapper[4826]: I0129 08:34:25.060536 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-q46qd"] Jan 29 08:34:26 crc kubenswrapper[4826]: I0129 08:34:26.819793 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbce1eaf-5632-4daf-b860-e8bc1199ed0f" path="/var/lib/kubelet/pods/dbce1eaf-5632-4daf-b860-e8bc1199ed0f/volumes" Jan 29 08:34:33 crc kubenswrapper[4826]: I0129 08:34:33.809478 4826 scope.go:117] "RemoveContainer" containerID="725f4aa4f9ef4129cc44e12f5f1026f8860669230135655f272ac1666d528dcf" Jan 29 08:34:33 crc kubenswrapper[4826]: E0129 08:34:33.810501 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:34:48 crc kubenswrapper[4826]: I0129 08:34:48.810110 4826 scope.go:117] "RemoveContainer" containerID="725f4aa4f9ef4129cc44e12f5f1026f8860669230135655f272ac1666d528dcf" Jan 29 08:34:48 crc kubenswrapper[4826]: E0129 08:34:48.810900 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" 
podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:35:01 crc kubenswrapper[4826]: I0129 08:35:01.808415 4826 scope.go:117] "RemoveContainer" containerID="725f4aa4f9ef4129cc44e12f5f1026f8860669230135655f272ac1666d528dcf" Jan 29 08:35:01 crc kubenswrapper[4826]: E0129 08:35:01.809078 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:35:13 crc kubenswrapper[4826]: I0129 08:35:13.810157 4826 scope.go:117] "RemoveContainer" containerID="725f4aa4f9ef4129cc44e12f5f1026f8860669230135655f272ac1666d528dcf" Jan 29 08:35:13 crc kubenswrapper[4826]: E0129 08:35:13.810942 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:35:24 crc kubenswrapper[4826]: I0129 08:35:24.629197 4826 scope.go:117] "RemoveContainer" containerID="a74f087529465eb08354bc0a8d22d14761ad4939e60d21fe424e3a779d980f03" Jan 29 08:35:28 crc kubenswrapper[4826]: I0129 08:35:28.810039 4826 scope.go:117] "RemoveContainer" containerID="725f4aa4f9ef4129cc44e12f5f1026f8860669230135655f272ac1666d528dcf" Jan 29 08:35:28 crc kubenswrapper[4826]: E0129 08:35:28.811674 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:35:39 crc kubenswrapper[4826]: I0129 08:35:39.809556 4826 scope.go:117] "RemoveContainer" containerID="725f4aa4f9ef4129cc44e12f5f1026f8860669230135655f272ac1666d528dcf" Jan 29 08:35:39 crc kubenswrapper[4826]: E0129 08:35:39.811672 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:35:54 crc kubenswrapper[4826]: I0129 08:35:54.809602 4826 scope.go:117] "RemoveContainer" containerID="725f4aa4f9ef4129cc44e12f5f1026f8860669230135655f272ac1666d528dcf" Jan 29 08:35:54 crc kubenswrapper[4826]: E0129 08:35:54.810738 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:36:05 crc kubenswrapper[4826]: I0129 08:36:05.809229 4826 scope.go:117] "RemoveContainer" containerID="725f4aa4f9ef4129cc44e12f5f1026f8860669230135655f272ac1666d528dcf" Jan 29 08:36:05 crc kubenswrapper[4826]: E0129 08:36:05.810591 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:36:16 crc kubenswrapper[4826]: I0129 08:36:16.814996 4826 scope.go:117] "RemoveContainer" containerID="725f4aa4f9ef4129cc44e12f5f1026f8860669230135655f272ac1666d528dcf" Jan 29 08:36:16 crc kubenswrapper[4826]: E0129 08:36:16.816692 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:36:24 crc kubenswrapper[4826]: I0129 08:36:24.491616 4826 generic.go:334] "Generic (PLEG): container finished" podID="c7145c04-cf5e-43b3-8934-0c6397272bb2" containerID="031b307655fb8f425192db521ae4dd064a412e1b223a44b0ea52279e97298f9e" exitCode=0 Jan 29 08:36:24 crc kubenswrapper[4826]: I0129 08:36:24.491763 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-jcfjp" event={"ID":"c7145c04-cf5e-43b3-8934-0c6397272bb2","Type":"ContainerDied","Data":"031b307655fb8f425192db521ae4dd064a412e1b223a44b0ea52279e97298f9e"} Jan 29 08:36:25 crc kubenswrapper[4826]: I0129 08:36:25.956574 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-jcfjp" Jan 29 08:36:26 crc kubenswrapper[4826]: I0129 08:36:26.098128 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7145c04-cf5e-43b3-8934-0c6397272bb2-inventory\") pod \"c7145c04-cf5e-43b3-8934-0c6397272bb2\" (UID: \"c7145c04-cf5e-43b3-8934-0c6397272bb2\") " Jan 29 08:36:26 crc kubenswrapper[4826]: I0129 08:36:26.098511 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7145c04-cf5e-43b3-8934-0c6397272bb2-tripleo-cleanup-combined-ca-bundle\") pod \"c7145c04-cf5e-43b3-8934-0c6397272bb2\" (UID: \"c7145c04-cf5e-43b3-8934-0c6397272bb2\") " Jan 29 08:36:26 crc kubenswrapper[4826]: I0129 08:36:26.098573 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slnnj\" (UniqueName: \"kubernetes.io/projected/c7145c04-cf5e-43b3-8934-0c6397272bb2-kube-api-access-slnnj\") pod \"c7145c04-cf5e-43b3-8934-0c6397272bb2\" (UID: \"c7145c04-cf5e-43b3-8934-0c6397272bb2\") " Jan 29 08:36:26 crc kubenswrapper[4826]: I0129 08:36:26.098685 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c7145c04-cf5e-43b3-8934-0c6397272bb2-ssh-key-openstack-cell1\") pod \"c7145c04-cf5e-43b3-8934-0c6397272bb2\" (UID: \"c7145c04-cf5e-43b3-8934-0c6397272bb2\") " Jan 29 08:36:26 crc kubenswrapper[4826]: I0129 08:36:26.103791 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7145c04-cf5e-43b3-8934-0c6397272bb2-kube-api-access-slnnj" (OuterVolumeSpecName: "kube-api-access-slnnj") pod "c7145c04-cf5e-43b3-8934-0c6397272bb2" (UID: "c7145c04-cf5e-43b3-8934-0c6397272bb2"). InnerVolumeSpecName "kube-api-access-slnnj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:36:26 crc kubenswrapper[4826]: I0129 08:36:26.104839 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7145c04-cf5e-43b3-8934-0c6397272bb2-tripleo-cleanup-combined-ca-bundle" (OuterVolumeSpecName: "tripleo-cleanup-combined-ca-bundle") pod "c7145c04-cf5e-43b3-8934-0c6397272bb2" (UID: "c7145c04-cf5e-43b3-8934-0c6397272bb2"). InnerVolumeSpecName "tripleo-cleanup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:36:26 crc kubenswrapper[4826]: I0129 08:36:26.129637 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7145c04-cf5e-43b3-8934-0c6397272bb2-inventory" (OuterVolumeSpecName: "inventory") pod "c7145c04-cf5e-43b3-8934-0c6397272bb2" (UID: "c7145c04-cf5e-43b3-8934-0c6397272bb2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:36:26 crc kubenswrapper[4826]: I0129 08:36:26.130940 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7145c04-cf5e-43b3-8934-0c6397272bb2-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "c7145c04-cf5e-43b3-8934-0c6397272bb2" (UID: "c7145c04-cf5e-43b3-8934-0c6397272bb2"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:36:26 crc kubenswrapper[4826]: I0129 08:36:26.201675 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c7145c04-cf5e-43b3-8934-0c6397272bb2-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 08:36:26 crc kubenswrapper[4826]: I0129 08:36:26.201710 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7145c04-cf5e-43b3-8934-0c6397272bb2-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 08:36:26 crc kubenswrapper[4826]: I0129 08:36:26.201720 4826 reconciler_common.go:293] "Volume detached for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7145c04-cf5e-43b3-8934-0c6397272bb2-tripleo-cleanup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:36:26 crc kubenswrapper[4826]: I0129 08:36:26.201732 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slnnj\" (UniqueName: \"kubernetes.io/projected/c7145c04-cf5e-43b3-8934-0c6397272bb2-kube-api-access-slnnj\") on node \"crc\" DevicePath \"\"" Jan 29 08:36:26 crc kubenswrapper[4826]: I0129 08:36:26.514735 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-jcfjp" event={"ID":"c7145c04-cf5e-43b3-8934-0c6397272bb2","Type":"ContainerDied","Data":"0fd1c4ce5d5f64b7dc62e3965e6ee6d800e7e543ce86eacd920e3d8242085758"} Jan 29 08:36:26 crc kubenswrapper[4826]: I0129 08:36:26.514789 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fd1c4ce5d5f64b7dc62e3965e6ee6d800e7e543ce86eacd920e3d8242085758" Jan 29 08:36:26 crc kubenswrapper[4826]: I0129 08:36:26.515195 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-jcfjp" Jan 29 08:36:29 crc kubenswrapper[4826]: I0129 08:36:29.808822 4826 scope.go:117] "RemoveContainer" containerID="725f4aa4f9ef4129cc44e12f5f1026f8860669230135655f272ac1666d528dcf" Jan 29 08:36:29 crc kubenswrapper[4826]: E0129 08:36:29.809737 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:36:32 crc kubenswrapper[4826]: I0129 08:36:32.199227 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-2rqtc"] Jan 29 08:36:32 crc kubenswrapper[4826]: E0129 08:36:32.199833 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e28e40c2-20a1-4ad2-8790-42712351003d" containerName="registry-server" Jan 29 08:36:32 crc kubenswrapper[4826]: I0129 08:36:32.199845 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="e28e40c2-20a1-4ad2-8790-42712351003d" containerName="registry-server" Jan 29 08:36:32 crc kubenswrapper[4826]: E0129 08:36:32.199856 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e28e40c2-20a1-4ad2-8790-42712351003d" containerName="extract-utilities" Jan 29 08:36:32 crc kubenswrapper[4826]: I0129 08:36:32.199862 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="e28e40c2-20a1-4ad2-8790-42712351003d" containerName="extract-utilities" Jan 29 08:36:32 crc kubenswrapper[4826]: E0129 08:36:32.199877 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7145c04-cf5e-43b3-8934-0c6397272bb2" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Jan 29 08:36:32 crc 
kubenswrapper[4826]: I0129 08:36:32.199887 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7145c04-cf5e-43b3-8934-0c6397272bb2" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Jan 29 08:36:32 crc kubenswrapper[4826]: E0129 08:36:32.199898 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e28e40c2-20a1-4ad2-8790-42712351003d" containerName="extract-content" Jan 29 08:36:32 crc kubenswrapper[4826]: I0129 08:36:32.199903 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="e28e40c2-20a1-4ad2-8790-42712351003d" containerName="extract-content" Jan 29 08:36:32 crc kubenswrapper[4826]: I0129 08:36:32.200079 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7145c04-cf5e-43b3-8934-0c6397272bb2" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Jan 29 08:36:32 crc kubenswrapper[4826]: I0129 08:36:32.200103 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="e28e40c2-20a1-4ad2-8790-42712351003d" containerName="registry-server" Jan 29 08:36:32 crc kubenswrapper[4826]: I0129 08:36:32.200779 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-2rqtc" Jan 29 08:36:32 crc kubenswrapper[4826]: I0129 08:36:32.204525 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 29 08:36:32 crc kubenswrapper[4826]: I0129 08:36:32.205552 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 29 08:36:32 crc kubenswrapper[4826]: I0129 08:36:32.211172 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 08:36:32 crc kubenswrapper[4826]: I0129 08:36:32.214167 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-bz2p6" Jan 29 08:36:32 crc kubenswrapper[4826]: I0129 08:36:32.222323 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-2rqtc"] Jan 29 08:36:32 crc kubenswrapper[4826]: I0129 08:36:32.326990 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bcf40af3-eef2-454e-a418-dd1d61e7faf3-inventory\") pod \"bootstrap-openstack-openstack-cell1-2rqtc\" (UID: \"bcf40af3-eef2-454e-a418-dd1d61e7faf3\") " pod="openstack/bootstrap-openstack-openstack-cell1-2rqtc" Jan 29 08:36:32 crc kubenswrapper[4826]: I0129 08:36:32.327156 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/bcf40af3-eef2-454e-a418-dd1d61e7faf3-ssh-key-openstack-cell1\") pod \"bootstrap-openstack-openstack-cell1-2rqtc\" (UID: \"bcf40af3-eef2-454e-a418-dd1d61e7faf3\") " pod="openstack/bootstrap-openstack-openstack-cell1-2rqtc" Jan 29 08:36:32 crc kubenswrapper[4826]: I0129 08:36:32.327188 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-ktb2m\" (UniqueName: \"kubernetes.io/projected/bcf40af3-eef2-454e-a418-dd1d61e7faf3-kube-api-access-ktb2m\") pod \"bootstrap-openstack-openstack-cell1-2rqtc\" (UID: \"bcf40af3-eef2-454e-a418-dd1d61e7faf3\") " pod="openstack/bootstrap-openstack-openstack-cell1-2rqtc" Jan 29 08:36:32 crc kubenswrapper[4826]: I0129 08:36:32.327243 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcf40af3-eef2-454e-a418-dd1d61e7faf3-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-2rqtc\" (UID: \"bcf40af3-eef2-454e-a418-dd1d61e7faf3\") " pod="openstack/bootstrap-openstack-openstack-cell1-2rqtc" Jan 29 08:36:32 crc kubenswrapper[4826]: I0129 08:36:32.431776 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/bcf40af3-eef2-454e-a418-dd1d61e7faf3-ssh-key-openstack-cell1\") pod \"bootstrap-openstack-openstack-cell1-2rqtc\" (UID: \"bcf40af3-eef2-454e-a418-dd1d61e7faf3\") " pod="openstack/bootstrap-openstack-openstack-cell1-2rqtc" Jan 29 08:36:32 crc kubenswrapper[4826]: I0129 08:36:32.431833 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktb2m\" (UniqueName: \"kubernetes.io/projected/bcf40af3-eef2-454e-a418-dd1d61e7faf3-kube-api-access-ktb2m\") pod \"bootstrap-openstack-openstack-cell1-2rqtc\" (UID: \"bcf40af3-eef2-454e-a418-dd1d61e7faf3\") " pod="openstack/bootstrap-openstack-openstack-cell1-2rqtc" Jan 29 08:36:32 crc kubenswrapper[4826]: I0129 08:36:32.431895 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcf40af3-eef2-454e-a418-dd1d61e7faf3-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-2rqtc\" (UID: \"bcf40af3-eef2-454e-a418-dd1d61e7faf3\") " 
pod="openstack/bootstrap-openstack-openstack-cell1-2rqtc" Jan 29 08:36:32 crc kubenswrapper[4826]: I0129 08:36:32.432028 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bcf40af3-eef2-454e-a418-dd1d61e7faf3-inventory\") pod \"bootstrap-openstack-openstack-cell1-2rqtc\" (UID: \"bcf40af3-eef2-454e-a418-dd1d61e7faf3\") " pod="openstack/bootstrap-openstack-openstack-cell1-2rqtc" Jan 29 08:36:32 crc kubenswrapper[4826]: I0129 08:36:32.439511 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcf40af3-eef2-454e-a418-dd1d61e7faf3-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-2rqtc\" (UID: \"bcf40af3-eef2-454e-a418-dd1d61e7faf3\") " pod="openstack/bootstrap-openstack-openstack-cell1-2rqtc" Jan 29 08:36:32 crc kubenswrapper[4826]: I0129 08:36:32.453186 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/bcf40af3-eef2-454e-a418-dd1d61e7faf3-ssh-key-openstack-cell1\") pod \"bootstrap-openstack-openstack-cell1-2rqtc\" (UID: \"bcf40af3-eef2-454e-a418-dd1d61e7faf3\") " pod="openstack/bootstrap-openstack-openstack-cell1-2rqtc" Jan 29 08:36:32 crc kubenswrapper[4826]: I0129 08:36:32.453633 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bcf40af3-eef2-454e-a418-dd1d61e7faf3-inventory\") pod \"bootstrap-openstack-openstack-cell1-2rqtc\" (UID: \"bcf40af3-eef2-454e-a418-dd1d61e7faf3\") " pod="openstack/bootstrap-openstack-openstack-cell1-2rqtc" Jan 29 08:36:32 crc kubenswrapper[4826]: I0129 08:36:32.458398 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktb2m\" (UniqueName: \"kubernetes.io/projected/bcf40af3-eef2-454e-a418-dd1d61e7faf3-kube-api-access-ktb2m\") pod 
\"bootstrap-openstack-openstack-cell1-2rqtc\" (UID: \"bcf40af3-eef2-454e-a418-dd1d61e7faf3\") " pod="openstack/bootstrap-openstack-openstack-cell1-2rqtc" Jan 29 08:36:32 crc kubenswrapper[4826]: I0129 08:36:32.526506 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-2rqtc" Jan 29 08:36:33 crc kubenswrapper[4826]: I0129 08:36:33.102998 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-2rqtc"] Jan 29 08:36:33 crc kubenswrapper[4826]: I0129 08:36:33.587032 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-2rqtc" event={"ID":"bcf40af3-eef2-454e-a418-dd1d61e7faf3","Type":"ContainerStarted","Data":"1b76c02b40d4c71ab7ac7566aa8d5d3f125364159adef8c56586b329a2932564"} Jan 29 08:36:34 crc kubenswrapper[4826]: I0129 08:36:34.595068 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-2rqtc" event={"ID":"bcf40af3-eef2-454e-a418-dd1d61e7faf3","Type":"ContainerStarted","Data":"56d23aa6962d5bd4bff1eb19cc17fdd6395cf84c710553595da0686fe07eaaa4"} Jan 29 08:36:34 crc kubenswrapper[4826]: I0129 08:36:34.614968 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-openstack-openstack-cell1-2rqtc" podStartSLOduration=2.161620469 podStartE2EDuration="2.614934701s" podCreationTimestamp="2026-01-29 08:36:32 +0000 UTC" firstStartedPulling="2026-01-29 08:36:33.108525702 +0000 UTC m=+6776.970318771" lastFinishedPulling="2026-01-29 08:36:33.561839934 +0000 UTC m=+6777.423633003" observedRunningTime="2026-01-29 08:36:34.609367343 +0000 UTC m=+6778.471160412" watchObservedRunningTime="2026-01-29 08:36:34.614934701 +0000 UTC m=+6778.476727820" Jan 29 08:36:43 crc kubenswrapper[4826]: I0129 08:36:43.808575 4826 scope.go:117] "RemoveContainer" 
containerID="725f4aa4f9ef4129cc44e12f5f1026f8860669230135655f272ac1666d528dcf" Jan 29 08:36:43 crc kubenswrapper[4826]: E0129 08:36:43.809372 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:36:50 crc kubenswrapper[4826]: I0129 08:36:50.920385 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-848h5"] Jan 29 08:36:50 crc kubenswrapper[4826]: I0129 08:36:50.924040 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-848h5" Jan 29 08:36:50 crc kubenswrapper[4826]: I0129 08:36:50.935887 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-848h5"] Jan 29 08:36:51 crc kubenswrapper[4826]: I0129 08:36:51.024069 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b30b6c6-0a2f-4e33-bff5-f3f0096401c1-catalog-content\") pod \"redhat-operators-848h5\" (UID: \"5b30b6c6-0a2f-4e33-bff5-f3f0096401c1\") " pod="openshift-marketplace/redhat-operators-848h5" Jan 29 08:36:51 crc kubenswrapper[4826]: I0129 08:36:51.024201 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzczz\" (UniqueName: \"kubernetes.io/projected/5b30b6c6-0a2f-4e33-bff5-f3f0096401c1-kube-api-access-dzczz\") pod \"redhat-operators-848h5\" (UID: \"5b30b6c6-0a2f-4e33-bff5-f3f0096401c1\") " pod="openshift-marketplace/redhat-operators-848h5" Jan 29 08:36:51 crc kubenswrapper[4826]: I0129 08:36:51.024230 4826 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b30b6c6-0a2f-4e33-bff5-f3f0096401c1-utilities\") pod \"redhat-operators-848h5\" (UID: \"5b30b6c6-0a2f-4e33-bff5-f3f0096401c1\") " pod="openshift-marketplace/redhat-operators-848h5" Jan 29 08:36:51 crc kubenswrapper[4826]: I0129 08:36:51.126399 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzczz\" (UniqueName: \"kubernetes.io/projected/5b30b6c6-0a2f-4e33-bff5-f3f0096401c1-kube-api-access-dzczz\") pod \"redhat-operators-848h5\" (UID: \"5b30b6c6-0a2f-4e33-bff5-f3f0096401c1\") " pod="openshift-marketplace/redhat-operators-848h5" Jan 29 08:36:51 crc kubenswrapper[4826]: I0129 08:36:51.126459 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b30b6c6-0a2f-4e33-bff5-f3f0096401c1-utilities\") pod \"redhat-operators-848h5\" (UID: \"5b30b6c6-0a2f-4e33-bff5-f3f0096401c1\") " pod="openshift-marketplace/redhat-operators-848h5" Jan 29 08:36:51 crc kubenswrapper[4826]: I0129 08:36:51.126650 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b30b6c6-0a2f-4e33-bff5-f3f0096401c1-catalog-content\") pod \"redhat-operators-848h5\" (UID: \"5b30b6c6-0a2f-4e33-bff5-f3f0096401c1\") " pod="openshift-marketplace/redhat-operators-848h5" Jan 29 08:36:51 crc kubenswrapper[4826]: I0129 08:36:51.127259 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b30b6c6-0a2f-4e33-bff5-f3f0096401c1-utilities\") pod \"redhat-operators-848h5\" (UID: \"5b30b6c6-0a2f-4e33-bff5-f3f0096401c1\") " pod="openshift-marketplace/redhat-operators-848h5" Jan 29 08:36:51 crc kubenswrapper[4826]: I0129 08:36:51.127352 4826 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b30b6c6-0a2f-4e33-bff5-f3f0096401c1-catalog-content\") pod \"redhat-operators-848h5\" (UID: \"5b30b6c6-0a2f-4e33-bff5-f3f0096401c1\") " pod="openshift-marketplace/redhat-operators-848h5" Jan 29 08:36:51 crc kubenswrapper[4826]: I0129 08:36:51.162437 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzczz\" (UniqueName: \"kubernetes.io/projected/5b30b6c6-0a2f-4e33-bff5-f3f0096401c1-kube-api-access-dzczz\") pod \"redhat-operators-848h5\" (UID: \"5b30b6c6-0a2f-4e33-bff5-f3f0096401c1\") " pod="openshift-marketplace/redhat-operators-848h5" Jan 29 08:36:51 crc kubenswrapper[4826]: I0129 08:36:51.283366 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-848h5" Jan 29 08:36:51 crc kubenswrapper[4826]: I0129 08:36:51.769474 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-848h5"] Jan 29 08:36:52 crc kubenswrapper[4826]: I0129 08:36:52.770949 4826 generic.go:334] "Generic (PLEG): container finished" podID="5b30b6c6-0a2f-4e33-bff5-f3f0096401c1" containerID="00e5880f48772bbab229f09cfdf98d222b9a1c9baf93c8d08c4f4634421fcb62" exitCode=0 Jan 29 08:36:52 crc kubenswrapper[4826]: I0129 08:36:52.771002 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-848h5" event={"ID":"5b30b6c6-0a2f-4e33-bff5-f3f0096401c1","Type":"ContainerDied","Data":"00e5880f48772bbab229f09cfdf98d222b9a1c9baf93c8d08c4f4634421fcb62"} Jan 29 08:36:52 crc kubenswrapper[4826]: I0129 08:36:52.771266 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-848h5" event={"ID":"5b30b6c6-0a2f-4e33-bff5-f3f0096401c1","Type":"ContainerStarted","Data":"024934bf935a7d63d9575fc13046fd49e6c6b7ae481b3ef2418dc4c909feb1d0"} Jan 29 08:36:53 crc kubenswrapper[4826]: I0129 08:36:53.781765 4826 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-848h5" event={"ID":"5b30b6c6-0a2f-4e33-bff5-f3f0096401c1","Type":"ContainerStarted","Data":"c625207ddf2af45e35f164cf0044945ac33c7e3bac7533d375ec453e97960be7"} Jan 29 08:36:56 crc kubenswrapper[4826]: I0129 08:36:56.819256 4826 scope.go:117] "RemoveContainer" containerID="725f4aa4f9ef4129cc44e12f5f1026f8860669230135655f272ac1666d528dcf" Jan 29 08:36:56 crc kubenswrapper[4826]: E0129 08:36:56.819918 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:37:00 crc kubenswrapper[4826]: I0129 08:37:00.847541 4826 generic.go:334] "Generic (PLEG): container finished" podID="5b30b6c6-0a2f-4e33-bff5-f3f0096401c1" containerID="c625207ddf2af45e35f164cf0044945ac33c7e3bac7533d375ec453e97960be7" exitCode=0 Jan 29 08:37:00 crc kubenswrapper[4826]: I0129 08:37:00.847626 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-848h5" event={"ID":"5b30b6c6-0a2f-4e33-bff5-f3f0096401c1","Type":"ContainerDied","Data":"c625207ddf2af45e35f164cf0044945ac33c7e3bac7533d375ec453e97960be7"} Jan 29 08:37:00 crc kubenswrapper[4826]: I0129 08:37:00.850538 4826 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 08:37:02 crc kubenswrapper[4826]: I0129 08:37:02.869135 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-848h5" event={"ID":"5b30b6c6-0a2f-4e33-bff5-f3f0096401c1","Type":"ContainerStarted","Data":"bdd80cc8701aa470d7ba20413039b39ab36c53eabc2413e2635a5710da4a2ed5"} Jan 29 08:37:02 crc 
kubenswrapper[4826]: I0129 08:37:02.900890 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-848h5" podStartSLOduration=3.642572414 podStartE2EDuration="12.900867695s" podCreationTimestamp="2026-01-29 08:36:50 +0000 UTC" firstStartedPulling="2026-01-29 08:36:52.773071351 +0000 UTC m=+6796.634864420" lastFinishedPulling="2026-01-29 08:37:02.031366632 +0000 UTC m=+6805.893159701" observedRunningTime="2026-01-29 08:37:02.894032335 +0000 UTC m=+6806.755825414" watchObservedRunningTime="2026-01-29 08:37:02.900867695 +0000 UTC m=+6806.762660764" Jan 29 08:37:09 crc kubenswrapper[4826]: I0129 08:37:09.809450 4826 scope.go:117] "RemoveContainer" containerID="725f4aa4f9ef4129cc44e12f5f1026f8860669230135655f272ac1666d528dcf" Jan 29 08:37:09 crc kubenswrapper[4826]: E0129 08:37:09.810438 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:37:11 crc kubenswrapper[4826]: I0129 08:37:11.284485 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-848h5" Jan 29 08:37:11 crc kubenswrapper[4826]: I0129 08:37:11.284799 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-848h5" Jan 29 08:37:12 crc kubenswrapper[4826]: I0129 08:37:12.333485 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-848h5" podUID="5b30b6c6-0a2f-4e33-bff5-f3f0096401c1" containerName="registry-server" probeResult="failure" output=< Jan 29 08:37:12 crc kubenswrapper[4826]: timeout: failed to 
connect service ":50051" within 1s Jan 29 08:37:12 crc kubenswrapper[4826]: > Jan 29 08:37:20 crc kubenswrapper[4826]: I0129 08:37:20.808809 4826 scope.go:117] "RemoveContainer" containerID="725f4aa4f9ef4129cc44e12f5f1026f8860669230135655f272ac1666d528dcf" Jan 29 08:37:20 crc kubenswrapper[4826]: E0129 08:37:20.810600 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:37:22 crc kubenswrapper[4826]: I0129 08:37:22.333437 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-848h5" podUID="5b30b6c6-0a2f-4e33-bff5-f3f0096401c1" containerName="registry-server" probeResult="failure" output=< Jan 29 08:37:22 crc kubenswrapper[4826]: timeout: failed to connect service ":50051" within 1s Jan 29 08:37:22 crc kubenswrapper[4826]: > Jan 29 08:37:31 crc kubenswrapper[4826]: I0129 08:37:31.346545 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-848h5" Jan 29 08:37:31 crc kubenswrapper[4826]: I0129 08:37:31.408426 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-848h5" Jan 29 08:37:31 crc kubenswrapper[4826]: I0129 08:37:31.997191 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-848h5"] Jan 29 08:37:32 crc kubenswrapper[4826]: I0129 08:37:32.683764 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-848h5" podUID="5b30b6c6-0a2f-4e33-bff5-f3f0096401c1" containerName="registry-server" 
containerID="cri-o://bdd80cc8701aa470d7ba20413039b39ab36c53eabc2413e2635a5710da4a2ed5" gracePeriod=2 Jan 29 08:37:33 crc kubenswrapper[4826]: I0129 08:37:33.114818 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-848h5" Jan 29 08:37:33 crc kubenswrapper[4826]: I0129 08:37:33.282292 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b30b6c6-0a2f-4e33-bff5-f3f0096401c1-utilities\") pod \"5b30b6c6-0a2f-4e33-bff5-f3f0096401c1\" (UID: \"5b30b6c6-0a2f-4e33-bff5-f3f0096401c1\") " Jan 29 08:37:33 crc kubenswrapper[4826]: I0129 08:37:33.282382 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzczz\" (UniqueName: \"kubernetes.io/projected/5b30b6c6-0a2f-4e33-bff5-f3f0096401c1-kube-api-access-dzczz\") pod \"5b30b6c6-0a2f-4e33-bff5-f3f0096401c1\" (UID: \"5b30b6c6-0a2f-4e33-bff5-f3f0096401c1\") " Jan 29 08:37:33 crc kubenswrapper[4826]: I0129 08:37:33.282447 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b30b6c6-0a2f-4e33-bff5-f3f0096401c1-catalog-content\") pod \"5b30b6c6-0a2f-4e33-bff5-f3f0096401c1\" (UID: \"5b30b6c6-0a2f-4e33-bff5-f3f0096401c1\") " Jan 29 08:37:33 crc kubenswrapper[4826]: I0129 08:37:33.282999 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b30b6c6-0a2f-4e33-bff5-f3f0096401c1-utilities" (OuterVolumeSpecName: "utilities") pod "5b30b6c6-0a2f-4e33-bff5-f3f0096401c1" (UID: "5b30b6c6-0a2f-4e33-bff5-f3f0096401c1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:37:33 crc kubenswrapper[4826]: I0129 08:37:33.288406 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b30b6c6-0a2f-4e33-bff5-f3f0096401c1-kube-api-access-dzczz" (OuterVolumeSpecName: "kube-api-access-dzczz") pod "5b30b6c6-0a2f-4e33-bff5-f3f0096401c1" (UID: "5b30b6c6-0a2f-4e33-bff5-f3f0096401c1"). InnerVolumeSpecName "kube-api-access-dzczz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:37:33 crc kubenswrapper[4826]: I0129 08:37:33.385179 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b30b6c6-0a2f-4e33-bff5-f3f0096401c1-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:37:33 crc kubenswrapper[4826]: I0129 08:37:33.385210 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzczz\" (UniqueName: \"kubernetes.io/projected/5b30b6c6-0a2f-4e33-bff5-f3f0096401c1-kube-api-access-dzczz\") on node \"crc\" DevicePath \"\"" Jan 29 08:37:33 crc kubenswrapper[4826]: I0129 08:37:33.412970 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b30b6c6-0a2f-4e33-bff5-f3f0096401c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b30b6c6-0a2f-4e33-bff5-f3f0096401c1" (UID: "5b30b6c6-0a2f-4e33-bff5-f3f0096401c1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:37:33 crc kubenswrapper[4826]: I0129 08:37:33.487190 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b30b6c6-0a2f-4e33-bff5-f3f0096401c1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:37:33 crc kubenswrapper[4826]: I0129 08:37:33.692940 4826 generic.go:334] "Generic (PLEG): container finished" podID="5b30b6c6-0a2f-4e33-bff5-f3f0096401c1" containerID="bdd80cc8701aa470d7ba20413039b39ab36c53eabc2413e2635a5710da4a2ed5" exitCode=0 Jan 29 08:37:33 crc kubenswrapper[4826]: I0129 08:37:33.692976 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-848h5" event={"ID":"5b30b6c6-0a2f-4e33-bff5-f3f0096401c1","Type":"ContainerDied","Data":"bdd80cc8701aa470d7ba20413039b39ab36c53eabc2413e2635a5710da4a2ed5"} Jan 29 08:37:33 crc kubenswrapper[4826]: I0129 08:37:33.693003 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-848h5" event={"ID":"5b30b6c6-0a2f-4e33-bff5-f3f0096401c1","Type":"ContainerDied","Data":"024934bf935a7d63d9575fc13046fd49e6c6b7ae481b3ef2418dc4c909feb1d0"} Jan 29 08:37:33 crc kubenswrapper[4826]: I0129 08:37:33.693034 4826 scope.go:117] "RemoveContainer" containerID="bdd80cc8701aa470d7ba20413039b39ab36c53eabc2413e2635a5710da4a2ed5" Jan 29 08:37:33 crc kubenswrapper[4826]: I0129 08:37:33.693043 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-848h5" Jan 29 08:37:33 crc kubenswrapper[4826]: I0129 08:37:33.717817 4826 scope.go:117] "RemoveContainer" containerID="c625207ddf2af45e35f164cf0044945ac33c7e3bac7533d375ec453e97960be7" Jan 29 08:37:33 crc kubenswrapper[4826]: I0129 08:37:33.736572 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-848h5"] Jan 29 08:37:33 crc kubenswrapper[4826]: I0129 08:37:33.746329 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-848h5"] Jan 29 08:37:33 crc kubenswrapper[4826]: I0129 08:37:33.747962 4826 scope.go:117] "RemoveContainer" containerID="00e5880f48772bbab229f09cfdf98d222b9a1c9baf93c8d08c4f4634421fcb62" Jan 29 08:37:33 crc kubenswrapper[4826]: I0129 08:37:33.798472 4826 scope.go:117] "RemoveContainer" containerID="bdd80cc8701aa470d7ba20413039b39ab36c53eabc2413e2635a5710da4a2ed5" Jan 29 08:37:33 crc kubenswrapper[4826]: E0129 08:37:33.799160 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdd80cc8701aa470d7ba20413039b39ab36c53eabc2413e2635a5710da4a2ed5\": container with ID starting with bdd80cc8701aa470d7ba20413039b39ab36c53eabc2413e2635a5710da4a2ed5 not found: ID does not exist" containerID="bdd80cc8701aa470d7ba20413039b39ab36c53eabc2413e2635a5710da4a2ed5" Jan 29 08:37:33 crc kubenswrapper[4826]: I0129 08:37:33.799198 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdd80cc8701aa470d7ba20413039b39ab36c53eabc2413e2635a5710da4a2ed5"} err="failed to get container status \"bdd80cc8701aa470d7ba20413039b39ab36c53eabc2413e2635a5710da4a2ed5\": rpc error: code = NotFound desc = could not find container \"bdd80cc8701aa470d7ba20413039b39ab36c53eabc2413e2635a5710da4a2ed5\": container with ID starting with bdd80cc8701aa470d7ba20413039b39ab36c53eabc2413e2635a5710da4a2ed5 not found: ID does 
not exist" Jan 29 08:37:33 crc kubenswrapper[4826]: I0129 08:37:33.799224 4826 scope.go:117] "RemoveContainer" containerID="c625207ddf2af45e35f164cf0044945ac33c7e3bac7533d375ec453e97960be7" Jan 29 08:37:33 crc kubenswrapper[4826]: E0129 08:37:33.799615 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c625207ddf2af45e35f164cf0044945ac33c7e3bac7533d375ec453e97960be7\": container with ID starting with c625207ddf2af45e35f164cf0044945ac33c7e3bac7533d375ec453e97960be7 not found: ID does not exist" containerID="c625207ddf2af45e35f164cf0044945ac33c7e3bac7533d375ec453e97960be7" Jan 29 08:37:33 crc kubenswrapper[4826]: I0129 08:37:33.799650 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c625207ddf2af45e35f164cf0044945ac33c7e3bac7533d375ec453e97960be7"} err="failed to get container status \"c625207ddf2af45e35f164cf0044945ac33c7e3bac7533d375ec453e97960be7\": rpc error: code = NotFound desc = could not find container \"c625207ddf2af45e35f164cf0044945ac33c7e3bac7533d375ec453e97960be7\": container with ID starting with c625207ddf2af45e35f164cf0044945ac33c7e3bac7533d375ec453e97960be7 not found: ID does not exist" Jan 29 08:37:33 crc kubenswrapper[4826]: I0129 08:37:33.799675 4826 scope.go:117] "RemoveContainer" containerID="00e5880f48772bbab229f09cfdf98d222b9a1c9baf93c8d08c4f4634421fcb62" Jan 29 08:37:33 crc kubenswrapper[4826]: E0129 08:37:33.799913 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00e5880f48772bbab229f09cfdf98d222b9a1c9baf93c8d08c4f4634421fcb62\": container with ID starting with 00e5880f48772bbab229f09cfdf98d222b9a1c9baf93c8d08c4f4634421fcb62 not found: ID does not exist" containerID="00e5880f48772bbab229f09cfdf98d222b9a1c9baf93c8d08c4f4634421fcb62" Jan 29 08:37:33 crc kubenswrapper[4826]: I0129 08:37:33.799945 4826 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00e5880f48772bbab229f09cfdf98d222b9a1c9baf93c8d08c4f4634421fcb62"} err="failed to get container status \"00e5880f48772bbab229f09cfdf98d222b9a1c9baf93c8d08c4f4634421fcb62\": rpc error: code = NotFound desc = could not find container \"00e5880f48772bbab229f09cfdf98d222b9a1c9baf93c8d08c4f4634421fcb62\": container with ID starting with 00e5880f48772bbab229f09cfdf98d222b9a1c9baf93c8d08c4f4634421fcb62 not found: ID does not exist" Jan 29 08:37:33 crc kubenswrapper[4826]: I0129 08:37:33.809169 4826 scope.go:117] "RemoveContainer" containerID="725f4aa4f9ef4129cc44e12f5f1026f8860669230135655f272ac1666d528dcf" Jan 29 08:37:33 crc kubenswrapper[4826]: E0129 08:37:33.809468 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:37:34 crc kubenswrapper[4826]: I0129 08:37:34.821660 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b30b6c6-0a2f-4e33-bff5-f3f0096401c1" path="/var/lib/kubelet/pods/5b30b6c6-0a2f-4e33-bff5-f3f0096401c1/volumes" Jan 29 08:37:45 crc kubenswrapper[4826]: I0129 08:37:45.809256 4826 scope.go:117] "RemoveContainer" containerID="725f4aa4f9ef4129cc44e12f5f1026f8860669230135655f272ac1666d528dcf" Jan 29 08:37:45 crc kubenswrapper[4826]: E0129 08:37:45.809836 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:37:56 crc kubenswrapper[4826]: I0129 08:37:56.817189 4826 scope.go:117] "RemoveContainer" containerID="725f4aa4f9ef4129cc44e12f5f1026f8860669230135655f272ac1666d528dcf" Jan 29 08:37:56 crc kubenswrapper[4826]: E0129 08:37:56.817916 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:38:08 crc kubenswrapper[4826]: I0129 08:38:08.809458 4826 scope.go:117] "RemoveContainer" containerID="725f4aa4f9ef4129cc44e12f5f1026f8860669230135655f272ac1666d528dcf" Jan 29 08:38:08 crc kubenswrapper[4826]: E0129 08:38:08.810243 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:38:21 crc kubenswrapper[4826]: I0129 08:38:21.810081 4826 scope.go:117] "RemoveContainer" containerID="725f4aa4f9ef4129cc44e12f5f1026f8860669230135655f272ac1666d528dcf" Jan 29 08:38:21 crc kubenswrapper[4826]: E0129 08:38:21.810869 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:38:33 crc kubenswrapper[4826]: I0129 08:38:33.808643 4826 scope.go:117] "RemoveContainer" containerID="725f4aa4f9ef4129cc44e12f5f1026f8860669230135655f272ac1666d528dcf" Jan 29 08:38:33 crc kubenswrapper[4826]: E0129 08:38:33.809458 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:38:44 crc kubenswrapper[4826]: I0129 08:38:44.808798 4826 scope.go:117] "RemoveContainer" containerID="725f4aa4f9ef4129cc44e12f5f1026f8860669230135655f272ac1666d528dcf" Jan 29 08:38:45 crc kubenswrapper[4826]: I0129 08:38:45.428693 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerStarted","Data":"b809ff53cab99f09fc7da60942dfc4b83911f47c2076e84cdb939f3a1322cacc"} Jan 29 08:39:43 crc kubenswrapper[4826]: I0129 08:39:43.960695 4826 generic.go:334] "Generic (PLEG): container finished" podID="bcf40af3-eef2-454e-a418-dd1d61e7faf3" containerID="56d23aa6962d5bd4bff1eb19cc17fdd6395cf84c710553595da0686fe07eaaa4" exitCode=0 Jan 29 08:39:43 crc kubenswrapper[4826]: I0129 08:39:43.960780 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-2rqtc" event={"ID":"bcf40af3-eef2-454e-a418-dd1d61e7faf3","Type":"ContainerDied","Data":"56d23aa6962d5bd4bff1eb19cc17fdd6395cf84c710553595da0686fe07eaaa4"} Jan 29 
08:39:45 crc kubenswrapper[4826]: I0129 08:39:45.432696 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-2rqtc" Jan 29 08:39:45 crc kubenswrapper[4826]: I0129 08:39:45.551832 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktb2m\" (UniqueName: \"kubernetes.io/projected/bcf40af3-eef2-454e-a418-dd1d61e7faf3-kube-api-access-ktb2m\") pod \"bcf40af3-eef2-454e-a418-dd1d61e7faf3\" (UID: \"bcf40af3-eef2-454e-a418-dd1d61e7faf3\") " Jan 29 08:39:45 crc kubenswrapper[4826]: I0129 08:39:45.551937 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bcf40af3-eef2-454e-a418-dd1d61e7faf3-inventory\") pod \"bcf40af3-eef2-454e-a418-dd1d61e7faf3\" (UID: \"bcf40af3-eef2-454e-a418-dd1d61e7faf3\") " Jan 29 08:39:45 crc kubenswrapper[4826]: I0129 08:39:45.552135 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcf40af3-eef2-454e-a418-dd1d61e7faf3-bootstrap-combined-ca-bundle\") pod \"bcf40af3-eef2-454e-a418-dd1d61e7faf3\" (UID: \"bcf40af3-eef2-454e-a418-dd1d61e7faf3\") " Jan 29 08:39:45 crc kubenswrapper[4826]: I0129 08:39:45.552230 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/bcf40af3-eef2-454e-a418-dd1d61e7faf3-ssh-key-openstack-cell1\") pod \"bcf40af3-eef2-454e-a418-dd1d61e7faf3\" (UID: \"bcf40af3-eef2-454e-a418-dd1d61e7faf3\") " Jan 29 08:39:45 crc kubenswrapper[4826]: I0129 08:39:45.557118 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcf40af3-eef2-454e-a418-dd1d61e7faf3-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "bcf40af3-eef2-454e-a418-dd1d61e7faf3" (UID: 
"bcf40af3-eef2-454e-a418-dd1d61e7faf3"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:45 crc kubenswrapper[4826]: I0129 08:39:45.557125 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcf40af3-eef2-454e-a418-dd1d61e7faf3-kube-api-access-ktb2m" (OuterVolumeSpecName: "kube-api-access-ktb2m") pod "bcf40af3-eef2-454e-a418-dd1d61e7faf3" (UID: "bcf40af3-eef2-454e-a418-dd1d61e7faf3"). InnerVolumeSpecName "kube-api-access-ktb2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:39:45 crc kubenswrapper[4826]: I0129 08:39:45.580001 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcf40af3-eef2-454e-a418-dd1d61e7faf3-inventory" (OuterVolumeSpecName: "inventory") pod "bcf40af3-eef2-454e-a418-dd1d61e7faf3" (UID: "bcf40af3-eef2-454e-a418-dd1d61e7faf3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:45 crc kubenswrapper[4826]: I0129 08:39:45.585720 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcf40af3-eef2-454e-a418-dd1d61e7faf3-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "bcf40af3-eef2-454e-a418-dd1d61e7faf3" (UID: "bcf40af3-eef2-454e-a418-dd1d61e7faf3"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:39:45 crc kubenswrapper[4826]: I0129 08:39:45.655610 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/bcf40af3-eef2-454e-a418-dd1d61e7faf3-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:45 crc kubenswrapper[4826]: I0129 08:39:45.655686 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktb2m\" (UniqueName: \"kubernetes.io/projected/bcf40af3-eef2-454e-a418-dd1d61e7faf3-kube-api-access-ktb2m\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:45 crc kubenswrapper[4826]: I0129 08:39:45.655702 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bcf40af3-eef2-454e-a418-dd1d61e7faf3-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:45 crc kubenswrapper[4826]: I0129 08:39:45.655714 4826 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcf40af3-eef2-454e-a418-dd1d61e7faf3-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:39:45 crc kubenswrapper[4826]: I0129 08:39:45.981491 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-2rqtc" event={"ID":"bcf40af3-eef2-454e-a418-dd1d61e7faf3","Type":"ContainerDied","Data":"1b76c02b40d4c71ab7ac7566aa8d5d3f125364159adef8c56586b329a2932564"} Jan 29 08:39:45 crc kubenswrapper[4826]: I0129 08:39:45.981544 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b76c02b40d4c71ab7ac7566aa8d5d3f125364159adef8c56586b329a2932564" Jan 29 08:39:45 crc kubenswrapper[4826]: I0129 08:39:45.981596 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-2rqtc" Jan 29 08:39:46 crc kubenswrapper[4826]: I0129 08:39:46.065249 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-sxx2f"] Jan 29 08:39:46 crc kubenswrapper[4826]: E0129 08:39:46.066042 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b30b6c6-0a2f-4e33-bff5-f3f0096401c1" containerName="extract-content" Jan 29 08:39:46 crc kubenswrapper[4826]: I0129 08:39:46.066092 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b30b6c6-0a2f-4e33-bff5-f3f0096401c1" containerName="extract-content" Jan 29 08:39:46 crc kubenswrapper[4826]: E0129 08:39:46.066148 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b30b6c6-0a2f-4e33-bff5-f3f0096401c1" containerName="registry-server" Jan 29 08:39:46 crc kubenswrapper[4826]: I0129 08:39:46.066167 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b30b6c6-0a2f-4e33-bff5-f3f0096401c1" containerName="registry-server" Jan 29 08:39:46 crc kubenswrapper[4826]: E0129 08:39:46.066231 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcf40af3-eef2-454e-a418-dd1d61e7faf3" containerName="bootstrap-openstack-openstack-cell1" Jan 29 08:39:46 crc kubenswrapper[4826]: I0129 08:39:46.066252 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcf40af3-eef2-454e-a418-dd1d61e7faf3" containerName="bootstrap-openstack-openstack-cell1" Jan 29 08:39:46 crc kubenswrapper[4826]: E0129 08:39:46.066354 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b30b6c6-0a2f-4e33-bff5-f3f0096401c1" containerName="extract-utilities" Jan 29 08:39:46 crc kubenswrapper[4826]: I0129 08:39:46.066370 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b30b6c6-0a2f-4e33-bff5-f3f0096401c1" containerName="extract-utilities" Jan 29 08:39:46 crc kubenswrapper[4826]: I0129 08:39:46.066749 4826 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="bcf40af3-eef2-454e-a418-dd1d61e7faf3" containerName="bootstrap-openstack-openstack-cell1" Jan 29 08:39:46 crc kubenswrapper[4826]: I0129 08:39:46.066797 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b30b6c6-0a2f-4e33-bff5-f3f0096401c1" containerName="registry-server" Jan 29 08:39:46 crc kubenswrapper[4826]: I0129 08:39:46.068894 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-sxx2f" Jan 29 08:39:46 crc kubenswrapper[4826]: I0129 08:39:46.071482 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 08:39:46 crc kubenswrapper[4826]: I0129 08:39:46.071873 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 29 08:39:46 crc kubenswrapper[4826]: I0129 08:39:46.072090 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 29 08:39:46 crc kubenswrapper[4826]: I0129 08:39:46.072279 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-bz2p6" Jan 29 08:39:46 crc kubenswrapper[4826]: I0129 08:39:46.076503 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-sxx2f"] Jan 29 08:39:46 crc kubenswrapper[4826]: I0129 08:39:46.165065 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee5fb0b2-f4b6-4533-b873-b40fef6a9747-inventory\") pod \"download-cache-openstack-openstack-cell1-sxx2f\" (UID: \"ee5fb0b2-f4b6-4533-b873-b40fef6a9747\") " pod="openstack/download-cache-openstack-openstack-cell1-sxx2f" Jan 29 08:39:46 crc kubenswrapper[4826]: I0129 08:39:46.165381 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ee5fb0b2-f4b6-4533-b873-b40fef6a9747-ssh-key-openstack-cell1\") pod \"download-cache-openstack-openstack-cell1-sxx2f\" (UID: \"ee5fb0b2-f4b6-4533-b873-b40fef6a9747\") " pod="openstack/download-cache-openstack-openstack-cell1-sxx2f" Jan 29 08:39:46 crc kubenswrapper[4826]: I0129 08:39:46.165440 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhkj4\" (UniqueName: \"kubernetes.io/projected/ee5fb0b2-f4b6-4533-b873-b40fef6a9747-kube-api-access-bhkj4\") pod \"download-cache-openstack-openstack-cell1-sxx2f\" (UID: \"ee5fb0b2-f4b6-4533-b873-b40fef6a9747\") " pod="openstack/download-cache-openstack-openstack-cell1-sxx2f" Jan 29 08:39:46 crc kubenswrapper[4826]: I0129 08:39:46.268126 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhkj4\" (UniqueName: \"kubernetes.io/projected/ee5fb0b2-f4b6-4533-b873-b40fef6a9747-kube-api-access-bhkj4\") pod \"download-cache-openstack-openstack-cell1-sxx2f\" (UID: \"ee5fb0b2-f4b6-4533-b873-b40fef6a9747\") " pod="openstack/download-cache-openstack-openstack-cell1-sxx2f" Jan 29 08:39:46 crc kubenswrapper[4826]: I0129 08:39:46.268745 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee5fb0b2-f4b6-4533-b873-b40fef6a9747-inventory\") pod \"download-cache-openstack-openstack-cell1-sxx2f\" (UID: \"ee5fb0b2-f4b6-4533-b873-b40fef6a9747\") " pod="openstack/download-cache-openstack-openstack-cell1-sxx2f" Jan 29 08:39:46 crc kubenswrapper[4826]: I0129 08:39:46.268967 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ee5fb0b2-f4b6-4533-b873-b40fef6a9747-ssh-key-openstack-cell1\") pod \"download-cache-openstack-openstack-cell1-sxx2f\" (UID: \"ee5fb0b2-f4b6-4533-b873-b40fef6a9747\") " 
pod="openstack/download-cache-openstack-openstack-cell1-sxx2f" Jan 29 08:39:46 crc kubenswrapper[4826]: I0129 08:39:46.272184 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ee5fb0b2-f4b6-4533-b873-b40fef6a9747-ssh-key-openstack-cell1\") pod \"download-cache-openstack-openstack-cell1-sxx2f\" (UID: \"ee5fb0b2-f4b6-4533-b873-b40fef6a9747\") " pod="openstack/download-cache-openstack-openstack-cell1-sxx2f" Jan 29 08:39:46 crc kubenswrapper[4826]: I0129 08:39:46.272812 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee5fb0b2-f4b6-4533-b873-b40fef6a9747-inventory\") pod \"download-cache-openstack-openstack-cell1-sxx2f\" (UID: \"ee5fb0b2-f4b6-4533-b873-b40fef6a9747\") " pod="openstack/download-cache-openstack-openstack-cell1-sxx2f" Jan 29 08:39:46 crc kubenswrapper[4826]: I0129 08:39:46.290266 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhkj4\" (UniqueName: \"kubernetes.io/projected/ee5fb0b2-f4b6-4533-b873-b40fef6a9747-kube-api-access-bhkj4\") pod \"download-cache-openstack-openstack-cell1-sxx2f\" (UID: \"ee5fb0b2-f4b6-4533-b873-b40fef6a9747\") " pod="openstack/download-cache-openstack-openstack-cell1-sxx2f" Jan 29 08:39:46 crc kubenswrapper[4826]: I0129 08:39:46.430248 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-sxx2f" Jan 29 08:39:46 crc kubenswrapper[4826]: I0129 08:39:46.973696 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-sxx2f"] Jan 29 08:39:46 crc kubenswrapper[4826]: I0129 08:39:46.989042 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-sxx2f" event={"ID":"ee5fb0b2-f4b6-4533-b873-b40fef6a9747","Type":"ContainerStarted","Data":"010bf1ce09acf73827f6db06ebd8036e807c101f83fb81c01dbadd3969adbdaa"} Jan 29 08:39:48 crc kubenswrapper[4826]: I0129 08:39:48.005223 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-sxx2f" event={"ID":"ee5fb0b2-f4b6-4533-b873-b40fef6a9747","Type":"ContainerStarted","Data":"1b22d82973aec2970091f3aa6d21fcb89ca06c5385fc269aad5f18db137971f2"} Jan 29 08:39:48 crc kubenswrapper[4826]: I0129 08:39:48.030032 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-openstack-openstack-cell1-sxx2f" podStartSLOduration=1.604825882 podStartE2EDuration="2.030008421s" podCreationTimestamp="2026-01-29 08:39:46 +0000 UTC" firstStartedPulling="2026-01-29 08:39:46.977844379 +0000 UTC m=+6970.839637448" lastFinishedPulling="2026-01-29 08:39:47.403026918 +0000 UTC m=+6971.264819987" observedRunningTime="2026-01-29 08:39:48.024879696 +0000 UTC m=+6971.886672785" watchObservedRunningTime="2026-01-29 08:39:48.030008421 +0000 UTC m=+6971.891801490" Jan 29 08:41:05 crc kubenswrapper[4826]: I0129 08:41:05.655959 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:41:05 crc kubenswrapper[4826]: I0129 08:41:05.656552 
4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:41:21 crc kubenswrapper[4826]: I0129 08:41:21.933573 4826 generic.go:334] "Generic (PLEG): container finished" podID="ee5fb0b2-f4b6-4533-b873-b40fef6a9747" containerID="1b22d82973aec2970091f3aa6d21fcb89ca06c5385fc269aad5f18db137971f2" exitCode=0 Jan 29 08:41:21 crc kubenswrapper[4826]: I0129 08:41:21.933800 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-sxx2f" event={"ID":"ee5fb0b2-f4b6-4533-b873-b40fef6a9747","Type":"ContainerDied","Data":"1b22d82973aec2970091f3aa6d21fcb89ca06c5385fc269aad5f18db137971f2"} Jan 29 08:41:23 crc kubenswrapper[4826]: I0129 08:41:23.406380 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-sxx2f" Jan 29 08:41:23 crc kubenswrapper[4826]: I0129 08:41:23.570268 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee5fb0b2-f4b6-4533-b873-b40fef6a9747-inventory\") pod \"ee5fb0b2-f4b6-4533-b873-b40fef6a9747\" (UID: \"ee5fb0b2-f4b6-4533-b873-b40fef6a9747\") " Jan 29 08:41:23 crc kubenswrapper[4826]: I0129 08:41:23.570396 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ee5fb0b2-f4b6-4533-b873-b40fef6a9747-ssh-key-openstack-cell1\") pod \"ee5fb0b2-f4b6-4533-b873-b40fef6a9747\" (UID: \"ee5fb0b2-f4b6-4533-b873-b40fef6a9747\") " Jan 29 08:41:23 crc kubenswrapper[4826]: I0129 08:41:23.570511 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhkj4\" (UniqueName: \"kubernetes.io/projected/ee5fb0b2-f4b6-4533-b873-b40fef6a9747-kube-api-access-bhkj4\") pod \"ee5fb0b2-f4b6-4533-b873-b40fef6a9747\" (UID: \"ee5fb0b2-f4b6-4533-b873-b40fef6a9747\") " Jan 29 08:41:23 crc kubenswrapper[4826]: I0129 08:41:23.578390 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee5fb0b2-f4b6-4533-b873-b40fef6a9747-kube-api-access-bhkj4" (OuterVolumeSpecName: "kube-api-access-bhkj4") pod "ee5fb0b2-f4b6-4533-b873-b40fef6a9747" (UID: "ee5fb0b2-f4b6-4533-b873-b40fef6a9747"). InnerVolumeSpecName "kube-api-access-bhkj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:41:23 crc kubenswrapper[4826]: I0129 08:41:23.607003 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee5fb0b2-f4b6-4533-b873-b40fef6a9747-inventory" (OuterVolumeSpecName: "inventory") pod "ee5fb0b2-f4b6-4533-b873-b40fef6a9747" (UID: "ee5fb0b2-f4b6-4533-b873-b40fef6a9747"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:23 crc kubenswrapper[4826]: I0129 08:41:23.623170 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee5fb0b2-f4b6-4533-b873-b40fef6a9747-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "ee5fb0b2-f4b6-4533-b873-b40fef6a9747" (UID: "ee5fb0b2-f4b6-4533-b873-b40fef6a9747"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:41:23 crc kubenswrapper[4826]: I0129 08:41:23.673748 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ee5fb0b2-f4b6-4533-b873-b40fef6a9747-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:23 crc kubenswrapper[4826]: I0129 08:41:23.673810 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhkj4\" (UniqueName: \"kubernetes.io/projected/ee5fb0b2-f4b6-4533-b873-b40fef6a9747-kube-api-access-bhkj4\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:23 crc kubenswrapper[4826]: I0129 08:41:23.673834 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee5fb0b2-f4b6-4533-b873-b40fef6a9747-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 08:41:23 crc kubenswrapper[4826]: I0129 08:41:23.959669 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-sxx2f" event={"ID":"ee5fb0b2-f4b6-4533-b873-b40fef6a9747","Type":"ContainerDied","Data":"010bf1ce09acf73827f6db06ebd8036e807c101f83fb81c01dbadd3969adbdaa"} Jan 29 08:41:23 crc kubenswrapper[4826]: I0129 08:41:23.959719 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="010bf1ce09acf73827f6db06ebd8036e807c101f83fb81c01dbadd3969adbdaa" Jan 29 08:41:23 crc kubenswrapper[4826]: I0129 08:41:23.959750 4826 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-sxx2f" Jan 29 08:41:24 crc kubenswrapper[4826]: I0129 08:41:24.048686 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-k6stv"] Jan 29 08:41:24 crc kubenswrapper[4826]: E0129 08:41:24.049210 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee5fb0b2-f4b6-4533-b873-b40fef6a9747" containerName="download-cache-openstack-openstack-cell1" Jan 29 08:41:24 crc kubenswrapper[4826]: I0129 08:41:24.049236 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee5fb0b2-f4b6-4533-b873-b40fef6a9747" containerName="download-cache-openstack-openstack-cell1" Jan 29 08:41:24 crc kubenswrapper[4826]: I0129 08:41:24.049506 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee5fb0b2-f4b6-4533-b873-b40fef6a9747" containerName="download-cache-openstack-openstack-cell1" Jan 29 08:41:24 crc kubenswrapper[4826]: I0129 08:41:24.050363 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-k6stv" Jan 29 08:41:24 crc kubenswrapper[4826]: I0129 08:41:24.053404 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-bz2p6" Jan 29 08:41:24 crc kubenswrapper[4826]: I0129 08:41:24.053826 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 29 08:41:24 crc kubenswrapper[4826]: I0129 08:41:24.053963 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 29 08:41:24 crc kubenswrapper[4826]: I0129 08:41:24.053987 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 08:41:24 crc kubenswrapper[4826]: I0129 08:41:24.059252 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-k6stv"] Jan 29 08:41:24 crc kubenswrapper[4826]: I0129 08:41:24.184964 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/31bcbe02-3680-427f-a469-ba23dc092343-inventory\") pod \"configure-network-openstack-openstack-cell1-k6stv\" (UID: \"31bcbe02-3680-427f-a469-ba23dc092343\") " pod="openstack/configure-network-openstack-openstack-cell1-k6stv" Jan 29 08:41:24 crc kubenswrapper[4826]: I0129 08:41:24.185673 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/31bcbe02-3680-427f-a469-ba23dc092343-ssh-key-openstack-cell1\") pod \"configure-network-openstack-openstack-cell1-k6stv\" (UID: \"31bcbe02-3680-427f-a469-ba23dc092343\") " pod="openstack/configure-network-openstack-openstack-cell1-k6stv" Jan 29 08:41:24 crc kubenswrapper[4826]: I0129 08:41:24.185797 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnpch\" (UniqueName: \"kubernetes.io/projected/31bcbe02-3680-427f-a469-ba23dc092343-kube-api-access-dnpch\") pod \"configure-network-openstack-openstack-cell1-k6stv\" (UID: \"31bcbe02-3680-427f-a469-ba23dc092343\") " pod="openstack/configure-network-openstack-openstack-cell1-k6stv" Jan 29 08:41:24 crc kubenswrapper[4826]: I0129 08:41:24.288331 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/31bcbe02-3680-427f-a469-ba23dc092343-inventory\") pod \"configure-network-openstack-openstack-cell1-k6stv\" (UID: \"31bcbe02-3680-427f-a469-ba23dc092343\") " pod="openstack/configure-network-openstack-openstack-cell1-k6stv" Jan 29 08:41:24 crc kubenswrapper[4826]: I0129 08:41:24.288526 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/31bcbe02-3680-427f-a469-ba23dc092343-ssh-key-openstack-cell1\") pod \"configure-network-openstack-openstack-cell1-k6stv\" (UID: \"31bcbe02-3680-427f-a469-ba23dc092343\") " pod="openstack/configure-network-openstack-openstack-cell1-k6stv" Jan 29 08:41:24 crc kubenswrapper[4826]: I0129 08:41:24.288614 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnpch\" (UniqueName: \"kubernetes.io/projected/31bcbe02-3680-427f-a469-ba23dc092343-kube-api-access-dnpch\") pod \"configure-network-openstack-openstack-cell1-k6stv\" (UID: \"31bcbe02-3680-427f-a469-ba23dc092343\") " pod="openstack/configure-network-openstack-openstack-cell1-k6stv" Jan 29 08:41:24 crc kubenswrapper[4826]: I0129 08:41:24.292927 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/31bcbe02-3680-427f-a469-ba23dc092343-inventory\") pod \"configure-network-openstack-openstack-cell1-k6stv\" (UID: 
\"31bcbe02-3680-427f-a469-ba23dc092343\") " pod="openstack/configure-network-openstack-openstack-cell1-k6stv" Jan 29 08:41:24 crc kubenswrapper[4826]: I0129 08:41:24.293172 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/31bcbe02-3680-427f-a469-ba23dc092343-ssh-key-openstack-cell1\") pod \"configure-network-openstack-openstack-cell1-k6stv\" (UID: \"31bcbe02-3680-427f-a469-ba23dc092343\") " pod="openstack/configure-network-openstack-openstack-cell1-k6stv" Jan 29 08:41:24 crc kubenswrapper[4826]: I0129 08:41:24.305813 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnpch\" (UniqueName: \"kubernetes.io/projected/31bcbe02-3680-427f-a469-ba23dc092343-kube-api-access-dnpch\") pod \"configure-network-openstack-openstack-cell1-k6stv\" (UID: \"31bcbe02-3680-427f-a469-ba23dc092343\") " pod="openstack/configure-network-openstack-openstack-cell1-k6stv" Jan 29 08:41:24 crc kubenswrapper[4826]: I0129 08:41:24.406628 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-k6stv" Jan 29 08:41:24 crc kubenswrapper[4826]: I0129 08:41:24.960909 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-k6stv"] Jan 29 08:41:25 crc kubenswrapper[4826]: I0129 08:41:25.982052 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-k6stv" event={"ID":"31bcbe02-3680-427f-a469-ba23dc092343","Type":"ContainerStarted","Data":"c3207935f666e75649b2fd36e04246358e2c623560609e277b199a2709103b85"} Jan 29 08:41:25 crc kubenswrapper[4826]: I0129 08:41:25.983022 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-k6stv" event={"ID":"31bcbe02-3680-427f-a469-ba23dc092343","Type":"ContainerStarted","Data":"06bca99462229b2b6d2c0711fd95e2d430308e36bbf22c95da018d78708a323e"} Jan 29 08:41:26 crc kubenswrapper[4826]: I0129 08:41:26.018973 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-openstack-openstack-cell1-k6stv" podStartSLOduration=1.576308695 podStartE2EDuration="2.018944045s" podCreationTimestamp="2026-01-29 08:41:24 +0000 UTC" firstStartedPulling="2026-01-29 08:41:24.965975722 +0000 UTC m=+7068.827768791" lastFinishedPulling="2026-01-29 08:41:25.408611072 +0000 UTC m=+7069.270404141" observedRunningTime="2026-01-29 08:41:26.010913783 +0000 UTC m=+7069.872706852" watchObservedRunningTime="2026-01-29 08:41:26.018944045 +0000 UTC m=+7069.880737114" Jan 29 08:41:35 crc kubenswrapper[4826]: I0129 08:41:35.656976 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:41:35 crc kubenswrapper[4826]: I0129 
08:41:35.657961 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:42:05 crc kubenswrapper[4826]: I0129 08:42:05.656586 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:42:05 crc kubenswrapper[4826]: I0129 08:42:05.657777 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:42:05 crc kubenswrapper[4826]: I0129 08:42:05.657978 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" Jan 29 08:42:05 crc kubenswrapper[4826]: I0129 08:42:05.659203 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b809ff53cab99f09fc7da60942dfc4b83911f47c2076e84cdb939f3a1322cacc"} pod="openshift-machine-config-operator/machine-config-daemon-llzmh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 08:42:05 crc kubenswrapper[4826]: I0129 08:42:05.659280 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" 
containerName="machine-config-daemon" containerID="cri-o://b809ff53cab99f09fc7da60942dfc4b83911f47c2076e84cdb939f3a1322cacc" gracePeriod=600 Jan 29 08:42:06 crc kubenswrapper[4826]: I0129 08:42:06.340084 4826 generic.go:334] "Generic (PLEG): container finished" podID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerID="b809ff53cab99f09fc7da60942dfc4b83911f47c2076e84cdb939f3a1322cacc" exitCode=0 Jan 29 08:42:06 crc kubenswrapper[4826]: I0129 08:42:06.340168 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerDied","Data":"b809ff53cab99f09fc7da60942dfc4b83911f47c2076e84cdb939f3a1322cacc"} Jan 29 08:42:06 crc kubenswrapper[4826]: I0129 08:42:06.340771 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerStarted","Data":"a36305785ee669f844d11a90394473bfc22dd7ebbfc667cd04792ae324d02c88"} Jan 29 08:42:06 crc kubenswrapper[4826]: I0129 08:42:06.340806 4826 scope.go:117] "RemoveContainer" containerID="725f4aa4f9ef4129cc44e12f5f1026f8860669230135655f272ac1666d528dcf" Jan 29 08:42:12 crc kubenswrapper[4826]: I0129 08:42:12.865838 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2ngpb"] Jan 29 08:42:12 crc kubenswrapper[4826]: I0129 08:42:12.872422 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2ngpb" Jan 29 08:42:12 crc kubenswrapper[4826]: I0129 08:42:12.878588 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2ngpb"] Jan 29 08:42:13 crc kubenswrapper[4826]: I0129 08:42:13.033193 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ab664ea-7765-400c-8092-d5e27334c99c-catalog-content\") pod \"community-operators-2ngpb\" (UID: \"2ab664ea-7765-400c-8092-d5e27334c99c\") " pod="openshift-marketplace/community-operators-2ngpb" Jan 29 08:42:13 crc kubenswrapper[4826]: I0129 08:42:13.033280 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ff5k\" (UniqueName: \"kubernetes.io/projected/2ab664ea-7765-400c-8092-d5e27334c99c-kube-api-access-6ff5k\") pod \"community-operators-2ngpb\" (UID: \"2ab664ea-7765-400c-8092-d5e27334c99c\") " pod="openshift-marketplace/community-operators-2ngpb" Jan 29 08:42:13 crc kubenswrapper[4826]: I0129 08:42:13.033448 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ab664ea-7765-400c-8092-d5e27334c99c-utilities\") pod \"community-operators-2ngpb\" (UID: \"2ab664ea-7765-400c-8092-d5e27334c99c\") " pod="openshift-marketplace/community-operators-2ngpb" Jan 29 08:42:13 crc kubenswrapper[4826]: I0129 08:42:13.136113 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ab664ea-7765-400c-8092-d5e27334c99c-catalog-content\") pod \"community-operators-2ngpb\" (UID: \"2ab664ea-7765-400c-8092-d5e27334c99c\") " pod="openshift-marketplace/community-operators-2ngpb" Jan 29 08:42:13 crc kubenswrapper[4826]: I0129 08:42:13.136223 4826 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6ff5k\" (UniqueName: \"kubernetes.io/projected/2ab664ea-7765-400c-8092-d5e27334c99c-kube-api-access-6ff5k\") pod \"community-operators-2ngpb\" (UID: \"2ab664ea-7765-400c-8092-d5e27334c99c\") " pod="openshift-marketplace/community-operators-2ngpb" Jan 29 08:42:13 crc kubenswrapper[4826]: I0129 08:42:13.136325 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ab664ea-7765-400c-8092-d5e27334c99c-utilities\") pod \"community-operators-2ngpb\" (UID: \"2ab664ea-7765-400c-8092-d5e27334c99c\") " pod="openshift-marketplace/community-operators-2ngpb" Jan 29 08:42:13 crc kubenswrapper[4826]: I0129 08:42:13.136809 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ab664ea-7765-400c-8092-d5e27334c99c-catalog-content\") pod \"community-operators-2ngpb\" (UID: \"2ab664ea-7765-400c-8092-d5e27334c99c\") " pod="openshift-marketplace/community-operators-2ngpb" Jan 29 08:42:13 crc kubenswrapper[4826]: I0129 08:42:13.136861 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ab664ea-7765-400c-8092-d5e27334c99c-utilities\") pod \"community-operators-2ngpb\" (UID: \"2ab664ea-7765-400c-8092-d5e27334c99c\") " pod="openshift-marketplace/community-operators-2ngpb" Jan 29 08:42:13 crc kubenswrapper[4826]: I0129 08:42:13.169441 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ff5k\" (UniqueName: \"kubernetes.io/projected/2ab664ea-7765-400c-8092-d5e27334c99c-kube-api-access-6ff5k\") pod \"community-operators-2ngpb\" (UID: \"2ab664ea-7765-400c-8092-d5e27334c99c\") " pod="openshift-marketplace/community-operators-2ngpb" Jan 29 08:42:13 crc kubenswrapper[4826]: I0129 08:42:13.196847 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2ngpb" Jan 29 08:42:13 crc kubenswrapper[4826]: I0129 08:42:13.804965 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2ngpb"] Jan 29 08:42:14 crc kubenswrapper[4826]: I0129 08:42:14.428229 4826 generic.go:334] "Generic (PLEG): container finished" podID="2ab664ea-7765-400c-8092-d5e27334c99c" containerID="95d2fcb8353bf051c3506fbcd09b3b40ff5dcae86490255ab03f92ee3acfc0c9" exitCode=0 Jan 29 08:42:14 crc kubenswrapper[4826]: I0129 08:42:14.428382 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2ngpb" event={"ID":"2ab664ea-7765-400c-8092-d5e27334c99c","Type":"ContainerDied","Data":"95d2fcb8353bf051c3506fbcd09b3b40ff5dcae86490255ab03f92ee3acfc0c9"} Jan 29 08:42:14 crc kubenswrapper[4826]: I0129 08:42:14.429312 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2ngpb" event={"ID":"2ab664ea-7765-400c-8092-d5e27334c99c","Type":"ContainerStarted","Data":"152ee6370cad96d46adf412f8e3849991a3bee447d838e269d8347b803233fd0"} Jan 29 08:42:14 crc kubenswrapper[4826]: I0129 08:42:14.431501 4826 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 08:42:15 crc kubenswrapper[4826]: I0129 08:42:15.443014 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2ngpb" event={"ID":"2ab664ea-7765-400c-8092-d5e27334c99c","Type":"ContainerStarted","Data":"54b27f9081c775d30197bde70b534108af50d3059c037cc3b12ac05bfd371f22"} Jan 29 08:42:17 crc kubenswrapper[4826]: I0129 08:42:17.460096 4826 generic.go:334] "Generic (PLEG): container finished" podID="2ab664ea-7765-400c-8092-d5e27334c99c" containerID="54b27f9081c775d30197bde70b534108af50d3059c037cc3b12ac05bfd371f22" exitCode=0 Jan 29 08:42:17 crc kubenswrapper[4826]: I0129 08:42:17.460192 4826 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-2ngpb" event={"ID":"2ab664ea-7765-400c-8092-d5e27334c99c","Type":"ContainerDied","Data":"54b27f9081c775d30197bde70b534108af50d3059c037cc3b12ac05bfd371f22"} Jan 29 08:42:18 crc kubenswrapper[4826]: I0129 08:42:18.471873 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2ngpb" event={"ID":"2ab664ea-7765-400c-8092-d5e27334c99c","Type":"ContainerStarted","Data":"1d3bd791e92caae62f0105331a46e8cc067a4ecc7bdd8c06c59a5d987a471c1f"} Jan 29 08:42:18 crc kubenswrapper[4826]: I0129 08:42:18.494898 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2ngpb" podStartSLOduration=3.015549008 podStartE2EDuration="6.494883376s" podCreationTimestamp="2026-01-29 08:42:12 +0000 UTC" firstStartedPulling="2026-01-29 08:42:14.431064718 +0000 UTC m=+7118.292857787" lastFinishedPulling="2026-01-29 08:42:17.910399066 +0000 UTC m=+7121.772192155" observedRunningTime="2026-01-29 08:42:18.492893174 +0000 UTC m=+7122.354686233" watchObservedRunningTime="2026-01-29 08:42:18.494883376 +0000 UTC m=+7122.356676445" Jan 29 08:42:23 crc kubenswrapper[4826]: I0129 08:42:23.197167 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2ngpb" Jan 29 08:42:23 crc kubenswrapper[4826]: I0129 08:42:23.197714 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2ngpb" Jan 29 08:42:23 crc kubenswrapper[4826]: I0129 08:42:23.245840 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2ngpb" Jan 29 08:42:23 crc kubenswrapper[4826]: I0129 08:42:23.576117 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2ngpb" Jan 29 08:42:23 crc kubenswrapper[4826]: I0129 
08:42:23.634451 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2ngpb"] Jan 29 08:42:25 crc kubenswrapper[4826]: I0129 08:42:25.539598 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2ngpb" podUID="2ab664ea-7765-400c-8092-d5e27334c99c" containerName="registry-server" containerID="cri-o://1d3bd791e92caae62f0105331a46e8cc067a4ecc7bdd8c06c59a5d987a471c1f" gracePeriod=2 Jan 29 08:42:26 crc kubenswrapper[4826]: I0129 08:42:26.058759 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2ngpb" Jan 29 08:42:26 crc kubenswrapper[4826]: I0129 08:42:26.211661 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ff5k\" (UniqueName: \"kubernetes.io/projected/2ab664ea-7765-400c-8092-d5e27334c99c-kube-api-access-6ff5k\") pod \"2ab664ea-7765-400c-8092-d5e27334c99c\" (UID: \"2ab664ea-7765-400c-8092-d5e27334c99c\") " Jan 29 08:42:26 crc kubenswrapper[4826]: I0129 08:42:26.211798 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ab664ea-7765-400c-8092-d5e27334c99c-utilities\") pod \"2ab664ea-7765-400c-8092-d5e27334c99c\" (UID: \"2ab664ea-7765-400c-8092-d5e27334c99c\") " Jan 29 08:42:26 crc kubenswrapper[4826]: I0129 08:42:26.211883 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ab664ea-7765-400c-8092-d5e27334c99c-catalog-content\") pod \"2ab664ea-7765-400c-8092-d5e27334c99c\" (UID: \"2ab664ea-7765-400c-8092-d5e27334c99c\") " Jan 29 08:42:26 crc kubenswrapper[4826]: I0129 08:42:26.213135 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ab664ea-7765-400c-8092-d5e27334c99c-utilities" (OuterVolumeSpecName: 
"utilities") pod "2ab664ea-7765-400c-8092-d5e27334c99c" (UID: "2ab664ea-7765-400c-8092-d5e27334c99c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:42:26 crc kubenswrapper[4826]: I0129 08:42:26.218033 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ab664ea-7765-400c-8092-d5e27334c99c-kube-api-access-6ff5k" (OuterVolumeSpecName: "kube-api-access-6ff5k") pod "2ab664ea-7765-400c-8092-d5e27334c99c" (UID: "2ab664ea-7765-400c-8092-d5e27334c99c"). InnerVolumeSpecName "kube-api-access-6ff5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:42:26 crc kubenswrapper[4826]: I0129 08:42:26.314636 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ab664ea-7765-400c-8092-d5e27334c99c-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:42:26 crc kubenswrapper[4826]: I0129 08:42:26.314674 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ff5k\" (UniqueName: \"kubernetes.io/projected/2ab664ea-7765-400c-8092-d5e27334c99c-kube-api-access-6ff5k\") on node \"crc\" DevicePath \"\"" Jan 29 08:42:26 crc kubenswrapper[4826]: I0129 08:42:26.551169 4826 generic.go:334] "Generic (PLEG): container finished" podID="2ab664ea-7765-400c-8092-d5e27334c99c" containerID="1d3bd791e92caae62f0105331a46e8cc067a4ecc7bdd8c06c59a5d987a471c1f" exitCode=0 Jan 29 08:42:26 crc kubenswrapper[4826]: I0129 08:42:26.551208 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2ngpb" event={"ID":"2ab664ea-7765-400c-8092-d5e27334c99c","Type":"ContainerDied","Data":"1d3bd791e92caae62f0105331a46e8cc067a4ecc7bdd8c06c59a5d987a471c1f"} Jan 29 08:42:26 crc kubenswrapper[4826]: I0129 08:42:26.551236 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2ngpb" 
event={"ID":"2ab664ea-7765-400c-8092-d5e27334c99c","Type":"ContainerDied","Data":"152ee6370cad96d46adf412f8e3849991a3bee447d838e269d8347b803233fd0"} Jan 29 08:42:26 crc kubenswrapper[4826]: I0129 08:42:26.551270 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2ngpb" Jan 29 08:42:26 crc kubenswrapper[4826]: I0129 08:42:26.551270 4826 scope.go:117] "RemoveContainer" containerID="1d3bd791e92caae62f0105331a46e8cc067a4ecc7bdd8c06c59a5d987a471c1f" Jan 29 08:42:26 crc kubenswrapper[4826]: I0129 08:42:26.574821 4826 scope.go:117] "RemoveContainer" containerID="54b27f9081c775d30197bde70b534108af50d3059c037cc3b12ac05bfd371f22" Jan 29 08:42:26 crc kubenswrapper[4826]: I0129 08:42:26.595587 4826 scope.go:117] "RemoveContainer" containerID="95d2fcb8353bf051c3506fbcd09b3b40ff5dcae86490255ab03f92ee3acfc0c9" Jan 29 08:42:26 crc kubenswrapper[4826]: I0129 08:42:26.659285 4826 scope.go:117] "RemoveContainer" containerID="1d3bd791e92caae62f0105331a46e8cc067a4ecc7bdd8c06c59a5d987a471c1f" Jan 29 08:42:26 crc kubenswrapper[4826]: E0129 08:42:26.659840 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d3bd791e92caae62f0105331a46e8cc067a4ecc7bdd8c06c59a5d987a471c1f\": container with ID starting with 1d3bd791e92caae62f0105331a46e8cc067a4ecc7bdd8c06c59a5d987a471c1f not found: ID does not exist" containerID="1d3bd791e92caae62f0105331a46e8cc067a4ecc7bdd8c06c59a5d987a471c1f" Jan 29 08:42:26 crc kubenswrapper[4826]: I0129 08:42:26.659894 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d3bd791e92caae62f0105331a46e8cc067a4ecc7bdd8c06c59a5d987a471c1f"} err="failed to get container status \"1d3bd791e92caae62f0105331a46e8cc067a4ecc7bdd8c06c59a5d987a471c1f\": rpc error: code = NotFound desc = could not find container \"1d3bd791e92caae62f0105331a46e8cc067a4ecc7bdd8c06c59a5d987a471c1f\": 
container with ID starting with 1d3bd791e92caae62f0105331a46e8cc067a4ecc7bdd8c06c59a5d987a471c1f not found: ID does not exist" Jan 29 08:42:26 crc kubenswrapper[4826]: I0129 08:42:26.659928 4826 scope.go:117] "RemoveContainer" containerID="54b27f9081c775d30197bde70b534108af50d3059c037cc3b12ac05bfd371f22" Jan 29 08:42:26 crc kubenswrapper[4826]: E0129 08:42:26.660423 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54b27f9081c775d30197bde70b534108af50d3059c037cc3b12ac05bfd371f22\": container with ID starting with 54b27f9081c775d30197bde70b534108af50d3059c037cc3b12ac05bfd371f22 not found: ID does not exist" containerID="54b27f9081c775d30197bde70b534108af50d3059c037cc3b12ac05bfd371f22" Jan 29 08:42:26 crc kubenswrapper[4826]: I0129 08:42:26.660459 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54b27f9081c775d30197bde70b534108af50d3059c037cc3b12ac05bfd371f22"} err="failed to get container status \"54b27f9081c775d30197bde70b534108af50d3059c037cc3b12ac05bfd371f22\": rpc error: code = NotFound desc = could not find container \"54b27f9081c775d30197bde70b534108af50d3059c037cc3b12ac05bfd371f22\": container with ID starting with 54b27f9081c775d30197bde70b534108af50d3059c037cc3b12ac05bfd371f22 not found: ID does not exist" Jan 29 08:42:26 crc kubenswrapper[4826]: I0129 08:42:26.660489 4826 scope.go:117] "RemoveContainer" containerID="95d2fcb8353bf051c3506fbcd09b3b40ff5dcae86490255ab03f92ee3acfc0c9" Jan 29 08:42:26 crc kubenswrapper[4826]: E0129 08:42:26.661101 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95d2fcb8353bf051c3506fbcd09b3b40ff5dcae86490255ab03f92ee3acfc0c9\": container with ID starting with 95d2fcb8353bf051c3506fbcd09b3b40ff5dcae86490255ab03f92ee3acfc0c9 not found: ID does not exist" 
containerID="95d2fcb8353bf051c3506fbcd09b3b40ff5dcae86490255ab03f92ee3acfc0c9" Jan 29 08:42:26 crc kubenswrapper[4826]: I0129 08:42:26.661141 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95d2fcb8353bf051c3506fbcd09b3b40ff5dcae86490255ab03f92ee3acfc0c9"} err="failed to get container status \"95d2fcb8353bf051c3506fbcd09b3b40ff5dcae86490255ab03f92ee3acfc0c9\": rpc error: code = NotFound desc = could not find container \"95d2fcb8353bf051c3506fbcd09b3b40ff5dcae86490255ab03f92ee3acfc0c9\": container with ID starting with 95d2fcb8353bf051c3506fbcd09b3b40ff5dcae86490255ab03f92ee3acfc0c9 not found: ID does not exist" Jan 29 08:42:26 crc kubenswrapper[4826]: I0129 08:42:26.923469 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ab664ea-7765-400c-8092-d5e27334c99c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ab664ea-7765-400c-8092-d5e27334c99c" (UID: "2ab664ea-7765-400c-8092-d5e27334c99c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:42:26 crc kubenswrapper[4826]: I0129 08:42:26.927821 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ab664ea-7765-400c-8092-d5e27334c99c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:42:27 crc kubenswrapper[4826]: I0129 08:42:27.212632 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2ngpb"] Jan 29 08:42:27 crc kubenswrapper[4826]: I0129 08:42:27.220543 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2ngpb"] Jan 29 08:42:28 crc kubenswrapper[4826]: I0129 08:42:28.825689 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ab664ea-7765-400c-8092-d5e27334c99c" path="/var/lib/kubelet/pods/2ab664ea-7765-400c-8092-d5e27334c99c/volumes" Jan 29 08:42:47 crc kubenswrapper[4826]: I0129 08:42:47.765917 4826 generic.go:334] "Generic (PLEG): container finished" podID="31bcbe02-3680-427f-a469-ba23dc092343" containerID="c3207935f666e75649b2fd36e04246358e2c623560609e277b199a2709103b85" exitCode=0 Jan 29 08:42:47 crc kubenswrapper[4826]: I0129 08:42:47.765997 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-k6stv" event={"ID":"31bcbe02-3680-427f-a469-ba23dc092343","Type":"ContainerDied","Data":"c3207935f666e75649b2fd36e04246358e2c623560609e277b199a2709103b85"} Jan 29 08:42:49 crc kubenswrapper[4826]: I0129 08:42:49.238408 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-k6stv" Jan 29 08:42:49 crc kubenswrapper[4826]: I0129 08:42:49.365197 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnpch\" (UniqueName: \"kubernetes.io/projected/31bcbe02-3680-427f-a469-ba23dc092343-kube-api-access-dnpch\") pod \"31bcbe02-3680-427f-a469-ba23dc092343\" (UID: \"31bcbe02-3680-427f-a469-ba23dc092343\") " Jan 29 08:42:49 crc kubenswrapper[4826]: I0129 08:42:49.365472 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/31bcbe02-3680-427f-a469-ba23dc092343-inventory\") pod \"31bcbe02-3680-427f-a469-ba23dc092343\" (UID: \"31bcbe02-3680-427f-a469-ba23dc092343\") " Jan 29 08:42:49 crc kubenswrapper[4826]: I0129 08:42:49.365572 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/31bcbe02-3680-427f-a469-ba23dc092343-ssh-key-openstack-cell1\") pod \"31bcbe02-3680-427f-a469-ba23dc092343\" (UID: \"31bcbe02-3680-427f-a469-ba23dc092343\") " Jan 29 08:42:49 crc kubenswrapper[4826]: I0129 08:42:49.372018 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31bcbe02-3680-427f-a469-ba23dc092343-kube-api-access-dnpch" (OuterVolumeSpecName: "kube-api-access-dnpch") pod "31bcbe02-3680-427f-a469-ba23dc092343" (UID: "31bcbe02-3680-427f-a469-ba23dc092343"). InnerVolumeSpecName "kube-api-access-dnpch". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:42:49 crc kubenswrapper[4826]: I0129 08:42:49.396670 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31bcbe02-3680-427f-a469-ba23dc092343-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "31bcbe02-3680-427f-a469-ba23dc092343" (UID: "31bcbe02-3680-427f-a469-ba23dc092343"). 
InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:42:49 crc kubenswrapper[4826]: I0129 08:42:49.399279 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31bcbe02-3680-427f-a469-ba23dc092343-inventory" (OuterVolumeSpecName: "inventory") pod "31bcbe02-3680-427f-a469-ba23dc092343" (UID: "31bcbe02-3680-427f-a469-ba23dc092343"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:42:49 crc kubenswrapper[4826]: I0129 08:42:49.468026 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnpch\" (UniqueName: \"kubernetes.io/projected/31bcbe02-3680-427f-a469-ba23dc092343-kube-api-access-dnpch\") on node \"crc\" DevicePath \"\"" Jan 29 08:42:49 crc kubenswrapper[4826]: I0129 08:42:49.468098 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/31bcbe02-3680-427f-a469-ba23dc092343-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 08:42:49 crc kubenswrapper[4826]: I0129 08:42:49.468112 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/31bcbe02-3680-427f-a469-ba23dc092343-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 08:42:49 crc kubenswrapper[4826]: I0129 08:42:49.787683 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-k6stv" event={"ID":"31bcbe02-3680-427f-a469-ba23dc092343","Type":"ContainerDied","Data":"06bca99462229b2b6d2c0711fd95e2d430308e36bbf22c95da018d78708a323e"} Jan 29 08:42:49 crc kubenswrapper[4826]: I0129 08:42:49.787730 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06bca99462229b2b6d2c0711fd95e2d430308e36bbf22c95da018d78708a323e" Jan 29 08:42:49 crc kubenswrapper[4826]: I0129 08:42:49.787736 4826 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-k6stv" Jan 29 08:42:49 crc kubenswrapper[4826]: I0129 08:42:49.895043 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-wqs9k"] Jan 29 08:42:49 crc kubenswrapper[4826]: E0129 08:42:49.896884 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ab664ea-7765-400c-8092-d5e27334c99c" containerName="registry-server" Jan 29 08:42:49 crc kubenswrapper[4826]: I0129 08:42:49.896915 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ab664ea-7765-400c-8092-d5e27334c99c" containerName="registry-server" Jan 29 08:42:49 crc kubenswrapper[4826]: E0129 08:42:49.896986 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31bcbe02-3680-427f-a469-ba23dc092343" containerName="configure-network-openstack-openstack-cell1" Jan 29 08:42:49 crc kubenswrapper[4826]: I0129 08:42:49.896996 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="31bcbe02-3680-427f-a469-ba23dc092343" containerName="configure-network-openstack-openstack-cell1" Jan 29 08:42:49 crc kubenswrapper[4826]: E0129 08:42:49.897009 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ab664ea-7765-400c-8092-d5e27334c99c" containerName="extract-utilities" Jan 29 08:42:49 crc kubenswrapper[4826]: I0129 08:42:49.897027 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ab664ea-7765-400c-8092-d5e27334c99c" containerName="extract-utilities" Jan 29 08:42:49 crc kubenswrapper[4826]: E0129 08:42:49.897063 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ab664ea-7765-400c-8092-d5e27334c99c" containerName="extract-content" Jan 29 08:42:49 crc kubenswrapper[4826]: I0129 08:42:49.897073 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ab664ea-7765-400c-8092-d5e27334c99c" containerName="extract-content" Jan 29 08:42:49 crc kubenswrapper[4826]: I0129 08:42:49.898135 
4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="31bcbe02-3680-427f-a469-ba23dc092343" containerName="configure-network-openstack-openstack-cell1" Jan 29 08:42:49 crc kubenswrapper[4826]: I0129 08:42:49.898184 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ab664ea-7765-400c-8092-d5e27334c99c" containerName="registry-server" Jan 29 08:42:49 crc kubenswrapper[4826]: I0129 08:42:49.900286 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-wqs9k" Jan 29 08:42:49 crc kubenswrapper[4826]: I0129 08:42:49.903085 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-bz2p6" Jan 29 08:42:49 crc kubenswrapper[4826]: I0129 08:42:49.903324 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 29 08:42:49 crc kubenswrapper[4826]: I0129 08:42:49.903859 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 08:42:49 crc kubenswrapper[4826]: I0129 08:42:49.904010 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 29 08:42:49 crc kubenswrapper[4826]: I0129 08:42:49.921767 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-wqs9k"] Jan 29 08:42:49 crc kubenswrapper[4826]: I0129 08:42:49.985436 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7daf0d5e-a5ef-4a39-8288-4f186ba60572-inventory\") pod \"validate-network-openstack-openstack-cell1-wqs9k\" (UID: \"7daf0d5e-a5ef-4a39-8288-4f186ba60572\") " pod="openstack/validate-network-openstack-openstack-cell1-wqs9k" Jan 29 08:42:49 crc kubenswrapper[4826]: I0129 08:42:49.985750 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7daf0d5e-a5ef-4a39-8288-4f186ba60572-ssh-key-openstack-cell1\") pod \"validate-network-openstack-openstack-cell1-wqs9k\" (UID: \"7daf0d5e-a5ef-4a39-8288-4f186ba60572\") " pod="openstack/validate-network-openstack-openstack-cell1-wqs9k" Jan 29 08:42:49 crc kubenswrapper[4826]: I0129 08:42:49.985957 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf67l\" (UniqueName: \"kubernetes.io/projected/7daf0d5e-a5ef-4a39-8288-4f186ba60572-kube-api-access-bf67l\") pod \"validate-network-openstack-openstack-cell1-wqs9k\" (UID: \"7daf0d5e-a5ef-4a39-8288-4f186ba60572\") " pod="openstack/validate-network-openstack-openstack-cell1-wqs9k" Jan 29 08:42:50 crc kubenswrapper[4826]: I0129 08:42:50.087561 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf67l\" (UniqueName: \"kubernetes.io/projected/7daf0d5e-a5ef-4a39-8288-4f186ba60572-kube-api-access-bf67l\") pod \"validate-network-openstack-openstack-cell1-wqs9k\" (UID: \"7daf0d5e-a5ef-4a39-8288-4f186ba60572\") " pod="openstack/validate-network-openstack-openstack-cell1-wqs9k" Jan 29 08:42:50 crc kubenswrapper[4826]: I0129 08:42:50.087716 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7daf0d5e-a5ef-4a39-8288-4f186ba60572-inventory\") pod \"validate-network-openstack-openstack-cell1-wqs9k\" (UID: \"7daf0d5e-a5ef-4a39-8288-4f186ba60572\") " pod="openstack/validate-network-openstack-openstack-cell1-wqs9k" Jan 29 08:42:50 crc kubenswrapper[4826]: I0129 08:42:50.087820 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7daf0d5e-a5ef-4a39-8288-4f186ba60572-ssh-key-openstack-cell1\") pod 
\"validate-network-openstack-openstack-cell1-wqs9k\" (UID: \"7daf0d5e-a5ef-4a39-8288-4f186ba60572\") " pod="openstack/validate-network-openstack-openstack-cell1-wqs9k" Jan 29 08:42:50 crc kubenswrapper[4826]: I0129 08:42:50.094893 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7daf0d5e-a5ef-4a39-8288-4f186ba60572-inventory\") pod \"validate-network-openstack-openstack-cell1-wqs9k\" (UID: \"7daf0d5e-a5ef-4a39-8288-4f186ba60572\") " pod="openstack/validate-network-openstack-openstack-cell1-wqs9k" Jan 29 08:42:50 crc kubenswrapper[4826]: I0129 08:42:50.095153 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7daf0d5e-a5ef-4a39-8288-4f186ba60572-ssh-key-openstack-cell1\") pod \"validate-network-openstack-openstack-cell1-wqs9k\" (UID: \"7daf0d5e-a5ef-4a39-8288-4f186ba60572\") " pod="openstack/validate-network-openstack-openstack-cell1-wqs9k" Jan 29 08:42:50 crc kubenswrapper[4826]: I0129 08:42:50.108812 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf67l\" (UniqueName: \"kubernetes.io/projected/7daf0d5e-a5ef-4a39-8288-4f186ba60572-kube-api-access-bf67l\") pod \"validate-network-openstack-openstack-cell1-wqs9k\" (UID: \"7daf0d5e-a5ef-4a39-8288-4f186ba60572\") " pod="openstack/validate-network-openstack-openstack-cell1-wqs9k" Jan 29 08:42:50 crc kubenswrapper[4826]: I0129 08:42:50.236249 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-wqs9k" Jan 29 08:42:50 crc kubenswrapper[4826]: I0129 08:42:50.657018 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-wqs9k"] Jan 29 08:42:50 crc kubenswrapper[4826]: W0129 08:42:50.666201 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7daf0d5e_a5ef_4a39_8288_4f186ba60572.slice/crio-5c42fce659316607396b05508faeb0e5f007afccad84494357aa399ad0a51a9b WatchSource:0}: Error finding container 5c42fce659316607396b05508faeb0e5f007afccad84494357aa399ad0a51a9b: Status 404 returned error can't find the container with id 5c42fce659316607396b05508faeb0e5f007afccad84494357aa399ad0a51a9b Jan 29 08:42:50 crc kubenswrapper[4826]: I0129 08:42:50.800549 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-wqs9k" event={"ID":"7daf0d5e-a5ef-4a39-8288-4f186ba60572","Type":"ContainerStarted","Data":"5c42fce659316607396b05508faeb0e5f007afccad84494357aa399ad0a51a9b"} Jan 29 08:42:52 crc kubenswrapper[4826]: I0129 08:42:52.820935 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-wqs9k" event={"ID":"7daf0d5e-a5ef-4a39-8288-4f186ba60572","Type":"ContainerStarted","Data":"7331c62617dd7c53ad8cdf1a55e34081753220b41368b02879ebb1b296758688"} Jan 29 08:42:52 crc kubenswrapper[4826]: I0129 08:42:52.837358 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-openstack-openstack-cell1-wqs9k" podStartSLOduration=2.782590098 podStartE2EDuration="3.837342748s" podCreationTimestamp="2026-01-29 08:42:49 +0000 UTC" firstStartedPulling="2026-01-29 08:42:50.671753126 +0000 UTC m=+7154.533546205" lastFinishedPulling="2026-01-29 08:42:51.726505786 +0000 UTC m=+7155.588298855" observedRunningTime="2026-01-29 
08:42:52.835602482 +0000 UTC m=+7156.697395551" watchObservedRunningTime="2026-01-29 08:42:52.837342748 +0000 UTC m=+7156.699135817" Jan 29 08:42:56 crc kubenswrapper[4826]: I0129 08:42:56.858627 4826 generic.go:334] "Generic (PLEG): container finished" podID="7daf0d5e-a5ef-4a39-8288-4f186ba60572" containerID="7331c62617dd7c53ad8cdf1a55e34081753220b41368b02879ebb1b296758688" exitCode=0 Jan 29 08:42:56 crc kubenswrapper[4826]: I0129 08:42:56.858676 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-wqs9k" event={"ID":"7daf0d5e-a5ef-4a39-8288-4f186ba60572","Type":"ContainerDied","Data":"7331c62617dd7c53ad8cdf1a55e34081753220b41368b02879ebb1b296758688"} Jan 29 08:42:58 crc kubenswrapper[4826]: I0129 08:42:58.278085 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-wqs9k" Jan 29 08:42:58 crc kubenswrapper[4826]: I0129 08:42:58.392756 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7daf0d5e-a5ef-4a39-8288-4f186ba60572-ssh-key-openstack-cell1\") pod \"7daf0d5e-a5ef-4a39-8288-4f186ba60572\" (UID: \"7daf0d5e-a5ef-4a39-8288-4f186ba60572\") " Jan 29 08:42:58 crc kubenswrapper[4826]: I0129 08:42:58.393276 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf67l\" (UniqueName: \"kubernetes.io/projected/7daf0d5e-a5ef-4a39-8288-4f186ba60572-kube-api-access-bf67l\") pod \"7daf0d5e-a5ef-4a39-8288-4f186ba60572\" (UID: \"7daf0d5e-a5ef-4a39-8288-4f186ba60572\") " Jan 29 08:42:58 crc kubenswrapper[4826]: I0129 08:42:58.393471 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7daf0d5e-a5ef-4a39-8288-4f186ba60572-inventory\") pod \"7daf0d5e-a5ef-4a39-8288-4f186ba60572\" (UID: 
\"7daf0d5e-a5ef-4a39-8288-4f186ba60572\") " Jan 29 08:42:58 crc kubenswrapper[4826]: I0129 08:42:58.403678 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7daf0d5e-a5ef-4a39-8288-4f186ba60572-kube-api-access-bf67l" (OuterVolumeSpecName: "kube-api-access-bf67l") pod "7daf0d5e-a5ef-4a39-8288-4f186ba60572" (UID: "7daf0d5e-a5ef-4a39-8288-4f186ba60572"). InnerVolumeSpecName "kube-api-access-bf67l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:42:58 crc kubenswrapper[4826]: I0129 08:42:58.428493 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7daf0d5e-a5ef-4a39-8288-4f186ba60572-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "7daf0d5e-a5ef-4a39-8288-4f186ba60572" (UID: "7daf0d5e-a5ef-4a39-8288-4f186ba60572"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:42:58 crc kubenswrapper[4826]: I0129 08:42:58.435954 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7daf0d5e-a5ef-4a39-8288-4f186ba60572-inventory" (OuterVolumeSpecName: "inventory") pod "7daf0d5e-a5ef-4a39-8288-4f186ba60572" (UID: "7daf0d5e-a5ef-4a39-8288-4f186ba60572"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:42:58 crc kubenswrapper[4826]: I0129 08:42:58.496425 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7daf0d5e-a5ef-4a39-8288-4f186ba60572-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 08:42:58 crc kubenswrapper[4826]: I0129 08:42:58.496753 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf67l\" (UniqueName: \"kubernetes.io/projected/7daf0d5e-a5ef-4a39-8288-4f186ba60572-kube-api-access-bf67l\") on node \"crc\" DevicePath \"\"" Jan 29 08:42:58 crc kubenswrapper[4826]: I0129 08:42:58.496843 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7daf0d5e-a5ef-4a39-8288-4f186ba60572-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 08:42:58 crc kubenswrapper[4826]: I0129 08:42:58.883716 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-wqs9k" event={"ID":"7daf0d5e-a5ef-4a39-8288-4f186ba60572","Type":"ContainerDied","Data":"5c42fce659316607396b05508faeb0e5f007afccad84494357aa399ad0a51a9b"} Jan 29 08:42:58 crc kubenswrapper[4826]: I0129 08:42:58.884230 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c42fce659316607396b05508faeb0e5f007afccad84494357aa399ad0a51a9b" Jan 29 08:42:58 crc kubenswrapper[4826]: I0129 08:42:58.883789 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-wqs9k" Jan 29 08:42:58 crc kubenswrapper[4826]: I0129 08:42:58.949146 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-openstack-openstack-cell1-2h8kn"] Jan 29 08:42:58 crc kubenswrapper[4826]: E0129 08:42:58.949570 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7daf0d5e-a5ef-4a39-8288-4f186ba60572" containerName="validate-network-openstack-openstack-cell1" Jan 29 08:42:58 crc kubenswrapper[4826]: I0129 08:42:58.949591 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="7daf0d5e-a5ef-4a39-8288-4f186ba60572" containerName="validate-network-openstack-openstack-cell1" Jan 29 08:42:58 crc kubenswrapper[4826]: I0129 08:42:58.949806 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="7daf0d5e-a5ef-4a39-8288-4f186ba60572" containerName="validate-network-openstack-openstack-cell1" Jan 29 08:42:58 crc kubenswrapper[4826]: I0129 08:42:58.950558 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-2h8kn" Jan 29 08:42:58 crc kubenswrapper[4826]: I0129 08:42:58.956200 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 29 08:42:58 crc kubenswrapper[4826]: I0129 08:42:58.956451 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 08:42:58 crc kubenswrapper[4826]: I0129 08:42:58.956550 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 29 08:42:58 crc kubenswrapper[4826]: I0129 08:42:58.956838 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-bz2p6" Jan 29 08:42:58 crc kubenswrapper[4826]: I0129 08:42:58.969219 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-2h8kn"] Jan 29 08:42:59 crc kubenswrapper[4826]: I0129 08:42:59.008810 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btj9x\" (UniqueName: \"kubernetes.io/projected/ac81a767-6eb1-4270-9d1f-824d8b6ce16b-kube-api-access-btj9x\") pod \"install-os-openstack-openstack-cell1-2h8kn\" (UID: \"ac81a767-6eb1-4270-9d1f-824d8b6ce16b\") " pod="openstack/install-os-openstack-openstack-cell1-2h8kn" Jan 29 08:42:59 crc kubenswrapper[4826]: I0129 08:42:59.008972 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ac81a767-6eb1-4270-9d1f-824d8b6ce16b-ssh-key-openstack-cell1\") pod \"install-os-openstack-openstack-cell1-2h8kn\" (UID: \"ac81a767-6eb1-4270-9d1f-824d8b6ce16b\") " pod="openstack/install-os-openstack-openstack-cell1-2h8kn" Jan 29 08:42:59 crc kubenswrapper[4826]: I0129 08:42:59.009374 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac81a767-6eb1-4270-9d1f-824d8b6ce16b-inventory\") pod \"install-os-openstack-openstack-cell1-2h8kn\" (UID: \"ac81a767-6eb1-4270-9d1f-824d8b6ce16b\") " pod="openstack/install-os-openstack-openstack-cell1-2h8kn" Jan 29 08:42:59 crc kubenswrapper[4826]: I0129 08:42:59.112099 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btj9x\" (UniqueName: \"kubernetes.io/projected/ac81a767-6eb1-4270-9d1f-824d8b6ce16b-kube-api-access-btj9x\") pod \"install-os-openstack-openstack-cell1-2h8kn\" (UID: \"ac81a767-6eb1-4270-9d1f-824d8b6ce16b\") " pod="openstack/install-os-openstack-openstack-cell1-2h8kn" Jan 29 08:42:59 crc kubenswrapper[4826]: I0129 08:42:59.112319 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ac81a767-6eb1-4270-9d1f-824d8b6ce16b-ssh-key-openstack-cell1\") pod \"install-os-openstack-openstack-cell1-2h8kn\" (UID: \"ac81a767-6eb1-4270-9d1f-824d8b6ce16b\") " pod="openstack/install-os-openstack-openstack-cell1-2h8kn" Jan 29 08:42:59 crc kubenswrapper[4826]: I0129 08:42:59.112560 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac81a767-6eb1-4270-9d1f-824d8b6ce16b-inventory\") pod \"install-os-openstack-openstack-cell1-2h8kn\" (UID: \"ac81a767-6eb1-4270-9d1f-824d8b6ce16b\") " pod="openstack/install-os-openstack-openstack-cell1-2h8kn" Jan 29 08:42:59 crc kubenswrapper[4826]: I0129 08:42:59.119075 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ac81a767-6eb1-4270-9d1f-824d8b6ce16b-ssh-key-openstack-cell1\") pod \"install-os-openstack-openstack-cell1-2h8kn\" (UID: \"ac81a767-6eb1-4270-9d1f-824d8b6ce16b\") " 
pod="openstack/install-os-openstack-openstack-cell1-2h8kn" Jan 29 08:42:59 crc kubenswrapper[4826]: I0129 08:42:59.121043 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac81a767-6eb1-4270-9d1f-824d8b6ce16b-inventory\") pod \"install-os-openstack-openstack-cell1-2h8kn\" (UID: \"ac81a767-6eb1-4270-9d1f-824d8b6ce16b\") " pod="openstack/install-os-openstack-openstack-cell1-2h8kn" Jan 29 08:42:59 crc kubenswrapper[4826]: I0129 08:42:59.139988 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btj9x\" (UniqueName: \"kubernetes.io/projected/ac81a767-6eb1-4270-9d1f-824d8b6ce16b-kube-api-access-btj9x\") pod \"install-os-openstack-openstack-cell1-2h8kn\" (UID: \"ac81a767-6eb1-4270-9d1f-824d8b6ce16b\") " pod="openstack/install-os-openstack-openstack-cell1-2h8kn" Jan 29 08:42:59 crc kubenswrapper[4826]: I0129 08:42:59.270056 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-2h8kn" Jan 29 08:42:59 crc kubenswrapper[4826]: I0129 08:42:59.820787 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-2h8kn"] Jan 29 08:42:59 crc kubenswrapper[4826]: I0129 08:42:59.894128 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-2h8kn" event={"ID":"ac81a767-6eb1-4270-9d1f-824d8b6ce16b","Type":"ContainerStarted","Data":"d1a9c1ad8acf1a6b92efdd547c5802fae8ec716bf04c3c6798e7e8b48ac1b073"} Jan 29 08:43:00 crc kubenswrapper[4826]: I0129 08:43:00.905327 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-2h8kn" event={"ID":"ac81a767-6eb1-4270-9d1f-824d8b6ce16b","Type":"ContainerStarted","Data":"8ed200c38a4c3e922bd27d42bea9fe8716765d61b65a81baafdf5e92a0d6db5d"} Jan 29 08:43:00 crc kubenswrapper[4826]: I0129 08:43:00.946806 4826 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-openstack-openstack-cell1-2h8kn" podStartSLOduration=2.425763841 podStartE2EDuration="2.946771403s" podCreationTimestamp="2026-01-29 08:42:58 +0000 UTC" firstStartedPulling="2026-01-29 08:42:59.826053809 +0000 UTC m=+7163.687846878" lastFinishedPulling="2026-01-29 08:43:00.347061371 +0000 UTC m=+7164.208854440" observedRunningTime="2026-01-29 08:43:00.931397167 +0000 UTC m=+7164.793190266" watchObservedRunningTime="2026-01-29 08:43:00.946771403 +0000 UTC m=+7164.808564512" Jan 29 08:43:45 crc kubenswrapper[4826]: I0129 08:43:45.351854 4826 generic.go:334] "Generic (PLEG): container finished" podID="ac81a767-6eb1-4270-9d1f-824d8b6ce16b" containerID="8ed200c38a4c3e922bd27d42bea9fe8716765d61b65a81baafdf5e92a0d6db5d" exitCode=0 Jan 29 08:43:45 crc kubenswrapper[4826]: I0129 08:43:45.351976 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-2h8kn" event={"ID":"ac81a767-6eb1-4270-9d1f-824d8b6ce16b","Type":"ContainerDied","Data":"8ed200c38a4c3e922bd27d42bea9fe8716765d61b65a81baafdf5e92a0d6db5d"} Jan 29 08:43:46 crc kubenswrapper[4826]: I0129 08:43:46.780618 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-2h8kn" Jan 29 08:43:46 crc kubenswrapper[4826]: I0129 08:43:46.972705 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ac81a767-6eb1-4270-9d1f-824d8b6ce16b-ssh-key-openstack-cell1\") pod \"ac81a767-6eb1-4270-9d1f-824d8b6ce16b\" (UID: \"ac81a767-6eb1-4270-9d1f-824d8b6ce16b\") " Jan 29 08:43:46 crc kubenswrapper[4826]: I0129 08:43:46.972795 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btj9x\" (UniqueName: \"kubernetes.io/projected/ac81a767-6eb1-4270-9d1f-824d8b6ce16b-kube-api-access-btj9x\") pod \"ac81a767-6eb1-4270-9d1f-824d8b6ce16b\" (UID: \"ac81a767-6eb1-4270-9d1f-824d8b6ce16b\") " Jan 29 08:43:46 crc kubenswrapper[4826]: I0129 08:43:46.972837 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac81a767-6eb1-4270-9d1f-824d8b6ce16b-inventory\") pod \"ac81a767-6eb1-4270-9d1f-824d8b6ce16b\" (UID: \"ac81a767-6eb1-4270-9d1f-824d8b6ce16b\") " Jan 29 08:43:46 crc kubenswrapper[4826]: I0129 08:43:46.981584 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac81a767-6eb1-4270-9d1f-824d8b6ce16b-kube-api-access-btj9x" (OuterVolumeSpecName: "kube-api-access-btj9x") pod "ac81a767-6eb1-4270-9d1f-824d8b6ce16b" (UID: "ac81a767-6eb1-4270-9d1f-824d8b6ce16b"). InnerVolumeSpecName "kube-api-access-btj9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:43:47 crc kubenswrapper[4826]: I0129 08:43:47.000516 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac81a767-6eb1-4270-9d1f-824d8b6ce16b-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "ac81a767-6eb1-4270-9d1f-824d8b6ce16b" (UID: "ac81a767-6eb1-4270-9d1f-824d8b6ce16b"). 
InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:43:47 crc kubenswrapper[4826]: I0129 08:43:47.006767 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac81a767-6eb1-4270-9d1f-824d8b6ce16b-inventory" (OuterVolumeSpecName: "inventory") pod "ac81a767-6eb1-4270-9d1f-824d8b6ce16b" (UID: "ac81a767-6eb1-4270-9d1f-824d8b6ce16b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:43:47 crc kubenswrapper[4826]: I0129 08:43:47.075441 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ac81a767-6eb1-4270-9d1f-824d8b6ce16b-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 08:43:47 crc kubenswrapper[4826]: I0129 08:43:47.075480 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btj9x\" (UniqueName: \"kubernetes.io/projected/ac81a767-6eb1-4270-9d1f-824d8b6ce16b-kube-api-access-btj9x\") on node \"crc\" DevicePath \"\"" Jan 29 08:43:47 crc kubenswrapper[4826]: I0129 08:43:47.075499 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac81a767-6eb1-4270-9d1f-824d8b6ce16b-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 08:43:47 crc kubenswrapper[4826]: I0129 08:43:47.371229 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-2h8kn" event={"ID":"ac81a767-6eb1-4270-9d1f-824d8b6ce16b","Type":"ContainerDied","Data":"d1a9c1ad8acf1a6b92efdd547c5802fae8ec716bf04c3c6798e7e8b48ac1b073"} Jan 29 08:43:47 crc kubenswrapper[4826]: I0129 08:43:47.371265 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1a9c1ad8acf1a6b92efdd547c5802fae8ec716bf04c3c6798e7e8b48ac1b073" Jan 29 08:43:47 crc kubenswrapper[4826]: I0129 08:43:47.371605 4826 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-2h8kn" Jan 29 08:43:47 crc kubenswrapper[4826]: I0129 08:43:47.464319 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-86qsz"] Jan 29 08:43:47 crc kubenswrapper[4826]: E0129 08:43:47.464908 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac81a767-6eb1-4270-9d1f-824d8b6ce16b" containerName="install-os-openstack-openstack-cell1" Jan 29 08:43:47 crc kubenswrapper[4826]: I0129 08:43:47.464938 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac81a767-6eb1-4270-9d1f-824d8b6ce16b" containerName="install-os-openstack-openstack-cell1" Jan 29 08:43:47 crc kubenswrapper[4826]: I0129 08:43:47.465236 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac81a767-6eb1-4270-9d1f-824d8b6ce16b" containerName="install-os-openstack-openstack-cell1" Jan 29 08:43:47 crc kubenswrapper[4826]: I0129 08:43:47.466202 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-86qsz" Jan 29 08:43:47 crc kubenswrapper[4826]: I0129 08:43:47.477743 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 29 08:43:47 crc kubenswrapper[4826]: I0129 08:43:47.477843 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 29 08:43:47 crc kubenswrapper[4826]: I0129 08:43:47.478005 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 08:43:47 crc kubenswrapper[4826]: I0129 08:43:47.478127 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-bz2p6" Jan 29 08:43:47 crc kubenswrapper[4826]: I0129 08:43:47.503770 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-86qsz"] Jan 29 08:43:47 crc kubenswrapper[4826]: I0129 08:43:47.600933 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xh8l\" (UniqueName: \"kubernetes.io/projected/67a18cd8-1442-487e-bd2a-92692793a734-kube-api-access-5xh8l\") pod \"configure-os-openstack-openstack-cell1-86qsz\" (UID: \"67a18cd8-1442-487e-bd2a-92692793a734\") " pod="openstack/configure-os-openstack-openstack-cell1-86qsz" Jan 29 08:43:47 crc kubenswrapper[4826]: I0129 08:43:47.601016 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/67a18cd8-1442-487e-bd2a-92692793a734-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-86qsz\" (UID: \"67a18cd8-1442-487e-bd2a-92692793a734\") " pod="openstack/configure-os-openstack-openstack-cell1-86qsz" Jan 29 08:43:47 crc kubenswrapper[4826]: I0129 08:43:47.601043 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67a18cd8-1442-487e-bd2a-92692793a734-inventory\") pod \"configure-os-openstack-openstack-cell1-86qsz\" (UID: \"67a18cd8-1442-487e-bd2a-92692793a734\") " pod="openstack/configure-os-openstack-openstack-cell1-86qsz" Jan 29 08:43:47 crc kubenswrapper[4826]: I0129 08:43:47.702879 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/67a18cd8-1442-487e-bd2a-92692793a734-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-86qsz\" (UID: \"67a18cd8-1442-487e-bd2a-92692793a734\") " pod="openstack/configure-os-openstack-openstack-cell1-86qsz" Jan 29 08:43:47 crc kubenswrapper[4826]: I0129 08:43:47.703195 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67a18cd8-1442-487e-bd2a-92692793a734-inventory\") pod \"configure-os-openstack-openstack-cell1-86qsz\" (UID: \"67a18cd8-1442-487e-bd2a-92692793a734\") " pod="openstack/configure-os-openstack-openstack-cell1-86qsz" Jan 29 08:43:47 crc kubenswrapper[4826]: I0129 08:43:47.703353 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xh8l\" (UniqueName: \"kubernetes.io/projected/67a18cd8-1442-487e-bd2a-92692793a734-kube-api-access-5xh8l\") pod \"configure-os-openstack-openstack-cell1-86qsz\" (UID: \"67a18cd8-1442-487e-bd2a-92692793a734\") " pod="openstack/configure-os-openstack-openstack-cell1-86qsz" Jan 29 08:43:47 crc kubenswrapper[4826]: I0129 08:43:47.711435 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/67a18cd8-1442-487e-bd2a-92692793a734-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-86qsz\" (UID: \"67a18cd8-1442-487e-bd2a-92692793a734\") " 
pod="openstack/configure-os-openstack-openstack-cell1-86qsz" Jan 29 08:43:47 crc kubenswrapper[4826]: I0129 08:43:47.712175 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67a18cd8-1442-487e-bd2a-92692793a734-inventory\") pod \"configure-os-openstack-openstack-cell1-86qsz\" (UID: \"67a18cd8-1442-487e-bd2a-92692793a734\") " pod="openstack/configure-os-openstack-openstack-cell1-86qsz" Jan 29 08:43:47 crc kubenswrapper[4826]: I0129 08:43:47.719681 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xh8l\" (UniqueName: \"kubernetes.io/projected/67a18cd8-1442-487e-bd2a-92692793a734-kube-api-access-5xh8l\") pod \"configure-os-openstack-openstack-cell1-86qsz\" (UID: \"67a18cd8-1442-487e-bd2a-92692793a734\") " pod="openstack/configure-os-openstack-openstack-cell1-86qsz" Jan 29 08:43:47 crc kubenswrapper[4826]: I0129 08:43:47.807407 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-86qsz" Jan 29 08:43:48 crc kubenswrapper[4826]: I0129 08:43:48.347699 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-86qsz"] Jan 29 08:43:48 crc kubenswrapper[4826]: W0129 08:43:48.350327 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67a18cd8_1442_487e_bd2a_92692793a734.slice/crio-7e8d6d48e8f26e664eb89f13c9ac2665ae03f3239e83db4e7fea2a52be7a42b8 WatchSource:0}: Error finding container 7e8d6d48e8f26e664eb89f13c9ac2665ae03f3239e83db4e7fea2a52be7a42b8: Status 404 returned error can't find the container with id 7e8d6d48e8f26e664eb89f13c9ac2665ae03f3239e83db4e7fea2a52be7a42b8 Jan 29 08:43:48 crc kubenswrapper[4826]: I0129 08:43:48.432148 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-86qsz" event={"ID":"67a18cd8-1442-487e-bd2a-92692793a734","Type":"ContainerStarted","Data":"7e8d6d48e8f26e664eb89f13c9ac2665ae03f3239e83db4e7fea2a52be7a42b8"} Jan 29 08:43:51 crc kubenswrapper[4826]: I0129 08:43:51.478468 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-86qsz" event={"ID":"67a18cd8-1442-487e-bd2a-92692793a734","Type":"ContainerStarted","Data":"6f7d9c933bbe341108f42dcf4e6f41acf036892b1a6548981b756b58655e283b"} Jan 29 08:43:51 crc kubenswrapper[4826]: I0129 08:43:51.500730 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-cell1-86qsz" podStartSLOduration=2.775679922 podStartE2EDuration="4.500701028s" podCreationTimestamp="2026-01-29 08:43:47 +0000 UTC" firstStartedPulling="2026-01-29 08:43:48.360019832 +0000 UTC m=+7212.221812911" lastFinishedPulling="2026-01-29 08:43:50.085040948 +0000 UTC m=+7213.946834017" observedRunningTime="2026-01-29 08:43:51.493086317 
+0000 UTC m=+7215.354879386" watchObservedRunningTime="2026-01-29 08:43:51.500701028 +0000 UTC m=+7215.362494137" Jan 29 08:43:54 crc kubenswrapper[4826]: I0129 08:43:54.959814 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7q462"] Jan 29 08:43:54 crc kubenswrapper[4826]: I0129 08:43:54.966571 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7q462" Jan 29 08:43:54 crc kubenswrapper[4826]: I0129 08:43:54.967531 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr7zd\" (UniqueName: \"kubernetes.io/projected/4611e4e1-60be-453f-bf65-27d3fca3f303-kube-api-access-dr7zd\") pod \"redhat-marketplace-7q462\" (UID: \"4611e4e1-60be-453f-bf65-27d3fca3f303\") " pod="openshift-marketplace/redhat-marketplace-7q462" Jan 29 08:43:54 crc kubenswrapper[4826]: I0129 08:43:54.967712 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4611e4e1-60be-453f-bf65-27d3fca3f303-utilities\") pod \"redhat-marketplace-7q462\" (UID: \"4611e4e1-60be-453f-bf65-27d3fca3f303\") " pod="openshift-marketplace/redhat-marketplace-7q462" Jan 29 08:43:54 crc kubenswrapper[4826]: I0129 08:43:54.967823 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4611e4e1-60be-453f-bf65-27d3fca3f303-catalog-content\") pod \"redhat-marketplace-7q462\" (UID: \"4611e4e1-60be-453f-bf65-27d3fca3f303\") " pod="openshift-marketplace/redhat-marketplace-7q462" Jan 29 08:43:54 crc kubenswrapper[4826]: I0129 08:43:54.970629 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7q462"] Jan 29 08:43:55 crc kubenswrapper[4826]: I0129 08:43:55.069563 4826 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dr7zd\" (UniqueName: \"kubernetes.io/projected/4611e4e1-60be-453f-bf65-27d3fca3f303-kube-api-access-dr7zd\") pod \"redhat-marketplace-7q462\" (UID: \"4611e4e1-60be-453f-bf65-27d3fca3f303\") " pod="openshift-marketplace/redhat-marketplace-7q462" Jan 29 08:43:55 crc kubenswrapper[4826]: I0129 08:43:55.069658 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4611e4e1-60be-453f-bf65-27d3fca3f303-utilities\") pod \"redhat-marketplace-7q462\" (UID: \"4611e4e1-60be-453f-bf65-27d3fca3f303\") " pod="openshift-marketplace/redhat-marketplace-7q462" Jan 29 08:43:55 crc kubenswrapper[4826]: I0129 08:43:55.069679 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4611e4e1-60be-453f-bf65-27d3fca3f303-catalog-content\") pod \"redhat-marketplace-7q462\" (UID: \"4611e4e1-60be-453f-bf65-27d3fca3f303\") " pod="openshift-marketplace/redhat-marketplace-7q462" Jan 29 08:43:55 crc kubenswrapper[4826]: I0129 08:43:55.070224 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4611e4e1-60be-453f-bf65-27d3fca3f303-catalog-content\") pod \"redhat-marketplace-7q462\" (UID: \"4611e4e1-60be-453f-bf65-27d3fca3f303\") " pod="openshift-marketplace/redhat-marketplace-7q462" Jan 29 08:43:55 crc kubenswrapper[4826]: I0129 08:43:55.071144 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4611e4e1-60be-453f-bf65-27d3fca3f303-utilities\") pod \"redhat-marketplace-7q462\" (UID: \"4611e4e1-60be-453f-bf65-27d3fca3f303\") " pod="openshift-marketplace/redhat-marketplace-7q462" Jan 29 08:43:55 crc kubenswrapper[4826]: I0129 08:43:55.092118 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dr7zd\" (UniqueName: \"kubernetes.io/projected/4611e4e1-60be-453f-bf65-27d3fca3f303-kube-api-access-dr7zd\") pod \"redhat-marketplace-7q462\" (UID: \"4611e4e1-60be-453f-bf65-27d3fca3f303\") " pod="openshift-marketplace/redhat-marketplace-7q462" Jan 29 08:43:55 crc kubenswrapper[4826]: I0129 08:43:55.312842 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7q462" Jan 29 08:43:55 crc kubenswrapper[4826]: I0129 08:43:55.870529 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7q462"] Jan 29 08:43:56 crc kubenswrapper[4826]: I0129 08:43:56.531790 4826 generic.go:334] "Generic (PLEG): container finished" podID="4611e4e1-60be-453f-bf65-27d3fca3f303" containerID="b85589ad49d4fb7a6ba3baf2c5217999903198e76ba3e70339d13f9dd6dfd303" exitCode=0 Jan 29 08:43:56 crc kubenswrapper[4826]: I0129 08:43:56.531855 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7q462" event={"ID":"4611e4e1-60be-453f-bf65-27d3fca3f303","Type":"ContainerDied","Data":"b85589ad49d4fb7a6ba3baf2c5217999903198e76ba3e70339d13f9dd6dfd303"} Jan 29 08:43:56 crc kubenswrapper[4826]: I0129 08:43:56.531922 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7q462" event={"ID":"4611e4e1-60be-453f-bf65-27d3fca3f303","Type":"ContainerStarted","Data":"91bd3c406a36bff8ca3f7462b492bdb852991e0edb6b6bfd1184324a84810ba1"} Jan 29 08:43:58 crc kubenswrapper[4826]: I0129 08:43:58.554485 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7q462" event={"ID":"4611e4e1-60be-453f-bf65-27d3fca3f303","Type":"ContainerStarted","Data":"8544a1d25537e900da034ab90eaf48183a929c33b8de31dc1d7b67eeb61e528d"} Jan 29 08:43:59 crc kubenswrapper[4826]: I0129 08:43:59.567436 4826 generic.go:334] "Generic (PLEG): container finished" 
podID="4611e4e1-60be-453f-bf65-27d3fca3f303" containerID="8544a1d25537e900da034ab90eaf48183a929c33b8de31dc1d7b67eeb61e528d" exitCode=0 Jan 29 08:43:59 crc kubenswrapper[4826]: I0129 08:43:59.567532 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7q462" event={"ID":"4611e4e1-60be-453f-bf65-27d3fca3f303","Type":"ContainerDied","Data":"8544a1d25537e900da034ab90eaf48183a929c33b8de31dc1d7b67eeb61e528d"} Jan 29 08:44:00 crc kubenswrapper[4826]: I0129 08:44:00.586096 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7q462" event={"ID":"4611e4e1-60be-453f-bf65-27d3fca3f303","Type":"ContainerStarted","Data":"6259f4865ec472c8737fcc3674eb98d8768d2a5d67470bfe39821591ef2d335a"} Jan 29 08:44:00 crc kubenswrapper[4826]: I0129 08:44:00.615533 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7q462" podStartSLOduration=3.041724852 podStartE2EDuration="6.615512626s" podCreationTimestamp="2026-01-29 08:43:54 +0000 UTC" firstStartedPulling="2026-01-29 08:43:56.536833547 +0000 UTC m=+7220.398626626" lastFinishedPulling="2026-01-29 08:44:00.110621331 +0000 UTC m=+7223.972414400" observedRunningTime="2026-01-29 08:44:00.608906272 +0000 UTC m=+7224.470699341" watchObservedRunningTime="2026-01-29 08:44:00.615512626 +0000 UTC m=+7224.477305695" Jan 29 08:44:05 crc kubenswrapper[4826]: I0129 08:44:05.313994 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7q462" Jan 29 08:44:05 crc kubenswrapper[4826]: I0129 08:44:05.315443 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7q462" Jan 29 08:44:05 crc kubenswrapper[4826]: I0129 08:44:05.363414 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7q462" Jan 29 08:44:05 crc 
kubenswrapper[4826]: I0129 08:44:05.656709 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:44:05 crc kubenswrapper[4826]: I0129 08:44:05.656765 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:44:05 crc kubenswrapper[4826]: I0129 08:44:05.684439 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7q462" Jan 29 08:44:05 crc kubenswrapper[4826]: I0129 08:44:05.733823 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7q462"] Jan 29 08:44:07 crc kubenswrapper[4826]: I0129 08:44:07.649285 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7q462" podUID="4611e4e1-60be-453f-bf65-27d3fca3f303" containerName="registry-server" containerID="cri-o://6259f4865ec472c8737fcc3674eb98d8768d2a5d67470bfe39821591ef2d335a" gracePeriod=2 Jan 29 08:44:08 crc kubenswrapper[4826]: I0129 08:44:08.079538 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7q462" Jan 29 08:44:08 crc kubenswrapper[4826]: I0129 08:44:08.169359 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4611e4e1-60be-453f-bf65-27d3fca3f303-catalog-content\") pod \"4611e4e1-60be-453f-bf65-27d3fca3f303\" (UID: \"4611e4e1-60be-453f-bf65-27d3fca3f303\") " Jan 29 08:44:08 crc kubenswrapper[4826]: I0129 08:44:08.169478 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4611e4e1-60be-453f-bf65-27d3fca3f303-utilities\") pod \"4611e4e1-60be-453f-bf65-27d3fca3f303\" (UID: \"4611e4e1-60be-453f-bf65-27d3fca3f303\") " Jan 29 08:44:08 crc kubenswrapper[4826]: I0129 08:44:08.169620 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dr7zd\" (UniqueName: \"kubernetes.io/projected/4611e4e1-60be-453f-bf65-27d3fca3f303-kube-api-access-dr7zd\") pod \"4611e4e1-60be-453f-bf65-27d3fca3f303\" (UID: \"4611e4e1-60be-453f-bf65-27d3fca3f303\") " Jan 29 08:44:08 crc kubenswrapper[4826]: I0129 08:44:08.170433 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4611e4e1-60be-453f-bf65-27d3fca3f303-utilities" (OuterVolumeSpecName: "utilities") pod "4611e4e1-60be-453f-bf65-27d3fca3f303" (UID: "4611e4e1-60be-453f-bf65-27d3fca3f303"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:44:08 crc kubenswrapper[4826]: I0129 08:44:08.177168 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4611e4e1-60be-453f-bf65-27d3fca3f303-kube-api-access-dr7zd" (OuterVolumeSpecName: "kube-api-access-dr7zd") pod "4611e4e1-60be-453f-bf65-27d3fca3f303" (UID: "4611e4e1-60be-453f-bf65-27d3fca3f303"). InnerVolumeSpecName "kube-api-access-dr7zd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:44:08 crc kubenswrapper[4826]: I0129 08:44:08.192974 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4611e4e1-60be-453f-bf65-27d3fca3f303-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4611e4e1-60be-453f-bf65-27d3fca3f303" (UID: "4611e4e1-60be-453f-bf65-27d3fca3f303"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:44:08 crc kubenswrapper[4826]: I0129 08:44:08.272013 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4611e4e1-60be-453f-bf65-27d3fca3f303-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:44:08 crc kubenswrapper[4826]: I0129 08:44:08.272050 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dr7zd\" (UniqueName: \"kubernetes.io/projected/4611e4e1-60be-453f-bf65-27d3fca3f303-kube-api-access-dr7zd\") on node \"crc\" DevicePath \"\"" Jan 29 08:44:08 crc kubenswrapper[4826]: I0129 08:44:08.272061 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4611e4e1-60be-453f-bf65-27d3fca3f303-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:44:08 crc kubenswrapper[4826]: I0129 08:44:08.664108 4826 generic.go:334] "Generic (PLEG): container finished" podID="4611e4e1-60be-453f-bf65-27d3fca3f303" containerID="6259f4865ec472c8737fcc3674eb98d8768d2a5d67470bfe39821591ef2d335a" exitCode=0 Jan 29 08:44:08 crc kubenswrapper[4826]: I0129 08:44:08.664150 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7q462" event={"ID":"4611e4e1-60be-453f-bf65-27d3fca3f303","Type":"ContainerDied","Data":"6259f4865ec472c8737fcc3674eb98d8768d2a5d67470bfe39821591ef2d335a"} Jan 29 08:44:08 crc kubenswrapper[4826]: I0129 08:44:08.664181 4826 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-7q462" event={"ID":"4611e4e1-60be-453f-bf65-27d3fca3f303","Type":"ContainerDied","Data":"91bd3c406a36bff8ca3f7462b492bdb852991e0edb6b6bfd1184324a84810ba1"} Jan 29 08:44:08 crc kubenswrapper[4826]: I0129 08:44:08.664200 4826 scope.go:117] "RemoveContainer" containerID="6259f4865ec472c8737fcc3674eb98d8768d2a5d67470bfe39821591ef2d335a" Jan 29 08:44:08 crc kubenswrapper[4826]: I0129 08:44:08.664231 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7q462" Jan 29 08:44:08 crc kubenswrapper[4826]: I0129 08:44:08.693460 4826 scope.go:117] "RemoveContainer" containerID="8544a1d25537e900da034ab90eaf48183a929c33b8de31dc1d7b67eeb61e528d" Jan 29 08:44:08 crc kubenswrapper[4826]: I0129 08:44:08.699473 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7q462"] Jan 29 08:44:08 crc kubenswrapper[4826]: I0129 08:44:08.709314 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7q462"] Jan 29 08:44:08 crc kubenswrapper[4826]: I0129 08:44:08.718132 4826 scope.go:117] "RemoveContainer" containerID="b85589ad49d4fb7a6ba3baf2c5217999903198e76ba3e70339d13f9dd6dfd303" Jan 29 08:44:08 crc kubenswrapper[4826]: I0129 08:44:08.765380 4826 scope.go:117] "RemoveContainer" containerID="6259f4865ec472c8737fcc3674eb98d8768d2a5d67470bfe39821591ef2d335a" Jan 29 08:44:08 crc kubenswrapper[4826]: E0129 08:44:08.766104 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6259f4865ec472c8737fcc3674eb98d8768d2a5d67470bfe39821591ef2d335a\": container with ID starting with 6259f4865ec472c8737fcc3674eb98d8768d2a5d67470bfe39821591ef2d335a not found: ID does not exist" containerID="6259f4865ec472c8737fcc3674eb98d8768d2a5d67470bfe39821591ef2d335a" Jan 29 08:44:08 crc kubenswrapper[4826]: I0129 08:44:08.766163 4826 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6259f4865ec472c8737fcc3674eb98d8768d2a5d67470bfe39821591ef2d335a"} err="failed to get container status \"6259f4865ec472c8737fcc3674eb98d8768d2a5d67470bfe39821591ef2d335a\": rpc error: code = NotFound desc = could not find container \"6259f4865ec472c8737fcc3674eb98d8768d2a5d67470bfe39821591ef2d335a\": container with ID starting with 6259f4865ec472c8737fcc3674eb98d8768d2a5d67470bfe39821591ef2d335a not found: ID does not exist" Jan 29 08:44:08 crc kubenswrapper[4826]: I0129 08:44:08.766195 4826 scope.go:117] "RemoveContainer" containerID="8544a1d25537e900da034ab90eaf48183a929c33b8de31dc1d7b67eeb61e528d" Jan 29 08:44:08 crc kubenswrapper[4826]: E0129 08:44:08.766688 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8544a1d25537e900da034ab90eaf48183a929c33b8de31dc1d7b67eeb61e528d\": container with ID starting with 8544a1d25537e900da034ab90eaf48183a929c33b8de31dc1d7b67eeb61e528d not found: ID does not exist" containerID="8544a1d25537e900da034ab90eaf48183a929c33b8de31dc1d7b67eeb61e528d" Jan 29 08:44:08 crc kubenswrapper[4826]: I0129 08:44:08.766762 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8544a1d25537e900da034ab90eaf48183a929c33b8de31dc1d7b67eeb61e528d"} err="failed to get container status \"8544a1d25537e900da034ab90eaf48183a929c33b8de31dc1d7b67eeb61e528d\": rpc error: code = NotFound desc = could not find container \"8544a1d25537e900da034ab90eaf48183a929c33b8de31dc1d7b67eeb61e528d\": container with ID starting with 8544a1d25537e900da034ab90eaf48183a929c33b8de31dc1d7b67eeb61e528d not found: ID does not exist" Jan 29 08:44:08 crc kubenswrapper[4826]: I0129 08:44:08.766805 4826 scope.go:117] "RemoveContainer" containerID="b85589ad49d4fb7a6ba3baf2c5217999903198e76ba3e70339d13f9dd6dfd303" Jan 29 08:44:08 crc kubenswrapper[4826]: E0129 
08:44:08.767185 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b85589ad49d4fb7a6ba3baf2c5217999903198e76ba3e70339d13f9dd6dfd303\": container with ID starting with b85589ad49d4fb7a6ba3baf2c5217999903198e76ba3e70339d13f9dd6dfd303 not found: ID does not exist" containerID="b85589ad49d4fb7a6ba3baf2c5217999903198e76ba3e70339d13f9dd6dfd303" Jan 29 08:44:08 crc kubenswrapper[4826]: I0129 08:44:08.767221 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b85589ad49d4fb7a6ba3baf2c5217999903198e76ba3e70339d13f9dd6dfd303"} err="failed to get container status \"b85589ad49d4fb7a6ba3baf2c5217999903198e76ba3e70339d13f9dd6dfd303\": rpc error: code = NotFound desc = could not find container \"b85589ad49d4fb7a6ba3baf2c5217999903198e76ba3e70339d13f9dd6dfd303\": container with ID starting with b85589ad49d4fb7a6ba3baf2c5217999903198e76ba3e70339d13f9dd6dfd303 not found: ID does not exist" Jan 29 08:44:08 crc kubenswrapper[4826]: I0129 08:44:08.822705 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4611e4e1-60be-453f-bf65-27d3fca3f303" path="/var/lib/kubelet/pods/4611e4e1-60be-453f-bf65-27d3fca3f303/volumes" Jan 29 08:44:33 crc kubenswrapper[4826]: I0129 08:44:33.894521 4826 generic.go:334] "Generic (PLEG): container finished" podID="67a18cd8-1442-487e-bd2a-92692793a734" containerID="6f7d9c933bbe341108f42dcf4e6f41acf036892b1a6548981b756b58655e283b" exitCode=0 Jan 29 08:44:33 crc kubenswrapper[4826]: I0129 08:44:33.894562 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-86qsz" event={"ID":"67a18cd8-1442-487e-bd2a-92692793a734","Type":"ContainerDied","Data":"6f7d9c933bbe341108f42dcf4e6f41acf036892b1a6548981b756b58655e283b"} Jan 29 08:44:35 crc kubenswrapper[4826]: I0129 08:44:35.284277 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-86qsz" Jan 29 08:44:35 crc kubenswrapper[4826]: I0129 08:44:35.451421 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xh8l\" (UniqueName: \"kubernetes.io/projected/67a18cd8-1442-487e-bd2a-92692793a734-kube-api-access-5xh8l\") pod \"67a18cd8-1442-487e-bd2a-92692793a734\" (UID: \"67a18cd8-1442-487e-bd2a-92692793a734\") " Jan 29 08:44:35 crc kubenswrapper[4826]: I0129 08:44:35.452530 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/67a18cd8-1442-487e-bd2a-92692793a734-ssh-key-openstack-cell1\") pod \"67a18cd8-1442-487e-bd2a-92692793a734\" (UID: \"67a18cd8-1442-487e-bd2a-92692793a734\") " Jan 29 08:44:35 crc kubenswrapper[4826]: I0129 08:44:35.452689 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67a18cd8-1442-487e-bd2a-92692793a734-inventory\") pod \"67a18cd8-1442-487e-bd2a-92692793a734\" (UID: \"67a18cd8-1442-487e-bd2a-92692793a734\") " Jan 29 08:44:35 crc kubenswrapper[4826]: I0129 08:44:35.457112 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67a18cd8-1442-487e-bd2a-92692793a734-kube-api-access-5xh8l" (OuterVolumeSpecName: "kube-api-access-5xh8l") pod "67a18cd8-1442-487e-bd2a-92692793a734" (UID: "67a18cd8-1442-487e-bd2a-92692793a734"). InnerVolumeSpecName "kube-api-access-5xh8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:44:35 crc kubenswrapper[4826]: I0129 08:44:35.479950 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67a18cd8-1442-487e-bd2a-92692793a734-inventory" (OuterVolumeSpecName: "inventory") pod "67a18cd8-1442-487e-bd2a-92692793a734" (UID: "67a18cd8-1442-487e-bd2a-92692793a734"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:44:35 crc kubenswrapper[4826]: I0129 08:44:35.485982 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67a18cd8-1442-487e-bd2a-92692793a734-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "67a18cd8-1442-487e-bd2a-92692793a734" (UID: "67a18cd8-1442-487e-bd2a-92692793a734"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:44:35 crc kubenswrapper[4826]: I0129 08:44:35.554632 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xh8l\" (UniqueName: \"kubernetes.io/projected/67a18cd8-1442-487e-bd2a-92692793a734-kube-api-access-5xh8l\") on node \"crc\" DevicePath \"\"" Jan 29 08:44:35 crc kubenswrapper[4826]: I0129 08:44:35.554677 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/67a18cd8-1442-487e-bd2a-92692793a734-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 08:44:35 crc kubenswrapper[4826]: I0129 08:44:35.554692 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67a18cd8-1442-487e-bd2a-92692793a734-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 08:44:35 crc kubenswrapper[4826]: I0129 08:44:35.656182 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:44:35 crc kubenswrapper[4826]: I0129 08:44:35.656282 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:44:35 crc kubenswrapper[4826]: I0129 08:44:35.915446 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-86qsz" event={"ID":"67a18cd8-1442-487e-bd2a-92692793a734","Type":"ContainerDied","Data":"7e8d6d48e8f26e664eb89f13c9ac2665ae03f3239e83db4e7fea2a52be7a42b8"} Jan 29 08:44:35 crc kubenswrapper[4826]: I0129 08:44:35.915494 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e8d6d48e8f26e664eb89f13c9ac2665ae03f3239e83db4e7fea2a52be7a42b8" Jan 29 08:44:35 crc kubenswrapper[4826]: I0129 08:44:35.915966 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-86qsz" Jan 29 08:44:36 crc kubenswrapper[4826]: I0129 08:44:36.021615 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-openstack-7h28l"] Jan 29 08:44:36 crc kubenswrapper[4826]: E0129 08:44:36.022418 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4611e4e1-60be-453f-bf65-27d3fca3f303" containerName="extract-utilities" Jan 29 08:44:36 crc kubenswrapper[4826]: I0129 08:44:36.022442 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="4611e4e1-60be-453f-bf65-27d3fca3f303" containerName="extract-utilities" Jan 29 08:44:36 crc kubenswrapper[4826]: E0129 08:44:36.022466 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67a18cd8-1442-487e-bd2a-92692793a734" containerName="configure-os-openstack-openstack-cell1" Jan 29 08:44:36 crc kubenswrapper[4826]: I0129 08:44:36.022476 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="67a18cd8-1442-487e-bd2a-92692793a734" containerName="configure-os-openstack-openstack-cell1" Jan 29 08:44:36 crc kubenswrapper[4826]: E0129 08:44:36.022490 4826 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4611e4e1-60be-453f-bf65-27d3fca3f303" containerName="registry-server" Jan 29 08:44:36 crc kubenswrapper[4826]: I0129 08:44:36.022499 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="4611e4e1-60be-453f-bf65-27d3fca3f303" containerName="registry-server" Jan 29 08:44:36 crc kubenswrapper[4826]: E0129 08:44:36.022522 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4611e4e1-60be-453f-bf65-27d3fca3f303" containerName="extract-content" Jan 29 08:44:36 crc kubenswrapper[4826]: I0129 08:44:36.022529 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="4611e4e1-60be-453f-bf65-27d3fca3f303" containerName="extract-content" Jan 29 08:44:36 crc kubenswrapper[4826]: I0129 08:44:36.022825 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="4611e4e1-60be-453f-bf65-27d3fca3f303" containerName="registry-server" Jan 29 08:44:36 crc kubenswrapper[4826]: I0129 08:44:36.022858 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="67a18cd8-1442-487e-bd2a-92692793a734" containerName="configure-os-openstack-openstack-cell1" Jan 29 08:44:36 crc kubenswrapper[4826]: I0129 08:44:36.023787 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-7h28l" Jan 29 08:44:36 crc kubenswrapper[4826]: I0129 08:44:36.026330 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 29 08:44:36 crc kubenswrapper[4826]: I0129 08:44:36.026556 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 29 08:44:36 crc kubenswrapper[4826]: I0129 08:44:36.026745 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 08:44:36 crc kubenswrapper[4826]: I0129 08:44:36.026838 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-bz2p6" Jan 29 08:44:36 crc kubenswrapper[4826]: I0129 08:44:36.032377 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-7h28l"] Jan 29 08:44:36 crc kubenswrapper[4826]: I0129 08:44:36.062946 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/de556388-7d07-4ee5-9e5f-d47c47f8437e-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-7h28l\" (UID: \"de556388-7d07-4ee5-9e5f-d47c47f8437e\") " pod="openstack/ssh-known-hosts-openstack-7h28l" Jan 29 08:44:36 crc kubenswrapper[4826]: I0129 08:44:36.063022 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/de556388-7d07-4ee5-9e5f-d47c47f8437e-inventory-0\") pod \"ssh-known-hosts-openstack-7h28l\" (UID: \"de556388-7d07-4ee5-9e5f-d47c47f8437e\") " pod="openstack/ssh-known-hosts-openstack-7h28l" Jan 29 08:44:36 crc kubenswrapper[4826]: I0129 08:44:36.063055 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xssw2\" (UniqueName: 
\"kubernetes.io/projected/de556388-7d07-4ee5-9e5f-d47c47f8437e-kube-api-access-xssw2\") pod \"ssh-known-hosts-openstack-7h28l\" (UID: \"de556388-7d07-4ee5-9e5f-d47c47f8437e\") " pod="openstack/ssh-known-hosts-openstack-7h28l" Jan 29 08:44:36 crc kubenswrapper[4826]: I0129 08:44:36.165398 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/de556388-7d07-4ee5-9e5f-d47c47f8437e-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-7h28l\" (UID: \"de556388-7d07-4ee5-9e5f-d47c47f8437e\") " pod="openstack/ssh-known-hosts-openstack-7h28l" Jan 29 08:44:36 crc kubenswrapper[4826]: I0129 08:44:36.165526 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/de556388-7d07-4ee5-9e5f-d47c47f8437e-inventory-0\") pod \"ssh-known-hosts-openstack-7h28l\" (UID: \"de556388-7d07-4ee5-9e5f-d47c47f8437e\") " pod="openstack/ssh-known-hosts-openstack-7h28l" Jan 29 08:44:36 crc kubenswrapper[4826]: I0129 08:44:36.165602 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xssw2\" (UniqueName: \"kubernetes.io/projected/de556388-7d07-4ee5-9e5f-d47c47f8437e-kube-api-access-xssw2\") pod \"ssh-known-hosts-openstack-7h28l\" (UID: \"de556388-7d07-4ee5-9e5f-d47c47f8437e\") " pod="openstack/ssh-known-hosts-openstack-7h28l" Jan 29 08:44:36 crc kubenswrapper[4826]: I0129 08:44:36.170271 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/de556388-7d07-4ee5-9e5f-d47c47f8437e-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-7h28l\" (UID: \"de556388-7d07-4ee5-9e5f-d47c47f8437e\") " pod="openstack/ssh-known-hosts-openstack-7h28l" Jan 29 08:44:36 crc kubenswrapper[4826]: I0129 08:44:36.171370 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: 
\"kubernetes.io/secret/de556388-7d07-4ee5-9e5f-d47c47f8437e-inventory-0\") pod \"ssh-known-hosts-openstack-7h28l\" (UID: \"de556388-7d07-4ee5-9e5f-d47c47f8437e\") " pod="openstack/ssh-known-hosts-openstack-7h28l" Jan 29 08:44:36 crc kubenswrapper[4826]: I0129 08:44:36.195896 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xssw2\" (UniqueName: \"kubernetes.io/projected/de556388-7d07-4ee5-9e5f-d47c47f8437e-kube-api-access-xssw2\") pod \"ssh-known-hosts-openstack-7h28l\" (UID: \"de556388-7d07-4ee5-9e5f-d47c47f8437e\") " pod="openstack/ssh-known-hosts-openstack-7h28l" Jan 29 08:44:36 crc kubenswrapper[4826]: I0129 08:44:36.338925 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-7h28l" Jan 29 08:44:36 crc kubenswrapper[4826]: I0129 08:44:36.906443 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-7h28l"] Jan 29 08:44:36 crc kubenswrapper[4826]: I0129 08:44:36.926532 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-7h28l" event={"ID":"de556388-7d07-4ee5-9e5f-d47c47f8437e","Type":"ContainerStarted","Data":"20fd4365aabc17e2fc5db7ad37407f08dca7c468a0894fccfa87bbfd086852fa"} Jan 29 08:44:38 crc kubenswrapper[4826]: I0129 08:44:38.079472 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 08:44:38 crc kubenswrapper[4826]: I0129 08:44:38.946651 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-7h28l" event={"ID":"de556388-7d07-4ee5-9e5f-d47c47f8437e","Type":"ContainerStarted","Data":"da27a7599a5f7c8f25b5d5c23366259e01061c991a404cbb1d77a910cdb92c1b"} Jan 29 08:44:38 crc kubenswrapper[4826]: I0129 08:44:38.969366 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-openstack-7h28l" podStartSLOduration=2.795003872 
podStartE2EDuration="3.969347944s" podCreationTimestamp="2026-01-29 08:44:35 +0000 UTC" firstStartedPulling="2026-01-29 08:44:36.900837657 +0000 UTC m=+7260.762630726" lastFinishedPulling="2026-01-29 08:44:38.075181719 +0000 UTC m=+7261.936974798" observedRunningTime="2026-01-29 08:44:38.966724305 +0000 UTC m=+7262.828517374" watchObservedRunningTime="2026-01-29 08:44:38.969347944 +0000 UTC m=+7262.831141013" Jan 29 08:44:48 crc kubenswrapper[4826]: I0129 08:44:48.038369 4826 generic.go:334] "Generic (PLEG): container finished" podID="de556388-7d07-4ee5-9e5f-d47c47f8437e" containerID="da27a7599a5f7c8f25b5d5c23366259e01061c991a404cbb1d77a910cdb92c1b" exitCode=0 Jan 29 08:44:48 crc kubenswrapper[4826]: I0129 08:44:48.038401 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-7h28l" event={"ID":"de556388-7d07-4ee5-9e5f-d47c47f8437e","Type":"ContainerDied","Data":"da27a7599a5f7c8f25b5d5c23366259e01061c991a404cbb1d77a910cdb92c1b"} Jan 29 08:44:49 crc kubenswrapper[4826]: I0129 08:44:49.519244 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-7h28l" Jan 29 08:44:49 crc kubenswrapper[4826]: I0129 08:44:49.649202 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/de556388-7d07-4ee5-9e5f-d47c47f8437e-inventory-0\") pod \"de556388-7d07-4ee5-9e5f-d47c47f8437e\" (UID: \"de556388-7d07-4ee5-9e5f-d47c47f8437e\") " Jan 29 08:44:49 crc kubenswrapper[4826]: I0129 08:44:49.649383 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/de556388-7d07-4ee5-9e5f-d47c47f8437e-ssh-key-openstack-cell1\") pod \"de556388-7d07-4ee5-9e5f-d47c47f8437e\" (UID: \"de556388-7d07-4ee5-9e5f-d47c47f8437e\") " Jan 29 08:44:49 crc kubenswrapper[4826]: I0129 08:44:49.649617 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xssw2\" (UniqueName: \"kubernetes.io/projected/de556388-7d07-4ee5-9e5f-d47c47f8437e-kube-api-access-xssw2\") pod \"de556388-7d07-4ee5-9e5f-d47c47f8437e\" (UID: \"de556388-7d07-4ee5-9e5f-d47c47f8437e\") " Jan 29 08:44:49 crc kubenswrapper[4826]: I0129 08:44:49.654638 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de556388-7d07-4ee5-9e5f-d47c47f8437e-kube-api-access-xssw2" (OuterVolumeSpecName: "kube-api-access-xssw2") pod "de556388-7d07-4ee5-9e5f-d47c47f8437e" (UID: "de556388-7d07-4ee5-9e5f-d47c47f8437e"). InnerVolumeSpecName "kube-api-access-xssw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:44:49 crc kubenswrapper[4826]: I0129 08:44:49.676528 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de556388-7d07-4ee5-9e5f-d47c47f8437e-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "de556388-7d07-4ee5-9e5f-d47c47f8437e" (UID: "de556388-7d07-4ee5-9e5f-d47c47f8437e"). 
InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:44:49 crc kubenswrapper[4826]: I0129 08:44:49.677577 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de556388-7d07-4ee5-9e5f-d47c47f8437e-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "de556388-7d07-4ee5-9e5f-d47c47f8437e" (UID: "de556388-7d07-4ee5-9e5f-d47c47f8437e"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:44:49 crc kubenswrapper[4826]: I0129 08:44:49.755901 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xssw2\" (UniqueName: \"kubernetes.io/projected/de556388-7d07-4ee5-9e5f-d47c47f8437e-kube-api-access-xssw2\") on node \"crc\" DevicePath \"\"" Jan 29 08:44:49 crc kubenswrapper[4826]: I0129 08:44:49.755941 4826 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/de556388-7d07-4ee5-9e5f-d47c47f8437e-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 29 08:44:49 crc kubenswrapper[4826]: I0129 08:44:49.755955 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/de556388-7d07-4ee5-9e5f-d47c47f8437e-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 08:44:50 crc kubenswrapper[4826]: I0129 08:44:50.069682 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-7h28l" event={"ID":"de556388-7d07-4ee5-9e5f-d47c47f8437e","Type":"ContainerDied","Data":"20fd4365aabc17e2fc5db7ad37407f08dca7c468a0894fccfa87bbfd086852fa"} Jan 29 08:44:50 crc kubenswrapper[4826]: I0129 08:44:50.069738 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20fd4365aabc17e2fc5db7ad37407f08dca7c468a0894fccfa87bbfd086852fa" Jan 29 08:44:50 crc kubenswrapper[4826]: I0129 08:44:50.070165 4826 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-7h28l" Jan 29 08:44:50 crc kubenswrapper[4826]: I0129 08:44:50.143947 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-openstack-openstack-cell1-rzd7v"] Jan 29 08:44:50 crc kubenswrapper[4826]: E0129 08:44:50.144506 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de556388-7d07-4ee5-9e5f-d47c47f8437e" containerName="ssh-known-hosts-openstack" Jan 29 08:44:50 crc kubenswrapper[4826]: I0129 08:44:50.144535 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="de556388-7d07-4ee5-9e5f-d47c47f8437e" containerName="ssh-known-hosts-openstack" Jan 29 08:44:50 crc kubenswrapper[4826]: I0129 08:44:50.144796 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="de556388-7d07-4ee5-9e5f-d47c47f8437e" containerName="ssh-known-hosts-openstack" Jan 29 08:44:50 crc kubenswrapper[4826]: I0129 08:44:50.146570 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-rzd7v" Jan 29 08:44:50 crc kubenswrapper[4826]: I0129 08:44:50.152114 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 08:44:50 crc kubenswrapper[4826]: I0129 08:44:50.152473 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 29 08:44:50 crc kubenswrapper[4826]: I0129 08:44:50.152876 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-bz2p6" Jan 29 08:44:50 crc kubenswrapper[4826]: I0129 08:44:50.153097 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 29 08:44:50 crc kubenswrapper[4826]: I0129 08:44:50.163151 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-rzd7v"] Jan 29 08:44:50 crc kubenswrapper[4826]: I0129 
08:44:50.269071 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6557b04-094c-42ff-ab7f-0f42b40fc942-inventory\") pod \"run-os-openstack-openstack-cell1-rzd7v\" (UID: \"e6557b04-094c-42ff-ab7f-0f42b40fc942\") " pod="openstack/run-os-openstack-openstack-cell1-rzd7v" Jan 29 08:44:50 crc kubenswrapper[4826]: I0129 08:44:50.269322 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e6557b04-094c-42ff-ab7f-0f42b40fc942-ssh-key-openstack-cell1\") pod \"run-os-openstack-openstack-cell1-rzd7v\" (UID: \"e6557b04-094c-42ff-ab7f-0f42b40fc942\") " pod="openstack/run-os-openstack-openstack-cell1-rzd7v" Jan 29 08:44:50 crc kubenswrapper[4826]: I0129 08:44:50.269681 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6zg4\" (UniqueName: \"kubernetes.io/projected/e6557b04-094c-42ff-ab7f-0f42b40fc942-kube-api-access-r6zg4\") pod \"run-os-openstack-openstack-cell1-rzd7v\" (UID: \"e6557b04-094c-42ff-ab7f-0f42b40fc942\") " pod="openstack/run-os-openstack-openstack-cell1-rzd7v" Jan 29 08:44:50 crc kubenswrapper[4826]: I0129 08:44:50.371621 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6zg4\" (UniqueName: \"kubernetes.io/projected/e6557b04-094c-42ff-ab7f-0f42b40fc942-kube-api-access-r6zg4\") pod \"run-os-openstack-openstack-cell1-rzd7v\" (UID: \"e6557b04-094c-42ff-ab7f-0f42b40fc942\") " pod="openstack/run-os-openstack-openstack-cell1-rzd7v" Jan 29 08:44:50 crc kubenswrapper[4826]: I0129 08:44:50.371981 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6557b04-094c-42ff-ab7f-0f42b40fc942-inventory\") pod \"run-os-openstack-openstack-cell1-rzd7v\" (UID: 
\"e6557b04-094c-42ff-ab7f-0f42b40fc942\") " pod="openstack/run-os-openstack-openstack-cell1-rzd7v" Jan 29 08:44:50 crc kubenswrapper[4826]: I0129 08:44:50.372192 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e6557b04-094c-42ff-ab7f-0f42b40fc942-ssh-key-openstack-cell1\") pod \"run-os-openstack-openstack-cell1-rzd7v\" (UID: \"e6557b04-094c-42ff-ab7f-0f42b40fc942\") " pod="openstack/run-os-openstack-openstack-cell1-rzd7v" Jan 29 08:44:50 crc kubenswrapper[4826]: I0129 08:44:50.377767 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e6557b04-094c-42ff-ab7f-0f42b40fc942-ssh-key-openstack-cell1\") pod \"run-os-openstack-openstack-cell1-rzd7v\" (UID: \"e6557b04-094c-42ff-ab7f-0f42b40fc942\") " pod="openstack/run-os-openstack-openstack-cell1-rzd7v" Jan 29 08:44:50 crc kubenswrapper[4826]: I0129 08:44:50.377785 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6557b04-094c-42ff-ab7f-0f42b40fc942-inventory\") pod \"run-os-openstack-openstack-cell1-rzd7v\" (UID: \"e6557b04-094c-42ff-ab7f-0f42b40fc942\") " pod="openstack/run-os-openstack-openstack-cell1-rzd7v" Jan 29 08:44:50 crc kubenswrapper[4826]: I0129 08:44:50.390651 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6zg4\" (UniqueName: \"kubernetes.io/projected/e6557b04-094c-42ff-ab7f-0f42b40fc942-kube-api-access-r6zg4\") pod \"run-os-openstack-openstack-cell1-rzd7v\" (UID: \"e6557b04-094c-42ff-ab7f-0f42b40fc942\") " pod="openstack/run-os-openstack-openstack-cell1-rzd7v" Jan 29 08:44:50 crc kubenswrapper[4826]: I0129 08:44:50.469642 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-rzd7v" Jan 29 08:44:51 crc kubenswrapper[4826]: I0129 08:44:51.064951 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-rzd7v"] Jan 29 08:44:51 crc kubenswrapper[4826]: I0129 08:44:51.080512 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-rzd7v" event={"ID":"e6557b04-094c-42ff-ab7f-0f42b40fc942","Type":"ContainerStarted","Data":"9f866e367cc3bda0667c2955384bb71f703e22ab7d14fab24fc9851cc330c7cd"} Jan 29 08:44:53 crc kubenswrapper[4826]: I0129 08:44:53.104358 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-rzd7v" event={"ID":"e6557b04-094c-42ff-ab7f-0f42b40fc942","Type":"ContainerStarted","Data":"de2b8409f2082cd160bcda6b25c92d261ec91e5bd95fcf4d60a3ec38d1792dff"} Jan 29 08:44:53 crc kubenswrapper[4826]: I0129 08:44:53.125559 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-openstack-openstack-cell1-rzd7v" podStartSLOduration=2.109053853 podStartE2EDuration="3.125539022s" podCreationTimestamp="2026-01-29 08:44:50 +0000 UTC" firstStartedPulling="2026-01-29 08:44:51.071766134 +0000 UTC m=+7274.933559213" lastFinishedPulling="2026-01-29 08:44:52.088251313 +0000 UTC m=+7275.950044382" observedRunningTime="2026-01-29 08:44:53.121321781 +0000 UTC m=+7276.983114860" watchObservedRunningTime="2026-01-29 08:44:53.125539022 +0000 UTC m=+7276.987332101" Jan 29 08:45:00 crc kubenswrapper[4826]: I0129 08:45:00.145025 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494605-xqpp7"] Jan 29 08:45:00 crc kubenswrapper[4826]: I0129 08:45:00.147325 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-xqpp7" Jan 29 08:45:00 crc kubenswrapper[4826]: I0129 08:45:00.149584 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 08:45:00 crc kubenswrapper[4826]: I0129 08:45:00.149733 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 08:45:00 crc kubenswrapper[4826]: I0129 08:45:00.156663 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494605-xqpp7"] Jan 29 08:45:00 crc kubenswrapper[4826]: I0129 08:45:00.184919 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbbnc\" (UniqueName: \"kubernetes.io/projected/89a7c15c-7c85-4698-92eb-9041de234300-kube-api-access-kbbnc\") pod \"collect-profiles-29494605-xqpp7\" (UID: \"89a7c15c-7c85-4698-92eb-9041de234300\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-xqpp7" Jan 29 08:45:00 crc kubenswrapper[4826]: I0129 08:45:00.184981 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89a7c15c-7c85-4698-92eb-9041de234300-config-volume\") pod \"collect-profiles-29494605-xqpp7\" (UID: \"89a7c15c-7c85-4698-92eb-9041de234300\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-xqpp7" Jan 29 08:45:00 crc kubenswrapper[4826]: I0129 08:45:00.185197 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/89a7c15c-7c85-4698-92eb-9041de234300-secret-volume\") pod \"collect-profiles-29494605-xqpp7\" (UID: \"89a7c15c-7c85-4698-92eb-9041de234300\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-xqpp7" Jan 29 08:45:00 crc kubenswrapper[4826]: I0129 08:45:00.187034 4826 generic.go:334] "Generic (PLEG): container finished" podID="e6557b04-094c-42ff-ab7f-0f42b40fc942" containerID="de2b8409f2082cd160bcda6b25c92d261ec91e5bd95fcf4d60a3ec38d1792dff" exitCode=0 Jan 29 08:45:00 crc kubenswrapper[4826]: I0129 08:45:00.187087 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-rzd7v" event={"ID":"e6557b04-094c-42ff-ab7f-0f42b40fc942","Type":"ContainerDied","Data":"de2b8409f2082cd160bcda6b25c92d261ec91e5bd95fcf4d60a3ec38d1792dff"} Jan 29 08:45:00 crc kubenswrapper[4826]: I0129 08:45:00.287168 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/89a7c15c-7c85-4698-92eb-9041de234300-secret-volume\") pod \"collect-profiles-29494605-xqpp7\" (UID: \"89a7c15c-7c85-4698-92eb-9041de234300\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-xqpp7" Jan 29 08:45:00 crc kubenswrapper[4826]: I0129 08:45:00.287267 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbbnc\" (UniqueName: \"kubernetes.io/projected/89a7c15c-7c85-4698-92eb-9041de234300-kube-api-access-kbbnc\") pod \"collect-profiles-29494605-xqpp7\" (UID: \"89a7c15c-7c85-4698-92eb-9041de234300\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-xqpp7" Jan 29 08:45:00 crc kubenswrapper[4826]: I0129 08:45:00.287370 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89a7c15c-7c85-4698-92eb-9041de234300-config-volume\") pod \"collect-profiles-29494605-xqpp7\" (UID: \"89a7c15c-7c85-4698-92eb-9041de234300\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-xqpp7" Jan 29 08:45:00 crc kubenswrapper[4826]: I0129 08:45:00.288501 4826 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89a7c15c-7c85-4698-92eb-9041de234300-config-volume\") pod \"collect-profiles-29494605-xqpp7\" (UID: \"89a7c15c-7c85-4698-92eb-9041de234300\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-xqpp7" Jan 29 08:45:00 crc kubenswrapper[4826]: I0129 08:45:00.302376 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/89a7c15c-7c85-4698-92eb-9041de234300-secret-volume\") pod \"collect-profiles-29494605-xqpp7\" (UID: \"89a7c15c-7c85-4698-92eb-9041de234300\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-xqpp7" Jan 29 08:45:00 crc kubenswrapper[4826]: I0129 08:45:00.305310 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbbnc\" (UniqueName: \"kubernetes.io/projected/89a7c15c-7c85-4698-92eb-9041de234300-kube-api-access-kbbnc\") pod \"collect-profiles-29494605-xqpp7\" (UID: \"89a7c15c-7c85-4698-92eb-9041de234300\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-xqpp7" Jan 29 08:45:00 crc kubenswrapper[4826]: I0129 08:45:00.505691 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-xqpp7" Jan 29 08:45:00 crc kubenswrapper[4826]: I0129 08:45:00.970366 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494605-xqpp7"] Jan 29 08:45:01 crc kubenswrapper[4826]: I0129 08:45:01.206387 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-xqpp7" event={"ID":"89a7c15c-7c85-4698-92eb-9041de234300","Type":"ContainerStarted","Data":"a587129ea619384752147ca0b6245ff3245e14509fbd28a391fe14e6ccd35c56"} Jan 29 08:45:01 crc kubenswrapper[4826]: I0129 08:45:01.206445 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-xqpp7" event={"ID":"89a7c15c-7c85-4698-92eb-9041de234300","Type":"ContainerStarted","Data":"56adf9caa8fc282d324cdf34cba8e3f56a44cb72a048b0831adfcf4f25e3849c"} Jan 29 08:45:01 crc kubenswrapper[4826]: I0129 08:45:01.228244 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-xqpp7" podStartSLOduration=1.228223098 podStartE2EDuration="1.228223098s" podCreationTimestamp="2026-01-29 08:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 08:45:01.225993139 +0000 UTC m=+7285.087786208" watchObservedRunningTime="2026-01-29 08:45:01.228223098 +0000 UTC m=+7285.090016167" Jan 29 08:45:01 crc kubenswrapper[4826]: I0129 08:45:01.674409 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-rzd7v" Jan 29 08:45:01 crc kubenswrapper[4826]: I0129 08:45:01.712236 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e6557b04-094c-42ff-ab7f-0f42b40fc942-ssh-key-openstack-cell1\") pod \"e6557b04-094c-42ff-ab7f-0f42b40fc942\" (UID: \"e6557b04-094c-42ff-ab7f-0f42b40fc942\") " Jan 29 08:45:01 crc kubenswrapper[4826]: I0129 08:45:01.712429 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6zg4\" (UniqueName: \"kubernetes.io/projected/e6557b04-094c-42ff-ab7f-0f42b40fc942-kube-api-access-r6zg4\") pod \"e6557b04-094c-42ff-ab7f-0f42b40fc942\" (UID: \"e6557b04-094c-42ff-ab7f-0f42b40fc942\") " Jan 29 08:45:01 crc kubenswrapper[4826]: I0129 08:45:01.712486 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6557b04-094c-42ff-ab7f-0f42b40fc942-inventory\") pod \"e6557b04-094c-42ff-ab7f-0f42b40fc942\" (UID: \"e6557b04-094c-42ff-ab7f-0f42b40fc942\") " Jan 29 08:45:01 crc kubenswrapper[4826]: I0129 08:45:01.717979 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6557b04-094c-42ff-ab7f-0f42b40fc942-kube-api-access-r6zg4" (OuterVolumeSpecName: "kube-api-access-r6zg4") pod "e6557b04-094c-42ff-ab7f-0f42b40fc942" (UID: "e6557b04-094c-42ff-ab7f-0f42b40fc942"). InnerVolumeSpecName "kube-api-access-r6zg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:45:01 crc kubenswrapper[4826]: I0129 08:45:01.741753 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6557b04-094c-42ff-ab7f-0f42b40fc942-inventory" (OuterVolumeSpecName: "inventory") pod "e6557b04-094c-42ff-ab7f-0f42b40fc942" (UID: "e6557b04-094c-42ff-ab7f-0f42b40fc942"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:45:01 crc kubenswrapper[4826]: I0129 08:45:01.741892 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6557b04-094c-42ff-ab7f-0f42b40fc942-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "e6557b04-094c-42ff-ab7f-0f42b40fc942" (UID: "e6557b04-094c-42ff-ab7f-0f42b40fc942"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:45:01 crc kubenswrapper[4826]: I0129 08:45:01.815150 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e6557b04-094c-42ff-ab7f-0f42b40fc942-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:01 crc kubenswrapper[4826]: I0129 08:45:01.815429 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6zg4\" (UniqueName: \"kubernetes.io/projected/e6557b04-094c-42ff-ab7f-0f42b40fc942-kube-api-access-r6zg4\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:01 crc kubenswrapper[4826]: I0129 08:45:01.815439 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6557b04-094c-42ff-ab7f-0f42b40fc942-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:02 crc kubenswrapper[4826]: I0129 08:45:02.218594 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-rzd7v" Jan 29 08:45:02 crc kubenswrapper[4826]: I0129 08:45:02.220424 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-rzd7v" event={"ID":"e6557b04-094c-42ff-ab7f-0f42b40fc942","Type":"ContainerDied","Data":"9f866e367cc3bda0667c2955384bb71f703e22ab7d14fab24fc9851cc330c7cd"} Jan 29 08:45:02 crc kubenswrapper[4826]: I0129 08:45:02.220474 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f866e367cc3bda0667c2955384bb71f703e22ab7d14fab24fc9851cc330c7cd" Jan 29 08:45:02 crc kubenswrapper[4826]: I0129 08:45:02.220811 4826 generic.go:334] "Generic (PLEG): container finished" podID="89a7c15c-7c85-4698-92eb-9041de234300" containerID="a587129ea619384752147ca0b6245ff3245e14509fbd28a391fe14e6ccd35c56" exitCode=0 Jan 29 08:45:02 crc kubenswrapper[4826]: I0129 08:45:02.220836 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-xqpp7" event={"ID":"89a7c15c-7c85-4698-92eb-9041de234300","Type":"ContainerDied","Data":"a587129ea619384752147ca0b6245ff3245e14509fbd28a391fe14e6ccd35c56"} Jan 29 08:45:02 crc kubenswrapper[4826]: I0129 08:45:02.305145 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-p7s2n"] Jan 29 08:45:02 crc kubenswrapper[4826]: E0129 08:45:02.305746 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6557b04-094c-42ff-ab7f-0f42b40fc942" containerName="run-os-openstack-openstack-cell1" Jan 29 08:45:02 crc kubenswrapper[4826]: I0129 08:45:02.305793 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6557b04-094c-42ff-ab7f-0f42b40fc942" containerName="run-os-openstack-openstack-cell1" Jan 29 08:45:02 crc kubenswrapper[4826]: I0129 08:45:02.306075 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6557b04-094c-42ff-ab7f-0f42b40fc942" 
containerName="run-os-openstack-openstack-cell1" Jan 29 08:45:02 crc kubenswrapper[4826]: I0129 08:45:02.306971 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-p7s2n" Jan 29 08:45:02 crc kubenswrapper[4826]: I0129 08:45:02.309496 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 08:45:02 crc kubenswrapper[4826]: I0129 08:45:02.309675 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 29 08:45:02 crc kubenswrapper[4826]: I0129 08:45:02.309719 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-bz2p6" Jan 29 08:45:02 crc kubenswrapper[4826]: I0129 08:45:02.309895 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 29 08:45:02 crc kubenswrapper[4826]: I0129 08:45:02.334043 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-p7s2n"] Jan 29 08:45:02 crc kubenswrapper[4826]: I0129 08:45:02.426011 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gscfq\" (UniqueName: \"kubernetes.io/projected/b2da9029-df06-4171-8057-4a8e1908deb5-kube-api-access-gscfq\") pod \"reboot-os-openstack-openstack-cell1-p7s2n\" (UID: \"b2da9029-df06-4171-8057-4a8e1908deb5\") " pod="openstack/reboot-os-openstack-openstack-cell1-p7s2n" Jan 29 08:45:02 crc kubenswrapper[4826]: I0129 08:45:02.426063 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2da9029-df06-4171-8057-4a8e1908deb5-inventory\") pod \"reboot-os-openstack-openstack-cell1-p7s2n\" (UID: \"b2da9029-df06-4171-8057-4a8e1908deb5\") " pod="openstack/reboot-os-openstack-openstack-cell1-p7s2n" Jan 29 
08:45:02 crc kubenswrapper[4826]: I0129 08:45:02.426311 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/b2da9029-df06-4171-8057-4a8e1908deb5-ssh-key-openstack-cell1\") pod \"reboot-os-openstack-openstack-cell1-p7s2n\" (UID: \"b2da9029-df06-4171-8057-4a8e1908deb5\") " pod="openstack/reboot-os-openstack-openstack-cell1-p7s2n" Jan 29 08:45:02 crc kubenswrapper[4826]: I0129 08:45:02.527991 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gscfq\" (UniqueName: \"kubernetes.io/projected/b2da9029-df06-4171-8057-4a8e1908deb5-kube-api-access-gscfq\") pod \"reboot-os-openstack-openstack-cell1-p7s2n\" (UID: \"b2da9029-df06-4171-8057-4a8e1908deb5\") " pod="openstack/reboot-os-openstack-openstack-cell1-p7s2n" Jan 29 08:45:02 crc kubenswrapper[4826]: I0129 08:45:02.528039 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2da9029-df06-4171-8057-4a8e1908deb5-inventory\") pod \"reboot-os-openstack-openstack-cell1-p7s2n\" (UID: \"b2da9029-df06-4171-8057-4a8e1908deb5\") " pod="openstack/reboot-os-openstack-openstack-cell1-p7s2n" Jan 29 08:45:02 crc kubenswrapper[4826]: I0129 08:45:02.528188 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/b2da9029-df06-4171-8057-4a8e1908deb5-ssh-key-openstack-cell1\") pod \"reboot-os-openstack-openstack-cell1-p7s2n\" (UID: \"b2da9029-df06-4171-8057-4a8e1908deb5\") " pod="openstack/reboot-os-openstack-openstack-cell1-p7s2n" Jan 29 08:45:02 crc kubenswrapper[4826]: I0129 08:45:02.533155 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2da9029-df06-4171-8057-4a8e1908deb5-inventory\") pod \"reboot-os-openstack-openstack-cell1-p7s2n\" (UID: 
\"b2da9029-df06-4171-8057-4a8e1908deb5\") " pod="openstack/reboot-os-openstack-openstack-cell1-p7s2n" Jan 29 08:45:02 crc kubenswrapper[4826]: I0129 08:45:02.534393 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/b2da9029-df06-4171-8057-4a8e1908deb5-ssh-key-openstack-cell1\") pod \"reboot-os-openstack-openstack-cell1-p7s2n\" (UID: \"b2da9029-df06-4171-8057-4a8e1908deb5\") " pod="openstack/reboot-os-openstack-openstack-cell1-p7s2n" Jan 29 08:45:02 crc kubenswrapper[4826]: I0129 08:45:02.555012 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gscfq\" (UniqueName: \"kubernetes.io/projected/b2da9029-df06-4171-8057-4a8e1908deb5-kube-api-access-gscfq\") pod \"reboot-os-openstack-openstack-cell1-p7s2n\" (UID: \"b2da9029-df06-4171-8057-4a8e1908deb5\") " pod="openstack/reboot-os-openstack-openstack-cell1-p7s2n" Jan 29 08:45:02 crc kubenswrapper[4826]: I0129 08:45:02.629507 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-p7s2n" Jan 29 08:45:03 crc kubenswrapper[4826]: I0129 08:45:03.283094 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-p7s2n"] Jan 29 08:45:03 crc kubenswrapper[4826]: I0129 08:45:03.637076 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-xqpp7" Jan 29 08:45:03 crc kubenswrapper[4826]: I0129 08:45:03.754773 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbbnc\" (UniqueName: \"kubernetes.io/projected/89a7c15c-7c85-4698-92eb-9041de234300-kube-api-access-kbbnc\") pod \"89a7c15c-7c85-4698-92eb-9041de234300\" (UID: \"89a7c15c-7c85-4698-92eb-9041de234300\") " Jan 29 08:45:03 crc kubenswrapper[4826]: I0129 08:45:03.754963 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/89a7c15c-7c85-4698-92eb-9041de234300-secret-volume\") pod \"89a7c15c-7c85-4698-92eb-9041de234300\" (UID: \"89a7c15c-7c85-4698-92eb-9041de234300\") " Jan 29 08:45:03 crc kubenswrapper[4826]: I0129 08:45:03.755178 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89a7c15c-7c85-4698-92eb-9041de234300-config-volume\") pod \"89a7c15c-7c85-4698-92eb-9041de234300\" (UID: \"89a7c15c-7c85-4698-92eb-9041de234300\") " Jan 29 08:45:03 crc kubenswrapper[4826]: I0129 08:45:03.756009 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89a7c15c-7c85-4698-92eb-9041de234300-config-volume" (OuterVolumeSpecName: "config-volume") pod "89a7c15c-7c85-4698-92eb-9041de234300" (UID: "89a7c15c-7c85-4698-92eb-9041de234300"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:45:03 crc kubenswrapper[4826]: I0129 08:45:03.760906 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89a7c15c-7c85-4698-92eb-9041de234300-kube-api-access-kbbnc" (OuterVolumeSpecName: "kube-api-access-kbbnc") pod "89a7c15c-7c85-4698-92eb-9041de234300" (UID: "89a7c15c-7c85-4698-92eb-9041de234300"). 
InnerVolumeSpecName "kube-api-access-kbbnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:45:03 crc kubenswrapper[4826]: I0129 08:45:03.761823 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89a7c15c-7c85-4698-92eb-9041de234300-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "89a7c15c-7c85-4698-92eb-9041de234300" (UID: "89a7c15c-7c85-4698-92eb-9041de234300"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:45:03 crc kubenswrapper[4826]: I0129 08:45:03.860617 4826 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89a7c15c-7c85-4698-92eb-9041de234300-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:03 crc kubenswrapper[4826]: I0129 08:45:03.860659 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbbnc\" (UniqueName: \"kubernetes.io/projected/89a7c15c-7c85-4698-92eb-9041de234300-kube-api-access-kbbnc\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:03 crc kubenswrapper[4826]: I0129 08:45:03.860679 4826 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/89a7c15c-7c85-4698-92eb-9041de234300-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:04 crc kubenswrapper[4826]: I0129 08:45:04.239614 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-p7s2n" event={"ID":"b2da9029-df06-4171-8057-4a8e1908deb5","Type":"ContainerStarted","Data":"3579a3c97fb30589bf517cb13988116b7a12002ff78b43f6ddaffb6a53063ef2"} Jan 29 08:45:04 crc kubenswrapper[4826]: I0129 08:45:04.240024 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-p7s2n" 
event={"ID":"b2da9029-df06-4171-8057-4a8e1908deb5","Type":"ContainerStarted","Data":"b4f2aea036b6484cec7d1056c8ffbff57a26e0f958d271cf55c9c22dd6cd4466"} Jan 29 08:45:04 crc kubenswrapper[4826]: I0129 08:45:04.241975 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-xqpp7" event={"ID":"89a7c15c-7c85-4698-92eb-9041de234300","Type":"ContainerDied","Data":"56adf9caa8fc282d324cdf34cba8e3f56a44cb72a048b0831adfcf4f25e3849c"} Jan 29 08:45:04 crc kubenswrapper[4826]: I0129 08:45:04.242006 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56adf9caa8fc282d324cdf34cba8e3f56a44cb72a048b0831adfcf4f25e3849c" Jan 29 08:45:04 crc kubenswrapper[4826]: I0129 08:45:04.242052 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494605-xqpp7" Jan 29 08:45:04 crc kubenswrapper[4826]: I0129 08:45:04.275591 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-openstack-openstack-cell1-p7s2n" podStartSLOduration=1.707294227 podStartE2EDuration="2.275554608s" podCreationTimestamp="2026-01-29 08:45:02 +0000 UTC" firstStartedPulling="2026-01-29 08:45:03.278727159 +0000 UTC m=+7287.140520228" lastFinishedPulling="2026-01-29 08:45:03.84698754 +0000 UTC m=+7287.708780609" observedRunningTime="2026-01-29 08:45:04.258454526 +0000 UTC m=+7288.120247595" watchObservedRunningTime="2026-01-29 08:45:04.275554608 +0000 UTC m=+7288.137347677" Jan 29 08:45:04 crc kubenswrapper[4826]: I0129 08:45:04.328585 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494560-rzll5"] Jan 29 08:45:04 crc kubenswrapper[4826]: I0129 08:45:04.338285 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494560-rzll5"] Jan 29 08:45:04 crc kubenswrapper[4826]: 
I0129 08:45:04.819584 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ca60dfe-57df-4127-830d-692ddd8e3c7b" path="/var/lib/kubelet/pods/6ca60dfe-57df-4127-830d-692ddd8e3c7b/volumes" Jan 29 08:45:05 crc kubenswrapper[4826]: I0129 08:45:05.656114 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:45:05 crc kubenswrapper[4826]: I0129 08:45:05.656681 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:45:05 crc kubenswrapper[4826]: I0129 08:45:05.656765 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" Jan 29 08:45:05 crc kubenswrapper[4826]: I0129 08:45:05.658291 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a36305785ee669f844d11a90394473bfc22dd7ebbfc667cd04792ae324d02c88"} pod="openshift-machine-config-operator/machine-config-daemon-llzmh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 08:45:05 crc kubenswrapper[4826]: I0129 08:45:05.658398 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" containerID="cri-o://a36305785ee669f844d11a90394473bfc22dd7ebbfc667cd04792ae324d02c88" gracePeriod=600 Jan 29 
08:45:06 crc kubenswrapper[4826]: I0129 08:45:06.260748 4826 generic.go:334] "Generic (PLEG): container finished" podID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerID="a36305785ee669f844d11a90394473bfc22dd7ebbfc667cd04792ae324d02c88" exitCode=0 Jan 29 08:45:06 crc kubenswrapper[4826]: I0129 08:45:06.260799 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerDied","Data":"a36305785ee669f844d11a90394473bfc22dd7ebbfc667cd04792ae324d02c88"} Jan 29 08:45:06 crc kubenswrapper[4826]: I0129 08:45:06.260846 4826 scope.go:117] "RemoveContainer" containerID="b809ff53cab99f09fc7da60942dfc4b83911f47c2076e84cdb939f3a1322cacc" Jan 29 08:45:06 crc kubenswrapper[4826]: E0129 08:45:06.354168 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:45:07 crc kubenswrapper[4826]: I0129 08:45:07.273393 4826 scope.go:117] "RemoveContainer" containerID="a36305785ee669f844d11a90394473bfc22dd7ebbfc667cd04792ae324d02c88" Jan 29 08:45:07 crc kubenswrapper[4826]: E0129 08:45:07.274147 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:45:19 crc kubenswrapper[4826]: I0129 08:45:19.808377 4826 
scope.go:117] "RemoveContainer" containerID="a36305785ee669f844d11a90394473bfc22dd7ebbfc667cd04792ae324d02c88" Jan 29 08:45:19 crc kubenswrapper[4826]: E0129 08:45:19.809107 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:45:20 crc kubenswrapper[4826]: I0129 08:45:20.411028 4826 generic.go:334] "Generic (PLEG): container finished" podID="b2da9029-df06-4171-8057-4a8e1908deb5" containerID="3579a3c97fb30589bf517cb13988116b7a12002ff78b43f6ddaffb6a53063ef2" exitCode=0 Jan 29 08:45:20 crc kubenswrapper[4826]: I0129 08:45:20.411075 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-p7s2n" event={"ID":"b2da9029-df06-4171-8057-4a8e1908deb5","Type":"ContainerDied","Data":"3579a3c97fb30589bf517cb13988116b7a12002ff78b43f6ddaffb6a53063ef2"} Jan 29 08:45:21 crc kubenswrapper[4826]: I0129 08:45:21.906918 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-p7s2n" Jan 29 08:45:21 crc kubenswrapper[4826]: I0129 08:45:21.963674 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/b2da9029-df06-4171-8057-4a8e1908deb5-ssh-key-openstack-cell1\") pod \"b2da9029-df06-4171-8057-4a8e1908deb5\" (UID: \"b2da9029-df06-4171-8057-4a8e1908deb5\") " Jan 29 08:45:21 crc kubenswrapper[4826]: I0129 08:45:21.964097 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2da9029-df06-4171-8057-4a8e1908deb5-inventory\") pod \"b2da9029-df06-4171-8057-4a8e1908deb5\" (UID: \"b2da9029-df06-4171-8057-4a8e1908deb5\") " Jan 29 08:45:21 crc kubenswrapper[4826]: I0129 08:45:21.964372 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gscfq\" (UniqueName: \"kubernetes.io/projected/b2da9029-df06-4171-8057-4a8e1908deb5-kube-api-access-gscfq\") pod \"b2da9029-df06-4171-8057-4a8e1908deb5\" (UID: \"b2da9029-df06-4171-8057-4a8e1908deb5\") " Jan 29 08:45:21 crc kubenswrapper[4826]: I0129 08:45:21.977676 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2da9029-df06-4171-8057-4a8e1908deb5-kube-api-access-gscfq" (OuterVolumeSpecName: "kube-api-access-gscfq") pod "b2da9029-df06-4171-8057-4a8e1908deb5" (UID: "b2da9029-df06-4171-8057-4a8e1908deb5"). InnerVolumeSpecName "kube-api-access-gscfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:45:21 crc kubenswrapper[4826]: I0129 08:45:21.999787 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2da9029-df06-4171-8057-4a8e1908deb5-inventory" (OuterVolumeSpecName: "inventory") pod "b2da9029-df06-4171-8057-4a8e1908deb5" (UID: "b2da9029-df06-4171-8057-4a8e1908deb5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.004660 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2da9029-df06-4171-8057-4a8e1908deb5-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "b2da9029-df06-4171-8057-4a8e1908deb5" (UID: "b2da9029-df06-4171-8057-4a8e1908deb5"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.068003 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/b2da9029-df06-4171-8057-4a8e1908deb5-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.068037 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2da9029-df06-4171-8057-4a8e1908deb5-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.068049 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gscfq\" (UniqueName: \"kubernetes.io/projected/b2da9029-df06-4171-8057-4a8e1908deb5-kube-api-access-gscfq\") on node \"crc\" DevicePath \"\"" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.441871 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-p7s2n" event={"ID":"b2da9029-df06-4171-8057-4a8e1908deb5","Type":"ContainerDied","Data":"b4f2aea036b6484cec7d1056c8ffbff57a26e0f958d271cf55c9c22dd6cd4466"} Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.441934 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4f2aea036b6484cec7d1056c8ffbff57a26e0f958d271cf55c9c22dd6cd4466" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.441986 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-p7s2n" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.527539 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-2qqxd"] Jan 29 08:45:22 crc kubenswrapper[4826]: E0129 08:45:22.527932 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2da9029-df06-4171-8057-4a8e1908deb5" containerName="reboot-os-openstack-openstack-cell1" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.527950 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2da9029-df06-4171-8057-4a8e1908deb5" containerName="reboot-os-openstack-openstack-cell1" Jan 29 08:45:22 crc kubenswrapper[4826]: E0129 08:45:22.527978 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89a7c15c-7c85-4698-92eb-9041de234300" containerName="collect-profiles" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.527984 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a7c15c-7c85-4698-92eb-9041de234300" containerName="collect-profiles" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.528176 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="89a7c15c-7c85-4698-92eb-9041de234300" containerName="collect-profiles" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.528201 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2da9029-df06-4171-8057-4a8e1908deb5" containerName="reboot-os-openstack-openstack-cell1" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.528904 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.534622 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-telemetry-default-certs-0" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.534841 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.534842 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-neutron-metadata-default-certs-0" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.537569 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.537827 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.537585 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-libvirt-default-certs-0" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.538092 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-bz2p6" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.540738 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-ovn-default-certs-0" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.549941 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-2qqxd"] Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.580554 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws84f\" (UniqueName: \"kubernetes.io/projected/07208b6c-07ec-458e-b7c5-6460916bb061-kube-api-access-ws84f\") 
pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.580658 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-ssh-key-openstack-cell1\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.580718 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/07208b6c-07ec-458e-b7c5-6460916bb061-openstack-cell1-ovn-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.580739 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.580818 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " 
pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.580941 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.580992 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.581100 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/07208b6c-07ec-458e-b7c5-6460916bb061-openstack-cell1-libvirt-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.581216 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 
08:45:22.581293 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.581389 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/07208b6c-07ec-458e-b7c5-6460916bb061-openstack-cell1-telemetry-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.581462 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.581524 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.581572 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-inventory\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.581607 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/07208b6c-07ec-458e-b7c5-6460916bb061-openstack-cell1-neutron-metadata-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.683795 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/07208b6c-07ec-458e-b7c5-6460916bb061-openstack-cell1-libvirt-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.683869 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.683927 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-bootstrap-combined-ca-bundle\") pod 
\"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.683954 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/07208b6c-07ec-458e-b7c5-6460916bb061-openstack-cell1-telemetry-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.683986 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.684011 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.684035 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-inventory\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 
08:45:22.684056 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/07208b6c-07ec-458e-b7c5-6460916bb061-openstack-cell1-neutron-metadata-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.684084 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws84f\" (UniqueName: \"kubernetes.io/projected/07208b6c-07ec-458e-b7c5-6460916bb061-kube-api-access-ws84f\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.684128 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-ssh-key-openstack-cell1\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.684180 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/07208b6c-07ec-458e-b7c5-6460916bb061-openstack-cell1-ovn-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.684197 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.684247 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.684278 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.684307 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.690407 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-ssh-key-openstack-cell1\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " 
pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.690696 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.691806 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.692195 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.692217 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.692864 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/07208b6c-07ec-458e-b7c5-6460916bb061-openstack-cell1-telemetry-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.693550 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.694932 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/07208b6c-07ec-458e-b7c5-6460916bb061-openstack-cell1-libvirt-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.695956 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-inventory\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.696788 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/07208b6c-07ec-458e-b7c5-6460916bb061-openstack-cell1-ovn-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: 
\"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.697010 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.697265 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/07208b6c-07ec-458e-b7c5-6460916bb061-openstack-cell1-neutron-metadata-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.698476 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.698896 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.706531 4826 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ws84f\" (UniqueName: \"kubernetes.io/projected/07208b6c-07ec-458e-b7c5-6460916bb061-kube-api-access-ws84f\") pod \"install-certs-openstack-openstack-cell1-2qqxd\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:22 crc kubenswrapper[4826]: I0129 08:45:22.851729 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:45:23 crc kubenswrapper[4826]: I0129 08:45:23.380203 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-2qqxd"] Jan 29 08:45:23 crc kubenswrapper[4826]: I0129 08:45:23.452606 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" event={"ID":"07208b6c-07ec-458e-b7c5-6460916bb061","Type":"ContainerStarted","Data":"67476dec6facc7fcc9528bc101bf0ce1813574488a8c0f7677e1f7ea78d15035"} Jan 29 08:45:24 crc kubenswrapper[4826]: I0129 08:45:24.464061 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" event={"ID":"07208b6c-07ec-458e-b7c5-6460916bb061","Type":"ContainerStarted","Data":"ea6ac3b5cad57d428211c1bf14d71a07f2f3ffe95a5159f62e0b171eabe102eb"} Jan 29 08:45:24 crc kubenswrapper[4826]: I0129 08:45:24.495314 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" podStartSLOduration=1.802420745 podStartE2EDuration="2.495277589s" podCreationTimestamp="2026-01-29 08:45:22 +0000 UTC" firstStartedPulling="2026-01-29 08:45:23.387795235 +0000 UTC m=+7307.249588304" lastFinishedPulling="2026-01-29 08:45:24.080652059 +0000 UTC m=+7307.942445148" observedRunningTime="2026-01-29 08:45:24.485977253 +0000 UTC m=+7308.347770322" watchObservedRunningTime="2026-01-29 08:45:24.495277589 +0000 
UTC m=+7308.357070658" Jan 29 08:45:24 crc kubenswrapper[4826]: I0129 08:45:24.891711 4826 scope.go:117] "RemoveContainer" containerID="cad5124349c793b4ad8d4893fa5b0248fc922e32ac2801f8622a14d757dcce50" Jan 29 08:45:31 crc kubenswrapper[4826]: I0129 08:45:31.808506 4826 scope.go:117] "RemoveContainer" containerID="a36305785ee669f844d11a90394473bfc22dd7ebbfc667cd04792ae324d02c88" Jan 29 08:45:31 crc kubenswrapper[4826]: E0129 08:45:31.809190 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:45:43 crc kubenswrapper[4826]: I0129 08:45:43.809143 4826 scope.go:117] "RemoveContainer" containerID="a36305785ee669f844d11a90394473bfc22dd7ebbfc667cd04792ae324d02c88" Jan 29 08:45:43 crc kubenswrapper[4826]: E0129 08:45:43.810244 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:45:58 crc kubenswrapper[4826]: I0129 08:45:58.809253 4826 scope.go:117] "RemoveContainer" containerID="a36305785ee669f844d11a90394473bfc22dd7ebbfc667cd04792ae324d02c88" Jan 29 08:45:58 crc kubenswrapper[4826]: E0129 08:45:58.810551 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:46:01 crc kubenswrapper[4826]: I0129 08:46:01.833636 4826 generic.go:334] "Generic (PLEG): container finished" podID="07208b6c-07ec-458e-b7c5-6460916bb061" containerID="ea6ac3b5cad57d428211c1bf14d71a07f2f3ffe95a5159f62e0b171eabe102eb" exitCode=0 Jan 29 08:46:01 crc kubenswrapper[4826]: I0129 08:46:01.833747 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" event={"ID":"07208b6c-07ec-458e-b7c5-6460916bb061","Type":"ContainerDied","Data":"ea6ac3b5cad57d428211c1bf14d71a07f2f3ffe95a5159f62e0b171eabe102eb"} Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.216696 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.296646 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-neutron-dhcp-combined-ca-bundle\") pod \"07208b6c-07ec-458e-b7c5-6460916bb061\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.297446 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/07208b6c-07ec-458e-b7c5-6460916bb061-openstack-cell1-neutron-metadata-default-certs-0\") pod \"07208b6c-07ec-458e-b7c5-6460916bb061\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.297563 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-libvirt-combined-ca-bundle\") pod \"07208b6c-07ec-458e-b7c5-6460916bb061\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.297609 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/07208b6c-07ec-458e-b7c5-6460916bb061-openstack-cell1-libvirt-default-certs-0\") pod \"07208b6c-07ec-458e-b7c5-6460916bb061\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.297647 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws84f\" (UniqueName: \"kubernetes.io/projected/07208b6c-07ec-458e-b7c5-6460916bb061-kube-api-access-ws84f\") pod \"07208b6c-07ec-458e-b7c5-6460916bb061\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.297671 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/07208b6c-07ec-458e-b7c5-6460916bb061-openstack-cell1-ovn-default-certs-0\") pod \"07208b6c-07ec-458e-b7c5-6460916bb061\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.297698 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-neutron-metadata-combined-ca-bundle\") pod \"07208b6c-07ec-458e-b7c5-6460916bb061\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.297756 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-bootstrap-combined-ca-bundle\") pod \"07208b6c-07ec-458e-b7c5-6460916bb061\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.297783 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-telemetry-combined-ca-bundle\") pod \"07208b6c-07ec-458e-b7c5-6460916bb061\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.297859 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-inventory\") pod \"07208b6c-07ec-458e-b7c5-6460916bb061\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.297945 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-ovn-combined-ca-bundle\") pod \"07208b6c-07ec-458e-b7c5-6460916bb061\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.298001 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-nova-combined-ca-bundle\") pod \"07208b6c-07ec-458e-b7c5-6460916bb061\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.298122 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/07208b6c-07ec-458e-b7c5-6460916bb061-openstack-cell1-telemetry-default-certs-0\") pod 
\"07208b6c-07ec-458e-b7c5-6460916bb061\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.298192 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-neutron-sriov-combined-ca-bundle\") pod \"07208b6c-07ec-458e-b7c5-6460916bb061\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.298379 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-ssh-key-openstack-cell1\") pod \"07208b6c-07ec-458e-b7c5-6460916bb061\" (UID: \"07208b6c-07ec-458e-b7c5-6460916bb061\") " Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.303802 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "07208b6c-07ec-458e-b7c5-6460916bb061" (UID: "07208b6c-07ec-458e-b7c5-6460916bb061"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.305826 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07208b6c-07ec-458e-b7c5-6460916bb061-openstack-cell1-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-ovn-default-certs-0") pod "07208b6c-07ec-458e-b7c5-6460916bb061" (UID: "07208b6c-07ec-458e-b7c5-6460916bb061"). InnerVolumeSpecName "openstack-cell1-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.307249 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07208b6c-07ec-458e-b7c5-6460916bb061-openstack-cell1-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-neutron-metadata-default-certs-0") pod "07208b6c-07ec-458e-b7c5-6460916bb061" (UID: "07208b6c-07ec-458e-b7c5-6460916bb061"). InnerVolumeSpecName "openstack-cell1-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.308697 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07208b6c-07ec-458e-b7c5-6460916bb061-openstack-cell1-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-libvirt-default-certs-0") pod "07208b6c-07ec-458e-b7c5-6460916bb061" (UID: "07208b6c-07ec-458e-b7c5-6460916bb061"). InnerVolumeSpecName "openstack-cell1-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.309223 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "07208b6c-07ec-458e-b7c5-6460916bb061" (UID: "07208b6c-07ec-458e-b7c5-6460916bb061"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.309630 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "07208b6c-07ec-458e-b7c5-6460916bb061" (UID: "07208b6c-07ec-458e-b7c5-6460916bb061"). 
InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.309908 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "07208b6c-07ec-458e-b7c5-6460916bb061" (UID: "07208b6c-07ec-458e-b7c5-6460916bb061"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.309982 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07208b6c-07ec-458e-b7c5-6460916bb061-openstack-cell1-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-telemetry-default-certs-0") pod "07208b6c-07ec-458e-b7c5-6460916bb061" (UID: "07208b6c-07ec-458e-b7c5-6460916bb061"). InnerVolumeSpecName "openstack-cell1-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.310274 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "07208b6c-07ec-458e-b7c5-6460916bb061" (UID: "07208b6c-07ec-458e-b7c5-6460916bb061"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.310610 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "07208b6c-07ec-458e-b7c5-6460916bb061" (UID: "07208b6c-07ec-458e-b7c5-6460916bb061"). 
InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.311052 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "07208b6c-07ec-458e-b7c5-6460916bb061" (UID: "07208b6c-07ec-458e-b7c5-6460916bb061"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.318614 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07208b6c-07ec-458e-b7c5-6460916bb061-kube-api-access-ws84f" (OuterVolumeSpecName: "kube-api-access-ws84f") pod "07208b6c-07ec-458e-b7c5-6460916bb061" (UID: "07208b6c-07ec-458e-b7c5-6460916bb061"). InnerVolumeSpecName "kube-api-access-ws84f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.320203 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "07208b6c-07ec-458e-b7c5-6460916bb061" (UID: "07208b6c-07ec-458e-b7c5-6460916bb061"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.332224 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "07208b6c-07ec-458e-b7c5-6460916bb061" (UID: "07208b6c-07ec-458e-b7c5-6460916bb061"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.340463 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-inventory" (OuterVolumeSpecName: "inventory") pod "07208b6c-07ec-458e-b7c5-6460916bb061" (UID: "07208b6c-07ec-458e-b7c5-6460916bb061"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.402381 4826 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.402423 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.402436 4826 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.402451 4826 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/07208b6c-07ec-458e-b7c5-6460916bb061-openstack-cell1-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.402464 4826 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:03 
crc kubenswrapper[4826]: I0129 08:46:03.402479 4826 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/07208b6c-07ec-458e-b7c5-6460916bb061-openstack-cell1-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.402488 4826 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.402497 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ws84f\" (UniqueName: \"kubernetes.io/projected/07208b6c-07ec-458e-b7c5-6460916bb061-kube-api-access-ws84f\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.402506 4826 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/07208b6c-07ec-458e-b7c5-6460916bb061-openstack-cell1-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.402516 4826 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.402524 4826 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.402534 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.402543 4826 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.402553 4826 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07208b6c-07ec-458e-b7c5-6460916bb061-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.402562 4826 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/07208b6c-07ec-458e-b7c5-6460916bb061-openstack-cell1-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.861145 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" event={"ID":"07208b6c-07ec-458e-b7c5-6460916bb061","Type":"ContainerDied","Data":"67476dec6facc7fcc9528bc101bf0ce1813574488a8c0f7677e1f7ea78d15035"} Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.861197 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67476dec6facc7fcc9528bc101bf0ce1813574488a8c0f7677e1f7ea78d15035" Jan 29 08:46:03 crc kubenswrapper[4826]: I0129 08:46:03.861219 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-2qqxd" Jan 29 08:46:04 crc kubenswrapper[4826]: I0129 08:46:04.050284 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-openstack-openstack-cell1-zrxrk"] Jan 29 08:46:04 crc kubenswrapper[4826]: E0129 08:46:04.050961 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07208b6c-07ec-458e-b7c5-6460916bb061" containerName="install-certs-openstack-openstack-cell1" Jan 29 08:46:04 crc kubenswrapper[4826]: I0129 08:46:04.050978 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="07208b6c-07ec-458e-b7c5-6460916bb061" containerName="install-certs-openstack-openstack-cell1" Jan 29 08:46:04 crc kubenswrapper[4826]: I0129 08:46:04.051161 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="07208b6c-07ec-458e-b7c5-6460916bb061" containerName="install-certs-openstack-openstack-cell1" Jan 29 08:46:04 crc kubenswrapper[4826]: I0129 08:46:04.051867 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-zrxrk" Jan 29 08:46:04 crc kubenswrapper[4826]: I0129 08:46:04.054515 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-bz2p6" Jan 29 08:46:04 crc kubenswrapper[4826]: I0129 08:46:04.055602 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 29 08:46:04 crc kubenswrapper[4826]: I0129 08:46:04.055822 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 29 08:46:04 crc kubenswrapper[4826]: I0129 08:46:04.056022 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 08:46:04 crc kubenswrapper[4826]: I0129 08:46:04.056631 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 29 08:46:04 crc kubenswrapper[4826]: I0129 08:46:04.070991 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-zrxrk"] Jan 29 08:46:04 crc kubenswrapper[4826]: I0129 08:46:04.117996 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4cf68c2a-d351-4f17-a5e2-da5006da2e03-ssh-key-openstack-cell1\") pod \"ovn-openstack-openstack-cell1-zrxrk\" (UID: \"4cf68c2a-d351-4f17-a5e2-da5006da2e03\") " pod="openstack/ovn-openstack-openstack-cell1-zrxrk" Jan 29 08:46:04 crc kubenswrapper[4826]: I0129 08:46:04.118499 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cf68c2a-d351-4f17-a5e2-da5006da2e03-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-zrxrk\" (UID: \"4cf68c2a-d351-4f17-a5e2-da5006da2e03\") " pod="openstack/ovn-openstack-openstack-cell1-zrxrk" Jan 29 08:46:04 
crc kubenswrapper[4826]: I0129 08:46:04.118549 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn554\" (UniqueName: \"kubernetes.io/projected/4cf68c2a-d351-4f17-a5e2-da5006da2e03-kube-api-access-sn554\") pod \"ovn-openstack-openstack-cell1-zrxrk\" (UID: \"4cf68c2a-d351-4f17-a5e2-da5006da2e03\") " pod="openstack/ovn-openstack-openstack-cell1-zrxrk" Jan 29 08:46:04 crc kubenswrapper[4826]: I0129 08:46:04.118722 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4cf68c2a-d351-4f17-a5e2-da5006da2e03-inventory\") pod \"ovn-openstack-openstack-cell1-zrxrk\" (UID: \"4cf68c2a-d351-4f17-a5e2-da5006da2e03\") " pod="openstack/ovn-openstack-openstack-cell1-zrxrk" Jan 29 08:46:04 crc kubenswrapper[4826]: I0129 08:46:04.118788 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/4cf68c2a-d351-4f17-a5e2-da5006da2e03-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-zrxrk\" (UID: \"4cf68c2a-d351-4f17-a5e2-da5006da2e03\") " pod="openstack/ovn-openstack-openstack-cell1-zrxrk" Jan 29 08:46:04 crc kubenswrapper[4826]: I0129 08:46:04.220721 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4cf68c2a-d351-4f17-a5e2-da5006da2e03-inventory\") pod \"ovn-openstack-openstack-cell1-zrxrk\" (UID: \"4cf68c2a-d351-4f17-a5e2-da5006da2e03\") " pod="openstack/ovn-openstack-openstack-cell1-zrxrk" Jan 29 08:46:04 crc kubenswrapper[4826]: I0129 08:46:04.220812 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/4cf68c2a-d351-4f17-a5e2-da5006da2e03-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-zrxrk\" (UID: 
\"4cf68c2a-d351-4f17-a5e2-da5006da2e03\") " pod="openstack/ovn-openstack-openstack-cell1-zrxrk" Jan 29 08:46:04 crc kubenswrapper[4826]: I0129 08:46:04.220870 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4cf68c2a-d351-4f17-a5e2-da5006da2e03-ssh-key-openstack-cell1\") pod \"ovn-openstack-openstack-cell1-zrxrk\" (UID: \"4cf68c2a-d351-4f17-a5e2-da5006da2e03\") " pod="openstack/ovn-openstack-openstack-cell1-zrxrk" Jan 29 08:46:04 crc kubenswrapper[4826]: I0129 08:46:04.221107 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cf68c2a-d351-4f17-a5e2-da5006da2e03-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-zrxrk\" (UID: \"4cf68c2a-d351-4f17-a5e2-da5006da2e03\") " pod="openstack/ovn-openstack-openstack-cell1-zrxrk" Jan 29 08:46:04 crc kubenswrapper[4826]: I0129 08:46:04.221133 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn554\" (UniqueName: \"kubernetes.io/projected/4cf68c2a-d351-4f17-a5e2-da5006da2e03-kube-api-access-sn554\") pod \"ovn-openstack-openstack-cell1-zrxrk\" (UID: \"4cf68c2a-d351-4f17-a5e2-da5006da2e03\") " pod="openstack/ovn-openstack-openstack-cell1-zrxrk" Jan 29 08:46:04 crc kubenswrapper[4826]: I0129 08:46:04.221940 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/4cf68c2a-d351-4f17-a5e2-da5006da2e03-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-zrxrk\" (UID: \"4cf68c2a-d351-4f17-a5e2-da5006da2e03\") " pod="openstack/ovn-openstack-openstack-cell1-zrxrk" Jan 29 08:46:04 crc kubenswrapper[4826]: I0129 08:46:04.224791 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: 
\"kubernetes.io/secret/4cf68c2a-d351-4f17-a5e2-da5006da2e03-ssh-key-openstack-cell1\") pod \"ovn-openstack-openstack-cell1-zrxrk\" (UID: \"4cf68c2a-d351-4f17-a5e2-da5006da2e03\") " pod="openstack/ovn-openstack-openstack-cell1-zrxrk" Jan 29 08:46:04 crc kubenswrapper[4826]: I0129 08:46:04.230519 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cf68c2a-d351-4f17-a5e2-da5006da2e03-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-zrxrk\" (UID: \"4cf68c2a-d351-4f17-a5e2-da5006da2e03\") " pod="openstack/ovn-openstack-openstack-cell1-zrxrk" Jan 29 08:46:04 crc kubenswrapper[4826]: I0129 08:46:04.232105 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4cf68c2a-d351-4f17-a5e2-da5006da2e03-inventory\") pod \"ovn-openstack-openstack-cell1-zrxrk\" (UID: \"4cf68c2a-d351-4f17-a5e2-da5006da2e03\") " pod="openstack/ovn-openstack-openstack-cell1-zrxrk" Jan 29 08:46:04 crc kubenswrapper[4826]: I0129 08:46:04.249524 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn554\" (UniqueName: \"kubernetes.io/projected/4cf68c2a-d351-4f17-a5e2-da5006da2e03-kube-api-access-sn554\") pod \"ovn-openstack-openstack-cell1-zrxrk\" (UID: \"4cf68c2a-d351-4f17-a5e2-da5006da2e03\") " pod="openstack/ovn-openstack-openstack-cell1-zrxrk" Jan 29 08:46:04 crc kubenswrapper[4826]: I0129 08:46:04.385722 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-zrxrk" Jan 29 08:46:04 crc kubenswrapper[4826]: I0129 08:46:04.967637 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-zrxrk"] Jan 29 08:46:05 crc kubenswrapper[4826]: I0129 08:46:05.886055 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-zrxrk" event={"ID":"4cf68c2a-d351-4f17-a5e2-da5006da2e03","Type":"ContainerStarted","Data":"5c2879ced6dc13931812433c1ddab2c49a8e520d2ae6d90a890a5ad1883b5cc3"} Jan 29 08:46:05 crc kubenswrapper[4826]: I0129 08:46:05.886100 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-zrxrk" event={"ID":"4cf68c2a-d351-4f17-a5e2-da5006da2e03","Type":"ContainerStarted","Data":"647ed93c738c029389d2ca2cceb6f1f34bafae9fb62a5311793e280534facc15"} Jan 29 08:46:05 crc kubenswrapper[4826]: I0129 08:46:05.925898 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-openstack-openstack-cell1-zrxrk" podStartSLOduration=1.478236882 podStartE2EDuration="1.925865373s" podCreationTimestamp="2026-01-29 08:46:04 +0000 UTC" firstStartedPulling="2026-01-29 08:46:04.970936922 +0000 UTC m=+7348.832729991" lastFinishedPulling="2026-01-29 08:46:05.418565413 +0000 UTC m=+7349.280358482" observedRunningTime="2026-01-29 08:46:05.91479438 +0000 UTC m=+7349.776587449" watchObservedRunningTime="2026-01-29 08:46:05.925865373 +0000 UTC m=+7349.787658442" Jan 29 08:46:10 crc kubenswrapper[4826]: I0129 08:46:10.810735 4826 scope.go:117] "RemoveContainer" containerID="a36305785ee669f844d11a90394473bfc22dd7ebbfc667cd04792ae324d02c88" Jan 29 08:46:10 crc kubenswrapper[4826]: E0129 08:46:10.811630 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:46:24 crc kubenswrapper[4826]: I0129 08:46:24.809129 4826 scope.go:117] "RemoveContainer" containerID="a36305785ee669f844d11a90394473bfc22dd7ebbfc667cd04792ae324d02c88" Jan 29 08:46:24 crc kubenswrapper[4826]: E0129 08:46:24.810191 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:46:36 crc kubenswrapper[4826]: I0129 08:46:36.814675 4826 scope.go:117] "RemoveContainer" containerID="a36305785ee669f844d11a90394473bfc22dd7ebbfc667cd04792ae324d02c88" Jan 29 08:46:36 crc kubenswrapper[4826]: E0129 08:46:36.815589 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:46:49 crc kubenswrapper[4826]: I0129 08:46:49.810048 4826 scope.go:117] "RemoveContainer" containerID="a36305785ee669f844d11a90394473bfc22dd7ebbfc667cd04792ae324d02c88" Jan 29 08:46:49 crc kubenswrapper[4826]: E0129 08:46:49.811257 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:46:55 crc kubenswrapper[4826]: I0129 08:46:55.333946 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2cgsn"] Jan 29 08:46:55 crc kubenswrapper[4826]: I0129 08:46:55.337058 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2cgsn" Jan 29 08:46:55 crc kubenswrapper[4826]: I0129 08:46:55.345507 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2cgsn"] Jan 29 08:46:55 crc kubenswrapper[4826]: I0129 08:46:55.456401 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43128bfd-e0a1-487e-b5e6-ddfcc178838d-catalog-content\") pod \"redhat-operators-2cgsn\" (UID: \"43128bfd-e0a1-487e-b5e6-ddfcc178838d\") " pod="openshift-marketplace/redhat-operators-2cgsn" Jan 29 08:46:55 crc kubenswrapper[4826]: I0129 08:46:55.456482 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43128bfd-e0a1-487e-b5e6-ddfcc178838d-utilities\") pod \"redhat-operators-2cgsn\" (UID: \"43128bfd-e0a1-487e-b5e6-ddfcc178838d\") " pod="openshift-marketplace/redhat-operators-2cgsn" Jan 29 08:46:55 crc kubenswrapper[4826]: I0129 08:46:55.456575 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm468\" (UniqueName: \"kubernetes.io/projected/43128bfd-e0a1-487e-b5e6-ddfcc178838d-kube-api-access-tm468\") pod \"redhat-operators-2cgsn\" (UID: \"43128bfd-e0a1-487e-b5e6-ddfcc178838d\") " 
pod="openshift-marketplace/redhat-operators-2cgsn" Jan 29 08:46:55 crc kubenswrapper[4826]: I0129 08:46:55.559075 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm468\" (UniqueName: \"kubernetes.io/projected/43128bfd-e0a1-487e-b5e6-ddfcc178838d-kube-api-access-tm468\") pod \"redhat-operators-2cgsn\" (UID: \"43128bfd-e0a1-487e-b5e6-ddfcc178838d\") " pod="openshift-marketplace/redhat-operators-2cgsn" Jan 29 08:46:55 crc kubenswrapper[4826]: I0129 08:46:55.559341 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43128bfd-e0a1-487e-b5e6-ddfcc178838d-catalog-content\") pod \"redhat-operators-2cgsn\" (UID: \"43128bfd-e0a1-487e-b5e6-ddfcc178838d\") " pod="openshift-marketplace/redhat-operators-2cgsn" Jan 29 08:46:55 crc kubenswrapper[4826]: I0129 08:46:55.559418 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43128bfd-e0a1-487e-b5e6-ddfcc178838d-utilities\") pod \"redhat-operators-2cgsn\" (UID: \"43128bfd-e0a1-487e-b5e6-ddfcc178838d\") " pod="openshift-marketplace/redhat-operators-2cgsn" Jan 29 08:46:55 crc kubenswrapper[4826]: I0129 08:46:55.559984 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43128bfd-e0a1-487e-b5e6-ddfcc178838d-catalog-content\") pod \"redhat-operators-2cgsn\" (UID: \"43128bfd-e0a1-487e-b5e6-ddfcc178838d\") " pod="openshift-marketplace/redhat-operators-2cgsn" Jan 29 08:46:55 crc kubenswrapper[4826]: I0129 08:46:55.560044 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43128bfd-e0a1-487e-b5e6-ddfcc178838d-utilities\") pod \"redhat-operators-2cgsn\" (UID: \"43128bfd-e0a1-487e-b5e6-ddfcc178838d\") " pod="openshift-marketplace/redhat-operators-2cgsn" Jan 29 08:46:55 crc 
kubenswrapper[4826]: I0129 08:46:55.583212 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm468\" (UniqueName: \"kubernetes.io/projected/43128bfd-e0a1-487e-b5e6-ddfcc178838d-kube-api-access-tm468\") pod \"redhat-operators-2cgsn\" (UID: \"43128bfd-e0a1-487e-b5e6-ddfcc178838d\") " pod="openshift-marketplace/redhat-operators-2cgsn" Jan 29 08:46:55 crc kubenswrapper[4826]: I0129 08:46:55.668813 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2cgsn" Jan 29 08:46:56 crc kubenswrapper[4826]: I0129 08:46:56.197523 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2cgsn"] Jan 29 08:46:57 crc kubenswrapper[4826]: I0129 08:46:57.019031 4826 generic.go:334] "Generic (PLEG): container finished" podID="43128bfd-e0a1-487e-b5e6-ddfcc178838d" containerID="89ab827bbe13f1e1f8a3765a98e7eb81640a23816c4f4e178da463376b5bfa2a" exitCode=0 Jan 29 08:46:57 crc kubenswrapper[4826]: I0129 08:46:57.019108 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2cgsn" event={"ID":"43128bfd-e0a1-487e-b5e6-ddfcc178838d","Type":"ContainerDied","Data":"89ab827bbe13f1e1f8a3765a98e7eb81640a23816c4f4e178da463376b5bfa2a"} Jan 29 08:46:57 crc kubenswrapper[4826]: I0129 08:46:57.019313 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2cgsn" event={"ID":"43128bfd-e0a1-487e-b5e6-ddfcc178838d","Type":"ContainerStarted","Data":"2419aff5a0d40fad610a3b64188208a89a03a5658b6e4f742ecfabf514d0c1ce"} Jan 29 08:46:58 crc kubenswrapper[4826]: I0129 08:46:58.031963 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2cgsn" event={"ID":"43128bfd-e0a1-487e-b5e6-ddfcc178838d","Type":"ContainerStarted","Data":"a26aaa3f4b5f2954e79c2b0c8db0b634fba26756a7a575a3e7bf9589e4e62f82"} Jan 29 08:47:02 crc kubenswrapper[4826]: I0129 
08:47:02.808547 4826 scope.go:117] "RemoveContainer" containerID="a36305785ee669f844d11a90394473bfc22dd7ebbfc667cd04792ae324d02c88" Jan 29 08:47:02 crc kubenswrapper[4826]: E0129 08:47:02.809273 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:47:07 crc kubenswrapper[4826]: I0129 08:47:07.128079 4826 generic.go:334] "Generic (PLEG): container finished" podID="43128bfd-e0a1-487e-b5e6-ddfcc178838d" containerID="a26aaa3f4b5f2954e79c2b0c8db0b634fba26756a7a575a3e7bf9589e4e62f82" exitCode=0 Jan 29 08:47:07 crc kubenswrapper[4826]: I0129 08:47:07.128175 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2cgsn" event={"ID":"43128bfd-e0a1-487e-b5e6-ddfcc178838d","Type":"ContainerDied","Data":"a26aaa3f4b5f2954e79c2b0c8db0b634fba26756a7a575a3e7bf9589e4e62f82"} Jan 29 08:47:07 crc kubenswrapper[4826]: I0129 08:47:07.131380 4826 generic.go:334] "Generic (PLEG): container finished" podID="4cf68c2a-d351-4f17-a5e2-da5006da2e03" containerID="5c2879ced6dc13931812433c1ddab2c49a8e520d2ae6d90a890a5ad1883b5cc3" exitCode=0 Jan 29 08:47:07 crc kubenswrapper[4826]: I0129 08:47:07.131413 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-zrxrk" event={"ID":"4cf68c2a-d351-4f17-a5e2-da5006da2e03","Type":"ContainerDied","Data":"5c2879ced6dc13931812433c1ddab2c49a8e520d2ae6d90a890a5ad1883b5cc3"} Jan 29 08:47:08 crc kubenswrapper[4826]: I0129 08:47:08.143996 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2cgsn" 
event={"ID":"43128bfd-e0a1-487e-b5e6-ddfcc178838d","Type":"ContainerStarted","Data":"e5203cce697d4ddfcbdc6b74ca629ea0b1cf7435a352625549e2e1eba03dd555"} Jan 29 08:47:08 crc kubenswrapper[4826]: I0129 08:47:08.183690 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2cgsn" podStartSLOduration=2.432007114 podStartE2EDuration="13.183666687s" podCreationTimestamp="2026-01-29 08:46:55 +0000 UTC" firstStartedPulling="2026-01-29 08:46:57.021258537 +0000 UTC m=+7400.883051626" lastFinishedPulling="2026-01-29 08:47:07.77291813 +0000 UTC m=+7411.634711199" observedRunningTime="2026-01-29 08:47:08.170958231 +0000 UTC m=+7412.032751310" watchObservedRunningTime="2026-01-29 08:47:08.183666687 +0000 UTC m=+7412.045459766" Jan 29 08:47:08 crc kubenswrapper[4826]: I0129 08:47:08.736648 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-zrxrk" Jan 29 08:47:08 crc kubenswrapper[4826]: I0129 08:47:08.836337 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cf68c2a-d351-4f17-a5e2-da5006da2e03-ovn-combined-ca-bundle\") pod \"4cf68c2a-d351-4f17-a5e2-da5006da2e03\" (UID: \"4cf68c2a-d351-4f17-a5e2-da5006da2e03\") " Jan 29 08:47:08 crc kubenswrapper[4826]: I0129 08:47:08.836479 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sn554\" (UniqueName: \"kubernetes.io/projected/4cf68c2a-d351-4f17-a5e2-da5006da2e03-kube-api-access-sn554\") pod \"4cf68c2a-d351-4f17-a5e2-da5006da2e03\" (UID: \"4cf68c2a-d351-4f17-a5e2-da5006da2e03\") " Jan 29 08:47:08 crc kubenswrapper[4826]: I0129 08:47:08.836632 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4cf68c2a-d351-4f17-a5e2-da5006da2e03-inventory\") pod 
\"4cf68c2a-d351-4f17-a5e2-da5006da2e03\" (UID: \"4cf68c2a-d351-4f17-a5e2-da5006da2e03\") " Jan 29 08:47:08 crc kubenswrapper[4826]: I0129 08:47:08.836744 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4cf68c2a-d351-4f17-a5e2-da5006da2e03-ssh-key-openstack-cell1\") pod \"4cf68c2a-d351-4f17-a5e2-da5006da2e03\" (UID: \"4cf68c2a-d351-4f17-a5e2-da5006da2e03\") " Jan 29 08:47:08 crc kubenswrapper[4826]: I0129 08:47:08.836775 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/4cf68c2a-d351-4f17-a5e2-da5006da2e03-ovncontroller-config-0\") pod \"4cf68c2a-d351-4f17-a5e2-da5006da2e03\" (UID: \"4cf68c2a-d351-4f17-a5e2-da5006da2e03\") " Jan 29 08:47:08 crc kubenswrapper[4826]: I0129 08:47:08.841725 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cf68c2a-d351-4f17-a5e2-da5006da2e03-kube-api-access-sn554" (OuterVolumeSpecName: "kube-api-access-sn554") pod "4cf68c2a-d351-4f17-a5e2-da5006da2e03" (UID: "4cf68c2a-d351-4f17-a5e2-da5006da2e03"). InnerVolumeSpecName "kube-api-access-sn554". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:47:08 crc kubenswrapper[4826]: I0129 08:47:08.843495 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cf68c2a-d351-4f17-a5e2-da5006da2e03-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "4cf68c2a-d351-4f17-a5e2-da5006da2e03" (UID: "4cf68c2a-d351-4f17-a5e2-da5006da2e03"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:47:08 crc kubenswrapper[4826]: I0129 08:47:08.863466 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cf68c2a-d351-4f17-a5e2-da5006da2e03-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "4cf68c2a-d351-4f17-a5e2-da5006da2e03" (UID: "4cf68c2a-d351-4f17-a5e2-da5006da2e03"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:47:08 crc kubenswrapper[4826]: I0129 08:47:08.869480 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cf68c2a-d351-4f17-a5e2-da5006da2e03-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "4cf68c2a-d351-4f17-a5e2-da5006da2e03" (UID: "4cf68c2a-d351-4f17-a5e2-da5006da2e03"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:47:08 crc kubenswrapper[4826]: I0129 08:47:08.870843 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cf68c2a-d351-4f17-a5e2-da5006da2e03-inventory" (OuterVolumeSpecName: "inventory") pod "4cf68c2a-d351-4f17-a5e2-da5006da2e03" (UID: "4cf68c2a-d351-4f17-a5e2-da5006da2e03"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:47:08 crc kubenswrapper[4826]: I0129 08:47:08.939430 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4cf68c2a-d351-4f17-a5e2-da5006da2e03-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 08:47:08 crc kubenswrapper[4826]: I0129 08:47:08.939461 4826 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/4cf68c2a-d351-4f17-a5e2-da5006da2e03-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 29 08:47:08 crc kubenswrapper[4826]: I0129 08:47:08.939472 4826 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cf68c2a-d351-4f17-a5e2-da5006da2e03-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:47:08 crc kubenswrapper[4826]: I0129 08:47:08.939482 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sn554\" (UniqueName: \"kubernetes.io/projected/4cf68c2a-d351-4f17-a5e2-da5006da2e03-kube-api-access-sn554\") on node \"crc\" DevicePath \"\"" Jan 29 08:47:08 crc kubenswrapper[4826]: I0129 08:47:08.939493 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4cf68c2a-d351-4f17-a5e2-da5006da2e03-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 08:47:09 crc kubenswrapper[4826]: I0129 08:47:09.160038 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-zrxrk" event={"ID":"4cf68c2a-d351-4f17-a5e2-da5006da2e03","Type":"ContainerDied","Data":"647ed93c738c029389d2ca2cceb6f1f34bafae9fb62a5311793e280534facc15"} Jan 29 08:47:09 crc kubenswrapper[4826]: I0129 08:47:09.160460 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="647ed93c738c029389d2ca2cceb6f1f34bafae9fb62a5311793e280534facc15" Jan 29 08:47:09 crc 
kubenswrapper[4826]: I0129 08:47:09.160523 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-zrxrk"
Jan 29 08:47:09 crc kubenswrapper[4826]: I0129 08:47:09.287969 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-wsjhn"]
Jan 29 08:47:09 crc kubenswrapper[4826]: E0129 08:47:09.288573 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cf68c2a-d351-4f17-a5e2-da5006da2e03" containerName="ovn-openstack-openstack-cell1"
Jan 29 08:47:09 crc kubenswrapper[4826]: I0129 08:47:09.288599 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cf68c2a-d351-4f17-a5e2-da5006da2e03" containerName="ovn-openstack-openstack-cell1"
Jan 29 08:47:09 crc kubenswrapper[4826]: I0129 08:47:09.288910 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cf68c2a-d351-4f17-a5e2-da5006da2e03" containerName="ovn-openstack-openstack-cell1"
Jan 29 08:47:09 crc kubenswrapper[4826]: I0129 08:47:09.289797 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-wsjhn"
Jan 29 08:47:09 crc kubenswrapper[4826]: I0129 08:47:09.294109 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-bz2p6"
Jan 29 08:47:09 crc kubenswrapper[4826]: I0129 08:47:09.294355 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config"
Jan 29 08:47:09 crc kubenswrapper[4826]: I0129 08:47:09.294519 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Jan 29 08:47:09 crc kubenswrapper[4826]: I0129 08:47:09.294697 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 29 08:47:09 crc kubenswrapper[4826]: I0129 08:47:09.294887 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config"
Jan 29 08:47:09 crc kubenswrapper[4826]: I0129 08:47:09.295242 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Jan 29 08:47:09 crc kubenswrapper[4826]: I0129 08:47:09.318845 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-wsjhn"]
Jan 29 08:47:09 crc kubenswrapper[4826]: I0129 08:47:09.349050 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7c28e648-e232-40da-abc7-39b878b704c2-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-wsjhn\" (UID: \"7c28e648-e232-40da-abc7-39b878b704c2\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-wsjhn"
Jan 29 08:47:09 crc kubenswrapper[4826]: I0129 08:47:09.349370 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj56q\" (UniqueName: \"kubernetes.io/projected/7c28e648-e232-40da-abc7-39b878b704c2-kube-api-access-qj56q\") pod \"neutron-metadata-openstack-openstack-cell1-wsjhn\" (UID: \"7c28e648-e232-40da-abc7-39b878b704c2\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-wsjhn"
Jan 29 08:47:09 crc kubenswrapper[4826]: I0129 08:47:09.349484 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7c28e648-e232-40da-abc7-39b878b704c2-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-wsjhn\" (UID: \"7c28e648-e232-40da-abc7-39b878b704c2\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-wsjhn"
Jan 29 08:47:09 crc kubenswrapper[4826]: I0129 08:47:09.349611 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7c28e648-e232-40da-abc7-39b878b704c2-ssh-key-openstack-cell1\") pod \"neutron-metadata-openstack-openstack-cell1-wsjhn\" (UID: \"7c28e648-e232-40da-abc7-39b878b704c2\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-wsjhn"
Jan 29 08:47:09 crc kubenswrapper[4826]: I0129 08:47:09.349731 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c28e648-e232-40da-abc7-39b878b704c2-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-wsjhn\" (UID: \"7c28e648-e232-40da-abc7-39b878b704c2\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-wsjhn"
Jan 29 08:47:09 crc kubenswrapper[4826]: I0129 08:47:09.349868 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c28e648-e232-40da-abc7-39b878b704c2-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-wsjhn\" (UID: \"7c28e648-e232-40da-abc7-39b878b704c2\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-wsjhn"
Jan 29 08:47:09 crc kubenswrapper[4826]: I0129 08:47:09.451357 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7c28e648-e232-40da-abc7-39b878b704c2-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-wsjhn\" (UID: \"7c28e648-e232-40da-abc7-39b878b704c2\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-wsjhn"
Jan 29 08:47:09 crc kubenswrapper[4826]: I0129 08:47:09.451440 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj56q\" (UniqueName: \"kubernetes.io/projected/7c28e648-e232-40da-abc7-39b878b704c2-kube-api-access-qj56q\") pod \"neutron-metadata-openstack-openstack-cell1-wsjhn\" (UID: \"7c28e648-e232-40da-abc7-39b878b704c2\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-wsjhn"
Jan 29 08:47:09 crc kubenswrapper[4826]: I0129 08:47:09.451505 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7c28e648-e232-40da-abc7-39b878b704c2-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-wsjhn\" (UID: \"7c28e648-e232-40da-abc7-39b878b704c2\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-wsjhn"
Jan 29 08:47:09 crc kubenswrapper[4826]: I0129 08:47:09.451551 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7c28e648-e232-40da-abc7-39b878b704c2-ssh-key-openstack-cell1\") pod \"neutron-metadata-openstack-openstack-cell1-wsjhn\" (UID: \"7c28e648-e232-40da-abc7-39b878b704c2\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-wsjhn"
Jan 29 08:47:09 crc kubenswrapper[4826]: I0129 08:47:09.451630 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c28e648-e232-40da-abc7-39b878b704c2-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-wsjhn\" (UID: \"7c28e648-e232-40da-abc7-39b878b704c2\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-wsjhn"
Jan 29 08:47:09 crc kubenswrapper[4826]: I0129 08:47:09.451689 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c28e648-e232-40da-abc7-39b878b704c2-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-wsjhn\" (UID: \"7c28e648-e232-40da-abc7-39b878b704c2\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-wsjhn"
Jan 29 08:47:09 crc kubenswrapper[4826]: I0129 08:47:09.456901 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c28e648-e232-40da-abc7-39b878b704c2-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-wsjhn\" (UID: \"7c28e648-e232-40da-abc7-39b878b704c2\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-wsjhn"
Jan 29 08:47:09 crc kubenswrapper[4826]: I0129 08:47:09.457432 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7c28e648-e232-40da-abc7-39b878b704c2-ssh-key-openstack-cell1\") pod \"neutron-metadata-openstack-openstack-cell1-wsjhn\" (UID: \"7c28e648-e232-40da-abc7-39b878b704c2\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-wsjhn"
Jan 29 08:47:09 crc kubenswrapper[4826]: I0129 08:47:09.457944 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7c28e648-e232-40da-abc7-39b878b704c2-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-wsjhn\" (UID: \"7c28e648-e232-40da-abc7-39b878b704c2\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-wsjhn"
Jan 29 08:47:09 crc kubenswrapper[4826]: I0129 08:47:09.466062 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7c28e648-e232-40da-abc7-39b878b704c2-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-wsjhn\" (UID: \"7c28e648-e232-40da-abc7-39b878b704c2\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-wsjhn"
Jan 29 08:47:09 crc kubenswrapper[4826]: I0129 08:47:09.469641 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj56q\" (UniqueName: \"kubernetes.io/projected/7c28e648-e232-40da-abc7-39b878b704c2-kube-api-access-qj56q\") pod \"neutron-metadata-openstack-openstack-cell1-wsjhn\" (UID: \"7c28e648-e232-40da-abc7-39b878b704c2\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-wsjhn"
Jan 29 08:47:09 crc kubenswrapper[4826]: I0129 08:47:09.474116 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c28e648-e232-40da-abc7-39b878b704c2-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-wsjhn\" (UID: \"7c28e648-e232-40da-abc7-39b878b704c2\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-wsjhn"
Jan 29 08:47:09 crc kubenswrapper[4826]: I0129 08:47:09.622573 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-wsjhn"
Jan 29 08:47:10 crc kubenswrapper[4826]: I0129 08:47:10.132274 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-wsjhn"]
Jan 29 08:47:10 crc kubenswrapper[4826]: W0129 08:47:10.140239 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c28e648_e232_40da_abc7_39b878b704c2.slice/crio-5c136c75359329e42a5e68d0433c6053008950dd411bcb32c7a1f09663e3973c WatchSource:0}: Error finding container 5c136c75359329e42a5e68d0433c6053008950dd411bcb32c7a1f09663e3973c: Status 404 returned error can't find the container with id 5c136c75359329e42a5e68d0433c6053008950dd411bcb32c7a1f09663e3973c
Jan 29 08:47:10 crc kubenswrapper[4826]: I0129 08:47:10.169756 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-wsjhn" event={"ID":"7c28e648-e232-40da-abc7-39b878b704c2","Type":"ContainerStarted","Data":"5c136c75359329e42a5e68d0433c6053008950dd411bcb32c7a1f09663e3973c"}
Jan 29 08:47:11 crc kubenswrapper[4826]: I0129 08:47:11.184494 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-wsjhn" event={"ID":"7c28e648-e232-40da-abc7-39b878b704c2","Type":"ContainerStarted","Data":"5df9ba6698ea61e39ac5c2e317dfd38684f677dc0ac3fc51a7de6e2cd625ea55"}
Jan 29 08:47:11 crc kubenswrapper[4826]: I0129 08:47:11.200630 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-openstack-openstack-cell1-wsjhn" podStartSLOduration=1.715831629 podStartE2EDuration="2.200608343s" podCreationTimestamp="2026-01-29 08:47:09 +0000 UTC" firstStartedPulling="2026-01-29 08:47:10.143446729 +0000 UTC m=+7414.005239798" lastFinishedPulling="2026-01-29 08:47:10.628223453 +0000 UTC m=+7414.490016512" observedRunningTime="2026-01-29 08:47:11.199033791 +0000 UTC m=+7415.060826880" watchObservedRunningTime="2026-01-29 08:47:11.200608343 +0000 UTC m=+7415.062401412"
Jan 29 08:47:13 crc kubenswrapper[4826]: I0129 08:47:13.808982 4826 scope.go:117] "RemoveContainer" containerID="a36305785ee669f844d11a90394473bfc22dd7ebbfc667cd04792ae324d02c88"
Jan 29 08:47:13 crc kubenswrapper[4826]: E0129 08:47:13.809190 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 08:47:15 crc kubenswrapper[4826]: I0129 08:47:15.805926 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2cgsn"
Jan 29 08:47:15 crc kubenswrapper[4826]: I0129 08:47:15.808514 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2cgsn"
Jan 29 08:47:15 crc kubenswrapper[4826]: I0129 08:47:15.879283 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2cgsn"
Jan 29 08:47:16 crc kubenswrapper[4826]: I0129 08:47:16.287849 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2cgsn"
Jan 29 08:47:16 crc kubenswrapper[4826]: I0129 08:47:16.336589 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2cgsn"]
Jan 29 08:47:18 crc kubenswrapper[4826]: I0129 08:47:18.251281 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2cgsn" podUID="43128bfd-e0a1-487e-b5e6-ddfcc178838d" containerName="registry-server" containerID="cri-o://e5203cce697d4ddfcbdc6b74ca629ea0b1cf7435a352625549e2e1eba03dd555" gracePeriod=2
Jan 29 08:47:18 crc kubenswrapper[4826]: I0129 08:47:18.775198 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2cgsn"
Jan 29 08:47:18 crc kubenswrapper[4826]: I0129 08:47:18.970572 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43128bfd-e0a1-487e-b5e6-ddfcc178838d-catalog-content\") pod \"43128bfd-e0a1-487e-b5e6-ddfcc178838d\" (UID: \"43128bfd-e0a1-487e-b5e6-ddfcc178838d\") "
Jan 29 08:47:18 crc kubenswrapper[4826]: I0129 08:47:18.970685 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tm468\" (UniqueName: \"kubernetes.io/projected/43128bfd-e0a1-487e-b5e6-ddfcc178838d-kube-api-access-tm468\") pod \"43128bfd-e0a1-487e-b5e6-ddfcc178838d\" (UID: \"43128bfd-e0a1-487e-b5e6-ddfcc178838d\") "
Jan 29 08:47:18 crc kubenswrapper[4826]: I0129 08:47:18.970761 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43128bfd-e0a1-487e-b5e6-ddfcc178838d-utilities\") pod \"43128bfd-e0a1-487e-b5e6-ddfcc178838d\" (UID: \"43128bfd-e0a1-487e-b5e6-ddfcc178838d\") "
Jan 29 08:47:18 crc kubenswrapper[4826]: I0129 08:47:18.971769 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43128bfd-e0a1-487e-b5e6-ddfcc178838d-utilities" (OuterVolumeSpecName: "utilities") pod "43128bfd-e0a1-487e-b5e6-ddfcc178838d" (UID: "43128bfd-e0a1-487e-b5e6-ddfcc178838d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 08:47:18 crc kubenswrapper[4826]: I0129 08:47:18.980750 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43128bfd-e0a1-487e-b5e6-ddfcc178838d-kube-api-access-tm468" (OuterVolumeSpecName: "kube-api-access-tm468") pod "43128bfd-e0a1-487e-b5e6-ddfcc178838d" (UID: "43128bfd-e0a1-487e-b5e6-ddfcc178838d"). InnerVolumeSpecName "kube-api-access-tm468". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:47:19 crc kubenswrapper[4826]: I0129 08:47:19.073171 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tm468\" (UniqueName: \"kubernetes.io/projected/43128bfd-e0a1-487e-b5e6-ddfcc178838d-kube-api-access-tm468\") on node \"crc\" DevicePath \"\""
Jan 29 08:47:19 crc kubenswrapper[4826]: I0129 08:47:19.073215 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43128bfd-e0a1-487e-b5e6-ddfcc178838d-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 08:47:19 crc kubenswrapper[4826]: I0129 08:47:19.091942 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43128bfd-e0a1-487e-b5e6-ddfcc178838d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "43128bfd-e0a1-487e-b5e6-ddfcc178838d" (UID: "43128bfd-e0a1-487e-b5e6-ddfcc178838d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 08:47:19 crc kubenswrapper[4826]: I0129 08:47:19.175070 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43128bfd-e0a1-487e-b5e6-ddfcc178838d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 08:47:19 crc kubenswrapper[4826]: I0129 08:47:19.262315 4826 generic.go:334] "Generic (PLEG): container finished" podID="43128bfd-e0a1-487e-b5e6-ddfcc178838d" containerID="e5203cce697d4ddfcbdc6b74ca629ea0b1cf7435a352625549e2e1eba03dd555" exitCode=0
Jan 29 08:47:19 crc kubenswrapper[4826]: I0129 08:47:19.262372 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2cgsn" event={"ID":"43128bfd-e0a1-487e-b5e6-ddfcc178838d","Type":"ContainerDied","Data":"e5203cce697d4ddfcbdc6b74ca629ea0b1cf7435a352625549e2e1eba03dd555"}
Jan 29 08:47:19 crc kubenswrapper[4826]: I0129 08:47:19.262380 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2cgsn"
Jan 29 08:47:19 crc kubenswrapper[4826]: I0129 08:47:19.262414 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2cgsn" event={"ID":"43128bfd-e0a1-487e-b5e6-ddfcc178838d","Type":"ContainerDied","Data":"2419aff5a0d40fad610a3b64188208a89a03a5658b6e4f742ecfabf514d0c1ce"}
Jan 29 08:47:19 crc kubenswrapper[4826]: I0129 08:47:19.262442 4826 scope.go:117] "RemoveContainer" containerID="e5203cce697d4ddfcbdc6b74ca629ea0b1cf7435a352625549e2e1eba03dd555"
Jan 29 08:47:19 crc kubenswrapper[4826]: I0129 08:47:19.285205 4826 scope.go:117] "RemoveContainer" containerID="a26aaa3f4b5f2954e79c2b0c8db0b634fba26756a7a575a3e7bf9589e4e62f82"
Jan 29 08:47:19 crc kubenswrapper[4826]: I0129 08:47:19.299631 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2cgsn"]
Jan 29 08:47:19 crc kubenswrapper[4826]: I0129 08:47:19.310564 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2cgsn"]
Jan 29 08:47:19 crc kubenswrapper[4826]: I0129 08:47:19.321911 4826 scope.go:117] "RemoveContainer" containerID="89ab827bbe13f1e1f8a3765a98e7eb81640a23816c4f4e178da463376b5bfa2a"
Jan 29 08:47:19 crc kubenswrapper[4826]: I0129 08:47:19.386530 4826 scope.go:117] "RemoveContainer" containerID="e5203cce697d4ddfcbdc6b74ca629ea0b1cf7435a352625549e2e1eba03dd555"
Jan 29 08:47:19 crc kubenswrapper[4826]: E0129 08:47:19.387593 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5203cce697d4ddfcbdc6b74ca629ea0b1cf7435a352625549e2e1eba03dd555\": container with ID starting with e5203cce697d4ddfcbdc6b74ca629ea0b1cf7435a352625549e2e1eba03dd555 not found: ID does not exist" containerID="e5203cce697d4ddfcbdc6b74ca629ea0b1cf7435a352625549e2e1eba03dd555"
Jan 29 08:47:19 crc kubenswrapper[4826]: I0129 08:47:19.387637 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5203cce697d4ddfcbdc6b74ca629ea0b1cf7435a352625549e2e1eba03dd555"} err="failed to get container status \"e5203cce697d4ddfcbdc6b74ca629ea0b1cf7435a352625549e2e1eba03dd555\": rpc error: code = NotFound desc = could not find container \"e5203cce697d4ddfcbdc6b74ca629ea0b1cf7435a352625549e2e1eba03dd555\": container with ID starting with e5203cce697d4ddfcbdc6b74ca629ea0b1cf7435a352625549e2e1eba03dd555 not found: ID does not exist"
Jan 29 08:47:19 crc kubenswrapper[4826]: I0129 08:47:19.387676 4826 scope.go:117] "RemoveContainer" containerID="a26aaa3f4b5f2954e79c2b0c8db0b634fba26756a7a575a3e7bf9589e4e62f82"
Jan 29 08:47:19 crc kubenswrapper[4826]: E0129 08:47:19.387982 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a26aaa3f4b5f2954e79c2b0c8db0b634fba26756a7a575a3e7bf9589e4e62f82\": container with ID starting with a26aaa3f4b5f2954e79c2b0c8db0b634fba26756a7a575a3e7bf9589e4e62f82 not found: ID does not exist" containerID="a26aaa3f4b5f2954e79c2b0c8db0b634fba26756a7a575a3e7bf9589e4e62f82"
Jan 29 08:47:19 crc kubenswrapper[4826]: I0129 08:47:19.388008 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a26aaa3f4b5f2954e79c2b0c8db0b634fba26756a7a575a3e7bf9589e4e62f82"} err="failed to get container status \"a26aaa3f4b5f2954e79c2b0c8db0b634fba26756a7a575a3e7bf9589e4e62f82\": rpc error: code = NotFound desc = could not find container \"a26aaa3f4b5f2954e79c2b0c8db0b634fba26756a7a575a3e7bf9589e4e62f82\": container with ID starting with a26aaa3f4b5f2954e79c2b0c8db0b634fba26756a7a575a3e7bf9589e4e62f82 not found: ID does not exist"
Jan 29 08:47:19 crc kubenswrapper[4826]: I0129 08:47:19.388053 4826 scope.go:117] "RemoveContainer" containerID="89ab827bbe13f1e1f8a3765a98e7eb81640a23816c4f4e178da463376b5bfa2a"
Jan 29 08:47:19 crc kubenswrapper[4826]: E0129 08:47:19.388415 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89ab827bbe13f1e1f8a3765a98e7eb81640a23816c4f4e178da463376b5bfa2a\": container with ID starting with 89ab827bbe13f1e1f8a3765a98e7eb81640a23816c4f4e178da463376b5bfa2a not found: ID does not exist" containerID="89ab827bbe13f1e1f8a3765a98e7eb81640a23816c4f4e178da463376b5bfa2a"
Jan 29 08:47:19 crc kubenswrapper[4826]: I0129 08:47:19.388448 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89ab827bbe13f1e1f8a3765a98e7eb81640a23816c4f4e178da463376b5bfa2a"} err="failed to get container status \"89ab827bbe13f1e1f8a3765a98e7eb81640a23816c4f4e178da463376b5bfa2a\": rpc error: code = NotFound desc = could not find container \"89ab827bbe13f1e1f8a3765a98e7eb81640a23816c4f4e178da463376b5bfa2a\": container with ID starting with 89ab827bbe13f1e1f8a3765a98e7eb81640a23816c4f4e178da463376b5bfa2a not found: ID does not exist"
Jan 29 08:47:20 crc kubenswrapper[4826]: I0129 08:47:20.819807 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43128bfd-e0a1-487e-b5e6-ddfcc178838d" path="/var/lib/kubelet/pods/43128bfd-e0a1-487e-b5e6-ddfcc178838d/volumes"
Jan 29 08:47:25 crc kubenswrapper[4826]: I0129 08:47:25.809364 4826 scope.go:117] "RemoveContainer" containerID="a36305785ee669f844d11a90394473bfc22dd7ebbfc667cd04792ae324d02c88"
Jan 29 08:47:25 crc kubenswrapper[4826]: E0129 08:47:25.810375 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 08:47:38 crc kubenswrapper[4826]: I0129 08:47:38.808895 4826 scope.go:117] "RemoveContainer" containerID="a36305785ee669f844d11a90394473bfc22dd7ebbfc667cd04792ae324d02c88"
Jan 29 08:47:38 crc kubenswrapper[4826]: E0129 08:47:38.810483 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 08:47:52 crc kubenswrapper[4826]: I0129 08:47:52.809589 4826 scope.go:117] "RemoveContainer" containerID="a36305785ee669f844d11a90394473bfc22dd7ebbfc667cd04792ae324d02c88"
Jan 29 08:47:52 crc kubenswrapper[4826]: E0129 08:47:52.810123 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 08:48:00 crc kubenswrapper[4826]: I0129 08:48:00.694996 4826 generic.go:334] "Generic (PLEG): container finished" podID="7c28e648-e232-40da-abc7-39b878b704c2" containerID="5df9ba6698ea61e39ac5c2e317dfd38684f677dc0ac3fc51a7de6e2cd625ea55" exitCode=0
Jan 29 08:48:00 crc kubenswrapper[4826]: I0129 08:48:00.695067 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-wsjhn" event={"ID":"7c28e648-e232-40da-abc7-39b878b704c2","Type":"ContainerDied","Data":"5df9ba6698ea61e39ac5c2e317dfd38684f677dc0ac3fc51a7de6e2cd625ea55"}
Jan 29 08:48:02 crc kubenswrapper[4826]: I0129 08:48:02.164865 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-wsjhn"
Jan 29 08:48:02 crc kubenswrapper[4826]: I0129 08:48:02.242203 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7c28e648-e232-40da-abc7-39b878b704c2-neutron-ovn-metadata-agent-neutron-config-0\") pod \"7c28e648-e232-40da-abc7-39b878b704c2\" (UID: \"7c28e648-e232-40da-abc7-39b878b704c2\") "
Jan 29 08:48:02 crc kubenswrapper[4826]: I0129 08:48:02.242281 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qj56q\" (UniqueName: \"kubernetes.io/projected/7c28e648-e232-40da-abc7-39b878b704c2-kube-api-access-qj56q\") pod \"7c28e648-e232-40da-abc7-39b878b704c2\" (UID: \"7c28e648-e232-40da-abc7-39b878b704c2\") "
Jan 29 08:48:02 crc kubenswrapper[4826]: I0129 08:48:02.242406 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7c28e648-e232-40da-abc7-39b878b704c2-nova-metadata-neutron-config-0\") pod \"7c28e648-e232-40da-abc7-39b878b704c2\" (UID: \"7c28e648-e232-40da-abc7-39b878b704c2\") "
Jan 29 08:48:02 crc kubenswrapper[4826]: I0129 08:48:02.242551 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7c28e648-e232-40da-abc7-39b878b704c2-ssh-key-openstack-cell1\") pod \"7c28e648-e232-40da-abc7-39b878b704c2\" (UID: \"7c28e648-e232-40da-abc7-39b878b704c2\") "
Jan 29 08:48:02 crc kubenswrapper[4826]: I0129 08:48:02.242595 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c28e648-e232-40da-abc7-39b878b704c2-neutron-metadata-combined-ca-bundle\") pod \"7c28e648-e232-40da-abc7-39b878b704c2\" (UID: \"7c28e648-e232-40da-abc7-39b878b704c2\") "
Jan 29 08:48:02 crc kubenswrapper[4826]: I0129 08:48:02.242691 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c28e648-e232-40da-abc7-39b878b704c2-inventory\") pod \"7c28e648-e232-40da-abc7-39b878b704c2\" (UID: \"7c28e648-e232-40da-abc7-39b878b704c2\") "
Jan 29 08:48:02 crc kubenswrapper[4826]: I0129 08:48:02.247683 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c28e648-e232-40da-abc7-39b878b704c2-kube-api-access-qj56q" (OuterVolumeSpecName: "kube-api-access-qj56q") pod "7c28e648-e232-40da-abc7-39b878b704c2" (UID: "7c28e648-e232-40da-abc7-39b878b704c2"). InnerVolumeSpecName "kube-api-access-qj56q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 08:48:02 crc kubenswrapper[4826]: I0129 08:48:02.250959 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c28e648-e232-40da-abc7-39b878b704c2-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "7c28e648-e232-40da-abc7-39b878b704c2" (UID: "7c28e648-e232-40da-abc7-39b878b704c2"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:48:02 crc kubenswrapper[4826]: I0129 08:48:02.275398 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c28e648-e232-40da-abc7-39b878b704c2-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "7c28e648-e232-40da-abc7-39b878b704c2" (UID: "7c28e648-e232-40da-abc7-39b878b704c2"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:48:02 crc kubenswrapper[4826]: I0129 08:48:02.277692 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c28e648-e232-40da-abc7-39b878b704c2-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "7c28e648-e232-40da-abc7-39b878b704c2" (UID: "7c28e648-e232-40da-abc7-39b878b704c2"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:48:02 crc kubenswrapper[4826]: I0129 08:48:02.279765 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c28e648-e232-40da-abc7-39b878b704c2-inventory" (OuterVolumeSpecName: "inventory") pod "7c28e648-e232-40da-abc7-39b878b704c2" (UID: "7c28e648-e232-40da-abc7-39b878b704c2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:48:02 crc kubenswrapper[4826]: I0129 08:48:02.286277 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c28e648-e232-40da-abc7-39b878b704c2-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "7c28e648-e232-40da-abc7-39b878b704c2" (UID: "7c28e648-e232-40da-abc7-39b878b704c2"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 08:48:02 crc kubenswrapper[4826]: I0129 08:48:02.346042 4826 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7c28e648-e232-40da-abc7-39b878b704c2-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\""
Jan 29 08:48:02 crc kubenswrapper[4826]: I0129 08:48:02.346083 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qj56q\" (UniqueName: \"kubernetes.io/projected/7c28e648-e232-40da-abc7-39b878b704c2-kube-api-access-qj56q\") on node \"crc\" DevicePath \"\""
Jan 29 08:48:02 crc kubenswrapper[4826]: I0129 08:48:02.346097 4826 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7c28e648-e232-40da-abc7-39b878b704c2-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\""
Jan 29 08:48:02 crc kubenswrapper[4826]: I0129 08:48:02.346111 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7c28e648-e232-40da-abc7-39b878b704c2-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\""
Jan 29 08:48:02 crc kubenswrapper[4826]: I0129 08:48:02.346123 4826 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c28e648-e232-40da-abc7-39b878b704c2-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 08:48:02 crc kubenswrapper[4826]: I0129 08:48:02.346139 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c28e648-e232-40da-abc7-39b878b704c2-inventory\") on node \"crc\" DevicePath \"\""
Jan 29 08:48:02 crc kubenswrapper[4826]: I0129 08:48:02.717675 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-wsjhn" event={"ID":"7c28e648-e232-40da-abc7-39b878b704c2","Type":"ContainerDied","Data":"5c136c75359329e42a5e68d0433c6053008950dd411bcb32c7a1f09663e3973c"}
Jan 29 08:48:02 crc kubenswrapper[4826]: I0129 08:48:02.717999 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c136c75359329e42a5e68d0433c6053008950dd411bcb32c7a1f09663e3973c"
Jan 29 08:48:02 crc kubenswrapper[4826]: I0129 08:48:02.717907 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-wsjhn"
Jan 29 08:48:02 crc kubenswrapper[4826]: I0129 08:48:02.823813 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-rd9vm"]
Jan 29 08:48:02 crc kubenswrapper[4826]: E0129 08:48:02.824509 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43128bfd-e0a1-487e-b5e6-ddfcc178838d" containerName="registry-server"
Jan 29 08:48:02 crc kubenswrapper[4826]: I0129 08:48:02.824608 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="43128bfd-e0a1-487e-b5e6-ddfcc178838d" containerName="registry-server"
Jan 29 08:48:02 crc kubenswrapper[4826]: E0129 08:48:02.824683 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43128bfd-e0a1-487e-b5e6-ddfcc178838d" containerName="extract-content"
Jan 29 08:48:02 crc kubenswrapper[4826]: I0129 08:48:02.824741 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="43128bfd-e0a1-487e-b5e6-ddfcc178838d" containerName="extract-content"
Jan 29 08:48:02 crc kubenswrapper[4826]: E0129 08:48:02.824800 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c28e648-e232-40da-abc7-39b878b704c2" containerName="neutron-metadata-openstack-openstack-cell1"
Jan 29 08:48:02 crc kubenswrapper[4826]: I0129 08:48:02.824857 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c28e648-e232-40da-abc7-39b878b704c2" containerName="neutron-metadata-openstack-openstack-cell1"
Jan 29 08:48:02 crc kubenswrapper[4826]: E0129 08:48:02.824918 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43128bfd-e0a1-487e-b5e6-ddfcc178838d" containerName="extract-utilities"
Jan 29 08:48:02 crc kubenswrapper[4826]: I0129 08:48:02.824977 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="43128bfd-e0a1-487e-b5e6-ddfcc178838d" containerName="extract-utilities"
Jan 29 08:48:02 crc kubenswrapper[4826]: I0129 08:48:02.825230 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c28e648-e232-40da-abc7-39b878b704c2" containerName="neutron-metadata-openstack-openstack-cell1"
Jan 29 08:48:02 crc kubenswrapper[4826]: I0129 08:48:02.825323 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="43128bfd-e0a1-487e-b5e6-ddfcc178838d" containerName="registry-server"
Jan 29 08:48:02 crc kubenswrapper[4826]: I0129 08:48:02.826260 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-rd9vm"
Jan 29 08:48:02 crc kubenswrapper[4826]: I0129 08:48:02.828163 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Jan 29 08:48:02 crc kubenswrapper[4826]: I0129 08:48:02.828400 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-rd9vm"]
Jan 29 08:48:02 crc kubenswrapper[4826]: I0129 08:48:02.828729 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret"
Jan 29 08:48:02 crc kubenswrapper[4826]: I0129 08:48:02.829160 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 29 08:48:02 crc kubenswrapper[4826]: I0129 08:48:02.829192 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-bz2p6"
Jan 29 08:48:02 crc kubenswrapper[4826]: I0129 08:48:02.829192 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Jan 29 08:48:02 crc kubenswrapper[4826]: I0129 08:48:02.967219 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/bf6c48ac-7491-4cff-a809-f164ac932d35-ssh-key-openstack-cell1\") pod \"libvirt-openstack-openstack-cell1-rd9vm\" (UID: \"bf6c48ac-7491-4cff-a809-f164ac932d35\") " pod="openstack/libvirt-openstack-openstack-cell1-rd9vm"
Jan 29 08:48:02 crc kubenswrapper[4826]: I0129 08:48:02.967278 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf6c48ac-7491-4cff-a809-f164ac932d35-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-rd9vm\" (UID: \"bf6c48ac-7491-4cff-a809-f164ac932d35\") " pod="openstack/libvirt-openstack-openstack-cell1-rd9vm"
Jan 29 08:48:02 crc kubenswrapper[4826]: I0129 08:48:02.967364 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf6c48ac-7491-4cff-a809-f164ac932d35-inventory\") pod \"libvirt-openstack-openstack-cell1-rd9vm\" (UID: \"bf6c48ac-7491-4cff-a809-f164ac932d35\") " pod="openstack/libvirt-openstack-openstack-cell1-rd9vm"
Jan 29 08:48:02 crc kubenswrapper[4826]: I0129 08:48:02.967458 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/bf6c48ac-7491-4cff-a809-f164ac932d35-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-rd9vm\" (UID: \"bf6c48ac-7491-4cff-a809-f164ac932d35\") " pod="openstack/libvirt-openstack-openstack-cell1-rd9vm"
Jan 29 08:48:02 crc kubenswrapper[4826]: I0129 08:48:02.967585 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhdjh\"
(UniqueName: \"kubernetes.io/projected/bf6c48ac-7491-4cff-a809-f164ac932d35-kube-api-access-dhdjh\") pod \"libvirt-openstack-openstack-cell1-rd9vm\" (UID: \"bf6c48ac-7491-4cff-a809-f164ac932d35\") " pod="openstack/libvirt-openstack-openstack-cell1-rd9vm" Jan 29 08:48:03 crc kubenswrapper[4826]: I0129 08:48:03.069094 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhdjh\" (UniqueName: \"kubernetes.io/projected/bf6c48ac-7491-4cff-a809-f164ac932d35-kube-api-access-dhdjh\") pod \"libvirt-openstack-openstack-cell1-rd9vm\" (UID: \"bf6c48ac-7491-4cff-a809-f164ac932d35\") " pod="openstack/libvirt-openstack-openstack-cell1-rd9vm" Jan 29 08:48:03 crc kubenswrapper[4826]: I0129 08:48:03.069170 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/bf6c48ac-7491-4cff-a809-f164ac932d35-ssh-key-openstack-cell1\") pod \"libvirt-openstack-openstack-cell1-rd9vm\" (UID: \"bf6c48ac-7491-4cff-a809-f164ac932d35\") " pod="openstack/libvirt-openstack-openstack-cell1-rd9vm" Jan 29 08:48:03 crc kubenswrapper[4826]: I0129 08:48:03.069200 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf6c48ac-7491-4cff-a809-f164ac932d35-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-rd9vm\" (UID: \"bf6c48ac-7491-4cff-a809-f164ac932d35\") " pod="openstack/libvirt-openstack-openstack-cell1-rd9vm" Jan 29 08:48:03 crc kubenswrapper[4826]: I0129 08:48:03.069225 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf6c48ac-7491-4cff-a809-f164ac932d35-inventory\") pod \"libvirt-openstack-openstack-cell1-rd9vm\" (UID: \"bf6c48ac-7491-4cff-a809-f164ac932d35\") " pod="openstack/libvirt-openstack-openstack-cell1-rd9vm" Jan 29 08:48:03 crc kubenswrapper[4826]: I0129 08:48:03.069321 
4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/bf6c48ac-7491-4cff-a809-f164ac932d35-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-rd9vm\" (UID: \"bf6c48ac-7491-4cff-a809-f164ac932d35\") " pod="openstack/libvirt-openstack-openstack-cell1-rd9vm" Jan 29 08:48:03 crc kubenswrapper[4826]: I0129 08:48:03.074267 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/bf6c48ac-7491-4cff-a809-f164ac932d35-ssh-key-openstack-cell1\") pod \"libvirt-openstack-openstack-cell1-rd9vm\" (UID: \"bf6c48ac-7491-4cff-a809-f164ac932d35\") " pod="openstack/libvirt-openstack-openstack-cell1-rd9vm" Jan 29 08:48:03 crc kubenswrapper[4826]: I0129 08:48:03.074529 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/bf6c48ac-7491-4cff-a809-f164ac932d35-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-rd9vm\" (UID: \"bf6c48ac-7491-4cff-a809-f164ac932d35\") " pod="openstack/libvirt-openstack-openstack-cell1-rd9vm" Jan 29 08:48:03 crc kubenswrapper[4826]: I0129 08:48:03.078842 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf6c48ac-7491-4cff-a809-f164ac932d35-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-rd9vm\" (UID: \"bf6c48ac-7491-4cff-a809-f164ac932d35\") " pod="openstack/libvirt-openstack-openstack-cell1-rd9vm" Jan 29 08:48:03 crc kubenswrapper[4826]: I0129 08:48:03.086726 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf6c48ac-7491-4cff-a809-f164ac932d35-inventory\") pod \"libvirt-openstack-openstack-cell1-rd9vm\" (UID: \"bf6c48ac-7491-4cff-a809-f164ac932d35\") " pod="openstack/libvirt-openstack-openstack-cell1-rd9vm" Jan 29 
08:48:03 crc kubenswrapper[4826]: I0129 08:48:03.087991 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhdjh\" (UniqueName: \"kubernetes.io/projected/bf6c48ac-7491-4cff-a809-f164ac932d35-kube-api-access-dhdjh\") pod \"libvirt-openstack-openstack-cell1-rd9vm\" (UID: \"bf6c48ac-7491-4cff-a809-f164ac932d35\") " pod="openstack/libvirt-openstack-openstack-cell1-rd9vm" Jan 29 08:48:03 crc kubenswrapper[4826]: I0129 08:48:03.176288 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-rd9vm" Jan 29 08:48:03 crc kubenswrapper[4826]: I0129 08:48:03.701072 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-rd9vm"] Jan 29 08:48:03 crc kubenswrapper[4826]: I0129 08:48:03.710104 4826 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 08:48:03 crc kubenswrapper[4826]: I0129 08:48:03.730573 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-rd9vm" event={"ID":"bf6c48ac-7491-4cff-a809-f164ac932d35","Type":"ContainerStarted","Data":"87be355102b88491db494e644de8de23962cb5d0c04d74a490365967d49cf481"} Jan 29 08:48:04 crc kubenswrapper[4826]: I0129 08:48:04.740500 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-rd9vm" event={"ID":"bf6c48ac-7491-4cff-a809-f164ac932d35","Type":"ContainerStarted","Data":"43baf1beafe0fd49804b1af0ed47a2466f63ceddf99eeed4ee380ef9e2337b2a"} Jan 29 08:48:04 crc kubenswrapper[4826]: I0129 08:48:04.765238 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-openstack-openstack-cell1-rd9vm" podStartSLOduration=2.3183840829999998 podStartE2EDuration="2.765215615s" podCreationTimestamp="2026-01-29 08:48:02 +0000 UTC" firstStartedPulling="2026-01-29 08:48:03.709795427 +0000 UTC m=+7467.571588496" 
lastFinishedPulling="2026-01-29 08:48:04.156626959 +0000 UTC m=+7468.018420028" observedRunningTime="2026-01-29 08:48:04.752796467 +0000 UTC m=+7468.614589536" watchObservedRunningTime="2026-01-29 08:48:04.765215615 +0000 UTC m=+7468.627008684" Jan 29 08:48:06 crc kubenswrapper[4826]: I0129 08:48:06.817736 4826 scope.go:117] "RemoveContainer" containerID="a36305785ee669f844d11a90394473bfc22dd7ebbfc667cd04792ae324d02c88" Jan 29 08:48:06 crc kubenswrapper[4826]: E0129 08:48:06.818259 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:48:20 crc kubenswrapper[4826]: I0129 08:48:20.808771 4826 scope.go:117] "RemoveContainer" containerID="a36305785ee669f844d11a90394473bfc22dd7ebbfc667cd04792ae324d02c88" Jan 29 08:48:20 crc kubenswrapper[4826]: E0129 08:48:20.809836 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:48:33 crc kubenswrapper[4826]: I0129 08:48:33.809082 4826 scope.go:117] "RemoveContainer" containerID="a36305785ee669f844d11a90394473bfc22dd7ebbfc667cd04792ae324d02c88" Jan 29 08:48:33 crc kubenswrapper[4826]: E0129 08:48:33.810071 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:48:47 crc kubenswrapper[4826]: I0129 08:48:47.809678 4826 scope.go:117] "RemoveContainer" containerID="a36305785ee669f844d11a90394473bfc22dd7ebbfc667cd04792ae324d02c88" Jan 29 08:48:47 crc kubenswrapper[4826]: E0129 08:48:47.810671 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:49:01 crc kubenswrapper[4826]: I0129 08:49:01.809592 4826 scope.go:117] "RemoveContainer" containerID="a36305785ee669f844d11a90394473bfc22dd7ebbfc667cd04792ae324d02c88" Jan 29 08:49:01 crc kubenswrapper[4826]: E0129 08:49:01.810453 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:49:12 crc kubenswrapper[4826]: I0129 08:49:12.809209 4826 scope.go:117] "RemoveContainer" containerID="a36305785ee669f844d11a90394473bfc22dd7ebbfc667cd04792ae324d02c88" Jan 29 08:49:12 crc kubenswrapper[4826]: E0129 08:49:12.810134 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:49:23 crc kubenswrapper[4826]: I0129 08:49:23.808741 4826 scope.go:117] "RemoveContainer" containerID="a36305785ee669f844d11a90394473bfc22dd7ebbfc667cd04792ae324d02c88" Jan 29 08:49:23 crc kubenswrapper[4826]: E0129 08:49:23.809474 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:49:38 crc kubenswrapper[4826]: I0129 08:49:38.810930 4826 scope.go:117] "RemoveContainer" containerID="a36305785ee669f844d11a90394473bfc22dd7ebbfc667cd04792ae324d02c88" Jan 29 08:49:38 crc kubenswrapper[4826]: E0129 08:49:38.811976 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:49:52 crc kubenswrapper[4826]: I0129 08:49:52.808855 4826 scope.go:117] "RemoveContainer" containerID="a36305785ee669f844d11a90394473bfc22dd7ebbfc667cd04792ae324d02c88" Jan 29 08:49:52 crc kubenswrapper[4826]: E0129 08:49:52.809658 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:50:01 crc kubenswrapper[4826]: I0129 08:50:01.177029 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fmtd2"] Jan 29 08:50:01 crc kubenswrapper[4826]: I0129 08:50:01.179774 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fmtd2" Jan 29 08:50:01 crc kubenswrapper[4826]: I0129 08:50:01.197533 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fmtd2"] Jan 29 08:50:01 crc kubenswrapper[4826]: I0129 08:50:01.298058 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b65bf6e8-dd19-4b13-b5f5-16f257771c86-utilities\") pod \"certified-operators-fmtd2\" (UID: \"b65bf6e8-dd19-4b13-b5f5-16f257771c86\") " pod="openshift-marketplace/certified-operators-fmtd2" Jan 29 08:50:01 crc kubenswrapper[4826]: I0129 08:50:01.298196 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b65bf6e8-dd19-4b13-b5f5-16f257771c86-catalog-content\") pod \"certified-operators-fmtd2\" (UID: \"b65bf6e8-dd19-4b13-b5f5-16f257771c86\") " pod="openshift-marketplace/certified-operators-fmtd2" Jan 29 08:50:01 crc kubenswrapper[4826]: I0129 08:50:01.298514 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8sfj\" (UniqueName: \"kubernetes.io/projected/b65bf6e8-dd19-4b13-b5f5-16f257771c86-kube-api-access-l8sfj\") pod \"certified-operators-fmtd2\" (UID: 
\"b65bf6e8-dd19-4b13-b5f5-16f257771c86\") " pod="openshift-marketplace/certified-operators-fmtd2" Jan 29 08:50:01 crc kubenswrapper[4826]: I0129 08:50:01.400445 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b65bf6e8-dd19-4b13-b5f5-16f257771c86-catalog-content\") pod \"certified-operators-fmtd2\" (UID: \"b65bf6e8-dd19-4b13-b5f5-16f257771c86\") " pod="openshift-marketplace/certified-operators-fmtd2" Jan 29 08:50:01 crc kubenswrapper[4826]: I0129 08:50:01.401030 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8sfj\" (UniqueName: \"kubernetes.io/projected/b65bf6e8-dd19-4b13-b5f5-16f257771c86-kube-api-access-l8sfj\") pod \"certified-operators-fmtd2\" (UID: \"b65bf6e8-dd19-4b13-b5f5-16f257771c86\") " pod="openshift-marketplace/certified-operators-fmtd2" Jan 29 08:50:01 crc kubenswrapper[4826]: I0129 08:50:01.401188 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b65bf6e8-dd19-4b13-b5f5-16f257771c86-utilities\") pod \"certified-operators-fmtd2\" (UID: \"b65bf6e8-dd19-4b13-b5f5-16f257771c86\") " pod="openshift-marketplace/certified-operators-fmtd2" Jan 29 08:50:01 crc kubenswrapper[4826]: I0129 08:50:01.401494 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b65bf6e8-dd19-4b13-b5f5-16f257771c86-catalog-content\") pod \"certified-operators-fmtd2\" (UID: \"b65bf6e8-dd19-4b13-b5f5-16f257771c86\") " pod="openshift-marketplace/certified-operators-fmtd2" Jan 29 08:50:01 crc kubenswrapper[4826]: I0129 08:50:01.401739 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b65bf6e8-dd19-4b13-b5f5-16f257771c86-utilities\") pod \"certified-operators-fmtd2\" (UID: \"b65bf6e8-dd19-4b13-b5f5-16f257771c86\") 
" pod="openshift-marketplace/certified-operators-fmtd2" Jan 29 08:50:01 crc kubenswrapper[4826]: I0129 08:50:01.427138 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8sfj\" (UniqueName: \"kubernetes.io/projected/b65bf6e8-dd19-4b13-b5f5-16f257771c86-kube-api-access-l8sfj\") pod \"certified-operators-fmtd2\" (UID: \"b65bf6e8-dd19-4b13-b5f5-16f257771c86\") " pod="openshift-marketplace/certified-operators-fmtd2" Jan 29 08:50:01 crc kubenswrapper[4826]: I0129 08:50:01.502126 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fmtd2" Jan 29 08:50:02 crc kubenswrapper[4826]: I0129 08:50:02.071747 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fmtd2"] Jan 29 08:50:02 crc kubenswrapper[4826]: I0129 08:50:02.841263 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fmtd2" event={"ID":"b65bf6e8-dd19-4b13-b5f5-16f257771c86","Type":"ContainerStarted","Data":"68ae594b78162ccb7c73723a99057696c4098a8f65c0b4bd1df3e4c0b1d7090a"} Jan 29 08:50:03 crc kubenswrapper[4826]: I0129 08:50:03.850714 4826 generic.go:334] "Generic (PLEG): container finished" podID="b65bf6e8-dd19-4b13-b5f5-16f257771c86" containerID="e13ddcc05320b9aa04fd95932a2e93a93da547f3bfe79c96a234d542afc31546" exitCode=0 Jan 29 08:50:03 crc kubenswrapper[4826]: I0129 08:50:03.850780 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fmtd2" event={"ID":"b65bf6e8-dd19-4b13-b5f5-16f257771c86","Type":"ContainerDied","Data":"e13ddcc05320b9aa04fd95932a2e93a93da547f3bfe79c96a234d542afc31546"} Jan 29 08:50:04 crc kubenswrapper[4826]: I0129 08:50:04.865609 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fmtd2" 
event={"ID":"b65bf6e8-dd19-4b13-b5f5-16f257771c86","Type":"ContainerStarted","Data":"96ca6411743ea9aa54834e575c868f32b46c25fa3d8263f016b92866dd782a28"} Jan 29 08:50:06 crc kubenswrapper[4826]: I0129 08:50:06.815861 4826 scope.go:117] "RemoveContainer" containerID="a36305785ee669f844d11a90394473bfc22dd7ebbfc667cd04792ae324d02c88" Jan 29 08:50:06 crc kubenswrapper[4826]: I0129 08:50:06.888319 4826 generic.go:334] "Generic (PLEG): container finished" podID="b65bf6e8-dd19-4b13-b5f5-16f257771c86" containerID="96ca6411743ea9aa54834e575c868f32b46c25fa3d8263f016b92866dd782a28" exitCode=0 Jan 29 08:50:06 crc kubenswrapper[4826]: I0129 08:50:06.888352 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fmtd2" event={"ID":"b65bf6e8-dd19-4b13-b5f5-16f257771c86","Type":"ContainerDied","Data":"96ca6411743ea9aa54834e575c868f32b46c25fa3d8263f016b92866dd782a28"} Jan 29 08:50:07 crc kubenswrapper[4826]: I0129 08:50:07.898261 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fmtd2" event={"ID":"b65bf6e8-dd19-4b13-b5f5-16f257771c86","Type":"ContainerStarted","Data":"61c05e2779a5a0fb46fd3fd34cba10de615ba29dfd2f7c352e9e8452860ede7c"} Jan 29 08:50:07 crc kubenswrapper[4826]: I0129 08:50:07.901274 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerStarted","Data":"666f0ccfe2ff5cac72efc625ba1fe3fdefa85172107688bdc1b03cf120de6c3d"} Jan 29 08:50:07 crc kubenswrapper[4826]: I0129 08:50:07.923865 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fmtd2" podStartSLOduration=3.395218834 podStartE2EDuration="6.923838925s" podCreationTimestamp="2026-01-29 08:50:01 +0000 UTC" firstStartedPulling="2026-01-29 08:50:03.852917099 +0000 UTC m=+7587.714710168" lastFinishedPulling="2026-01-29 
08:50:07.38153718 +0000 UTC m=+7591.243330259" observedRunningTime="2026-01-29 08:50:07.917789925 +0000 UTC m=+7591.779582994" watchObservedRunningTime="2026-01-29 08:50:07.923838925 +0000 UTC m=+7591.785631994" Jan 29 08:50:11 crc kubenswrapper[4826]: I0129 08:50:11.502886 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fmtd2" Jan 29 08:50:11 crc kubenswrapper[4826]: I0129 08:50:11.503528 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fmtd2" Jan 29 08:50:11 crc kubenswrapper[4826]: I0129 08:50:11.550078 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fmtd2" Jan 29 08:50:21 crc kubenswrapper[4826]: I0129 08:50:21.546699 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fmtd2" Jan 29 08:50:21 crc kubenswrapper[4826]: I0129 08:50:21.600355 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fmtd2"] Jan 29 08:50:22 crc kubenswrapper[4826]: I0129 08:50:22.544823 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fmtd2" podUID="b65bf6e8-dd19-4b13-b5f5-16f257771c86" containerName="registry-server" containerID="cri-o://61c05e2779a5a0fb46fd3fd34cba10de615ba29dfd2f7c352e9e8452860ede7c" gracePeriod=2 Jan 29 08:50:23 crc kubenswrapper[4826]: I0129 08:50:23.021623 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fmtd2" Jan 29 08:50:23 crc kubenswrapper[4826]: I0129 08:50:23.062643 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8sfj\" (UniqueName: \"kubernetes.io/projected/b65bf6e8-dd19-4b13-b5f5-16f257771c86-kube-api-access-l8sfj\") pod \"b65bf6e8-dd19-4b13-b5f5-16f257771c86\" (UID: \"b65bf6e8-dd19-4b13-b5f5-16f257771c86\") " Jan 29 08:50:23 crc kubenswrapper[4826]: I0129 08:50:23.062738 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b65bf6e8-dd19-4b13-b5f5-16f257771c86-catalog-content\") pod \"b65bf6e8-dd19-4b13-b5f5-16f257771c86\" (UID: \"b65bf6e8-dd19-4b13-b5f5-16f257771c86\") " Jan 29 08:50:23 crc kubenswrapper[4826]: I0129 08:50:23.062949 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b65bf6e8-dd19-4b13-b5f5-16f257771c86-utilities\") pod \"b65bf6e8-dd19-4b13-b5f5-16f257771c86\" (UID: \"b65bf6e8-dd19-4b13-b5f5-16f257771c86\") " Jan 29 08:50:23 crc kubenswrapper[4826]: I0129 08:50:23.064238 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b65bf6e8-dd19-4b13-b5f5-16f257771c86-utilities" (OuterVolumeSpecName: "utilities") pod "b65bf6e8-dd19-4b13-b5f5-16f257771c86" (UID: "b65bf6e8-dd19-4b13-b5f5-16f257771c86"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:50:23 crc kubenswrapper[4826]: I0129 08:50:23.069475 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b65bf6e8-dd19-4b13-b5f5-16f257771c86-kube-api-access-l8sfj" (OuterVolumeSpecName: "kube-api-access-l8sfj") pod "b65bf6e8-dd19-4b13-b5f5-16f257771c86" (UID: "b65bf6e8-dd19-4b13-b5f5-16f257771c86"). InnerVolumeSpecName "kube-api-access-l8sfj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:50:23 crc kubenswrapper[4826]: I0129 08:50:23.120880 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b65bf6e8-dd19-4b13-b5f5-16f257771c86-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b65bf6e8-dd19-4b13-b5f5-16f257771c86" (UID: "b65bf6e8-dd19-4b13-b5f5-16f257771c86"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:50:23 crc kubenswrapper[4826]: I0129 08:50:23.166412 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b65bf6e8-dd19-4b13-b5f5-16f257771c86-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:50:23 crc kubenswrapper[4826]: I0129 08:50:23.166475 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8sfj\" (UniqueName: \"kubernetes.io/projected/b65bf6e8-dd19-4b13-b5f5-16f257771c86-kube-api-access-l8sfj\") on node \"crc\" DevicePath \"\"" Jan 29 08:50:23 crc kubenswrapper[4826]: I0129 08:50:23.166489 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b65bf6e8-dd19-4b13-b5f5-16f257771c86-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:50:23 crc kubenswrapper[4826]: I0129 08:50:23.557985 4826 generic.go:334] "Generic (PLEG): container finished" podID="b65bf6e8-dd19-4b13-b5f5-16f257771c86" containerID="61c05e2779a5a0fb46fd3fd34cba10de615ba29dfd2f7c352e9e8452860ede7c" exitCode=0 Jan 29 08:50:23 crc kubenswrapper[4826]: I0129 08:50:23.558088 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fmtd2" Jan 29 08:50:23 crc kubenswrapper[4826]: I0129 08:50:23.558088 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fmtd2" event={"ID":"b65bf6e8-dd19-4b13-b5f5-16f257771c86","Type":"ContainerDied","Data":"61c05e2779a5a0fb46fd3fd34cba10de615ba29dfd2f7c352e9e8452860ede7c"} Jan 29 08:50:23 crc kubenswrapper[4826]: I0129 08:50:23.558697 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fmtd2" event={"ID":"b65bf6e8-dd19-4b13-b5f5-16f257771c86","Type":"ContainerDied","Data":"68ae594b78162ccb7c73723a99057696c4098a8f65c0b4bd1df3e4c0b1d7090a"} Jan 29 08:50:23 crc kubenswrapper[4826]: I0129 08:50:23.558744 4826 scope.go:117] "RemoveContainer" containerID="61c05e2779a5a0fb46fd3fd34cba10de615ba29dfd2f7c352e9e8452860ede7c" Jan 29 08:50:23 crc kubenswrapper[4826]: I0129 08:50:23.595822 4826 scope.go:117] "RemoveContainer" containerID="96ca6411743ea9aa54834e575c868f32b46c25fa3d8263f016b92866dd782a28" Jan 29 08:50:23 crc kubenswrapper[4826]: I0129 08:50:23.613157 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fmtd2"] Jan 29 08:50:23 crc kubenswrapper[4826]: I0129 08:50:23.635076 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fmtd2"] Jan 29 08:50:23 crc kubenswrapper[4826]: I0129 08:50:23.644831 4826 scope.go:117] "RemoveContainer" containerID="e13ddcc05320b9aa04fd95932a2e93a93da547f3bfe79c96a234d542afc31546" Jan 29 08:50:23 crc kubenswrapper[4826]: I0129 08:50:23.676095 4826 scope.go:117] "RemoveContainer" containerID="61c05e2779a5a0fb46fd3fd34cba10de615ba29dfd2f7c352e9e8452860ede7c" Jan 29 08:50:23 crc kubenswrapper[4826]: E0129 08:50:23.676693 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"61c05e2779a5a0fb46fd3fd34cba10de615ba29dfd2f7c352e9e8452860ede7c\": container with ID starting with 61c05e2779a5a0fb46fd3fd34cba10de615ba29dfd2f7c352e9e8452860ede7c not found: ID does not exist" containerID="61c05e2779a5a0fb46fd3fd34cba10de615ba29dfd2f7c352e9e8452860ede7c" Jan 29 08:50:23 crc kubenswrapper[4826]: I0129 08:50:23.676743 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61c05e2779a5a0fb46fd3fd34cba10de615ba29dfd2f7c352e9e8452860ede7c"} err="failed to get container status \"61c05e2779a5a0fb46fd3fd34cba10de615ba29dfd2f7c352e9e8452860ede7c\": rpc error: code = NotFound desc = could not find container \"61c05e2779a5a0fb46fd3fd34cba10de615ba29dfd2f7c352e9e8452860ede7c\": container with ID starting with 61c05e2779a5a0fb46fd3fd34cba10de615ba29dfd2f7c352e9e8452860ede7c not found: ID does not exist" Jan 29 08:50:23 crc kubenswrapper[4826]: I0129 08:50:23.676773 4826 scope.go:117] "RemoveContainer" containerID="96ca6411743ea9aa54834e575c868f32b46c25fa3d8263f016b92866dd782a28" Jan 29 08:50:23 crc kubenswrapper[4826]: E0129 08:50:23.677128 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96ca6411743ea9aa54834e575c868f32b46c25fa3d8263f016b92866dd782a28\": container with ID starting with 96ca6411743ea9aa54834e575c868f32b46c25fa3d8263f016b92866dd782a28 not found: ID does not exist" containerID="96ca6411743ea9aa54834e575c868f32b46c25fa3d8263f016b92866dd782a28" Jan 29 08:50:23 crc kubenswrapper[4826]: I0129 08:50:23.677153 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96ca6411743ea9aa54834e575c868f32b46c25fa3d8263f016b92866dd782a28"} err="failed to get container status \"96ca6411743ea9aa54834e575c868f32b46c25fa3d8263f016b92866dd782a28\": rpc error: code = NotFound desc = could not find container \"96ca6411743ea9aa54834e575c868f32b46c25fa3d8263f016b92866dd782a28\": container with ID 
starting with 96ca6411743ea9aa54834e575c868f32b46c25fa3d8263f016b92866dd782a28 not found: ID does not exist" Jan 29 08:50:23 crc kubenswrapper[4826]: I0129 08:50:23.677168 4826 scope.go:117] "RemoveContainer" containerID="e13ddcc05320b9aa04fd95932a2e93a93da547f3bfe79c96a234d542afc31546" Jan 29 08:50:23 crc kubenswrapper[4826]: E0129 08:50:23.677571 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e13ddcc05320b9aa04fd95932a2e93a93da547f3bfe79c96a234d542afc31546\": container with ID starting with e13ddcc05320b9aa04fd95932a2e93a93da547f3bfe79c96a234d542afc31546 not found: ID does not exist" containerID="e13ddcc05320b9aa04fd95932a2e93a93da547f3bfe79c96a234d542afc31546" Jan 29 08:50:23 crc kubenswrapper[4826]: I0129 08:50:23.677638 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e13ddcc05320b9aa04fd95932a2e93a93da547f3bfe79c96a234d542afc31546"} err="failed to get container status \"e13ddcc05320b9aa04fd95932a2e93a93da547f3bfe79c96a234d542afc31546\": rpc error: code = NotFound desc = could not find container \"e13ddcc05320b9aa04fd95932a2e93a93da547f3bfe79c96a234d542afc31546\": container with ID starting with e13ddcc05320b9aa04fd95932a2e93a93da547f3bfe79c96a234d542afc31546 not found: ID does not exist" Jan 29 08:50:24 crc kubenswrapper[4826]: I0129 08:50:24.825863 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b65bf6e8-dd19-4b13-b5f5-16f257771c86" path="/var/lib/kubelet/pods/b65bf6e8-dd19-4b13-b5f5-16f257771c86/volumes" Jan 29 08:52:24 crc kubenswrapper[4826]: I0129 08:52:24.718493 4826 generic.go:334] "Generic (PLEG): container finished" podID="bf6c48ac-7491-4cff-a809-f164ac932d35" containerID="43baf1beafe0fd49804b1af0ed47a2466f63ceddf99eeed4ee380ef9e2337b2a" exitCode=0 Jan 29 08:52:24 crc kubenswrapper[4826]: I0129 08:52:24.718606 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/libvirt-openstack-openstack-cell1-rd9vm" event={"ID":"bf6c48ac-7491-4cff-a809-f164ac932d35","Type":"ContainerDied","Data":"43baf1beafe0fd49804b1af0ed47a2466f63ceddf99eeed4ee380ef9e2337b2a"} Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.122142 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-rd9vm" Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.181262 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhdjh\" (UniqueName: \"kubernetes.io/projected/bf6c48ac-7491-4cff-a809-f164ac932d35-kube-api-access-dhdjh\") pod \"bf6c48ac-7491-4cff-a809-f164ac932d35\" (UID: \"bf6c48ac-7491-4cff-a809-f164ac932d35\") " Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.182527 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf6c48ac-7491-4cff-a809-f164ac932d35-inventory\") pod \"bf6c48ac-7491-4cff-a809-f164ac932d35\" (UID: \"bf6c48ac-7491-4cff-a809-f164ac932d35\") " Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.182669 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/bf6c48ac-7491-4cff-a809-f164ac932d35-ssh-key-openstack-cell1\") pod \"bf6c48ac-7491-4cff-a809-f164ac932d35\" (UID: \"bf6c48ac-7491-4cff-a809-f164ac932d35\") " Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.182707 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf6c48ac-7491-4cff-a809-f164ac932d35-libvirt-combined-ca-bundle\") pod \"bf6c48ac-7491-4cff-a809-f164ac932d35\" (UID: \"bf6c48ac-7491-4cff-a809-f164ac932d35\") " Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.182728 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/bf6c48ac-7491-4cff-a809-f164ac932d35-libvirt-secret-0\") pod \"bf6c48ac-7491-4cff-a809-f164ac932d35\" (UID: \"bf6c48ac-7491-4cff-a809-f164ac932d35\") " Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.187097 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf6c48ac-7491-4cff-a809-f164ac932d35-kube-api-access-dhdjh" (OuterVolumeSpecName: "kube-api-access-dhdjh") pod "bf6c48ac-7491-4cff-a809-f164ac932d35" (UID: "bf6c48ac-7491-4cff-a809-f164ac932d35"). InnerVolumeSpecName "kube-api-access-dhdjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.189639 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf6c48ac-7491-4cff-a809-f164ac932d35-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "bf6c48ac-7491-4cff-a809-f164ac932d35" (UID: "bf6c48ac-7491-4cff-a809-f164ac932d35"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.211896 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf6c48ac-7491-4cff-a809-f164ac932d35-inventory" (OuterVolumeSpecName: "inventory") pod "bf6c48ac-7491-4cff-a809-f164ac932d35" (UID: "bf6c48ac-7491-4cff-a809-f164ac932d35"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.219730 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf6c48ac-7491-4cff-a809-f164ac932d35-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "bf6c48ac-7491-4cff-a809-f164ac932d35" (UID: "bf6c48ac-7491-4cff-a809-f164ac932d35"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.220592 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf6c48ac-7491-4cff-a809-f164ac932d35-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "bf6c48ac-7491-4cff-a809-f164ac932d35" (UID: "bf6c48ac-7491-4cff-a809-f164ac932d35"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.285273 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf6c48ac-7491-4cff-a809-f164ac932d35-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.285317 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/bf6c48ac-7491-4cff-a809-f164ac932d35-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.285327 4826 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf6c48ac-7491-4cff-a809-f164ac932d35-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.285338 4826 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/bf6c48ac-7491-4cff-a809-f164ac932d35-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.285349 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhdjh\" (UniqueName: \"kubernetes.io/projected/bf6c48ac-7491-4cff-a809-f164ac932d35-kube-api-access-dhdjh\") on node \"crc\" DevicePath \"\"" Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.736550 4826 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/libvirt-openstack-openstack-cell1-rd9vm" event={"ID":"bf6c48ac-7491-4cff-a809-f164ac932d35","Type":"ContainerDied","Data":"87be355102b88491db494e644de8de23962cb5d0c04d74a490365967d49cf481"} Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.736586 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87be355102b88491db494e644de8de23962cb5d0c04d74a490365967d49cf481" Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.736610 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-rd9vm" Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.850232 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-mclk4"] Jan 29 08:52:26 crc kubenswrapper[4826]: E0129 08:52:26.850910 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b65bf6e8-dd19-4b13-b5f5-16f257771c86" containerName="extract-content" Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.851012 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="b65bf6e8-dd19-4b13-b5f5-16f257771c86" containerName="extract-content" Jan 29 08:52:26 crc kubenswrapper[4826]: E0129 08:52:26.851098 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b65bf6e8-dd19-4b13-b5f5-16f257771c86" containerName="extract-utilities" Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.851186 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="b65bf6e8-dd19-4b13-b5f5-16f257771c86" containerName="extract-utilities" Jan 29 08:52:26 crc kubenswrapper[4826]: E0129 08:52:26.851317 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b65bf6e8-dd19-4b13-b5f5-16f257771c86" containerName="registry-server" Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.851404 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="b65bf6e8-dd19-4b13-b5f5-16f257771c86" 
containerName="registry-server" Jan 29 08:52:26 crc kubenswrapper[4826]: E0129 08:52:26.851473 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf6c48ac-7491-4cff-a809-f164ac932d35" containerName="libvirt-openstack-openstack-cell1" Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.851529 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf6c48ac-7491-4cff-a809-f164ac932d35" containerName="libvirt-openstack-openstack-cell1" Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.851790 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="b65bf6e8-dd19-4b13-b5f5-16f257771c86" containerName="registry-server" Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.851977 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf6c48ac-7491-4cff-a809-f164ac932d35" containerName="libvirt-openstack-openstack-cell1" Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.852747 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-mclk4" Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.863162 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.863349 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.864266 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-mclk4"] Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.864458 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.865116 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-bz2p6" Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.865160 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.865280 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.865326 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.897648 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-mclk4\" (UID: \"df8a8f37-849b-4d34-8527-edc1fe6aa082\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mclk4" Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.898034 4826 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-mclk4\" (UID: \"df8a8f37-849b-4d34-8527-edc1fe6aa082\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mclk4" Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.898113 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btqkd\" (UniqueName: \"kubernetes.io/projected/df8a8f37-849b-4d34-8527-edc1fe6aa082-kube-api-access-btqkd\") pod \"nova-cell1-openstack-openstack-cell1-mclk4\" (UID: \"df8a8f37-849b-4d34-8527-edc1fe6aa082\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mclk4" Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.898144 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-openstack-cell1-mclk4\" (UID: \"df8a8f37-849b-4d34-8527-edc1fe6aa082\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mclk4" Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.898280 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/df8a8f37-849b-4d34-8527-edc1fe6aa082-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-mclk4\" (UID: \"df8a8f37-849b-4d34-8527-edc1fe6aa082\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mclk4" Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.898423 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-mclk4\" (UID: \"df8a8f37-849b-4d34-8527-edc1fe6aa082\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mclk4" Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.898545 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-inventory\") pod \"nova-cell1-openstack-openstack-cell1-mclk4\" (UID: \"df8a8f37-849b-4d34-8527-edc1fe6aa082\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mclk4" Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.899458 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-mclk4\" (UID: \"df8a8f37-849b-4d34-8527-edc1fe6aa082\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mclk4" Jan 29 08:52:26 crc kubenswrapper[4826]: I0129 08:52:26.899593 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-mclk4\" (UID: \"df8a8f37-849b-4d34-8527-edc1fe6aa082\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mclk4" Jan 29 08:52:27 crc kubenswrapper[4826]: I0129 08:52:27.002017 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/df8a8f37-849b-4d34-8527-edc1fe6aa082-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-mclk4\" (UID: \"df8a8f37-849b-4d34-8527-edc1fe6aa082\") " 
pod="openstack/nova-cell1-openstack-openstack-cell1-mclk4" Jan 29 08:52:27 crc kubenswrapper[4826]: I0129 08:52:27.002112 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-mclk4\" (UID: \"df8a8f37-849b-4d34-8527-edc1fe6aa082\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mclk4" Jan 29 08:52:27 crc kubenswrapper[4826]: I0129 08:52:27.002176 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-inventory\") pod \"nova-cell1-openstack-openstack-cell1-mclk4\" (UID: \"df8a8f37-849b-4d34-8527-edc1fe6aa082\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mclk4" Jan 29 08:52:27 crc kubenswrapper[4826]: I0129 08:52:27.002209 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-mclk4\" (UID: \"df8a8f37-849b-4d34-8527-edc1fe6aa082\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mclk4" Jan 29 08:52:27 crc kubenswrapper[4826]: I0129 08:52:27.002234 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-mclk4\" (UID: \"df8a8f37-849b-4d34-8527-edc1fe6aa082\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mclk4" Jan 29 08:52:27 crc kubenswrapper[4826]: I0129 08:52:27.002260 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-mclk4\" (UID: \"df8a8f37-849b-4d34-8527-edc1fe6aa082\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mclk4" Jan 29 08:52:27 crc kubenswrapper[4826]: I0129 08:52:27.002287 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-mclk4\" (UID: \"df8a8f37-849b-4d34-8527-edc1fe6aa082\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mclk4" Jan 29 08:52:27 crc kubenswrapper[4826]: I0129 08:52:27.002338 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btqkd\" (UniqueName: \"kubernetes.io/projected/df8a8f37-849b-4d34-8527-edc1fe6aa082-kube-api-access-btqkd\") pod \"nova-cell1-openstack-openstack-cell1-mclk4\" (UID: \"df8a8f37-849b-4d34-8527-edc1fe6aa082\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mclk4" Jan 29 08:52:27 crc kubenswrapper[4826]: I0129 08:52:27.002361 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-openstack-cell1-mclk4\" (UID: \"df8a8f37-849b-4d34-8527-edc1fe6aa082\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mclk4" Jan 29 08:52:27 crc kubenswrapper[4826]: I0129 08:52:27.003225 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/df8a8f37-849b-4d34-8527-edc1fe6aa082-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-mclk4\" (UID: \"df8a8f37-849b-4d34-8527-edc1fe6aa082\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mclk4" Jan 29 
08:52:27 crc kubenswrapper[4826]: I0129 08:52:27.006766 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-openstack-cell1-mclk4\" (UID: \"df8a8f37-849b-4d34-8527-edc1fe6aa082\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mclk4" Jan 29 08:52:27 crc kubenswrapper[4826]: I0129 08:52:27.007084 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-mclk4\" (UID: \"df8a8f37-849b-4d34-8527-edc1fe6aa082\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mclk4" Jan 29 08:52:27 crc kubenswrapper[4826]: I0129 08:52:27.007108 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-inventory\") pod \"nova-cell1-openstack-openstack-cell1-mclk4\" (UID: \"df8a8f37-849b-4d34-8527-edc1fe6aa082\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mclk4" Jan 29 08:52:27 crc kubenswrapper[4826]: I0129 08:52:27.007363 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-mclk4\" (UID: \"df8a8f37-849b-4d34-8527-edc1fe6aa082\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mclk4" Jan 29 08:52:27 crc kubenswrapper[4826]: I0129 08:52:27.007892 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-mclk4\" 
(UID: \"df8a8f37-849b-4d34-8527-edc1fe6aa082\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mclk4" Jan 29 08:52:27 crc kubenswrapper[4826]: I0129 08:52:27.008798 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-mclk4\" (UID: \"df8a8f37-849b-4d34-8527-edc1fe6aa082\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mclk4" Jan 29 08:52:27 crc kubenswrapper[4826]: I0129 08:52:27.018867 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-mclk4\" (UID: \"df8a8f37-849b-4d34-8527-edc1fe6aa082\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mclk4" Jan 29 08:52:27 crc kubenswrapper[4826]: I0129 08:52:27.021441 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btqkd\" (UniqueName: \"kubernetes.io/projected/df8a8f37-849b-4d34-8527-edc1fe6aa082-kube-api-access-btqkd\") pod \"nova-cell1-openstack-openstack-cell1-mclk4\" (UID: \"df8a8f37-849b-4d34-8527-edc1fe6aa082\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mclk4" Jan 29 08:52:27 crc kubenswrapper[4826]: I0129 08:52:27.170085 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-mclk4" Jan 29 08:52:27 crc kubenswrapper[4826]: I0129 08:52:27.786710 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-mclk4"] Jan 29 08:52:28 crc kubenswrapper[4826]: I0129 08:52:28.769208 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-mclk4" event={"ID":"df8a8f37-849b-4d34-8527-edc1fe6aa082","Type":"ContainerStarted","Data":"7fa2baa237ce66585a1f1dc0b45e128b09a086acbfb58ddee7eaa7cf2c9cfbaf"} Jan 29 08:52:28 crc kubenswrapper[4826]: I0129 08:52:28.769622 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-mclk4" event={"ID":"df8a8f37-849b-4d34-8527-edc1fe6aa082","Type":"ContainerStarted","Data":"786b230c2fa3231c5074b94361231e85b0be8a01f842cee2edae68834a5ed3fc"} Jan 29 08:52:28 crc kubenswrapper[4826]: I0129 08:52:28.793035 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-openstack-cell1-mclk4" podStartSLOduration=2.361861529 podStartE2EDuration="2.793014228s" podCreationTimestamp="2026-01-29 08:52:26 +0000 UTC" firstStartedPulling="2026-01-29 08:52:27.789476625 +0000 UTC m=+7731.651269694" lastFinishedPulling="2026-01-29 08:52:28.220629334 +0000 UTC m=+7732.082422393" observedRunningTime="2026-01-29 08:52:28.790146392 +0000 UTC m=+7732.651939461" watchObservedRunningTime="2026-01-29 08:52:28.793014228 +0000 UTC m=+7732.654807297" Jan 29 08:52:35 crc kubenswrapper[4826]: I0129 08:52:35.656312 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:52:35 crc kubenswrapper[4826]: I0129 08:52:35.658355 4826 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:52:40 crc kubenswrapper[4826]: I0129 08:52:40.085644 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9cgrd"] Jan 29 08:52:40 crc kubenswrapper[4826]: I0129 08:52:40.088901 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9cgrd" Jan 29 08:52:40 crc kubenswrapper[4826]: I0129 08:52:40.114377 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9cgrd"] Jan 29 08:52:40 crc kubenswrapper[4826]: I0129 08:52:40.194687 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75de5ba1-6624-494e-bb35-022dc4e2b150-catalog-content\") pod \"community-operators-9cgrd\" (UID: \"75de5ba1-6624-494e-bb35-022dc4e2b150\") " pod="openshift-marketplace/community-operators-9cgrd" Jan 29 08:52:40 crc kubenswrapper[4826]: I0129 08:52:40.194783 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jnl2\" (UniqueName: \"kubernetes.io/projected/75de5ba1-6624-494e-bb35-022dc4e2b150-kube-api-access-6jnl2\") pod \"community-operators-9cgrd\" (UID: \"75de5ba1-6624-494e-bb35-022dc4e2b150\") " pod="openshift-marketplace/community-operators-9cgrd" Jan 29 08:52:40 crc kubenswrapper[4826]: I0129 08:52:40.194864 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75de5ba1-6624-494e-bb35-022dc4e2b150-utilities\") pod \"community-operators-9cgrd\" (UID: 
\"75de5ba1-6624-494e-bb35-022dc4e2b150\") " pod="openshift-marketplace/community-operators-9cgrd" Jan 29 08:52:40 crc kubenswrapper[4826]: I0129 08:52:40.296840 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jnl2\" (UniqueName: \"kubernetes.io/projected/75de5ba1-6624-494e-bb35-022dc4e2b150-kube-api-access-6jnl2\") pod \"community-operators-9cgrd\" (UID: \"75de5ba1-6624-494e-bb35-022dc4e2b150\") " pod="openshift-marketplace/community-operators-9cgrd" Jan 29 08:52:40 crc kubenswrapper[4826]: I0129 08:52:40.296941 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75de5ba1-6624-494e-bb35-022dc4e2b150-utilities\") pod \"community-operators-9cgrd\" (UID: \"75de5ba1-6624-494e-bb35-022dc4e2b150\") " pod="openshift-marketplace/community-operators-9cgrd" Jan 29 08:52:40 crc kubenswrapper[4826]: I0129 08:52:40.297085 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75de5ba1-6624-494e-bb35-022dc4e2b150-catalog-content\") pod \"community-operators-9cgrd\" (UID: \"75de5ba1-6624-494e-bb35-022dc4e2b150\") " pod="openshift-marketplace/community-operators-9cgrd" Jan 29 08:52:40 crc kubenswrapper[4826]: I0129 08:52:40.297685 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75de5ba1-6624-494e-bb35-022dc4e2b150-catalog-content\") pod \"community-operators-9cgrd\" (UID: \"75de5ba1-6624-494e-bb35-022dc4e2b150\") " pod="openshift-marketplace/community-operators-9cgrd" Jan 29 08:52:40 crc kubenswrapper[4826]: I0129 08:52:40.297697 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75de5ba1-6624-494e-bb35-022dc4e2b150-utilities\") pod \"community-operators-9cgrd\" (UID: \"75de5ba1-6624-494e-bb35-022dc4e2b150\") 
" pod="openshift-marketplace/community-operators-9cgrd" Jan 29 08:52:40 crc kubenswrapper[4826]: I0129 08:52:40.315057 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jnl2\" (UniqueName: \"kubernetes.io/projected/75de5ba1-6624-494e-bb35-022dc4e2b150-kube-api-access-6jnl2\") pod \"community-operators-9cgrd\" (UID: \"75de5ba1-6624-494e-bb35-022dc4e2b150\") " pod="openshift-marketplace/community-operators-9cgrd" Jan 29 08:52:40 crc kubenswrapper[4826]: I0129 08:52:40.420518 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9cgrd" Jan 29 08:52:40 crc kubenswrapper[4826]: I0129 08:52:40.974914 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9cgrd"] Jan 29 08:52:41 crc kubenswrapper[4826]: I0129 08:52:41.891885 4826 generic.go:334] "Generic (PLEG): container finished" podID="75de5ba1-6624-494e-bb35-022dc4e2b150" containerID="45ca603a4843afbd64324f4c81c1b10c44144d5ff7319d88aa6c3e5b32cc8359" exitCode=0 Jan 29 08:52:41 crc kubenswrapper[4826]: I0129 08:52:41.891979 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9cgrd" event={"ID":"75de5ba1-6624-494e-bb35-022dc4e2b150","Type":"ContainerDied","Data":"45ca603a4843afbd64324f4c81c1b10c44144d5ff7319d88aa6c3e5b32cc8359"} Jan 29 08:52:41 crc kubenswrapper[4826]: I0129 08:52:41.894041 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9cgrd" event={"ID":"75de5ba1-6624-494e-bb35-022dc4e2b150","Type":"ContainerStarted","Data":"1052bd97e67ea1dc3aed39e5418e3b07ec06f66df3bba6c667eae944439d115a"} Jan 29 08:52:42 crc kubenswrapper[4826]: I0129 08:52:42.906776 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9cgrd" 
event={"ID":"75de5ba1-6624-494e-bb35-022dc4e2b150","Type":"ContainerStarted","Data":"38b04a7e07e7e0252bfeca37a7b7e7a41815683d5111312e7a15d84663e33015"} Jan 29 08:52:43 crc kubenswrapper[4826]: I0129 08:52:43.918828 4826 generic.go:334] "Generic (PLEG): container finished" podID="75de5ba1-6624-494e-bb35-022dc4e2b150" containerID="38b04a7e07e7e0252bfeca37a7b7e7a41815683d5111312e7a15d84663e33015" exitCode=0 Jan 29 08:52:43 crc kubenswrapper[4826]: I0129 08:52:43.919521 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9cgrd" event={"ID":"75de5ba1-6624-494e-bb35-022dc4e2b150","Type":"ContainerDied","Data":"38b04a7e07e7e0252bfeca37a7b7e7a41815683d5111312e7a15d84663e33015"} Jan 29 08:52:44 crc kubenswrapper[4826]: I0129 08:52:44.932628 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9cgrd" event={"ID":"75de5ba1-6624-494e-bb35-022dc4e2b150","Type":"ContainerStarted","Data":"e7fe7f6e349f7daa0fc7243616f35b769114f579bec7931565c34141e756b9d5"} Jan 29 08:52:44 crc kubenswrapper[4826]: I0129 08:52:44.950249 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9cgrd" podStartSLOduration=2.509285949 podStartE2EDuration="4.950232287s" podCreationTimestamp="2026-01-29 08:52:40 +0000 UTC" firstStartedPulling="2026-01-29 08:52:41.893320252 +0000 UTC m=+7745.755113321" lastFinishedPulling="2026-01-29 08:52:44.33426659 +0000 UTC m=+7748.196059659" observedRunningTime="2026-01-29 08:52:44.947827303 +0000 UTC m=+7748.809620372" watchObservedRunningTime="2026-01-29 08:52:44.950232287 +0000 UTC m=+7748.812025356" Jan 29 08:52:50 crc kubenswrapper[4826]: I0129 08:52:50.422425 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9cgrd" Jan 29 08:52:50 crc kubenswrapper[4826]: I0129 08:52:50.423059 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-9cgrd" Jan 29 08:52:50 crc kubenswrapper[4826]: I0129 08:52:50.469513 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9cgrd" Jan 29 08:52:51 crc kubenswrapper[4826]: I0129 08:52:51.034374 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9cgrd" Jan 29 08:52:51 crc kubenswrapper[4826]: I0129 08:52:51.091258 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9cgrd"] Jan 29 08:52:53 crc kubenswrapper[4826]: I0129 08:52:53.005054 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9cgrd" podUID="75de5ba1-6624-494e-bb35-022dc4e2b150" containerName="registry-server" containerID="cri-o://e7fe7f6e349f7daa0fc7243616f35b769114f579bec7931565c34141e756b9d5" gracePeriod=2 Jan 29 08:52:53 crc kubenswrapper[4826]: I0129 08:52:53.495473 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9cgrd" Jan 29 08:52:53 crc kubenswrapper[4826]: I0129 08:52:53.585904 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75de5ba1-6624-494e-bb35-022dc4e2b150-catalog-content\") pod \"75de5ba1-6624-494e-bb35-022dc4e2b150\" (UID: \"75de5ba1-6624-494e-bb35-022dc4e2b150\") " Jan 29 08:52:53 crc kubenswrapper[4826]: I0129 08:52:53.586081 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75de5ba1-6624-494e-bb35-022dc4e2b150-utilities\") pod \"75de5ba1-6624-494e-bb35-022dc4e2b150\" (UID: \"75de5ba1-6624-494e-bb35-022dc4e2b150\") " Jan 29 08:52:53 crc kubenswrapper[4826]: I0129 08:52:53.586256 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jnl2\" (UniqueName: \"kubernetes.io/projected/75de5ba1-6624-494e-bb35-022dc4e2b150-kube-api-access-6jnl2\") pod \"75de5ba1-6624-494e-bb35-022dc4e2b150\" (UID: \"75de5ba1-6624-494e-bb35-022dc4e2b150\") " Jan 29 08:52:53 crc kubenswrapper[4826]: I0129 08:52:53.587323 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75de5ba1-6624-494e-bb35-022dc4e2b150-utilities" (OuterVolumeSpecName: "utilities") pod "75de5ba1-6624-494e-bb35-022dc4e2b150" (UID: "75de5ba1-6624-494e-bb35-022dc4e2b150"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:52:53 crc kubenswrapper[4826]: I0129 08:52:53.594049 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75de5ba1-6624-494e-bb35-022dc4e2b150-kube-api-access-6jnl2" (OuterVolumeSpecName: "kube-api-access-6jnl2") pod "75de5ba1-6624-494e-bb35-022dc4e2b150" (UID: "75de5ba1-6624-494e-bb35-022dc4e2b150"). InnerVolumeSpecName "kube-api-access-6jnl2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:52:53 crc kubenswrapper[4826]: I0129 08:52:53.647215 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75de5ba1-6624-494e-bb35-022dc4e2b150-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "75de5ba1-6624-494e-bb35-022dc4e2b150" (UID: "75de5ba1-6624-494e-bb35-022dc4e2b150"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:52:53 crc kubenswrapper[4826]: I0129 08:52:53.688517 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75de5ba1-6624-494e-bb35-022dc4e2b150-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:52:53 crc kubenswrapper[4826]: I0129 08:52:53.688836 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75de5ba1-6624-494e-bb35-022dc4e2b150-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:52:53 crc kubenswrapper[4826]: I0129 08:52:53.688846 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jnl2\" (UniqueName: \"kubernetes.io/projected/75de5ba1-6624-494e-bb35-022dc4e2b150-kube-api-access-6jnl2\") on node \"crc\" DevicePath \"\"" Jan 29 08:52:54 crc kubenswrapper[4826]: I0129 08:52:54.016919 4826 generic.go:334] "Generic (PLEG): container finished" podID="75de5ba1-6624-494e-bb35-022dc4e2b150" containerID="e7fe7f6e349f7daa0fc7243616f35b769114f579bec7931565c34141e756b9d5" exitCode=0 Jan 29 08:52:54 crc kubenswrapper[4826]: I0129 08:52:54.016967 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9cgrd" event={"ID":"75de5ba1-6624-494e-bb35-022dc4e2b150","Type":"ContainerDied","Data":"e7fe7f6e349f7daa0fc7243616f35b769114f579bec7931565c34141e756b9d5"} Jan 29 08:52:54 crc kubenswrapper[4826]: I0129 08:52:54.016981 4826 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-9cgrd" Jan 29 08:52:54 crc kubenswrapper[4826]: I0129 08:52:54.017006 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9cgrd" event={"ID":"75de5ba1-6624-494e-bb35-022dc4e2b150","Type":"ContainerDied","Data":"1052bd97e67ea1dc3aed39e5418e3b07ec06f66df3bba6c667eae944439d115a"} Jan 29 08:52:54 crc kubenswrapper[4826]: I0129 08:52:54.017023 4826 scope.go:117] "RemoveContainer" containerID="e7fe7f6e349f7daa0fc7243616f35b769114f579bec7931565c34141e756b9d5" Jan 29 08:52:54 crc kubenswrapper[4826]: I0129 08:52:54.039882 4826 scope.go:117] "RemoveContainer" containerID="38b04a7e07e7e0252bfeca37a7b7e7a41815683d5111312e7a15d84663e33015" Jan 29 08:52:54 crc kubenswrapper[4826]: I0129 08:52:54.062272 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9cgrd"] Jan 29 08:52:54 crc kubenswrapper[4826]: I0129 08:52:54.072782 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9cgrd"] Jan 29 08:52:54 crc kubenswrapper[4826]: I0129 08:52:54.076804 4826 scope.go:117] "RemoveContainer" containerID="45ca603a4843afbd64324f4c81c1b10c44144d5ff7319d88aa6c3e5b32cc8359" Jan 29 08:52:54 crc kubenswrapper[4826]: I0129 08:52:54.120781 4826 scope.go:117] "RemoveContainer" containerID="e7fe7f6e349f7daa0fc7243616f35b769114f579bec7931565c34141e756b9d5" Jan 29 08:52:54 crc kubenswrapper[4826]: E0129 08:52:54.121262 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7fe7f6e349f7daa0fc7243616f35b769114f579bec7931565c34141e756b9d5\": container with ID starting with e7fe7f6e349f7daa0fc7243616f35b769114f579bec7931565c34141e756b9d5 not found: ID does not exist" containerID="e7fe7f6e349f7daa0fc7243616f35b769114f579bec7931565c34141e756b9d5" Jan 29 08:52:54 crc kubenswrapper[4826]: I0129 08:52:54.121342 
4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7fe7f6e349f7daa0fc7243616f35b769114f579bec7931565c34141e756b9d5"} err="failed to get container status \"e7fe7f6e349f7daa0fc7243616f35b769114f579bec7931565c34141e756b9d5\": rpc error: code = NotFound desc = could not find container \"e7fe7f6e349f7daa0fc7243616f35b769114f579bec7931565c34141e756b9d5\": container with ID starting with e7fe7f6e349f7daa0fc7243616f35b769114f579bec7931565c34141e756b9d5 not found: ID does not exist" Jan 29 08:52:54 crc kubenswrapper[4826]: I0129 08:52:54.121381 4826 scope.go:117] "RemoveContainer" containerID="38b04a7e07e7e0252bfeca37a7b7e7a41815683d5111312e7a15d84663e33015" Jan 29 08:52:54 crc kubenswrapper[4826]: E0129 08:52:54.121734 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38b04a7e07e7e0252bfeca37a7b7e7a41815683d5111312e7a15d84663e33015\": container with ID starting with 38b04a7e07e7e0252bfeca37a7b7e7a41815683d5111312e7a15d84663e33015 not found: ID does not exist" containerID="38b04a7e07e7e0252bfeca37a7b7e7a41815683d5111312e7a15d84663e33015" Jan 29 08:52:54 crc kubenswrapper[4826]: I0129 08:52:54.121768 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38b04a7e07e7e0252bfeca37a7b7e7a41815683d5111312e7a15d84663e33015"} err="failed to get container status \"38b04a7e07e7e0252bfeca37a7b7e7a41815683d5111312e7a15d84663e33015\": rpc error: code = NotFound desc = could not find container \"38b04a7e07e7e0252bfeca37a7b7e7a41815683d5111312e7a15d84663e33015\": container with ID starting with 38b04a7e07e7e0252bfeca37a7b7e7a41815683d5111312e7a15d84663e33015 not found: ID does not exist" Jan 29 08:52:54 crc kubenswrapper[4826]: I0129 08:52:54.121791 4826 scope.go:117] "RemoveContainer" containerID="45ca603a4843afbd64324f4c81c1b10c44144d5ff7319d88aa6c3e5b32cc8359" Jan 29 08:52:54 crc kubenswrapper[4826]: E0129 
08:52:54.122055 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45ca603a4843afbd64324f4c81c1b10c44144d5ff7319d88aa6c3e5b32cc8359\": container with ID starting with 45ca603a4843afbd64324f4c81c1b10c44144d5ff7319d88aa6c3e5b32cc8359 not found: ID does not exist" containerID="45ca603a4843afbd64324f4c81c1b10c44144d5ff7319d88aa6c3e5b32cc8359" Jan 29 08:52:54 crc kubenswrapper[4826]: I0129 08:52:54.122101 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45ca603a4843afbd64324f4c81c1b10c44144d5ff7319d88aa6c3e5b32cc8359"} err="failed to get container status \"45ca603a4843afbd64324f4c81c1b10c44144d5ff7319d88aa6c3e5b32cc8359\": rpc error: code = NotFound desc = could not find container \"45ca603a4843afbd64324f4c81c1b10c44144d5ff7319d88aa6c3e5b32cc8359\": container with ID starting with 45ca603a4843afbd64324f4c81c1b10c44144d5ff7319d88aa6c3e5b32cc8359 not found: ID does not exist" Jan 29 08:52:54 crc kubenswrapper[4826]: I0129 08:52:54.819019 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75de5ba1-6624-494e-bb35-022dc4e2b150" path="/var/lib/kubelet/pods/75de5ba1-6624-494e-bb35-022dc4e2b150/volumes" Jan 29 08:53:05 crc kubenswrapper[4826]: I0129 08:53:05.656341 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:53:05 crc kubenswrapper[4826]: I0129 08:53:05.656869 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 29 08:53:35 crc kubenswrapper[4826]: I0129 08:53:35.657029 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:53:35 crc kubenswrapper[4826]: I0129 08:53:35.659009 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:53:35 crc kubenswrapper[4826]: I0129 08:53:35.659236 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" Jan 29 08:53:35 crc kubenswrapper[4826]: I0129 08:53:35.660851 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"666f0ccfe2ff5cac72efc625ba1fe3fdefa85172107688bdc1b03cf120de6c3d"} pod="openshift-machine-config-operator/machine-config-daemon-llzmh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 08:53:35 crc kubenswrapper[4826]: I0129 08:53:35.661143 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" containerID="cri-o://666f0ccfe2ff5cac72efc625ba1fe3fdefa85172107688bdc1b03cf120de6c3d" gracePeriod=600 Jan 29 08:53:36 crc kubenswrapper[4826]: I0129 08:53:36.434486 4826 generic.go:334] "Generic (PLEG): container finished" podID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" 
containerID="666f0ccfe2ff5cac72efc625ba1fe3fdefa85172107688bdc1b03cf120de6c3d" exitCode=0 Jan 29 08:53:36 crc kubenswrapper[4826]: I0129 08:53:36.434558 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerDied","Data":"666f0ccfe2ff5cac72efc625ba1fe3fdefa85172107688bdc1b03cf120de6c3d"} Jan 29 08:53:36 crc kubenswrapper[4826]: I0129 08:53:36.435029 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerStarted","Data":"687be0e4fab9ea6b1c277db6848c566541ae32496c85f97fc6f9e12e832a22fd"} Jan 29 08:53:36 crc kubenswrapper[4826]: I0129 08:53:36.435056 4826 scope.go:117] "RemoveContainer" containerID="a36305785ee669f844d11a90394473bfc22dd7ebbfc667cd04792ae324d02c88" Jan 29 08:53:56 crc kubenswrapper[4826]: I0129 08:53:56.410561 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8ltfd"] Jan 29 08:53:56 crc kubenswrapper[4826]: E0129 08:53:56.411702 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75de5ba1-6624-494e-bb35-022dc4e2b150" containerName="extract-utilities" Jan 29 08:53:56 crc kubenswrapper[4826]: I0129 08:53:56.411720 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="75de5ba1-6624-494e-bb35-022dc4e2b150" containerName="extract-utilities" Jan 29 08:53:56 crc kubenswrapper[4826]: E0129 08:53:56.411747 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75de5ba1-6624-494e-bb35-022dc4e2b150" containerName="registry-server" Jan 29 08:53:56 crc kubenswrapper[4826]: I0129 08:53:56.411754 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="75de5ba1-6624-494e-bb35-022dc4e2b150" containerName="registry-server" Jan 29 08:53:56 crc kubenswrapper[4826]: E0129 08:53:56.411789 4826 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="75de5ba1-6624-494e-bb35-022dc4e2b150" containerName="extract-content" Jan 29 08:53:56 crc kubenswrapper[4826]: I0129 08:53:56.411796 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="75de5ba1-6624-494e-bb35-022dc4e2b150" containerName="extract-content" Jan 29 08:53:56 crc kubenswrapper[4826]: I0129 08:53:56.411971 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="75de5ba1-6624-494e-bb35-022dc4e2b150" containerName="registry-server" Jan 29 08:53:56 crc kubenswrapper[4826]: I0129 08:53:56.413570 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8ltfd" Jan 29 08:53:56 crc kubenswrapper[4826]: I0129 08:53:56.424701 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8ltfd"] Jan 29 08:53:56 crc kubenswrapper[4826]: I0129 08:53:56.512580 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgjmj\" (UniqueName: \"kubernetes.io/projected/ac95b182-6149-4f14-b0fc-7be86bcc8640-kube-api-access-kgjmj\") pod \"redhat-marketplace-8ltfd\" (UID: \"ac95b182-6149-4f14-b0fc-7be86bcc8640\") " pod="openshift-marketplace/redhat-marketplace-8ltfd" Jan 29 08:53:56 crc kubenswrapper[4826]: I0129 08:53:56.512667 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac95b182-6149-4f14-b0fc-7be86bcc8640-utilities\") pod \"redhat-marketplace-8ltfd\" (UID: \"ac95b182-6149-4f14-b0fc-7be86bcc8640\") " pod="openshift-marketplace/redhat-marketplace-8ltfd" Jan 29 08:53:56 crc kubenswrapper[4826]: I0129 08:53:56.512763 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac95b182-6149-4f14-b0fc-7be86bcc8640-catalog-content\") pod 
\"redhat-marketplace-8ltfd\" (UID: \"ac95b182-6149-4f14-b0fc-7be86bcc8640\") " pod="openshift-marketplace/redhat-marketplace-8ltfd" Jan 29 08:53:56 crc kubenswrapper[4826]: I0129 08:53:56.615063 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac95b182-6149-4f14-b0fc-7be86bcc8640-utilities\") pod \"redhat-marketplace-8ltfd\" (UID: \"ac95b182-6149-4f14-b0fc-7be86bcc8640\") " pod="openshift-marketplace/redhat-marketplace-8ltfd" Jan 29 08:53:56 crc kubenswrapper[4826]: I0129 08:53:56.615201 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac95b182-6149-4f14-b0fc-7be86bcc8640-catalog-content\") pod \"redhat-marketplace-8ltfd\" (UID: \"ac95b182-6149-4f14-b0fc-7be86bcc8640\") " pod="openshift-marketplace/redhat-marketplace-8ltfd" Jan 29 08:53:56 crc kubenswrapper[4826]: I0129 08:53:56.615358 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgjmj\" (UniqueName: \"kubernetes.io/projected/ac95b182-6149-4f14-b0fc-7be86bcc8640-kube-api-access-kgjmj\") pod \"redhat-marketplace-8ltfd\" (UID: \"ac95b182-6149-4f14-b0fc-7be86bcc8640\") " pod="openshift-marketplace/redhat-marketplace-8ltfd" Jan 29 08:53:56 crc kubenswrapper[4826]: I0129 08:53:56.615636 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac95b182-6149-4f14-b0fc-7be86bcc8640-utilities\") pod \"redhat-marketplace-8ltfd\" (UID: \"ac95b182-6149-4f14-b0fc-7be86bcc8640\") " pod="openshift-marketplace/redhat-marketplace-8ltfd" Jan 29 08:53:56 crc kubenswrapper[4826]: I0129 08:53:56.615957 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac95b182-6149-4f14-b0fc-7be86bcc8640-catalog-content\") pod \"redhat-marketplace-8ltfd\" (UID: 
\"ac95b182-6149-4f14-b0fc-7be86bcc8640\") " pod="openshift-marketplace/redhat-marketplace-8ltfd" Jan 29 08:53:56 crc kubenswrapper[4826]: I0129 08:53:56.637897 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgjmj\" (UniqueName: \"kubernetes.io/projected/ac95b182-6149-4f14-b0fc-7be86bcc8640-kube-api-access-kgjmj\") pod \"redhat-marketplace-8ltfd\" (UID: \"ac95b182-6149-4f14-b0fc-7be86bcc8640\") " pod="openshift-marketplace/redhat-marketplace-8ltfd" Jan 29 08:53:56 crc kubenswrapper[4826]: I0129 08:53:56.743608 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8ltfd" Jan 29 08:53:57 crc kubenswrapper[4826]: W0129 08:53:57.214876 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac95b182_6149_4f14_b0fc_7be86bcc8640.slice/crio-62ba1d9b83e2faa69e22e973283775ff3c86e8e4ae9332cbd59c1e178a077563 WatchSource:0}: Error finding container 62ba1d9b83e2faa69e22e973283775ff3c86e8e4ae9332cbd59c1e178a077563: Status 404 returned error can't find the container with id 62ba1d9b83e2faa69e22e973283775ff3c86e8e4ae9332cbd59c1e178a077563 Jan 29 08:53:57 crc kubenswrapper[4826]: I0129 08:53:57.226068 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8ltfd"] Jan 29 08:53:57 crc kubenswrapper[4826]: I0129 08:53:57.643180 4826 generic.go:334] "Generic (PLEG): container finished" podID="ac95b182-6149-4f14-b0fc-7be86bcc8640" containerID="d07e1a5bf96584660f95dee229a1a52d239b2c1a46fe8a7833d05610e9fc0e46" exitCode=0 Jan 29 08:53:57 crc kubenswrapper[4826]: I0129 08:53:57.643247 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8ltfd" event={"ID":"ac95b182-6149-4f14-b0fc-7be86bcc8640","Type":"ContainerDied","Data":"d07e1a5bf96584660f95dee229a1a52d239b2c1a46fe8a7833d05610e9fc0e46"} Jan 29 08:53:57 crc 
kubenswrapper[4826]: I0129 08:53:57.643312 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8ltfd" event={"ID":"ac95b182-6149-4f14-b0fc-7be86bcc8640","Type":"ContainerStarted","Data":"62ba1d9b83e2faa69e22e973283775ff3c86e8e4ae9332cbd59c1e178a077563"} Jan 29 08:53:57 crc kubenswrapper[4826]: I0129 08:53:57.645089 4826 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 08:53:59 crc kubenswrapper[4826]: I0129 08:53:59.696823 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8ltfd" event={"ID":"ac95b182-6149-4f14-b0fc-7be86bcc8640","Type":"ContainerStarted","Data":"77d8d3a48f541bd47e108bbe25d48a5ddfe7bcdf3eacc95294cc892db3718fb0"} Jan 29 08:54:00 crc kubenswrapper[4826]: I0129 08:54:00.710422 4826 generic.go:334] "Generic (PLEG): container finished" podID="ac95b182-6149-4f14-b0fc-7be86bcc8640" containerID="77d8d3a48f541bd47e108bbe25d48a5ddfe7bcdf3eacc95294cc892db3718fb0" exitCode=0 Jan 29 08:54:00 crc kubenswrapper[4826]: I0129 08:54:00.710527 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8ltfd" event={"ID":"ac95b182-6149-4f14-b0fc-7be86bcc8640","Type":"ContainerDied","Data":"77d8d3a48f541bd47e108bbe25d48a5ddfe7bcdf3eacc95294cc892db3718fb0"} Jan 29 08:54:01 crc kubenswrapper[4826]: I0129 08:54:01.723584 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8ltfd" event={"ID":"ac95b182-6149-4f14-b0fc-7be86bcc8640","Type":"ContainerStarted","Data":"2e2902227b91eeb6abe63471e69ff80e3eb509c116f2f9b365ff02c93f11bc65"} Jan 29 08:54:01 crc kubenswrapper[4826]: I0129 08:54:01.749986 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8ltfd" podStartSLOduration=2.29726884 podStartE2EDuration="5.749968509s" podCreationTimestamp="2026-01-29 08:53:56 +0000 UTC" 
firstStartedPulling="2026-01-29 08:53:57.644849912 +0000 UTC m=+7821.506642981" lastFinishedPulling="2026-01-29 08:54:01.097549591 +0000 UTC m=+7824.959342650" observedRunningTime="2026-01-29 08:54:01.746326252 +0000 UTC m=+7825.608119331" watchObservedRunningTime="2026-01-29 08:54:01.749968509 +0000 UTC m=+7825.611761588" Jan 29 08:54:06 crc kubenswrapper[4826]: I0129 08:54:06.744020 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8ltfd" Jan 29 08:54:06 crc kubenswrapper[4826]: I0129 08:54:06.744569 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8ltfd" Jan 29 08:54:06 crc kubenswrapper[4826]: I0129 08:54:06.822104 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8ltfd" Jan 29 08:54:06 crc kubenswrapper[4826]: I0129 08:54:06.870757 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8ltfd" Jan 29 08:54:07 crc kubenswrapper[4826]: I0129 08:54:07.085935 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8ltfd"] Jan 29 08:54:08 crc kubenswrapper[4826]: I0129 08:54:08.788931 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8ltfd" podUID="ac95b182-6149-4f14-b0fc-7be86bcc8640" containerName="registry-server" containerID="cri-o://2e2902227b91eeb6abe63471e69ff80e3eb509c116f2f9b365ff02c93f11bc65" gracePeriod=2 Jan 29 08:54:09 crc kubenswrapper[4826]: I0129 08:54:09.276439 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8ltfd" Jan 29 08:54:09 crc kubenswrapper[4826]: I0129 08:54:09.380955 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac95b182-6149-4f14-b0fc-7be86bcc8640-catalog-content\") pod \"ac95b182-6149-4f14-b0fc-7be86bcc8640\" (UID: \"ac95b182-6149-4f14-b0fc-7be86bcc8640\") " Jan 29 08:54:09 crc kubenswrapper[4826]: I0129 08:54:09.381253 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgjmj\" (UniqueName: \"kubernetes.io/projected/ac95b182-6149-4f14-b0fc-7be86bcc8640-kube-api-access-kgjmj\") pod \"ac95b182-6149-4f14-b0fc-7be86bcc8640\" (UID: \"ac95b182-6149-4f14-b0fc-7be86bcc8640\") " Jan 29 08:54:09 crc kubenswrapper[4826]: I0129 08:54:09.381566 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac95b182-6149-4f14-b0fc-7be86bcc8640-utilities\") pod \"ac95b182-6149-4f14-b0fc-7be86bcc8640\" (UID: \"ac95b182-6149-4f14-b0fc-7be86bcc8640\") " Jan 29 08:54:09 crc kubenswrapper[4826]: I0129 08:54:09.382367 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac95b182-6149-4f14-b0fc-7be86bcc8640-utilities" (OuterVolumeSpecName: "utilities") pod "ac95b182-6149-4f14-b0fc-7be86bcc8640" (UID: "ac95b182-6149-4f14-b0fc-7be86bcc8640"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:54:09 crc kubenswrapper[4826]: I0129 08:54:09.397709 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac95b182-6149-4f14-b0fc-7be86bcc8640-kube-api-access-kgjmj" (OuterVolumeSpecName: "kube-api-access-kgjmj") pod "ac95b182-6149-4f14-b0fc-7be86bcc8640" (UID: "ac95b182-6149-4f14-b0fc-7be86bcc8640"). InnerVolumeSpecName "kube-api-access-kgjmj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:54:09 crc kubenswrapper[4826]: I0129 08:54:09.409081 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac95b182-6149-4f14-b0fc-7be86bcc8640-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac95b182-6149-4f14-b0fc-7be86bcc8640" (UID: "ac95b182-6149-4f14-b0fc-7be86bcc8640"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:54:09 crc kubenswrapper[4826]: I0129 08:54:09.484196 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac95b182-6149-4f14-b0fc-7be86bcc8640-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:54:09 crc kubenswrapper[4826]: I0129 08:54:09.484227 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac95b182-6149-4f14-b0fc-7be86bcc8640-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:54:09 crc kubenswrapper[4826]: I0129 08:54:09.484240 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgjmj\" (UniqueName: \"kubernetes.io/projected/ac95b182-6149-4f14-b0fc-7be86bcc8640-kube-api-access-kgjmj\") on node \"crc\" DevicePath \"\"" Jan 29 08:54:09 crc kubenswrapper[4826]: I0129 08:54:09.799874 4826 generic.go:334] "Generic (PLEG): container finished" podID="ac95b182-6149-4f14-b0fc-7be86bcc8640" containerID="2e2902227b91eeb6abe63471e69ff80e3eb509c116f2f9b365ff02c93f11bc65" exitCode=0 Jan 29 08:54:09 crc kubenswrapper[4826]: I0129 08:54:09.799946 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8ltfd" event={"ID":"ac95b182-6149-4f14-b0fc-7be86bcc8640","Type":"ContainerDied","Data":"2e2902227b91eeb6abe63471e69ff80e3eb509c116f2f9b365ff02c93f11bc65"} Jan 29 08:54:09 crc kubenswrapper[4826]: I0129 08:54:09.799976 4826 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-8ltfd" event={"ID":"ac95b182-6149-4f14-b0fc-7be86bcc8640","Type":"ContainerDied","Data":"62ba1d9b83e2faa69e22e973283775ff3c86e8e4ae9332cbd59c1e178a077563"} Jan 29 08:54:09 crc kubenswrapper[4826]: I0129 08:54:09.800020 4826 scope.go:117] "RemoveContainer" containerID="2e2902227b91eeb6abe63471e69ff80e3eb509c116f2f9b365ff02c93f11bc65" Jan 29 08:54:09 crc kubenswrapper[4826]: I0129 08:54:09.800485 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8ltfd" Jan 29 08:54:09 crc kubenswrapper[4826]: I0129 08:54:09.838582 4826 scope.go:117] "RemoveContainer" containerID="77d8d3a48f541bd47e108bbe25d48a5ddfe7bcdf3eacc95294cc892db3718fb0" Jan 29 08:54:09 crc kubenswrapper[4826]: I0129 08:54:09.841428 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8ltfd"] Jan 29 08:54:09 crc kubenswrapper[4826]: I0129 08:54:09.853344 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8ltfd"] Jan 29 08:54:09 crc kubenswrapper[4826]: I0129 08:54:09.864515 4826 scope.go:117] "RemoveContainer" containerID="d07e1a5bf96584660f95dee229a1a52d239b2c1a46fe8a7833d05610e9fc0e46" Jan 29 08:54:09 crc kubenswrapper[4826]: I0129 08:54:09.914980 4826 scope.go:117] "RemoveContainer" containerID="2e2902227b91eeb6abe63471e69ff80e3eb509c116f2f9b365ff02c93f11bc65" Jan 29 08:54:09 crc kubenswrapper[4826]: E0129 08:54:09.915973 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e2902227b91eeb6abe63471e69ff80e3eb509c116f2f9b365ff02c93f11bc65\": container with ID starting with 2e2902227b91eeb6abe63471e69ff80e3eb509c116f2f9b365ff02c93f11bc65 not found: ID does not exist" containerID="2e2902227b91eeb6abe63471e69ff80e3eb509c116f2f9b365ff02c93f11bc65" Jan 29 08:54:09 crc kubenswrapper[4826]: I0129 08:54:09.916029 4826 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e2902227b91eeb6abe63471e69ff80e3eb509c116f2f9b365ff02c93f11bc65"} err="failed to get container status \"2e2902227b91eeb6abe63471e69ff80e3eb509c116f2f9b365ff02c93f11bc65\": rpc error: code = NotFound desc = could not find container \"2e2902227b91eeb6abe63471e69ff80e3eb509c116f2f9b365ff02c93f11bc65\": container with ID starting with 2e2902227b91eeb6abe63471e69ff80e3eb509c116f2f9b365ff02c93f11bc65 not found: ID does not exist" Jan 29 08:54:09 crc kubenswrapper[4826]: I0129 08:54:09.916062 4826 scope.go:117] "RemoveContainer" containerID="77d8d3a48f541bd47e108bbe25d48a5ddfe7bcdf3eacc95294cc892db3718fb0" Jan 29 08:54:09 crc kubenswrapper[4826]: E0129 08:54:09.916618 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77d8d3a48f541bd47e108bbe25d48a5ddfe7bcdf3eacc95294cc892db3718fb0\": container with ID starting with 77d8d3a48f541bd47e108bbe25d48a5ddfe7bcdf3eacc95294cc892db3718fb0 not found: ID does not exist" containerID="77d8d3a48f541bd47e108bbe25d48a5ddfe7bcdf3eacc95294cc892db3718fb0" Jan 29 08:54:09 crc kubenswrapper[4826]: I0129 08:54:09.916674 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77d8d3a48f541bd47e108bbe25d48a5ddfe7bcdf3eacc95294cc892db3718fb0"} err="failed to get container status \"77d8d3a48f541bd47e108bbe25d48a5ddfe7bcdf3eacc95294cc892db3718fb0\": rpc error: code = NotFound desc = could not find container \"77d8d3a48f541bd47e108bbe25d48a5ddfe7bcdf3eacc95294cc892db3718fb0\": container with ID starting with 77d8d3a48f541bd47e108bbe25d48a5ddfe7bcdf3eacc95294cc892db3718fb0 not found: ID does not exist" Jan 29 08:54:09 crc kubenswrapper[4826]: I0129 08:54:09.916711 4826 scope.go:117] "RemoveContainer" containerID="d07e1a5bf96584660f95dee229a1a52d239b2c1a46fe8a7833d05610e9fc0e46" Jan 29 08:54:09 crc kubenswrapper[4826]: E0129 
08:54:09.917056 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d07e1a5bf96584660f95dee229a1a52d239b2c1a46fe8a7833d05610e9fc0e46\": container with ID starting with d07e1a5bf96584660f95dee229a1a52d239b2c1a46fe8a7833d05610e9fc0e46 not found: ID does not exist" containerID="d07e1a5bf96584660f95dee229a1a52d239b2c1a46fe8a7833d05610e9fc0e46" Jan 29 08:54:09 crc kubenswrapper[4826]: I0129 08:54:09.917088 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d07e1a5bf96584660f95dee229a1a52d239b2c1a46fe8a7833d05610e9fc0e46"} err="failed to get container status \"d07e1a5bf96584660f95dee229a1a52d239b2c1a46fe8a7833d05610e9fc0e46\": rpc error: code = NotFound desc = could not find container \"d07e1a5bf96584660f95dee229a1a52d239b2c1a46fe8a7833d05610e9fc0e46\": container with ID starting with d07e1a5bf96584660f95dee229a1a52d239b2c1a46fe8a7833d05610e9fc0e46 not found: ID does not exist" Jan 29 08:54:10 crc kubenswrapper[4826]: I0129 08:54:10.821455 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac95b182-6149-4f14-b0fc-7be86bcc8640" path="/var/lib/kubelet/pods/ac95b182-6149-4f14-b0fc-7be86bcc8640/volumes" Jan 29 08:55:04 crc kubenswrapper[4826]: I0129 08:55:04.337205 4826 generic.go:334] "Generic (PLEG): container finished" podID="df8a8f37-849b-4d34-8527-edc1fe6aa082" containerID="7fa2baa237ce66585a1f1dc0b45e128b09a086acbfb58ddee7eaa7cf2c9cfbaf" exitCode=0 Jan 29 08:55:04 crc kubenswrapper[4826]: I0129 08:55:04.337554 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-mclk4" event={"ID":"df8a8f37-849b-4d34-8527-edc1fe6aa082","Type":"ContainerDied","Data":"7fa2baa237ce66585a1f1dc0b45e128b09a086acbfb58ddee7eaa7cf2c9cfbaf"} Jan 29 08:55:05 crc kubenswrapper[4826]: I0129 08:55:05.800434 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-mclk4" Jan 29 08:55:05 crc kubenswrapper[4826]: I0129 08:55:05.936213 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-nova-cell1-combined-ca-bundle\") pod \"df8a8f37-849b-4d34-8527-edc1fe6aa082\" (UID: \"df8a8f37-849b-4d34-8527-edc1fe6aa082\") " Jan 29 08:55:05 crc kubenswrapper[4826]: I0129 08:55:05.936569 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-nova-cell1-compute-config-0\") pod \"df8a8f37-849b-4d34-8527-edc1fe6aa082\" (UID: \"df8a8f37-849b-4d34-8527-edc1fe6aa082\") " Jan 29 08:55:05 crc kubenswrapper[4826]: I0129 08:55:05.936729 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-nova-migration-ssh-key-1\") pod \"df8a8f37-849b-4d34-8527-edc1fe6aa082\" (UID: \"df8a8f37-849b-4d34-8527-edc1fe6aa082\") " Jan 29 08:55:05 crc kubenswrapper[4826]: I0129 08:55:05.937465 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-nova-cell1-compute-config-1\") pod \"df8a8f37-849b-4d34-8527-edc1fe6aa082\" (UID: \"df8a8f37-849b-4d34-8527-edc1fe6aa082\") " Jan 29 08:55:05 crc kubenswrapper[4826]: I0129 08:55:05.937531 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-ssh-key-openstack-cell1\") pod \"df8a8f37-849b-4d34-8527-edc1fe6aa082\" (UID: \"df8a8f37-849b-4d34-8527-edc1fe6aa082\") " Jan 29 08:55:05 crc kubenswrapper[4826]: 
I0129 08:55:05.937574 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/df8a8f37-849b-4d34-8527-edc1fe6aa082-nova-cells-global-config-0\") pod \"df8a8f37-849b-4d34-8527-edc1fe6aa082\" (UID: \"df8a8f37-849b-4d34-8527-edc1fe6aa082\") " Jan 29 08:55:05 crc kubenswrapper[4826]: I0129 08:55:05.937745 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btqkd\" (UniqueName: \"kubernetes.io/projected/df8a8f37-849b-4d34-8527-edc1fe6aa082-kube-api-access-btqkd\") pod \"df8a8f37-849b-4d34-8527-edc1fe6aa082\" (UID: \"df8a8f37-849b-4d34-8527-edc1fe6aa082\") " Jan 29 08:55:05 crc kubenswrapper[4826]: I0129 08:55:05.937770 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-nova-migration-ssh-key-0\") pod \"df8a8f37-849b-4d34-8527-edc1fe6aa082\" (UID: \"df8a8f37-849b-4d34-8527-edc1fe6aa082\") " Jan 29 08:55:05 crc kubenswrapper[4826]: I0129 08:55:05.937824 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-inventory\") pod \"df8a8f37-849b-4d34-8527-edc1fe6aa082\" (UID: \"df8a8f37-849b-4d34-8527-edc1fe6aa082\") " Jan 29 08:55:05 crc kubenswrapper[4826]: I0129 08:55:05.943098 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df8a8f37-849b-4d34-8527-edc1fe6aa082-kube-api-access-btqkd" (OuterVolumeSpecName: "kube-api-access-btqkd") pod "df8a8f37-849b-4d34-8527-edc1fe6aa082" (UID: "df8a8f37-849b-4d34-8527-edc1fe6aa082"). InnerVolumeSpecName "kube-api-access-btqkd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:55:05 crc kubenswrapper[4826]: I0129 08:55:05.943097 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "df8a8f37-849b-4d34-8527-edc1fe6aa082" (UID: "df8a8f37-849b-4d34-8527-edc1fe6aa082"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:55:05 crc kubenswrapper[4826]: I0129 08:55:05.969446 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-inventory" (OuterVolumeSpecName: "inventory") pod "df8a8f37-849b-4d34-8527-edc1fe6aa082" (UID: "df8a8f37-849b-4d34-8527-edc1fe6aa082"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:55:05 crc kubenswrapper[4826]: I0129 08:55:05.970418 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df8a8f37-849b-4d34-8527-edc1fe6aa082-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "df8a8f37-849b-4d34-8527-edc1fe6aa082" (UID: "df8a8f37-849b-4d34-8527-edc1fe6aa082"). InnerVolumeSpecName "nova-cells-global-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 08:55:05 crc kubenswrapper[4826]: I0129 08:55:05.971574 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "df8a8f37-849b-4d34-8527-edc1fe6aa082" (UID: "df8a8f37-849b-4d34-8527-edc1fe6aa082"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:55:05 crc kubenswrapper[4826]: I0129 08:55:05.972847 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "df8a8f37-849b-4d34-8527-edc1fe6aa082" (UID: "df8a8f37-849b-4d34-8527-edc1fe6aa082"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:55:05 crc kubenswrapper[4826]: I0129 08:55:05.980690 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "df8a8f37-849b-4d34-8527-edc1fe6aa082" (UID: "df8a8f37-849b-4d34-8527-edc1fe6aa082"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:55:05 crc kubenswrapper[4826]: I0129 08:55:05.992793 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "df8a8f37-849b-4d34-8527-edc1fe6aa082" (UID: "df8a8f37-849b-4d34-8527-edc1fe6aa082"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:55:05 crc kubenswrapper[4826]: I0129 08:55:05.996497 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "df8a8f37-849b-4d34-8527-edc1fe6aa082" (UID: "df8a8f37-849b-4d34-8527-edc1fe6aa082"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.042329 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btqkd\" (UniqueName: \"kubernetes.io/projected/df8a8f37-849b-4d34-8527-edc1fe6aa082-kube-api-access-btqkd\") on node \"crc\" DevicePath \"\"" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.044558 4826 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.044790 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.045084 4826 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.045281 4826 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.045466 4826 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.045636 4826 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.045825 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/df8a8f37-849b-4d34-8527-edc1fe6aa082-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.046013 4826 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/df8a8f37-849b-4d34-8527-edc1fe6aa082-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.357728 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-mclk4" event={"ID":"df8a8f37-849b-4d34-8527-edc1fe6aa082","Type":"ContainerDied","Data":"786b230c2fa3231c5074b94361231e85b0be8a01f842cee2edae68834a5ed3fc"} Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.358041 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="786b230c2fa3231c5074b94361231e85b0be8a01f842cee2edae68834a5ed3fc" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.357786 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-mclk4" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.470039 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-fh64b"] Jan 29 08:55:06 crc kubenswrapper[4826]: E0129 08:55:06.470748 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac95b182-6149-4f14-b0fc-7be86bcc8640" containerName="extract-content" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.470774 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac95b182-6149-4f14-b0fc-7be86bcc8640" containerName="extract-content" Jan 29 08:55:06 crc kubenswrapper[4826]: E0129 08:55:06.470813 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac95b182-6149-4f14-b0fc-7be86bcc8640" containerName="extract-utilities" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.470821 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac95b182-6149-4f14-b0fc-7be86bcc8640" containerName="extract-utilities" Jan 29 08:55:06 crc kubenswrapper[4826]: E0129 08:55:06.470833 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df8a8f37-849b-4d34-8527-edc1fe6aa082" containerName="nova-cell1-openstack-openstack-cell1" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.470839 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="df8a8f37-849b-4d34-8527-edc1fe6aa082" containerName="nova-cell1-openstack-openstack-cell1" Jan 29 08:55:06 crc kubenswrapper[4826]: E0129 08:55:06.470851 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac95b182-6149-4f14-b0fc-7be86bcc8640" containerName="registry-server" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.470856 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac95b182-6149-4f14-b0fc-7be86bcc8640" containerName="registry-server" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.471062 4826 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="df8a8f37-849b-4d34-8527-edc1fe6aa082" containerName="nova-cell1-openstack-openstack-cell1" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.471082 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac95b182-6149-4f14-b0fc-7be86bcc8640" containerName="registry-server" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.471839 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-fh64b" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.474236 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.474549 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.474623 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.474709 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.476008 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-bz2p6" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.490674 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-fh64b"] Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.658147 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/b73120ed-4019-4aed-883b-3fd780326a7e-ssh-key-openstack-cell1\") pod \"telemetry-openstack-openstack-cell1-fh64b\" (UID: \"b73120ed-4019-4aed-883b-3fd780326a7e\") " 
pod="openstack/telemetry-openstack-openstack-cell1-fh64b" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.658236 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b73120ed-4019-4aed-883b-3fd780326a7e-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-fh64b\" (UID: \"b73120ed-4019-4aed-883b-3fd780326a7e\") " pod="openstack/telemetry-openstack-openstack-cell1-fh64b" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.658272 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f67v\" (UniqueName: \"kubernetes.io/projected/b73120ed-4019-4aed-883b-3fd780326a7e-kube-api-access-6f67v\") pod \"telemetry-openstack-openstack-cell1-fh64b\" (UID: \"b73120ed-4019-4aed-883b-3fd780326a7e\") " pod="openstack/telemetry-openstack-openstack-cell1-fh64b" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.658291 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b73120ed-4019-4aed-883b-3fd780326a7e-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-fh64b\" (UID: \"b73120ed-4019-4aed-883b-3fd780326a7e\") " pod="openstack/telemetry-openstack-openstack-cell1-fh64b" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.658466 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b73120ed-4019-4aed-883b-3fd780326a7e-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-fh64b\" (UID: \"b73120ed-4019-4aed-883b-3fd780326a7e\") " pod="openstack/telemetry-openstack-openstack-cell1-fh64b" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.658515 4826 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b73120ed-4019-4aed-883b-3fd780326a7e-inventory\") pod \"telemetry-openstack-openstack-cell1-fh64b\" (UID: \"b73120ed-4019-4aed-883b-3fd780326a7e\") " pod="openstack/telemetry-openstack-openstack-cell1-fh64b" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.658548 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b73120ed-4019-4aed-883b-3fd780326a7e-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-fh64b\" (UID: \"b73120ed-4019-4aed-883b-3fd780326a7e\") " pod="openstack/telemetry-openstack-openstack-cell1-fh64b" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.760679 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b73120ed-4019-4aed-883b-3fd780326a7e-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-fh64b\" (UID: \"b73120ed-4019-4aed-883b-3fd780326a7e\") " pod="openstack/telemetry-openstack-openstack-cell1-fh64b" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.761477 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b73120ed-4019-4aed-883b-3fd780326a7e-inventory\") pod \"telemetry-openstack-openstack-cell1-fh64b\" (UID: \"b73120ed-4019-4aed-883b-3fd780326a7e\") " pod="openstack/telemetry-openstack-openstack-cell1-fh64b" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.761548 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b73120ed-4019-4aed-883b-3fd780326a7e-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-fh64b\" (UID: 
\"b73120ed-4019-4aed-883b-3fd780326a7e\") " pod="openstack/telemetry-openstack-openstack-cell1-fh64b" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.761656 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/b73120ed-4019-4aed-883b-3fd780326a7e-ssh-key-openstack-cell1\") pod \"telemetry-openstack-openstack-cell1-fh64b\" (UID: \"b73120ed-4019-4aed-883b-3fd780326a7e\") " pod="openstack/telemetry-openstack-openstack-cell1-fh64b" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.761738 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b73120ed-4019-4aed-883b-3fd780326a7e-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-fh64b\" (UID: \"b73120ed-4019-4aed-883b-3fd780326a7e\") " pod="openstack/telemetry-openstack-openstack-cell1-fh64b" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.761776 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f67v\" (UniqueName: \"kubernetes.io/projected/b73120ed-4019-4aed-883b-3fd780326a7e-kube-api-access-6f67v\") pod \"telemetry-openstack-openstack-cell1-fh64b\" (UID: \"b73120ed-4019-4aed-883b-3fd780326a7e\") " pod="openstack/telemetry-openstack-openstack-cell1-fh64b" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.761804 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b73120ed-4019-4aed-883b-3fd780326a7e-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-fh64b\" (UID: \"b73120ed-4019-4aed-883b-3fd780326a7e\") " pod="openstack/telemetry-openstack-openstack-cell1-fh64b" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.766591 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b73120ed-4019-4aed-883b-3fd780326a7e-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-fh64b\" (UID: \"b73120ed-4019-4aed-883b-3fd780326a7e\") " pod="openstack/telemetry-openstack-openstack-cell1-fh64b" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.766624 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b73120ed-4019-4aed-883b-3fd780326a7e-inventory\") pod \"telemetry-openstack-openstack-cell1-fh64b\" (UID: \"b73120ed-4019-4aed-883b-3fd780326a7e\") " pod="openstack/telemetry-openstack-openstack-cell1-fh64b" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.766833 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b73120ed-4019-4aed-883b-3fd780326a7e-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-fh64b\" (UID: \"b73120ed-4019-4aed-883b-3fd780326a7e\") " pod="openstack/telemetry-openstack-openstack-cell1-fh64b" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.767653 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/b73120ed-4019-4aed-883b-3fd780326a7e-ssh-key-openstack-cell1\") pod \"telemetry-openstack-openstack-cell1-fh64b\" (UID: \"b73120ed-4019-4aed-883b-3fd780326a7e\") " pod="openstack/telemetry-openstack-openstack-cell1-fh64b" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.767791 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b73120ed-4019-4aed-883b-3fd780326a7e-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-fh64b\" (UID: \"b73120ed-4019-4aed-883b-3fd780326a7e\") " pod="openstack/telemetry-openstack-openstack-cell1-fh64b" 
Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.772751 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b73120ed-4019-4aed-883b-3fd780326a7e-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-fh64b\" (UID: \"b73120ed-4019-4aed-883b-3fd780326a7e\") " pod="openstack/telemetry-openstack-openstack-cell1-fh64b" Jan 29 08:55:06 crc kubenswrapper[4826]: I0129 08:55:06.790809 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f67v\" (UniqueName: \"kubernetes.io/projected/b73120ed-4019-4aed-883b-3fd780326a7e-kube-api-access-6f67v\") pod \"telemetry-openstack-openstack-cell1-fh64b\" (UID: \"b73120ed-4019-4aed-883b-3fd780326a7e\") " pod="openstack/telemetry-openstack-openstack-cell1-fh64b" Jan 29 08:55:07 crc kubenswrapper[4826]: I0129 08:55:07.092237 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-fh64b" Jan 29 08:55:07 crc kubenswrapper[4826]: I0129 08:55:07.690779 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-fh64b"] Jan 29 08:55:08 crc kubenswrapper[4826]: I0129 08:55:08.382884 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-fh64b" event={"ID":"b73120ed-4019-4aed-883b-3fd780326a7e","Type":"ContainerStarted","Data":"71be51b1db125f800b72d063460bd00844e41be0b2ca9e753d1a269db38a3d9b"} Jan 29 08:55:09 crc kubenswrapper[4826]: I0129 08:55:09.398710 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-fh64b" event={"ID":"b73120ed-4019-4aed-883b-3fd780326a7e","Type":"ContainerStarted","Data":"f27d63447ca06d52be4a80192d2e4bfa13dba2df507bf50599c01770d1b19ba5"} Jan 29 08:55:09 crc kubenswrapper[4826]: I0129 08:55:09.428904 4826 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/telemetry-openstack-openstack-cell1-fh64b" podStartSLOduration=2.795894787 podStartE2EDuration="3.428880308s" podCreationTimestamp="2026-01-29 08:55:06 +0000 UTC" firstStartedPulling="2026-01-29 08:55:07.699553819 +0000 UTC m=+7891.561346888" lastFinishedPulling="2026-01-29 08:55:08.33253934 +0000 UTC m=+7892.194332409" observedRunningTime="2026-01-29 08:55:09.42224432 +0000 UTC m=+7893.284037399" watchObservedRunningTime="2026-01-29 08:55:09.428880308 +0000 UTC m=+7893.290673377" Jan 29 08:55:35 crc kubenswrapper[4826]: I0129 08:55:35.656264 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:55:35 crc kubenswrapper[4826]: I0129 08:55:35.656854 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:56:05 crc kubenswrapper[4826]: I0129 08:56:05.656667 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:56:05 crc kubenswrapper[4826]: I0129 08:56:05.657275 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Jan 29 08:56:35 crc kubenswrapper[4826]: I0129 08:56:35.656083 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 08:56:35 crc kubenswrapper[4826]: I0129 08:56:35.656643 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 08:56:35 crc kubenswrapper[4826]: I0129 08:56:35.656693 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" Jan 29 08:56:35 crc kubenswrapper[4826]: I0129 08:56:35.657525 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"687be0e4fab9ea6b1c277db6848c566541ae32496c85f97fc6f9e12e832a22fd"} pod="openshift-machine-config-operator/machine-config-daemon-llzmh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 08:56:35 crc kubenswrapper[4826]: I0129 08:56:35.657581 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" containerID="cri-o://687be0e4fab9ea6b1c277db6848c566541ae32496c85f97fc6f9e12e832a22fd" gracePeriod=600 Jan 29 08:56:35 crc kubenswrapper[4826]: E0129 08:56:35.783033 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:56:36 crc kubenswrapper[4826]: I0129 08:56:36.262759 4826 generic.go:334] "Generic (PLEG): container finished" podID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerID="687be0e4fab9ea6b1c277db6848c566541ae32496c85f97fc6f9e12e832a22fd" exitCode=0 Jan 29 08:56:36 crc kubenswrapper[4826]: I0129 08:56:36.262835 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerDied","Data":"687be0e4fab9ea6b1c277db6848c566541ae32496c85f97fc6f9e12e832a22fd"} Jan 29 08:56:36 crc kubenswrapper[4826]: I0129 08:56:36.262939 4826 scope.go:117] "RemoveContainer" containerID="666f0ccfe2ff5cac72efc625ba1fe3fdefa85172107688bdc1b03cf120de6c3d" Jan 29 08:56:36 crc kubenswrapper[4826]: I0129 08:56:36.264007 4826 scope.go:117] "RemoveContainer" containerID="687be0e4fab9ea6b1c277db6848c566541ae32496c85f97fc6f9e12e832a22fd" Jan 29 08:56:36 crc kubenswrapper[4826]: E0129 08:56:36.264636 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:56:48 crc kubenswrapper[4826]: I0129 08:56:48.808847 4826 scope.go:117] "RemoveContainer" containerID="687be0e4fab9ea6b1c277db6848c566541ae32496c85f97fc6f9e12e832a22fd" Jan 29 08:56:48 crc kubenswrapper[4826]: E0129 08:56:48.809740 4826 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:57:02 crc kubenswrapper[4826]: I0129 08:57:02.809455 4826 scope.go:117] "RemoveContainer" containerID="687be0e4fab9ea6b1c277db6848c566541ae32496c85f97fc6f9e12e832a22fd" Jan 29 08:57:02 crc kubenswrapper[4826]: E0129 08:57:02.810400 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:57:13 crc kubenswrapper[4826]: I0129 08:57:13.809329 4826 scope.go:117] "RemoveContainer" containerID="687be0e4fab9ea6b1c277db6848c566541ae32496c85f97fc6f9e12e832a22fd" Jan 29 08:57:13 crc kubenswrapper[4826]: E0129 08:57:13.810099 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:57:28 crc kubenswrapper[4826]: I0129 08:57:28.808657 4826 scope.go:117] "RemoveContainer" containerID="687be0e4fab9ea6b1c277db6848c566541ae32496c85f97fc6f9e12e832a22fd" Jan 29 08:57:28 crc kubenswrapper[4826]: E0129 
08:57:28.809540 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:57:42 crc kubenswrapper[4826]: I0129 08:57:42.808950 4826 scope.go:117] "RemoveContainer" containerID="687be0e4fab9ea6b1c277db6848c566541ae32496c85f97fc6f9e12e832a22fd" Jan 29 08:57:42 crc kubenswrapper[4826]: E0129 08:57:42.810643 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:57:56 crc kubenswrapper[4826]: I0129 08:57:56.814602 4826 scope.go:117] "RemoveContainer" containerID="687be0e4fab9ea6b1c277db6848c566541ae32496c85f97fc6f9e12e832a22fd" Jan 29 08:57:56 crc kubenswrapper[4826]: E0129 08:57:56.816515 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:58:05 crc kubenswrapper[4826]: I0129 08:58:05.435377 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d87p6"] Jan 29 08:58:05 crc kubenswrapper[4826]: 
I0129 08:58:05.441115 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d87p6" Jan 29 08:58:05 crc kubenswrapper[4826]: I0129 08:58:05.455624 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d87p6"] Jan 29 08:58:05 crc kubenswrapper[4826]: I0129 08:58:05.576754 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68cb7f47-570e-4098-9291-43b71a28ec39-utilities\") pod \"redhat-operators-d87p6\" (UID: \"68cb7f47-570e-4098-9291-43b71a28ec39\") " pod="openshift-marketplace/redhat-operators-d87p6" Jan 29 08:58:05 crc kubenswrapper[4826]: I0129 08:58:05.577197 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2ps6\" (UniqueName: \"kubernetes.io/projected/68cb7f47-570e-4098-9291-43b71a28ec39-kube-api-access-m2ps6\") pod \"redhat-operators-d87p6\" (UID: \"68cb7f47-570e-4098-9291-43b71a28ec39\") " pod="openshift-marketplace/redhat-operators-d87p6" Jan 29 08:58:05 crc kubenswrapper[4826]: I0129 08:58:05.577257 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68cb7f47-570e-4098-9291-43b71a28ec39-catalog-content\") pod \"redhat-operators-d87p6\" (UID: \"68cb7f47-570e-4098-9291-43b71a28ec39\") " pod="openshift-marketplace/redhat-operators-d87p6" Jan 29 08:58:05 crc kubenswrapper[4826]: I0129 08:58:05.679664 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2ps6\" (UniqueName: \"kubernetes.io/projected/68cb7f47-570e-4098-9291-43b71a28ec39-kube-api-access-m2ps6\") pod \"redhat-operators-d87p6\" (UID: \"68cb7f47-570e-4098-9291-43b71a28ec39\") " pod="openshift-marketplace/redhat-operators-d87p6" Jan 29 08:58:05 crc kubenswrapper[4826]: I0129 
08:58:05.679766 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68cb7f47-570e-4098-9291-43b71a28ec39-catalog-content\") pod \"redhat-operators-d87p6\" (UID: \"68cb7f47-570e-4098-9291-43b71a28ec39\") " pod="openshift-marketplace/redhat-operators-d87p6" Jan 29 08:58:05 crc kubenswrapper[4826]: I0129 08:58:05.679923 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68cb7f47-570e-4098-9291-43b71a28ec39-utilities\") pod \"redhat-operators-d87p6\" (UID: \"68cb7f47-570e-4098-9291-43b71a28ec39\") " pod="openshift-marketplace/redhat-operators-d87p6" Jan 29 08:58:05 crc kubenswrapper[4826]: I0129 08:58:05.680566 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68cb7f47-570e-4098-9291-43b71a28ec39-utilities\") pod \"redhat-operators-d87p6\" (UID: \"68cb7f47-570e-4098-9291-43b71a28ec39\") " pod="openshift-marketplace/redhat-operators-d87p6" Jan 29 08:58:05 crc kubenswrapper[4826]: I0129 08:58:05.680585 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68cb7f47-570e-4098-9291-43b71a28ec39-catalog-content\") pod \"redhat-operators-d87p6\" (UID: \"68cb7f47-570e-4098-9291-43b71a28ec39\") " pod="openshift-marketplace/redhat-operators-d87p6" Jan 29 08:58:05 crc kubenswrapper[4826]: I0129 08:58:05.707643 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2ps6\" (UniqueName: \"kubernetes.io/projected/68cb7f47-570e-4098-9291-43b71a28ec39-kube-api-access-m2ps6\") pod \"redhat-operators-d87p6\" (UID: \"68cb7f47-570e-4098-9291-43b71a28ec39\") " pod="openshift-marketplace/redhat-operators-d87p6" Jan 29 08:58:05 crc kubenswrapper[4826]: I0129 08:58:05.770228 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d87p6" Jan 29 08:58:06 crc kubenswrapper[4826]: I0129 08:58:06.274287 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d87p6"] Jan 29 08:58:07 crc kubenswrapper[4826]: I0129 08:58:07.107383 4826 generic.go:334] "Generic (PLEG): container finished" podID="68cb7f47-570e-4098-9291-43b71a28ec39" containerID="c70d68a068bc0d75412e38302d1ba23725c58be4ce85b030495981bf445ec765" exitCode=0 Jan 29 08:58:07 crc kubenswrapper[4826]: I0129 08:58:07.107433 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d87p6" event={"ID":"68cb7f47-570e-4098-9291-43b71a28ec39","Type":"ContainerDied","Data":"c70d68a068bc0d75412e38302d1ba23725c58be4ce85b030495981bf445ec765"} Jan 29 08:58:07 crc kubenswrapper[4826]: I0129 08:58:07.107727 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d87p6" event={"ID":"68cb7f47-570e-4098-9291-43b71a28ec39","Type":"ContainerStarted","Data":"66895f0e6b40154af615655071b43c3204a37fb71e487b46a497336a3d78bfce"} Jan 29 08:58:08 crc kubenswrapper[4826]: I0129 08:58:08.137439 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d87p6" event={"ID":"68cb7f47-570e-4098-9291-43b71a28ec39","Type":"ContainerStarted","Data":"75083e603c5b62914ab787cd5ef4a04ae69097521ad2d5fdcf9fd82d6b02a635"} Jan 29 08:58:08 crc kubenswrapper[4826]: I0129 08:58:08.809498 4826 scope.go:117] "RemoveContainer" containerID="687be0e4fab9ea6b1c277db6848c566541ae32496c85f97fc6f9e12e832a22fd" Jan 29 08:58:08 crc kubenswrapper[4826]: E0129 08:58:08.810004 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:58:17 crc kubenswrapper[4826]: I0129 08:58:17.236345 4826 generic.go:334] "Generic (PLEG): container finished" podID="68cb7f47-570e-4098-9291-43b71a28ec39" containerID="75083e603c5b62914ab787cd5ef4a04ae69097521ad2d5fdcf9fd82d6b02a635" exitCode=0 Jan 29 08:58:17 crc kubenswrapper[4826]: I0129 08:58:17.236460 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d87p6" event={"ID":"68cb7f47-570e-4098-9291-43b71a28ec39","Type":"ContainerDied","Data":"75083e603c5b62914ab787cd5ef4a04ae69097521ad2d5fdcf9fd82d6b02a635"} Jan 29 08:58:18 crc kubenswrapper[4826]: I0129 08:58:18.248629 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d87p6" event={"ID":"68cb7f47-570e-4098-9291-43b71a28ec39","Type":"ContainerStarted","Data":"c35d435ca597b1a9e2af5fd328d6e2a9baa9c181ed2bc94aed659a8d34d45f5f"} Jan 29 08:58:18 crc kubenswrapper[4826]: I0129 08:58:18.267521 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d87p6" podStartSLOduration=2.756533157 podStartE2EDuration="13.267498005s" podCreationTimestamp="2026-01-29 08:58:05 +0000 UTC" firstStartedPulling="2026-01-29 08:58:07.109196673 +0000 UTC m=+8070.970989742" lastFinishedPulling="2026-01-29 08:58:17.620161521 +0000 UTC m=+8081.481954590" observedRunningTime="2026-01-29 08:58:18.265529052 +0000 UTC m=+8082.127322121" watchObservedRunningTime="2026-01-29 08:58:18.267498005 +0000 UTC m=+8082.129291074" Jan 29 08:58:20 crc kubenswrapper[4826]: I0129 08:58:20.809925 4826 scope.go:117] "RemoveContainer" containerID="687be0e4fab9ea6b1c277db6848c566541ae32496c85f97fc6f9e12e832a22fd" Jan 29 08:58:20 crc kubenswrapper[4826]: E0129 08:58:20.810913 4826 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:58:25 crc kubenswrapper[4826]: I0129 08:58:25.770674 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d87p6" Jan 29 08:58:25 crc kubenswrapper[4826]: I0129 08:58:25.773552 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d87p6" Jan 29 08:58:25 crc kubenswrapper[4826]: I0129 08:58:25.822811 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d87p6" Jan 29 08:58:26 crc kubenswrapper[4826]: I0129 08:58:26.369084 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d87p6" Jan 29 08:58:26 crc kubenswrapper[4826]: I0129 08:58:26.455709 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d87p6"] Jan 29 08:58:28 crc kubenswrapper[4826]: I0129 08:58:28.330521 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d87p6" podUID="68cb7f47-570e-4098-9291-43b71a28ec39" containerName="registry-server" containerID="cri-o://c35d435ca597b1a9e2af5fd328d6e2a9baa9c181ed2bc94aed659a8d34d45f5f" gracePeriod=2 Jan 29 08:58:28 crc kubenswrapper[4826]: I0129 08:58:28.870622 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d87p6" Jan 29 08:58:28 crc kubenswrapper[4826]: I0129 08:58:28.906232 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68cb7f47-570e-4098-9291-43b71a28ec39-utilities\") pod \"68cb7f47-570e-4098-9291-43b71a28ec39\" (UID: \"68cb7f47-570e-4098-9291-43b71a28ec39\") " Jan 29 08:58:28 crc kubenswrapper[4826]: I0129 08:58:28.906406 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2ps6\" (UniqueName: \"kubernetes.io/projected/68cb7f47-570e-4098-9291-43b71a28ec39-kube-api-access-m2ps6\") pod \"68cb7f47-570e-4098-9291-43b71a28ec39\" (UID: \"68cb7f47-570e-4098-9291-43b71a28ec39\") " Jan 29 08:58:28 crc kubenswrapper[4826]: I0129 08:58:28.906470 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68cb7f47-570e-4098-9291-43b71a28ec39-catalog-content\") pod \"68cb7f47-570e-4098-9291-43b71a28ec39\" (UID: \"68cb7f47-570e-4098-9291-43b71a28ec39\") " Jan 29 08:58:28 crc kubenswrapper[4826]: I0129 08:58:28.931667 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68cb7f47-570e-4098-9291-43b71a28ec39-utilities" (OuterVolumeSpecName: "utilities") pod "68cb7f47-570e-4098-9291-43b71a28ec39" (UID: "68cb7f47-570e-4098-9291-43b71a28ec39"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:58:28 crc kubenswrapper[4826]: I0129 08:58:28.941693 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68cb7f47-570e-4098-9291-43b71a28ec39-kube-api-access-m2ps6" (OuterVolumeSpecName: "kube-api-access-m2ps6") pod "68cb7f47-570e-4098-9291-43b71a28ec39" (UID: "68cb7f47-570e-4098-9291-43b71a28ec39"). InnerVolumeSpecName "kube-api-access-m2ps6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:58:29 crc kubenswrapper[4826]: I0129 08:58:29.011788 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68cb7f47-570e-4098-9291-43b71a28ec39-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:29 crc kubenswrapper[4826]: I0129 08:58:29.011833 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2ps6\" (UniqueName: \"kubernetes.io/projected/68cb7f47-570e-4098-9291-43b71a28ec39-kube-api-access-m2ps6\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:29 crc kubenswrapper[4826]: I0129 08:58:29.172826 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68cb7f47-570e-4098-9291-43b71a28ec39-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "68cb7f47-570e-4098-9291-43b71a28ec39" (UID: "68cb7f47-570e-4098-9291-43b71a28ec39"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 08:58:29 crc kubenswrapper[4826]: I0129 08:58:29.216565 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68cb7f47-570e-4098-9291-43b71a28ec39-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:29 crc kubenswrapper[4826]: I0129 08:58:29.342602 4826 generic.go:334] "Generic (PLEG): container finished" podID="68cb7f47-570e-4098-9291-43b71a28ec39" containerID="c35d435ca597b1a9e2af5fd328d6e2a9baa9c181ed2bc94aed659a8d34d45f5f" exitCode=0 Jan 29 08:58:29 crc kubenswrapper[4826]: I0129 08:58:29.342695 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d87p6" event={"ID":"68cb7f47-570e-4098-9291-43b71a28ec39","Type":"ContainerDied","Data":"c35d435ca597b1a9e2af5fd328d6e2a9baa9c181ed2bc94aed659a8d34d45f5f"} Jan 29 08:58:29 crc kubenswrapper[4826]: I0129 08:58:29.342713 4826 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d87p6" Jan 29 08:58:29 crc kubenswrapper[4826]: I0129 08:58:29.342726 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d87p6" event={"ID":"68cb7f47-570e-4098-9291-43b71a28ec39","Type":"ContainerDied","Data":"66895f0e6b40154af615655071b43c3204a37fb71e487b46a497336a3d78bfce"} Jan 29 08:58:29 crc kubenswrapper[4826]: I0129 08:58:29.342749 4826 scope.go:117] "RemoveContainer" containerID="c35d435ca597b1a9e2af5fd328d6e2a9baa9c181ed2bc94aed659a8d34d45f5f" Jan 29 08:58:29 crc kubenswrapper[4826]: I0129 08:58:29.344639 4826 generic.go:334] "Generic (PLEG): container finished" podID="b73120ed-4019-4aed-883b-3fd780326a7e" containerID="f27d63447ca06d52be4a80192d2e4bfa13dba2df507bf50599c01770d1b19ba5" exitCode=0 Jan 29 08:58:29 crc kubenswrapper[4826]: I0129 08:58:29.344695 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-fh64b" event={"ID":"b73120ed-4019-4aed-883b-3fd780326a7e","Type":"ContainerDied","Data":"f27d63447ca06d52be4a80192d2e4bfa13dba2df507bf50599c01770d1b19ba5"} Jan 29 08:58:29 crc kubenswrapper[4826]: I0129 08:58:29.362516 4826 scope.go:117] "RemoveContainer" containerID="75083e603c5b62914ab787cd5ef4a04ae69097521ad2d5fdcf9fd82d6b02a635" Jan 29 08:58:29 crc kubenswrapper[4826]: I0129 08:58:29.402442 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d87p6"] Jan 29 08:58:29 crc kubenswrapper[4826]: I0129 08:58:29.406638 4826 scope.go:117] "RemoveContainer" containerID="c70d68a068bc0d75412e38302d1ba23725c58be4ce85b030495981bf445ec765" Jan 29 08:58:29 crc kubenswrapper[4826]: I0129 08:58:29.413489 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d87p6"] Jan 29 08:58:29 crc kubenswrapper[4826]: I0129 08:58:29.473375 4826 scope.go:117] "RemoveContainer" 
containerID="c35d435ca597b1a9e2af5fd328d6e2a9baa9c181ed2bc94aed659a8d34d45f5f" Jan 29 08:58:29 crc kubenswrapper[4826]: E0129 08:58:29.475904 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c35d435ca597b1a9e2af5fd328d6e2a9baa9c181ed2bc94aed659a8d34d45f5f\": container with ID starting with c35d435ca597b1a9e2af5fd328d6e2a9baa9c181ed2bc94aed659a8d34d45f5f not found: ID does not exist" containerID="c35d435ca597b1a9e2af5fd328d6e2a9baa9c181ed2bc94aed659a8d34d45f5f" Jan 29 08:58:29 crc kubenswrapper[4826]: I0129 08:58:29.475937 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c35d435ca597b1a9e2af5fd328d6e2a9baa9c181ed2bc94aed659a8d34d45f5f"} err="failed to get container status \"c35d435ca597b1a9e2af5fd328d6e2a9baa9c181ed2bc94aed659a8d34d45f5f\": rpc error: code = NotFound desc = could not find container \"c35d435ca597b1a9e2af5fd328d6e2a9baa9c181ed2bc94aed659a8d34d45f5f\": container with ID starting with c35d435ca597b1a9e2af5fd328d6e2a9baa9c181ed2bc94aed659a8d34d45f5f not found: ID does not exist" Jan 29 08:58:29 crc kubenswrapper[4826]: I0129 08:58:29.475957 4826 scope.go:117] "RemoveContainer" containerID="75083e603c5b62914ab787cd5ef4a04ae69097521ad2d5fdcf9fd82d6b02a635" Jan 29 08:58:29 crc kubenswrapper[4826]: E0129 08:58:29.476453 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75083e603c5b62914ab787cd5ef4a04ae69097521ad2d5fdcf9fd82d6b02a635\": container with ID starting with 75083e603c5b62914ab787cd5ef4a04ae69097521ad2d5fdcf9fd82d6b02a635 not found: ID does not exist" containerID="75083e603c5b62914ab787cd5ef4a04ae69097521ad2d5fdcf9fd82d6b02a635" Jan 29 08:58:29 crc kubenswrapper[4826]: I0129 08:58:29.476501 4826 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"75083e603c5b62914ab787cd5ef4a04ae69097521ad2d5fdcf9fd82d6b02a635"} err="failed to get container status \"75083e603c5b62914ab787cd5ef4a04ae69097521ad2d5fdcf9fd82d6b02a635\": rpc error: code = NotFound desc = could not find container \"75083e603c5b62914ab787cd5ef4a04ae69097521ad2d5fdcf9fd82d6b02a635\": container with ID starting with 75083e603c5b62914ab787cd5ef4a04ae69097521ad2d5fdcf9fd82d6b02a635 not found: ID does not exist" Jan 29 08:58:29 crc kubenswrapper[4826]: I0129 08:58:29.476544 4826 scope.go:117] "RemoveContainer" containerID="c70d68a068bc0d75412e38302d1ba23725c58be4ce85b030495981bf445ec765" Jan 29 08:58:29 crc kubenswrapper[4826]: E0129 08:58:29.476910 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c70d68a068bc0d75412e38302d1ba23725c58be4ce85b030495981bf445ec765\": container with ID starting with c70d68a068bc0d75412e38302d1ba23725c58be4ce85b030495981bf445ec765 not found: ID does not exist" containerID="c70d68a068bc0d75412e38302d1ba23725c58be4ce85b030495981bf445ec765" Jan 29 08:58:29 crc kubenswrapper[4826]: I0129 08:58:29.476937 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c70d68a068bc0d75412e38302d1ba23725c58be4ce85b030495981bf445ec765"} err="failed to get container status \"c70d68a068bc0d75412e38302d1ba23725c58be4ce85b030495981bf445ec765\": rpc error: code = NotFound desc = could not find container \"c70d68a068bc0d75412e38302d1ba23725c58be4ce85b030495981bf445ec765\": container with ID starting with c70d68a068bc0d75412e38302d1ba23725c58be4ce85b030495981bf445ec765 not found: ID does not exist" Jan 29 08:58:30 crc kubenswrapper[4826]: I0129 08:58:30.844952 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68cb7f47-570e-4098-9291-43b71a28ec39" path="/var/lib/kubelet/pods/68cb7f47-570e-4098-9291-43b71a28ec39/volumes" Jan 29 08:58:30 crc kubenswrapper[4826]: I0129 
08:58:30.901669 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-fh64b" Jan 29 08:58:30 crc kubenswrapper[4826]: I0129 08:58:30.957021 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b73120ed-4019-4aed-883b-3fd780326a7e-ceilometer-compute-config-data-1\") pod \"b73120ed-4019-4aed-883b-3fd780326a7e\" (UID: \"b73120ed-4019-4aed-883b-3fd780326a7e\") " Jan 29 08:58:30 crc kubenswrapper[4826]: I0129 08:58:30.957087 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b73120ed-4019-4aed-883b-3fd780326a7e-ceilometer-compute-config-data-2\") pod \"b73120ed-4019-4aed-883b-3fd780326a7e\" (UID: \"b73120ed-4019-4aed-883b-3fd780326a7e\") " Jan 29 08:58:30 crc kubenswrapper[4826]: I0129 08:58:30.957186 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/b73120ed-4019-4aed-883b-3fd780326a7e-ssh-key-openstack-cell1\") pod \"b73120ed-4019-4aed-883b-3fd780326a7e\" (UID: \"b73120ed-4019-4aed-883b-3fd780326a7e\") " Jan 29 08:58:30 crc kubenswrapper[4826]: I0129 08:58:30.957209 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b73120ed-4019-4aed-883b-3fd780326a7e-telemetry-combined-ca-bundle\") pod \"b73120ed-4019-4aed-883b-3fd780326a7e\" (UID: \"b73120ed-4019-4aed-883b-3fd780326a7e\") " Jan 29 08:58:30 crc kubenswrapper[4826]: I0129 08:58:30.957244 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f67v\" (UniqueName: \"kubernetes.io/projected/b73120ed-4019-4aed-883b-3fd780326a7e-kube-api-access-6f67v\") pod \"b73120ed-4019-4aed-883b-3fd780326a7e\" 
(UID: \"b73120ed-4019-4aed-883b-3fd780326a7e\") " Jan 29 08:58:30 crc kubenswrapper[4826]: I0129 08:58:30.957458 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b73120ed-4019-4aed-883b-3fd780326a7e-ceilometer-compute-config-data-0\") pod \"b73120ed-4019-4aed-883b-3fd780326a7e\" (UID: \"b73120ed-4019-4aed-883b-3fd780326a7e\") " Jan 29 08:58:30 crc kubenswrapper[4826]: I0129 08:58:30.957589 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b73120ed-4019-4aed-883b-3fd780326a7e-inventory\") pod \"b73120ed-4019-4aed-883b-3fd780326a7e\" (UID: \"b73120ed-4019-4aed-883b-3fd780326a7e\") " Jan 29 08:58:30 crc kubenswrapper[4826]: I0129 08:58:30.975887 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b73120ed-4019-4aed-883b-3fd780326a7e-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "b73120ed-4019-4aed-883b-3fd780326a7e" (UID: "b73120ed-4019-4aed-883b-3fd780326a7e"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:58:30 crc kubenswrapper[4826]: I0129 08:58:30.975957 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b73120ed-4019-4aed-883b-3fd780326a7e-kube-api-access-6f67v" (OuterVolumeSpecName: "kube-api-access-6f67v") pod "b73120ed-4019-4aed-883b-3fd780326a7e" (UID: "b73120ed-4019-4aed-883b-3fd780326a7e"). InnerVolumeSpecName "kube-api-access-6f67v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:58:30 crc kubenswrapper[4826]: I0129 08:58:30.986272 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b73120ed-4019-4aed-883b-3fd780326a7e-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "b73120ed-4019-4aed-883b-3fd780326a7e" (UID: "b73120ed-4019-4aed-883b-3fd780326a7e"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:58:30 crc kubenswrapper[4826]: I0129 08:58:30.986413 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b73120ed-4019-4aed-883b-3fd780326a7e-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "b73120ed-4019-4aed-883b-3fd780326a7e" (UID: "b73120ed-4019-4aed-883b-3fd780326a7e"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:58:30 crc kubenswrapper[4826]: I0129 08:58:30.992658 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b73120ed-4019-4aed-883b-3fd780326a7e-inventory" (OuterVolumeSpecName: "inventory") pod "b73120ed-4019-4aed-883b-3fd780326a7e" (UID: "b73120ed-4019-4aed-883b-3fd780326a7e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:58:30 crc kubenswrapper[4826]: I0129 08:58:30.993334 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b73120ed-4019-4aed-883b-3fd780326a7e-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "b73120ed-4019-4aed-883b-3fd780326a7e" (UID: "b73120ed-4019-4aed-883b-3fd780326a7e"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:58:31 crc kubenswrapper[4826]: I0129 08:58:31.006514 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b73120ed-4019-4aed-883b-3fd780326a7e-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "b73120ed-4019-4aed-883b-3fd780326a7e" (UID: "b73120ed-4019-4aed-883b-3fd780326a7e"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:58:31 crc kubenswrapper[4826]: I0129 08:58:31.060434 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/b73120ed-4019-4aed-883b-3fd780326a7e-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:31 crc kubenswrapper[4826]: I0129 08:58:31.060472 4826 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b73120ed-4019-4aed-883b-3fd780326a7e-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:31 crc kubenswrapper[4826]: I0129 08:58:31.060487 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6f67v\" (UniqueName: \"kubernetes.io/projected/b73120ed-4019-4aed-883b-3fd780326a7e-kube-api-access-6f67v\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:31 crc kubenswrapper[4826]: I0129 08:58:31.060500 4826 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b73120ed-4019-4aed-883b-3fd780326a7e-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:31 crc kubenswrapper[4826]: I0129 08:58:31.060519 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b73120ed-4019-4aed-883b-3fd780326a7e-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:31 crc kubenswrapper[4826]: I0129 08:58:31.060533 4826 
reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b73120ed-4019-4aed-883b-3fd780326a7e-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:31 crc kubenswrapper[4826]: I0129 08:58:31.060544 4826 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b73120ed-4019-4aed-883b-3fd780326a7e-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 29 08:58:31 crc kubenswrapper[4826]: I0129 08:58:31.367593 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-fh64b" event={"ID":"b73120ed-4019-4aed-883b-3fd780326a7e","Type":"ContainerDied","Data":"71be51b1db125f800b72d063460bd00844e41be0b2ca9e753d1a269db38a3d9b"} Jan 29 08:58:31 crc kubenswrapper[4826]: I0129 08:58:31.367642 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71be51b1db125f800b72d063460bd00844e41be0b2ca9e753d1a269db38a3d9b" Jan 29 08:58:31 crc kubenswrapper[4826]: I0129 08:58:31.367662 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-fh64b" Jan 29 08:58:31 crc kubenswrapper[4826]: I0129 08:58:31.550976 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-ffxzr"] Jan 29 08:58:31 crc kubenswrapper[4826]: E0129 08:58:31.551721 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68cb7f47-570e-4098-9291-43b71a28ec39" containerName="registry-server" Jan 29 08:58:31 crc kubenswrapper[4826]: I0129 08:58:31.551831 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="68cb7f47-570e-4098-9291-43b71a28ec39" containerName="registry-server" Jan 29 08:58:31 crc kubenswrapper[4826]: E0129 08:58:31.551932 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b73120ed-4019-4aed-883b-3fd780326a7e" containerName="telemetry-openstack-openstack-cell1" Jan 29 08:58:31 crc kubenswrapper[4826]: I0129 08:58:31.552009 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="b73120ed-4019-4aed-883b-3fd780326a7e" containerName="telemetry-openstack-openstack-cell1" Jan 29 08:58:31 crc kubenswrapper[4826]: E0129 08:58:31.552091 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68cb7f47-570e-4098-9291-43b71a28ec39" containerName="extract-utilities" Jan 29 08:58:31 crc kubenswrapper[4826]: I0129 08:58:31.552160 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="68cb7f47-570e-4098-9291-43b71a28ec39" containerName="extract-utilities" Jan 29 08:58:31 crc kubenswrapper[4826]: E0129 08:58:31.552259 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68cb7f47-570e-4098-9291-43b71a28ec39" containerName="extract-content" Jan 29 08:58:31 crc kubenswrapper[4826]: I0129 08:58:31.552367 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="68cb7f47-570e-4098-9291-43b71a28ec39" containerName="extract-content" Jan 29 08:58:31 crc kubenswrapper[4826]: I0129 08:58:31.552690 4826 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="68cb7f47-570e-4098-9291-43b71a28ec39" containerName="registry-server" Jan 29 08:58:31 crc kubenswrapper[4826]: I0129 08:58:31.552799 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="b73120ed-4019-4aed-883b-3fd780326a7e" containerName="telemetry-openstack-openstack-cell1" Jan 29 08:58:31 crc kubenswrapper[4826]: I0129 08:58:31.553753 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-ffxzr" Jan 29 08:58:31 crc kubenswrapper[4826]: I0129 08:58:31.556441 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 29 08:58:31 crc kubenswrapper[4826]: I0129 08:58:31.556613 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-bz2p6" Jan 29 08:58:31 crc kubenswrapper[4826]: I0129 08:58:31.556718 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 08:58:31 crc kubenswrapper[4826]: I0129 08:58:31.556822 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-sriov-agent-neutron-config" Jan 29 08:58:31 crc kubenswrapper[4826]: I0129 08:58:31.556955 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 29 08:58:31 crc kubenswrapper[4826]: I0129 08:58:31.560774 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-ffxzr"] Jan 29 08:58:31 crc kubenswrapper[4826]: I0129 08:58:31.571576 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/506786bb-f168-420d-9f74-01304213b10b-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-ffxzr\" (UID: \"506786bb-f168-420d-9f74-01304213b10b\") " 
pod="openstack/neutron-sriov-openstack-openstack-cell1-ffxzr" Jan 29 08:58:31 crc kubenswrapper[4826]: I0129 08:58:31.571620 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/506786bb-f168-420d-9f74-01304213b10b-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-ffxzr\" (UID: \"506786bb-f168-420d-9f74-01304213b10b\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-ffxzr" Jan 29 08:58:31 crc kubenswrapper[4826]: I0129 08:58:31.571723 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/506786bb-f168-420d-9f74-01304213b10b-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-ffxzr\" (UID: \"506786bb-f168-420d-9f74-01304213b10b\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-ffxzr" Jan 29 08:58:31 crc kubenswrapper[4826]: I0129 08:58:31.571756 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c44kp\" (UniqueName: \"kubernetes.io/projected/506786bb-f168-420d-9f74-01304213b10b-kube-api-access-c44kp\") pod \"neutron-sriov-openstack-openstack-cell1-ffxzr\" (UID: \"506786bb-f168-420d-9f74-01304213b10b\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-ffxzr" Jan 29 08:58:31 crc kubenswrapper[4826]: I0129 08:58:31.571787 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/506786bb-f168-420d-9f74-01304213b10b-ssh-key-openstack-cell1\") pod \"neutron-sriov-openstack-openstack-cell1-ffxzr\" (UID: \"506786bb-f168-420d-9f74-01304213b10b\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-ffxzr" Jan 29 08:58:31 crc kubenswrapper[4826]: I0129 08:58:31.673848 
4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/506786bb-f168-420d-9f74-01304213b10b-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-ffxzr\" (UID: \"506786bb-f168-420d-9f74-01304213b10b\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-ffxzr" Jan 29 08:58:31 crc kubenswrapper[4826]: I0129 08:58:31.674035 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/506786bb-f168-420d-9f74-01304213b10b-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-ffxzr\" (UID: \"506786bb-f168-420d-9f74-01304213b10b\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-ffxzr" Jan 29 08:58:31 crc kubenswrapper[4826]: I0129 08:58:31.674083 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c44kp\" (UniqueName: \"kubernetes.io/projected/506786bb-f168-420d-9f74-01304213b10b-kube-api-access-c44kp\") pod \"neutron-sriov-openstack-openstack-cell1-ffxzr\" (UID: \"506786bb-f168-420d-9f74-01304213b10b\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-ffxzr" Jan 29 08:58:31 crc kubenswrapper[4826]: I0129 08:58:31.674126 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/506786bb-f168-420d-9f74-01304213b10b-ssh-key-openstack-cell1\") pod \"neutron-sriov-openstack-openstack-cell1-ffxzr\" (UID: \"506786bb-f168-420d-9f74-01304213b10b\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-ffxzr" Jan 29 08:58:31 crc kubenswrapper[4826]: I0129 08:58:31.674218 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/506786bb-f168-420d-9f74-01304213b10b-inventory\") pod 
\"neutron-sriov-openstack-openstack-cell1-ffxzr\" (UID: \"506786bb-f168-420d-9f74-01304213b10b\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-ffxzr" Jan 29 08:58:31 crc kubenswrapper[4826]: I0129 08:58:31.682368 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/506786bb-f168-420d-9f74-01304213b10b-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-ffxzr\" (UID: \"506786bb-f168-420d-9f74-01304213b10b\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-ffxzr" Jan 29 08:58:31 crc kubenswrapper[4826]: I0129 08:58:31.682785 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/506786bb-f168-420d-9f74-01304213b10b-ssh-key-openstack-cell1\") pod \"neutron-sriov-openstack-openstack-cell1-ffxzr\" (UID: \"506786bb-f168-420d-9f74-01304213b10b\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-ffxzr" Jan 29 08:58:31 crc kubenswrapper[4826]: I0129 08:58:31.685226 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/506786bb-f168-420d-9f74-01304213b10b-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-ffxzr\" (UID: \"506786bb-f168-420d-9f74-01304213b10b\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-ffxzr" Jan 29 08:58:31 crc kubenswrapper[4826]: I0129 08:58:31.692345 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c44kp\" (UniqueName: \"kubernetes.io/projected/506786bb-f168-420d-9f74-01304213b10b-kube-api-access-c44kp\") pod \"neutron-sriov-openstack-openstack-cell1-ffxzr\" (UID: \"506786bb-f168-420d-9f74-01304213b10b\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-ffxzr" Jan 29 08:58:31 crc kubenswrapper[4826]: I0129 08:58:31.697458 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/506786bb-f168-420d-9f74-01304213b10b-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-ffxzr\" (UID: \"506786bb-f168-420d-9f74-01304213b10b\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-ffxzr" Jan 29 08:58:31 crc kubenswrapper[4826]: I0129 08:58:31.869697 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-ffxzr" Jan 29 08:58:32 crc kubenswrapper[4826]: I0129 08:58:32.434625 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-ffxzr"] Jan 29 08:58:32 crc kubenswrapper[4826]: W0129 08:58:32.441209 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod506786bb_f168_420d_9f74_01304213b10b.slice/crio-53b63fbb99807b8c5252c022831c003c5930d1657a510e3718f744f79d28d742 WatchSource:0}: Error finding container 53b63fbb99807b8c5252c022831c003c5930d1657a510e3718f744f79d28d742: Status 404 returned error can't find the container with id 53b63fbb99807b8c5252c022831c003c5930d1657a510e3718f744f79d28d742 Jan 29 08:58:33 crc kubenswrapper[4826]: I0129 08:58:33.405948 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-ffxzr" event={"ID":"506786bb-f168-420d-9f74-01304213b10b","Type":"ContainerStarted","Data":"7626d58bec7de3a9ca946c80b5545d403849278200460c697451715a4f7f83c0"} Jan 29 08:58:33 crc kubenswrapper[4826]: I0129 08:58:33.406402 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-ffxzr" event={"ID":"506786bb-f168-420d-9f74-01304213b10b","Type":"ContainerStarted","Data":"53b63fbb99807b8c5252c022831c003c5930d1657a510e3718f744f79d28d742"} Jan 29 08:58:33 crc kubenswrapper[4826]: I0129 08:58:33.436636 4826 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-sriov-openstack-openstack-cell1-ffxzr" podStartSLOduration=1.929616937 podStartE2EDuration="2.436606008s" podCreationTimestamp="2026-01-29 08:58:31 +0000 UTC" firstStartedPulling="2026-01-29 08:58:32.444279815 +0000 UTC m=+8096.306072884" lastFinishedPulling="2026-01-29 08:58:32.951268886 +0000 UTC m=+8096.813061955" observedRunningTime="2026-01-29 08:58:33.43180539 +0000 UTC m=+8097.293598459" watchObservedRunningTime="2026-01-29 08:58:33.436606008 +0000 UTC m=+8097.298399077" Jan 29 08:58:34 crc kubenswrapper[4826]: I0129 08:58:34.808474 4826 scope.go:117] "RemoveContainer" containerID="687be0e4fab9ea6b1c277db6848c566541ae32496c85f97fc6f9e12e832a22fd" Jan 29 08:58:34 crc kubenswrapper[4826]: E0129 08:58:34.808918 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:58:47 crc kubenswrapper[4826]: I0129 08:58:47.808602 4826 scope.go:117] "RemoveContainer" containerID="687be0e4fab9ea6b1c277db6848c566541ae32496c85f97fc6f9e12e832a22fd" Jan 29 08:58:47 crc kubenswrapper[4826]: E0129 08:58:47.809345 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:59:02 crc kubenswrapper[4826]: I0129 08:59:02.808958 4826 scope.go:117] 
"RemoveContainer" containerID="687be0e4fab9ea6b1c277db6848c566541ae32496c85f97fc6f9e12e832a22fd" Jan 29 08:59:02 crc kubenswrapper[4826]: E0129 08:59:02.828657 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:59:17 crc kubenswrapper[4826]: I0129 08:59:17.809156 4826 scope.go:117] "RemoveContainer" containerID="687be0e4fab9ea6b1c277db6848c566541ae32496c85f97fc6f9e12e832a22fd" Jan 29 08:59:17 crc kubenswrapper[4826]: E0129 08:59:17.809855 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:59:32 crc kubenswrapper[4826]: I0129 08:59:32.809356 4826 scope.go:117] "RemoveContainer" containerID="687be0e4fab9ea6b1c277db6848c566541ae32496c85f97fc6f9e12e832a22fd" Jan 29 08:59:32 crc kubenswrapper[4826]: E0129 08:59:32.810090 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:59:34 crc kubenswrapper[4826]: I0129 08:59:34.967074 
4826 generic.go:334] "Generic (PLEG): container finished" podID="506786bb-f168-420d-9f74-01304213b10b" containerID="7626d58bec7de3a9ca946c80b5545d403849278200460c697451715a4f7f83c0" exitCode=0 Jan 29 08:59:34 crc kubenswrapper[4826]: I0129 08:59:34.967117 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-ffxzr" event={"ID":"506786bb-f168-420d-9f74-01304213b10b","Type":"ContainerDied","Data":"7626d58bec7de3a9ca946c80b5545d403849278200460c697451715a4f7f83c0"} Jan 29 08:59:36 crc kubenswrapper[4826]: I0129 08:59:36.455708 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-ffxzr" Jan 29 08:59:36 crc kubenswrapper[4826]: I0129 08:59:36.528846 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c44kp\" (UniqueName: \"kubernetes.io/projected/506786bb-f168-420d-9f74-01304213b10b-kube-api-access-c44kp\") pod \"506786bb-f168-420d-9f74-01304213b10b\" (UID: \"506786bb-f168-420d-9f74-01304213b10b\") " Jan 29 08:59:36 crc kubenswrapper[4826]: I0129 08:59:36.529244 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/506786bb-f168-420d-9f74-01304213b10b-inventory\") pod \"506786bb-f168-420d-9f74-01304213b10b\" (UID: \"506786bb-f168-420d-9f74-01304213b10b\") " Jan 29 08:59:36 crc kubenswrapper[4826]: I0129 08:59:36.529331 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/506786bb-f168-420d-9f74-01304213b10b-ssh-key-openstack-cell1\") pod \"506786bb-f168-420d-9f74-01304213b10b\" (UID: \"506786bb-f168-420d-9f74-01304213b10b\") " Jan 29 08:59:36 crc kubenswrapper[4826]: I0129 08:59:36.529383 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/506786bb-f168-420d-9f74-01304213b10b-neutron-sriov-combined-ca-bundle\") pod \"506786bb-f168-420d-9f74-01304213b10b\" (UID: \"506786bb-f168-420d-9f74-01304213b10b\") " Jan 29 08:59:36 crc kubenswrapper[4826]: I0129 08:59:36.529410 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/506786bb-f168-420d-9f74-01304213b10b-neutron-sriov-agent-neutron-config-0\") pod \"506786bb-f168-420d-9f74-01304213b10b\" (UID: \"506786bb-f168-420d-9f74-01304213b10b\") " Jan 29 08:59:36 crc kubenswrapper[4826]: I0129 08:59:36.537381 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/506786bb-f168-420d-9f74-01304213b10b-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "506786bb-f168-420d-9f74-01304213b10b" (UID: "506786bb-f168-420d-9f74-01304213b10b"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:59:36 crc kubenswrapper[4826]: I0129 08:59:36.543655 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/506786bb-f168-420d-9f74-01304213b10b-kube-api-access-c44kp" (OuterVolumeSpecName: "kube-api-access-c44kp") pod "506786bb-f168-420d-9f74-01304213b10b" (UID: "506786bb-f168-420d-9f74-01304213b10b"). InnerVolumeSpecName "kube-api-access-c44kp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 08:59:36 crc kubenswrapper[4826]: I0129 08:59:36.568669 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/506786bb-f168-420d-9f74-01304213b10b-neutron-sriov-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-sriov-agent-neutron-config-0") pod "506786bb-f168-420d-9f74-01304213b10b" (UID: "506786bb-f168-420d-9f74-01304213b10b"). 
InnerVolumeSpecName "neutron-sriov-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:59:36 crc kubenswrapper[4826]: I0129 08:59:36.575606 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/506786bb-f168-420d-9f74-01304213b10b-inventory" (OuterVolumeSpecName: "inventory") pod "506786bb-f168-420d-9f74-01304213b10b" (UID: "506786bb-f168-420d-9f74-01304213b10b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:59:36 crc kubenswrapper[4826]: I0129 08:59:36.590807 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/506786bb-f168-420d-9f74-01304213b10b-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "506786bb-f168-420d-9f74-01304213b10b" (UID: "506786bb-f168-420d-9f74-01304213b10b"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 08:59:36 crc kubenswrapper[4826]: I0129 08:59:36.632190 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c44kp\" (UniqueName: \"kubernetes.io/projected/506786bb-f168-420d-9f74-01304213b10b-kube-api-access-c44kp\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:36 crc kubenswrapper[4826]: I0129 08:59:36.632407 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/506786bb-f168-420d-9f74-01304213b10b-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:36 crc kubenswrapper[4826]: I0129 08:59:36.632473 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/506786bb-f168-420d-9f74-01304213b10b-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:36 crc kubenswrapper[4826]: I0129 08:59:36.632538 4826 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/506786bb-f168-420d-9f74-01304213b10b-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:36 crc kubenswrapper[4826]: I0129 08:59:36.632594 4826 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/506786bb-f168-420d-9f74-01304213b10b-neutron-sriov-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 29 08:59:36 crc kubenswrapper[4826]: I0129 08:59:36.987443 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-ffxzr" event={"ID":"506786bb-f168-420d-9f74-01304213b10b","Type":"ContainerDied","Data":"53b63fbb99807b8c5252c022831c003c5930d1657a510e3718f744f79d28d742"} Jan 29 08:59:36 crc kubenswrapper[4826]: I0129 08:59:36.987484 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53b63fbb99807b8c5252c022831c003c5930d1657a510e3718f744f79d28d742" Jan 29 08:59:36 crc kubenswrapper[4826]: I0129 08:59:36.987573 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-ffxzr" Jan 29 08:59:37 crc kubenswrapper[4826]: I0129 08:59:37.079168 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-d5rbv"] Jan 29 08:59:37 crc kubenswrapper[4826]: E0129 08:59:37.079741 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="506786bb-f168-420d-9f74-01304213b10b" containerName="neutron-sriov-openstack-openstack-cell1" Jan 29 08:59:37 crc kubenswrapper[4826]: I0129 08:59:37.079767 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="506786bb-f168-420d-9f74-01304213b10b" containerName="neutron-sriov-openstack-openstack-cell1" Jan 29 08:59:37 crc kubenswrapper[4826]: I0129 08:59:37.080054 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="506786bb-f168-420d-9f74-01304213b10b" containerName="neutron-sriov-openstack-openstack-cell1" Jan 29 08:59:37 crc kubenswrapper[4826]: I0129 08:59:37.080761 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-d5rbv" Jan 29 08:59:37 crc kubenswrapper[4826]: I0129 08:59:37.083543 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-bz2p6" Jan 29 08:59:37 crc kubenswrapper[4826]: I0129 08:59:37.083802 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-dhcp-agent-neutron-config" Jan 29 08:59:37 crc kubenswrapper[4826]: I0129 08:59:37.083957 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 29 08:59:37 crc kubenswrapper[4826]: I0129 08:59:37.084179 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 08:59:37 crc kubenswrapper[4826]: I0129 08:59:37.084381 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 29 08:59:37 crc kubenswrapper[4826]: I0129 08:59:37.090726 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-d5rbv"] Jan 29 08:59:37 crc kubenswrapper[4826]: I0129 08:59:37.142846 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbxjb\" (UniqueName: \"kubernetes.io/projected/4a787b34-0474-44bf-9785-416b19422277-kube-api-access-dbxjb\") pod \"neutron-dhcp-openstack-openstack-cell1-d5rbv\" (UID: \"4a787b34-0474-44bf-9785-416b19422277\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-d5rbv" Jan 29 08:59:37 crc kubenswrapper[4826]: I0129 08:59:37.142902 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4a787b34-0474-44bf-9785-416b19422277-ssh-key-openstack-cell1\") pod \"neutron-dhcp-openstack-openstack-cell1-d5rbv\" (UID: \"4a787b34-0474-44bf-9785-416b19422277\") " 
pod="openstack/neutron-dhcp-openstack-openstack-cell1-d5rbv" Jan 29 08:59:37 crc kubenswrapper[4826]: I0129 08:59:37.143075 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a787b34-0474-44bf-9785-416b19422277-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-d5rbv\" (UID: \"4a787b34-0474-44bf-9785-416b19422277\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-d5rbv" Jan 29 08:59:37 crc kubenswrapper[4826]: I0129 08:59:37.143175 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a787b34-0474-44bf-9785-416b19422277-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-d5rbv\" (UID: \"4a787b34-0474-44bf-9785-416b19422277\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-d5rbv" Jan 29 08:59:37 crc kubenswrapper[4826]: I0129 08:59:37.143254 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4a787b34-0474-44bf-9785-416b19422277-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-d5rbv\" (UID: \"4a787b34-0474-44bf-9785-416b19422277\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-d5rbv" Jan 29 08:59:37 crc kubenswrapper[4826]: I0129 08:59:37.245436 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4a787b34-0474-44bf-9785-416b19422277-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-d5rbv\" (UID: \"4a787b34-0474-44bf-9785-416b19422277\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-d5rbv" Jan 29 08:59:37 crc kubenswrapper[4826]: I0129 08:59:37.245519 4826 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dbxjb\" (UniqueName: \"kubernetes.io/projected/4a787b34-0474-44bf-9785-416b19422277-kube-api-access-dbxjb\") pod \"neutron-dhcp-openstack-openstack-cell1-d5rbv\" (UID: \"4a787b34-0474-44bf-9785-416b19422277\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-d5rbv" Jan 29 08:59:37 crc kubenswrapper[4826]: I0129 08:59:37.245565 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4a787b34-0474-44bf-9785-416b19422277-ssh-key-openstack-cell1\") pod \"neutron-dhcp-openstack-openstack-cell1-d5rbv\" (UID: \"4a787b34-0474-44bf-9785-416b19422277\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-d5rbv" Jan 29 08:59:37 crc kubenswrapper[4826]: I0129 08:59:37.245734 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a787b34-0474-44bf-9785-416b19422277-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-d5rbv\" (UID: \"4a787b34-0474-44bf-9785-416b19422277\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-d5rbv" Jan 29 08:59:37 crc kubenswrapper[4826]: I0129 08:59:37.245822 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a787b34-0474-44bf-9785-416b19422277-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-d5rbv\" (UID: \"4a787b34-0474-44bf-9785-416b19422277\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-d5rbv" Jan 29 08:59:37 crc kubenswrapper[4826]: I0129 08:59:37.249801 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a787b34-0474-44bf-9785-416b19422277-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-d5rbv\" (UID: 
\"4a787b34-0474-44bf-9785-416b19422277\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-d5rbv" Jan 29 08:59:37 crc kubenswrapper[4826]: I0129 08:59:37.249985 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4a787b34-0474-44bf-9785-416b19422277-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-d5rbv\" (UID: \"4a787b34-0474-44bf-9785-416b19422277\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-d5rbv" Jan 29 08:59:37 crc kubenswrapper[4826]: I0129 08:59:37.249985 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4a787b34-0474-44bf-9785-416b19422277-ssh-key-openstack-cell1\") pod \"neutron-dhcp-openstack-openstack-cell1-d5rbv\" (UID: \"4a787b34-0474-44bf-9785-416b19422277\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-d5rbv" Jan 29 08:59:37 crc kubenswrapper[4826]: I0129 08:59:37.251810 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a787b34-0474-44bf-9785-416b19422277-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-d5rbv\" (UID: \"4a787b34-0474-44bf-9785-416b19422277\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-d5rbv" Jan 29 08:59:37 crc kubenswrapper[4826]: I0129 08:59:37.265061 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbxjb\" (UniqueName: \"kubernetes.io/projected/4a787b34-0474-44bf-9785-416b19422277-kube-api-access-dbxjb\") pod \"neutron-dhcp-openstack-openstack-cell1-d5rbv\" (UID: \"4a787b34-0474-44bf-9785-416b19422277\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-d5rbv" Jan 29 08:59:37 crc kubenswrapper[4826]: I0129 08:59:37.423316 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-d5rbv" Jan 29 08:59:37 crc kubenswrapper[4826]: I0129 08:59:37.971492 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-d5rbv"] Jan 29 08:59:37 crc kubenswrapper[4826]: I0129 08:59:37.985934 4826 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 08:59:37 crc kubenswrapper[4826]: I0129 08:59:37.997445 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-d5rbv" event={"ID":"4a787b34-0474-44bf-9785-416b19422277","Type":"ContainerStarted","Data":"e31bf97e17cf31ca69b053c31ad82d9fd038b82db586640297f8f885b53f0d0d"} Jan 29 08:59:39 crc kubenswrapper[4826]: I0129 08:59:39.008536 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-d5rbv" event={"ID":"4a787b34-0474-44bf-9785-416b19422277","Type":"ContainerStarted","Data":"09ae7b8db46068f89f12f64325bd9e7b599a8b8470c4efd3d91dbbcbce17bb83"} Jan 29 08:59:39 crc kubenswrapper[4826]: I0129 08:59:39.034339 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-dhcp-openstack-openstack-cell1-d5rbv" podStartSLOduration=1.519559517 podStartE2EDuration="2.034316206s" podCreationTimestamp="2026-01-29 08:59:37 +0000 UTC" firstStartedPulling="2026-01-29 08:59:37.985754651 +0000 UTC m=+8161.847547720" lastFinishedPulling="2026-01-29 08:59:38.50051134 +0000 UTC m=+8162.362304409" observedRunningTime="2026-01-29 08:59:39.026655132 +0000 UTC m=+8162.888448221" watchObservedRunningTime="2026-01-29 08:59:39.034316206 +0000 UTC m=+8162.896109275" Jan 29 08:59:45 crc kubenswrapper[4826]: I0129 08:59:45.809290 4826 scope.go:117] "RemoveContainer" containerID="687be0e4fab9ea6b1c277db6848c566541ae32496c85f97fc6f9e12e832a22fd" Jan 29 08:59:45 crc kubenswrapper[4826]: E0129 08:59:45.810115 4826 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 08:59:57 crc kubenswrapper[4826]: I0129 08:59:57.809091 4826 scope.go:117] "RemoveContainer" containerID="687be0e4fab9ea6b1c277db6848c566541ae32496c85f97fc6f9e12e832a22fd" Jan 29 08:59:57 crc kubenswrapper[4826]: E0129 08:59:57.810133 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:00:00 crc kubenswrapper[4826]: I0129 09:00:00.154331 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494620-hwpvd"] Jan 29 09:00:00 crc kubenswrapper[4826]: I0129 09:00:00.158615 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-hwpvd" Jan 29 09:00:00 crc kubenswrapper[4826]: I0129 09:00:00.160665 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 09:00:00 crc kubenswrapper[4826]: I0129 09:00:00.161098 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 09:00:00 crc kubenswrapper[4826]: I0129 09:00:00.171099 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494620-hwpvd"] Jan 29 09:00:00 crc kubenswrapper[4826]: I0129 09:00:00.350137 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d25ea228-c684-43a6-8fc9-50a50242c2b5-secret-volume\") pod \"collect-profiles-29494620-hwpvd\" (UID: \"d25ea228-c684-43a6-8fc9-50a50242c2b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-hwpvd" Jan 29 09:00:00 crc kubenswrapper[4826]: I0129 09:00:00.350320 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj7nw\" (UniqueName: \"kubernetes.io/projected/d25ea228-c684-43a6-8fc9-50a50242c2b5-kube-api-access-tj7nw\") pod \"collect-profiles-29494620-hwpvd\" (UID: \"d25ea228-c684-43a6-8fc9-50a50242c2b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-hwpvd" Jan 29 09:00:00 crc kubenswrapper[4826]: I0129 09:00:00.350375 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d25ea228-c684-43a6-8fc9-50a50242c2b5-config-volume\") pod \"collect-profiles-29494620-hwpvd\" (UID: \"d25ea228-c684-43a6-8fc9-50a50242c2b5\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-hwpvd" Jan 29 09:00:00 crc kubenswrapper[4826]: I0129 09:00:00.452288 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d25ea228-c684-43a6-8fc9-50a50242c2b5-config-volume\") pod \"collect-profiles-29494620-hwpvd\" (UID: \"d25ea228-c684-43a6-8fc9-50a50242c2b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-hwpvd" Jan 29 09:00:00 crc kubenswrapper[4826]: I0129 09:00:00.452868 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d25ea228-c684-43a6-8fc9-50a50242c2b5-secret-volume\") pod \"collect-profiles-29494620-hwpvd\" (UID: \"d25ea228-c684-43a6-8fc9-50a50242c2b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-hwpvd" Jan 29 09:00:00 crc kubenswrapper[4826]: I0129 09:00:00.453059 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tj7nw\" (UniqueName: \"kubernetes.io/projected/d25ea228-c684-43a6-8fc9-50a50242c2b5-kube-api-access-tj7nw\") pod \"collect-profiles-29494620-hwpvd\" (UID: \"d25ea228-c684-43a6-8fc9-50a50242c2b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-hwpvd" Jan 29 09:00:00 crc kubenswrapper[4826]: I0129 09:00:00.453962 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d25ea228-c684-43a6-8fc9-50a50242c2b5-config-volume\") pod \"collect-profiles-29494620-hwpvd\" (UID: \"d25ea228-c684-43a6-8fc9-50a50242c2b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-hwpvd" Jan 29 09:00:00 crc kubenswrapper[4826]: I0129 09:00:00.458910 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/d25ea228-c684-43a6-8fc9-50a50242c2b5-secret-volume\") pod \"collect-profiles-29494620-hwpvd\" (UID: \"d25ea228-c684-43a6-8fc9-50a50242c2b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-hwpvd" Jan 29 09:00:00 crc kubenswrapper[4826]: I0129 09:00:00.470752 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tj7nw\" (UniqueName: \"kubernetes.io/projected/d25ea228-c684-43a6-8fc9-50a50242c2b5-kube-api-access-tj7nw\") pod \"collect-profiles-29494620-hwpvd\" (UID: \"d25ea228-c684-43a6-8fc9-50a50242c2b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-hwpvd" Jan 29 09:00:00 crc kubenswrapper[4826]: I0129 09:00:00.491558 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-hwpvd" Jan 29 09:00:00 crc kubenswrapper[4826]: I0129 09:00:00.951938 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494620-hwpvd"] Jan 29 09:00:01 crc kubenswrapper[4826]: I0129 09:00:01.215875 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-hwpvd" event={"ID":"d25ea228-c684-43a6-8fc9-50a50242c2b5","Type":"ContainerStarted","Data":"e44ab69c4e530cd13910ec74d58d6b035e288e3f88391fa68d07729890f0e45b"} Jan 29 09:00:01 crc kubenswrapper[4826]: I0129 09:00:01.215917 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-hwpvd" event={"ID":"d25ea228-c684-43a6-8fc9-50a50242c2b5","Type":"ContainerStarted","Data":"4648f5aac5367b7f26248b8fe6cd4982d10ece17d2a81316090f7a91c5938089"} Jan 29 09:00:01 crc kubenswrapper[4826]: I0129 09:00:01.244552 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-hwpvd" 
podStartSLOduration=1.24453331 podStartE2EDuration="1.24453331s" podCreationTimestamp="2026-01-29 09:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:00:01.235558421 +0000 UTC m=+8185.097351490" watchObservedRunningTime="2026-01-29 09:00:01.24453331 +0000 UTC m=+8185.106326379" Jan 29 09:00:02 crc kubenswrapper[4826]: I0129 09:00:02.235654 4826 generic.go:334] "Generic (PLEG): container finished" podID="d25ea228-c684-43a6-8fc9-50a50242c2b5" containerID="e44ab69c4e530cd13910ec74d58d6b035e288e3f88391fa68d07729890f0e45b" exitCode=0 Jan 29 09:00:02 crc kubenswrapper[4826]: I0129 09:00:02.236047 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-hwpvd" event={"ID":"d25ea228-c684-43a6-8fc9-50a50242c2b5","Type":"ContainerDied","Data":"e44ab69c4e530cd13910ec74d58d6b035e288e3f88391fa68d07729890f0e45b"} Jan 29 09:00:03 crc kubenswrapper[4826]: I0129 09:00:03.599409 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-hwpvd" Jan 29 09:00:03 crc kubenswrapper[4826]: I0129 09:00:03.721958 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d25ea228-c684-43a6-8fc9-50a50242c2b5-secret-volume\") pod \"d25ea228-c684-43a6-8fc9-50a50242c2b5\" (UID: \"d25ea228-c684-43a6-8fc9-50a50242c2b5\") " Jan 29 09:00:03 crc kubenswrapper[4826]: I0129 09:00:03.722100 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tj7nw\" (UniqueName: \"kubernetes.io/projected/d25ea228-c684-43a6-8fc9-50a50242c2b5-kube-api-access-tj7nw\") pod \"d25ea228-c684-43a6-8fc9-50a50242c2b5\" (UID: \"d25ea228-c684-43a6-8fc9-50a50242c2b5\") " Jan 29 09:00:03 crc kubenswrapper[4826]: I0129 09:00:03.722272 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d25ea228-c684-43a6-8fc9-50a50242c2b5-config-volume\") pod \"d25ea228-c684-43a6-8fc9-50a50242c2b5\" (UID: \"d25ea228-c684-43a6-8fc9-50a50242c2b5\") " Jan 29 09:00:03 crc kubenswrapper[4826]: I0129 09:00:03.723125 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d25ea228-c684-43a6-8fc9-50a50242c2b5-config-volume" (OuterVolumeSpecName: "config-volume") pod "d25ea228-c684-43a6-8fc9-50a50242c2b5" (UID: "d25ea228-c684-43a6-8fc9-50a50242c2b5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:00:03 crc kubenswrapper[4826]: I0129 09:00:03.729655 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d25ea228-c684-43a6-8fc9-50a50242c2b5-kube-api-access-tj7nw" (OuterVolumeSpecName: "kube-api-access-tj7nw") pod "d25ea228-c684-43a6-8fc9-50a50242c2b5" (UID: "d25ea228-c684-43a6-8fc9-50a50242c2b5"). 
InnerVolumeSpecName "kube-api-access-tj7nw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:00:03 crc kubenswrapper[4826]: I0129 09:00:03.729790 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d25ea228-c684-43a6-8fc9-50a50242c2b5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d25ea228-c684-43a6-8fc9-50a50242c2b5" (UID: "d25ea228-c684-43a6-8fc9-50a50242c2b5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:00:03 crc kubenswrapper[4826]: I0129 09:00:03.824697 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tj7nw\" (UniqueName: \"kubernetes.io/projected/d25ea228-c684-43a6-8fc9-50a50242c2b5-kube-api-access-tj7nw\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:03 crc kubenswrapper[4826]: I0129 09:00:03.824727 4826 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d25ea228-c684-43a6-8fc9-50a50242c2b5-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:03 crc kubenswrapper[4826]: I0129 09:00:03.824736 4826 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d25ea228-c684-43a6-8fc9-50a50242c2b5-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:04 crc kubenswrapper[4826]: I0129 09:00:04.256452 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-hwpvd" event={"ID":"d25ea228-c684-43a6-8fc9-50a50242c2b5","Type":"ContainerDied","Data":"4648f5aac5367b7f26248b8fe6cd4982d10ece17d2a81316090f7a91c5938089"} Jan 29 09:00:04 crc kubenswrapper[4826]: I0129 09:00:04.256812 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4648f5aac5367b7f26248b8fe6cd4982d10ece17d2a81316090f7a91c5938089" Jan 29 09:00:04 crc kubenswrapper[4826]: I0129 09:00:04.256508 4826 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494620-hwpvd" Jan 29 09:00:04 crc kubenswrapper[4826]: I0129 09:00:04.314756 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494575-bdsx9"] Jan 29 09:00:04 crc kubenswrapper[4826]: I0129 09:00:04.322927 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494575-bdsx9"] Jan 29 09:00:04 crc kubenswrapper[4826]: I0129 09:00:04.824282 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bcc2de4-434f-468a-98d5-62d14b98be5e" path="/var/lib/kubelet/pods/3bcc2de4-434f-468a-98d5-62d14b98be5e/volumes" Jan 29 09:00:09 crc kubenswrapper[4826]: I0129 09:00:09.809630 4826 scope.go:117] "RemoveContainer" containerID="687be0e4fab9ea6b1c277db6848c566541ae32496c85f97fc6f9e12e832a22fd" Jan 29 09:00:09 crc kubenswrapper[4826]: E0129 09:00:09.810517 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:00:15 crc kubenswrapper[4826]: I0129 09:00:15.722636 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-45kx6"] Jan 29 09:00:15 crc kubenswrapper[4826]: E0129 09:00:15.723615 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d25ea228-c684-43a6-8fc9-50a50242c2b5" containerName="collect-profiles" Jan 29 09:00:15 crc kubenswrapper[4826]: I0129 09:00:15.723632 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="d25ea228-c684-43a6-8fc9-50a50242c2b5" 
containerName="collect-profiles" Jan 29 09:00:15 crc kubenswrapper[4826]: I0129 09:00:15.723814 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="d25ea228-c684-43a6-8fc9-50a50242c2b5" containerName="collect-profiles" Jan 29 09:00:15 crc kubenswrapper[4826]: I0129 09:00:15.725583 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-45kx6" Jan 29 09:00:15 crc kubenswrapper[4826]: I0129 09:00:15.754561 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-45kx6"] Jan 29 09:00:16 crc kubenswrapper[4826]: I0129 09:00:16.686104 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6a78af9-4a8a-46d7-a2a3-13a1ed499430-catalog-content\") pod \"certified-operators-45kx6\" (UID: \"e6a78af9-4a8a-46d7-a2a3-13a1ed499430\") " pod="openshift-marketplace/certified-operators-45kx6" Jan 29 09:00:16 crc kubenswrapper[4826]: I0129 09:00:16.686218 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f62q\" (UniqueName: \"kubernetes.io/projected/e6a78af9-4a8a-46d7-a2a3-13a1ed499430-kube-api-access-5f62q\") pod \"certified-operators-45kx6\" (UID: \"e6a78af9-4a8a-46d7-a2a3-13a1ed499430\") " pod="openshift-marketplace/certified-operators-45kx6" Jan 29 09:00:16 crc kubenswrapper[4826]: I0129 09:00:16.686255 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6a78af9-4a8a-46d7-a2a3-13a1ed499430-utilities\") pod \"certified-operators-45kx6\" (UID: \"e6a78af9-4a8a-46d7-a2a3-13a1ed499430\") " pod="openshift-marketplace/certified-operators-45kx6" Jan 29 09:00:16 crc kubenswrapper[4826]: I0129 09:00:16.788935 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6a78af9-4a8a-46d7-a2a3-13a1ed499430-catalog-content\") pod \"certified-operators-45kx6\" (UID: \"e6a78af9-4a8a-46d7-a2a3-13a1ed499430\") " pod="openshift-marketplace/certified-operators-45kx6" Jan 29 09:00:16 crc kubenswrapper[4826]: I0129 09:00:16.789462 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f62q\" (UniqueName: \"kubernetes.io/projected/e6a78af9-4a8a-46d7-a2a3-13a1ed499430-kube-api-access-5f62q\") pod \"certified-operators-45kx6\" (UID: \"e6a78af9-4a8a-46d7-a2a3-13a1ed499430\") " pod="openshift-marketplace/certified-operators-45kx6" Jan 29 09:00:16 crc kubenswrapper[4826]: I0129 09:00:16.789580 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6a78af9-4a8a-46d7-a2a3-13a1ed499430-utilities\") pod \"certified-operators-45kx6\" (UID: \"e6a78af9-4a8a-46d7-a2a3-13a1ed499430\") " pod="openshift-marketplace/certified-operators-45kx6" Jan 29 09:00:16 crc kubenswrapper[4826]: I0129 09:00:16.789595 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6a78af9-4a8a-46d7-a2a3-13a1ed499430-catalog-content\") pod \"certified-operators-45kx6\" (UID: \"e6a78af9-4a8a-46d7-a2a3-13a1ed499430\") " pod="openshift-marketplace/certified-operators-45kx6" Jan 29 09:00:16 crc kubenswrapper[4826]: I0129 09:00:16.790032 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6a78af9-4a8a-46d7-a2a3-13a1ed499430-utilities\") pod \"certified-operators-45kx6\" (UID: \"e6a78af9-4a8a-46d7-a2a3-13a1ed499430\") " pod="openshift-marketplace/certified-operators-45kx6" Jan 29 09:00:16 crc kubenswrapper[4826]: I0129 09:00:16.811169 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f62q\" (UniqueName: 
\"kubernetes.io/projected/e6a78af9-4a8a-46d7-a2a3-13a1ed499430-kube-api-access-5f62q\") pod \"certified-operators-45kx6\" (UID: \"e6a78af9-4a8a-46d7-a2a3-13a1ed499430\") " pod="openshift-marketplace/certified-operators-45kx6" Jan 29 09:00:16 crc kubenswrapper[4826]: I0129 09:00:16.955080 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-45kx6" Jan 29 09:00:17 crc kubenswrapper[4826]: I0129 09:00:17.509709 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-45kx6"] Jan 29 09:00:17 crc kubenswrapper[4826]: I0129 09:00:17.731252 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-45kx6" event={"ID":"e6a78af9-4a8a-46d7-a2a3-13a1ed499430","Type":"ContainerStarted","Data":"b05bf4e5002e25d3b0e31ece7138b7e4155225eb980ecb967416ba921361c146"} Jan 29 09:00:18 crc kubenswrapper[4826]: I0129 09:00:18.745163 4826 generic.go:334] "Generic (PLEG): container finished" podID="e6a78af9-4a8a-46d7-a2a3-13a1ed499430" containerID="4d05d639c0014970000fa22df6cd510242dd135ae1e7e7b56524b56d2a66c00a" exitCode=0 Jan 29 09:00:18 crc kubenswrapper[4826]: I0129 09:00:18.745394 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-45kx6" event={"ID":"e6a78af9-4a8a-46d7-a2a3-13a1ed499430","Type":"ContainerDied","Data":"4d05d639c0014970000fa22df6cd510242dd135ae1e7e7b56524b56d2a66c00a"} Jan 29 09:00:20 crc kubenswrapper[4826]: I0129 09:00:20.766834 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-45kx6" event={"ID":"e6a78af9-4a8a-46d7-a2a3-13a1ed499430","Type":"ContainerStarted","Data":"d8cff1a3084db7257d7a82dd5cfc3833e4f20ca8dbc89e5fbf05522dc4606e12"} Jan 29 09:00:22 crc kubenswrapper[4826]: I0129 09:00:22.791309 4826 generic.go:334] "Generic (PLEG): container finished" podID="e6a78af9-4a8a-46d7-a2a3-13a1ed499430" 
containerID="d8cff1a3084db7257d7a82dd5cfc3833e4f20ca8dbc89e5fbf05522dc4606e12" exitCode=0 Jan 29 09:00:22 crc kubenswrapper[4826]: I0129 09:00:22.792245 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-45kx6" event={"ID":"e6a78af9-4a8a-46d7-a2a3-13a1ed499430","Type":"ContainerDied","Data":"d8cff1a3084db7257d7a82dd5cfc3833e4f20ca8dbc89e5fbf05522dc4606e12"} Jan 29 09:00:23 crc kubenswrapper[4826]: I0129 09:00:23.804491 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-45kx6" event={"ID":"e6a78af9-4a8a-46d7-a2a3-13a1ed499430","Type":"ContainerStarted","Data":"b4f34a91b04c6604c6a781e370d26257b43303276763719bf3956f698fcc208d"} Jan 29 09:00:23 crc kubenswrapper[4826]: I0129 09:00:23.808581 4826 scope.go:117] "RemoveContainer" containerID="687be0e4fab9ea6b1c277db6848c566541ae32496c85f97fc6f9e12e832a22fd" Jan 29 09:00:23 crc kubenswrapper[4826]: E0129 09:00:23.808842 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:00:23 crc kubenswrapper[4826]: I0129 09:00:23.835207 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-45kx6" podStartSLOduration=4.3866336409999995 podStartE2EDuration="8.835188469s" podCreationTimestamp="2026-01-29 09:00:15 +0000 UTC" firstStartedPulling="2026-01-29 09:00:18.749074198 +0000 UTC m=+8202.610867257" lastFinishedPulling="2026-01-29 09:00:23.197629016 +0000 UTC m=+8207.059422085" observedRunningTime="2026-01-29 09:00:23.825180262 +0000 UTC m=+8207.686973341" 
watchObservedRunningTime="2026-01-29 09:00:23.835188469 +0000 UTC m=+8207.696981538" Jan 29 09:00:25 crc kubenswrapper[4826]: I0129 09:00:25.274452 4826 scope.go:117] "RemoveContainer" containerID="41ec008557db7f5057b49861b2c0ae4bd5b559f8c4cb02b9903d4d163b4b03bf" Jan 29 09:00:26 crc kubenswrapper[4826]: I0129 09:00:26.955988 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-45kx6" Jan 29 09:00:26 crc kubenswrapper[4826]: I0129 09:00:26.956392 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-45kx6" Jan 29 09:00:27 crc kubenswrapper[4826]: I0129 09:00:27.009134 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-45kx6" Jan 29 09:00:37 crc kubenswrapper[4826]: I0129 09:00:37.003103 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-45kx6" Jan 29 09:00:37 crc kubenswrapper[4826]: I0129 09:00:37.057844 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-45kx6"] Jan 29 09:00:37 crc kubenswrapper[4826]: I0129 09:00:37.934676 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-45kx6" podUID="e6a78af9-4a8a-46d7-a2a3-13a1ed499430" containerName="registry-server" containerID="cri-o://b4f34a91b04c6604c6a781e370d26257b43303276763719bf3956f698fcc208d" gracePeriod=2 Jan 29 09:00:38 crc kubenswrapper[4826]: I0129 09:00:38.405083 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-45kx6" Jan 29 09:00:38 crc kubenswrapper[4826]: I0129 09:00:38.552056 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6a78af9-4a8a-46d7-a2a3-13a1ed499430-utilities\") pod \"e6a78af9-4a8a-46d7-a2a3-13a1ed499430\" (UID: \"e6a78af9-4a8a-46d7-a2a3-13a1ed499430\") " Jan 29 09:00:38 crc kubenswrapper[4826]: I0129 09:00:38.552211 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6a78af9-4a8a-46d7-a2a3-13a1ed499430-catalog-content\") pod \"e6a78af9-4a8a-46d7-a2a3-13a1ed499430\" (UID: \"e6a78af9-4a8a-46d7-a2a3-13a1ed499430\") " Jan 29 09:00:38 crc kubenswrapper[4826]: I0129 09:00:38.552334 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5f62q\" (UniqueName: \"kubernetes.io/projected/e6a78af9-4a8a-46d7-a2a3-13a1ed499430-kube-api-access-5f62q\") pod \"e6a78af9-4a8a-46d7-a2a3-13a1ed499430\" (UID: \"e6a78af9-4a8a-46d7-a2a3-13a1ed499430\") " Jan 29 09:00:38 crc kubenswrapper[4826]: I0129 09:00:38.552810 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6a78af9-4a8a-46d7-a2a3-13a1ed499430-utilities" (OuterVolumeSpecName: "utilities") pod "e6a78af9-4a8a-46d7-a2a3-13a1ed499430" (UID: "e6a78af9-4a8a-46d7-a2a3-13a1ed499430"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:00:38 crc kubenswrapper[4826]: I0129 09:00:38.552966 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6a78af9-4a8a-46d7-a2a3-13a1ed499430-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:38 crc kubenswrapper[4826]: I0129 09:00:38.557984 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6a78af9-4a8a-46d7-a2a3-13a1ed499430-kube-api-access-5f62q" (OuterVolumeSpecName: "kube-api-access-5f62q") pod "e6a78af9-4a8a-46d7-a2a3-13a1ed499430" (UID: "e6a78af9-4a8a-46d7-a2a3-13a1ed499430"). InnerVolumeSpecName "kube-api-access-5f62q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:00:38 crc kubenswrapper[4826]: I0129 09:00:38.599712 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6a78af9-4a8a-46d7-a2a3-13a1ed499430-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6a78af9-4a8a-46d7-a2a3-13a1ed499430" (UID: "e6a78af9-4a8a-46d7-a2a3-13a1ed499430"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:00:38 crc kubenswrapper[4826]: I0129 09:00:38.655242 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6a78af9-4a8a-46d7-a2a3-13a1ed499430-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:38 crc kubenswrapper[4826]: I0129 09:00:38.655284 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5f62q\" (UniqueName: \"kubernetes.io/projected/e6a78af9-4a8a-46d7-a2a3-13a1ed499430-kube-api-access-5f62q\") on node \"crc\" DevicePath \"\"" Jan 29 09:00:38 crc kubenswrapper[4826]: I0129 09:00:38.808662 4826 scope.go:117] "RemoveContainer" containerID="687be0e4fab9ea6b1c277db6848c566541ae32496c85f97fc6f9e12e832a22fd" Jan 29 09:00:38 crc kubenswrapper[4826]: E0129 09:00:38.809008 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:00:38 crc kubenswrapper[4826]: I0129 09:00:38.947914 4826 generic.go:334] "Generic (PLEG): container finished" podID="e6a78af9-4a8a-46d7-a2a3-13a1ed499430" containerID="b4f34a91b04c6604c6a781e370d26257b43303276763719bf3956f698fcc208d" exitCode=0 Jan 29 09:00:38 crc kubenswrapper[4826]: I0129 09:00:38.947953 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-45kx6" event={"ID":"e6a78af9-4a8a-46d7-a2a3-13a1ed499430","Type":"ContainerDied","Data":"b4f34a91b04c6604c6a781e370d26257b43303276763719bf3956f698fcc208d"} Jan 29 09:00:38 crc kubenswrapper[4826]: I0129 09:00:38.947978 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-45kx6" event={"ID":"e6a78af9-4a8a-46d7-a2a3-13a1ed499430","Type":"ContainerDied","Data":"b05bf4e5002e25d3b0e31ece7138b7e4155225eb980ecb967416ba921361c146"} Jan 29 09:00:38 crc kubenswrapper[4826]: I0129 09:00:38.947977 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-45kx6" Jan 29 09:00:38 crc kubenswrapper[4826]: I0129 09:00:38.947995 4826 scope.go:117] "RemoveContainer" containerID="b4f34a91b04c6604c6a781e370d26257b43303276763719bf3956f698fcc208d" Jan 29 09:00:38 crc kubenswrapper[4826]: I0129 09:00:38.978222 4826 scope.go:117] "RemoveContainer" containerID="d8cff1a3084db7257d7a82dd5cfc3833e4f20ca8dbc89e5fbf05522dc4606e12" Jan 29 09:00:38 crc kubenswrapper[4826]: I0129 09:00:38.979162 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-45kx6"] Jan 29 09:00:38 crc kubenswrapper[4826]: I0129 09:00:38.990927 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-45kx6"] Jan 29 09:00:39 crc kubenswrapper[4826]: I0129 09:00:39.015911 4826 scope.go:117] "RemoveContainer" containerID="4d05d639c0014970000fa22df6cd510242dd135ae1e7e7b56524b56d2a66c00a" Jan 29 09:00:39 crc kubenswrapper[4826]: I0129 09:00:39.048081 4826 scope.go:117] "RemoveContainer" containerID="b4f34a91b04c6604c6a781e370d26257b43303276763719bf3956f698fcc208d" Jan 29 09:00:39 crc kubenswrapper[4826]: E0129 09:00:39.048554 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4f34a91b04c6604c6a781e370d26257b43303276763719bf3956f698fcc208d\": container with ID starting with b4f34a91b04c6604c6a781e370d26257b43303276763719bf3956f698fcc208d not found: ID does not exist" containerID="b4f34a91b04c6604c6a781e370d26257b43303276763719bf3956f698fcc208d" Jan 29 09:00:39 crc kubenswrapper[4826]: I0129 09:00:39.048606 4826 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4f34a91b04c6604c6a781e370d26257b43303276763719bf3956f698fcc208d"} err="failed to get container status \"b4f34a91b04c6604c6a781e370d26257b43303276763719bf3956f698fcc208d\": rpc error: code = NotFound desc = could not find container \"b4f34a91b04c6604c6a781e370d26257b43303276763719bf3956f698fcc208d\": container with ID starting with b4f34a91b04c6604c6a781e370d26257b43303276763719bf3956f698fcc208d not found: ID does not exist" Jan 29 09:00:39 crc kubenswrapper[4826]: I0129 09:00:39.048637 4826 scope.go:117] "RemoveContainer" containerID="d8cff1a3084db7257d7a82dd5cfc3833e4f20ca8dbc89e5fbf05522dc4606e12" Jan 29 09:00:39 crc kubenswrapper[4826]: E0129 09:00:39.048938 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8cff1a3084db7257d7a82dd5cfc3833e4f20ca8dbc89e5fbf05522dc4606e12\": container with ID starting with d8cff1a3084db7257d7a82dd5cfc3833e4f20ca8dbc89e5fbf05522dc4606e12 not found: ID does not exist" containerID="d8cff1a3084db7257d7a82dd5cfc3833e4f20ca8dbc89e5fbf05522dc4606e12" Jan 29 09:00:39 crc kubenswrapper[4826]: I0129 09:00:39.048965 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8cff1a3084db7257d7a82dd5cfc3833e4f20ca8dbc89e5fbf05522dc4606e12"} err="failed to get container status \"d8cff1a3084db7257d7a82dd5cfc3833e4f20ca8dbc89e5fbf05522dc4606e12\": rpc error: code = NotFound desc = could not find container \"d8cff1a3084db7257d7a82dd5cfc3833e4f20ca8dbc89e5fbf05522dc4606e12\": container with ID starting with d8cff1a3084db7257d7a82dd5cfc3833e4f20ca8dbc89e5fbf05522dc4606e12 not found: ID does not exist" Jan 29 09:00:39 crc kubenswrapper[4826]: I0129 09:00:39.048985 4826 scope.go:117] "RemoveContainer" containerID="4d05d639c0014970000fa22df6cd510242dd135ae1e7e7b56524b56d2a66c00a" Jan 29 09:00:39 crc kubenswrapper[4826]: E0129 
09:00:39.049216 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d05d639c0014970000fa22df6cd510242dd135ae1e7e7b56524b56d2a66c00a\": container with ID starting with 4d05d639c0014970000fa22df6cd510242dd135ae1e7e7b56524b56d2a66c00a not found: ID does not exist" containerID="4d05d639c0014970000fa22df6cd510242dd135ae1e7e7b56524b56d2a66c00a" Jan 29 09:00:39 crc kubenswrapper[4826]: I0129 09:00:39.049233 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d05d639c0014970000fa22df6cd510242dd135ae1e7e7b56524b56d2a66c00a"} err="failed to get container status \"4d05d639c0014970000fa22df6cd510242dd135ae1e7e7b56524b56d2a66c00a\": rpc error: code = NotFound desc = could not find container \"4d05d639c0014970000fa22df6cd510242dd135ae1e7e7b56524b56d2a66c00a\": container with ID starting with 4d05d639c0014970000fa22df6cd510242dd135ae1e7e7b56524b56d2a66c00a not found: ID does not exist" Jan 29 09:00:40 crc kubenswrapper[4826]: I0129 09:00:40.821507 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6a78af9-4a8a-46d7-a2a3-13a1ed499430" path="/var/lib/kubelet/pods/e6a78af9-4a8a-46d7-a2a3-13a1ed499430/volumes" Jan 29 09:00:53 crc kubenswrapper[4826]: I0129 09:00:53.809209 4826 scope.go:117] "RemoveContainer" containerID="687be0e4fab9ea6b1c277db6848c566541ae32496c85f97fc6f9e12e832a22fd" Jan 29 09:00:53 crc kubenswrapper[4826]: E0129 09:00:53.810007 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:01:00 crc kubenswrapper[4826]: I0129 09:01:00.160518 
4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29494621-wq488"] Jan 29 09:01:00 crc kubenswrapper[4826]: E0129 09:01:00.161681 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6a78af9-4a8a-46d7-a2a3-13a1ed499430" containerName="extract-utilities" Jan 29 09:01:00 crc kubenswrapper[4826]: I0129 09:01:00.161709 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6a78af9-4a8a-46d7-a2a3-13a1ed499430" containerName="extract-utilities" Jan 29 09:01:00 crc kubenswrapper[4826]: E0129 09:01:00.161736 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6a78af9-4a8a-46d7-a2a3-13a1ed499430" containerName="registry-server" Jan 29 09:01:00 crc kubenswrapper[4826]: I0129 09:01:00.161748 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6a78af9-4a8a-46d7-a2a3-13a1ed499430" containerName="registry-server" Jan 29 09:01:00 crc kubenswrapper[4826]: E0129 09:01:00.161780 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6a78af9-4a8a-46d7-a2a3-13a1ed499430" containerName="extract-content" Jan 29 09:01:00 crc kubenswrapper[4826]: I0129 09:01:00.161792 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6a78af9-4a8a-46d7-a2a3-13a1ed499430" containerName="extract-content" Jan 29 09:01:00 crc kubenswrapper[4826]: I0129 09:01:00.162137 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6a78af9-4a8a-46d7-a2a3-13a1ed499430" containerName="registry-server" Jan 29 09:01:00 crc kubenswrapper[4826]: I0129 09:01:00.163349 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29494621-wq488" Jan 29 09:01:00 crc kubenswrapper[4826]: I0129 09:01:00.177787 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29494621-wq488"] Jan 29 09:01:00 crc kubenswrapper[4826]: I0129 09:01:00.300975 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05abc85a-7397-460f-a033-6b21ecaf2ddb-config-data\") pod \"keystone-cron-29494621-wq488\" (UID: \"05abc85a-7397-460f-a033-6b21ecaf2ddb\") " pod="openstack/keystone-cron-29494621-wq488" Jan 29 09:01:00 crc kubenswrapper[4826]: I0129 09:01:00.301372 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zth2t\" (UniqueName: \"kubernetes.io/projected/05abc85a-7397-460f-a033-6b21ecaf2ddb-kube-api-access-zth2t\") pod \"keystone-cron-29494621-wq488\" (UID: \"05abc85a-7397-460f-a033-6b21ecaf2ddb\") " pod="openstack/keystone-cron-29494621-wq488" Jan 29 09:01:00 crc kubenswrapper[4826]: I0129 09:01:00.301589 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05abc85a-7397-460f-a033-6b21ecaf2ddb-combined-ca-bundle\") pod \"keystone-cron-29494621-wq488\" (UID: \"05abc85a-7397-460f-a033-6b21ecaf2ddb\") " pod="openstack/keystone-cron-29494621-wq488" Jan 29 09:01:00 crc kubenswrapper[4826]: I0129 09:01:00.301667 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/05abc85a-7397-460f-a033-6b21ecaf2ddb-fernet-keys\") pod \"keystone-cron-29494621-wq488\" (UID: \"05abc85a-7397-460f-a033-6b21ecaf2ddb\") " pod="openstack/keystone-cron-29494621-wq488" Jan 29 09:01:00 crc kubenswrapper[4826]: I0129 09:01:00.403624 4826 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-zth2t\" (UniqueName: \"kubernetes.io/projected/05abc85a-7397-460f-a033-6b21ecaf2ddb-kube-api-access-zth2t\") pod \"keystone-cron-29494621-wq488\" (UID: \"05abc85a-7397-460f-a033-6b21ecaf2ddb\") " pod="openstack/keystone-cron-29494621-wq488" Jan 29 09:01:00 crc kubenswrapper[4826]: I0129 09:01:00.403947 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05abc85a-7397-460f-a033-6b21ecaf2ddb-combined-ca-bundle\") pod \"keystone-cron-29494621-wq488\" (UID: \"05abc85a-7397-460f-a033-6b21ecaf2ddb\") " pod="openstack/keystone-cron-29494621-wq488" Jan 29 09:01:00 crc kubenswrapper[4826]: I0129 09:01:00.403976 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/05abc85a-7397-460f-a033-6b21ecaf2ddb-fernet-keys\") pod \"keystone-cron-29494621-wq488\" (UID: \"05abc85a-7397-460f-a033-6b21ecaf2ddb\") " pod="openstack/keystone-cron-29494621-wq488" Jan 29 09:01:00 crc kubenswrapper[4826]: I0129 09:01:00.404045 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05abc85a-7397-460f-a033-6b21ecaf2ddb-config-data\") pod \"keystone-cron-29494621-wq488\" (UID: \"05abc85a-7397-460f-a033-6b21ecaf2ddb\") " pod="openstack/keystone-cron-29494621-wq488" Jan 29 09:01:00 crc kubenswrapper[4826]: I0129 09:01:00.410796 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05abc85a-7397-460f-a033-6b21ecaf2ddb-config-data\") pod \"keystone-cron-29494621-wq488\" (UID: \"05abc85a-7397-460f-a033-6b21ecaf2ddb\") " pod="openstack/keystone-cron-29494621-wq488" Jan 29 09:01:00 crc kubenswrapper[4826]: I0129 09:01:00.411736 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/05abc85a-7397-460f-a033-6b21ecaf2ddb-fernet-keys\") pod \"keystone-cron-29494621-wq488\" (UID: \"05abc85a-7397-460f-a033-6b21ecaf2ddb\") " pod="openstack/keystone-cron-29494621-wq488" Jan 29 09:01:00 crc kubenswrapper[4826]: I0129 09:01:00.411762 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05abc85a-7397-460f-a033-6b21ecaf2ddb-combined-ca-bundle\") pod \"keystone-cron-29494621-wq488\" (UID: \"05abc85a-7397-460f-a033-6b21ecaf2ddb\") " pod="openstack/keystone-cron-29494621-wq488" Jan 29 09:01:00 crc kubenswrapper[4826]: I0129 09:01:00.424477 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zth2t\" (UniqueName: \"kubernetes.io/projected/05abc85a-7397-460f-a033-6b21ecaf2ddb-kube-api-access-zth2t\") pod \"keystone-cron-29494621-wq488\" (UID: \"05abc85a-7397-460f-a033-6b21ecaf2ddb\") " pod="openstack/keystone-cron-29494621-wq488" Jan 29 09:01:00 crc kubenswrapper[4826]: I0129 09:01:00.488170 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29494621-wq488" Jan 29 09:01:00 crc kubenswrapper[4826]: I0129 09:01:00.946287 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29494621-wq488"] Jan 29 09:01:01 crc kubenswrapper[4826]: I0129 09:01:01.167998 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29494621-wq488" event={"ID":"05abc85a-7397-460f-a033-6b21ecaf2ddb","Type":"ContainerStarted","Data":"4333df8ba982544af6e43666abde4b24506a66a49fadad15920b7fa5f14015c6"} Jan 29 09:01:01 crc kubenswrapper[4826]: I0129 09:01:01.168444 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29494621-wq488" event={"ID":"05abc85a-7397-460f-a033-6b21ecaf2ddb","Type":"ContainerStarted","Data":"fc30f20a050196a929dad229c09ee525884348dd09c6aef0aa3579a5d25d51fe"} Jan 29 09:01:01 crc kubenswrapper[4826]: I0129 09:01:01.192802 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29494621-wq488" podStartSLOduration=1.192773686 podStartE2EDuration="1.192773686s" podCreationTimestamp="2026-01-29 09:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:01:01.185514513 +0000 UTC m=+8245.047307582" watchObservedRunningTime="2026-01-29 09:01:01.192773686 +0000 UTC m=+8245.054566755" Jan 29 09:01:02 crc kubenswrapper[4826]: I0129 09:01:02.179478 4826 generic.go:334] "Generic (PLEG): container finished" podID="4a787b34-0474-44bf-9785-416b19422277" containerID="09ae7b8db46068f89f12f64325bd9e7b599a8b8470c4efd3d91dbbcbce17bb83" exitCode=0 Jan 29 09:01:02 crc kubenswrapper[4826]: I0129 09:01:02.179527 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-d5rbv" 
event={"ID":"4a787b34-0474-44bf-9785-416b19422277","Type":"ContainerDied","Data":"09ae7b8db46068f89f12f64325bd9e7b599a8b8470c4efd3d91dbbcbce17bb83"} Jan 29 09:01:03 crc kubenswrapper[4826]: I0129 09:01:03.602597 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-d5rbv" Jan 29 09:01:03 crc kubenswrapper[4826]: I0129 09:01:03.772650 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a787b34-0474-44bf-9785-416b19422277-inventory\") pod \"4a787b34-0474-44bf-9785-416b19422277\" (UID: \"4a787b34-0474-44bf-9785-416b19422277\") " Jan 29 09:01:03 crc kubenswrapper[4826]: I0129 09:01:03.772984 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4a787b34-0474-44bf-9785-416b19422277-neutron-dhcp-agent-neutron-config-0\") pod \"4a787b34-0474-44bf-9785-416b19422277\" (UID: \"4a787b34-0474-44bf-9785-416b19422277\") " Jan 29 09:01:03 crc kubenswrapper[4826]: I0129 09:01:03.773041 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbxjb\" (UniqueName: \"kubernetes.io/projected/4a787b34-0474-44bf-9785-416b19422277-kube-api-access-dbxjb\") pod \"4a787b34-0474-44bf-9785-416b19422277\" (UID: \"4a787b34-0474-44bf-9785-416b19422277\") " Jan 29 09:01:03 crc kubenswrapper[4826]: I0129 09:01:03.773071 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a787b34-0474-44bf-9785-416b19422277-neutron-dhcp-combined-ca-bundle\") pod \"4a787b34-0474-44bf-9785-416b19422277\" (UID: \"4a787b34-0474-44bf-9785-416b19422277\") " Jan 29 09:01:03 crc kubenswrapper[4826]: I0129 09:01:03.773138 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4a787b34-0474-44bf-9785-416b19422277-ssh-key-openstack-cell1\") pod \"4a787b34-0474-44bf-9785-416b19422277\" (UID: \"4a787b34-0474-44bf-9785-416b19422277\") " Jan 29 09:01:03 crc kubenswrapper[4826]: I0129 09:01:03.779653 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a787b34-0474-44bf-9785-416b19422277-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "4a787b34-0474-44bf-9785-416b19422277" (UID: "4a787b34-0474-44bf-9785-416b19422277"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:03 crc kubenswrapper[4826]: I0129 09:01:03.789595 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a787b34-0474-44bf-9785-416b19422277-kube-api-access-dbxjb" (OuterVolumeSpecName: "kube-api-access-dbxjb") pod "4a787b34-0474-44bf-9785-416b19422277" (UID: "4a787b34-0474-44bf-9785-416b19422277"). InnerVolumeSpecName "kube-api-access-dbxjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:01:03 crc kubenswrapper[4826]: I0129 09:01:03.801476 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a787b34-0474-44bf-9785-416b19422277-neutron-dhcp-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-dhcp-agent-neutron-config-0") pod "4a787b34-0474-44bf-9785-416b19422277" (UID: "4a787b34-0474-44bf-9785-416b19422277"). InnerVolumeSpecName "neutron-dhcp-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:03 crc kubenswrapper[4826]: I0129 09:01:03.802907 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a787b34-0474-44bf-9785-416b19422277-inventory" (OuterVolumeSpecName: "inventory") pod "4a787b34-0474-44bf-9785-416b19422277" (UID: "4a787b34-0474-44bf-9785-416b19422277"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:03 crc kubenswrapper[4826]: I0129 09:01:03.817208 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a787b34-0474-44bf-9785-416b19422277-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "4a787b34-0474-44bf-9785-416b19422277" (UID: "4a787b34-0474-44bf-9785-416b19422277"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:03 crc kubenswrapper[4826]: I0129 09:01:03.879173 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a787b34-0474-44bf-9785-416b19422277-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:03 crc kubenswrapper[4826]: I0129 09:01:03.879211 4826 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4a787b34-0474-44bf-9785-416b19422277-neutron-dhcp-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:03 crc kubenswrapper[4826]: I0129 09:01:03.879227 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbxjb\" (UniqueName: \"kubernetes.io/projected/4a787b34-0474-44bf-9785-416b19422277-kube-api-access-dbxjb\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:03 crc kubenswrapper[4826]: I0129 09:01:03.879241 4826 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4a787b34-0474-44bf-9785-416b19422277-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:03 crc kubenswrapper[4826]: I0129 09:01:03.879250 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4a787b34-0474-44bf-9785-416b19422277-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:04 crc kubenswrapper[4826]: I0129 09:01:04.202408 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-d5rbv" event={"ID":"4a787b34-0474-44bf-9785-416b19422277","Type":"ContainerDied","Data":"e31bf97e17cf31ca69b053c31ad82d9fd038b82db586640297f8f885b53f0d0d"} Jan 29 09:01:04 crc kubenswrapper[4826]: I0129 09:01:04.202476 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e31bf97e17cf31ca69b053c31ad82d9fd038b82db586640297f8f885b53f0d0d" Jan 29 09:01:04 crc kubenswrapper[4826]: I0129 09:01:04.202418 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-d5rbv" Jan 29 09:01:04 crc kubenswrapper[4826]: I0129 09:01:04.204154 4826 generic.go:334] "Generic (PLEG): container finished" podID="05abc85a-7397-460f-a033-6b21ecaf2ddb" containerID="4333df8ba982544af6e43666abde4b24506a66a49fadad15920b7fa5f14015c6" exitCode=0 Jan 29 09:01:04 crc kubenswrapper[4826]: I0129 09:01:04.204203 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29494621-wq488" event={"ID":"05abc85a-7397-460f-a033-6b21ecaf2ddb","Type":"ContainerDied","Data":"4333df8ba982544af6e43666abde4b24506a66a49fadad15920b7fa5f14015c6"} Jan 29 09:01:05 crc kubenswrapper[4826]: I0129 09:01:05.552032 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29494621-wq488" Jan 29 09:01:05 crc kubenswrapper[4826]: I0129 09:01:05.714653 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05abc85a-7397-460f-a033-6b21ecaf2ddb-config-data\") pod \"05abc85a-7397-460f-a033-6b21ecaf2ddb\" (UID: \"05abc85a-7397-460f-a033-6b21ecaf2ddb\") " Jan 29 09:01:05 crc kubenswrapper[4826]: I0129 09:01:05.715003 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth2t\" (UniqueName: \"kubernetes.io/projected/05abc85a-7397-460f-a033-6b21ecaf2ddb-kube-api-access-zth2t\") pod \"05abc85a-7397-460f-a033-6b21ecaf2ddb\" (UID: \"05abc85a-7397-460f-a033-6b21ecaf2ddb\") " Jan 29 09:01:05 crc kubenswrapper[4826]: I0129 09:01:05.715193 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05abc85a-7397-460f-a033-6b21ecaf2ddb-combined-ca-bundle\") pod \"05abc85a-7397-460f-a033-6b21ecaf2ddb\" (UID: \"05abc85a-7397-460f-a033-6b21ecaf2ddb\") " Jan 29 09:01:05 crc kubenswrapper[4826]: I0129 09:01:05.715236 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/05abc85a-7397-460f-a033-6b21ecaf2ddb-fernet-keys\") pod \"05abc85a-7397-460f-a033-6b21ecaf2ddb\" (UID: \"05abc85a-7397-460f-a033-6b21ecaf2ddb\") " Jan 29 09:01:05 crc kubenswrapper[4826]: I0129 09:01:05.720559 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05abc85a-7397-460f-a033-6b21ecaf2ddb-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "05abc85a-7397-460f-a033-6b21ecaf2ddb" (UID: "05abc85a-7397-460f-a033-6b21ecaf2ddb"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:05 crc kubenswrapper[4826]: I0129 09:01:05.720670 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05abc85a-7397-460f-a033-6b21ecaf2ddb-kube-api-access-zth2t" (OuterVolumeSpecName: "kube-api-access-zth2t") pod "05abc85a-7397-460f-a033-6b21ecaf2ddb" (UID: "05abc85a-7397-460f-a033-6b21ecaf2ddb"). InnerVolumeSpecName "kube-api-access-zth2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:01:05 crc kubenswrapper[4826]: I0129 09:01:05.745274 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05abc85a-7397-460f-a033-6b21ecaf2ddb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05abc85a-7397-460f-a033-6b21ecaf2ddb" (UID: "05abc85a-7397-460f-a033-6b21ecaf2ddb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:05 crc kubenswrapper[4826]: I0129 09:01:05.770901 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05abc85a-7397-460f-a033-6b21ecaf2ddb-config-data" (OuterVolumeSpecName: "config-data") pod "05abc85a-7397-460f-a033-6b21ecaf2ddb" (UID: "05abc85a-7397-460f-a033-6b21ecaf2ddb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:05 crc kubenswrapper[4826]: I0129 09:01:05.817917 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zth2t\" (UniqueName: \"kubernetes.io/projected/05abc85a-7397-460f-a033-6b21ecaf2ddb-kube-api-access-zth2t\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:05 crc kubenswrapper[4826]: I0129 09:01:05.817956 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05abc85a-7397-460f-a033-6b21ecaf2ddb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:05 crc kubenswrapper[4826]: I0129 09:01:05.817969 4826 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/05abc85a-7397-460f-a033-6b21ecaf2ddb-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:05 crc kubenswrapper[4826]: I0129 09:01:05.817981 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05abc85a-7397-460f-a033-6b21ecaf2ddb-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:06 crc kubenswrapper[4826]: I0129 09:01:06.225743 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29494621-wq488" event={"ID":"05abc85a-7397-460f-a033-6b21ecaf2ddb","Type":"ContainerDied","Data":"fc30f20a050196a929dad229c09ee525884348dd09c6aef0aa3579a5d25d51fe"} Jan 29 09:01:06 crc kubenswrapper[4826]: I0129 09:01:06.225795 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29494621-wq488" Jan 29 09:01:06 crc kubenswrapper[4826]: I0129 09:01:06.225804 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc30f20a050196a929dad229c09ee525884348dd09c6aef0aa3579a5d25d51fe" Jan 29 09:01:06 crc kubenswrapper[4826]: I0129 09:01:06.816464 4826 scope.go:117] "RemoveContainer" containerID="687be0e4fab9ea6b1c277db6848c566541ae32496c85f97fc6f9e12e832a22fd" Jan 29 09:01:06 crc kubenswrapper[4826]: E0129 09:01:06.817029 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:01:14 crc kubenswrapper[4826]: I0129 09:01:14.061768 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 09:01:14 crc kubenswrapper[4826]: I0129 09:01:14.062263 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="45fab69a-7c08-45a2-99f4-1686d2e89f2c" containerName="nova-cell0-conductor-conductor" containerID="cri-o://8825ea0c41626b3259b0e36656ac85d2e4904b8fb977b0c21d16ad1eb32b7df0" gracePeriod=30 Jan 29 09:01:14 crc kubenswrapper[4826]: I0129 09:01:14.092139 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 09:01:14 crc kubenswrapper[4826]: I0129 09:01:14.092512 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="133f3eaf-ea6b-4214-b61d-ba8573bc20f8" containerName="nova-cell1-conductor-conductor" 
containerID="cri-o://4683903cfe2d99ef8f3df76bab53bd4cc76d6cb88af95f2a0d466f0b12211fd2" gracePeriod=30 Jan 29 09:01:15 crc kubenswrapper[4826]: E0129 09:01:15.291930 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8825ea0c41626b3259b0e36656ac85d2e4904b8fb977b0c21d16ad1eb32b7df0" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 09:01:15 crc kubenswrapper[4826]: E0129 09:01:15.294838 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8825ea0c41626b3259b0e36656ac85d2e4904b8fb977b0c21d16ad1eb32b7df0" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 09:01:15 crc kubenswrapper[4826]: E0129 09:01:15.296249 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8825ea0c41626b3259b0e36656ac85d2e4904b8fb977b0c21d16ad1eb32b7df0" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 09:01:15 crc kubenswrapper[4826]: E0129 09:01:15.296288 4826 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="45fab69a-7c08-45a2-99f4-1686d2e89f2c" containerName="nova-cell0-conductor-conductor" Jan 29 09:01:15 crc kubenswrapper[4826]: I0129 09:01:15.314756 4826 generic.go:334] "Generic (PLEG): container finished" podID="133f3eaf-ea6b-4214-b61d-ba8573bc20f8" containerID="4683903cfe2d99ef8f3df76bab53bd4cc76d6cb88af95f2a0d466f0b12211fd2" exitCode=0 Jan 29 09:01:15 crc kubenswrapper[4826]: I0129 09:01:15.314820 4826 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"133f3eaf-ea6b-4214-b61d-ba8573bc20f8","Type":"ContainerDied","Data":"4683903cfe2d99ef8f3df76bab53bd4cc76d6cb88af95f2a0d466f0b12211fd2"} Jan 29 09:01:15 crc kubenswrapper[4826]: I0129 09:01:15.460612 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 09:01:15 crc kubenswrapper[4826]: I0129 09:01:15.495045 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gm4bz\" (UniqueName: \"kubernetes.io/projected/133f3eaf-ea6b-4214-b61d-ba8573bc20f8-kube-api-access-gm4bz\") pod \"133f3eaf-ea6b-4214-b61d-ba8573bc20f8\" (UID: \"133f3eaf-ea6b-4214-b61d-ba8573bc20f8\") " Jan 29 09:01:15 crc kubenswrapper[4826]: I0129 09:01:15.495528 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/133f3eaf-ea6b-4214-b61d-ba8573bc20f8-config-data\") pod \"133f3eaf-ea6b-4214-b61d-ba8573bc20f8\" (UID: \"133f3eaf-ea6b-4214-b61d-ba8573bc20f8\") " Jan 29 09:01:15 crc kubenswrapper[4826]: I0129 09:01:15.495815 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/133f3eaf-ea6b-4214-b61d-ba8573bc20f8-combined-ca-bundle\") pod \"133f3eaf-ea6b-4214-b61d-ba8573bc20f8\" (UID: \"133f3eaf-ea6b-4214-b61d-ba8573bc20f8\") " Jan 29 09:01:15 crc kubenswrapper[4826]: I0129 09:01:15.504524 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/133f3eaf-ea6b-4214-b61d-ba8573bc20f8-kube-api-access-gm4bz" (OuterVolumeSpecName: "kube-api-access-gm4bz") pod "133f3eaf-ea6b-4214-b61d-ba8573bc20f8" (UID: "133f3eaf-ea6b-4214-b61d-ba8573bc20f8"). InnerVolumeSpecName "kube-api-access-gm4bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:01:15 crc kubenswrapper[4826]: I0129 09:01:15.525516 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/133f3eaf-ea6b-4214-b61d-ba8573bc20f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "133f3eaf-ea6b-4214-b61d-ba8573bc20f8" (UID: "133f3eaf-ea6b-4214-b61d-ba8573bc20f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:15 crc kubenswrapper[4826]: I0129 09:01:15.534848 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/133f3eaf-ea6b-4214-b61d-ba8573bc20f8-config-data" (OuterVolumeSpecName: "config-data") pod "133f3eaf-ea6b-4214-b61d-ba8573bc20f8" (UID: "133f3eaf-ea6b-4214-b61d-ba8573bc20f8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:15 crc kubenswrapper[4826]: I0129 09:01:15.600816 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/133f3eaf-ea6b-4214-b61d-ba8573bc20f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:15 crc kubenswrapper[4826]: I0129 09:01:15.600857 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gm4bz\" (UniqueName: \"kubernetes.io/projected/133f3eaf-ea6b-4214-b61d-ba8573bc20f8-kube-api-access-gm4bz\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:15 crc kubenswrapper[4826]: I0129 09:01:15.600874 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/133f3eaf-ea6b-4214-b61d-ba8573bc20f8-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:15 crc kubenswrapper[4826]: I0129 09:01:15.633796 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:01:15 crc kubenswrapper[4826]: I0129 09:01:15.634139 4826 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/nova-api-0" podUID="536b9f40-3e07-4874-857f-d71b27b1bdc7" containerName="nova-api-log" containerID="cri-o://930969aabf8466e4ab40d9f8ad4c752e7cb54ef66faefc03f028223f2c2f944b" gracePeriod=30 Jan 29 09:01:15 crc kubenswrapper[4826]: I0129 09:01:15.639587 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="536b9f40-3e07-4874-857f-d71b27b1bdc7" containerName="nova-api-api" containerID="cri-o://2479a3585910eba55c2897c3b2d13822bc63d58b9f9f577654d2b606f8026b58" gracePeriod=30 Jan 29 09:01:15 crc kubenswrapper[4826]: I0129 09:01:15.673251 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:01:15 crc kubenswrapper[4826]: I0129 09:01:15.673532 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0" containerName="nova-scheduler-scheduler" containerID="cri-o://ffd0f14e230f7f6f4c3e35ce38df8401cb40abe3d356faf983b32535a3b4553f" gracePeriod=30 Jan 29 09:01:15 crc kubenswrapper[4826]: I0129 09:01:15.685870 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:01:15 crc kubenswrapper[4826]: I0129 09:01:15.686129 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1ef24356-51c0-45f8-b98e-6c694ae2f61b" containerName="nova-metadata-log" containerID="cri-o://e89a37b46a685e1c9a20a1def84b5602d8d70b2bb92c94f611e6e2ed977df9ab" gracePeriod=30 Jan 29 09:01:15 crc kubenswrapper[4826]: I0129 09:01:15.686217 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1ef24356-51c0-45f8-b98e-6c694ae2f61b" containerName="nova-metadata-metadata" containerID="cri-o://eb955492493b97ecc22f8505debd9f845936c63c0749aff9c47f2bbce9f35cd0" gracePeriod=30 Jan 29 09:01:15 crc kubenswrapper[4826]: E0129 
09:01:15.977203 4826 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod536b9f40_3e07_4874_857f_d71b27b1bdc7.slice/crio-conmon-930969aabf8466e4ab40d9f8ad4c752e7cb54ef66faefc03f028223f2c2f944b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod536b9f40_3e07_4874_857f_d71b27b1bdc7.slice/crio-930969aabf8466e4ab40d9f8ad4c752e7cb54ef66faefc03f028223f2c2f944b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ef24356_51c0_45f8_b98e_6c694ae2f61b.slice/crio-e89a37b46a685e1c9a20a1def84b5602d8d70b2bb92c94f611e6e2ed977df9ab.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ef24356_51c0_45f8_b98e_6c694ae2f61b.slice/crio-conmon-e89a37b46a685e1c9a20a1def84b5602d8d70b2bb92c94f611e6e2ed977df9ab.scope\": RecentStats: unable to find data in memory cache]" Jan 29 09:01:16 crc kubenswrapper[4826]: I0129 09:01:16.333176 4826 generic.go:334] "Generic (PLEG): container finished" podID="1ef24356-51c0-45f8-b98e-6c694ae2f61b" containerID="e89a37b46a685e1c9a20a1def84b5602d8d70b2bb92c94f611e6e2ed977df9ab" exitCode=143 Jan 29 09:01:16 crc kubenswrapper[4826]: I0129 09:01:16.333529 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1ef24356-51c0-45f8-b98e-6c694ae2f61b","Type":"ContainerDied","Data":"e89a37b46a685e1c9a20a1def84b5602d8d70b2bb92c94f611e6e2ed977df9ab"} Jan 29 09:01:16 crc kubenswrapper[4826]: I0129 09:01:16.335734 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"133f3eaf-ea6b-4214-b61d-ba8573bc20f8","Type":"ContainerDied","Data":"38bf14aa3d0d942cd02063973a9f2624958c2e05e64545afdff2037d20404d4f"} Jan 29 09:01:16 crc 
kubenswrapper[4826]: I0129 09:01:16.335766 4826 scope.go:117] "RemoveContainer" containerID="4683903cfe2d99ef8f3df76bab53bd4cc76d6cb88af95f2a0d466f0b12211fd2" Jan 29 09:01:16 crc kubenswrapper[4826]: I0129 09:01:16.335833 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 09:01:16 crc kubenswrapper[4826]: I0129 09:01:16.340561 4826 generic.go:334] "Generic (PLEG): container finished" podID="536b9f40-3e07-4874-857f-d71b27b1bdc7" containerID="930969aabf8466e4ab40d9f8ad4c752e7cb54ef66faefc03f028223f2c2f944b" exitCode=143 Jan 29 09:01:16 crc kubenswrapper[4826]: I0129 09:01:16.340604 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"536b9f40-3e07-4874-857f-d71b27b1bdc7","Type":"ContainerDied","Data":"930969aabf8466e4ab40d9f8ad4c752e7cb54ef66faefc03f028223f2c2f944b"} Jan 29 09:01:16 crc kubenswrapper[4826]: I0129 09:01:16.405763 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 09:01:16 crc kubenswrapper[4826]: I0129 09:01:16.416971 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 09:01:16 crc kubenswrapper[4826]: I0129 09:01:16.429100 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 09:01:16 crc kubenswrapper[4826]: E0129 09:01:16.429600 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a787b34-0474-44bf-9785-416b19422277" containerName="neutron-dhcp-openstack-openstack-cell1" Jan 29 09:01:16 crc kubenswrapper[4826]: I0129 09:01:16.429621 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a787b34-0474-44bf-9785-416b19422277" containerName="neutron-dhcp-openstack-openstack-cell1" Jan 29 09:01:16 crc kubenswrapper[4826]: E0129 09:01:16.429641 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="133f3eaf-ea6b-4214-b61d-ba8573bc20f8" 
containerName="nova-cell1-conductor-conductor" Jan 29 09:01:16 crc kubenswrapper[4826]: I0129 09:01:16.429647 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="133f3eaf-ea6b-4214-b61d-ba8573bc20f8" containerName="nova-cell1-conductor-conductor" Jan 29 09:01:16 crc kubenswrapper[4826]: E0129 09:01:16.429667 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05abc85a-7397-460f-a033-6b21ecaf2ddb" containerName="keystone-cron" Jan 29 09:01:16 crc kubenswrapper[4826]: I0129 09:01:16.429674 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="05abc85a-7397-460f-a033-6b21ecaf2ddb" containerName="keystone-cron" Jan 29 09:01:16 crc kubenswrapper[4826]: I0129 09:01:16.429862 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="05abc85a-7397-460f-a033-6b21ecaf2ddb" containerName="keystone-cron" Jan 29 09:01:16 crc kubenswrapper[4826]: I0129 09:01:16.429877 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="133f3eaf-ea6b-4214-b61d-ba8573bc20f8" containerName="nova-cell1-conductor-conductor" Jan 29 09:01:16 crc kubenswrapper[4826]: I0129 09:01:16.429890 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a787b34-0474-44bf-9785-416b19422277" containerName="neutron-dhcp-openstack-openstack-cell1" Jan 29 09:01:16 crc kubenswrapper[4826]: I0129 09:01:16.430744 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 09:01:16 crc kubenswrapper[4826]: I0129 09:01:16.434907 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 29 09:01:16 crc kubenswrapper[4826]: I0129 09:01:16.445804 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 09:01:16 crc kubenswrapper[4826]: I0129 09:01:16.526045 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv87r\" (UniqueName: \"kubernetes.io/projected/c740aa90-a4d5-4260-bfcd-82b659c58b82-kube-api-access-lv87r\") pod \"nova-cell1-conductor-0\" (UID: \"c740aa90-a4d5-4260-bfcd-82b659c58b82\") " pod="openstack/nova-cell1-conductor-0" Jan 29 09:01:16 crc kubenswrapper[4826]: I0129 09:01:16.526261 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c740aa90-a4d5-4260-bfcd-82b659c58b82-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c740aa90-a4d5-4260-bfcd-82b659c58b82\") " pod="openstack/nova-cell1-conductor-0" Jan 29 09:01:16 crc kubenswrapper[4826]: I0129 09:01:16.526327 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c740aa90-a4d5-4260-bfcd-82b659c58b82-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c740aa90-a4d5-4260-bfcd-82b659c58b82\") " pod="openstack/nova-cell1-conductor-0" Jan 29 09:01:16 crc kubenswrapper[4826]: I0129 09:01:16.627757 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv87r\" (UniqueName: \"kubernetes.io/projected/c740aa90-a4d5-4260-bfcd-82b659c58b82-kube-api-access-lv87r\") pod \"nova-cell1-conductor-0\" (UID: \"c740aa90-a4d5-4260-bfcd-82b659c58b82\") " pod="openstack/nova-cell1-conductor-0" Jan 29 
09:01:16 crc kubenswrapper[4826]: I0129 09:01:16.627993 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c740aa90-a4d5-4260-bfcd-82b659c58b82-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c740aa90-a4d5-4260-bfcd-82b659c58b82\") " pod="openstack/nova-cell1-conductor-0" Jan 29 09:01:16 crc kubenswrapper[4826]: I0129 09:01:16.628031 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c740aa90-a4d5-4260-bfcd-82b659c58b82-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c740aa90-a4d5-4260-bfcd-82b659c58b82\") " pod="openstack/nova-cell1-conductor-0" Jan 29 09:01:16 crc kubenswrapper[4826]: I0129 09:01:16.634277 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c740aa90-a4d5-4260-bfcd-82b659c58b82-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c740aa90-a4d5-4260-bfcd-82b659c58b82\") " pod="openstack/nova-cell1-conductor-0" Jan 29 09:01:16 crc kubenswrapper[4826]: I0129 09:01:16.648636 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c740aa90-a4d5-4260-bfcd-82b659c58b82-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c740aa90-a4d5-4260-bfcd-82b659c58b82\") " pod="openstack/nova-cell1-conductor-0" Jan 29 09:01:16 crc kubenswrapper[4826]: I0129 09:01:16.649621 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv87r\" (UniqueName: \"kubernetes.io/projected/c740aa90-a4d5-4260-bfcd-82b659c58b82-kube-api-access-lv87r\") pod \"nova-cell1-conductor-0\" (UID: \"c740aa90-a4d5-4260-bfcd-82b659c58b82\") " pod="openstack/nova-cell1-conductor-0" Jan 29 09:01:16 crc kubenswrapper[4826]: I0129 09:01:16.748928 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 09:01:16 crc kubenswrapper[4826]: I0129 09:01:16.822011 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="133f3eaf-ea6b-4214-b61d-ba8573bc20f8" path="/var/lib/kubelet/pods/133f3eaf-ea6b-4214-b61d-ba8573bc20f8/volumes" Jan 29 09:01:17 crc kubenswrapper[4826]: I0129 09:01:17.251199 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 09:01:17 crc kubenswrapper[4826]: I0129 09:01:17.356472 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"c740aa90-a4d5-4260-bfcd-82b659c58b82","Type":"ContainerStarted","Data":"9152dfc36dc3b88f2b7eeecefceeec6c7e87f237afdc01b22ce90d385b60f2ed"} Jan 29 09:01:17 crc kubenswrapper[4826]: I0129 09:01:17.808893 4826 scope.go:117] "RemoveContainer" containerID="687be0e4fab9ea6b1c277db6848c566541ae32496c85f97fc6f9e12e832a22fd" Jan 29 09:01:17 crc kubenswrapper[4826]: E0129 09:01:17.809312 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:01:18 crc kubenswrapper[4826]: I0129 09:01:18.368431 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"c740aa90-a4d5-4260-bfcd-82b659c58b82","Type":"ContainerStarted","Data":"7bf0dc8050990e846eeee522d6bb2d4649d98ceb655f3743647ce90653698ea8"} Jan 29 09:01:18 crc kubenswrapper[4826]: I0129 09:01:18.369109 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 29 09:01:18 crc kubenswrapper[4826]: I0129 09:01:18.387614 
4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.387590466 podStartE2EDuration="2.387590466s" podCreationTimestamp="2026-01-29 09:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:01:18.386797845 +0000 UTC m=+8262.248590914" watchObservedRunningTime="2026-01-29 09:01:18.387590466 +0000 UTC m=+8262.249383535" Jan 29 09:01:18 crc kubenswrapper[4826]: I0129 09:01:18.857340 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="1ef24356-51c0-45f8-b98e-6c694ae2f61b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.92:8775/\": read tcp 10.217.0.2:47854->10.217.1.92:8775: read: connection reset by peer" Jan 29 09:01:18 crc kubenswrapper[4826]: I0129 09:01:18.857433 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="1ef24356-51c0-45f8-b98e-6c694ae2f61b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.92:8775/\": read tcp 10.217.0.2:47852->10.217.1.92:8775: read: connection reset by peer" Jan 29 09:01:19 crc kubenswrapper[4826]: E0129 09:01:19.005724 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ffd0f14e230f7f6f4c3e35ce38df8401cb40abe3d356faf983b32535a3b4553f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 09:01:19 crc kubenswrapper[4826]: E0129 09:01:19.007312 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ffd0f14e230f7f6f4c3e35ce38df8401cb40abe3d356faf983b32535a3b4553f" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 09:01:19 crc kubenswrapper[4826]: E0129 09:01:19.008233 4826 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ffd0f14e230f7f6f4c3e35ce38df8401cb40abe3d356faf983b32535a3b4553f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 09:01:19 crc kubenswrapper[4826]: E0129 09:01:19.008262 4826 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0" containerName="nova-scheduler-scheduler" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.353938 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.389107 4826 generic.go:334] "Generic (PLEG): container finished" podID="1ef24356-51c0-45f8-b98e-6c694ae2f61b" containerID="eb955492493b97ecc22f8505debd9f845936c63c0749aff9c47f2bbce9f35cd0" exitCode=0 Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.389171 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1ef24356-51c0-45f8-b98e-6c694ae2f61b","Type":"ContainerDied","Data":"eb955492493b97ecc22f8505debd9f845936c63c0749aff9c47f2bbce9f35cd0"} Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.392912 4826 generic.go:334] "Generic (PLEG): container finished" podID="536b9f40-3e07-4874-857f-d71b27b1bdc7" containerID="2479a3585910eba55c2897c3b2d13822bc63d58b9f9f577654d2b606f8026b58" exitCode=0 Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.392990 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"536b9f40-3e07-4874-857f-d71b27b1bdc7","Type":"ContainerDied","Data":"2479a3585910eba55c2897c3b2d13822bc63d58b9f9f577654d2b606f8026b58"} Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.393027 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.393052 4826 scope.go:117] "RemoveContainer" containerID="2479a3585910eba55c2897c3b2d13822bc63d58b9f9f577654d2b606f8026b58" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.393037 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"536b9f40-3e07-4874-857f-d71b27b1bdc7","Type":"ContainerDied","Data":"101d25fdae8f359d1b9c9921681e8c7675b59888c9c199a2a715bf5957c039c7"} Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.394697 4826 generic.go:334] "Generic (PLEG): container finished" podID="45fab69a-7c08-45a2-99f4-1686d2e89f2c" containerID="8825ea0c41626b3259b0e36656ac85d2e4904b8fb977b0c21d16ad1eb32b7df0" exitCode=0 Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.395971 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"45fab69a-7c08-45a2-99f4-1686d2e89f2c","Type":"ContainerDied","Data":"8825ea0c41626b3259b0e36656ac85d2e4904b8fb977b0c21d16ad1eb32b7df0"} Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.406535 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/536b9f40-3e07-4874-857f-d71b27b1bdc7-config-data\") pod \"536b9f40-3e07-4874-857f-d71b27b1bdc7\" (UID: \"536b9f40-3e07-4874-857f-d71b27b1bdc7\") " Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.406642 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/536b9f40-3e07-4874-857f-d71b27b1bdc7-logs\") pod \"536b9f40-3e07-4874-857f-d71b27b1bdc7\" (UID: 
\"536b9f40-3e07-4874-857f-d71b27b1bdc7\") " Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.406705 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/536b9f40-3e07-4874-857f-d71b27b1bdc7-internal-tls-certs\") pod \"536b9f40-3e07-4874-857f-d71b27b1bdc7\" (UID: \"536b9f40-3e07-4874-857f-d71b27b1bdc7\") " Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.406823 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8hgt\" (UniqueName: \"kubernetes.io/projected/536b9f40-3e07-4874-857f-d71b27b1bdc7-kube-api-access-s8hgt\") pod \"536b9f40-3e07-4874-857f-d71b27b1bdc7\" (UID: \"536b9f40-3e07-4874-857f-d71b27b1bdc7\") " Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.406856 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/536b9f40-3e07-4874-857f-d71b27b1bdc7-combined-ca-bundle\") pod \"536b9f40-3e07-4874-857f-d71b27b1bdc7\" (UID: \"536b9f40-3e07-4874-857f-d71b27b1bdc7\") " Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.406950 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/536b9f40-3e07-4874-857f-d71b27b1bdc7-public-tls-certs\") pod \"536b9f40-3e07-4874-857f-d71b27b1bdc7\" (UID: \"536b9f40-3e07-4874-857f-d71b27b1bdc7\") " Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.408110 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/536b9f40-3e07-4874-857f-d71b27b1bdc7-logs" (OuterVolumeSpecName: "logs") pod "536b9f40-3e07-4874-857f-d71b27b1bdc7" (UID: "536b9f40-3e07-4874-857f-d71b27b1bdc7"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.416401 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/536b9f40-3e07-4874-857f-d71b27b1bdc7-kube-api-access-s8hgt" (OuterVolumeSpecName: "kube-api-access-s8hgt") pod "536b9f40-3e07-4874-857f-d71b27b1bdc7" (UID: "536b9f40-3e07-4874-857f-d71b27b1bdc7"). InnerVolumeSpecName "kube-api-access-s8hgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.433950 4826 scope.go:117] "RemoveContainer" containerID="930969aabf8466e4ab40d9f8ad4c752e7cb54ef66faefc03f028223f2c2f944b" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.441011 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/536b9f40-3e07-4874-857f-d71b27b1bdc7-config-data" (OuterVolumeSpecName: "config-data") pod "536b9f40-3e07-4874-857f-d71b27b1bdc7" (UID: "536b9f40-3e07-4874-857f-d71b27b1bdc7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.453646 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/536b9f40-3e07-4874-857f-d71b27b1bdc7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "536b9f40-3e07-4874-857f-d71b27b1bdc7" (UID: "536b9f40-3e07-4874-857f-d71b27b1bdc7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.478263 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/536b9f40-3e07-4874-857f-d71b27b1bdc7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "536b9f40-3e07-4874-857f-d71b27b1bdc7" (UID: "536b9f40-3e07-4874-857f-d71b27b1bdc7"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.479235 4826 scope.go:117] "RemoveContainer" containerID="2479a3585910eba55c2897c3b2d13822bc63d58b9f9f577654d2b606f8026b58" Jan 29 09:01:19 crc kubenswrapper[4826]: E0129 09:01:19.479657 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2479a3585910eba55c2897c3b2d13822bc63d58b9f9f577654d2b606f8026b58\": container with ID starting with 2479a3585910eba55c2897c3b2d13822bc63d58b9f9f577654d2b606f8026b58 not found: ID does not exist" containerID="2479a3585910eba55c2897c3b2d13822bc63d58b9f9f577654d2b606f8026b58" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.479686 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2479a3585910eba55c2897c3b2d13822bc63d58b9f9f577654d2b606f8026b58"} err="failed to get container status \"2479a3585910eba55c2897c3b2d13822bc63d58b9f9f577654d2b606f8026b58\": rpc error: code = NotFound desc = could not find container \"2479a3585910eba55c2897c3b2d13822bc63d58b9f9f577654d2b606f8026b58\": container with ID starting with 2479a3585910eba55c2897c3b2d13822bc63d58b9f9f577654d2b606f8026b58 not found: ID does not exist" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.479707 4826 scope.go:117] "RemoveContainer" containerID="930969aabf8466e4ab40d9f8ad4c752e7cb54ef66faefc03f028223f2c2f944b" Jan 29 09:01:19 crc kubenswrapper[4826]: E0129 09:01:19.479995 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"930969aabf8466e4ab40d9f8ad4c752e7cb54ef66faefc03f028223f2c2f944b\": container with ID starting with 930969aabf8466e4ab40d9f8ad4c752e7cb54ef66faefc03f028223f2c2f944b not found: ID does not exist" containerID="930969aabf8466e4ab40d9f8ad4c752e7cb54ef66faefc03f028223f2c2f944b" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.480020 
4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"930969aabf8466e4ab40d9f8ad4c752e7cb54ef66faefc03f028223f2c2f944b"} err="failed to get container status \"930969aabf8466e4ab40d9f8ad4c752e7cb54ef66faefc03f028223f2c2f944b\": rpc error: code = NotFound desc = could not find container \"930969aabf8466e4ab40d9f8ad4c752e7cb54ef66faefc03f028223f2c2f944b\": container with ID starting with 930969aabf8466e4ab40d9f8ad4c752e7cb54ef66faefc03f028223f2c2f944b not found: ID does not exist" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.490800 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/536b9f40-3e07-4874-857f-d71b27b1bdc7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "536b9f40-3e07-4874-857f-d71b27b1bdc7" (UID: "536b9f40-3e07-4874-857f-d71b27b1bdc7"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.511891 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/536b9f40-3e07-4874-857f-d71b27b1bdc7-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.511926 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/536b9f40-3e07-4874-857f-d71b27b1bdc7-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.511938 4826 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/536b9f40-3e07-4874-857f-d71b27b1bdc7-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.511952 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8hgt\" (UniqueName: 
\"kubernetes.io/projected/536b9f40-3e07-4874-857f-d71b27b1bdc7-kube-api-access-s8hgt\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.511963 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/536b9f40-3e07-4874-857f-d71b27b1bdc7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.511973 4826 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/536b9f40-3e07-4874-857f-d71b27b1bdc7-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.730481 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.740600 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.762205 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 09:01:19 crc kubenswrapper[4826]: E0129 09:01:19.762626 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="536b9f40-3e07-4874-857f-d71b27b1bdc7" containerName="nova-api-api" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.762639 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="536b9f40-3e07-4874-857f-d71b27b1bdc7" containerName="nova-api-api" Jan 29 09:01:19 crc kubenswrapper[4826]: E0129 09:01:19.762681 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="536b9f40-3e07-4874-857f-d71b27b1bdc7" containerName="nova-api-log" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.762687 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="536b9f40-3e07-4874-857f-d71b27b1bdc7" containerName="nova-api-log" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.762856 4826 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="536b9f40-3e07-4874-857f-d71b27b1bdc7" containerName="nova-api-api" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.762877 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="536b9f40-3e07-4874-857f-d71b27b1bdc7" containerName="nova-api-log" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.764048 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.774333 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.774617 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.784411 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.800701 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.819159 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3143baa-65d4-4709-8c2e-57c85764d189-logs\") pod \"nova-api-0\" (UID: \"b3143baa-65d4-4709-8c2e-57c85764d189\") " pod="openstack/nova-api-0" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.819204 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3143baa-65d4-4709-8c2e-57c85764d189-config-data\") pod \"nova-api-0\" (UID: \"b3143baa-65d4-4709-8c2e-57c85764d189\") " pod="openstack/nova-api-0" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.819227 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/b3143baa-65d4-4709-8c2e-57c85764d189-public-tls-certs\") pod \"nova-api-0\" (UID: \"b3143baa-65d4-4709-8c2e-57c85764d189\") " pod="openstack/nova-api-0" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.819246 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3143baa-65d4-4709-8c2e-57c85764d189-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b3143baa-65d4-4709-8c2e-57c85764d189\") " pod="openstack/nova-api-0" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.819348 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3143baa-65d4-4709-8c2e-57c85764d189-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b3143baa-65d4-4709-8c2e-57c85764d189\") " pod="openstack/nova-api-0" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.821606 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npkwn\" (UniqueName: \"kubernetes.io/projected/b3143baa-65d4-4709-8c2e-57c85764d189-kube-api-access-npkwn\") pod \"nova-api-0\" (UID: \"b3143baa-65d4-4709-8c2e-57c85764d189\") " pod="openstack/nova-api-0" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.925847 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3143baa-65d4-4709-8c2e-57c85764d189-config-data\") pod \"nova-api-0\" (UID: \"b3143baa-65d4-4709-8c2e-57c85764d189\") " pod="openstack/nova-api-0" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.925904 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3143baa-65d4-4709-8c2e-57c85764d189-public-tls-certs\") pod \"nova-api-0\" (UID: \"b3143baa-65d4-4709-8c2e-57c85764d189\") " 
pod="openstack/nova-api-0" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.926008 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3143baa-65d4-4709-8c2e-57c85764d189-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b3143baa-65d4-4709-8c2e-57c85764d189\") " pod="openstack/nova-api-0" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.927214 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3143baa-65d4-4709-8c2e-57c85764d189-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b3143baa-65d4-4709-8c2e-57c85764d189\") " pod="openstack/nova-api-0" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.927414 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npkwn\" (UniqueName: \"kubernetes.io/projected/b3143baa-65d4-4709-8c2e-57c85764d189-kube-api-access-npkwn\") pod \"nova-api-0\" (UID: \"b3143baa-65d4-4709-8c2e-57c85764d189\") " pod="openstack/nova-api-0" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.927583 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3143baa-65d4-4709-8c2e-57c85764d189-logs\") pod \"nova-api-0\" (UID: \"b3143baa-65d4-4709-8c2e-57c85764d189\") " pod="openstack/nova-api-0" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.928051 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3143baa-65d4-4709-8c2e-57c85764d189-logs\") pod \"nova-api-0\" (UID: \"b3143baa-65d4-4709-8c2e-57c85764d189\") " pod="openstack/nova-api-0" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.933491 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b3143baa-65d4-4709-8c2e-57c85764d189-public-tls-certs\") pod \"nova-api-0\" (UID: \"b3143baa-65d4-4709-8c2e-57c85764d189\") " pod="openstack/nova-api-0" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.933528 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3143baa-65d4-4709-8c2e-57c85764d189-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b3143baa-65d4-4709-8c2e-57c85764d189\") " pod="openstack/nova-api-0" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.937172 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3143baa-65d4-4709-8c2e-57c85764d189-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b3143baa-65d4-4709-8c2e-57c85764d189\") " pod="openstack/nova-api-0" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.937639 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3143baa-65d4-4709-8c2e-57c85764d189-config-data\") pod \"nova-api-0\" (UID: \"b3143baa-65d4-4709-8c2e-57c85764d189\") " pod="openstack/nova-api-0" Jan 29 09:01:19 crc kubenswrapper[4826]: I0129 09:01:19.946813 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npkwn\" (UniqueName: \"kubernetes.io/projected/b3143baa-65d4-4709-8c2e-57c85764d189-kube-api-access-npkwn\") pod \"nova-api-0\" (UID: \"b3143baa-65d4-4709-8c2e-57c85764d189\") " pod="openstack/nova-api-0" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.048227 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.057835 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.107009 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.131721 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgxsh\" (UniqueName: \"kubernetes.io/projected/1ef24356-51c0-45f8-b98e-6c694ae2f61b-kube-api-access-kgxsh\") pod \"1ef24356-51c0-45f8-b98e-6c694ae2f61b\" (UID: \"1ef24356-51c0-45f8-b98e-6c694ae2f61b\") " Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.131791 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45fab69a-7c08-45a2-99f4-1686d2e89f2c-config-data\") pod \"45fab69a-7c08-45a2-99f4-1686d2e89f2c\" (UID: \"45fab69a-7c08-45a2-99f4-1686d2e89f2c\") " Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.131970 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ef24356-51c0-45f8-b98e-6c694ae2f61b-nova-metadata-tls-certs\") pod \"1ef24356-51c0-45f8-b98e-6c694ae2f61b\" (UID: \"1ef24356-51c0-45f8-b98e-6c694ae2f61b\") " Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.132032 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ef24356-51c0-45f8-b98e-6c694ae2f61b-logs\") pod \"1ef24356-51c0-45f8-b98e-6c694ae2f61b\" (UID: \"1ef24356-51c0-45f8-b98e-6c694ae2f61b\") " Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.132056 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef24356-51c0-45f8-b98e-6c694ae2f61b-combined-ca-bundle\") pod \"1ef24356-51c0-45f8-b98e-6c694ae2f61b\" (UID: 
\"1ef24356-51c0-45f8-b98e-6c694ae2f61b\") " Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.132136 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45fab69a-7c08-45a2-99f4-1686d2e89f2c-combined-ca-bundle\") pod \"45fab69a-7c08-45a2-99f4-1686d2e89f2c\" (UID: \"45fab69a-7c08-45a2-99f4-1686d2e89f2c\") " Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.132197 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbk24\" (UniqueName: \"kubernetes.io/projected/45fab69a-7c08-45a2-99f4-1686d2e89f2c-kube-api-access-hbk24\") pod \"45fab69a-7c08-45a2-99f4-1686d2e89f2c\" (UID: \"45fab69a-7c08-45a2-99f4-1686d2e89f2c\") " Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.132236 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ef24356-51c0-45f8-b98e-6c694ae2f61b-config-data\") pod \"1ef24356-51c0-45f8-b98e-6c694ae2f61b\" (UID: \"1ef24356-51c0-45f8-b98e-6c694ae2f61b\") " Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.138014 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ef24356-51c0-45f8-b98e-6c694ae2f61b-logs" (OuterVolumeSpecName: "logs") pod "1ef24356-51c0-45f8-b98e-6c694ae2f61b" (UID: "1ef24356-51c0-45f8-b98e-6c694ae2f61b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.140599 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ef24356-51c0-45f8-b98e-6c694ae2f61b-kube-api-access-kgxsh" (OuterVolumeSpecName: "kube-api-access-kgxsh") pod "1ef24356-51c0-45f8-b98e-6c694ae2f61b" (UID: "1ef24356-51c0-45f8-b98e-6c694ae2f61b"). InnerVolumeSpecName "kube-api-access-kgxsh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.153443 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45fab69a-7c08-45a2-99f4-1686d2e89f2c-kube-api-access-hbk24" (OuterVolumeSpecName: "kube-api-access-hbk24") pod "45fab69a-7c08-45a2-99f4-1686d2e89f2c" (UID: "45fab69a-7c08-45a2-99f4-1686d2e89f2c"). InnerVolumeSpecName "kube-api-access-hbk24". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.176471 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45fab69a-7c08-45a2-99f4-1686d2e89f2c-config-data" (OuterVolumeSpecName: "config-data") pod "45fab69a-7c08-45a2-99f4-1686d2e89f2c" (UID: "45fab69a-7c08-45a2-99f4-1686d2e89f2c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.180276 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45fab69a-7c08-45a2-99f4-1686d2e89f2c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45fab69a-7c08-45a2-99f4-1686d2e89f2c" (UID: "45fab69a-7c08-45a2-99f4-1686d2e89f2c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.212530 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ef24356-51c0-45f8-b98e-6c694ae2f61b-config-data" (OuterVolumeSpecName: "config-data") pod "1ef24356-51c0-45f8-b98e-6c694ae2f61b" (UID: "1ef24356-51c0-45f8-b98e-6c694ae2f61b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.219525 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ef24356-51c0-45f8-b98e-6c694ae2f61b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ef24356-51c0-45f8-b98e-6c694ae2f61b" (UID: "1ef24356-51c0-45f8-b98e-6c694ae2f61b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.228167 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ef24356-51c0-45f8-b98e-6c694ae2f61b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "1ef24356-51c0-45f8-b98e-6c694ae2f61b" (UID: "1ef24356-51c0-45f8-b98e-6c694ae2f61b"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.237848 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45fab69a-7c08-45a2-99f4-1686d2e89f2c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.237886 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbk24\" (UniqueName: \"kubernetes.io/projected/45fab69a-7c08-45a2-99f4-1686d2e89f2c-kube-api-access-hbk24\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.237899 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ef24356-51c0-45f8-b98e-6c694ae2f61b-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.237909 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgxsh\" (UniqueName: 
\"kubernetes.io/projected/1ef24356-51c0-45f8-b98e-6c694ae2f61b-kube-api-access-kgxsh\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.237920 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45fab69a-7c08-45a2-99f4-1686d2e89f2c-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.237930 4826 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ef24356-51c0-45f8-b98e-6c694ae2f61b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.237943 4826 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ef24356-51c0-45f8-b98e-6c694ae2f61b-logs\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.237955 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef24356-51c0-45f8-b98e-6c694ae2f61b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.420487 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1ef24356-51c0-45f8-b98e-6c694ae2f61b","Type":"ContainerDied","Data":"380ccb44874f648e71476d79af104502252e89de504804a2426b6471d8b2913f"} Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.420980 4826 scope.go:117] "RemoveContainer" containerID="eb955492493b97ecc22f8505debd9f845936c63c0749aff9c47f2bbce9f35cd0" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.420932 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.439868 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"45fab69a-7c08-45a2-99f4-1686d2e89f2c","Type":"ContainerDied","Data":"e97cb34153edc5a75b66ce1c1e2f4bf2eb48adc1f2c5a796e90c956ee00be4ba"} Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.439962 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.463071 4826 scope.go:117] "RemoveContainer" containerID="e89a37b46a685e1c9a20a1def84b5602d8d70b2bb92c94f611e6e2ed977df9ab" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.466631 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.475591 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.514478 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.516798 4826 scope.go:117] "RemoveContainer" containerID="8825ea0c41626b3259b0e36656ac85d2e4904b8fb977b0c21d16ad1eb32b7df0" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.527208 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.541740 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:01:20 crc kubenswrapper[4826]: E0129 09:01:20.542192 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ef24356-51c0-45f8-b98e-6c694ae2f61b" containerName="nova-metadata-log" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.542211 4826 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1ef24356-51c0-45f8-b98e-6c694ae2f61b" containerName="nova-metadata-log" Jan 29 09:01:20 crc kubenswrapper[4826]: E0129 09:01:20.542237 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45fab69a-7c08-45a2-99f4-1686d2e89f2c" containerName="nova-cell0-conductor-conductor" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.542244 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="45fab69a-7c08-45a2-99f4-1686d2e89f2c" containerName="nova-cell0-conductor-conductor" Jan 29 09:01:20 crc kubenswrapper[4826]: E0129 09:01:20.542253 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ef24356-51c0-45f8-b98e-6c694ae2f61b" containerName="nova-metadata-metadata" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.542259 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ef24356-51c0-45f8-b98e-6c694ae2f61b" containerName="nova-metadata-metadata" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.542549 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ef24356-51c0-45f8-b98e-6c694ae2f61b" containerName="nova-metadata-metadata" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.542573 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ef24356-51c0-45f8-b98e-6c694ae2f61b" containerName="nova-metadata-log" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.542588 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="45fab69a-7c08-45a2-99f4-1686d2e89f2c" containerName="nova-cell0-conductor-conductor" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.543697 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.548943 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.549013 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.560413 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.577314 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.579367 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.582163 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.595849 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.638123 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.650053 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb1d5abc-ba89-42e8-b750-84ee3c7ab606-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"bb1d5abc-ba89-42e8-b750-84ee3c7ab606\") " pod="openstack/nova-cell0-conductor-0" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.650128 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bb1d5abc-ba89-42e8-b750-84ee3c7ab606-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"bb1d5abc-ba89-42e8-b750-84ee3c7ab606\") " pod="openstack/nova-cell0-conductor-0" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.650156 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/26e62da8-b996-4163-bb10-3afdba722b5a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"26e62da8-b996-4163-bb10-3afdba722b5a\") " pod="openstack/nova-metadata-0" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.650191 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26e62da8-b996-4163-bb10-3afdba722b5a-config-data\") pod \"nova-metadata-0\" (UID: \"26e62da8-b996-4163-bb10-3afdba722b5a\") " pod="openstack/nova-metadata-0" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.650484 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmhfj\" (UniqueName: \"kubernetes.io/projected/bb1d5abc-ba89-42e8-b750-84ee3c7ab606-kube-api-access-tmhfj\") pod \"nova-cell0-conductor-0\" (UID: \"bb1d5abc-ba89-42e8-b750-84ee3c7ab606\") " pod="openstack/nova-cell0-conductor-0" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.650618 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e62da8-b996-4163-bb10-3afdba722b5a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"26e62da8-b996-4163-bb10-3afdba722b5a\") " pod="openstack/nova-metadata-0" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.650758 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ptz8\" (UniqueName: 
\"kubernetes.io/projected/26e62da8-b996-4163-bb10-3afdba722b5a-kube-api-access-7ptz8\") pod \"nova-metadata-0\" (UID: \"26e62da8-b996-4163-bb10-3afdba722b5a\") " pod="openstack/nova-metadata-0" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.650858 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26e62da8-b996-4163-bb10-3afdba722b5a-logs\") pod \"nova-metadata-0\" (UID: \"26e62da8-b996-4163-bb10-3afdba722b5a\") " pod="openstack/nova-metadata-0" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.752902 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb1d5abc-ba89-42e8-b750-84ee3c7ab606-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"bb1d5abc-ba89-42e8-b750-84ee3c7ab606\") " pod="openstack/nova-cell0-conductor-0" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.752961 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/26e62da8-b996-4163-bb10-3afdba722b5a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"26e62da8-b996-4163-bb10-3afdba722b5a\") " pod="openstack/nova-metadata-0" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.753004 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26e62da8-b996-4163-bb10-3afdba722b5a-config-data\") pod \"nova-metadata-0\" (UID: \"26e62da8-b996-4163-bb10-3afdba722b5a\") " pod="openstack/nova-metadata-0" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.753096 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmhfj\" (UniqueName: \"kubernetes.io/projected/bb1d5abc-ba89-42e8-b750-84ee3c7ab606-kube-api-access-tmhfj\") pod \"nova-cell0-conductor-0\" (UID: 
\"bb1d5abc-ba89-42e8-b750-84ee3c7ab606\") " pod="openstack/nova-cell0-conductor-0" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.753151 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e62da8-b996-4163-bb10-3afdba722b5a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"26e62da8-b996-4163-bb10-3afdba722b5a\") " pod="openstack/nova-metadata-0" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.753209 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ptz8\" (UniqueName: \"kubernetes.io/projected/26e62da8-b996-4163-bb10-3afdba722b5a-kube-api-access-7ptz8\") pod \"nova-metadata-0\" (UID: \"26e62da8-b996-4163-bb10-3afdba722b5a\") " pod="openstack/nova-metadata-0" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.753275 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26e62da8-b996-4163-bb10-3afdba722b5a-logs\") pod \"nova-metadata-0\" (UID: \"26e62da8-b996-4163-bb10-3afdba722b5a\") " pod="openstack/nova-metadata-0" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.753337 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb1d5abc-ba89-42e8-b750-84ee3c7ab606-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"bb1d5abc-ba89-42e8-b750-84ee3c7ab606\") " pod="openstack/nova-cell0-conductor-0" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.754832 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26e62da8-b996-4163-bb10-3afdba722b5a-logs\") pod \"nova-metadata-0\" (UID: \"26e62da8-b996-4163-bb10-3afdba722b5a\") " pod="openstack/nova-metadata-0" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.758872 4826 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb1d5abc-ba89-42e8-b750-84ee3c7ab606-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"bb1d5abc-ba89-42e8-b750-84ee3c7ab606\") " pod="openstack/nova-cell0-conductor-0" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.759346 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e62da8-b996-4163-bb10-3afdba722b5a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"26e62da8-b996-4163-bb10-3afdba722b5a\") " pod="openstack/nova-metadata-0" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.760790 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/26e62da8-b996-4163-bb10-3afdba722b5a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"26e62da8-b996-4163-bb10-3afdba722b5a\") " pod="openstack/nova-metadata-0" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.770516 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26e62da8-b996-4163-bb10-3afdba722b5a-config-data\") pod \"nova-metadata-0\" (UID: \"26e62da8-b996-4163-bb10-3afdba722b5a\") " pod="openstack/nova-metadata-0" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.771156 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb1d5abc-ba89-42e8-b750-84ee3c7ab606-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"bb1d5abc-ba89-42e8-b750-84ee3c7ab606\") " pod="openstack/nova-cell0-conductor-0" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.773037 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmhfj\" (UniqueName: \"kubernetes.io/projected/bb1d5abc-ba89-42e8-b750-84ee3c7ab606-kube-api-access-tmhfj\") pod 
\"nova-cell0-conductor-0\" (UID: \"bb1d5abc-ba89-42e8-b750-84ee3c7ab606\") " pod="openstack/nova-cell0-conductor-0" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.774035 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ptz8\" (UniqueName: \"kubernetes.io/projected/26e62da8-b996-4163-bb10-3afdba722b5a-kube-api-access-7ptz8\") pod \"nova-metadata-0\" (UID: \"26e62da8-b996-4163-bb10-3afdba722b5a\") " pod="openstack/nova-metadata-0" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.822065 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ef24356-51c0-45f8-b98e-6c694ae2f61b" path="/var/lib/kubelet/pods/1ef24356-51c0-45f8-b98e-6c694ae2f61b/volumes" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.822910 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45fab69a-7c08-45a2-99f4-1686d2e89f2c" path="/var/lib/kubelet/pods/45fab69a-7c08-45a2-99f4-1686d2e89f2c/volumes" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.823463 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="536b9f40-3e07-4874-857f-d71b27b1bdc7" path="/var/lib/kubelet/pods/536b9f40-3e07-4874-857f-d71b27b1bdc7/volumes" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.880818 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 09:01:20 crc kubenswrapper[4826]: I0129 09:01:20.897598 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 09:01:21 crc kubenswrapper[4826]: I0129 09:01:21.422324 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 09:01:21 crc kubenswrapper[4826]: I0129 09:01:21.436452 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 09:01:21 crc kubenswrapper[4826]: I0129 09:01:21.453411 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"bb1d5abc-ba89-42e8-b750-84ee3c7ab606","Type":"ContainerStarted","Data":"a50bccbd400f60cb10d3686957e22e7555f128d756baa7dc63f55daf1ccf4fc1"} Jan 29 09:01:21 crc kubenswrapper[4826]: I0129 09:01:21.454880 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b3143baa-65d4-4709-8c2e-57c85764d189","Type":"ContainerStarted","Data":"a763b429bb19d1faa42b330a10d38d6c9076bbf68c18062cecead105430aa029"} Jan 29 09:01:21 crc kubenswrapper[4826]: I0129 09:01:21.454904 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b3143baa-65d4-4709-8c2e-57c85764d189","Type":"ContainerStarted","Data":"e894183148c9c2e11eacba0586daf9fd9616e2ea4945a626f4206de6bfc5448d"} Jan 29 09:01:21 crc kubenswrapper[4826]: I0129 09:01:21.454913 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b3143baa-65d4-4709-8c2e-57c85764d189","Type":"ContainerStarted","Data":"5494cef82ccda877959c4e5801fc44cdba4424b0d9f3ecb3b7f1f51fbc35d3b2"} Jan 29 09:01:21 crc kubenswrapper[4826]: I0129 09:01:21.460045 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"26e62da8-b996-4163-bb10-3afdba722b5a","Type":"ContainerStarted","Data":"734f7ea3aa5cec62e3b13b0208d339a27a1e776723591e40b9ae2db42147cf0d"} Jan 29 09:01:21 crc kubenswrapper[4826]: I0129 09:01:21.486222 4826 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.486201674 podStartE2EDuration="2.486201674s" podCreationTimestamp="2026-01-29 09:01:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:01:21.474868421 +0000 UTC m=+8265.336661480" watchObservedRunningTime="2026-01-29 09:01:21.486201674 +0000 UTC m=+8265.347994743" Jan 29 09:01:22 crc kubenswrapper[4826]: I0129 09:01:22.468948 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"26e62da8-b996-4163-bb10-3afdba722b5a","Type":"ContainerStarted","Data":"aa8760017386026d145660418d05e4a661304157157337539c3b7e955487ae49"} Jan 29 09:01:22 crc kubenswrapper[4826]: I0129 09:01:22.469278 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"26e62da8-b996-4163-bb10-3afdba722b5a","Type":"ContainerStarted","Data":"a6b3ec15dc356c3bf87806f3f4e944aa8a4b0ab0d5b00c3406bade2c5fbdcfa3"} Jan 29 09:01:22 crc kubenswrapper[4826]: I0129 09:01:22.471519 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"bb1d5abc-ba89-42e8-b750-84ee3c7ab606","Type":"ContainerStarted","Data":"a016136982d2fc745e6d9bc10ae0fa8ee4ae094b608ac69d605982545edb503f"} Jan 29 09:01:22 crc kubenswrapper[4826]: I0129 09:01:22.491406 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.49138598 podStartE2EDuration="2.49138598s" podCreationTimestamp="2026-01-29 09:01:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:01:22.485250907 +0000 UTC m=+8266.347043986" watchObservedRunningTime="2026-01-29 09:01:22.49138598 +0000 UTC m=+8266.353179039" Jan 29 09:01:22 crc kubenswrapper[4826]: I0129 09:01:22.516191 4826 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.516173471 podStartE2EDuration="2.516173471s" podCreationTimestamp="2026-01-29 09:01:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:01:22.511662431 +0000 UTC m=+8266.373455500" watchObservedRunningTime="2026-01-29 09:01:22.516173471 +0000 UTC m=+8266.377966540" Jan 29 09:01:23 crc kubenswrapper[4826]: I0129 09:01:23.476217 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 09:01:23 crc kubenswrapper[4826]: I0129 09:01:23.485794 4826 generic.go:334] "Generic (PLEG): container finished" podID="dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0" containerID="ffd0f14e230f7f6f4c3e35ce38df8401cb40abe3d356faf983b32535a3b4553f" exitCode=0 Jan 29 09:01:23 crc kubenswrapper[4826]: I0129 09:01:23.486118 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 09:01:23 crc kubenswrapper[4826]: I0129 09:01:23.486609 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0","Type":"ContainerDied","Data":"ffd0f14e230f7f6f4c3e35ce38df8401cb40abe3d356faf983b32535a3b4553f"} Jan 29 09:01:23 crc kubenswrapper[4826]: I0129 09:01:23.486640 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0","Type":"ContainerDied","Data":"247cdd108fcf03fc8f55751310aec1176dc60d7aa9b07854a1dfb3f81df509ec"} Jan 29 09:01:23 crc kubenswrapper[4826]: I0129 09:01:23.486659 4826 scope.go:117] "RemoveContainer" containerID="ffd0f14e230f7f6f4c3e35ce38df8401cb40abe3d356faf983b32535a3b4553f" Jan 29 09:01:23 crc kubenswrapper[4826]: I0129 09:01:23.486916 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 29 09:01:23 crc kubenswrapper[4826]: I0129 09:01:23.517653 4826 scope.go:117] "RemoveContainer" containerID="ffd0f14e230f7f6f4c3e35ce38df8401cb40abe3d356faf983b32535a3b4553f" Jan 29 09:01:23 crc kubenswrapper[4826]: E0129 09:01:23.518217 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffd0f14e230f7f6f4c3e35ce38df8401cb40abe3d356faf983b32535a3b4553f\": container with ID starting with ffd0f14e230f7f6f4c3e35ce38df8401cb40abe3d356faf983b32535a3b4553f not found: ID does not exist" containerID="ffd0f14e230f7f6f4c3e35ce38df8401cb40abe3d356faf983b32535a3b4553f" Jan 29 09:01:23 crc kubenswrapper[4826]: I0129 09:01:23.518260 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffd0f14e230f7f6f4c3e35ce38df8401cb40abe3d356faf983b32535a3b4553f"} err="failed to get container status \"ffd0f14e230f7f6f4c3e35ce38df8401cb40abe3d356faf983b32535a3b4553f\": rpc error: 
code = NotFound desc = could not find container \"ffd0f14e230f7f6f4c3e35ce38df8401cb40abe3d356faf983b32535a3b4553f\": container with ID starting with ffd0f14e230f7f6f4c3e35ce38df8401cb40abe3d356faf983b32535a3b4553f not found: ID does not exist" Jan 29 09:01:23 crc kubenswrapper[4826]: I0129 09:01:23.526345 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0-combined-ca-bundle\") pod \"dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0\" (UID: \"dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0\") " Jan 29 09:01:23 crc kubenswrapper[4826]: I0129 09:01:23.526557 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wplt6\" (UniqueName: \"kubernetes.io/projected/dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0-kube-api-access-wplt6\") pod \"dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0\" (UID: \"dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0\") " Jan 29 09:01:23 crc kubenswrapper[4826]: I0129 09:01:23.526642 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0-config-data\") pod \"dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0\" (UID: \"dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0\") " Jan 29 09:01:23 crc kubenswrapper[4826]: I0129 09:01:23.533559 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0-kube-api-access-wplt6" (OuterVolumeSpecName: "kube-api-access-wplt6") pod "dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0" (UID: "dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0"). InnerVolumeSpecName "kube-api-access-wplt6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:01:23 crc kubenswrapper[4826]: I0129 09:01:23.556252 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0" (UID: "dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:23 crc kubenswrapper[4826]: I0129 09:01:23.565401 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0-config-data" (OuterVolumeSpecName: "config-data") pod "dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0" (UID: "dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:01:23 crc kubenswrapper[4826]: I0129 09:01:23.630871 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wplt6\" (UniqueName: \"kubernetes.io/projected/dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0-kube-api-access-wplt6\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:23 crc kubenswrapper[4826]: I0129 09:01:23.630911 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:23 crc kubenswrapper[4826]: I0129 09:01:23.630921 4826 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:01:23 crc kubenswrapper[4826]: I0129 09:01:23.827591 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:01:23 crc kubenswrapper[4826]: I0129 09:01:23.836823 4826 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:01:23 crc kubenswrapper[4826]: I0129 09:01:23.847474 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:01:23 crc kubenswrapper[4826]: E0129 09:01:23.848051 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0" containerName="nova-scheduler-scheduler" Jan 29 09:01:23 crc kubenswrapper[4826]: I0129 09:01:23.848076 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0" containerName="nova-scheduler-scheduler" Jan 29 09:01:23 crc kubenswrapper[4826]: I0129 09:01:23.848356 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0" containerName="nova-scheduler-scheduler" Jan 29 09:01:23 crc kubenswrapper[4826]: I0129 09:01:23.849234 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 09:01:23 crc kubenswrapper[4826]: I0129 09:01:23.852760 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 09:01:23 crc kubenswrapper[4826]: I0129 09:01:23.875059 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:01:23 crc kubenswrapper[4826]: I0129 09:01:23.936089 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzz5t\" (UniqueName: \"kubernetes.io/projected/f0ffec67-acda-4aa6-924e-b94c3e09fc8f-kube-api-access-kzz5t\") pod \"nova-scheduler-0\" (UID: \"f0ffec67-acda-4aa6-924e-b94c3e09fc8f\") " pod="openstack/nova-scheduler-0" Jan 29 09:01:23 crc kubenswrapper[4826]: I0129 09:01:23.936149 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0ffec67-acda-4aa6-924e-b94c3e09fc8f-config-data\") pod 
\"nova-scheduler-0\" (UID: \"f0ffec67-acda-4aa6-924e-b94c3e09fc8f\") " pod="openstack/nova-scheduler-0" Jan 29 09:01:23 crc kubenswrapper[4826]: I0129 09:01:23.936182 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0ffec67-acda-4aa6-924e-b94c3e09fc8f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f0ffec67-acda-4aa6-924e-b94c3e09fc8f\") " pod="openstack/nova-scheduler-0" Jan 29 09:01:24 crc kubenswrapper[4826]: I0129 09:01:24.037928 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzz5t\" (UniqueName: \"kubernetes.io/projected/f0ffec67-acda-4aa6-924e-b94c3e09fc8f-kube-api-access-kzz5t\") pod \"nova-scheduler-0\" (UID: \"f0ffec67-acda-4aa6-924e-b94c3e09fc8f\") " pod="openstack/nova-scheduler-0" Jan 29 09:01:24 crc kubenswrapper[4826]: I0129 09:01:24.037995 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0ffec67-acda-4aa6-924e-b94c3e09fc8f-config-data\") pod \"nova-scheduler-0\" (UID: \"f0ffec67-acda-4aa6-924e-b94c3e09fc8f\") " pod="openstack/nova-scheduler-0" Jan 29 09:01:24 crc kubenswrapper[4826]: I0129 09:01:24.038037 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0ffec67-acda-4aa6-924e-b94c3e09fc8f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f0ffec67-acda-4aa6-924e-b94c3e09fc8f\") " pod="openstack/nova-scheduler-0" Jan 29 09:01:24 crc kubenswrapper[4826]: I0129 09:01:24.042397 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0ffec67-acda-4aa6-924e-b94c3e09fc8f-config-data\") pod \"nova-scheduler-0\" (UID: \"f0ffec67-acda-4aa6-924e-b94c3e09fc8f\") " pod="openstack/nova-scheduler-0" Jan 29 09:01:24 crc kubenswrapper[4826]: I0129 
09:01:24.042740 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0ffec67-acda-4aa6-924e-b94c3e09fc8f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f0ffec67-acda-4aa6-924e-b94c3e09fc8f\") " pod="openstack/nova-scheduler-0" Jan 29 09:01:24 crc kubenswrapper[4826]: I0129 09:01:24.055247 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzz5t\" (UniqueName: \"kubernetes.io/projected/f0ffec67-acda-4aa6-924e-b94c3e09fc8f-kube-api-access-kzz5t\") pod \"nova-scheduler-0\" (UID: \"f0ffec67-acda-4aa6-924e-b94c3e09fc8f\") " pod="openstack/nova-scheduler-0" Jan 29 09:01:24 crc kubenswrapper[4826]: I0129 09:01:24.170638 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 09:01:24 crc kubenswrapper[4826]: I0129 09:01:24.596742 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 09:01:24 crc kubenswrapper[4826]: W0129 09:01:24.606142 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0ffec67_acda_4aa6_924e_b94c3e09fc8f.slice/crio-2ccb657e3d35b3659967f92d31253e86bb5ffe650bef786a5516bed4510d57ac WatchSource:0}: Error finding container 2ccb657e3d35b3659967f92d31253e86bb5ffe650bef786a5516bed4510d57ac: Status 404 returned error can't find the container with id 2ccb657e3d35b3659967f92d31253e86bb5ffe650bef786a5516bed4510d57ac Jan 29 09:01:24 crc kubenswrapper[4826]: I0129 09:01:24.823218 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0" path="/var/lib/kubelet/pods/dc4d9cb1-1ceb-4f3a-80d8-ba17115b1ad0/volumes" Jan 29 09:01:25 crc kubenswrapper[4826]: I0129 09:01:25.506360 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"f0ffec67-acda-4aa6-924e-b94c3e09fc8f","Type":"ContainerStarted","Data":"9399af73d15d2aab42021a2bf131aef04158f327fcafb3710d322d37e549cb42"} Jan 29 09:01:25 crc kubenswrapper[4826]: I0129 09:01:25.506660 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f0ffec67-acda-4aa6-924e-b94c3e09fc8f","Type":"ContainerStarted","Data":"2ccb657e3d35b3659967f92d31253e86bb5ffe650bef786a5516bed4510d57ac"} Jan 29 09:01:25 crc kubenswrapper[4826]: I0129 09:01:25.523640 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.523624397 podStartE2EDuration="2.523624397s" podCreationTimestamp="2026-01-29 09:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:01:25.521947602 +0000 UTC m=+8269.383740671" watchObservedRunningTime="2026-01-29 09:01:25.523624397 +0000 UTC m=+8269.385417466" Jan 29 09:01:25 crc kubenswrapper[4826]: I0129 09:01:25.881222 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 09:01:25 crc kubenswrapper[4826]: I0129 09:01:25.882626 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 09:01:26 crc kubenswrapper[4826]: I0129 09:01:26.776415 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 29 09:01:28 crc kubenswrapper[4826]: I0129 09:01:28.809259 4826 scope.go:117] "RemoveContainer" containerID="687be0e4fab9ea6b1c277db6848c566541ae32496c85f97fc6f9e12e832a22fd" Jan 29 09:01:28 crc kubenswrapper[4826]: E0129 09:01:28.809866 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:01:29 crc kubenswrapper[4826]: I0129 09:01:29.171431 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 09:01:30 crc kubenswrapper[4826]: I0129 09:01:30.108089 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 09:01:30 crc kubenswrapper[4826]: I0129 09:01:30.108164 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 09:01:30 crc kubenswrapper[4826]: I0129 09:01:30.881668 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 09:01:30 crc kubenswrapper[4826]: I0129 09:01:30.881717 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 09:01:30 crc kubenswrapper[4826]: I0129 09:01:30.931427 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 29 09:01:31 crc kubenswrapper[4826]: I0129 09:01:31.128728 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b3143baa-65d4-4709-8c2e-57c85764d189" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.178:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 09:01:31 crc kubenswrapper[4826]: I0129 09:01:31.128743 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b3143baa-65d4-4709-8c2e-57c85764d189" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.178:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 09:01:31 crc 
kubenswrapper[4826]: I0129 09:01:31.897485 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="26e62da8-b996-4163-bb10-3afdba722b5a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.179:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 09:01:31 crc kubenswrapper[4826]: I0129 09:01:31.897511 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="26e62da8-b996-4163-bb10-3afdba722b5a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.179:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 09:01:34 crc kubenswrapper[4826]: I0129 09:01:34.171187 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 09:01:34 crc kubenswrapper[4826]: I0129 09:01:34.220564 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 09:01:34 crc kubenswrapper[4826]: I0129 09:01:34.628384 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 09:01:39 crc kubenswrapper[4826]: I0129 09:01:39.809761 4826 scope.go:117] "RemoveContainer" containerID="687be0e4fab9ea6b1c277db6848c566541ae32496c85f97fc6f9e12e832a22fd" Jan 29 09:01:40 crc kubenswrapper[4826]: I0129 09:01:40.117869 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 09:01:40 crc kubenswrapper[4826]: I0129 09:01:40.118479 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 09:01:40 crc kubenswrapper[4826]: I0129 09:01:40.118502 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 09:01:40 crc kubenswrapper[4826]: I0129 09:01:40.124513 4826 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 09:01:40 crc kubenswrapper[4826]: I0129 09:01:40.666715 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerStarted","Data":"2af13fc74b9b9ef9a03ea411f88a9223259f0cd3bbca719eaa0a1a52cc09f980"} Jan 29 09:01:40 crc kubenswrapper[4826]: I0129 09:01:40.667050 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 09:01:40 crc kubenswrapper[4826]: I0129 09:01:40.693824 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 09:01:40 crc kubenswrapper[4826]: I0129 09:01:40.888578 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 09:01:40 crc kubenswrapper[4826]: I0129 09:01:40.891992 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 09:01:40 crc kubenswrapper[4826]: I0129 09:01:40.909306 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 09:01:41 crc kubenswrapper[4826]: I0129 09:01:41.684115 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 09:01:42 crc kubenswrapper[4826]: I0129 09:01:42.968360 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn"] Jan 29 09:01:42 crc kubenswrapper[4826]: I0129 09:01:42.970006 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn" Jan 29 09:01:42 crc kubenswrapper[4826]: I0129 09:01:42.972275 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn"] Jan 29 09:01:42 crc kubenswrapper[4826]: I0129 09:01:42.983157 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 29 09:01:42 crc kubenswrapper[4826]: I0129 09:01:42.983251 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 29 09:01:42 crc kubenswrapper[4826]: I0129 09:01:42.983312 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-bz2p6" Jan 29 09:01:42 crc kubenswrapper[4826]: I0129 09:01:42.983700 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 09:01:42 crc kubenswrapper[4826]: I0129 09:01:42.984396 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 29 09:01:42 crc kubenswrapper[4826]: I0129 09:01:42.994062 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn\" (UID: \"8bacd48c-ec38-45a4-825d-0684192208bd\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn" Jan 29 09:01:42 crc kubenswrapper[4826]: I0129 09:01:42.994128 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/8bacd48c-ec38-45a4-825d-0684192208bd-nova-cells-global-config-0\") pod 
\"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn\" (UID: \"8bacd48c-ec38-45a4-825d-0684192208bd\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn" Jan 29 09:01:42 crc kubenswrapper[4826]: I0129 09:01:42.994159 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn\" (UID: \"8bacd48c-ec38-45a4-825d-0684192208bd\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn" Jan 29 09:01:42 crc kubenswrapper[4826]: I0129 09:01:42.994201 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpl6l\" (UniqueName: \"kubernetes.io/projected/8bacd48c-ec38-45a4-825d-0684192208bd-kube-api-access-fpl6l\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn\" (UID: \"8bacd48c-ec38-45a4-825d-0684192208bd\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn" Jan 29 09:01:42 crc kubenswrapper[4826]: I0129 09:01:42.994224 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn\" (UID: \"8bacd48c-ec38-45a4-825d-0684192208bd\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn" Jan 29 09:01:42 crc kubenswrapper[4826]: I0129 09:01:42.994423 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-nova-migration-ssh-key-1\") pod 
\"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn\" (UID: \"8bacd48c-ec38-45a4-825d-0684192208bd\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn" Jan 29 09:01:42 crc kubenswrapper[4826]: I0129 09:01:42.994485 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn\" (UID: \"8bacd48c-ec38-45a4-825d-0684192208bd\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn" Jan 29 09:01:42 crc kubenswrapper[4826]: I0129 09:01:42.994580 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn\" (UID: \"8bacd48c-ec38-45a4-825d-0684192208bd\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn" Jan 29 09:01:42 crc kubenswrapper[4826]: I0129 09:01:42.994650 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn\" (UID: \"8bacd48c-ec38-45a4-825d-0684192208bd\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn" Jan 29 09:01:42 crc kubenswrapper[4826]: I0129 09:01:42.994792 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Jan 29 09:01:42 crc kubenswrapper[4826]: I0129 09:01:42.995120 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" 
Jan 29 09:01:43 crc kubenswrapper[4826]: I0129 09:01:43.096918 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn\" (UID: \"8bacd48c-ec38-45a4-825d-0684192208bd\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn" Jan 29 09:01:43 crc kubenswrapper[4826]: I0129 09:01:43.096984 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn\" (UID: \"8bacd48c-ec38-45a4-825d-0684192208bd\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn" Jan 29 09:01:43 crc kubenswrapper[4826]: I0129 09:01:43.097047 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn\" (UID: \"8bacd48c-ec38-45a4-825d-0684192208bd\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn" Jan 29 09:01:43 crc kubenswrapper[4826]: I0129 09:01:43.097095 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn\" (UID: \"8bacd48c-ec38-45a4-825d-0684192208bd\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn" Jan 29 09:01:43 crc kubenswrapper[4826]: I0129 09:01:43.097194 4826 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn\" (UID: \"8bacd48c-ec38-45a4-825d-0684192208bd\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn" Jan 29 09:01:43 crc kubenswrapper[4826]: I0129 09:01:43.097251 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/8bacd48c-ec38-45a4-825d-0684192208bd-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn\" (UID: \"8bacd48c-ec38-45a4-825d-0684192208bd\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn" Jan 29 09:01:43 crc kubenswrapper[4826]: I0129 09:01:43.097288 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn\" (UID: \"8bacd48c-ec38-45a4-825d-0684192208bd\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn" Jan 29 09:01:43 crc kubenswrapper[4826]: I0129 09:01:43.097365 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpl6l\" (UniqueName: \"kubernetes.io/projected/8bacd48c-ec38-45a4-825d-0684192208bd-kube-api-access-fpl6l\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn\" (UID: \"8bacd48c-ec38-45a4-825d-0684192208bd\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn" Jan 29 09:01:43 crc kubenswrapper[4826]: I0129 09:01:43.097402 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn\" (UID: \"8bacd48c-ec38-45a4-825d-0684192208bd\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn" Jan 29 09:01:43 crc kubenswrapper[4826]: I0129 09:01:43.102564 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/8bacd48c-ec38-45a4-825d-0684192208bd-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn\" (UID: \"8bacd48c-ec38-45a4-825d-0684192208bd\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn" Jan 29 09:01:43 crc kubenswrapper[4826]: I0129 09:01:43.103458 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn\" (UID: \"8bacd48c-ec38-45a4-825d-0684192208bd\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn" Jan 29 09:01:43 crc kubenswrapper[4826]: I0129 09:01:43.103926 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn\" (UID: \"8bacd48c-ec38-45a4-825d-0684192208bd\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn" Jan 29 09:01:43 crc kubenswrapper[4826]: I0129 09:01:43.104883 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn\" (UID: \"8bacd48c-ec38-45a4-825d-0684192208bd\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn" Jan 29 09:01:43 crc kubenswrapper[4826]: I0129 09:01:43.105427 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn\" (UID: \"8bacd48c-ec38-45a4-825d-0684192208bd\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn" Jan 29 09:01:43 crc kubenswrapper[4826]: I0129 09:01:43.107544 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn\" (UID: \"8bacd48c-ec38-45a4-825d-0684192208bd\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn" Jan 29 09:01:43 crc kubenswrapper[4826]: I0129 09:01:43.113278 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn\" (UID: \"8bacd48c-ec38-45a4-825d-0684192208bd\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn" Jan 29 09:01:43 crc kubenswrapper[4826]: I0129 09:01:43.113888 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-inventory\") pod 
\"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn\" (UID: \"8bacd48c-ec38-45a4-825d-0684192208bd\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn" Jan 29 09:01:43 crc kubenswrapper[4826]: I0129 09:01:43.127748 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpl6l\" (UniqueName: \"kubernetes.io/projected/8bacd48c-ec38-45a4-825d-0684192208bd-kube-api-access-fpl6l\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn\" (UID: \"8bacd48c-ec38-45a4-825d-0684192208bd\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn" Jan 29 09:01:43 crc kubenswrapper[4826]: I0129 09:01:43.313395 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn" Jan 29 09:01:43 crc kubenswrapper[4826]: I0129 09:01:43.890678 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn"] Jan 29 09:01:44 crc kubenswrapper[4826]: I0129 09:01:44.703128 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn" event={"ID":"8bacd48c-ec38-45a4-825d-0684192208bd","Type":"ContainerStarted","Data":"bc5686eff07d17214aa0de222be06724e9fcdb29878e364f5be856091accc10a"} Jan 29 09:01:44 crc kubenswrapper[4826]: I0129 09:01:44.703170 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn" event={"ID":"8bacd48c-ec38-45a4-825d-0684192208bd","Type":"ContainerStarted","Data":"f4b16f5c45a664ce51e2f38240185e7c71b0ef27e8f2e98971071e5e5d1ce58f"} Jan 29 09:01:44 crc kubenswrapper[4826]: I0129 09:01:44.725176 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn" 
podStartSLOduration=2.333770775 podStartE2EDuration="2.725156063s" podCreationTimestamp="2026-01-29 09:01:42 +0000 UTC" firstStartedPulling="2026-01-29 09:01:43.896537514 +0000 UTC m=+8287.758330583" lastFinishedPulling="2026-01-29 09:01:44.287922792 +0000 UTC m=+8288.149715871" observedRunningTime="2026-01-29 09:01:44.720119718 +0000 UTC m=+8288.581912807" watchObservedRunningTime="2026-01-29 09:01:44.725156063 +0000 UTC m=+8288.586949132" Jan 29 09:03:52 crc kubenswrapper[4826]: I0129 09:03:52.707511 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-njzkh"] Jan 29 09:03:52 crc kubenswrapper[4826]: I0129 09:03:52.710790 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-njzkh" Jan 29 09:03:52 crc kubenswrapper[4826]: I0129 09:03:52.720120 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-njzkh"] Jan 29 09:03:52 crc kubenswrapper[4826]: I0129 09:03:52.873614 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx9pf\" (UniqueName: \"kubernetes.io/projected/67e38830-4667-4376-aba0-861b12d2864a-kube-api-access-jx9pf\") pod \"community-operators-njzkh\" (UID: \"67e38830-4667-4376-aba0-861b12d2864a\") " pod="openshift-marketplace/community-operators-njzkh" Jan 29 09:03:52 crc kubenswrapper[4826]: I0129 09:03:52.873990 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67e38830-4667-4376-aba0-861b12d2864a-utilities\") pod \"community-operators-njzkh\" (UID: \"67e38830-4667-4376-aba0-861b12d2864a\") " pod="openshift-marketplace/community-operators-njzkh" Jan 29 09:03:52 crc kubenswrapper[4826]: I0129 09:03:52.874062 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/67e38830-4667-4376-aba0-861b12d2864a-catalog-content\") pod \"community-operators-njzkh\" (UID: \"67e38830-4667-4376-aba0-861b12d2864a\") " pod="openshift-marketplace/community-operators-njzkh" Jan 29 09:03:52 crc kubenswrapper[4826]: I0129 09:03:52.976197 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67e38830-4667-4376-aba0-861b12d2864a-utilities\") pod \"community-operators-njzkh\" (UID: \"67e38830-4667-4376-aba0-861b12d2864a\") " pod="openshift-marketplace/community-operators-njzkh" Jan 29 09:03:52 crc kubenswrapper[4826]: I0129 09:03:52.976275 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67e38830-4667-4376-aba0-861b12d2864a-catalog-content\") pod \"community-operators-njzkh\" (UID: \"67e38830-4667-4376-aba0-861b12d2864a\") " pod="openshift-marketplace/community-operators-njzkh" Jan 29 09:03:52 crc kubenswrapper[4826]: I0129 09:03:52.976367 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx9pf\" (UniqueName: \"kubernetes.io/projected/67e38830-4667-4376-aba0-861b12d2864a-kube-api-access-jx9pf\") pod \"community-operators-njzkh\" (UID: \"67e38830-4667-4376-aba0-861b12d2864a\") " pod="openshift-marketplace/community-operators-njzkh" Jan 29 09:03:52 crc kubenswrapper[4826]: I0129 09:03:52.976928 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67e38830-4667-4376-aba0-861b12d2864a-catalog-content\") pod \"community-operators-njzkh\" (UID: \"67e38830-4667-4376-aba0-861b12d2864a\") " pod="openshift-marketplace/community-operators-njzkh" Jan 29 09:03:52 crc kubenswrapper[4826]: I0129 09:03:52.976976 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/67e38830-4667-4376-aba0-861b12d2864a-utilities\") pod \"community-operators-njzkh\" (UID: \"67e38830-4667-4376-aba0-861b12d2864a\") " pod="openshift-marketplace/community-operators-njzkh" Jan 29 09:03:52 crc kubenswrapper[4826]: I0129 09:03:52.998655 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx9pf\" (UniqueName: \"kubernetes.io/projected/67e38830-4667-4376-aba0-861b12d2864a-kube-api-access-jx9pf\") pod \"community-operators-njzkh\" (UID: \"67e38830-4667-4376-aba0-861b12d2864a\") " pod="openshift-marketplace/community-operators-njzkh" Jan 29 09:03:53 crc kubenswrapper[4826]: I0129 09:03:53.038522 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-njzkh" Jan 29 09:03:53 crc kubenswrapper[4826]: I0129 09:03:53.608602 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-njzkh"] Jan 29 09:03:53 crc kubenswrapper[4826]: W0129 09:03:53.610986 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67e38830_4667_4376_aba0_861b12d2864a.slice/crio-3aab5413e4a7a3b5dbec77c2b39949a35738a1861c07181431fd36cc29641293 WatchSource:0}: Error finding container 3aab5413e4a7a3b5dbec77c2b39949a35738a1861c07181431fd36cc29641293: Status 404 returned error can't find the container with id 3aab5413e4a7a3b5dbec77c2b39949a35738a1861c07181431fd36cc29641293 Jan 29 09:03:53 crc kubenswrapper[4826]: I0129 09:03:53.956195 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-njzkh" event={"ID":"67e38830-4667-4376-aba0-861b12d2864a","Type":"ContainerStarted","Data":"3aab5413e4a7a3b5dbec77c2b39949a35738a1861c07181431fd36cc29641293"} Jan 29 09:03:55 crc kubenswrapper[4826]: I0129 09:03:55.022033 4826 generic.go:334] "Generic (PLEG): container finished" 
podID="67e38830-4667-4376-aba0-861b12d2864a" containerID="da45407eb6f3bb579e70754f598538e842d82e70416edadd10d80da7aae1b002" exitCode=0 Jan 29 09:03:55 crc kubenswrapper[4826]: I0129 09:03:55.022590 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-njzkh" event={"ID":"67e38830-4667-4376-aba0-861b12d2864a","Type":"ContainerDied","Data":"da45407eb6f3bb579e70754f598538e842d82e70416edadd10d80da7aae1b002"} Jan 29 09:03:57 crc kubenswrapper[4826]: I0129 09:03:57.042863 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-njzkh" event={"ID":"67e38830-4667-4376-aba0-861b12d2864a","Type":"ContainerStarted","Data":"07f191b7f4e594494608c157d140a460b04fcbf6c6b3ed65f1aa6de3bf9a244f"} Jan 29 09:03:58 crc kubenswrapper[4826]: I0129 09:03:58.054572 4826 generic.go:334] "Generic (PLEG): container finished" podID="67e38830-4667-4376-aba0-861b12d2864a" containerID="07f191b7f4e594494608c157d140a460b04fcbf6c6b3ed65f1aa6de3bf9a244f" exitCode=0 Jan 29 09:03:58 crc kubenswrapper[4826]: I0129 09:03:58.054653 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-njzkh" event={"ID":"67e38830-4667-4376-aba0-861b12d2864a","Type":"ContainerDied","Data":"07f191b7f4e594494608c157d140a460b04fcbf6c6b3ed65f1aa6de3bf9a244f"} Jan 29 09:03:59 crc kubenswrapper[4826]: I0129 09:03:59.065185 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-njzkh" event={"ID":"67e38830-4667-4376-aba0-861b12d2864a","Type":"ContainerStarted","Data":"91357f83f798e0eb2fb52125537b1d859e681374003ea6dfbfb70c83e945ebbf"} Jan 29 09:03:59 crc kubenswrapper[4826]: I0129 09:03:59.084744 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-njzkh" podStartSLOduration=3.543042963 podStartE2EDuration="7.084721966s" podCreationTimestamp="2026-01-29 09:03:52 +0000 UTC" 
firstStartedPulling="2026-01-29 09:03:55.02955866 +0000 UTC m=+8418.891351729" lastFinishedPulling="2026-01-29 09:03:58.571237653 +0000 UTC m=+8422.433030732" observedRunningTime="2026-01-29 09:03:59.080876043 +0000 UTC m=+8422.942669112" watchObservedRunningTime="2026-01-29 09:03:59.084721966 +0000 UTC m=+8422.946515025" Jan 29 09:04:03 crc kubenswrapper[4826]: I0129 09:04:03.038704 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-njzkh" Jan 29 09:04:03 crc kubenswrapper[4826]: I0129 09:04:03.039188 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-njzkh" Jan 29 09:04:03 crc kubenswrapper[4826]: I0129 09:04:03.083203 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-njzkh" Jan 29 09:04:03 crc kubenswrapper[4826]: I0129 09:04:03.160252 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-njzkh" Jan 29 09:04:03 crc kubenswrapper[4826]: I0129 09:04:03.319051 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-njzkh"] Jan 29 09:04:05 crc kubenswrapper[4826]: I0129 09:04:05.128368 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-njzkh" podUID="67e38830-4667-4376-aba0-861b12d2864a" containerName="registry-server" containerID="cri-o://91357f83f798e0eb2fb52125537b1d859e681374003ea6dfbfb70c83e945ebbf" gracePeriod=2 Jan 29 09:04:05 crc kubenswrapper[4826]: I0129 09:04:05.657455 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:04:05 
crc kubenswrapper[4826]: I0129 09:04:05.657795 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:04:05 crc kubenswrapper[4826]: I0129 09:04:05.764639 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-njzkh" Jan 29 09:04:05 crc kubenswrapper[4826]: I0129 09:04:05.777137 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67e38830-4667-4376-aba0-861b12d2864a-utilities\") pod \"67e38830-4667-4376-aba0-861b12d2864a\" (UID: \"67e38830-4667-4376-aba0-861b12d2864a\") " Jan 29 09:04:05 crc kubenswrapper[4826]: I0129 09:04:05.778094 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67e38830-4667-4376-aba0-861b12d2864a-utilities" (OuterVolumeSpecName: "utilities") pod "67e38830-4667-4376-aba0-861b12d2864a" (UID: "67e38830-4667-4376-aba0-861b12d2864a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:04:05 crc kubenswrapper[4826]: I0129 09:04:05.779017 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67e38830-4667-4376-aba0-861b12d2864a-catalog-content\") pod \"67e38830-4667-4376-aba0-861b12d2864a\" (UID: \"67e38830-4667-4376-aba0-861b12d2864a\") " Jan 29 09:04:05 crc kubenswrapper[4826]: I0129 09:04:05.779076 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jx9pf\" (UniqueName: \"kubernetes.io/projected/67e38830-4667-4376-aba0-861b12d2864a-kube-api-access-jx9pf\") pod \"67e38830-4667-4376-aba0-861b12d2864a\" (UID: \"67e38830-4667-4376-aba0-861b12d2864a\") " Jan 29 09:04:05 crc kubenswrapper[4826]: I0129 09:04:05.797360 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67e38830-4667-4376-aba0-861b12d2864a-kube-api-access-jx9pf" (OuterVolumeSpecName: "kube-api-access-jx9pf") pod "67e38830-4667-4376-aba0-861b12d2864a" (UID: "67e38830-4667-4376-aba0-861b12d2864a"). InnerVolumeSpecName "kube-api-access-jx9pf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:04:05 crc kubenswrapper[4826]: I0129 09:04:05.797852 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67e38830-4667-4376-aba0-861b12d2864a-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:05 crc kubenswrapper[4826]: I0129 09:04:05.797872 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jx9pf\" (UniqueName: \"kubernetes.io/projected/67e38830-4667-4376-aba0-861b12d2864a-kube-api-access-jx9pf\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:05 crc kubenswrapper[4826]: I0129 09:04:05.869455 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67e38830-4667-4376-aba0-861b12d2864a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67e38830-4667-4376-aba0-861b12d2864a" (UID: "67e38830-4667-4376-aba0-861b12d2864a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:04:05 crc kubenswrapper[4826]: I0129 09:04:05.905874 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67e38830-4667-4376-aba0-861b12d2864a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:06 crc kubenswrapper[4826]: I0129 09:04:06.139412 4826 generic.go:334] "Generic (PLEG): container finished" podID="67e38830-4667-4376-aba0-861b12d2864a" containerID="91357f83f798e0eb2fb52125537b1d859e681374003ea6dfbfb70c83e945ebbf" exitCode=0 Jan 29 09:04:06 crc kubenswrapper[4826]: I0129 09:04:06.139454 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-njzkh" event={"ID":"67e38830-4667-4376-aba0-861b12d2864a","Type":"ContainerDied","Data":"91357f83f798e0eb2fb52125537b1d859e681374003ea6dfbfb70c83e945ebbf"} Jan 29 09:04:06 crc kubenswrapper[4826]: I0129 09:04:06.139479 4826 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-njzkh" event={"ID":"67e38830-4667-4376-aba0-861b12d2864a","Type":"ContainerDied","Data":"3aab5413e4a7a3b5dbec77c2b39949a35738a1861c07181431fd36cc29641293"} Jan 29 09:04:06 crc kubenswrapper[4826]: I0129 09:04:06.139500 4826 scope.go:117] "RemoveContainer" containerID="91357f83f798e0eb2fb52125537b1d859e681374003ea6dfbfb70c83e945ebbf" Jan 29 09:04:06 crc kubenswrapper[4826]: I0129 09:04:06.140680 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-njzkh" Jan 29 09:04:06 crc kubenswrapper[4826]: I0129 09:04:06.161384 4826 scope.go:117] "RemoveContainer" containerID="07f191b7f4e594494608c157d140a460b04fcbf6c6b3ed65f1aa6de3bf9a244f" Jan 29 09:04:06 crc kubenswrapper[4826]: I0129 09:04:06.177190 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-njzkh"] Jan 29 09:04:06 crc kubenswrapper[4826]: I0129 09:04:06.202461 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-njzkh"] Jan 29 09:04:06 crc kubenswrapper[4826]: I0129 09:04:06.203075 4826 scope.go:117] "RemoveContainer" containerID="da45407eb6f3bb579e70754f598538e842d82e70416edadd10d80da7aae1b002" Jan 29 09:04:06 crc kubenswrapper[4826]: I0129 09:04:06.243189 4826 scope.go:117] "RemoveContainer" containerID="91357f83f798e0eb2fb52125537b1d859e681374003ea6dfbfb70c83e945ebbf" Jan 29 09:04:06 crc kubenswrapper[4826]: E0129 09:04:06.243833 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91357f83f798e0eb2fb52125537b1d859e681374003ea6dfbfb70c83e945ebbf\": container with ID starting with 91357f83f798e0eb2fb52125537b1d859e681374003ea6dfbfb70c83e945ebbf not found: ID does not exist" containerID="91357f83f798e0eb2fb52125537b1d859e681374003ea6dfbfb70c83e945ebbf" Jan 29 09:04:06 crc kubenswrapper[4826]: I0129 
09:04:06.243884 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91357f83f798e0eb2fb52125537b1d859e681374003ea6dfbfb70c83e945ebbf"} err="failed to get container status \"91357f83f798e0eb2fb52125537b1d859e681374003ea6dfbfb70c83e945ebbf\": rpc error: code = NotFound desc = could not find container \"91357f83f798e0eb2fb52125537b1d859e681374003ea6dfbfb70c83e945ebbf\": container with ID starting with 91357f83f798e0eb2fb52125537b1d859e681374003ea6dfbfb70c83e945ebbf not found: ID does not exist" Jan 29 09:04:06 crc kubenswrapper[4826]: I0129 09:04:06.243916 4826 scope.go:117] "RemoveContainer" containerID="07f191b7f4e594494608c157d140a460b04fcbf6c6b3ed65f1aa6de3bf9a244f" Jan 29 09:04:06 crc kubenswrapper[4826]: E0129 09:04:06.244402 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07f191b7f4e594494608c157d140a460b04fcbf6c6b3ed65f1aa6de3bf9a244f\": container with ID starting with 07f191b7f4e594494608c157d140a460b04fcbf6c6b3ed65f1aa6de3bf9a244f not found: ID does not exist" containerID="07f191b7f4e594494608c157d140a460b04fcbf6c6b3ed65f1aa6de3bf9a244f" Jan 29 09:04:06 crc kubenswrapper[4826]: I0129 09:04:06.244432 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07f191b7f4e594494608c157d140a460b04fcbf6c6b3ed65f1aa6de3bf9a244f"} err="failed to get container status \"07f191b7f4e594494608c157d140a460b04fcbf6c6b3ed65f1aa6de3bf9a244f\": rpc error: code = NotFound desc = could not find container \"07f191b7f4e594494608c157d140a460b04fcbf6c6b3ed65f1aa6de3bf9a244f\": container with ID starting with 07f191b7f4e594494608c157d140a460b04fcbf6c6b3ed65f1aa6de3bf9a244f not found: ID does not exist" Jan 29 09:04:06 crc kubenswrapper[4826]: I0129 09:04:06.244451 4826 scope.go:117] "RemoveContainer" containerID="da45407eb6f3bb579e70754f598538e842d82e70416edadd10d80da7aae1b002" Jan 29 09:04:06 crc 
kubenswrapper[4826]: E0129 09:04:06.244652 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da45407eb6f3bb579e70754f598538e842d82e70416edadd10d80da7aae1b002\": container with ID starting with da45407eb6f3bb579e70754f598538e842d82e70416edadd10d80da7aae1b002 not found: ID does not exist" containerID="da45407eb6f3bb579e70754f598538e842d82e70416edadd10d80da7aae1b002" Jan 29 09:04:06 crc kubenswrapper[4826]: I0129 09:04:06.244679 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da45407eb6f3bb579e70754f598538e842d82e70416edadd10d80da7aae1b002"} err="failed to get container status \"da45407eb6f3bb579e70754f598538e842d82e70416edadd10d80da7aae1b002\": rpc error: code = NotFound desc = could not find container \"da45407eb6f3bb579e70754f598538e842d82e70416edadd10d80da7aae1b002\": container with ID starting with da45407eb6f3bb579e70754f598538e842d82e70416edadd10d80da7aae1b002 not found: ID does not exist" Jan 29 09:04:06 crc kubenswrapper[4826]: I0129 09:04:06.818624 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67e38830-4667-4376-aba0-861b12d2864a" path="/var/lib/kubelet/pods/67e38830-4667-4376-aba0-861b12d2864a/volumes" Jan 29 09:04:27 crc kubenswrapper[4826]: I0129 09:04:27.331557 4826 generic.go:334] "Generic (PLEG): container finished" podID="8bacd48c-ec38-45a4-825d-0684192208bd" containerID="bc5686eff07d17214aa0de222be06724e9fcdb29878e364f5be856091accc10a" exitCode=0 Jan 29 09:04:27 crc kubenswrapper[4826]: I0129 09:04:27.331703 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn" event={"ID":"8bacd48c-ec38-45a4-825d-0684192208bd","Type":"ContainerDied","Data":"bc5686eff07d17214aa0de222be06724e9fcdb29878e364f5be856091accc10a"} Jan 29 09:04:29 crc kubenswrapper[4826]: I0129 09:04:29.004358 4826 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn" Jan 29 09:04:29 crc kubenswrapper[4826]: I0129 09:04:29.118367 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-nova-cell1-compute-config-0\") pod \"8bacd48c-ec38-45a4-825d-0684192208bd\" (UID: \"8bacd48c-ec38-45a4-825d-0684192208bd\") " Jan 29 09:04:29 crc kubenswrapper[4826]: I0129 09:04:29.118491 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/8bacd48c-ec38-45a4-825d-0684192208bd-nova-cells-global-config-0\") pod \"8bacd48c-ec38-45a4-825d-0684192208bd\" (UID: \"8bacd48c-ec38-45a4-825d-0684192208bd\") " Jan 29 09:04:29 crc kubenswrapper[4826]: I0129 09:04:29.118576 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-nova-cell1-compute-config-1\") pod \"8bacd48c-ec38-45a4-825d-0684192208bd\" (UID: \"8bacd48c-ec38-45a4-825d-0684192208bd\") " Jan 29 09:04:29 crc kubenswrapper[4826]: I0129 09:04:29.118657 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpl6l\" (UniqueName: \"kubernetes.io/projected/8bacd48c-ec38-45a4-825d-0684192208bd-kube-api-access-fpl6l\") pod \"8bacd48c-ec38-45a4-825d-0684192208bd\" (UID: \"8bacd48c-ec38-45a4-825d-0684192208bd\") " Jan 29 09:04:29 crc kubenswrapper[4826]: I0129 09:04:29.118853 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-nova-cell1-combined-ca-bundle\") pod \"8bacd48c-ec38-45a4-825d-0684192208bd\" (UID: \"8bacd48c-ec38-45a4-825d-0684192208bd\") " 
Jan 29 09:04:29 crc kubenswrapper[4826]: I0129 09:04:29.118919 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-nova-migration-ssh-key-1\") pod \"8bacd48c-ec38-45a4-825d-0684192208bd\" (UID: \"8bacd48c-ec38-45a4-825d-0684192208bd\") " Jan 29 09:04:29 crc kubenswrapper[4826]: I0129 09:04:29.118950 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-ssh-key-openstack-cell1\") pod \"8bacd48c-ec38-45a4-825d-0684192208bd\" (UID: \"8bacd48c-ec38-45a4-825d-0684192208bd\") " Jan 29 09:04:29 crc kubenswrapper[4826]: I0129 09:04:29.119256 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-inventory\") pod \"8bacd48c-ec38-45a4-825d-0684192208bd\" (UID: \"8bacd48c-ec38-45a4-825d-0684192208bd\") " Jan 29 09:04:29 crc kubenswrapper[4826]: I0129 09:04:29.119330 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-nova-migration-ssh-key-0\") pod \"8bacd48c-ec38-45a4-825d-0684192208bd\" (UID: \"8bacd48c-ec38-45a4-825d-0684192208bd\") " Jan 29 09:04:29 crc kubenswrapper[4826]: I0129 09:04:29.125508 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bacd48c-ec38-45a4-825d-0684192208bd-kube-api-access-fpl6l" (OuterVolumeSpecName: "kube-api-access-fpl6l") pod "8bacd48c-ec38-45a4-825d-0684192208bd" (UID: "8bacd48c-ec38-45a4-825d-0684192208bd"). InnerVolumeSpecName "kube-api-access-fpl6l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:04:29 crc kubenswrapper[4826]: I0129 09:04:29.127724 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "8bacd48c-ec38-45a4-825d-0684192208bd" (UID: "8bacd48c-ec38-45a4-825d-0684192208bd"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:04:29 crc kubenswrapper[4826]: I0129 09:04:29.148060 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bacd48c-ec38-45a4-825d-0684192208bd-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "8bacd48c-ec38-45a4-825d-0684192208bd" (UID: "8bacd48c-ec38-45a4-825d-0684192208bd"). InnerVolumeSpecName "nova-cells-global-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:04:29 crc kubenswrapper[4826]: I0129 09:04:29.149653 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-inventory" (OuterVolumeSpecName: "inventory") pod "8bacd48c-ec38-45a4-825d-0684192208bd" (UID: "8bacd48c-ec38-45a4-825d-0684192208bd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:04:29 crc kubenswrapper[4826]: I0129 09:04:29.150189 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "8bacd48c-ec38-45a4-825d-0684192208bd" (UID: "8bacd48c-ec38-45a4-825d-0684192208bd"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:04:29 crc kubenswrapper[4826]: I0129 09:04:29.150690 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "8bacd48c-ec38-45a4-825d-0684192208bd" (UID: "8bacd48c-ec38-45a4-825d-0684192208bd"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:04:29 crc kubenswrapper[4826]: I0129 09:04:29.157493 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "8bacd48c-ec38-45a4-825d-0684192208bd" (UID: "8bacd48c-ec38-45a4-825d-0684192208bd"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:04:29 crc kubenswrapper[4826]: I0129 09:04:29.159225 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "8bacd48c-ec38-45a4-825d-0684192208bd" (UID: "8bacd48c-ec38-45a4-825d-0684192208bd"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:04:29 crc kubenswrapper[4826]: I0129 09:04:29.183664 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "8bacd48c-ec38-45a4-825d-0684192208bd" (UID: "8bacd48c-ec38-45a4-825d-0684192208bd"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:04:29 crc kubenswrapper[4826]: I0129 09:04:29.222222 4826 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:29 crc kubenswrapper[4826]: I0129 09:04:29.222262 4826 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:29 crc kubenswrapper[4826]: I0129 09:04:29.222272 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:29 crc kubenswrapper[4826]: I0129 09:04:29.222281 4826 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:29 crc kubenswrapper[4826]: I0129 09:04:29.222290 4826 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:29 crc kubenswrapper[4826]: I0129 09:04:29.222313 4826 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:29 crc kubenswrapper[4826]: I0129 09:04:29.222324 4826 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: 
\"kubernetes.io/configmap/8bacd48c-ec38-45a4-825d-0684192208bd-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:29 crc kubenswrapper[4826]: I0129 09:04:29.222335 4826 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8bacd48c-ec38-45a4-825d-0684192208bd-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:29 crc kubenswrapper[4826]: I0129 09:04:29.222347 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpl6l\" (UniqueName: \"kubernetes.io/projected/8bacd48c-ec38-45a4-825d-0684192208bd-kube-api-access-fpl6l\") on node \"crc\" DevicePath \"\"" Jan 29 09:04:29 crc kubenswrapper[4826]: I0129 09:04:29.350148 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn" event={"ID":"8bacd48c-ec38-45a4-825d-0684192208bd","Type":"ContainerDied","Data":"f4b16f5c45a664ce51e2f38240185e7c71b0ef27e8f2e98971071e5e5d1ce58f"} Jan 29 09:04:29 crc kubenswrapper[4826]: I0129 09:04:29.350205 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4b16f5c45a664ce51e2f38240185e7c71b0ef27e8f2e98971071e5e5d1ce58f" Jan 29 09:04:29 crc kubenswrapper[4826]: I0129 09:04:29.350289 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn" Jan 29 09:04:35 crc kubenswrapper[4826]: I0129 09:04:35.656130 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:04:35 crc kubenswrapper[4826]: I0129 09:04:35.656738 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:05:05 crc kubenswrapper[4826]: I0129 09:05:05.656394 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:05:05 crc kubenswrapper[4826]: I0129 09:05:05.656859 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:05:05 crc kubenswrapper[4826]: I0129 09:05:05.656893 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" Jan 29 09:05:05 crc kubenswrapper[4826]: I0129 09:05:05.657454 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"2af13fc74b9b9ef9a03ea411f88a9223259f0cd3bbca719eaa0a1a52cc09f980"} pod="openshift-machine-config-operator/machine-config-daemon-llzmh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 09:05:05 crc kubenswrapper[4826]: I0129 09:05:05.657502 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" containerID="cri-o://2af13fc74b9b9ef9a03ea411f88a9223259f0cd3bbca719eaa0a1a52cc09f980" gracePeriod=600 Jan 29 09:05:06 crc kubenswrapper[4826]: I0129 09:05:06.726548 4826 generic.go:334] "Generic (PLEG): container finished" podID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerID="2af13fc74b9b9ef9a03ea411f88a9223259f0cd3bbca719eaa0a1a52cc09f980" exitCode=0 Jan 29 09:05:06 crc kubenswrapper[4826]: I0129 09:05:06.726798 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerDied","Data":"2af13fc74b9b9ef9a03ea411f88a9223259f0cd3bbca719eaa0a1a52cc09f980"} Jan 29 09:05:06 crc kubenswrapper[4826]: I0129 09:05:06.727766 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerStarted","Data":"060f3e6ddb7d2341eb86bbef7e0e8efdc59bfb9a3b64d8e2544dc73ba67480a8"} Jan 29 09:05:06 crc kubenswrapper[4826]: I0129 09:05:06.727797 4826 scope.go:117] "RemoveContainer" containerID="687be0e4fab9ea6b1c277db6848c566541ae32496c85f97fc6f9e12e832a22fd" Jan 29 09:06:12 crc kubenswrapper[4826]: I0129 09:06:12.224597 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"] Jan 29 09:06:12 crc kubenswrapper[4826]: I0129 09:06:12.225396 4826 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-copy-data" podUID="3481317b-b919-4828-8880-1b5446b88adb" containerName="adoption" containerID="cri-o://1ffb4254a5ed7663a653716ce73b15da9075226ab195f9b2d15f6c724fcd86d1" gracePeriod=30 Jan 29 09:06:42 crc kubenswrapper[4826]: I0129 09:06:42.682946 4826 generic.go:334] "Generic (PLEG): container finished" podID="3481317b-b919-4828-8880-1b5446b88adb" containerID="1ffb4254a5ed7663a653716ce73b15da9075226ab195f9b2d15f6c724fcd86d1" exitCode=137 Jan 29 09:06:42 crc kubenswrapper[4826]: I0129 09:06:42.683080 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"3481317b-b919-4828-8880-1b5446b88adb","Type":"ContainerDied","Data":"1ffb4254a5ed7663a653716ce73b15da9075226ab195f9b2d15f6c724fcd86d1"} Jan 29 09:06:42 crc kubenswrapper[4826]: I0129 09:06:42.683482 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"3481317b-b919-4828-8880-1b5446b88adb","Type":"ContainerDied","Data":"16ff19e51cd5430c96e77fb2877d7572df08d4b87026c81689d565ed6abcbd2d"} Jan 29 09:06:42 crc kubenswrapper[4826]: I0129 09:06:42.683672 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16ff19e51cd5430c96e77fb2877d7572df08d4b87026c81689d565ed6abcbd2d" Jan 29 09:06:42 crc kubenswrapper[4826]: I0129 09:06:42.713628 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Jan 29 09:06:42 crc kubenswrapper[4826]: I0129 09:06:42.852133 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mariadb-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fbf5e6ad-7b00-417e-b058-e0146e11439e\") pod \"3481317b-b919-4828-8880-1b5446b88adb\" (UID: \"3481317b-b919-4828-8880-1b5446b88adb\") " Jan 29 09:06:42 crc kubenswrapper[4826]: I0129 09:06:42.852288 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8q76\" (UniqueName: \"kubernetes.io/projected/3481317b-b919-4828-8880-1b5446b88adb-kube-api-access-n8q76\") pod \"3481317b-b919-4828-8880-1b5446b88adb\" (UID: \"3481317b-b919-4828-8880-1b5446b88adb\") " Jan 29 09:06:42 crc kubenswrapper[4826]: I0129 09:06:42.865593 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3481317b-b919-4828-8880-1b5446b88adb-kube-api-access-n8q76" (OuterVolumeSpecName: "kube-api-access-n8q76") pod "3481317b-b919-4828-8880-1b5446b88adb" (UID: "3481317b-b919-4828-8880-1b5446b88adb"). InnerVolumeSpecName "kube-api-access-n8q76". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:06:42 crc kubenswrapper[4826]: I0129 09:06:42.871767 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fbf5e6ad-7b00-417e-b058-e0146e11439e" (OuterVolumeSpecName: "mariadb-data") pod "3481317b-b919-4828-8880-1b5446b88adb" (UID: "3481317b-b919-4828-8880-1b5446b88adb"). InnerVolumeSpecName "pvc-fbf5e6ad-7b00-417e-b058-e0146e11439e". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 09:06:42 crc kubenswrapper[4826]: I0129 09:06:42.954994 4826 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-fbf5e6ad-7b00-417e-b058-e0146e11439e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fbf5e6ad-7b00-417e-b058-e0146e11439e\") on node \"crc\" " Jan 29 09:06:42 crc kubenswrapper[4826]: I0129 09:06:42.955038 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8q76\" (UniqueName: \"kubernetes.io/projected/3481317b-b919-4828-8880-1b5446b88adb-kube-api-access-n8q76\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:42 crc kubenswrapper[4826]: I0129 09:06:42.979068 4826 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 29 09:06:42 crc kubenswrapper[4826]: I0129 09:06:42.979242 4826 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-fbf5e6ad-7b00-417e-b058-e0146e11439e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fbf5e6ad-7b00-417e-b058-e0146e11439e") on node "crc" Jan 29 09:06:43 crc kubenswrapper[4826]: I0129 09:06:43.056677 4826 reconciler_common.go:293] "Volume detached for volume \"pvc-fbf5e6ad-7b00-417e-b058-e0146e11439e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fbf5e6ad-7b00-417e-b058-e0146e11439e\") on node \"crc\" DevicePath \"\"" Jan 29 09:06:43 crc kubenswrapper[4826]: I0129 09:06:43.693465 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Jan 29 09:06:43 crc kubenswrapper[4826]: I0129 09:06:43.734580 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"] Jan 29 09:06:43 crc kubenswrapper[4826]: I0129 09:06:43.746753 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-copy-data"] Jan 29 09:06:44 crc kubenswrapper[4826]: I0129 09:06:44.296669 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"] Jan 29 09:06:44 crc kubenswrapper[4826]: I0129 09:06:44.297171 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-copy-data" podUID="05cde62b-3d8f-402e-993b-244facc60c7f" containerName="adoption" containerID="cri-o://7fbe0117a117180fec7ede910f9a96340719f29886cb5d0282e444ad134255a4" gracePeriod=30 Jan 29 09:06:44 crc kubenswrapper[4826]: I0129 09:06:44.820389 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3481317b-b919-4828-8880-1b5446b88adb" path="/var/lib/kubelet/pods/3481317b-b919-4828-8880-1b5446b88adb/volumes" Jan 29 09:06:50 crc kubenswrapper[4826]: I0129 09:06:50.591019 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nkb7s"] Jan 29 09:06:50 crc kubenswrapper[4826]: E0129 09:06:50.591991 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e38830-4667-4376-aba0-861b12d2864a" containerName="extract-content" Jan 29 09:06:50 crc kubenswrapper[4826]: I0129 09:06:50.592004 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e38830-4667-4376-aba0-861b12d2864a" containerName="extract-content" Jan 29 09:06:50 crc kubenswrapper[4826]: E0129 09:06:50.592034 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bacd48c-ec38-45a4-825d-0684192208bd" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Jan 29 09:06:50 crc kubenswrapper[4826]: I0129 09:06:50.592048 4826 
state_mem.go:107] "Deleted CPUSet assignment" podUID="8bacd48c-ec38-45a4-825d-0684192208bd" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Jan 29 09:06:50 crc kubenswrapper[4826]: E0129 09:06:50.592075 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3481317b-b919-4828-8880-1b5446b88adb" containerName="adoption" Jan 29 09:06:50 crc kubenswrapper[4826]: I0129 09:06:50.592082 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="3481317b-b919-4828-8880-1b5446b88adb" containerName="adoption" Jan 29 09:06:50 crc kubenswrapper[4826]: E0129 09:06:50.592102 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e38830-4667-4376-aba0-861b12d2864a" containerName="extract-utilities" Jan 29 09:06:50 crc kubenswrapper[4826]: I0129 09:06:50.592108 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e38830-4667-4376-aba0-861b12d2864a" containerName="extract-utilities" Jan 29 09:06:50 crc kubenswrapper[4826]: E0129 09:06:50.592119 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e38830-4667-4376-aba0-861b12d2864a" containerName="registry-server" Jan 29 09:06:50 crc kubenswrapper[4826]: I0129 09:06:50.592128 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e38830-4667-4376-aba0-861b12d2864a" containerName="registry-server" Jan 29 09:06:50 crc kubenswrapper[4826]: I0129 09:06:50.592325 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="67e38830-4667-4376-aba0-861b12d2864a" containerName="registry-server" Jan 29 09:06:50 crc kubenswrapper[4826]: I0129 09:06:50.592336 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bacd48c-ec38-45a4-825d-0684192208bd" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Jan 29 09:06:50 crc kubenswrapper[4826]: I0129 09:06:50.592358 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="3481317b-b919-4828-8880-1b5446b88adb" containerName="adoption" Jan 
29 09:06:50 crc kubenswrapper[4826]: I0129 09:06:50.593896 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nkb7s" Jan 29 09:06:50 crc kubenswrapper[4826]: I0129 09:06:50.612725 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nkb7s"] Jan 29 09:06:50 crc kubenswrapper[4826]: I0129 09:06:50.725244 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sddqh\" (UniqueName: \"kubernetes.io/projected/2b17903e-dd8a-4dcc-808a-bf6b74daec8a-kube-api-access-sddqh\") pod \"redhat-marketplace-nkb7s\" (UID: \"2b17903e-dd8a-4dcc-808a-bf6b74daec8a\") " pod="openshift-marketplace/redhat-marketplace-nkb7s" Jan 29 09:06:50 crc kubenswrapper[4826]: I0129 09:06:50.725389 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b17903e-dd8a-4dcc-808a-bf6b74daec8a-catalog-content\") pod \"redhat-marketplace-nkb7s\" (UID: \"2b17903e-dd8a-4dcc-808a-bf6b74daec8a\") " pod="openshift-marketplace/redhat-marketplace-nkb7s" Jan 29 09:06:50 crc kubenswrapper[4826]: I0129 09:06:50.725580 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b17903e-dd8a-4dcc-808a-bf6b74daec8a-utilities\") pod \"redhat-marketplace-nkb7s\" (UID: \"2b17903e-dd8a-4dcc-808a-bf6b74daec8a\") " pod="openshift-marketplace/redhat-marketplace-nkb7s" Jan 29 09:06:50 crc kubenswrapper[4826]: I0129 09:06:50.826791 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b17903e-dd8a-4dcc-808a-bf6b74daec8a-utilities\") pod \"redhat-marketplace-nkb7s\" (UID: \"2b17903e-dd8a-4dcc-808a-bf6b74daec8a\") " pod="openshift-marketplace/redhat-marketplace-nkb7s" Jan 29 
09:06:50 crc kubenswrapper[4826]: I0129 09:06:50.826867 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sddqh\" (UniqueName: \"kubernetes.io/projected/2b17903e-dd8a-4dcc-808a-bf6b74daec8a-kube-api-access-sddqh\") pod \"redhat-marketplace-nkb7s\" (UID: \"2b17903e-dd8a-4dcc-808a-bf6b74daec8a\") " pod="openshift-marketplace/redhat-marketplace-nkb7s" Jan 29 09:06:50 crc kubenswrapper[4826]: I0129 09:06:50.826949 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b17903e-dd8a-4dcc-808a-bf6b74daec8a-catalog-content\") pod \"redhat-marketplace-nkb7s\" (UID: \"2b17903e-dd8a-4dcc-808a-bf6b74daec8a\") " pod="openshift-marketplace/redhat-marketplace-nkb7s" Jan 29 09:06:50 crc kubenswrapper[4826]: I0129 09:06:50.827408 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b17903e-dd8a-4dcc-808a-bf6b74daec8a-utilities\") pod \"redhat-marketplace-nkb7s\" (UID: \"2b17903e-dd8a-4dcc-808a-bf6b74daec8a\") " pod="openshift-marketplace/redhat-marketplace-nkb7s" Jan 29 09:06:50 crc kubenswrapper[4826]: I0129 09:06:50.827478 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b17903e-dd8a-4dcc-808a-bf6b74daec8a-catalog-content\") pod \"redhat-marketplace-nkb7s\" (UID: \"2b17903e-dd8a-4dcc-808a-bf6b74daec8a\") " pod="openshift-marketplace/redhat-marketplace-nkb7s" Jan 29 09:06:50 crc kubenswrapper[4826]: I0129 09:06:50.850277 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sddqh\" (UniqueName: \"kubernetes.io/projected/2b17903e-dd8a-4dcc-808a-bf6b74daec8a-kube-api-access-sddqh\") pod \"redhat-marketplace-nkb7s\" (UID: \"2b17903e-dd8a-4dcc-808a-bf6b74daec8a\") " pod="openshift-marketplace/redhat-marketplace-nkb7s" Jan 29 09:06:50 crc kubenswrapper[4826]: 
I0129 09:06:50.920501 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nkb7s" Jan 29 09:06:51 crc kubenswrapper[4826]: I0129 09:06:51.409495 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nkb7s"] Jan 29 09:06:51 crc kubenswrapper[4826]: I0129 09:06:51.776146 4826 generic.go:334] "Generic (PLEG): container finished" podID="2b17903e-dd8a-4dcc-808a-bf6b74daec8a" containerID="e12fe07429545fcfd05de38820a9138eca6e7ec95e641c2e1de4c0a35fd9d192" exitCode=0 Jan 29 09:06:51 crc kubenswrapper[4826]: I0129 09:06:51.776256 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkb7s" event={"ID":"2b17903e-dd8a-4dcc-808a-bf6b74daec8a","Type":"ContainerDied","Data":"e12fe07429545fcfd05de38820a9138eca6e7ec95e641c2e1de4c0a35fd9d192"} Jan 29 09:06:51 crc kubenswrapper[4826]: I0129 09:06:51.776538 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkb7s" event={"ID":"2b17903e-dd8a-4dcc-808a-bf6b74daec8a","Type":"ContainerStarted","Data":"61a48e067918eb155f4e0409c07949b8071ad427ad3a1398cf87e9d939bd9143"} Jan 29 09:06:51 crc kubenswrapper[4826]: I0129 09:06:51.778894 4826 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 09:06:53 crc kubenswrapper[4826]: I0129 09:06:53.804323 4826 generic.go:334] "Generic (PLEG): container finished" podID="2b17903e-dd8a-4dcc-808a-bf6b74daec8a" containerID="811e658d067643d1e0ab212ccfde4009709403d0450cfa08706d7a2060749dab" exitCode=0 Jan 29 09:06:53 crc kubenswrapper[4826]: I0129 09:06:53.804363 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkb7s" event={"ID":"2b17903e-dd8a-4dcc-808a-bf6b74daec8a","Type":"ContainerDied","Data":"811e658d067643d1e0ab212ccfde4009709403d0450cfa08706d7a2060749dab"} Jan 29 09:06:54 crc 
kubenswrapper[4826]: I0129 09:06:54.825429 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkb7s" event={"ID":"2b17903e-dd8a-4dcc-808a-bf6b74daec8a","Type":"ContainerStarted","Data":"cac5050fba2c0cb6b8d631f82634ffb7352fd65a7a122bff53d364f8b84e20af"} Jan 29 09:06:54 crc kubenswrapper[4826]: I0129 09:06:54.851484 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nkb7s" podStartSLOduration=2.17897595 podStartE2EDuration="4.851465682s" podCreationTimestamp="2026-01-29 09:06:50 +0000 UTC" firstStartedPulling="2026-01-29 09:06:51.778600242 +0000 UTC m=+8595.640393311" lastFinishedPulling="2026-01-29 09:06:54.451089974 +0000 UTC m=+8598.312883043" observedRunningTime="2026-01-29 09:06:54.847378473 +0000 UTC m=+8598.709171542" watchObservedRunningTime="2026-01-29 09:06:54.851465682 +0000 UTC m=+8598.713258751" Jan 29 09:07:00 crc kubenswrapper[4826]: I0129 09:07:00.921174 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nkb7s" Jan 29 09:07:00 crc kubenswrapper[4826]: I0129 09:07:00.921775 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nkb7s" Jan 29 09:07:00 crc kubenswrapper[4826]: I0129 09:07:00.972599 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nkb7s" Jan 29 09:07:01 crc kubenswrapper[4826]: I0129 09:07:01.967991 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nkb7s" Jan 29 09:07:04 crc kubenswrapper[4826]: I0129 09:07:04.568945 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nkb7s"] Jan 29 09:07:04 crc kubenswrapper[4826]: I0129 09:07:04.569587 4826 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-marketplace/redhat-marketplace-nkb7s" podUID="2b17903e-dd8a-4dcc-808a-bf6b74daec8a" containerName="registry-server" containerID="cri-o://cac5050fba2c0cb6b8d631f82634ffb7352fd65a7a122bff53d364f8b84e20af" gracePeriod=2 Jan 29 09:07:04 crc kubenswrapper[4826]: I0129 09:07:04.932366 4826 generic.go:334] "Generic (PLEG): container finished" podID="2b17903e-dd8a-4dcc-808a-bf6b74daec8a" containerID="cac5050fba2c0cb6b8d631f82634ffb7352fd65a7a122bff53d364f8b84e20af" exitCode=0 Jan 29 09:07:04 crc kubenswrapper[4826]: I0129 09:07:04.932741 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkb7s" event={"ID":"2b17903e-dd8a-4dcc-808a-bf6b74daec8a","Type":"ContainerDied","Data":"cac5050fba2c0cb6b8d631f82634ffb7352fd65a7a122bff53d364f8b84e20af"} Jan 29 09:07:05 crc kubenswrapper[4826]: I0129 09:07:05.091271 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nkb7s" Jan 29 09:07:05 crc kubenswrapper[4826]: I0129 09:07:05.258657 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sddqh\" (UniqueName: \"kubernetes.io/projected/2b17903e-dd8a-4dcc-808a-bf6b74daec8a-kube-api-access-sddqh\") pod \"2b17903e-dd8a-4dcc-808a-bf6b74daec8a\" (UID: \"2b17903e-dd8a-4dcc-808a-bf6b74daec8a\") " Jan 29 09:07:05 crc kubenswrapper[4826]: I0129 09:07:05.259810 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b17903e-dd8a-4dcc-808a-bf6b74daec8a-catalog-content\") pod \"2b17903e-dd8a-4dcc-808a-bf6b74daec8a\" (UID: \"2b17903e-dd8a-4dcc-808a-bf6b74daec8a\") " Jan 29 09:07:05 crc kubenswrapper[4826]: I0129 09:07:05.260086 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b17903e-dd8a-4dcc-808a-bf6b74daec8a-utilities\") pod 
\"2b17903e-dd8a-4dcc-808a-bf6b74daec8a\" (UID: \"2b17903e-dd8a-4dcc-808a-bf6b74daec8a\") " Jan 29 09:07:05 crc kubenswrapper[4826]: I0129 09:07:05.261012 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b17903e-dd8a-4dcc-808a-bf6b74daec8a-utilities" (OuterVolumeSpecName: "utilities") pod "2b17903e-dd8a-4dcc-808a-bf6b74daec8a" (UID: "2b17903e-dd8a-4dcc-808a-bf6b74daec8a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:07:05 crc kubenswrapper[4826]: I0129 09:07:05.269183 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b17903e-dd8a-4dcc-808a-bf6b74daec8a-kube-api-access-sddqh" (OuterVolumeSpecName: "kube-api-access-sddqh") pod "2b17903e-dd8a-4dcc-808a-bf6b74daec8a" (UID: "2b17903e-dd8a-4dcc-808a-bf6b74daec8a"). InnerVolumeSpecName "kube-api-access-sddqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:07:05 crc kubenswrapper[4826]: I0129 09:07:05.285744 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b17903e-dd8a-4dcc-808a-bf6b74daec8a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b17903e-dd8a-4dcc-808a-bf6b74daec8a" (UID: "2b17903e-dd8a-4dcc-808a-bf6b74daec8a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:07:05 crc kubenswrapper[4826]: I0129 09:07:05.363242 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sddqh\" (UniqueName: \"kubernetes.io/projected/2b17903e-dd8a-4dcc-808a-bf6b74daec8a-kube-api-access-sddqh\") on node \"crc\" DevicePath \"\"" Jan 29 09:07:05 crc kubenswrapper[4826]: I0129 09:07:05.363273 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b17903e-dd8a-4dcc-808a-bf6b74daec8a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:07:05 crc kubenswrapper[4826]: I0129 09:07:05.363283 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b17903e-dd8a-4dcc-808a-bf6b74daec8a-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:07:05 crc kubenswrapper[4826]: I0129 09:07:05.944895 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkb7s" event={"ID":"2b17903e-dd8a-4dcc-808a-bf6b74daec8a","Type":"ContainerDied","Data":"61a48e067918eb155f4e0409c07949b8071ad427ad3a1398cf87e9d939bd9143"} Jan 29 09:07:05 crc kubenswrapper[4826]: I0129 09:07:05.944970 4826 scope.go:117] "RemoveContainer" containerID="cac5050fba2c0cb6b8d631f82634ffb7352fd65a7a122bff53d364f8b84e20af" Jan 29 09:07:05 crc kubenswrapper[4826]: I0129 09:07:05.944987 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nkb7s" Jan 29 09:07:05 crc kubenswrapper[4826]: I0129 09:07:05.982431 4826 scope.go:117] "RemoveContainer" containerID="811e658d067643d1e0ab212ccfde4009709403d0450cfa08706d7a2060749dab" Jan 29 09:07:05 crc kubenswrapper[4826]: I0129 09:07:05.989340 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nkb7s"] Jan 29 09:07:06 crc kubenswrapper[4826]: I0129 09:07:06.004733 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nkb7s"] Jan 29 09:07:06 crc kubenswrapper[4826]: I0129 09:07:06.013518 4826 scope.go:117] "RemoveContainer" containerID="e12fe07429545fcfd05de38820a9138eca6e7ec95e641c2e1de4c0a35fd9d192" Jan 29 09:07:06 crc kubenswrapper[4826]: I0129 09:07:06.820033 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b17903e-dd8a-4dcc-808a-bf6b74daec8a" path="/var/lib/kubelet/pods/2b17903e-dd8a-4dcc-808a-bf6b74daec8a/volumes" Jan 29 09:07:14 crc kubenswrapper[4826]: I0129 09:07:14.790238 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Jan 29 09:07:14 crc kubenswrapper[4826]: I0129 09:07:14.899140 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/05cde62b-3d8f-402e-993b-244facc60c7f-ovn-data-cert\") pod \"05cde62b-3d8f-402e-993b-244facc60c7f\" (UID: \"05cde62b-3d8f-402e-993b-244facc60c7f\") " Jan 29 09:07:14 crc kubenswrapper[4826]: I0129 09:07:14.899275 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4lbl\" (UniqueName: \"kubernetes.io/projected/05cde62b-3d8f-402e-993b-244facc60c7f-kube-api-access-z4lbl\") pod \"05cde62b-3d8f-402e-993b-244facc60c7f\" (UID: \"05cde62b-3d8f-402e-993b-244facc60c7f\") " Jan 29 09:07:14 crc kubenswrapper[4826]: I0129 09:07:14.907863 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05cde62b-3d8f-402e-993b-244facc60c7f-ovn-data-cert" (OuterVolumeSpecName: "ovn-data-cert") pod "05cde62b-3d8f-402e-993b-244facc60c7f" (UID: "05cde62b-3d8f-402e-993b-244facc60c7f"). InnerVolumeSpecName "ovn-data-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:07:14 crc kubenswrapper[4826]: I0129 09:07:14.908533 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05cde62b-3d8f-402e-993b-244facc60c7f-kube-api-access-z4lbl" (OuterVolumeSpecName: "kube-api-access-z4lbl") pod "05cde62b-3d8f-402e-993b-244facc60c7f" (UID: "05cde62b-3d8f-402e-993b-244facc60c7f"). InnerVolumeSpecName "kube-api-access-z4lbl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:07:14 crc kubenswrapper[4826]: I0129 09:07:14.990454 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-32315dc2-dce8-4946-a51f-2775a0aab875\") pod \"05cde62b-3d8f-402e-993b-244facc60c7f\" (UID: \"05cde62b-3d8f-402e-993b-244facc60c7f\") " Jan 29 09:07:14 crc kubenswrapper[4826]: I0129 09:07:14.991631 4826 reconciler_common.go:293] "Volume detached for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/05cde62b-3d8f-402e-993b-244facc60c7f-ovn-data-cert\") on node \"crc\" DevicePath \"\"" Jan 29 09:07:14 crc kubenswrapper[4826]: I0129 09:07:14.991667 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4lbl\" (UniqueName: \"kubernetes.io/projected/05cde62b-3d8f-402e-993b-244facc60c7f-kube-api-access-z4lbl\") on node \"crc\" DevicePath \"\"" Jan 29 09:07:15 crc kubenswrapper[4826]: I0129 09:07:15.017385 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-32315dc2-dce8-4946-a51f-2775a0aab875" (OuterVolumeSpecName: "ovn-data") pod "05cde62b-3d8f-402e-993b-244facc60c7f" (UID: "05cde62b-3d8f-402e-993b-244facc60c7f"). InnerVolumeSpecName "pvc-32315dc2-dce8-4946-a51f-2775a0aab875". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 09:07:15 crc kubenswrapper[4826]: I0129 09:07:15.030853 4826 generic.go:334] "Generic (PLEG): container finished" podID="05cde62b-3d8f-402e-993b-244facc60c7f" containerID="7fbe0117a117180fec7ede910f9a96340719f29886cb5d0282e444ad134255a4" exitCode=137 Jan 29 09:07:15 crc kubenswrapper[4826]: I0129 09:07:15.030898 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"05cde62b-3d8f-402e-993b-244facc60c7f","Type":"ContainerDied","Data":"7fbe0117a117180fec7ede910f9a96340719f29886cb5d0282e444ad134255a4"} Jan 29 09:07:15 crc kubenswrapper[4826]: I0129 09:07:15.030918 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Jan 29 09:07:15 crc kubenswrapper[4826]: I0129 09:07:15.030936 4826 scope.go:117] "RemoveContainer" containerID="7fbe0117a117180fec7ede910f9a96340719f29886cb5d0282e444ad134255a4" Jan 29 09:07:15 crc kubenswrapper[4826]: I0129 09:07:15.030925 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"05cde62b-3d8f-402e-993b-244facc60c7f","Type":"ContainerDied","Data":"d68e39ce2a9b2359f9af1de0a2725358fbbb0c4a0ab48d586c862c68d6e8c985"} Jan 29 09:07:15 crc kubenswrapper[4826]: I0129 09:07:15.077747 4826 scope.go:117] "RemoveContainer" containerID="7fbe0117a117180fec7ede910f9a96340719f29886cb5d0282e444ad134255a4" Jan 29 09:07:15 crc kubenswrapper[4826]: I0129 09:07:15.078423 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"] Jan 29 09:07:15 crc kubenswrapper[4826]: E0129 09:07:15.078923 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fbe0117a117180fec7ede910f9a96340719f29886cb5d0282e444ad134255a4\": container with ID starting with 7fbe0117a117180fec7ede910f9a96340719f29886cb5d0282e444ad134255a4 not found: ID does not exist" 
containerID="7fbe0117a117180fec7ede910f9a96340719f29886cb5d0282e444ad134255a4" Jan 29 09:07:15 crc kubenswrapper[4826]: I0129 09:07:15.078976 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fbe0117a117180fec7ede910f9a96340719f29886cb5d0282e444ad134255a4"} err="failed to get container status \"7fbe0117a117180fec7ede910f9a96340719f29886cb5d0282e444ad134255a4\": rpc error: code = NotFound desc = could not find container \"7fbe0117a117180fec7ede910f9a96340719f29886cb5d0282e444ad134255a4\": container with ID starting with 7fbe0117a117180fec7ede910f9a96340719f29886cb5d0282e444ad134255a4 not found: ID does not exist" Jan 29 09:07:15 crc kubenswrapper[4826]: I0129 09:07:15.086520 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-copy-data"] Jan 29 09:07:15 crc kubenswrapper[4826]: I0129 09:07:15.094106 4826 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-32315dc2-dce8-4946-a51f-2775a0aab875\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-32315dc2-dce8-4946-a51f-2775a0aab875\") on node \"crc\" " Jan 29 09:07:15 crc kubenswrapper[4826]: I0129 09:07:15.119092 4826 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 29 09:07:15 crc kubenswrapper[4826]: I0129 09:07:15.119342 4826 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-32315dc2-dce8-4946-a51f-2775a0aab875" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-32315dc2-dce8-4946-a51f-2775a0aab875") on node "crc" Jan 29 09:07:15 crc kubenswrapper[4826]: I0129 09:07:15.196693 4826 reconciler_common.go:293] "Volume detached for volume \"pvc-32315dc2-dce8-4946-a51f-2775a0aab875\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-32315dc2-dce8-4946-a51f-2775a0aab875\") on node \"crc\" DevicePath \"\"" Jan 29 09:07:16 crc kubenswrapper[4826]: I0129 09:07:16.823731 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05cde62b-3d8f-402e-993b-244facc60c7f" path="/var/lib/kubelet/pods/05cde62b-3d8f-402e-993b-244facc60c7f/volumes" Jan 29 09:07:25 crc kubenswrapper[4826]: I0129 09:07:25.547424 4826 scope.go:117] "RemoveContainer" containerID="1ffb4254a5ed7663a653716ce73b15da9075226ab195f9b2d15f6c724fcd86d1" Jan 29 09:07:34 crc kubenswrapper[4826]: I0129 09:07:34.522751 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 29 09:07:34 crc kubenswrapper[4826]: E0129 09:07:34.525435 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b17903e-dd8a-4dcc-808a-bf6b74daec8a" containerName="extract-content" Jan 29 09:07:34 crc kubenswrapper[4826]: I0129 09:07:34.525458 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b17903e-dd8a-4dcc-808a-bf6b74daec8a" containerName="extract-content" Jan 29 09:07:34 crc kubenswrapper[4826]: E0129 09:07:34.525478 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b17903e-dd8a-4dcc-808a-bf6b74daec8a" containerName="extract-utilities" Jan 29 09:07:34 crc kubenswrapper[4826]: I0129 09:07:34.525486 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b17903e-dd8a-4dcc-808a-bf6b74daec8a" containerName="extract-utilities" Jan 29 
09:07:34 crc kubenswrapper[4826]: E0129 09:07:34.525500 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05cde62b-3d8f-402e-993b-244facc60c7f" containerName="adoption" Jan 29 09:07:34 crc kubenswrapper[4826]: I0129 09:07:34.525506 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="05cde62b-3d8f-402e-993b-244facc60c7f" containerName="adoption" Jan 29 09:07:34 crc kubenswrapper[4826]: E0129 09:07:34.525514 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b17903e-dd8a-4dcc-808a-bf6b74daec8a" containerName="registry-server" Jan 29 09:07:34 crc kubenswrapper[4826]: I0129 09:07:34.525519 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b17903e-dd8a-4dcc-808a-bf6b74daec8a" containerName="registry-server" Jan 29 09:07:34 crc kubenswrapper[4826]: I0129 09:07:34.525812 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b17903e-dd8a-4dcc-808a-bf6b74daec8a" containerName="registry-server" Jan 29 09:07:34 crc kubenswrapper[4826]: I0129 09:07:34.525851 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="05cde62b-3d8f-402e-993b-244facc60c7f" containerName="adoption" Jan 29 09:07:34 crc kubenswrapper[4826]: I0129 09:07:34.528044 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 29 09:07:34 crc kubenswrapper[4826]: I0129 09:07:34.530985 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-qlrsf" Jan 29 09:07:34 crc kubenswrapper[4826]: I0129 09:07:34.531192 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 29 09:07:34 crc kubenswrapper[4826]: I0129 09:07:34.531604 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 29 09:07:34 crc kubenswrapper[4826]: I0129 09:07:34.534633 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 29 09:07:34 crc kubenswrapper[4826]: I0129 09:07:34.538080 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 29 09:07:34 crc kubenswrapper[4826]: I0129 09:07:34.603095 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e54d8ec-766b-4711-94e7-08fcfe836c67-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"0e54d8ec-766b-4711-94e7-08fcfe836c67\") " pod="openstack/tempest-tests-tempest" Jan 29 09:07:34 crc kubenswrapper[4826]: I0129 09:07:34.603186 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzz8b\" (UniqueName: \"kubernetes.io/projected/0e54d8ec-766b-4711-94e7-08fcfe836c67-kube-api-access-qzz8b\") pod \"tempest-tests-tempest\" (UID: \"0e54d8ec-766b-4711-94e7-08fcfe836c67\") " pod="openstack/tempest-tests-tempest" Jan 29 09:07:34 crc kubenswrapper[4826]: I0129 09:07:34.603234 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e54d8ec-766b-4711-94e7-08fcfe836c67-config-data\") pod 
\"tempest-tests-tempest\" (UID: \"0e54d8ec-766b-4711-94e7-08fcfe836c67\") " pod="openstack/tempest-tests-tempest" Jan 29 09:07:34 crc kubenswrapper[4826]: I0129 09:07:34.603490 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0e54d8ec-766b-4711-94e7-08fcfe836c67-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"0e54d8ec-766b-4711-94e7-08fcfe836c67\") " pod="openstack/tempest-tests-tempest" Jan 29 09:07:34 crc kubenswrapper[4826]: I0129 09:07:34.603596 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"0e54d8ec-766b-4711-94e7-08fcfe836c67\") " pod="openstack/tempest-tests-tempest" Jan 29 09:07:34 crc kubenswrapper[4826]: I0129 09:07:34.603638 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0e54d8ec-766b-4711-94e7-08fcfe836c67-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"0e54d8ec-766b-4711-94e7-08fcfe836c67\") " pod="openstack/tempest-tests-tempest" Jan 29 09:07:34 crc kubenswrapper[4826]: I0129 09:07:34.603740 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0e54d8ec-766b-4711-94e7-08fcfe836c67-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"0e54d8ec-766b-4711-94e7-08fcfe836c67\") " pod="openstack/tempest-tests-tempest" Jan 29 09:07:34 crc kubenswrapper[4826]: I0129 09:07:34.603841 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/secret/0e54d8ec-766b-4711-94e7-08fcfe836c67-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"0e54d8ec-766b-4711-94e7-08fcfe836c67\") " pod="openstack/tempest-tests-tempest" Jan 29 09:07:34 crc kubenswrapper[4826]: I0129 09:07:34.603868 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0e54d8ec-766b-4711-94e7-08fcfe836c67-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"0e54d8ec-766b-4711-94e7-08fcfe836c67\") " pod="openstack/tempest-tests-tempest" Jan 29 09:07:34 crc kubenswrapper[4826]: I0129 09:07:34.706313 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzz8b\" (UniqueName: \"kubernetes.io/projected/0e54d8ec-766b-4711-94e7-08fcfe836c67-kube-api-access-qzz8b\") pod \"tempest-tests-tempest\" (UID: \"0e54d8ec-766b-4711-94e7-08fcfe836c67\") " pod="openstack/tempest-tests-tempest" Jan 29 09:07:34 crc kubenswrapper[4826]: I0129 09:07:34.706405 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e54d8ec-766b-4711-94e7-08fcfe836c67-config-data\") pod \"tempest-tests-tempest\" (UID: \"0e54d8ec-766b-4711-94e7-08fcfe836c67\") " pod="openstack/tempest-tests-tempest" Jan 29 09:07:34 crc kubenswrapper[4826]: I0129 09:07:34.706512 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0e54d8ec-766b-4711-94e7-08fcfe836c67-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"0e54d8ec-766b-4711-94e7-08fcfe836c67\") " pod="openstack/tempest-tests-tempest" Jan 29 09:07:34 crc kubenswrapper[4826]: I0129 09:07:34.706562 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" 
(UID: \"0e54d8ec-766b-4711-94e7-08fcfe836c67\") " pod="openstack/tempest-tests-tempest" Jan 29 09:07:34 crc kubenswrapper[4826]: I0129 09:07:34.706600 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0e54d8ec-766b-4711-94e7-08fcfe836c67-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"0e54d8ec-766b-4711-94e7-08fcfe836c67\") " pod="openstack/tempest-tests-tempest" Jan 29 09:07:34 crc kubenswrapper[4826]: I0129 09:07:34.706652 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0e54d8ec-766b-4711-94e7-08fcfe836c67-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"0e54d8ec-766b-4711-94e7-08fcfe836c67\") " pod="openstack/tempest-tests-tempest" Jan 29 09:07:34 crc kubenswrapper[4826]: I0129 09:07:34.706710 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0e54d8ec-766b-4711-94e7-08fcfe836c67-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"0e54d8ec-766b-4711-94e7-08fcfe836c67\") " pod="openstack/tempest-tests-tempest" Jan 29 09:07:34 crc kubenswrapper[4826]: I0129 09:07:34.706742 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0e54d8ec-766b-4711-94e7-08fcfe836c67-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"0e54d8ec-766b-4711-94e7-08fcfe836c67\") " pod="openstack/tempest-tests-tempest" Jan 29 09:07:34 crc kubenswrapper[4826]: I0129 09:07:34.706814 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e54d8ec-766b-4711-94e7-08fcfe836c67-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"0e54d8ec-766b-4711-94e7-08fcfe836c67\") " 
pod="openstack/tempest-tests-tempest" Jan 29 09:07:34 crc kubenswrapper[4826]: I0129 09:07:34.707291 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"0e54d8ec-766b-4711-94e7-08fcfe836c67\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/tempest-tests-tempest" Jan 29 09:07:34 crc kubenswrapper[4826]: I0129 09:07:34.707427 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0e54d8ec-766b-4711-94e7-08fcfe836c67-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"0e54d8ec-766b-4711-94e7-08fcfe836c67\") " pod="openstack/tempest-tests-tempest" Jan 29 09:07:34 crc kubenswrapper[4826]: I0129 09:07:34.707500 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0e54d8ec-766b-4711-94e7-08fcfe836c67-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"0e54d8ec-766b-4711-94e7-08fcfe836c67\") " pod="openstack/tempest-tests-tempest" Jan 29 09:07:34 crc kubenswrapper[4826]: I0129 09:07:34.707797 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0e54d8ec-766b-4711-94e7-08fcfe836c67-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"0e54d8ec-766b-4711-94e7-08fcfe836c67\") " pod="openstack/tempest-tests-tempest" Jan 29 09:07:34 crc kubenswrapper[4826]: I0129 09:07:34.708288 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e54d8ec-766b-4711-94e7-08fcfe836c67-config-data\") pod \"tempest-tests-tempest\" (UID: \"0e54d8ec-766b-4711-94e7-08fcfe836c67\") " pod="openstack/tempest-tests-tempest" Jan 29 09:07:34 crc 
kubenswrapper[4826]: I0129 09:07:34.712338 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e54d8ec-766b-4711-94e7-08fcfe836c67-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"0e54d8ec-766b-4711-94e7-08fcfe836c67\") " pod="openstack/tempest-tests-tempest" Jan 29 09:07:34 crc kubenswrapper[4826]: I0129 09:07:34.713004 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0e54d8ec-766b-4711-94e7-08fcfe836c67-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"0e54d8ec-766b-4711-94e7-08fcfe836c67\") " pod="openstack/tempest-tests-tempest" Jan 29 09:07:34 crc kubenswrapper[4826]: I0129 09:07:34.714532 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0e54d8ec-766b-4711-94e7-08fcfe836c67-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"0e54d8ec-766b-4711-94e7-08fcfe836c67\") " pod="openstack/tempest-tests-tempest" Jan 29 09:07:34 crc kubenswrapper[4826]: I0129 09:07:34.732028 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzz8b\" (UniqueName: \"kubernetes.io/projected/0e54d8ec-766b-4711-94e7-08fcfe836c67-kube-api-access-qzz8b\") pod \"tempest-tests-tempest\" (UID: \"0e54d8ec-766b-4711-94e7-08fcfe836c67\") " pod="openstack/tempest-tests-tempest" Jan 29 09:07:34 crc kubenswrapper[4826]: I0129 09:07:34.747738 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"0e54d8ec-766b-4711-94e7-08fcfe836c67\") " pod="openstack/tempest-tests-tempest" Jan 29 09:07:34 crc kubenswrapper[4826]: I0129 09:07:34.846904 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 29 09:07:35 crc kubenswrapper[4826]: I0129 09:07:35.307209 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 29 09:07:35 crc kubenswrapper[4826]: I0129 09:07:35.656387 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:07:35 crc kubenswrapper[4826]: I0129 09:07:35.656757 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:07:36 crc kubenswrapper[4826]: I0129 09:07:36.227719 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"0e54d8ec-766b-4711-94e7-08fcfe836c67","Type":"ContainerStarted","Data":"9eda60cfaae70ce853662b3a7babee1b1d31a48837226841a4d5af8023258075"} Jan 29 09:08:05 crc kubenswrapper[4826]: I0129 09:08:05.656361 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:08:05 crc kubenswrapper[4826]: I0129 09:08:05.657061 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Jan 29 09:08:13 crc kubenswrapper[4826]: I0129 09:08:13.458887 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qmn4v"] Jan 29 09:08:13 crc kubenswrapper[4826]: I0129 09:08:13.462013 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qmn4v" Jan 29 09:08:13 crc kubenswrapper[4826]: I0129 09:08:13.467657 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qmn4v"] Jan 29 09:08:13 crc kubenswrapper[4826]: I0129 09:08:13.637466 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrc9n\" (UniqueName: \"kubernetes.io/projected/6c8d6124-f19f-4340-84c9-4fee9226c642-kube-api-access-jrc9n\") pod \"redhat-operators-qmn4v\" (UID: \"6c8d6124-f19f-4340-84c9-4fee9226c642\") " pod="openshift-marketplace/redhat-operators-qmn4v" Jan 29 09:08:13 crc kubenswrapper[4826]: I0129 09:08:13.637534 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c8d6124-f19f-4340-84c9-4fee9226c642-utilities\") pod \"redhat-operators-qmn4v\" (UID: \"6c8d6124-f19f-4340-84c9-4fee9226c642\") " pod="openshift-marketplace/redhat-operators-qmn4v" Jan 29 09:08:13 crc kubenswrapper[4826]: I0129 09:08:13.637576 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c8d6124-f19f-4340-84c9-4fee9226c642-catalog-content\") pod \"redhat-operators-qmn4v\" (UID: \"6c8d6124-f19f-4340-84c9-4fee9226c642\") " pod="openshift-marketplace/redhat-operators-qmn4v" Jan 29 09:08:13 crc kubenswrapper[4826]: I0129 09:08:13.739879 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/6c8d6124-f19f-4340-84c9-4fee9226c642-catalog-content\") pod \"redhat-operators-qmn4v\" (UID: \"6c8d6124-f19f-4340-84c9-4fee9226c642\") " pod="openshift-marketplace/redhat-operators-qmn4v" Jan 29 09:08:13 crc kubenswrapper[4826]: I0129 09:08:13.740135 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrc9n\" (UniqueName: \"kubernetes.io/projected/6c8d6124-f19f-4340-84c9-4fee9226c642-kube-api-access-jrc9n\") pod \"redhat-operators-qmn4v\" (UID: \"6c8d6124-f19f-4340-84c9-4fee9226c642\") " pod="openshift-marketplace/redhat-operators-qmn4v" Jan 29 09:08:13 crc kubenswrapper[4826]: I0129 09:08:13.740178 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c8d6124-f19f-4340-84c9-4fee9226c642-utilities\") pod \"redhat-operators-qmn4v\" (UID: \"6c8d6124-f19f-4340-84c9-4fee9226c642\") " pod="openshift-marketplace/redhat-operators-qmn4v" Jan 29 09:08:13 crc kubenswrapper[4826]: I0129 09:08:13.740566 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c8d6124-f19f-4340-84c9-4fee9226c642-catalog-content\") pod \"redhat-operators-qmn4v\" (UID: \"6c8d6124-f19f-4340-84c9-4fee9226c642\") " pod="openshift-marketplace/redhat-operators-qmn4v" Jan 29 09:08:13 crc kubenswrapper[4826]: I0129 09:08:13.740684 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c8d6124-f19f-4340-84c9-4fee9226c642-utilities\") pod \"redhat-operators-qmn4v\" (UID: \"6c8d6124-f19f-4340-84c9-4fee9226c642\") " pod="openshift-marketplace/redhat-operators-qmn4v" Jan 29 09:08:13 crc kubenswrapper[4826]: I0129 09:08:13.772693 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrc9n\" (UniqueName: 
\"kubernetes.io/projected/6c8d6124-f19f-4340-84c9-4fee9226c642-kube-api-access-jrc9n\") pod \"redhat-operators-qmn4v\" (UID: \"6c8d6124-f19f-4340-84c9-4fee9226c642\") " pod="openshift-marketplace/redhat-operators-qmn4v" Jan 29 09:08:13 crc kubenswrapper[4826]: I0129 09:08:13.794351 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qmn4v" Jan 29 09:08:30 crc kubenswrapper[4826]: E0129 09:08:30.511925 4826 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:b130bed8e4e0ff029dd29fba80441dc6" Jan 29 09:08:30 crc kubenswrapper[4826]: E0129 09:08:30.512995 4826 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:b130bed8e4e0ff029dd29fba80441dc6" Jan 29 09:08:30 crc kubenswrapper[4826]: E0129 09:08:30.513287 4826 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:b130bed8e4e0ff029dd29fba80441dc6,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qzz8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},Livene
ssProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(0e54d8ec-766b-4711-94e7-08fcfe836c67): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 09:08:30 crc kubenswrapper[4826]: E0129 09:08:30.514674 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="0e54d8ec-766b-4711-94e7-08fcfe836c67" Jan 29 09:08:30 crc kubenswrapper[4826]: E0129 09:08:30.847751 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:b130bed8e4e0ff029dd29fba80441dc6\\\"\"" pod="openstack/tempest-tests-tempest" 
podUID="0e54d8ec-766b-4711-94e7-08fcfe836c67" Jan 29 09:08:31 crc kubenswrapper[4826]: I0129 09:08:31.049489 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qmn4v"] Jan 29 09:08:31 crc kubenswrapper[4826]: I0129 09:08:31.865375 4826 generic.go:334] "Generic (PLEG): container finished" podID="6c8d6124-f19f-4340-84c9-4fee9226c642" containerID="98157c1e78299c67562b39a01e2468d4036bd89fd64a9424bc3658af1e5c4f86" exitCode=0 Jan 29 09:08:31 crc kubenswrapper[4826]: I0129 09:08:31.865879 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qmn4v" event={"ID":"6c8d6124-f19f-4340-84c9-4fee9226c642","Type":"ContainerDied","Data":"98157c1e78299c67562b39a01e2468d4036bd89fd64a9424bc3658af1e5c4f86"} Jan 29 09:08:31 crc kubenswrapper[4826]: I0129 09:08:31.865925 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qmn4v" event={"ID":"6c8d6124-f19f-4340-84c9-4fee9226c642","Type":"ContainerStarted","Data":"d45c44e11da83afe0370fb2507c9ec0659b2a60531aa79e0a96dd3b33cd22ece"} Jan 29 09:08:33 crc kubenswrapper[4826]: I0129 09:08:33.890803 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qmn4v" event={"ID":"6c8d6124-f19f-4340-84c9-4fee9226c642","Type":"ContainerStarted","Data":"bb44156cce5c72482d25ca140502a15a2420e6901751e4adee222fe4750bb1dd"} Jan 29 09:08:35 crc kubenswrapper[4826]: I0129 09:08:35.657073 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:08:35 crc kubenswrapper[4826]: I0129 09:08:35.657724 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" 
podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:08:35 crc kubenswrapper[4826]: I0129 09:08:35.657812 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" Jan 29 09:08:35 crc kubenswrapper[4826]: I0129 09:08:35.659419 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"060f3e6ddb7d2341eb86bbef7e0e8efdc59bfb9a3b64d8e2544dc73ba67480a8"} pod="openshift-machine-config-operator/machine-config-daemon-llzmh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 09:08:35 crc kubenswrapper[4826]: I0129 09:08:35.659529 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" containerID="cri-o://060f3e6ddb7d2341eb86bbef7e0e8efdc59bfb9a3b64d8e2544dc73ba67480a8" gracePeriod=600 Jan 29 09:08:35 crc kubenswrapper[4826]: E0129 09:08:35.814682 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:08:35 crc kubenswrapper[4826]: I0129 09:08:35.921655 4826 generic.go:334] "Generic (PLEG): container finished" podID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerID="060f3e6ddb7d2341eb86bbef7e0e8efdc59bfb9a3b64d8e2544dc73ba67480a8" exitCode=0 Jan 29 
09:08:35 crc kubenswrapper[4826]: I0129 09:08:35.921899 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerDied","Data":"060f3e6ddb7d2341eb86bbef7e0e8efdc59bfb9a3b64d8e2544dc73ba67480a8"} Jan 29 09:08:35 crc kubenswrapper[4826]: I0129 09:08:35.922330 4826 scope.go:117] "RemoveContainer" containerID="2af13fc74b9b9ef9a03ea411f88a9223259f0cd3bbca719eaa0a1a52cc09f980" Jan 29 09:08:35 crc kubenswrapper[4826]: I0129 09:08:35.923859 4826 scope.go:117] "RemoveContainer" containerID="060f3e6ddb7d2341eb86bbef7e0e8efdc59bfb9a3b64d8e2544dc73ba67480a8" Jan 29 09:08:35 crc kubenswrapper[4826]: E0129 09:08:35.924467 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:08:43 crc kubenswrapper[4826]: I0129 09:08:43.783787 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 29 09:08:47 crc kubenswrapper[4826]: I0129 09:08:47.074980 4826 generic.go:334] "Generic (PLEG): container finished" podID="6c8d6124-f19f-4340-84c9-4fee9226c642" containerID="bb44156cce5c72482d25ca140502a15a2420e6901751e4adee222fe4750bb1dd" exitCode=0 Jan 29 09:08:47 crc kubenswrapper[4826]: I0129 09:08:47.075159 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qmn4v" event={"ID":"6c8d6124-f19f-4340-84c9-4fee9226c642","Type":"ContainerDied","Data":"bb44156cce5c72482d25ca140502a15a2420e6901751e4adee222fe4750bb1dd"} Jan 29 09:08:47 crc kubenswrapper[4826]: I0129 09:08:47.809154 
4826 scope.go:117] "RemoveContainer" containerID="060f3e6ddb7d2341eb86bbef7e0e8efdc59bfb9a3b64d8e2544dc73ba67480a8" Jan 29 09:08:47 crc kubenswrapper[4826]: E0129 09:08:47.810036 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:08:48 crc kubenswrapper[4826]: I0129 09:08:48.091082 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"0e54d8ec-766b-4711-94e7-08fcfe836c67","Type":"ContainerStarted","Data":"6bae026feb1c754ad80de4807ddba642bfae83d79fc8dedce622f166f50df6ee"} Jan 29 09:08:48 crc kubenswrapper[4826]: I0129 09:08:48.127387 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=6.668698951 podStartE2EDuration="1m15.127351065s" podCreationTimestamp="2026-01-29 09:07:33 +0000 UTC" firstStartedPulling="2026-01-29 09:07:35.319153462 +0000 UTC m=+8639.180946531" lastFinishedPulling="2026-01-29 09:08:43.777805556 +0000 UTC m=+8707.639598645" observedRunningTime="2026-01-29 09:08:48.112147039 +0000 UTC m=+8711.973940108" watchObservedRunningTime="2026-01-29 09:08:48.127351065 +0000 UTC m=+8711.989144134" Jan 29 09:08:49 crc kubenswrapper[4826]: I0129 09:08:49.103750 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qmn4v" event={"ID":"6c8d6124-f19f-4340-84c9-4fee9226c642","Type":"ContainerStarted","Data":"cc7865f4e173fc73d704d93dc7dba55929fd9a56e5f9b5f4957dce4ef41922d7"} Jan 29 09:08:49 crc kubenswrapper[4826]: I0129 09:08:49.126724 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-qmn4v" podStartSLOduration=19.909979372 podStartE2EDuration="36.126683545s" podCreationTimestamp="2026-01-29 09:08:13 +0000 UTC" firstStartedPulling="2026-01-29 09:08:31.870063289 +0000 UTC m=+8695.731856368" lastFinishedPulling="2026-01-29 09:08:48.086767472 +0000 UTC m=+8711.948560541" observedRunningTime="2026-01-29 09:08:49.124568049 +0000 UTC m=+8712.986361138" watchObservedRunningTime="2026-01-29 09:08:49.126683545 +0000 UTC m=+8712.988476614" Jan 29 09:08:53 crc kubenswrapper[4826]: I0129 09:08:53.794870 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qmn4v" Jan 29 09:08:53 crc kubenswrapper[4826]: I0129 09:08:53.795655 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qmn4v" Jan 29 09:08:54 crc kubenswrapper[4826]: I0129 09:08:54.860761 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qmn4v" podUID="6c8d6124-f19f-4340-84c9-4fee9226c642" containerName="registry-server" probeResult="failure" output=< Jan 29 09:08:54 crc kubenswrapper[4826]: timeout: failed to connect service ":50051" within 1s Jan 29 09:08:54 crc kubenswrapper[4826]: > Jan 29 09:09:00 crc kubenswrapper[4826]: I0129 09:09:00.809626 4826 scope.go:117] "RemoveContainer" containerID="060f3e6ddb7d2341eb86bbef7e0e8efdc59bfb9a3b64d8e2544dc73ba67480a8" Jan 29 09:09:00 crc kubenswrapper[4826]: E0129 09:09:00.810538 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:09:03 crc kubenswrapper[4826]: 
I0129 09:09:03.852425 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qmn4v" Jan 29 09:09:03 crc kubenswrapper[4826]: I0129 09:09:03.922409 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qmn4v" Jan 29 09:09:04 crc kubenswrapper[4826]: I0129 09:09:04.093942 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qmn4v"] Jan 29 09:09:05 crc kubenswrapper[4826]: I0129 09:09:05.266582 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qmn4v" podUID="6c8d6124-f19f-4340-84c9-4fee9226c642" containerName="registry-server" containerID="cri-o://cc7865f4e173fc73d704d93dc7dba55929fd9a56e5f9b5f4957dce4ef41922d7" gracePeriod=2 Jan 29 09:09:05 crc kubenswrapper[4826]: I0129 09:09:05.834613 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qmn4v" Jan 29 09:09:05 crc kubenswrapper[4826]: I0129 09:09:05.909615 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c8d6124-f19f-4340-84c9-4fee9226c642-utilities\") pod \"6c8d6124-f19f-4340-84c9-4fee9226c642\" (UID: \"6c8d6124-f19f-4340-84c9-4fee9226c642\") " Jan 29 09:09:05 crc kubenswrapper[4826]: I0129 09:09:05.909730 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrc9n\" (UniqueName: \"kubernetes.io/projected/6c8d6124-f19f-4340-84c9-4fee9226c642-kube-api-access-jrc9n\") pod \"6c8d6124-f19f-4340-84c9-4fee9226c642\" (UID: \"6c8d6124-f19f-4340-84c9-4fee9226c642\") " Jan 29 09:09:05 crc kubenswrapper[4826]: I0129 09:09:05.909969 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/6c8d6124-f19f-4340-84c9-4fee9226c642-catalog-content\") pod \"6c8d6124-f19f-4340-84c9-4fee9226c642\" (UID: \"6c8d6124-f19f-4340-84c9-4fee9226c642\") " Jan 29 09:09:05 crc kubenswrapper[4826]: I0129 09:09:05.917962 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c8d6124-f19f-4340-84c9-4fee9226c642-utilities" (OuterVolumeSpecName: "utilities") pod "6c8d6124-f19f-4340-84c9-4fee9226c642" (UID: "6c8d6124-f19f-4340-84c9-4fee9226c642"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:09:05 crc kubenswrapper[4826]: I0129 09:09:05.919277 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c8d6124-f19f-4340-84c9-4fee9226c642-kube-api-access-jrc9n" (OuterVolumeSpecName: "kube-api-access-jrc9n") pod "6c8d6124-f19f-4340-84c9-4fee9226c642" (UID: "6c8d6124-f19f-4340-84c9-4fee9226c642"). InnerVolumeSpecName "kube-api-access-jrc9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:09:06 crc kubenswrapper[4826]: I0129 09:09:06.013286 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c8d6124-f19f-4340-84c9-4fee9226c642-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:09:06 crc kubenswrapper[4826]: I0129 09:09:06.013352 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrc9n\" (UniqueName: \"kubernetes.io/projected/6c8d6124-f19f-4340-84c9-4fee9226c642-kube-api-access-jrc9n\") on node \"crc\" DevicePath \"\"" Jan 29 09:09:06 crc kubenswrapper[4826]: I0129 09:09:06.052193 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c8d6124-f19f-4340-84c9-4fee9226c642-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6c8d6124-f19f-4340-84c9-4fee9226c642" (UID: "6c8d6124-f19f-4340-84c9-4fee9226c642"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:09:06 crc kubenswrapper[4826]: I0129 09:09:06.115936 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c8d6124-f19f-4340-84c9-4fee9226c642-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:09:06 crc kubenswrapper[4826]: I0129 09:09:06.282050 4826 generic.go:334] "Generic (PLEG): container finished" podID="6c8d6124-f19f-4340-84c9-4fee9226c642" containerID="cc7865f4e173fc73d704d93dc7dba55929fd9a56e5f9b5f4957dce4ef41922d7" exitCode=0 Jan 29 09:09:06 crc kubenswrapper[4826]: I0129 09:09:06.282121 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qmn4v" Jan 29 09:09:06 crc kubenswrapper[4826]: I0129 09:09:06.283411 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qmn4v" event={"ID":"6c8d6124-f19f-4340-84c9-4fee9226c642","Type":"ContainerDied","Data":"cc7865f4e173fc73d704d93dc7dba55929fd9a56e5f9b5f4957dce4ef41922d7"} Jan 29 09:09:06 crc kubenswrapper[4826]: I0129 09:09:06.283593 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qmn4v" event={"ID":"6c8d6124-f19f-4340-84c9-4fee9226c642","Type":"ContainerDied","Data":"d45c44e11da83afe0370fb2507c9ec0659b2a60531aa79e0a96dd3b33cd22ece"} Jan 29 09:09:06 crc kubenswrapper[4826]: I0129 09:09:06.283708 4826 scope.go:117] "RemoveContainer" containerID="cc7865f4e173fc73d704d93dc7dba55929fd9a56e5f9b5f4957dce4ef41922d7" Jan 29 09:09:06 crc kubenswrapper[4826]: I0129 09:09:06.309385 4826 scope.go:117] "RemoveContainer" containerID="bb44156cce5c72482d25ca140502a15a2420e6901751e4adee222fe4750bb1dd" Jan 29 09:09:06 crc kubenswrapper[4826]: I0129 09:09:06.334985 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qmn4v"] Jan 29 09:09:06 crc kubenswrapper[4826]: I0129 
09:09:06.337421 4826 scope.go:117] "RemoveContainer" containerID="98157c1e78299c67562b39a01e2468d4036bd89fd64a9424bc3658af1e5c4f86" Jan 29 09:09:06 crc kubenswrapper[4826]: I0129 09:09:06.356223 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qmn4v"] Jan 29 09:09:06 crc kubenswrapper[4826]: I0129 09:09:06.409586 4826 scope.go:117] "RemoveContainer" containerID="cc7865f4e173fc73d704d93dc7dba55929fd9a56e5f9b5f4957dce4ef41922d7" Jan 29 09:09:06 crc kubenswrapper[4826]: E0129 09:09:06.410270 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc7865f4e173fc73d704d93dc7dba55929fd9a56e5f9b5f4957dce4ef41922d7\": container with ID starting with cc7865f4e173fc73d704d93dc7dba55929fd9a56e5f9b5f4957dce4ef41922d7 not found: ID does not exist" containerID="cc7865f4e173fc73d704d93dc7dba55929fd9a56e5f9b5f4957dce4ef41922d7" Jan 29 09:09:06 crc kubenswrapper[4826]: I0129 09:09:06.410395 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc7865f4e173fc73d704d93dc7dba55929fd9a56e5f9b5f4957dce4ef41922d7"} err="failed to get container status \"cc7865f4e173fc73d704d93dc7dba55929fd9a56e5f9b5f4957dce4ef41922d7\": rpc error: code = NotFound desc = could not find container \"cc7865f4e173fc73d704d93dc7dba55929fd9a56e5f9b5f4957dce4ef41922d7\": container with ID starting with cc7865f4e173fc73d704d93dc7dba55929fd9a56e5f9b5f4957dce4ef41922d7 not found: ID does not exist" Jan 29 09:09:06 crc kubenswrapper[4826]: I0129 09:09:06.410451 4826 scope.go:117] "RemoveContainer" containerID="bb44156cce5c72482d25ca140502a15a2420e6901751e4adee222fe4750bb1dd" Jan 29 09:09:06 crc kubenswrapper[4826]: E0129 09:09:06.412030 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb44156cce5c72482d25ca140502a15a2420e6901751e4adee222fe4750bb1dd\": container with ID 
starting with bb44156cce5c72482d25ca140502a15a2420e6901751e4adee222fe4750bb1dd not found: ID does not exist" containerID="bb44156cce5c72482d25ca140502a15a2420e6901751e4adee222fe4750bb1dd" Jan 29 09:09:06 crc kubenswrapper[4826]: I0129 09:09:06.412079 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb44156cce5c72482d25ca140502a15a2420e6901751e4adee222fe4750bb1dd"} err="failed to get container status \"bb44156cce5c72482d25ca140502a15a2420e6901751e4adee222fe4750bb1dd\": rpc error: code = NotFound desc = could not find container \"bb44156cce5c72482d25ca140502a15a2420e6901751e4adee222fe4750bb1dd\": container with ID starting with bb44156cce5c72482d25ca140502a15a2420e6901751e4adee222fe4750bb1dd not found: ID does not exist" Jan 29 09:09:06 crc kubenswrapper[4826]: I0129 09:09:06.412112 4826 scope.go:117] "RemoveContainer" containerID="98157c1e78299c67562b39a01e2468d4036bd89fd64a9424bc3658af1e5c4f86" Jan 29 09:09:06 crc kubenswrapper[4826]: E0129 09:09:06.413446 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98157c1e78299c67562b39a01e2468d4036bd89fd64a9424bc3658af1e5c4f86\": container with ID starting with 98157c1e78299c67562b39a01e2468d4036bd89fd64a9424bc3658af1e5c4f86 not found: ID does not exist" containerID="98157c1e78299c67562b39a01e2468d4036bd89fd64a9424bc3658af1e5c4f86" Jan 29 09:09:06 crc kubenswrapper[4826]: I0129 09:09:06.413483 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98157c1e78299c67562b39a01e2468d4036bd89fd64a9424bc3658af1e5c4f86"} err="failed to get container status \"98157c1e78299c67562b39a01e2468d4036bd89fd64a9424bc3658af1e5c4f86\": rpc error: code = NotFound desc = could not find container \"98157c1e78299c67562b39a01e2468d4036bd89fd64a9424bc3658af1e5c4f86\": container with ID starting with 98157c1e78299c67562b39a01e2468d4036bd89fd64a9424bc3658af1e5c4f86 not found: 
ID does not exist" Jan 29 09:09:06 crc kubenswrapper[4826]: I0129 09:09:06.826412 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c8d6124-f19f-4340-84c9-4fee9226c642" path="/var/lib/kubelet/pods/6c8d6124-f19f-4340-84c9-4fee9226c642/volumes" Jan 29 09:09:15 crc kubenswrapper[4826]: I0129 09:09:15.809613 4826 scope.go:117] "RemoveContainer" containerID="060f3e6ddb7d2341eb86bbef7e0e8efdc59bfb9a3b64d8e2544dc73ba67480a8" Jan 29 09:09:15 crc kubenswrapper[4826]: E0129 09:09:15.810598 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:09:26 crc kubenswrapper[4826]: I0129 09:09:26.816275 4826 scope.go:117] "RemoveContainer" containerID="060f3e6ddb7d2341eb86bbef7e0e8efdc59bfb9a3b64d8e2544dc73ba67480a8" Jan 29 09:09:26 crc kubenswrapper[4826]: E0129 09:09:26.817622 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:09:39 crc kubenswrapper[4826]: I0129 09:09:39.809046 4826 scope.go:117] "RemoveContainer" containerID="060f3e6ddb7d2341eb86bbef7e0e8efdc59bfb9a3b64d8e2544dc73ba67480a8" Jan 29 09:09:39 crc kubenswrapper[4826]: E0129 09:09:39.813125 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:09:50 crc kubenswrapper[4826]: I0129 09:09:50.809281 4826 scope.go:117] "RemoveContainer" containerID="060f3e6ddb7d2341eb86bbef7e0e8efdc59bfb9a3b64d8e2544dc73ba67480a8" Jan 29 09:09:50 crc kubenswrapper[4826]: E0129 09:09:50.810173 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:10:05 crc kubenswrapper[4826]: I0129 09:10:05.809046 4826 scope.go:117] "RemoveContainer" containerID="060f3e6ddb7d2341eb86bbef7e0e8efdc59bfb9a3b64d8e2544dc73ba67480a8" Jan 29 09:10:05 crc kubenswrapper[4826]: E0129 09:10:05.809759 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:10:19 crc kubenswrapper[4826]: I0129 09:10:19.808710 4826 scope.go:117] "RemoveContainer" containerID="060f3e6ddb7d2341eb86bbef7e0e8efdc59bfb9a3b64d8e2544dc73ba67480a8" Jan 29 09:10:19 crc kubenswrapper[4826]: E0129 09:10:19.810553 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:10:30 crc kubenswrapper[4826]: I0129 09:10:30.809873 4826 scope.go:117] "RemoveContainer" containerID="060f3e6ddb7d2341eb86bbef7e0e8efdc59bfb9a3b64d8e2544dc73ba67480a8" Jan 29 09:10:30 crc kubenswrapper[4826]: E0129 09:10:30.810693 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:10:42 crc kubenswrapper[4826]: I0129 09:10:42.808529 4826 scope.go:117] "RemoveContainer" containerID="060f3e6ddb7d2341eb86bbef7e0e8efdc59bfb9a3b64d8e2544dc73ba67480a8" Jan 29 09:10:42 crc kubenswrapper[4826]: E0129 09:10:42.809236 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:10:45 crc kubenswrapper[4826]: I0129 09:10:45.999090 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s2vvv"] Jan 29 09:10:46 crc kubenswrapper[4826]: E0129 09:10:46.000182 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c8d6124-f19f-4340-84c9-4fee9226c642" 
containerName="registry-server" Jan 29 09:10:46 crc kubenswrapper[4826]: I0129 09:10:46.000197 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c8d6124-f19f-4340-84c9-4fee9226c642" containerName="registry-server" Jan 29 09:10:46 crc kubenswrapper[4826]: E0129 09:10:46.000209 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c8d6124-f19f-4340-84c9-4fee9226c642" containerName="extract-content" Jan 29 09:10:46 crc kubenswrapper[4826]: I0129 09:10:46.000234 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c8d6124-f19f-4340-84c9-4fee9226c642" containerName="extract-content" Jan 29 09:10:46 crc kubenswrapper[4826]: E0129 09:10:46.000287 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c8d6124-f19f-4340-84c9-4fee9226c642" containerName="extract-utilities" Jan 29 09:10:46 crc kubenswrapper[4826]: I0129 09:10:46.000311 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c8d6124-f19f-4340-84c9-4fee9226c642" containerName="extract-utilities" Jan 29 09:10:46 crc kubenswrapper[4826]: I0129 09:10:46.000533 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c8d6124-f19f-4340-84c9-4fee9226c642" containerName="registry-server" Jan 29 09:10:46 crc kubenswrapper[4826]: I0129 09:10:46.002225 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s2vvv" Jan 29 09:10:46 crc kubenswrapper[4826]: I0129 09:10:46.018664 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s2vvv"] Jan 29 09:10:46 crc kubenswrapper[4826]: I0129 09:10:46.144746 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbmxb\" (UniqueName: \"kubernetes.io/projected/6d17d1b3-a443-446c-9bc6-b68ec9d272e4-kube-api-access-pbmxb\") pod \"certified-operators-s2vvv\" (UID: \"6d17d1b3-a443-446c-9bc6-b68ec9d272e4\") " pod="openshift-marketplace/certified-operators-s2vvv" Jan 29 09:10:46 crc kubenswrapper[4826]: I0129 09:10:46.144851 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d17d1b3-a443-446c-9bc6-b68ec9d272e4-utilities\") pod \"certified-operators-s2vvv\" (UID: \"6d17d1b3-a443-446c-9bc6-b68ec9d272e4\") " pod="openshift-marketplace/certified-operators-s2vvv" Jan 29 09:10:46 crc kubenswrapper[4826]: I0129 09:10:46.144901 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d17d1b3-a443-446c-9bc6-b68ec9d272e4-catalog-content\") pod \"certified-operators-s2vvv\" (UID: \"6d17d1b3-a443-446c-9bc6-b68ec9d272e4\") " pod="openshift-marketplace/certified-operators-s2vvv" Jan 29 09:10:46 crc kubenswrapper[4826]: I0129 09:10:46.246889 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbmxb\" (UniqueName: \"kubernetes.io/projected/6d17d1b3-a443-446c-9bc6-b68ec9d272e4-kube-api-access-pbmxb\") pod \"certified-operators-s2vvv\" (UID: \"6d17d1b3-a443-446c-9bc6-b68ec9d272e4\") " pod="openshift-marketplace/certified-operators-s2vvv" Jan 29 09:10:46 crc kubenswrapper[4826]: I0129 09:10:46.246987 4826 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d17d1b3-a443-446c-9bc6-b68ec9d272e4-utilities\") pod \"certified-operators-s2vvv\" (UID: \"6d17d1b3-a443-446c-9bc6-b68ec9d272e4\") " pod="openshift-marketplace/certified-operators-s2vvv" Jan 29 09:10:46 crc kubenswrapper[4826]: I0129 09:10:46.247041 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d17d1b3-a443-446c-9bc6-b68ec9d272e4-catalog-content\") pod \"certified-operators-s2vvv\" (UID: \"6d17d1b3-a443-446c-9bc6-b68ec9d272e4\") " pod="openshift-marketplace/certified-operators-s2vvv" Jan 29 09:10:46 crc kubenswrapper[4826]: I0129 09:10:46.247553 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d17d1b3-a443-446c-9bc6-b68ec9d272e4-catalog-content\") pod \"certified-operators-s2vvv\" (UID: \"6d17d1b3-a443-446c-9bc6-b68ec9d272e4\") " pod="openshift-marketplace/certified-operators-s2vvv" Jan 29 09:10:46 crc kubenswrapper[4826]: I0129 09:10:46.248062 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d17d1b3-a443-446c-9bc6-b68ec9d272e4-utilities\") pod \"certified-operators-s2vvv\" (UID: \"6d17d1b3-a443-446c-9bc6-b68ec9d272e4\") " pod="openshift-marketplace/certified-operators-s2vvv" Jan 29 09:10:46 crc kubenswrapper[4826]: I0129 09:10:46.405405 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbmxb\" (UniqueName: \"kubernetes.io/projected/6d17d1b3-a443-446c-9bc6-b68ec9d272e4-kube-api-access-pbmxb\") pod \"certified-operators-s2vvv\" (UID: \"6d17d1b3-a443-446c-9bc6-b68ec9d272e4\") " pod="openshift-marketplace/certified-operators-s2vvv" Jan 29 09:10:46 crc kubenswrapper[4826]: I0129 09:10:46.634554 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s2vvv" Jan 29 09:10:47 crc kubenswrapper[4826]: I0129 09:10:47.196596 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s2vvv"] Jan 29 09:10:47 crc kubenswrapper[4826]: I0129 09:10:47.322955 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s2vvv" event={"ID":"6d17d1b3-a443-446c-9bc6-b68ec9d272e4","Type":"ContainerStarted","Data":"02a30f6f4a306f6e5ea61fbdca2db4d77f1e438c16cb88b4d8515dfa31244101"} Jan 29 09:10:48 crc kubenswrapper[4826]: I0129 09:10:48.334887 4826 generic.go:334] "Generic (PLEG): container finished" podID="6d17d1b3-a443-446c-9bc6-b68ec9d272e4" containerID="0123249892b245f6e9c0dfb7132edb12c04acdc6c29a2e1480946d1911fd53c8" exitCode=0 Jan 29 09:10:48 crc kubenswrapper[4826]: I0129 09:10:48.335114 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s2vvv" event={"ID":"6d17d1b3-a443-446c-9bc6-b68ec9d272e4","Type":"ContainerDied","Data":"0123249892b245f6e9c0dfb7132edb12c04acdc6c29a2e1480946d1911fd53c8"} Jan 29 09:10:51 crc kubenswrapper[4826]: I0129 09:10:51.367238 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s2vvv" event={"ID":"6d17d1b3-a443-446c-9bc6-b68ec9d272e4","Type":"ContainerStarted","Data":"25c1b940040c8f9a3880b64d57aa3a94baf6215e84d479cbb87f3170dce206f5"} Jan 29 09:10:53 crc kubenswrapper[4826]: I0129 09:10:53.386280 4826 generic.go:334] "Generic (PLEG): container finished" podID="6d17d1b3-a443-446c-9bc6-b68ec9d272e4" containerID="25c1b940040c8f9a3880b64d57aa3a94baf6215e84d479cbb87f3170dce206f5" exitCode=0 Jan 29 09:10:53 crc kubenswrapper[4826]: I0129 09:10:53.386480 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s2vvv" 
event={"ID":"6d17d1b3-a443-446c-9bc6-b68ec9d272e4","Type":"ContainerDied","Data":"25c1b940040c8f9a3880b64d57aa3a94baf6215e84d479cbb87f3170dce206f5"} Jan 29 09:10:53 crc kubenswrapper[4826]: I0129 09:10:53.808505 4826 scope.go:117] "RemoveContainer" containerID="060f3e6ddb7d2341eb86bbef7e0e8efdc59bfb9a3b64d8e2544dc73ba67480a8" Jan 29 09:10:53 crc kubenswrapper[4826]: E0129 09:10:53.808762 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:10:54 crc kubenswrapper[4826]: I0129 09:10:54.407711 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s2vvv" event={"ID":"6d17d1b3-a443-446c-9bc6-b68ec9d272e4","Type":"ContainerStarted","Data":"502f5fe0ebdd70fc92128c709c7dfcebe4fd1cbf2d80bc924531cee8643ddb30"} Jan 29 09:10:54 crc kubenswrapper[4826]: I0129 09:10:54.455630 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s2vvv" podStartSLOduration=3.9983147580000002 podStartE2EDuration="9.455612588s" podCreationTimestamp="2026-01-29 09:10:45 +0000 UTC" firstStartedPulling="2026-01-29 09:10:48.337335668 +0000 UTC m=+8832.199128737" lastFinishedPulling="2026-01-29 09:10:53.794633498 +0000 UTC m=+8837.656426567" observedRunningTime="2026-01-29 09:10:54.447882446 +0000 UTC m=+8838.309675535" watchObservedRunningTime="2026-01-29 09:10:54.455612588 +0000 UTC m=+8838.317405657" Jan 29 09:10:56 crc kubenswrapper[4826]: I0129 09:10:56.635694 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s2vvv" Jan 29 09:10:56 crc 
kubenswrapper[4826]: I0129 09:10:56.636049 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s2vvv"
Jan 29 09:10:58 crc kubenswrapper[4826]: I0129 09:10:58.141068 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-s2vvv" podUID="6d17d1b3-a443-446c-9bc6-b68ec9d272e4" containerName="registry-server" probeResult="failure" output=<
Jan 29 09:10:58 crc kubenswrapper[4826]: timeout: failed to connect service ":50051" within 1s
Jan 29 09:10:58 crc kubenswrapper[4826]: >
Jan 29 09:11:06 crc kubenswrapper[4826]: I0129 09:11:06.688283 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s2vvv"
Jan 29 09:11:06 crc kubenswrapper[4826]: I0129 09:11:06.757168 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s2vvv"
Jan 29 09:11:06 crc kubenswrapper[4826]: I0129 09:11:06.948058 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s2vvv"]
Jan 29 09:11:07 crc kubenswrapper[4826]: I0129 09:11:07.809388 4826 scope.go:117] "RemoveContainer" containerID="060f3e6ddb7d2341eb86bbef7e0e8efdc59bfb9a3b64d8e2544dc73ba67480a8"
Jan 29 09:11:07 crc kubenswrapper[4826]: E0129 09:11:07.809741 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 09:11:08 crc kubenswrapper[4826]: I0129 09:11:08.545493 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-s2vvv" podUID="6d17d1b3-a443-446c-9bc6-b68ec9d272e4" containerName="registry-server" containerID="cri-o://502f5fe0ebdd70fc92128c709c7dfcebe4fd1cbf2d80bc924531cee8643ddb30" gracePeriod=2
Jan 29 09:11:09 crc kubenswrapper[4826]: I0129 09:11:09.365472 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s2vvv"
Jan 29 09:11:09 crc kubenswrapper[4826]: I0129 09:11:09.463157 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d17d1b3-a443-446c-9bc6-b68ec9d272e4-utilities\") pod \"6d17d1b3-a443-446c-9bc6-b68ec9d272e4\" (UID: \"6d17d1b3-a443-446c-9bc6-b68ec9d272e4\") "
Jan 29 09:11:09 crc kubenswrapper[4826]: I0129 09:11:09.463229 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d17d1b3-a443-446c-9bc6-b68ec9d272e4-catalog-content\") pod \"6d17d1b3-a443-446c-9bc6-b68ec9d272e4\" (UID: \"6d17d1b3-a443-446c-9bc6-b68ec9d272e4\") "
Jan 29 09:11:09 crc kubenswrapper[4826]: I0129 09:11:09.463272 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbmxb\" (UniqueName: \"kubernetes.io/projected/6d17d1b3-a443-446c-9bc6-b68ec9d272e4-kube-api-access-pbmxb\") pod \"6d17d1b3-a443-446c-9bc6-b68ec9d272e4\" (UID: \"6d17d1b3-a443-446c-9bc6-b68ec9d272e4\") "
Jan 29 09:11:09 crc kubenswrapper[4826]: I0129 09:11:09.464432 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d17d1b3-a443-446c-9bc6-b68ec9d272e4-utilities" (OuterVolumeSpecName: "utilities") pod "6d17d1b3-a443-446c-9bc6-b68ec9d272e4" (UID: "6d17d1b3-a443-446c-9bc6-b68ec9d272e4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:11:09 crc kubenswrapper[4826]: I0129 09:11:09.465579 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d17d1b3-a443-446c-9bc6-b68ec9d272e4-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 09:11:09 crc kubenswrapper[4826]: I0129 09:11:09.474187 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d17d1b3-a443-446c-9bc6-b68ec9d272e4-kube-api-access-pbmxb" (OuterVolumeSpecName: "kube-api-access-pbmxb") pod "6d17d1b3-a443-446c-9bc6-b68ec9d272e4" (UID: "6d17d1b3-a443-446c-9bc6-b68ec9d272e4"). InnerVolumeSpecName "kube-api-access-pbmxb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:11:09 crc kubenswrapper[4826]: I0129 09:11:09.522478 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d17d1b3-a443-446c-9bc6-b68ec9d272e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d17d1b3-a443-446c-9bc6-b68ec9d272e4" (UID: "6d17d1b3-a443-446c-9bc6-b68ec9d272e4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:11:09 crc kubenswrapper[4826]: I0129 09:11:09.555850 4826 generic.go:334] "Generic (PLEG): container finished" podID="6d17d1b3-a443-446c-9bc6-b68ec9d272e4" containerID="502f5fe0ebdd70fc92128c709c7dfcebe4fd1cbf2d80bc924531cee8643ddb30" exitCode=0
Jan 29 09:11:09 crc kubenswrapper[4826]: I0129 09:11:09.555907 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s2vvv"
Jan 29 09:11:09 crc kubenswrapper[4826]: I0129 09:11:09.555911 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s2vvv" event={"ID":"6d17d1b3-a443-446c-9bc6-b68ec9d272e4","Type":"ContainerDied","Data":"502f5fe0ebdd70fc92128c709c7dfcebe4fd1cbf2d80bc924531cee8643ddb30"}
Jan 29 09:11:09 crc kubenswrapper[4826]: I0129 09:11:09.555990 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s2vvv" event={"ID":"6d17d1b3-a443-446c-9bc6-b68ec9d272e4","Type":"ContainerDied","Data":"02a30f6f4a306f6e5ea61fbdca2db4d77f1e438c16cb88b4d8515dfa31244101"}
Jan 29 09:11:09 crc kubenswrapper[4826]: I0129 09:11:09.556014 4826 scope.go:117] "RemoveContainer" containerID="502f5fe0ebdd70fc92128c709c7dfcebe4fd1cbf2d80bc924531cee8643ddb30"
Jan 29 09:11:09 crc kubenswrapper[4826]: I0129 09:11:09.567284 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d17d1b3-a443-446c-9bc6-b68ec9d272e4-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 09:11:09 crc kubenswrapper[4826]: I0129 09:11:09.567338 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbmxb\" (UniqueName: \"kubernetes.io/projected/6d17d1b3-a443-446c-9bc6-b68ec9d272e4-kube-api-access-pbmxb\") on node \"crc\" DevicePath \"\""
Jan 29 09:11:09 crc kubenswrapper[4826]: I0129 09:11:09.576739 4826 scope.go:117] "RemoveContainer" containerID="25c1b940040c8f9a3880b64d57aa3a94baf6215e84d479cbb87f3170dce206f5"
Jan 29 09:11:09 crc kubenswrapper[4826]: I0129 09:11:09.606376 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s2vvv"]
Jan 29 09:11:09 crc kubenswrapper[4826]: I0129 09:11:09.612999 4826 scope.go:117] "RemoveContainer" containerID="0123249892b245f6e9c0dfb7132edb12c04acdc6c29a2e1480946d1911fd53c8"
Jan 29 09:11:09 crc kubenswrapper[4826]: I0129 09:11:09.616714 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s2vvv"]
Jan 29 09:11:09 crc kubenswrapper[4826]: I0129 09:11:09.657588 4826 scope.go:117] "RemoveContainer" containerID="502f5fe0ebdd70fc92128c709c7dfcebe4fd1cbf2d80bc924531cee8643ddb30"
Jan 29 09:11:09 crc kubenswrapper[4826]: E0129 09:11:09.659714 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"502f5fe0ebdd70fc92128c709c7dfcebe4fd1cbf2d80bc924531cee8643ddb30\": container with ID starting with 502f5fe0ebdd70fc92128c709c7dfcebe4fd1cbf2d80bc924531cee8643ddb30 not found: ID does not exist" containerID="502f5fe0ebdd70fc92128c709c7dfcebe4fd1cbf2d80bc924531cee8643ddb30"
Jan 29 09:11:09 crc kubenswrapper[4826]: I0129 09:11:09.659803 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"502f5fe0ebdd70fc92128c709c7dfcebe4fd1cbf2d80bc924531cee8643ddb30"} err="failed to get container status \"502f5fe0ebdd70fc92128c709c7dfcebe4fd1cbf2d80bc924531cee8643ddb30\": rpc error: code = NotFound desc = could not find container \"502f5fe0ebdd70fc92128c709c7dfcebe4fd1cbf2d80bc924531cee8643ddb30\": container with ID starting with 502f5fe0ebdd70fc92128c709c7dfcebe4fd1cbf2d80bc924531cee8643ddb30 not found: ID does not exist"
Jan 29 09:11:09 crc kubenswrapper[4826]: I0129 09:11:09.659836 4826 scope.go:117] "RemoveContainer" containerID="25c1b940040c8f9a3880b64d57aa3a94baf6215e84d479cbb87f3170dce206f5"
Jan 29 09:11:09 crc kubenswrapper[4826]: E0129 09:11:09.660363 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25c1b940040c8f9a3880b64d57aa3a94baf6215e84d479cbb87f3170dce206f5\": container with ID starting with 25c1b940040c8f9a3880b64d57aa3a94baf6215e84d479cbb87f3170dce206f5 not found: ID does not exist" containerID="25c1b940040c8f9a3880b64d57aa3a94baf6215e84d479cbb87f3170dce206f5"
Jan 29 09:11:09 crc kubenswrapper[4826]: I0129 09:11:09.660392 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25c1b940040c8f9a3880b64d57aa3a94baf6215e84d479cbb87f3170dce206f5"} err="failed to get container status \"25c1b940040c8f9a3880b64d57aa3a94baf6215e84d479cbb87f3170dce206f5\": rpc error: code = NotFound desc = could not find container \"25c1b940040c8f9a3880b64d57aa3a94baf6215e84d479cbb87f3170dce206f5\": container with ID starting with 25c1b940040c8f9a3880b64d57aa3a94baf6215e84d479cbb87f3170dce206f5 not found: ID does not exist"
Jan 29 09:11:09 crc kubenswrapper[4826]: I0129 09:11:09.660406 4826 scope.go:117] "RemoveContainer" containerID="0123249892b245f6e9c0dfb7132edb12c04acdc6c29a2e1480946d1911fd53c8"
Jan 29 09:11:09 crc kubenswrapper[4826]: E0129 09:11:09.660638 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0123249892b245f6e9c0dfb7132edb12c04acdc6c29a2e1480946d1911fd53c8\": container with ID starting with 0123249892b245f6e9c0dfb7132edb12c04acdc6c29a2e1480946d1911fd53c8 not found: ID does not exist" containerID="0123249892b245f6e9c0dfb7132edb12c04acdc6c29a2e1480946d1911fd53c8"
Jan 29 09:11:09 crc kubenswrapper[4826]: I0129 09:11:09.660668 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0123249892b245f6e9c0dfb7132edb12c04acdc6c29a2e1480946d1911fd53c8"} err="failed to get container status \"0123249892b245f6e9c0dfb7132edb12c04acdc6c29a2e1480946d1911fd53c8\": rpc error: code = NotFound desc = could not find container \"0123249892b245f6e9c0dfb7132edb12c04acdc6c29a2e1480946d1911fd53c8\": container with ID starting with 0123249892b245f6e9c0dfb7132edb12c04acdc6c29a2e1480946d1911fd53c8 not found: ID does not exist"
Jan 29 09:11:10 crc kubenswrapper[4826]: I0129 09:11:10.819966 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d17d1b3-a443-446c-9bc6-b68ec9d272e4" path="/var/lib/kubelet/pods/6d17d1b3-a443-446c-9bc6-b68ec9d272e4/volumes"
Jan 29 09:11:21 crc kubenswrapper[4826]: I0129 09:11:21.809340 4826 scope.go:117] "RemoveContainer" containerID="060f3e6ddb7d2341eb86bbef7e0e8efdc59bfb9a3b64d8e2544dc73ba67480a8"
Jan 29 09:11:21 crc kubenswrapper[4826]: E0129 09:11:21.810177 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 09:11:34 crc kubenswrapper[4826]: I0129 09:11:34.809673 4826 scope.go:117] "RemoveContainer" containerID="060f3e6ddb7d2341eb86bbef7e0e8efdc59bfb9a3b64d8e2544dc73ba67480a8"
Jan 29 09:11:34 crc kubenswrapper[4826]: E0129 09:11:34.810467 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 09:11:49 crc kubenswrapper[4826]: I0129 09:11:49.809277 4826 scope.go:117] "RemoveContainer" containerID="060f3e6ddb7d2341eb86bbef7e0e8efdc59bfb9a3b64d8e2544dc73ba67480a8"
Jan 29 09:11:49 crc kubenswrapper[4826]: E0129 09:11:49.810161 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 09:12:04 crc kubenswrapper[4826]: I0129 09:12:04.810981 4826 scope.go:117] "RemoveContainer" containerID="060f3e6ddb7d2341eb86bbef7e0e8efdc59bfb9a3b64d8e2544dc73ba67480a8"
Jan 29 09:12:04 crc kubenswrapper[4826]: E0129 09:12:04.814976 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 09:12:15 crc kubenswrapper[4826]: I0129 09:12:15.809499 4826 scope.go:117] "RemoveContainer" containerID="060f3e6ddb7d2341eb86bbef7e0e8efdc59bfb9a3b64d8e2544dc73ba67480a8"
Jan 29 09:12:15 crc kubenswrapper[4826]: E0129 09:12:15.810213 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 09:12:26 crc kubenswrapper[4826]: I0129 09:12:26.809257 4826 scope.go:117] "RemoveContainer" containerID="060f3e6ddb7d2341eb86bbef7e0e8efdc59bfb9a3b64d8e2544dc73ba67480a8"
Jan 29 09:12:26 crc kubenswrapper[4826]: E0129 09:12:26.810162 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 09:12:41 crc kubenswrapper[4826]: I0129 09:12:41.809188 4826 scope.go:117] "RemoveContainer" containerID="060f3e6ddb7d2341eb86bbef7e0e8efdc59bfb9a3b64d8e2544dc73ba67480a8"
Jan 29 09:12:41 crc kubenswrapper[4826]: E0129 09:12:41.809977 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 09:12:54 crc kubenswrapper[4826]: I0129 09:12:54.812454 4826 scope.go:117] "RemoveContainer" containerID="060f3e6ddb7d2341eb86bbef7e0e8efdc59bfb9a3b64d8e2544dc73ba67480a8"
Jan 29 09:12:54 crc kubenswrapper[4826]: E0129 09:12:54.813600 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 09:13:07 crc kubenswrapper[4826]: I0129 09:13:07.810192 4826 scope.go:117] "RemoveContainer" containerID="060f3e6ddb7d2341eb86bbef7e0e8efdc59bfb9a3b64d8e2544dc73ba67480a8"
Jan 29 09:13:07 crc kubenswrapper[4826]: E0129 09:13:07.811068 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 09:13:21 crc kubenswrapper[4826]: I0129 09:13:21.809227 4826 scope.go:117] "RemoveContainer" containerID="060f3e6ddb7d2341eb86bbef7e0e8efdc59bfb9a3b64d8e2544dc73ba67480a8"
Jan 29 09:13:21 crc kubenswrapper[4826]: E0129 09:13:21.810171 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 09:13:36 crc kubenswrapper[4826]: I0129 09:13:36.815737 4826 scope.go:117] "RemoveContainer" containerID="060f3e6ddb7d2341eb86bbef7e0e8efdc59bfb9a3b64d8e2544dc73ba67480a8"
Jan 29 09:13:37 crc kubenswrapper[4826]: I0129 09:13:37.317983 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerStarted","Data":"9ba5ebbaa4eceee82e188aaf12baedbd94c80489195366feb67cd4ac984acff6"}
Jan 29 09:15:00 crc kubenswrapper[4826]: I0129 09:15:00.259359 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494635-zzmv9"]
Jan 29 09:15:00 crc kubenswrapper[4826]: E0129 09:15:00.260969 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d17d1b3-a443-446c-9bc6-b68ec9d272e4" containerName="extract-utilities"
Jan 29 09:15:00 crc kubenswrapper[4826]: I0129 09:15:00.260992 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d17d1b3-a443-446c-9bc6-b68ec9d272e4" containerName="extract-utilities"
Jan 29 09:15:00 crc kubenswrapper[4826]: E0129 09:15:00.261039 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d17d1b3-a443-446c-9bc6-b68ec9d272e4" containerName="registry-server"
Jan 29 09:15:00 crc kubenswrapper[4826]: I0129 09:15:00.261050 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d17d1b3-a443-446c-9bc6-b68ec9d272e4" containerName="registry-server"
Jan 29 09:15:00 crc kubenswrapper[4826]: E0129 09:15:00.261071 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d17d1b3-a443-446c-9bc6-b68ec9d272e4" containerName="extract-content"
Jan 29 09:15:00 crc kubenswrapper[4826]: I0129 09:15:00.261084 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d17d1b3-a443-446c-9bc6-b68ec9d272e4" containerName="extract-content"
Jan 29 09:15:00 crc kubenswrapper[4826]: I0129 09:15:00.261371 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d17d1b3-a443-446c-9bc6-b68ec9d272e4" containerName="registry-server"
Jan 29 09:15:00 crc kubenswrapper[4826]: I0129 09:15:00.263018 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-zzmv9"
Jan 29 09:15:00 crc kubenswrapper[4826]: I0129 09:15:00.266833 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 29 09:15:00 crc kubenswrapper[4826]: I0129 09:15:00.272044 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494635-zzmv9"]
Jan 29 09:15:00 crc kubenswrapper[4826]: I0129 09:15:00.273353 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 29 09:15:00 crc kubenswrapper[4826]: I0129 09:15:00.402548 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/92ab3ce6-4808-4a2c-b918-200f9477668f-secret-volume\") pod \"collect-profiles-29494635-zzmv9\" (UID: \"92ab3ce6-4808-4a2c-b918-200f9477668f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-zzmv9"
Jan 29 09:15:00 crc kubenswrapper[4826]: I0129 09:15:00.402974 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92ab3ce6-4808-4a2c-b918-200f9477668f-config-volume\") pod \"collect-profiles-29494635-zzmv9\" (UID: \"92ab3ce6-4808-4a2c-b918-200f9477668f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-zzmv9"
Jan 29 09:15:00 crc kubenswrapper[4826]: I0129 09:15:00.403035 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbxss\" (UniqueName: \"kubernetes.io/projected/92ab3ce6-4808-4a2c-b918-200f9477668f-kube-api-access-dbxss\") pod \"collect-profiles-29494635-zzmv9\" (UID: \"92ab3ce6-4808-4a2c-b918-200f9477668f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-zzmv9"
Jan 29 09:15:00 crc kubenswrapper[4826]: I0129 09:15:00.505995 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/92ab3ce6-4808-4a2c-b918-200f9477668f-secret-volume\") pod \"collect-profiles-29494635-zzmv9\" (UID: \"92ab3ce6-4808-4a2c-b918-200f9477668f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-zzmv9"
Jan 29 09:15:00 crc kubenswrapper[4826]: I0129 09:15:00.506064 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92ab3ce6-4808-4a2c-b918-200f9477668f-config-volume\") pod \"collect-profiles-29494635-zzmv9\" (UID: \"92ab3ce6-4808-4a2c-b918-200f9477668f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-zzmv9"
Jan 29 09:15:00 crc kubenswrapper[4826]: I0129 09:15:00.506104 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbxss\" (UniqueName: \"kubernetes.io/projected/92ab3ce6-4808-4a2c-b918-200f9477668f-kube-api-access-dbxss\") pod \"collect-profiles-29494635-zzmv9\" (UID: \"92ab3ce6-4808-4a2c-b918-200f9477668f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-zzmv9"
Jan 29 09:15:00 crc kubenswrapper[4826]: I0129 09:15:00.507468 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92ab3ce6-4808-4a2c-b918-200f9477668f-config-volume\") pod \"collect-profiles-29494635-zzmv9\" (UID: \"92ab3ce6-4808-4a2c-b918-200f9477668f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-zzmv9"
Jan 29 09:15:00 crc kubenswrapper[4826]: I0129 09:15:00.513259 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/92ab3ce6-4808-4a2c-b918-200f9477668f-secret-volume\") pod \"collect-profiles-29494635-zzmv9\" (UID: \"92ab3ce6-4808-4a2c-b918-200f9477668f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-zzmv9"
Jan 29 09:15:00 crc kubenswrapper[4826]: I0129 09:15:00.528527 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbxss\" (UniqueName: \"kubernetes.io/projected/92ab3ce6-4808-4a2c-b918-200f9477668f-kube-api-access-dbxss\") pod \"collect-profiles-29494635-zzmv9\" (UID: \"92ab3ce6-4808-4a2c-b918-200f9477668f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-zzmv9"
Jan 29 09:15:00 crc kubenswrapper[4826]: I0129 09:15:00.599836 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-zzmv9"
Jan 29 09:15:01 crc kubenswrapper[4826]: I0129 09:15:01.234552 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494635-zzmv9"]
Jan 29 09:15:02 crc kubenswrapper[4826]: I0129 09:15:02.149313 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-zzmv9" event={"ID":"92ab3ce6-4808-4a2c-b918-200f9477668f","Type":"ContainerStarted","Data":"7d883203121bd45b947371e22fe020168453b3b7cc4545f6bb6dbdda741664b7"}
Jan 29 09:15:02 crc kubenswrapper[4826]: I0129 09:15:02.149639 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-zzmv9" event={"ID":"92ab3ce6-4808-4a2c-b918-200f9477668f","Type":"ContainerStarted","Data":"9417f8eb06dcc1c6b41114c8e44ba79d3d5c3cd5bf6ab5eafda53d51f0fe8a54"}
Jan 29 09:15:02 crc kubenswrapper[4826]: I0129 09:15:02.174161 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-zzmv9" podStartSLOduration=2.174142637 podStartE2EDuration="2.174142637s" podCreationTimestamp="2026-01-29 09:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:15:02.163521093 +0000 UTC m=+9086.025314162" watchObservedRunningTime="2026-01-29 09:15:02.174142637 +0000 UTC m=+9086.035935706"
Jan 29 09:15:03 crc kubenswrapper[4826]: I0129 09:15:03.160244 4826 generic.go:334] "Generic (PLEG): container finished" podID="92ab3ce6-4808-4a2c-b918-200f9477668f" containerID="7d883203121bd45b947371e22fe020168453b3b7cc4545f6bb6dbdda741664b7" exitCode=0
Jan 29 09:15:03 crc kubenswrapper[4826]: I0129 09:15:03.160312 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-zzmv9" event={"ID":"92ab3ce6-4808-4a2c-b918-200f9477668f","Type":"ContainerDied","Data":"7d883203121bd45b947371e22fe020168453b3b7cc4545f6bb6dbdda741664b7"}
Jan 29 09:15:04 crc kubenswrapper[4826]: I0129 09:15:04.680477 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-zzmv9"
Jan 29 09:15:04 crc kubenswrapper[4826]: I0129 09:15:04.809722 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/92ab3ce6-4808-4a2c-b918-200f9477668f-secret-volume\") pod \"92ab3ce6-4808-4a2c-b918-200f9477668f\" (UID: \"92ab3ce6-4808-4a2c-b918-200f9477668f\") "
Jan 29 09:15:04 crc kubenswrapper[4826]: I0129 09:15:04.809886 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92ab3ce6-4808-4a2c-b918-200f9477668f-config-volume\") pod \"92ab3ce6-4808-4a2c-b918-200f9477668f\" (UID: \"92ab3ce6-4808-4a2c-b918-200f9477668f\") "
Jan 29 09:15:04 crc kubenswrapper[4826]: I0129 09:15:04.809979 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbxss\" (UniqueName: \"kubernetes.io/projected/92ab3ce6-4808-4a2c-b918-200f9477668f-kube-api-access-dbxss\") pod \"92ab3ce6-4808-4a2c-b918-200f9477668f\" (UID: \"92ab3ce6-4808-4a2c-b918-200f9477668f\") "
Jan 29 09:15:04 crc kubenswrapper[4826]: I0129 09:15:04.810598 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92ab3ce6-4808-4a2c-b918-200f9477668f-config-volume" (OuterVolumeSpecName: "config-volume") pod "92ab3ce6-4808-4a2c-b918-200f9477668f" (UID: "92ab3ce6-4808-4a2c-b918-200f9477668f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 09:15:04 crc kubenswrapper[4826]: I0129 09:15:04.810977 4826 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92ab3ce6-4808-4a2c-b918-200f9477668f-config-volume\") on node \"crc\" DevicePath \"\""
Jan 29 09:15:04 crc kubenswrapper[4826]: I0129 09:15:04.819044 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92ab3ce6-4808-4a2c-b918-200f9477668f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "92ab3ce6-4808-4a2c-b918-200f9477668f" (UID: "92ab3ce6-4808-4a2c-b918-200f9477668f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 09:15:04 crc kubenswrapper[4826]: I0129 09:15:04.820179 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92ab3ce6-4808-4a2c-b918-200f9477668f-kube-api-access-dbxss" (OuterVolumeSpecName: "kube-api-access-dbxss") pod "92ab3ce6-4808-4a2c-b918-200f9477668f" (UID: "92ab3ce6-4808-4a2c-b918-200f9477668f"). InnerVolumeSpecName "kube-api-access-dbxss". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:15:04 crc kubenswrapper[4826]: I0129 09:15:04.913701 4826 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/92ab3ce6-4808-4a2c-b918-200f9477668f-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 29 09:15:04 crc kubenswrapper[4826]: I0129 09:15:04.914047 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbxss\" (UniqueName: \"kubernetes.io/projected/92ab3ce6-4808-4a2c-b918-200f9477668f-kube-api-access-dbxss\") on node \"crc\" DevicePath \"\""
Jan 29 09:15:05 crc kubenswrapper[4826]: I0129 09:15:05.182732 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-zzmv9" event={"ID":"92ab3ce6-4808-4a2c-b918-200f9477668f","Type":"ContainerDied","Data":"9417f8eb06dcc1c6b41114c8e44ba79d3d5c3cd5bf6ab5eafda53d51f0fe8a54"}
Jan 29 09:15:05 crc kubenswrapper[4826]: I0129 09:15:05.182781 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9417f8eb06dcc1c6b41114c8e44ba79d3d5c3cd5bf6ab5eafda53d51f0fe8a54"
Jan 29 09:15:05 crc kubenswrapper[4826]: I0129 09:15:05.182839 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494635-zzmv9"
Jan 29 09:15:05 crc kubenswrapper[4826]: I0129 09:15:05.323587 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494590-99cmn"]
Jan 29 09:15:05 crc kubenswrapper[4826]: I0129 09:15:05.334668 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494590-99cmn"]
Jan 29 09:15:06 crc kubenswrapper[4826]: I0129 09:15:06.834260 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773" path="/var/lib/kubelet/pods/1b9f5c12-dc7b-4b9d-9dbf-dc6a4c542773/volumes"
Jan 29 09:15:25 crc kubenswrapper[4826]: I0129 09:15:25.944071 4826 scope.go:117] "RemoveContainer" containerID="4092bae3e04c42d6237216faa6b767404ae55a635d96baba88ed833c299c5a98"
Jan 29 09:16:05 crc kubenswrapper[4826]: I0129 09:16:05.660116 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 09:16:05 crc kubenswrapper[4826]: I0129 09:16:05.660602 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 09:16:35 crc kubenswrapper[4826]: I0129 09:16:35.656890 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 09:16:35 crc kubenswrapper[4826]: I0129 09:16:35.658020 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 09:17:05 crc kubenswrapper[4826]: I0129 09:17:05.656413 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 09:17:05 crc kubenswrapper[4826]: I0129 09:17:05.657010 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 09:17:05 crc kubenswrapper[4826]: I0129 09:17:05.657053 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-llzmh"
Jan 29 09:17:05 crc kubenswrapper[4826]: I0129 09:17:05.657998 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9ba5ebbaa4eceee82e188aaf12baedbd94c80489195366feb67cd4ac984acff6"} pod="openshift-machine-config-operator/machine-config-daemon-llzmh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 29 09:17:05 crc kubenswrapper[4826]: I0129 09:17:05.658062 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" containerID="cri-o://9ba5ebbaa4eceee82e188aaf12baedbd94c80489195366feb67cd4ac984acff6" gracePeriod=600
Jan 29 09:17:06 crc kubenswrapper[4826]: I0129 09:17:06.425220 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerDied","Data":"9ba5ebbaa4eceee82e188aaf12baedbd94c80489195366feb67cd4ac984acff6"}
Jan 29 09:17:06 crc kubenswrapper[4826]: I0129 09:17:06.425572 4826 scope.go:117] "RemoveContainer" containerID="060f3e6ddb7d2341eb86bbef7e0e8efdc59bfb9a3b64d8e2544dc73ba67480a8"
Jan 29 09:17:06 crc kubenswrapper[4826]: I0129 09:17:06.425137 4826 generic.go:334] "Generic (PLEG): container finished" podID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerID="9ba5ebbaa4eceee82e188aaf12baedbd94c80489195366feb67cd4ac984acff6" exitCode=0
Jan 29 09:17:06 crc kubenswrapper[4826]: I0129 09:17:06.425664 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerStarted","Data":"b1925ae332138348b1564740311cdf70107a953d6f523c1ed54020f639f51a42"}
Jan 29 09:18:32 crc kubenswrapper[4826]: I0129 09:18:32.452345 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cp5b7"]
Jan 29 09:18:32 crc kubenswrapper[4826]: E0129 09:18:32.454451 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92ab3ce6-4808-4a2c-b918-200f9477668f" containerName="collect-profiles"
Jan 29 09:18:32 crc kubenswrapper[4826]: I0129 09:18:32.454598 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="92ab3ce6-4808-4a2c-b918-200f9477668f" containerName="collect-profiles"
Jan 29 09:18:32 crc kubenswrapper[4826]: I0129 09:18:32.454942 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="92ab3ce6-4808-4a2c-b918-200f9477668f" containerName="collect-profiles"
Jan 29 09:18:32 crc kubenswrapper[4826]: I0129 09:18:32.457048 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cp5b7"
Jan 29 09:18:32 crc kubenswrapper[4826]: I0129 09:18:32.461822 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cp5b7"]
Jan 29 09:18:32 crc kubenswrapper[4826]: I0129 09:18:32.569927 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3917de66-a39a-4e89-b192-5e2f342827d3-catalog-content\") pod \"redhat-operators-cp5b7\" (UID: \"3917de66-a39a-4e89-b192-5e2f342827d3\") " pod="openshift-marketplace/redhat-operators-cp5b7"
Jan 29 09:18:32 crc kubenswrapper[4826]: I0129 09:18:32.569997 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3917de66-a39a-4e89-b192-5e2f342827d3-utilities\") pod \"redhat-operators-cp5b7\" (UID: \"3917de66-a39a-4e89-b192-5e2f342827d3\") " pod="openshift-marketplace/redhat-operators-cp5b7"
Jan 29 09:18:32 crc kubenswrapper[4826]: I0129 09:18:32.570331 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m7xc\" (UniqueName: \"kubernetes.io/projected/3917de66-a39a-4e89-b192-5e2f342827d3-kube-api-access-2m7xc\") pod \"redhat-operators-cp5b7\" (UID: \"3917de66-a39a-4e89-b192-5e2f342827d3\") " pod="openshift-marketplace/redhat-operators-cp5b7"
Jan 29 09:18:32 crc kubenswrapper[4826]: I0129 09:18:32.672719 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m7xc\" (UniqueName: \"kubernetes.io/projected/3917de66-a39a-4e89-b192-5e2f342827d3-kube-api-access-2m7xc\") pod \"redhat-operators-cp5b7\" (UID: \"3917de66-a39a-4e89-b192-5e2f342827d3\") " pod="openshift-marketplace/redhat-operators-cp5b7"
Jan 29 09:18:32 crc kubenswrapper[4826]: I0129 09:18:32.672958 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3917de66-a39a-4e89-b192-5e2f342827d3-catalog-content\") pod \"redhat-operators-cp5b7\" (UID: \"3917de66-a39a-4e89-b192-5e2f342827d3\") " pod="openshift-marketplace/redhat-operators-cp5b7"
Jan 29 09:18:32 crc kubenswrapper[4826]: I0129 09:18:32.673024 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3917de66-a39a-4e89-b192-5e2f342827d3-utilities\") pod \"redhat-operators-cp5b7\" (UID: \"3917de66-a39a-4e89-b192-5e2f342827d3\") " pod="openshift-marketplace/redhat-operators-cp5b7"
Jan 29 09:18:32 crc kubenswrapper[4826]: I0129 09:18:32.674127 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3917de66-a39a-4e89-b192-5e2f342827d3-utilities\") pod \"redhat-operators-cp5b7\" (UID: \"3917de66-a39a-4e89-b192-5e2f342827d3\") " pod="openshift-marketplace/redhat-operators-cp5b7"
Jan 29 09:18:32 crc kubenswrapper[4826]: I0129 09:18:32.687837 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3917de66-a39a-4e89-b192-5e2f342827d3-catalog-content\") pod \"redhat-operators-cp5b7\" (UID: \"3917de66-a39a-4e89-b192-5e2f342827d3\") " pod="openshift-marketplace/redhat-operators-cp5b7"
Jan 29 09:18:32 crc kubenswrapper[4826]: I0129 09:18:32.717320 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m7xc\" (UniqueName: \"kubernetes.io/projected/3917de66-a39a-4e89-b192-5e2f342827d3-kube-api-access-2m7xc\") pod \"redhat-operators-cp5b7\" (UID: \"3917de66-a39a-4e89-b192-5e2f342827d3\") "
pod="openshift-marketplace/redhat-operators-cp5b7" Jan 29 09:18:32 crc kubenswrapper[4826]: I0129 09:18:32.785396 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cp5b7" Jan 29 09:18:33 crc kubenswrapper[4826]: I0129 09:18:33.332727 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cp5b7"] Jan 29 09:18:33 crc kubenswrapper[4826]: I0129 09:18:33.622259 4826 generic.go:334] "Generic (PLEG): container finished" podID="3917de66-a39a-4e89-b192-5e2f342827d3" containerID="692f3dc033759c343a047a44ef1718418485d9399d00dbad1fa90d251b7a5de4" exitCode=0 Jan 29 09:18:33 crc kubenswrapper[4826]: I0129 09:18:33.622414 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cp5b7" event={"ID":"3917de66-a39a-4e89-b192-5e2f342827d3","Type":"ContainerDied","Data":"692f3dc033759c343a047a44ef1718418485d9399d00dbad1fa90d251b7a5de4"} Jan 29 09:18:33 crc kubenswrapper[4826]: I0129 09:18:33.622451 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cp5b7" event={"ID":"3917de66-a39a-4e89-b192-5e2f342827d3","Type":"ContainerStarted","Data":"f0cd65d5f2fa075257a22afb48d31f2be07fbd35802c36c0a8342abc9bf4c3a8"} Jan 29 09:18:33 crc kubenswrapper[4826]: I0129 09:18:33.625038 4826 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 09:18:35 crc kubenswrapper[4826]: I0129 09:18:35.649083 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cp5b7" event={"ID":"3917de66-a39a-4e89-b192-5e2f342827d3","Type":"ContainerStarted","Data":"a6ffd234e33d8aa1ea9200a46fc6885d4ee86f04acf0d3754067f7fc41b3a6bf"} Jan 29 09:18:38 crc kubenswrapper[4826]: I0129 09:18:38.677586 4826 generic.go:334] "Generic (PLEG): container finished" podID="3917de66-a39a-4e89-b192-5e2f342827d3" 
containerID="a6ffd234e33d8aa1ea9200a46fc6885d4ee86f04acf0d3754067f7fc41b3a6bf" exitCode=0 Jan 29 09:18:38 crc kubenswrapper[4826]: I0129 09:18:38.677688 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cp5b7" event={"ID":"3917de66-a39a-4e89-b192-5e2f342827d3","Type":"ContainerDied","Data":"a6ffd234e33d8aa1ea9200a46fc6885d4ee86f04acf0d3754067f7fc41b3a6bf"} Jan 29 09:18:39 crc kubenswrapper[4826]: I0129 09:18:39.690390 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cp5b7" event={"ID":"3917de66-a39a-4e89-b192-5e2f342827d3","Type":"ContainerStarted","Data":"076d6d6a300694ff9566f3fff5c221d121c7f15cb0448ce3701a9f13279a2929"} Jan 29 09:18:39 crc kubenswrapper[4826]: I0129 09:18:39.717411 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cp5b7" podStartSLOduration=2.151123093 podStartE2EDuration="7.717388842s" podCreationTimestamp="2026-01-29 09:18:32 +0000 UTC" firstStartedPulling="2026-01-29 09:18:33.624735503 +0000 UTC m=+9297.486528572" lastFinishedPulling="2026-01-29 09:18:39.191001252 +0000 UTC m=+9303.052794321" observedRunningTime="2026-01-29 09:18:39.710951621 +0000 UTC m=+9303.572744720" watchObservedRunningTime="2026-01-29 09:18:39.717388842 +0000 UTC m=+9303.579181911" Jan 29 09:18:42 crc kubenswrapper[4826]: I0129 09:18:42.786506 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cp5b7" Jan 29 09:18:42 crc kubenswrapper[4826]: I0129 09:18:42.787954 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cp5b7" Jan 29 09:18:44 crc kubenswrapper[4826]: I0129 09:18:44.336899 4826 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cp5b7" podUID="3917de66-a39a-4e89-b192-5e2f342827d3" containerName="registry-server" 
probeResult="failure" output=< Jan 29 09:18:44 crc kubenswrapper[4826]: timeout: failed to connect service ":50051" within 1s Jan 29 09:18:44 crc kubenswrapper[4826]: > Jan 29 09:18:53 crc kubenswrapper[4826]: I0129 09:18:53.336782 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cp5b7" Jan 29 09:18:53 crc kubenswrapper[4826]: I0129 09:18:53.388173 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cp5b7" Jan 29 09:18:53 crc kubenswrapper[4826]: I0129 09:18:53.575375 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cp5b7"] Jan 29 09:18:54 crc kubenswrapper[4826]: I0129 09:18:54.850511 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cp5b7" podUID="3917de66-a39a-4e89-b192-5e2f342827d3" containerName="registry-server" containerID="cri-o://076d6d6a300694ff9566f3fff5c221d121c7f15cb0448ce3701a9f13279a2929" gracePeriod=2 Jan 29 09:18:55 crc kubenswrapper[4826]: I0129 09:18:55.652788 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cp5b7" Jan 29 09:18:55 crc kubenswrapper[4826]: I0129 09:18:55.750731 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3917de66-a39a-4e89-b192-5e2f342827d3-catalog-content\") pod \"3917de66-a39a-4e89-b192-5e2f342827d3\" (UID: \"3917de66-a39a-4e89-b192-5e2f342827d3\") " Jan 29 09:18:55 crc kubenswrapper[4826]: I0129 09:18:55.750852 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2m7xc\" (UniqueName: \"kubernetes.io/projected/3917de66-a39a-4e89-b192-5e2f342827d3-kube-api-access-2m7xc\") pod \"3917de66-a39a-4e89-b192-5e2f342827d3\" (UID: \"3917de66-a39a-4e89-b192-5e2f342827d3\") " Jan 29 09:18:55 crc kubenswrapper[4826]: I0129 09:18:55.750961 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3917de66-a39a-4e89-b192-5e2f342827d3-utilities\") pod \"3917de66-a39a-4e89-b192-5e2f342827d3\" (UID: \"3917de66-a39a-4e89-b192-5e2f342827d3\") " Jan 29 09:18:55 crc kubenswrapper[4826]: I0129 09:18:55.751949 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3917de66-a39a-4e89-b192-5e2f342827d3-utilities" (OuterVolumeSpecName: "utilities") pod "3917de66-a39a-4e89-b192-5e2f342827d3" (UID: "3917de66-a39a-4e89-b192-5e2f342827d3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:18:55 crc kubenswrapper[4826]: I0129 09:18:55.756864 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3917de66-a39a-4e89-b192-5e2f342827d3-kube-api-access-2m7xc" (OuterVolumeSpecName: "kube-api-access-2m7xc") pod "3917de66-a39a-4e89-b192-5e2f342827d3" (UID: "3917de66-a39a-4e89-b192-5e2f342827d3"). InnerVolumeSpecName "kube-api-access-2m7xc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:18:55 crc kubenswrapper[4826]: I0129 09:18:55.853960 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2m7xc\" (UniqueName: \"kubernetes.io/projected/3917de66-a39a-4e89-b192-5e2f342827d3-kube-api-access-2m7xc\") on node \"crc\" DevicePath \"\"" Jan 29 09:18:55 crc kubenswrapper[4826]: I0129 09:18:55.853997 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3917de66-a39a-4e89-b192-5e2f342827d3-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:18:55 crc kubenswrapper[4826]: I0129 09:18:55.862944 4826 generic.go:334] "Generic (PLEG): container finished" podID="3917de66-a39a-4e89-b192-5e2f342827d3" containerID="076d6d6a300694ff9566f3fff5c221d121c7f15cb0448ce3701a9f13279a2929" exitCode=0 Jan 29 09:18:55 crc kubenswrapper[4826]: I0129 09:18:55.862999 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cp5b7" event={"ID":"3917de66-a39a-4e89-b192-5e2f342827d3","Type":"ContainerDied","Data":"076d6d6a300694ff9566f3fff5c221d121c7f15cb0448ce3701a9f13279a2929"} Jan 29 09:18:55 crc kubenswrapper[4826]: I0129 09:18:55.863058 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cp5b7" event={"ID":"3917de66-a39a-4e89-b192-5e2f342827d3","Type":"ContainerDied","Data":"f0cd65d5f2fa075257a22afb48d31f2be07fbd35802c36c0a8342abc9bf4c3a8"} Jan 29 09:18:55 crc kubenswrapper[4826]: I0129 09:18:55.863080 4826 scope.go:117] "RemoveContainer" containerID="076d6d6a300694ff9566f3fff5c221d121c7f15cb0448ce3701a9f13279a2929" Jan 29 09:18:55 crc kubenswrapper[4826]: I0129 09:18:55.863367 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cp5b7" Jan 29 09:18:55 crc kubenswrapper[4826]: I0129 09:18:55.877617 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3917de66-a39a-4e89-b192-5e2f342827d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3917de66-a39a-4e89-b192-5e2f342827d3" (UID: "3917de66-a39a-4e89-b192-5e2f342827d3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:18:55 crc kubenswrapper[4826]: I0129 09:18:55.887878 4826 scope.go:117] "RemoveContainer" containerID="a6ffd234e33d8aa1ea9200a46fc6885d4ee86f04acf0d3754067f7fc41b3a6bf" Jan 29 09:18:55 crc kubenswrapper[4826]: I0129 09:18:55.932114 4826 scope.go:117] "RemoveContainer" containerID="692f3dc033759c343a047a44ef1718418485d9399d00dbad1fa90d251b7a5de4" Jan 29 09:18:55 crc kubenswrapper[4826]: I0129 09:18:55.957864 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3917de66-a39a-4e89-b192-5e2f342827d3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:18:55 crc kubenswrapper[4826]: I0129 09:18:55.969843 4826 scope.go:117] "RemoveContainer" containerID="076d6d6a300694ff9566f3fff5c221d121c7f15cb0448ce3701a9f13279a2929" Jan 29 09:18:55 crc kubenswrapper[4826]: E0129 09:18:55.971992 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"076d6d6a300694ff9566f3fff5c221d121c7f15cb0448ce3701a9f13279a2929\": container with ID starting with 076d6d6a300694ff9566f3fff5c221d121c7f15cb0448ce3701a9f13279a2929 not found: ID does not exist" containerID="076d6d6a300694ff9566f3fff5c221d121c7f15cb0448ce3701a9f13279a2929" Jan 29 09:18:55 crc kubenswrapper[4826]: I0129 09:18:55.972051 4826 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"076d6d6a300694ff9566f3fff5c221d121c7f15cb0448ce3701a9f13279a2929"} err="failed to get container status \"076d6d6a300694ff9566f3fff5c221d121c7f15cb0448ce3701a9f13279a2929\": rpc error: code = NotFound desc = could not find container \"076d6d6a300694ff9566f3fff5c221d121c7f15cb0448ce3701a9f13279a2929\": container with ID starting with 076d6d6a300694ff9566f3fff5c221d121c7f15cb0448ce3701a9f13279a2929 not found: ID does not exist" Jan 29 09:18:55 crc kubenswrapper[4826]: I0129 09:18:55.972077 4826 scope.go:117] "RemoveContainer" containerID="a6ffd234e33d8aa1ea9200a46fc6885d4ee86f04acf0d3754067f7fc41b3a6bf" Jan 29 09:18:55 crc kubenswrapper[4826]: E0129 09:18:55.972805 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6ffd234e33d8aa1ea9200a46fc6885d4ee86f04acf0d3754067f7fc41b3a6bf\": container with ID starting with a6ffd234e33d8aa1ea9200a46fc6885d4ee86f04acf0d3754067f7fc41b3a6bf not found: ID does not exist" containerID="a6ffd234e33d8aa1ea9200a46fc6885d4ee86f04acf0d3754067f7fc41b3a6bf" Jan 29 09:18:55 crc kubenswrapper[4826]: I0129 09:18:55.972845 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6ffd234e33d8aa1ea9200a46fc6885d4ee86f04acf0d3754067f7fc41b3a6bf"} err="failed to get container status \"a6ffd234e33d8aa1ea9200a46fc6885d4ee86f04acf0d3754067f7fc41b3a6bf\": rpc error: code = NotFound desc = could not find container \"a6ffd234e33d8aa1ea9200a46fc6885d4ee86f04acf0d3754067f7fc41b3a6bf\": container with ID starting with a6ffd234e33d8aa1ea9200a46fc6885d4ee86f04acf0d3754067f7fc41b3a6bf not found: ID does not exist" Jan 29 09:18:55 crc kubenswrapper[4826]: I0129 09:18:55.972861 4826 scope.go:117] "RemoveContainer" containerID="692f3dc033759c343a047a44ef1718418485d9399d00dbad1fa90d251b7a5de4" Jan 29 09:18:55 crc kubenswrapper[4826]: E0129 09:18:55.973203 4826 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"692f3dc033759c343a047a44ef1718418485d9399d00dbad1fa90d251b7a5de4\": container with ID starting with 692f3dc033759c343a047a44ef1718418485d9399d00dbad1fa90d251b7a5de4 not found: ID does not exist" containerID="692f3dc033759c343a047a44ef1718418485d9399d00dbad1fa90d251b7a5de4" Jan 29 09:18:55 crc kubenswrapper[4826]: I0129 09:18:55.973257 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"692f3dc033759c343a047a44ef1718418485d9399d00dbad1fa90d251b7a5de4"} err="failed to get container status \"692f3dc033759c343a047a44ef1718418485d9399d00dbad1fa90d251b7a5de4\": rpc error: code = NotFound desc = could not find container \"692f3dc033759c343a047a44ef1718418485d9399d00dbad1fa90d251b7a5de4\": container with ID starting with 692f3dc033759c343a047a44ef1718418485d9399d00dbad1fa90d251b7a5de4 not found: ID does not exist" Jan 29 09:18:56 crc kubenswrapper[4826]: I0129 09:18:56.199349 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cp5b7"] Jan 29 09:18:56 crc kubenswrapper[4826]: I0129 09:18:56.208888 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cp5b7"] Jan 29 09:18:56 crc kubenswrapper[4826]: I0129 09:18:56.821881 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3917de66-a39a-4e89-b192-5e2f342827d3" path="/var/lib/kubelet/pods/3917de66-a39a-4e89-b192-5e2f342827d3/volumes" Jan 29 09:19:05 crc kubenswrapper[4826]: I0129 09:19:05.656125 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:19:05 crc kubenswrapper[4826]: I0129 09:19:05.656883 4826 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:19:35 crc kubenswrapper[4826]: I0129 09:19:35.655861 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:19:35 crc kubenswrapper[4826]: I0129 09:19:35.656428 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:20:05 crc kubenswrapper[4826]: I0129 09:20:05.656724 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:20:05 crc kubenswrapper[4826]: I0129 09:20:05.657197 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:20:05 crc kubenswrapper[4826]: I0129 09:20:05.657237 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" 
Jan 29 09:20:05 crc kubenswrapper[4826]: I0129 09:20:05.657977 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b1925ae332138348b1564740311cdf70107a953d6f523c1ed54020f639f51a42"} pod="openshift-machine-config-operator/machine-config-daemon-llzmh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 09:20:05 crc kubenswrapper[4826]: I0129 09:20:05.658025 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" containerID="cri-o://b1925ae332138348b1564740311cdf70107a953d6f523c1ed54020f639f51a42" gracePeriod=600 Jan 29 09:20:06 crc kubenswrapper[4826]: E0129 09:20:06.374395 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:20:06 crc kubenswrapper[4826]: I0129 09:20:06.778213 4826 generic.go:334] "Generic (PLEG): container finished" podID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerID="b1925ae332138348b1564740311cdf70107a953d6f523c1ed54020f639f51a42" exitCode=0 Jan 29 09:20:06 crc kubenswrapper[4826]: I0129 09:20:06.778287 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerDied","Data":"b1925ae332138348b1564740311cdf70107a953d6f523c1ed54020f639f51a42"} Jan 29 09:20:06 crc kubenswrapper[4826]: I0129 09:20:06.778517 4826 scope.go:117] 
"RemoveContainer" containerID="9ba5ebbaa4eceee82e188aaf12baedbd94c80489195366feb67cd4ac984acff6" Jan 29 09:20:06 crc kubenswrapper[4826]: I0129 09:20:06.779162 4826 scope.go:117] "RemoveContainer" containerID="b1925ae332138348b1564740311cdf70107a953d6f523c1ed54020f639f51a42" Jan 29 09:20:06 crc kubenswrapper[4826]: E0129 09:20:06.779460 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:20:19 crc kubenswrapper[4826]: I0129 09:20:19.810287 4826 scope.go:117] "RemoveContainer" containerID="b1925ae332138348b1564740311cdf70107a953d6f523c1ed54020f639f51a42" Jan 29 09:20:19 crc kubenswrapper[4826]: E0129 09:20:19.811134 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:20:32 crc kubenswrapper[4826]: I0129 09:20:32.808830 4826 scope.go:117] "RemoveContainer" containerID="b1925ae332138348b1564740311cdf70107a953d6f523c1ed54020f639f51a42" Jan 29 09:20:32 crc kubenswrapper[4826]: E0129 09:20:32.809875 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:20:37 crc kubenswrapper[4826]: I0129 09:20:37.863453 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dvrpt"] Jan 29 09:20:37 crc kubenswrapper[4826]: E0129 09:20:37.864495 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3917de66-a39a-4e89-b192-5e2f342827d3" containerName="registry-server" Jan 29 09:20:37 crc kubenswrapper[4826]: I0129 09:20:37.864510 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="3917de66-a39a-4e89-b192-5e2f342827d3" containerName="registry-server" Jan 29 09:20:37 crc kubenswrapper[4826]: E0129 09:20:37.864526 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3917de66-a39a-4e89-b192-5e2f342827d3" containerName="extract-utilities" Jan 29 09:20:37 crc kubenswrapper[4826]: I0129 09:20:37.864534 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="3917de66-a39a-4e89-b192-5e2f342827d3" containerName="extract-utilities" Jan 29 09:20:37 crc kubenswrapper[4826]: E0129 09:20:37.864550 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3917de66-a39a-4e89-b192-5e2f342827d3" containerName="extract-content" Jan 29 09:20:37 crc kubenswrapper[4826]: I0129 09:20:37.864557 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="3917de66-a39a-4e89-b192-5e2f342827d3" containerName="extract-content" Jan 29 09:20:37 crc kubenswrapper[4826]: I0129 09:20:37.864749 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="3917de66-a39a-4e89-b192-5e2f342827d3" containerName="registry-server" Jan 29 09:20:37 crc kubenswrapper[4826]: I0129 09:20:37.866340 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dvrpt" Jan 29 09:20:37 crc kubenswrapper[4826]: I0129 09:20:37.875372 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dvrpt"] Jan 29 09:20:37 crc kubenswrapper[4826]: I0129 09:20:37.970160 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6vxb\" (UniqueName: \"kubernetes.io/projected/73516359-d048-4197-8561-fa436555045e-kube-api-access-z6vxb\") pod \"redhat-marketplace-dvrpt\" (UID: \"73516359-d048-4197-8561-fa436555045e\") " pod="openshift-marketplace/redhat-marketplace-dvrpt" Jan 29 09:20:37 crc kubenswrapper[4826]: I0129 09:20:37.970566 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73516359-d048-4197-8561-fa436555045e-catalog-content\") pod \"redhat-marketplace-dvrpt\" (UID: \"73516359-d048-4197-8561-fa436555045e\") " pod="openshift-marketplace/redhat-marketplace-dvrpt" Jan 29 09:20:37 crc kubenswrapper[4826]: I0129 09:20:37.970614 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73516359-d048-4197-8561-fa436555045e-utilities\") pod \"redhat-marketplace-dvrpt\" (UID: \"73516359-d048-4197-8561-fa436555045e\") " pod="openshift-marketplace/redhat-marketplace-dvrpt" Jan 29 09:20:38 crc kubenswrapper[4826]: I0129 09:20:38.073445 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6vxb\" (UniqueName: \"kubernetes.io/projected/73516359-d048-4197-8561-fa436555045e-kube-api-access-z6vxb\") pod \"redhat-marketplace-dvrpt\" (UID: \"73516359-d048-4197-8561-fa436555045e\") " pod="openshift-marketplace/redhat-marketplace-dvrpt" Jan 29 09:20:38 crc kubenswrapper[4826]: I0129 09:20:38.073524 4826 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73516359-d048-4197-8561-fa436555045e-catalog-content\") pod \"redhat-marketplace-dvrpt\" (UID: \"73516359-d048-4197-8561-fa436555045e\") " pod="openshift-marketplace/redhat-marketplace-dvrpt" Jan 29 09:20:38 crc kubenswrapper[4826]: I0129 09:20:38.073559 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73516359-d048-4197-8561-fa436555045e-utilities\") pod \"redhat-marketplace-dvrpt\" (UID: \"73516359-d048-4197-8561-fa436555045e\") " pod="openshift-marketplace/redhat-marketplace-dvrpt" Jan 29 09:20:38 crc kubenswrapper[4826]: I0129 09:20:38.074319 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73516359-d048-4197-8561-fa436555045e-utilities\") pod \"redhat-marketplace-dvrpt\" (UID: \"73516359-d048-4197-8561-fa436555045e\") " pod="openshift-marketplace/redhat-marketplace-dvrpt" Jan 29 09:20:38 crc kubenswrapper[4826]: I0129 09:20:38.074389 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73516359-d048-4197-8561-fa436555045e-catalog-content\") pod \"redhat-marketplace-dvrpt\" (UID: \"73516359-d048-4197-8561-fa436555045e\") " pod="openshift-marketplace/redhat-marketplace-dvrpt" Jan 29 09:20:38 crc kubenswrapper[4826]: I0129 09:20:38.101389 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6vxb\" (UniqueName: \"kubernetes.io/projected/73516359-d048-4197-8561-fa436555045e-kube-api-access-z6vxb\") pod \"redhat-marketplace-dvrpt\" (UID: \"73516359-d048-4197-8561-fa436555045e\") " pod="openshift-marketplace/redhat-marketplace-dvrpt" Jan 29 09:20:38 crc kubenswrapper[4826]: I0129 09:20:38.195031 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dvrpt" Jan 29 09:20:38 crc kubenswrapper[4826]: I0129 09:20:38.705665 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dvrpt"] Jan 29 09:20:39 crc kubenswrapper[4826]: I0129 09:20:39.137826 4826 generic.go:334] "Generic (PLEG): container finished" podID="73516359-d048-4197-8561-fa436555045e" containerID="dbac2d0a58f4d4d05b826fa6f966cfdb1197203a4c081caf117408ebd79ddfba" exitCode=0 Jan 29 09:20:39 crc kubenswrapper[4826]: I0129 09:20:39.137922 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dvrpt" event={"ID":"73516359-d048-4197-8561-fa436555045e","Type":"ContainerDied","Data":"dbac2d0a58f4d4d05b826fa6f966cfdb1197203a4c081caf117408ebd79ddfba"} Jan 29 09:20:39 crc kubenswrapper[4826]: I0129 09:20:39.139114 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dvrpt" event={"ID":"73516359-d048-4197-8561-fa436555045e","Type":"ContainerStarted","Data":"88f960cd875f7343ccb58151b508696b9b2b53a961f883309c03e739faafd8c1"} Jan 29 09:20:40 crc kubenswrapper[4826]: I0129 09:20:40.152244 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dvrpt" event={"ID":"73516359-d048-4197-8561-fa436555045e","Type":"ContainerStarted","Data":"da37e42fd532332ddb12a4b3c9996ea345348675f7ac70768dff88ede7fe060e"} Jan 29 09:20:40 crc kubenswrapper[4826]: I0129 09:20:40.253664 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fjvml"] Jan 29 09:20:40 crc kubenswrapper[4826]: I0129 09:20:40.256065 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fjvml" Jan 29 09:20:40 crc kubenswrapper[4826]: I0129 09:20:40.266091 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fjvml"] Jan 29 09:20:40 crc kubenswrapper[4826]: I0129 09:20:40.330711 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cd53002-0ad0-403a-9651-8721309cd602-catalog-content\") pod \"community-operators-fjvml\" (UID: \"4cd53002-0ad0-403a-9651-8721309cd602\") " pod="openshift-marketplace/community-operators-fjvml" Jan 29 09:20:40 crc kubenswrapper[4826]: I0129 09:20:40.330790 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cd53002-0ad0-403a-9651-8721309cd602-utilities\") pod \"community-operators-fjvml\" (UID: \"4cd53002-0ad0-403a-9651-8721309cd602\") " pod="openshift-marketplace/community-operators-fjvml" Jan 29 09:20:40 crc kubenswrapper[4826]: I0129 09:20:40.330890 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5z7z\" (UniqueName: \"kubernetes.io/projected/4cd53002-0ad0-403a-9651-8721309cd602-kube-api-access-j5z7z\") pod \"community-operators-fjvml\" (UID: \"4cd53002-0ad0-403a-9651-8721309cd602\") " pod="openshift-marketplace/community-operators-fjvml" Jan 29 09:20:40 crc kubenswrapper[4826]: I0129 09:20:40.432993 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5z7z\" (UniqueName: \"kubernetes.io/projected/4cd53002-0ad0-403a-9651-8721309cd602-kube-api-access-j5z7z\") pod \"community-operators-fjvml\" (UID: \"4cd53002-0ad0-403a-9651-8721309cd602\") " pod="openshift-marketplace/community-operators-fjvml" Jan 29 09:20:40 crc kubenswrapper[4826]: I0129 09:20:40.433114 4826 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cd53002-0ad0-403a-9651-8721309cd602-catalog-content\") pod \"community-operators-fjvml\" (UID: \"4cd53002-0ad0-403a-9651-8721309cd602\") " pod="openshift-marketplace/community-operators-fjvml" Jan 29 09:20:40 crc kubenswrapper[4826]: I0129 09:20:40.433183 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cd53002-0ad0-403a-9651-8721309cd602-utilities\") pod \"community-operators-fjvml\" (UID: \"4cd53002-0ad0-403a-9651-8721309cd602\") " pod="openshift-marketplace/community-operators-fjvml" Jan 29 09:20:40 crc kubenswrapper[4826]: I0129 09:20:40.433701 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cd53002-0ad0-403a-9651-8721309cd602-utilities\") pod \"community-operators-fjvml\" (UID: \"4cd53002-0ad0-403a-9651-8721309cd602\") " pod="openshift-marketplace/community-operators-fjvml" Jan 29 09:20:40 crc kubenswrapper[4826]: I0129 09:20:40.434251 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cd53002-0ad0-403a-9651-8721309cd602-catalog-content\") pod \"community-operators-fjvml\" (UID: \"4cd53002-0ad0-403a-9651-8721309cd602\") " pod="openshift-marketplace/community-operators-fjvml" Jan 29 09:20:40 crc kubenswrapper[4826]: I0129 09:20:40.452802 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5z7z\" (UniqueName: \"kubernetes.io/projected/4cd53002-0ad0-403a-9651-8721309cd602-kube-api-access-j5z7z\") pod \"community-operators-fjvml\" (UID: \"4cd53002-0ad0-403a-9651-8721309cd602\") " pod="openshift-marketplace/community-operators-fjvml" Jan 29 09:20:40 crc kubenswrapper[4826]: I0129 09:20:40.588273 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fjvml" Jan 29 09:20:41 crc kubenswrapper[4826]: I0129 09:20:41.089609 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fjvml"] Jan 29 09:20:41 crc kubenswrapper[4826]: I0129 09:20:41.163873 4826 generic.go:334] "Generic (PLEG): container finished" podID="73516359-d048-4197-8561-fa436555045e" containerID="da37e42fd532332ddb12a4b3c9996ea345348675f7ac70768dff88ede7fe060e" exitCode=0 Jan 29 09:20:41 crc kubenswrapper[4826]: I0129 09:20:41.163973 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dvrpt" event={"ID":"73516359-d048-4197-8561-fa436555045e","Type":"ContainerDied","Data":"da37e42fd532332ddb12a4b3c9996ea345348675f7ac70768dff88ede7fe060e"} Jan 29 09:20:41 crc kubenswrapper[4826]: I0129 09:20:41.184347 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fjvml" event={"ID":"4cd53002-0ad0-403a-9651-8721309cd602","Type":"ContainerStarted","Data":"c633c4628c8364cf72424c64164e9ee1d7792dad3e1d2ee44c4ace219b1c80db"} Jan 29 09:20:42 crc kubenswrapper[4826]: I0129 09:20:42.194484 4826 generic.go:334] "Generic (PLEG): container finished" podID="4cd53002-0ad0-403a-9651-8721309cd602" containerID="3fac5144e4cbc93159157798a83724eb58ee2e29dd4176a80b7c07becf8a472c" exitCode=0 Jan 29 09:20:42 crc kubenswrapper[4826]: I0129 09:20:42.194555 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fjvml" event={"ID":"4cd53002-0ad0-403a-9651-8721309cd602","Type":"ContainerDied","Data":"3fac5144e4cbc93159157798a83724eb58ee2e29dd4176a80b7c07becf8a472c"} Jan 29 09:20:42 crc kubenswrapper[4826]: I0129 09:20:42.198168 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dvrpt" 
event={"ID":"73516359-d048-4197-8561-fa436555045e","Type":"ContainerStarted","Data":"8f4c9eff10669100f9fc29880ab5575e2e39d0d88551d91b624d54d206e751ff"} Jan 29 09:20:42 crc kubenswrapper[4826]: I0129 09:20:42.241731 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dvrpt" podStartSLOduration=2.759427384 podStartE2EDuration="5.241713996s" podCreationTimestamp="2026-01-29 09:20:37 +0000 UTC" firstStartedPulling="2026-01-29 09:20:39.141834749 +0000 UTC m=+9423.003627818" lastFinishedPulling="2026-01-29 09:20:41.624121361 +0000 UTC m=+9425.485914430" observedRunningTime="2026-01-29 09:20:42.23582491 +0000 UTC m=+9426.097617979" watchObservedRunningTime="2026-01-29 09:20:42.241713996 +0000 UTC m=+9426.103507065" Jan 29 09:20:43 crc kubenswrapper[4826]: I0129 09:20:43.210422 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fjvml" event={"ID":"4cd53002-0ad0-403a-9651-8721309cd602","Type":"ContainerStarted","Data":"2e0e20f0a1917239401e11a4dc54ec95f8f06857e70c52ea1ac34fb948c49db8"} Jan 29 09:20:45 crc kubenswrapper[4826]: I0129 09:20:45.229229 4826 generic.go:334] "Generic (PLEG): container finished" podID="4cd53002-0ad0-403a-9651-8721309cd602" containerID="2e0e20f0a1917239401e11a4dc54ec95f8f06857e70c52ea1ac34fb948c49db8" exitCode=0 Jan 29 09:20:45 crc kubenswrapper[4826]: I0129 09:20:45.229332 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fjvml" event={"ID":"4cd53002-0ad0-403a-9651-8721309cd602","Type":"ContainerDied","Data":"2e0e20f0a1917239401e11a4dc54ec95f8f06857e70c52ea1ac34fb948c49db8"} Jan 29 09:20:45 crc kubenswrapper[4826]: I0129 09:20:45.810741 4826 scope.go:117] "RemoveContainer" containerID="b1925ae332138348b1564740311cdf70107a953d6f523c1ed54020f639f51a42" Jan 29 09:20:45 crc kubenswrapper[4826]: E0129 09:20:45.811105 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:20:46 crc kubenswrapper[4826]: I0129 09:20:46.241085 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fjvml" event={"ID":"4cd53002-0ad0-403a-9651-8721309cd602","Type":"ContainerStarted","Data":"d1bdb1e2290f455b2ff2afb3ec64088b4f3136ffd865b315946fde41946ade47"} Jan 29 09:20:46 crc kubenswrapper[4826]: I0129 09:20:46.264058 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fjvml" podStartSLOduration=2.698070923 podStartE2EDuration="6.264043365s" podCreationTimestamp="2026-01-29 09:20:40 +0000 UTC" firstStartedPulling="2026-01-29 09:20:42.196727385 +0000 UTC m=+9426.058520454" lastFinishedPulling="2026-01-29 09:20:45.762699837 +0000 UTC m=+9429.624492896" observedRunningTime="2026-01-29 09:20:46.262463903 +0000 UTC m=+9430.124256982" watchObservedRunningTime="2026-01-29 09:20:46.264043365 +0000 UTC m=+9430.125836434" Jan 29 09:20:48 crc kubenswrapper[4826]: I0129 09:20:48.195886 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dvrpt" Jan 29 09:20:48 crc kubenswrapper[4826]: I0129 09:20:48.196233 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dvrpt" Jan 29 09:20:48 crc kubenswrapper[4826]: I0129 09:20:48.245655 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dvrpt" Jan 29 09:20:48 crc kubenswrapper[4826]: I0129 09:20:48.305773 4826 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dvrpt" Jan 29 09:20:48 crc kubenswrapper[4826]: I0129 09:20:48.846606 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dvrpt"] Jan 29 09:20:50 crc kubenswrapper[4826]: I0129 09:20:50.279246 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dvrpt" podUID="73516359-d048-4197-8561-fa436555045e" containerName="registry-server" containerID="cri-o://8f4c9eff10669100f9fc29880ab5575e2e39d0d88551d91b624d54d206e751ff" gracePeriod=2 Jan 29 09:20:50 crc kubenswrapper[4826]: I0129 09:20:50.588906 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fjvml" Jan 29 09:20:50 crc kubenswrapper[4826]: I0129 09:20:50.589343 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fjvml" Jan 29 09:20:50 crc kubenswrapper[4826]: I0129 09:20:50.714572 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fjvml" Jan 29 09:20:50 crc kubenswrapper[4826]: I0129 09:20:50.796996 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dvrpt" Jan 29 09:20:50 crc kubenswrapper[4826]: I0129 09:20:50.856652 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73516359-d048-4197-8561-fa436555045e-utilities\") pod \"73516359-d048-4197-8561-fa436555045e\" (UID: \"73516359-d048-4197-8561-fa436555045e\") " Jan 29 09:20:50 crc kubenswrapper[4826]: I0129 09:20:50.856787 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73516359-d048-4197-8561-fa436555045e-catalog-content\") pod \"73516359-d048-4197-8561-fa436555045e\" (UID: \"73516359-d048-4197-8561-fa436555045e\") " Jan 29 09:20:50 crc kubenswrapper[4826]: I0129 09:20:50.857018 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6vxb\" (UniqueName: \"kubernetes.io/projected/73516359-d048-4197-8561-fa436555045e-kube-api-access-z6vxb\") pod \"73516359-d048-4197-8561-fa436555045e\" (UID: \"73516359-d048-4197-8561-fa436555045e\") " Jan 29 09:20:50 crc kubenswrapper[4826]: I0129 09:20:50.857560 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73516359-d048-4197-8561-fa436555045e-utilities" (OuterVolumeSpecName: "utilities") pod "73516359-d048-4197-8561-fa436555045e" (UID: "73516359-d048-4197-8561-fa436555045e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:20:50 crc kubenswrapper[4826]: I0129 09:20:50.867050 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73516359-d048-4197-8561-fa436555045e-kube-api-access-z6vxb" (OuterVolumeSpecName: "kube-api-access-z6vxb") pod "73516359-d048-4197-8561-fa436555045e" (UID: "73516359-d048-4197-8561-fa436555045e"). InnerVolumeSpecName "kube-api-access-z6vxb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:20:50 crc kubenswrapper[4826]: I0129 09:20:50.909002 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73516359-d048-4197-8561-fa436555045e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "73516359-d048-4197-8561-fa436555045e" (UID: "73516359-d048-4197-8561-fa436555045e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:20:50 crc kubenswrapper[4826]: I0129 09:20:50.960239 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6vxb\" (UniqueName: \"kubernetes.io/projected/73516359-d048-4197-8561-fa436555045e-kube-api-access-z6vxb\") on node \"crc\" DevicePath \"\"" Jan 29 09:20:50 crc kubenswrapper[4826]: I0129 09:20:50.960289 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73516359-d048-4197-8561-fa436555045e-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:20:50 crc kubenswrapper[4826]: I0129 09:20:50.960379 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73516359-d048-4197-8561-fa436555045e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:20:51 crc kubenswrapper[4826]: I0129 09:20:51.292364 4826 generic.go:334] "Generic (PLEG): container finished" podID="73516359-d048-4197-8561-fa436555045e" containerID="8f4c9eff10669100f9fc29880ab5575e2e39d0d88551d91b624d54d206e751ff" exitCode=0 Jan 29 09:20:51 crc kubenswrapper[4826]: I0129 09:20:51.292406 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dvrpt" event={"ID":"73516359-d048-4197-8561-fa436555045e","Type":"ContainerDied","Data":"8f4c9eff10669100f9fc29880ab5575e2e39d0d88551d91b624d54d206e751ff"} Jan 29 09:20:51 crc kubenswrapper[4826]: I0129 09:20:51.292451 4826 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-dvrpt" event={"ID":"73516359-d048-4197-8561-fa436555045e","Type":"ContainerDied","Data":"88f960cd875f7343ccb58151b508696b9b2b53a961f883309c03e739faafd8c1"} Jan 29 09:20:51 crc kubenswrapper[4826]: I0129 09:20:51.292472 4826 scope.go:117] "RemoveContainer" containerID="8f4c9eff10669100f9fc29880ab5575e2e39d0d88551d91b624d54d206e751ff" Jan 29 09:20:51 crc kubenswrapper[4826]: I0129 09:20:51.292493 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dvrpt" Jan 29 09:20:51 crc kubenswrapper[4826]: I0129 09:20:51.328546 4826 scope.go:117] "RemoveContainer" containerID="da37e42fd532332ddb12a4b3c9996ea345348675f7ac70768dff88ede7fe060e" Jan 29 09:20:51 crc kubenswrapper[4826]: I0129 09:20:51.340101 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dvrpt"] Jan 29 09:20:51 crc kubenswrapper[4826]: I0129 09:20:51.347742 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fjvml" Jan 29 09:20:51 crc kubenswrapper[4826]: I0129 09:20:51.350008 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dvrpt"] Jan 29 09:20:51 crc kubenswrapper[4826]: I0129 09:20:51.352842 4826 scope.go:117] "RemoveContainer" containerID="dbac2d0a58f4d4d05b826fa6f966cfdb1197203a4c081caf117408ebd79ddfba" Jan 29 09:20:51 crc kubenswrapper[4826]: I0129 09:20:51.413276 4826 scope.go:117] "RemoveContainer" containerID="8f4c9eff10669100f9fc29880ab5575e2e39d0d88551d91b624d54d206e751ff" Jan 29 09:20:51 crc kubenswrapper[4826]: E0129 09:20:51.414076 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f4c9eff10669100f9fc29880ab5575e2e39d0d88551d91b624d54d206e751ff\": container with ID starting with 
8f4c9eff10669100f9fc29880ab5575e2e39d0d88551d91b624d54d206e751ff not found: ID does not exist" containerID="8f4c9eff10669100f9fc29880ab5575e2e39d0d88551d91b624d54d206e751ff" Jan 29 09:20:51 crc kubenswrapper[4826]: I0129 09:20:51.414129 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f4c9eff10669100f9fc29880ab5575e2e39d0d88551d91b624d54d206e751ff"} err="failed to get container status \"8f4c9eff10669100f9fc29880ab5575e2e39d0d88551d91b624d54d206e751ff\": rpc error: code = NotFound desc = could not find container \"8f4c9eff10669100f9fc29880ab5575e2e39d0d88551d91b624d54d206e751ff\": container with ID starting with 8f4c9eff10669100f9fc29880ab5575e2e39d0d88551d91b624d54d206e751ff not found: ID does not exist" Jan 29 09:20:51 crc kubenswrapper[4826]: I0129 09:20:51.414160 4826 scope.go:117] "RemoveContainer" containerID="da37e42fd532332ddb12a4b3c9996ea345348675f7ac70768dff88ede7fe060e" Jan 29 09:20:51 crc kubenswrapper[4826]: E0129 09:20:51.415394 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da37e42fd532332ddb12a4b3c9996ea345348675f7ac70768dff88ede7fe060e\": container with ID starting with da37e42fd532332ddb12a4b3c9996ea345348675f7ac70768dff88ede7fe060e not found: ID does not exist" containerID="da37e42fd532332ddb12a4b3c9996ea345348675f7ac70768dff88ede7fe060e" Jan 29 09:20:51 crc kubenswrapper[4826]: I0129 09:20:51.415433 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da37e42fd532332ddb12a4b3c9996ea345348675f7ac70768dff88ede7fe060e"} err="failed to get container status \"da37e42fd532332ddb12a4b3c9996ea345348675f7ac70768dff88ede7fe060e\": rpc error: code = NotFound desc = could not find container \"da37e42fd532332ddb12a4b3c9996ea345348675f7ac70768dff88ede7fe060e\": container with ID starting with da37e42fd532332ddb12a4b3c9996ea345348675f7ac70768dff88ede7fe060e not found: ID does not 
exist" Jan 29 09:20:51 crc kubenswrapper[4826]: I0129 09:20:51.415474 4826 scope.go:117] "RemoveContainer" containerID="dbac2d0a58f4d4d05b826fa6f966cfdb1197203a4c081caf117408ebd79ddfba" Jan 29 09:20:51 crc kubenswrapper[4826]: E0129 09:20:51.416049 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbac2d0a58f4d4d05b826fa6f966cfdb1197203a4c081caf117408ebd79ddfba\": container with ID starting with dbac2d0a58f4d4d05b826fa6f966cfdb1197203a4c081caf117408ebd79ddfba not found: ID does not exist" containerID="dbac2d0a58f4d4d05b826fa6f966cfdb1197203a4c081caf117408ebd79ddfba" Jan 29 09:20:51 crc kubenswrapper[4826]: I0129 09:20:51.416095 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbac2d0a58f4d4d05b826fa6f966cfdb1197203a4c081caf117408ebd79ddfba"} err="failed to get container status \"dbac2d0a58f4d4d05b826fa6f966cfdb1197203a4c081caf117408ebd79ddfba\": rpc error: code = NotFound desc = could not find container \"dbac2d0a58f4d4d05b826fa6f966cfdb1197203a4c081caf117408ebd79ddfba\": container with ID starting with dbac2d0a58f4d4d05b826fa6f966cfdb1197203a4c081caf117408ebd79ddfba not found: ID does not exist" Jan 29 09:20:52 crc kubenswrapper[4826]: I0129 09:20:52.822613 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73516359-d048-4197-8561-fa436555045e" path="/var/lib/kubelet/pods/73516359-d048-4197-8561-fa436555045e/volumes" Jan 29 09:20:53 crc kubenswrapper[4826]: I0129 09:20:53.046661 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fjvml"] Jan 29 09:20:53 crc kubenswrapper[4826]: I0129 09:20:53.310199 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fjvml" podUID="4cd53002-0ad0-403a-9651-8721309cd602" containerName="registry-server" 
containerID="cri-o://d1bdb1e2290f455b2ff2afb3ec64088b4f3136ffd865b315946fde41946ade47" gracePeriod=2 Jan 29 09:20:54 crc kubenswrapper[4826]: I0129 09:20:54.322646 4826 generic.go:334] "Generic (PLEG): container finished" podID="4cd53002-0ad0-403a-9651-8721309cd602" containerID="d1bdb1e2290f455b2ff2afb3ec64088b4f3136ffd865b315946fde41946ade47" exitCode=0 Jan 29 09:20:54 crc kubenswrapper[4826]: I0129 09:20:54.322693 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fjvml" event={"ID":"4cd53002-0ad0-403a-9651-8721309cd602","Type":"ContainerDied","Data":"d1bdb1e2290f455b2ff2afb3ec64088b4f3136ffd865b315946fde41946ade47"} Jan 29 09:20:54 crc kubenswrapper[4826]: I0129 09:20:54.516724 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fjvml" Jan 29 09:20:54 crc kubenswrapper[4826]: I0129 09:20:54.636162 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5z7z\" (UniqueName: \"kubernetes.io/projected/4cd53002-0ad0-403a-9651-8721309cd602-kube-api-access-j5z7z\") pod \"4cd53002-0ad0-403a-9651-8721309cd602\" (UID: \"4cd53002-0ad0-403a-9651-8721309cd602\") " Jan 29 09:20:54 crc kubenswrapper[4826]: I0129 09:20:54.636537 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cd53002-0ad0-403a-9651-8721309cd602-utilities\") pod \"4cd53002-0ad0-403a-9651-8721309cd602\" (UID: \"4cd53002-0ad0-403a-9651-8721309cd602\") " Jan 29 09:20:54 crc kubenswrapper[4826]: I0129 09:20:54.636677 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cd53002-0ad0-403a-9651-8721309cd602-catalog-content\") pod \"4cd53002-0ad0-403a-9651-8721309cd602\" (UID: \"4cd53002-0ad0-403a-9651-8721309cd602\") " Jan 29 09:20:54 crc kubenswrapper[4826]: I0129 
09:20:54.637369 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cd53002-0ad0-403a-9651-8721309cd602-utilities" (OuterVolumeSpecName: "utilities") pod "4cd53002-0ad0-403a-9651-8721309cd602" (UID: "4cd53002-0ad0-403a-9651-8721309cd602"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:20:54 crc kubenswrapper[4826]: I0129 09:20:54.637775 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cd53002-0ad0-403a-9651-8721309cd602-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:20:54 crc kubenswrapper[4826]: I0129 09:20:54.644076 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cd53002-0ad0-403a-9651-8721309cd602-kube-api-access-j5z7z" (OuterVolumeSpecName: "kube-api-access-j5z7z") pod "4cd53002-0ad0-403a-9651-8721309cd602" (UID: "4cd53002-0ad0-403a-9651-8721309cd602"). InnerVolumeSpecName "kube-api-access-j5z7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:20:54 crc kubenswrapper[4826]: I0129 09:20:54.720092 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cd53002-0ad0-403a-9651-8721309cd602-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4cd53002-0ad0-403a-9651-8721309cd602" (UID: "4cd53002-0ad0-403a-9651-8721309cd602"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:20:54 crc kubenswrapper[4826]: I0129 09:20:54.739973 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5z7z\" (UniqueName: \"kubernetes.io/projected/4cd53002-0ad0-403a-9651-8721309cd602-kube-api-access-j5z7z\") on node \"crc\" DevicePath \"\"" Jan 29 09:20:54 crc kubenswrapper[4826]: I0129 09:20:54.740011 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cd53002-0ad0-403a-9651-8721309cd602-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:20:55 crc kubenswrapper[4826]: I0129 09:20:55.334819 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fjvml" event={"ID":"4cd53002-0ad0-403a-9651-8721309cd602","Type":"ContainerDied","Data":"c633c4628c8364cf72424c64164e9ee1d7792dad3e1d2ee44c4ace219b1c80db"} Jan 29 09:20:55 crc kubenswrapper[4826]: I0129 09:20:55.334871 4826 scope.go:117] "RemoveContainer" containerID="d1bdb1e2290f455b2ff2afb3ec64088b4f3136ffd865b315946fde41946ade47" Jan 29 09:20:55 crc kubenswrapper[4826]: I0129 09:20:55.334881 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fjvml" Jan 29 09:20:55 crc kubenswrapper[4826]: I0129 09:20:55.357933 4826 scope.go:117] "RemoveContainer" containerID="2e0e20f0a1917239401e11a4dc54ec95f8f06857e70c52ea1ac34fb948c49db8" Jan 29 09:20:55 crc kubenswrapper[4826]: I0129 09:20:55.370105 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fjvml"] Jan 29 09:20:55 crc kubenswrapper[4826]: I0129 09:20:55.378622 4826 scope.go:117] "RemoveContainer" containerID="3fac5144e4cbc93159157798a83724eb58ee2e29dd4176a80b7c07becf8a472c" Jan 29 09:20:55 crc kubenswrapper[4826]: I0129 09:20:55.382892 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fjvml"] Jan 29 09:20:56 crc kubenswrapper[4826]: I0129 09:20:56.819185 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cd53002-0ad0-403a-9651-8721309cd602" path="/var/lib/kubelet/pods/4cd53002-0ad0-403a-9651-8721309cd602/volumes" Jan 29 09:20:59 crc kubenswrapper[4826]: I0129 09:20:59.809003 4826 scope.go:117] "RemoveContainer" containerID="b1925ae332138348b1564740311cdf70107a953d6f523c1ed54020f639f51a42" Jan 29 09:20:59 crc kubenswrapper[4826]: E0129 09:20:59.809996 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:21:10 crc kubenswrapper[4826]: I0129 09:21:10.809697 4826 scope.go:117] "RemoveContainer" containerID="b1925ae332138348b1564740311cdf70107a953d6f523c1ed54020f639f51a42" Jan 29 09:21:10 crc kubenswrapper[4826]: E0129 09:21:10.810608 4826 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:21:24 crc kubenswrapper[4826]: I0129 09:21:24.809861 4826 scope.go:117] "RemoveContainer" containerID="b1925ae332138348b1564740311cdf70107a953d6f523c1ed54020f639f51a42" Jan 29 09:21:24 crc kubenswrapper[4826]: E0129 09:21:24.810643 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:21:35 crc kubenswrapper[4826]: I0129 09:21:35.687078 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-847sl"] Jan 29 09:21:35 crc kubenswrapper[4826]: E0129 09:21:35.688146 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73516359-d048-4197-8561-fa436555045e" containerName="extract-content" Jan 29 09:21:35 crc kubenswrapper[4826]: I0129 09:21:35.688163 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="73516359-d048-4197-8561-fa436555045e" containerName="extract-content" Jan 29 09:21:35 crc kubenswrapper[4826]: E0129 09:21:35.688189 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cd53002-0ad0-403a-9651-8721309cd602" containerName="extract-utilities" Jan 29 09:21:35 crc kubenswrapper[4826]: I0129 09:21:35.688198 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cd53002-0ad0-403a-9651-8721309cd602" 
containerName="extract-utilities" Jan 29 09:21:35 crc kubenswrapper[4826]: E0129 09:21:35.688210 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cd53002-0ad0-403a-9651-8721309cd602" containerName="extract-content" Jan 29 09:21:35 crc kubenswrapper[4826]: I0129 09:21:35.688220 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cd53002-0ad0-403a-9651-8721309cd602" containerName="extract-content" Jan 29 09:21:35 crc kubenswrapper[4826]: E0129 09:21:35.688239 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73516359-d048-4197-8561-fa436555045e" containerName="extract-utilities" Jan 29 09:21:35 crc kubenswrapper[4826]: I0129 09:21:35.688248 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="73516359-d048-4197-8561-fa436555045e" containerName="extract-utilities" Jan 29 09:21:35 crc kubenswrapper[4826]: E0129 09:21:35.688259 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73516359-d048-4197-8561-fa436555045e" containerName="registry-server" Jan 29 09:21:35 crc kubenswrapper[4826]: I0129 09:21:35.688267 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="73516359-d048-4197-8561-fa436555045e" containerName="registry-server" Jan 29 09:21:35 crc kubenswrapper[4826]: E0129 09:21:35.688283 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cd53002-0ad0-403a-9651-8721309cd602" containerName="registry-server" Jan 29 09:21:35 crc kubenswrapper[4826]: I0129 09:21:35.688291 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cd53002-0ad0-403a-9651-8721309cd602" containerName="registry-server" Jan 29 09:21:35 crc kubenswrapper[4826]: I0129 09:21:35.688583 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="73516359-d048-4197-8561-fa436555045e" containerName="registry-server" Jan 29 09:21:35 crc kubenswrapper[4826]: I0129 09:21:35.688599 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cd53002-0ad0-403a-9651-8721309cd602" 
containerName="registry-server" Jan 29 09:21:35 crc kubenswrapper[4826]: I0129 09:21:35.690540 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-847sl" Jan 29 09:21:35 crc kubenswrapper[4826]: I0129 09:21:35.710231 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-847sl"] Jan 29 09:21:35 crc kubenswrapper[4826]: I0129 09:21:35.739490 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45925902-ebc7-4aa8-8997-6046f16d968b-utilities\") pod \"certified-operators-847sl\" (UID: \"45925902-ebc7-4aa8-8997-6046f16d968b\") " pod="openshift-marketplace/certified-operators-847sl" Jan 29 09:21:35 crc kubenswrapper[4826]: I0129 09:21:35.739629 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45925902-ebc7-4aa8-8997-6046f16d968b-catalog-content\") pod \"certified-operators-847sl\" (UID: \"45925902-ebc7-4aa8-8997-6046f16d968b\") " pod="openshift-marketplace/certified-operators-847sl" Jan 29 09:21:35 crc kubenswrapper[4826]: I0129 09:21:35.739734 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx2sf\" (UniqueName: \"kubernetes.io/projected/45925902-ebc7-4aa8-8997-6046f16d968b-kube-api-access-zx2sf\") pod \"certified-operators-847sl\" (UID: \"45925902-ebc7-4aa8-8997-6046f16d968b\") " pod="openshift-marketplace/certified-operators-847sl" Jan 29 09:21:35 crc kubenswrapper[4826]: I0129 09:21:35.841404 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45925902-ebc7-4aa8-8997-6046f16d968b-catalog-content\") pod \"certified-operators-847sl\" (UID: \"45925902-ebc7-4aa8-8997-6046f16d968b\") " 
pod="openshift-marketplace/certified-operators-847sl" Jan 29 09:21:35 crc kubenswrapper[4826]: I0129 09:21:35.841557 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx2sf\" (UniqueName: \"kubernetes.io/projected/45925902-ebc7-4aa8-8997-6046f16d968b-kube-api-access-zx2sf\") pod \"certified-operators-847sl\" (UID: \"45925902-ebc7-4aa8-8997-6046f16d968b\") " pod="openshift-marketplace/certified-operators-847sl" Jan 29 09:21:35 crc kubenswrapper[4826]: I0129 09:21:35.841657 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45925902-ebc7-4aa8-8997-6046f16d968b-utilities\") pod \"certified-operators-847sl\" (UID: \"45925902-ebc7-4aa8-8997-6046f16d968b\") " pod="openshift-marketplace/certified-operators-847sl" Jan 29 09:21:35 crc kubenswrapper[4826]: I0129 09:21:35.842378 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45925902-ebc7-4aa8-8997-6046f16d968b-catalog-content\") pod \"certified-operators-847sl\" (UID: \"45925902-ebc7-4aa8-8997-6046f16d968b\") " pod="openshift-marketplace/certified-operators-847sl" Jan 29 09:21:35 crc kubenswrapper[4826]: I0129 09:21:35.842527 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45925902-ebc7-4aa8-8997-6046f16d968b-utilities\") pod \"certified-operators-847sl\" (UID: \"45925902-ebc7-4aa8-8997-6046f16d968b\") " pod="openshift-marketplace/certified-operators-847sl" Jan 29 09:21:35 crc kubenswrapper[4826]: I0129 09:21:35.861830 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx2sf\" (UniqueName: \"kubernetes.io/projected/45925902-ebc7-4aa8-8997-6046f16d968b-kube-api-access-zx2sf\") pod \"certified-operators-847sl\" (UID: \"45925902-ebc7-4aa8-8997-6046f16d968b\") " 
pod="openshift-marketplace/certified-operators-847sl" Jan 29 09:21:36 crc kubenswrapper[4826]: I0129 09:21:36.011065 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-847sl" Jan 29 09:21:36 crc kubenswrapper[4826]: I0129 09:21:36.635989 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-847sl"] Jan 29 09:21:36 crc kubenswrapper[4826]: I0129 09:21:36.775029 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-847sl" event={"ID":"45925902-ebc7-4aa8-8997-6046f16d968b","Type":"ContainerStarted","Data":"d9ec51650ec5a67a0c4d2f14b8466fa4a487c58f156a702c00e82f741f7fc13b"} Jan 29 09:21:36 crc kubenswrapper[4826]: I0129 09:21:36.810727 4826 scope.go:117] "RemoveContainer" containerID="b1925ae332138348b1564740311cdf70107a953d6f523c1ed54020f639f51a42" Jan 29 09:21:36 crc kubenswrapper[4826]: E0129 09:21:36.812515 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:21:37 crc kubenswrapper[4826]: I0129 09:21:37.786529 4826 generic.go:334] "Generic (PLEG): container finished" podID="45925902-ebc7-4aa8-8997-6046f16d968b" containerID="f98d168473f0c6d9119cc537a9bdfde151beb2362d9ce1628d6d9e419ee78d0f" exitCode=0 Jan 29 09:21:37 crc kubenswrapper[4826]: I0129 09:21:37.786592 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-847sl" event={"ID":"45925902-ebc7-4aa8-8997-6046f16d968b","Type":"ContainerDied","Data":"f98d168473f0c6d9119cc537a9bdfde151beb2362d9ce1628d6d9e419ee78d0f"} Jan 29 
09:21:38 crc kubenswrapper[4826]: I0129 09:21:38.796609 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-847sl" event={"ID":"45925902-ebc7-4aa8-8997-6046f16d968b","Type":"ContainerStarted","Data":"48288a86b952ad01ff9088843cffbb54706fef39a95566ea71d15b7a09763340"} Jan 29 09:21:40 crc kubenswrapper[4826]: I0129 09:21:40.820724 4826 generic.go:334] "Generic (PLEG): container finished" podID="45925902-ebc7-4aa8-8997-6046f16d968b" containerID="48288a86b952ad01ff9088843cffbb54706fef39a95566ea71d15b7a09763340" exitCode=0 Jan 29 09:21:40 crc kubenswrapper[4826]: I0129 09:21:40.821208 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-847sl" event={"ID":"45925902-ebc7-4aa8-8997-6046f16d968b","Type":"ContainerDied","Data":"48288a86b952ad01ff9088843cffbb54706fef39a95566ea71d15b7a09763340"} Jan 29 09:21:41 crc kubenswrapper[4826]: I0129 09:21:41.832067 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-847sl" event={"ID":"45925902-ebc7-4aa8-8997-6046f16d968b","Type":"ContainerStarted","Data":"984a1d5132bdb8813faf0096fae24d219a0fde020c48ae9159ba891b37210d91"} Jan 29 09:21:41 crc kubenswrapper[4826]: I0129 09:21:41.853356 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-847sl" podStartSLOduration=3.406830448 podStartE2EDuration="6.853339248s" podCreationTimestamp="2026-01-29 09:21:35 +0000 UTC" firstStartedPulling="2026-01-29 09:21:37.789437739 +0000 UTC m=+9481.651230808" lastFinishedPulling="2026-01-29 09:21:41.235946539 +0000 UTC m=+9485.097739608" observedRunningTime="2026-01-29 09:21:41.850478332 +0000 UTC m=+9485.712271411" watchObservedRunningTime="2026-01-29 09:21:41.853339248 +0000 UTC m=+9485.715132317" Jan 29 09:21:46 crc kubenswrapper[4826]: I0129 09:21:46.011797 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-847sl" Jan 29 09:21:46 crc kubenswrapper[4826]: I0129 09:21:46.012445 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-847sl" Jan 29 09:21:46 crc kubenswrapper[4826]: I0129 09:21:46.067750 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-847sl" Jan 29 09:21:46 crc kubenswrapper[4826]: I0129 09:21:46.948391 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-847sl" Jan 29 09:21:47 crc kubenswrapper[4826]: I0129 09:21:47.014495 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-847sl"] Jan 29 09:21:47 crc kubenswrapper[4826]: I0129 09:21:47.809280 4826 scope.go:117] "RemoveContainer" containerID="b1925ae332138348b1564740311cdf70107a953d6f523c1ed54020f639f51a42" Jan 29 09:21:47 crc kubenswrapper[4826]: E0129 09:21:47.809726 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:21:48 crc kubenswrapper[4826]: I0129 09:21:48.907482 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-847sl" podUID="45925902-ebc7-4aa8-8997-6046f16d968b" containerName="registry-server" containerID="cri-o://984a1d5132bdb8813faf0096fae24d219a0fde020c48ae9159ba891b37210d91" gracePeriod=2 Jan 29 09:21:49 crc kubenswrapper[4826]: I0129 09:21:49.439531 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-847sl" Jan 29 09:21:49 crc kubenswrapper[4826]: I0129 09:21:49.573950 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zx2sf\" (UniqueName: \"kubernetes.io/projected/45925902-ebc7-4aa8-8997-6046f16d968b-kube-api-access-zx2sf\") pod \"45925902-ebc7-4aa8-8997-6046f16d968b\" (UID: \"45925902-ebc7-4aa8-8997-6046f16d968b\") " Jan 29 09:21:49 crc kubenswrapper[4826]: I0129 09:21:49.574113 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45925902-ebc7-4aa8-8997-6046f16d968b-utilities\") pod \"45925902-ebc7-4aa8-8997-6046f16d968b\" (UID: \"45925902-ebc7-4aa8-8997-6046f16d968b\") " Jan 29 09:21:49 crc kubenswrapper[4826]: I0129 09:21:49.574202 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45925902-ebc7-4aa8-8997-6046f16d968b-catalog-content\") pod \"45925902-ebc7-4aa8-8997-6046f16d968b\" (UID: \"45925902-ebc7-4aa8-8997-6046f16d968b\") " Jan 29 09:21:49 crc kubenswrapper[4826]: I0129 09:21:49.575518 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45925902-ebc7-4aa8-8997-6046f16d968b-utilities" (OuterVolumeSpecName: "utilities") pod "45925902-ebc7-4aa8-8997-6046f16d968b" (UID: "45925902-ebc7-4aa8-8997-6046f16d968b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:21:49 crc kubenswrapper[4826]: I0129 09:21:49.582958 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45925902-ebc7-4aa8-8997-6046f16d968b-kube-api-access-zx2sf" (OuterVolumeSpecName: "kube-api-access-zx2sf") pod "45925902-ebc7-4aa8-8997-6046f16d968b" (UID: "45925902-ebc7-4aa8-8997-6046f16d968b"). InnerVolumeSpecName "kube-api-access-zx2sf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:21:49 crc kubenswrapper[4826]: I0129 09:21:49.636502 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45925902-ebc7-4aa8-8997-6046f16d968b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45925902-ebc7-4aa8-8997-6046f16d968b" (UID: "45925902-ebc7-4aa8-8997-6046f16d968b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:21:49 crc kubenswrapper[4826]: I0129 09:21:49.677622 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45925902-ebc7-4aa8-8997-6046f16d968b-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:21:49 crc kubenswrapper[4826]: I0129 09:21:49.677672 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45925902-ebc7-4aa8-8997-6046f16d968b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:21:49 crc kubenswrapper[4826]: I0129 09:21:49.677721 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zx2sf\" (UniqueName: \"kubernetes.io/projected/45925902-ebc7-4aa8-8997-6046f16d968b-kube-api-access-zx2sf\") on node \"crc\" DevicePath \"\"" Jan 29 09:21:49 crc kubenswrapper[4826]: I0129 09:21:49.921252 4826 generic.go:334] "Generic (PLEG): container finished" podID="45925902-ebc7-4aa8-8997-6046f16d968b" containerID="984a1d5132bdb8813faf0096fae24d219a0fde020c48ae9159ba891b37210d91" exitCode=0 Jan 29 09:21:49 crc kubenswrapper[4826]: I0129 09:21:49.921669 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-847sl" event={"ID":"45925902-ebc7-4aa8-8997-6046f16d968b","Type":"ContainerDied","Data":"984a1d5132bdb8813faf0096fae24d219a0fde020c48ae9159ba891b37210d91"} Jan 29 09:21:49 crc kubenswrapper[4826]: I0129 09:21:49.921698 4826 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-847sl" event={"ID":"45925902-ebc7-4aa8-8997-6046f16d968b","Type":"ContainerDied","Data":"d9ec51650ec5a67a0c4d2f14b8466fa4a487c58f156a702c00e82f741f7fc13b"} Jan 29 09:21:49 crc kubenswrapper[4826]: I0129 09:21:49.921714 4826 scope.go:117] "RemoveContainer" containerID="984a1d5132bdb8813faf0096fae24d219a0fde020c48ae9159ba891b37210d91" Jan 29 09:21:49 crc kubenswrapper[4826]: I0129 09:21:49.921873 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-847sl" Jan 29 09:21:49 crc kubenswrapper[4826]: I0129 09:21:49.952562 4826 scope.go:117] "RemoveContainer" containerID="48288a86b952ad01ff9088843cffbb54706fef39a95566ea71d15b7a09763340" Jan 29 09:21:49 crc kubenswrapper[4826]: I0129 09:21:49.961839 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-847sl"] Jan 29 09:21:49 crc kubenswrapper[4826]: I0129 09:21:49.971115 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-847sl"] Jan 29 09:21:49 crc kubenswrapper[4826]: I0129 09:21:49.982060 4826 scope.go:117] "RemoveContainer" containerID="f98d168473f0c6d9119cc537a9bdfde151beb2362d9ce1628d6d9e419ee78d0f" Jan 29 09:21:50 crc kubenswrapper[4826]: I0129 09:21:50.035660 4826 scope.go:117] "RemoveContainer" containerID="984a1d5132bdb8813faf0096fae24d219a0fde020c48ae9159ba891b37210d91" Jan 29 09:21:50 crc kubenswrapper[4826]: E0129 09:21:50.037082 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"984a1d5132bdb8813faf0096fae24d219a0fde020c48ae9159ba891b37210d91\": container with ID starting with 984a1d5132bdb8813faf0096fae24d219a0fde020c48ae9159ba891b37210d91 not found: ID does not exist" containerID="984a1d5132bdb8813faf0096fae24d219a0fde020c48ae9159ba891b37210d91" Jan 29 09:21:50 crc kubenswrapper[4826]: I0129 
09:21:50.037250 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"984a1d5132bdb8813faf0096fae24d219a0fde020c48ae9159ba891b37210d91"} err="failed to get container status \"984a1d5132bdb8813faf0096fae24d219a0fde020c48ae9159ba891b37210d91\": rpc error: code = NotFound desc = could not find container \"984a1d5132bdb8813faf0096fae24d219a0fde020c48ae9159ba891b37210d91\": container with ID starting with 984a1d5132bdb8813faf0096fae24d219a0fde020c48ae9159ba891b37210d91 not found: ID does not exist" Jan 29 09:21:50 crc kubenswrapper[4826]: I0129 09:21:50.037462 4826 scope.go:117] "RemoveContainer" containerID="48288a86b952ad01ff9088843cffbb54706fef39a95566ea71d15b7a09763340" Jan 29 09:21:50 crc kubenswrapper[4826]: E0129 09:21:50.038133 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48288a86b952ad01ff9088843cffbb54706fef39a95566ea71d15b7a09763340\": container with ID starting with 48288a86b952ad01ff9088843cffbb54706fef39a95566ea71d15b7a09763340 not found: ID does not exist" containerID="48288a86b952ad01ff9088843cffbb54706fef39a95566ea71d15b7a09763340" Jan 29 09:21:50 crc kubenswrapper[4826]: I0129 09:21:50.038198 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48288a86b952ad01ff9088843cffbb54706fef39a95566ea71d15b7a09763340"} err="failed to get container status \"48288a86b952ad01ff9088843cffbb54706fef39a95566ea71d15b7a09763340\": rpc error: code = NotFound desc = could not find container \"48288a86b952ad01ff9088843cffbb54706fef39a95566ea71d15b7a09763340\": container with ID starting with 48288a86b952ad01ff9088843cffbb54706fef39a95566ea71d15b7a09763340 not found: ID does not exist" Jan 29 09:21:50 crc kubenswrapper[4826]: I0129 09:21:50.038243 4826 scope.go:117] "RemoveContainer" containerID="f98d168473f0c6d9119cc537a9bdfde151beb2362d9ce1628d6d9e419ee78d0f" Jan 29 09:21:50 crc 
kubenswrapper[4826]: E0129 09:21:50.038631 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f98d168473f0c6d9119cc537a9bdfde151beb2362d9ce1628d6d9e419ee78d0f\": container with ID starting with f98d168473f0c6d9119cc537a9bdfde151beb2362d9ce1628d6d9e419ee78d0f not found: ID does not exist" containerID="f98d168473f0c6d9119cc537a9bdfde151beb2362d9ce1628d6d9e419ee78d0f" Jan 29 09:21:50 crc kubenswrapper[4826]: I0129 09:21:50.038679 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f98d168473f0c6d9119cc537a9bdfde151beb2362d9ce1628d6d9e419ee78d0f"} err="failed to get container status \"f98d168473f0c6d9119cc537a9bdfde151beb2362d9ce1628d6d9e419ee78d0f\": rpc error: code = NotFound desc = could not find container \"f98d168473f0c6d9119cc537a9bdfde151beb2362d9ce1628d6d9e419ee78d0f\": container with ID starting with f98d168473f0c6d9119cc537a9bdfde151beb2362d9ce1628d6d9e419ee78d0f not found: ID does not exist" Jan 29 09:21:50 crc kubenswrapper[4826]: I0129 09:21:50.826777 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45925902-ebc7-4aa8-8997-6046f16d968b" path="/var/lib/kubelet/pods/45925902-ebc7-4aa8-8997-6046f16d968b/volumes" Jan 29 09:22:01 crc kubenswrapper[4826]: I0129 09:22:01.809511 4826 scope.go:117] "RemoveContainer" containerID="b1925ae332138348b1564740311cdf70107a953d6f523c1ed54020f639f51a42" Jan 29 09:22:01 crc kubenswrapper[4826]: E0129 09:22:01.810401 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:22:13 crc 
kubenswrapper[4826]: I0129 09:22:13.808538 4826 scope.go:117] "RemoveContainer" containerID="b1925ae332138348b1564740311cdf70107a953d6f523c1ed54020f639f51a42" Jan 29 09:22:13 crc kubenswrapper[4826]: E0129 09:22:13.809492 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:22:28 crc kubenswrapper[4826]: I0129 09:22:28.809539 4826 scope.go:117] "RemoveContainer" containerID="b1925ae332138348b1564740311cdf70107a953d6f523c1ed54020f639f51a42" Jan 29 09:22:28 crc kubenswrapper[4826]: E0129 09:22:28.810510 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:22:43 crc kubenswrapper[4826]: I0129 09:22:43.809049 4826 scope.go:117] "RemoveContainer" containerID="b1925ae332138348b1564740311cdf70107a953d6f523c1ed54020f639f51a42" Jan 29 09:22:43 crc kubenswrapper[4826]: E0129 09:22:43.810039 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 
29 09:22:58 crc kubenswrapper[4826]: I0129 09:22:58.808704 4826 scope.go:117] "RemoveContainer" containerID="b1925ae332138348b1564740311cdf70107a953d6f523c1ed54020f639f51a42" Jan 29 09:22:58 crc kubenswrapper[4826]: E0129 09:22:58.809454 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:23:03 crc kubenswrapper[4826]: I0129 09:23:03.655953 4826 generic.go:334] "Generic (PLEG): container finished" podID="0e54d8ec-766b-4711-94e7-08fcfe836c67" containerID="6bae026feb1c754ad80de4807ddba642bfae83d79fc8dedce622f166f50df6ee" exitCode=1 Jan 29 09:23:03 crc kubenswrapper[4826]: I0129 09:23:03.656009 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"0e54d8ec-766b-4711-94e7-08fcfe836c67","Type":"ContainerDied","Data":"6bae026feb1c754ad80de4807ddba642bfae83d79fc8dedce622f166f50df6ee"} Jan 29 09:23:05 crc kubenswrapper[4826]: I0129 09:23:05.087743 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 29 09:23:05 crc kubenswrapper[4826]: I0129 09:23:05.174627 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0e54d8ec-766b-4711-94e7-08fcfe836c67-test-operator-ephemeral-temporary\") pod \"0e54d8ec-766b-4711-94e7-08fcfe836c67\" (UID: \"0e54d8ec-766b-4711-94e7-08fcfe836c67\") " Jan 29 09:23:05 crc kubenswrapper[4826]: I0129 09:23:05.174692 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0e54d8ec-766b-4711-94e7-08fcfe836c67-openstack-config-secret\") pod \"0e54d8ec-766b-4711-94e7-08fcfe836c67\" (UID: \"0e54d8ec-766b-4711-94e7-08fcfe836c67\") " Jan 29 09:23:05 crc kubenswrapper[4826]: I0129 09:23:05.174739 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0e54d8ec-766b-4711-94e7-08fcfe836c67-openstack-config\") pod \"0e54d8ec-766b-4711-94e7-08fcfe836c67\" (UID: \"0e54d8ec-766b-4711-94e7-08fcfe836c67\") " Jan 29 09:23:05 crc kubenswrapper[4826]: I0129 09:23:05.174778 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e54d8ec-766b-4711-94e7-08fcfe836c67-config-data\") pod \"0e54d8ec-766b-4711-94e7-08fcfe836c67\" (UID: \"0e54d8ec-766b-4711-94e7-08fcfe836c67\") " Jan 29 09:23:05 crc kubenswrapper[4826]: I0129 09:23:05.174830 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"0e54d8ec-766b-4711-94e7-08fcfe836c67\" (UID: \"0e54d8ec-766b-4711-94e7-08fcfe836c67\") " Jan 29 09:23:05 crc kubenswrapper[4826]: I0129 09:23:05.174853 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0e54d8ec-766b-4711-94e7-08fcfe836c67-test-operator-ephemeral-workdir\") pod \"0e54d8ec-766b-4711-94e7-08fcfe836c67\" (UID: \"0e54d8ec-766b-4711-94e7-08fcfe836c67\") " Jan 29 09:23:05 crc kubenswrapper[4826]: I0129 09:23:05.174886 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e54d8ec-766b-4711-94e7-08fcfe836c67-ssh-key\") pod \"0e54d8ec-766b-4711-94e7-08fcfe836c67\" (UID: \"0e54d8ec-766b-4711-94e7-08fcfe836c67\") " Jan 29 09:23:05 crc kubenswrapper[4826]: I0129 09:23:05.174978 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0e54d8ec-766b-4711-94e7-08fcfe836c67-ca-certs\") pod \"0e54d8ec-766b-4711-94e7-08fcfe836c67\" (UID: \"0e54d8ec-766b-4711-94e7-08fcfe836c67\") " Jan 29 09:23:05 crc kubenswrapper[4826]: I0129 09:23:05.175063 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzz8b\" (UniqueName: \"kubernetes.io/projected/0e54d8ec-766b-4711-94e7-08fcfe836c67-kube-api-access-qzz8b\") pod \"0e54d8ec-766b-4711-94e7-08fcfe836c67\" (UID: \"0e54d8ec-766b-4711-94e7-08fcfe836c67\") " Jan 29 09:23:05 crc kubenswrapper[4826]: I0129 09:23:05.175803 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e54d8ec-766b-4711-94e7-08fcfe836c67-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "0e54d8ec-766b-4711-94e7-08fcfe836c67" (UID: "0e54d8ec-766b-4711-94e7-08fcfe836c67"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:23:05 crc kubenswrapper[4826]: I0129 09:23:05.175905 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e54d8ec-766b-4711-94e7-08fcfe836c67-config-data" (OuterVolumeSpecName: "config-data") pod "0e54d8ec-766b-4711-94e7-08fcfe836c67" (UID: "0e54d8ec-766b-4711-94e7-08fcfe836c67"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:23:05 crc kubenswrapper[4826]: I0129 09:23:05.183583 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "test-operator-logs") pod "0e54d8ec-766b-4711-94e7-08fcfe836c67" (UID: "0e54d8ec-766b-4711-94e7-08fcfe836c67"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 09:23:05 crc kubenswrapper[4826]: I0129 09:23:05.183625 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e54d8ec-766b-4711-94e7-08fcfe836c67-kube-api-access-qzz8b" (OuterVolumeSpecName: "kube-api-access-qzz8b") pod "0e54d8ec-766b-4711-94e7-08fcfe836c67" (UID: "0e54d8ec-766b-4711-94e7-08fcfe836c67"). InnerVolumeSpecName "kube-api-access-qzz8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:23:05 crc kubenswrapper[4826]: I0129 09:23:05.188333 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e54d8ec-766b-4711-94e7-08fcfe836c67-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "0e54d8ec-766b-4711-94e7-08fcfe836c67" (UID: "0e54d8ec-766b-4711-94e7-08fcfe836c67"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:23:05 crc kubenswrapper[4826]: I0129 09:23:05.210230 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e54d8ec-766b-4711-94e7-08fcfe836c67-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0e54d8ec-766b-4711-94e7-08fcfe836c67" (UID: "0e54d8ec-766b-4711-94e7-08fcfe836c67"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:23:05 crc kubenswrapper[4826]: I0129 09:23:05.226182 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e54d8ec-766b-4711-94e7-08fcfe836c67-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "0e54d8ec-766b-4711-94e7-08fcfe836c67" (UID: "0e54d8ec-766b-4711-94e7-08fcfe836c67"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:23:05 crc kubenswrapper[4826]: I0129 09:23:05.228120 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e54d8ec-766b-4711-94e7-08fcfe836c67-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "0e54d8ec-766b-4711-94e7-08fcfe836c67" (UID: "0e54d8ec-766b-4711-94e7-08fcfe836c67"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:23:05 crc kubenswrapper[4826]: I0129 09:23:05.228654 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e54d8ec-766b-4711-94e7-08fcfe836c67-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "0e54d8ec-766b-4711-94e7-08fcfe836c67" (UID: "0e54d8ec-766b-4711-94e7-08fcfe836c67"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:23:05 crc kubenswrapper[4826]: I0129 09:23:05.277553 4826 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0e54d8ec-766b-4711-94e7-08fcfe836c67-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 29 09:23:05 crc kubenswrapper[4826]: I0129 09:23:05.277618 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzz8b\" (UniqueName: \"kubernetes.io/projected/0e54d8ec-766b-4711-94e7-08fcfe836c67-kube-api-access-qzz8b\") on node \"crc\" DevicePath \"\"" Jan 29 09:23:05 crc kubenswrapper[4826]: I0129 09:23:05.277644 4826 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0e54d8ec-766b-4711-94e7-08fcfe836c67-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 29 09:23:05 crc kubenswrapper[4826]: I0129 09:23:05.277664 4826 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0e54d8ec-766b-4711-94e7-08fcfe836c67-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 29 09:23:05 crc kubenswrapper[4826]: I0129 09:23:05.277685 4826 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0e54d8ec-766b-4711-94e7-08fcfe836c67-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 29 09:23:05 crc kubenswrapper[4826]: I0129 09:23:05.277700 4826 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e54d8ec-766b-4711-94e7-08fcfe836c67-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 09:23:05 crc kubenswrapper[4826]: I0129 09:23:05.277759 4826 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 29 09:23:05 crc 
kubenswrapper[4826]: I0129 09:23:05.277783 4826 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0e54d8ec-766b-4711-94e7-08fcfe836c67-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 29 09:23:05 crc kubenswrapper[4826]: I0129 09:23:05.277802 4826 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e54d8ec-766b-4711-94e7-08fcfe836c67-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 29 09:23:05 crc kubenswrapper[4826]: I0129 09:23:05.315616 4826 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 29 09:23:05 crc kubenswrapper[4826]: I0129 09:23:05.382566 4826 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 29 09:23:05 crc kubenswrapper[4826]: I0129 09:23:05.677595 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"0e54d8ec-766b-4711-94e7-08fcfe836c67","Type":"ContainerDied","Data":"9eda60cfaae70ce853662b3a7babee1b1d31a48837226841a4d5af8023258075"} Jan 29 09:23:05 crc kubenswrapper[4826]: I0129 09:23:05.677648 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9eda60cfaae70ce853662b3a7babee1b1d31a48837226841a4d5af8023258075" Jan 29 09:23:05 crc kubenswrapper[4826]: I0129 09:23:05.677721 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 29 09:23:10 crc kubenswrapper[4826]: I0129 09:23:10.809314 4826 scope.go:117] "RemoveContainer" containerID="b1925ae332138348b1564740311cdf70107a953d6f523c1ed54020f639f51a42" Jan 29 09:23:10 crc kubenswrapper[4826]: E0129 09:23:10.810137 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:23:16 crc kubenswrapper[4826]: I0129 09:23:16.125226 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 29 09:23:16 crc kubenswrapper[4826]: E0129 09:23:16.126218 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45925902-ebc7-4aa8-8997-6046f16d968b" containerName="extract-content" Jan 29 09:23:16 crc kubenswrapper[4826]: I0129 09:23:16.126233 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="45925902-ebc7-4aa8-8997-6046f16d968b" containerName="extract-content" Jan 29 09:23:16 crc kubenswrapper[4826]: E0129 09:23:16.126246 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45925902-ebc7-4aa8-8997-6046f16d968b" containerName="registry-server" Jan 29 09:23:16 crc kubenswrapper[4826]: I0129 09:23:16.126253 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="45925902-ebc7-4aa8-8997-6046f16d968b" containerName="registry-server" Jan 29 09:23:16 crc kubenswrapper[4826]: E0129 09:23:16.126350 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45925902-ebc7-4aa8-8997-6046f16d968b" containerName="extract-utilities" Jan 29 09:23:16 crc kubenswrapper[4826]: I0129 09:23:16.126363 4826 
state_mem.go:107] "Deleted CPUSet assignment" podUID="45925902-ebc7-4aa8-8997-6046f16d968b" containerName="extract-utilities" Jan 29 09:23:16 crc kubenswrapper[4826]: E0129 09:23:16.126390 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e54d8ec-766b-4711-94e7-08fcfe836c67" containerName="tempest-tests-tempest-tests-runner" Jan 29 09:23:16 crc kubenswrapper[4826]: I0129 09:23:16.126401 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e54d8ec-766b-4711-94e7-08fcfe836c67" containerName="tempest-tests-tempest-tests-runner" Jan 29 09:23:16 crc kubenswrapper[4826]: I0129 09:23:16.126667 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e54d8ec-766b-4711-94e7-08fcfe836c67" containerName="tempest-tests-tempest-tests-runner" Jan 29 09:23:16 crc kubenswrapper[4826]: I0129 09:23:16.126688 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="45925902-ebc7-4aa8-8997-6046f16d968b" containerName="registry-server" Jan 29 09:23:16 crc kubenswrapper[4826]: I0129 09:23:16.127572 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 09:23:16 crc kubenswrapper[4826]: I0129 09:23:16.131807 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-qlrsf" Jan 29 09:23:16 crc kubenswrapper[4826]: I0129 09:23:16.152440 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 29 09:23:16 crc kubenswrapper[4826]: I0129 09:23:16.214614 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d7841bd6-c003-4e44-85cd-1aaa6e3ed48d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 09:23:16 crc kubenswrapper[4826]: I0129 09:23:16.214739 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m4kf\" (UniqueName: \"kubernetes.io/projected/d7841bd6-c003-4e44-85cd-1aaa6e3ed48d-kube-api-access-8m4kf\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d7841bd6-c003-4e44-85cd-1aaa6e3ed48d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 09:23:16 crc kubenswrapper[4826]: I0129 09:23:16.316022 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m4kf\" (UniqueName: \"kubernetes.io/projected/d7841bd6-c003-4e44-85cd-1aaa6e3ed48d-kube-api-access-8m4kf\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d7841bd6-c003-4e44-85cd-1aaa6e3ed48d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 09:23:16 crc kubenswrapper[4826]: I0129 09:23:16.316167 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d7841bd6-c003-4e44-85cd-1aaa6e3ed48d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 09:23:16 crc kubenswrapper[4826]: I0129 09:23:16.316726 4826 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d7841bd6-c003-4e44-85cd-1aaa6e3ed48d\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 09:23:16 crc kubenswrapper[4826]: I0129 09:23:16.339093 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m4kf\" (UniqueName: \"kubernetes.io/projected/d7841bd6-c003-4e44-85cd-1aaa6e3ed48d-kube-api-access-8m4kf\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d7841bd6-c003-4e44-85cd-1aaa6e3ed48d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 09:23:16 crc kubenswrapper[4826]: I0129 09:23:16.365055 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d7841bd6-c003-4e44-85cd-1aaa6e3ed48d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 09:23:16 crc kubenswrapper[4826]: I0129 09:23:16.455954 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 29 09:23:16 crc kubenswrapper[4826]: I0129 09:23:16.910134 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 29 09:23:17 crc kubenswrapper[4826]: I0129 09:23:17.787207 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"d7841bd6-c003-4e44-85cd-1aaa6e3ed48d","Type":"ContainerStarted","Data":"0f5b22b1d82de9ace3cf3090ff0cbefb38462fc87b8645e4a5545c4922dce7bf"} Jan 29 09:23:18 crc kubenswrapper[4826]: I0129 09:23:18.799727 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"d7841bd6-c003-4e44-85cd-1aaa6e3ed48d","Type":"ContainerStarted","Data":"24edad2ae92cfa5bfe129ccb116b1362f3466966f5ce0661d04a6c363ccd4292"} Jan 29 09:23:18 crc kubenswrapper[4826]: I0129 09:23:18.811524 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.567893384 podStartE2EDuration="2.811503155s" podCreationTimestamp="2026-01-29 09:23:16 +0000 UTC" firstStartedPulling="2026-01-29 09:23:16.915493949 +0000 UTC m=+9580.777287018" lastFinishedPulling="2026-01-29 09:23:18.15910372 +0000 UTC m=+9582.020896789" observedRunningTime="2026-01-29 09:23:18.811083224 +0000 UTC m=+9582.672876323" watchObservedRunningTime="2026-01-29 09:23:18.811503155 +0000 UTC m=+9582.673296224" Jan 29 09:23:25 crc kubenswrapper[4826]: I0129 09:23:25.809656 4826 scope.go:117] "RemoveContainer" containerID="b1925ae332138348b1564740311cdf70107a953d6f523c1ed54020f639f51a42" Jan 29 09:23:25 crc kubenswrapper[4826]: E0129 09:23:25.810425 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:23:38 crc kubenswrapper[4826]: I0129 09:23:38.809367 4826 scope.go:117] "RemoveContainer" containerID="b1925ae332138348b1564740311cdf70107a953d6f523c1ed54020f639f51a42" Jan 29 09:23:38 crc kubenswrapper[4826]: E0129 09:23:38.810142 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:23:49 crc kubenswrapper[4826]: I0129 09:23:49.808866 4826 scope.go:117] "RemoveContainer" containerID="b1925ae332138348b1564740311cdf70107a953d6f523c1ed54020f639f51a42" Jan 29 09:23:49 crc kubenswrapper[4826]: E0129 09:23:49.809751 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:24:00 crc kubenswrapper[4826]: I0129 09:24:00.811031 4826 scope.go:117] "RemoveContainer" containerID="b1925ae332138348b1564740311cdf70107a953d6f523c1ed54020f639f51a42" Jan 29 09:24:00 crc kubenswrapper[4826]: E0129 09:24:00.811799 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:24:15 crc kubenswrapper[4826]: I0129 09:24:15.809248 4826 scope.go:117] "RemoveContainer" containerID="b1925ae332138348b1564740311cdf70107a953d6f523c1ed54020f639f51a42" Jan 29 09:24:15 crc kubenswrapper[4826]: E0129 09:24:15.809990 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:24:25 crc kubenswrapper[4826]: I0129 09:24:25.878420 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2splk/must-gather-2kvp8"] Jan 29 09:24:25 crc kubenswrapper[4826]: I0129 09:24:25.881224 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2splk/must-gather-2kvp8" Jan 29 09:24:25 crc kubenswrapper[4826]: I0129 09:24:25.883831 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-2splk"/"default-dockercfg-l6rpd" Jan 29 09:24:25 crc kubenswrapper[4826]: I0129 09:24:25.887740 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-2splk"/"kube-root-ca.crt" Jan 29 09:24:25 crc kubenswrapper[4826]: I0129 09:24:25.887971 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-2splk"/"openshift-service-ca.crt" Jan 29 09:24:25 crc kubenswrapper[4826]: I0129 09:24:25.888820 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-2splk/must-gather-2kvp8"] Jan 29 09:24:26 crc kubenswrapper[4826]: I0129 09:24:26.011827 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxn8x\" (UniqueName: \"kubernetes.io/projected/b0d19904-4548-49c4-a608-18918d6fbe76-kube-api-access-gxn8x\") pod \"must-gather-2kvp8\" (UID: \"b0d19904-4548-49c4-a608-18918d6fbe76\") " pod="openshift-must-gather-2splk/must-gather-2kvp8" Jan 29 09:24:26 crc kubenswrapper[4826]: I0129 09:24:26.011916 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b0d19904-4548-49c4-a608-18918d6fbe76-must-gather-output\") pod \"must-gather-2kvp8\" (UID: \"b0d19904-4548-49c4-a608-18918d6fbe76\") " pod="openshift-must-gather-2splk/must-gather-2kvp8" Jan 29 09:24:26 crc kubenswrapper[4826]: I0129 09:24:26.114139 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxn8x\" (UniqueName: \"kubernetes.io/projected/b0d19904-4548-49c4-a608-18918d6fbe76-kube-api-access-gxn8x\") pod \"must-gather-2kvp8\" (UID: \"b0d19904-4548-49c4-a608-18918d6fbe76\") " 
pod="openshift-must-gather-2splk/must-gather-2kvp8" Jan 29 09:24:26 crc kubenswrapper[4826]: I0129 09:24:26.114268 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b0d19904-4548-49c4-a608-18918d6fbe76-must-gather-output\") pod \"must-gather-2kvp8\" (UID: \"b0d19904-4548-49c4-a608-18918d6fbe76\") " pod="openshift-must-gather-2splk/must-gather-2kvp8" Jan 29 09:24:26 crc kubenswrapper[4826]: I0129 09:24:26.114811 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b0d19904-4548-49c4-a608-18918d6fbe76-must-gather-output\") pod \"must-gather-2kvp8\" (UID: \"b0d19904-4548-49c4-a608-18918d6fbe76\") " pod="openshift-must-gather-2splk/must-gather-2kvp8" Jan 29 09:24:26 crc kubenswrapper[4826]: I0129 09:24:26.495413 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxn8x\" (UniqueName: \"kubernetes.io/projected/b0d19904-4548-49c4-a608-18918d6fbe76-kube-api-access-gxn8x\") pod \"must-gather-2kvp8\" (UID: \"b0d19904-4548-49c4-a608-18918d6fbe76\") " pod="openshift-must-gather-2splk/must-gather-2kvp8" Jan 29 09:24:26 crc kubenswrapper[4826]: I0129 09:24:26.514323 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2splk/must-gather-2kvp8" Jan 29 09:24:27 crc kubenswrapper[4826]: I0129 09:24:27.062668 4826 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 09:24:27 crc kubenswrapper[4826]: I0129 09:24:27.063022 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-2splk/must-gather-2kvp8"] Jan 29 09:24:27 crc kubenswrapper[4826]: I0129 09:24:27.536372 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2splk/must-gather-2kvp8" event={"ID":"b0d19904-4548-49c4-a608-18918d6fbe76","Type":"ContainerStarted","Data":"98e2524ec5f87d7b37334b155d92a4f531ada06ba5592b2fb6cafc167126f29b"} Jan 29 09:24:28 crc kubenswrapper[4826]: I0129 09:24:28.809736 4826 scope.go:117] "RemoveContainer" containerID="b1925ae332138348b1564740311cdf70107a953d6f523c1ed54020f639f51a42" Jan 29 09:24:28 crc kubenswrapper[4826]: E0129 09:24:28.810007 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:24:31 crc kubenswrapper[4826]: I0129 09:24:31.601373 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2splk/must-gather-2kvp8" event={"ID":"b0d19904-4548-49c4-a608-18918d6fbe76","Type":"ContainerStarted","Data":"72aa304056fe087f3e76308c4be2f823506a2e3aa6a36d49650c9c4a2c018856"} Jan 29 09:24:32 crc kubenswrapper[4826]: I0129 09:24:32.610394 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2splk/must-gather-2kvp8" 
event={"ID":"b0d19904-4548-49c4-a608-18918d6fbe76","Type":"ContainerStarted","Data":"95323d346ecbcaf41c8db64ebddd57e73cb6413f1b547e86b464e0fa903fcdb0"} Jan 29 09:24:32 crc kubenswrapper[4826]: I0129 09:24:32.627525 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-2splk/must-gather-2kvp8" podStartSLOduration=3.477279385 podStartE2EDuration="7.627504619s" podCreationTimestamp="2026-01-29 09:24:25 +0000 UTC" firstStartedPulling="2026-01-29 09:24:27.062362839 +0000 UTC m=+9650.924155908" lastFinishedPulling="2026-01-29 09:24:31.212588073 +0000 UTC m=+9655.074381142" observedRunningTime="2026-01-29 09:24:32.623751299 +0000 UTC m=+9656.485544378" watchObservedRunningTime="2026-01-29 09:24:32.627504619 +0000 UTC m=+9656.489297688" Jan 29 09:24:36 crc kubenswrapper[4826]: E0129 09:24:36.042598 4826 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.173:57064->38.102.83.173:41327: write tcp 38.102.83.173:57064->38.102.83.173:41327: write: broken pipe Jan 29 09:24:36 crc kubenswrapper[4826]: I0129 09:24:36.908441 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2splk/crc-debug-gmbx6"] Jan 29 09:24:36 crc kubenswrapper[4826]: I0129 09:24:36.910211 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2splk/crc-debug-gmbx6" Jan 29 09:24:37 crc kubenswrapper[4826]: I0129 09:24:37.084358 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6a1a4f32-64a1-4ff6-bb77-b47bc7e57695-host\") pod \"crc-debug-gmbx6\" (UID: \"6a1a4f32-64a1-4ff6-bb77-b47bc7e57695\") " pod="openshift-must-gather-2splk/crc-debug-gmbx6" Jan 29 09:24:37 crc kubenswrapper[4826]: I0129 09:24:37.084924 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rrj6\" (UniqueName: \"kubernetes.io/projected/6a1a4f32-64a1-4ff6-bb77-b47bc7e57695-kube-api-access-6rrj6\") pod \"crc-debug-gmbx6\" (UID: \"6a1a4f32-64a1-4ff6-bb77-b47bc7e57695\") " pod="openshift-must-gather-2splk/crc-debug-gmbx6" Jan 29 09:24:37 crc kubenswrapper[4826]: I0129 09:24:37.187117 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rrj6\" (UniqueName: \"kubernetes.io/projected/6a1a4f32-64a1-4ff6-bb77-b47bc7e57695-kube-api-access-6rrj6\") pod \"crc-debug-gmbx6\" (UID: \"6a1a4f32-64a1-4ff6-bb77-b47bc7e57695\") " pod="openshift-must-gather-2splk/crc-debug-gmbx6" Jan 29 09:24:37 crc kubenswrapper[4826]: I0129 09:24:37.187359 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6a1a4f32-64a1-4ff6-bb77-b47bc7e57695-host\") pod \"crc-debug-gmbx6\" (UID: \"6a1a4f32-64a1-4ff6-bb77-b47bc7e57695\") " pod="openshift-must-gather-2splk/crc-debug-gmbx6" Jan 29 09:24:37 crc kubenswrapper[4826]: I0129 09:24:37.187512 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6a1a4f32-64a1-4ff6-bb77-b47bc7e57695-host\") pod \"crc-debug-gmbx6\" (UID: \"6a1a4f32-64a1-4ff6-bb77-b47bc7e57695\") " pod="openshift-must-gather-2splk/crc-debug-gmbx6" Jan 29 09:24:37 crc 
kubenswrapper[4826]: I0129 09:24:37.795333 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rrj6\" (UniqueName: \"kubernetes.io/projected/6a1a4f32-64a1-4ff6-bb77-b47bc7e57695-kube-api-access-6rrj6\") pod \"crc-debug-gmbx6\" (UID: \"6a1a4f32-64a1-4ff6-bb77-b47bc7e57695\") " pod="openshift-must-gather-2splk/crc-debug-gmbx6" Jan 29 09:24:37 crc kubenswrapper[4826]: I0129 09:24:37.836415 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2splk/crc-debug-gmbx6" Jan 29 09:24:38 crc kubenswrapper[4826]: I0129 09:24:38.669655 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2splk/crc-debug-gmbx6" event={"ID":"6a1a4f32-64a1-4ff6-bb77-b47bc7e57695","Type":"ContainerStarted","Data":"eb051c33efaaa52d47a7d6602ffd0cfa7e482177ff0d51b680346c31e57aba0d"} Jan 29 09:24:43 crc kubenswrapper[4826]: I0129 09:24:43.809137 4826 scope.go:117] "RemoveContainer" containerID="b1925ae332138348b1564740311cdf70107a953d6f523c1ed54020f639f51a42" Jan 29 09:24:43 crc kubenswrapper[4826]: E0129 09:24:43.810149 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:24:50 crc kubenswrapper[4826]: I0129 09:24:50.789900 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2splk/crc-debug-gmbx6" event={"ID":"6a1a4f32-64a1-4ff6-bb77-b47bc7e57695","Type":"ContainerStarted","Data":"678fd57d9a6590def0e579feb9588fd0657ef1fd3e8d24b781285a9bbd628073"} Jan 29 09:24:50 crc kubenswrapper[4826]: I0129 09:24:50.816268 4826 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-must-gather-2splk/crc-debug-gmbx6" podStartSLOduration=3.04557393 podStartE2EDuration="14.816248235s" podCreationTimestamp="2026-01-29 09:24:36 +0000 UTC" firstStartedPulling="2026-01-29 09:24:37.87005792 +0000 UTC m=+9661.731850989" lastFinishedPulling="2026-01-29 09:24:49.640732225 +0000 UTC m=+9673.502525294" observedRunningTime="2026-01-29 09:24:50.805343456 +0000 UTC m=+9674.667136525" watchObservedRunningTime="2026-01-29 09:24:50.816248235 +0000 UTC m=+9674.678041304" Jan 29 09:24:54 crc kubenswrapper[4826]: I0129 09:24:54.808596 4826 scope.go:117] "RemoveContainer" containerID="b1925ae332138348b1564740311cdf70107a953d6f523c1ed54020f639f51a42" Jan 29 09:24:54 crc kubenswrapper[4826]: E0129 09:24:54.810143 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:25:08 crc kubenswrapper[4826]: I0129 09:25:08.809492 4826 scope.go:117] "RemoveContainer" containerID="b1925ae332138348b1564740311cdf70107a953d6f523c1ed54020f639f51a42" Jan 29 09:25:11 crc kubenswrapper[4826]: I0129 09:25:11.011447 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerStarted","Data":"f4c5bf56ecf4d49f31d7cceee4ddf4d08b3ec9af55a5f1692d3a3420750d3329"} Jan 29 09:25:46 crc kubenswrapper[4826]: I0129 09:25:46.361727 4826 generic.go:334] "Generic (PLEG): container finished" podID="6a1a4f32-64a1-4ff6-bb77-b47bc7e57695" containerID="678fd57d9a6590def0e579feb9588fd0657ef1fd3e8d24b781285a9bbd628073" exitCode=0 Jan 29 09:25:46 crc kubenswrapper[4826]: I0129 
09:25:46.361931 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2splk/crc-debug-gmbx6" event={"ID":"6a1a4f32-64a1-4ff6-bb77-b47bc7e57695","Type":"ContainerDied","Data":"678fd57d9a6590def0e579feb9588fd0657ef1fd3e8d24b781285a9bbd628073"} Jan 29 09:25:47 crc kubenswrapper[4826]: I0129 09:25:47.477250 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2splk/crc-debug-gmbx6" Jan 29 09:25:47 crc kubenswrapper[4826]: I0129 09:25:47.513819 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2splk/crc-debug-gmbx6"] Jan 29 09:25:47 crc kubenswrapper[4826]: I0129 09:25:47.522582 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2splk/crc-debug-gmbx6"] Jan 29 09:25:47 crc kubenswrapper[4826]: I0129 09:25:47.628034 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rrj6\" (UniqueName: \"kubernetes.io/projected/6a1a4f32-64a1-4ff6-bb77-b47bc7e57695-kube-api-access-6rrj6\") pod \"6a1a4f32-64a1-4ff6-bb77-b47bc7e57695\" (UID: \"6a1a4f32-64a1-4ff6-bb77-b47bc7e57695\") " Jan 29 09:25:47 crc kubenswrapper[4826]: I0129 09:25:47.628134 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6a1a4f32-64a1-4ff6-bb77-b47bc7e57695-host\") pod \"6a1a4f32-64a1-4ff6-bb77-b47bc7e57695\" (UID: \"6a1a4f32-64a1-4ff6-bb77-b47bc7e57695\") " Jan 29 09:25:47 crc kubenswrapper[4826]: I0129 09:25:47.628290 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a1a4f32-64a1-4ff6-bb77-b47bc7e57695-host" (OuterVolumeSpecName: "host") pod "6a1a4f32-64a1-4ff6-bb77-b47bc7e57695" (UID: "6a1a4f32-64a1-4ff6-bb77-b47bc7e57695"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:25:47 crc kubenswrapper[4826]: I0129 09:25:47.628824 4826 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6a1a4f32-64a1-4ff6-bb77-b47bc7e57695-host\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:47 crc kubenswrapper[4826]: I0129 09:25:47.634936 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a1a4f32-64a1-4ff6-bb77-b47bc7e57695-kube-api-access-6rrj6" (OuterVolumeSpecName: "kube-api-access-6rrj6") pod "6a1a4f32-64a1-4ff6-bb77-b47bc7e57695" (UID: "6a1a4f32-64a1-4ff6-bb77-b47bc7e57695"). InnerVolumeSpecName "kube-api-access-6rrj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:25:47 crc kubenswrapper[4826]: I0129 09:25:47.732095 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rrj6\" (UniqueName: \"kubernetes.io/projected/6a1a4f32-64a1-4ff6-bb77-b47bc7e57695-kube-api-access-6rrj6\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:48 crc kubenswrapper[4826]: I0129 09:25:48.383058 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb051c33efaaa52d47a7d6602ffd0cfa7e482177ff0d51b680346c31e57aba0d" Jan 29 09:25:48 crc kubenswrapper[4826]: I0129 09:25:48.383169 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2splk/crc-debug-gmbx6" Jan 29 09:25:48 crc kubenswrapper[4826]: I0129 09:25:48.733240 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2splk/crc-debug-7gtzd"] Jan 29 09:25:48 crc kubenswrapper[4826]: E0129 09:25:48.733722 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a1a4f32-64a1-4ff6-bb77-b47bc7e57695" containerName="container-00" Jan 29 09:25:48 crc kubenswrapper[4826]: I0129 09:25:48.733738 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a1a4f32-64a1-4ff6-bb77-b47bc7e57695" containerName="container-00" Jan 29 09:25:48 crc kubenswrapper[4826]: I0129 09:25:48.733968 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a1a4f32-64a1-4ff6-bb77-b47bc7e57695" containerName="container-00" Jan 29 09:25:48 crc kubenswrapper[4826]: I0129 09:25:48.734797 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2splk/crc-debug-7gtzd" Jan 29 09:25:48 crc kubenswrapper[4826]: I0129 09:25:48.829536 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a1a4f32-64a1-4ff6-bb77-b47bc7e57695" path="/var/lib/kubelet/pods/6a1a4f32-64a1-4ff6-bb77-b47bc7e57695/volumes" Jan 29 09:25:48 crc kubenswrapper[4826]: I0129 09:25:48.855642 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/38394d35-253a-4acb-80cc-a154e9308bb7-host\") pod \"crc-debug-7gtzd\" (UID: \"38394d35-253a-4acb-80cc-a154e9308bb7\") " pod="openshift-must-gather-2splk/crc-debug-7gtzd" Jan 29 09:25:48 crc kubenswrapper[4826]: I0129 09:25:48.855956 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqsdl\" (UniqueName: \"kubernetes.io/projected/38394d35-253a-4acb-80cc-a154e9308bb7-kube-api-access-sqsdl\") pod \"crc-debug-7gtzd\" (UID: 
\"38394d35-253a-4acb-80cc-a154e9308bb7\") " pod="openshift-must-gather-2splk/crc-debug-7gtzd" Jan 29 09:25:48 crc kubenswrapper[4826]: I0129 09:25:48.959001 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqsdl\" (UniqueName: \"kubernetes.io/projected/38394d35-253a-4acb-80cc-a154e9308bb7-kube-api-access-sqsdl\") pod \"crc-debug-7gtzd\" (UID: \"38394d35-253a-4acb-80cc-a154e9308bb7\") " pod="openshift-must-gather-2splk/crc-debug-7gtzd" Jan 29 09:25:48 crc kubenswrapper[4826]: I0129 09:25:48.959186 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/38394d35-253a-4acb-80cc-a154e9308bb7-host\") pod \"crc-debug-7gtzd\" (UID: \"38394d35-253a-4acb-80cc-a154e9308bb7\") " pod="openshift-must-gather-2splk/crc-debug-7gtzd" Jan 29 09:25:48 crc kubenswrapper[4826]: I0129 09:25:48.959865 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/38394d35-253a-4acb-80cc-a154e9308bb7-host\") pod \"crc-debug-7gtzd\" (UID: \"38394d35-253a-4acb-80cc-a154e9308bb7\") " pod="openshift-must-gather-2splk/crc-debug-7gtzd" Jan 29 09:25:48 crc kubenswrapper[4826]: I0129 09:25:48.989167 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqsdl\" (UniqueName: \"kubernetes.io/projected/38394d35-253a-4acb-80cc-a154e9308bb7-kube-api-access-sqsdl\") pod \"crc-debug-7gtzd\" (UID: \"38394d35-253a-4acb-80cc-a154e9308bb7\") " pod="openshift-must-gather-2splk/crc-debug-7gtzd" Jan 29 09:25:49 crc kubenswrapper[4826]: I0129 09:25:49.060186 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2splk/crc-debug-7gtzd" Jan 29 09:25:50 crc kubenswrapper[4826]: I0129 09:25:50.403550 4826 generic.go:334] "Generic (PLEG): container finished" podID="38394d35-253a-4acb-80cc-a154e9308bb7" containerID="d622b071f7971b3312771f1c9d529b6931d70e262afaf36706efb0af4c2d33a8" exitCode=0 Jan 29 09:25:50 crc kubenswrapper[4826]: I0129 09:25:50.403637 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2splk/crc-debug-7gtzd" event={"ID":"38394d35-253a-4acb-80cc-a154e9308bb7","Type":"ContainerDied","Data":"d622b071f7971b3312771f1c9d529b6931d70e262afaf36706efb0af4c2d33a8"} Jan 29 09:25:50 crc kubenswrapper[4826]: I0129 09:25:50.404197 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2splk/crc-debug-7gtzd" event={"ID":"38394d35-253a-4acb-80cc-a154e9308bb7","Type":"ContainerStarted","Data":"cc439f1e0e1406f6d85255913cbb6693527fae0b50b3d026151a9ddfbcff3137"} Jan 29 09:25:51 crc kubenswrapper[4826]: I0129 09:25:51.702430 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2splk/crc-debug-7gtzd" Jan 29 09:25:51 crc kubenswrapper[4826]: I0129 09:25:51.851855 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/38394d35-253a-4acb-80cc-a154e9308bb7-host\") pod \"38394d35-253a-4acb-80cc-a154e9308bb7\" (UID: \"38394d35-253a-4acb-80cc-a154e9308bb7\") " Jan 29 09:25:51 crc kubenswrapper[4826]: I0129 09:25:51.851992 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqsdl\" (UniqueName: \"kubernetes.io/projected/38394d35-253a-4acb-80cc-a154e9308bb7-kube-api-access-sqsdl\") pod \"38394d35-253a-4acb-80cc-a154e9308bb7\" (UID: \"38394d35-253a-4acb-80cc-a154e9308bb7\") " Jan 29 09:25:51 crc kubenswrapper[4826]: I0129 09:25:51.852005 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38394d35-253a-4acb-80cc-a154e9308bb7-host" (OuterVolumeSpecName: "host") pod "38394d35-253a-4acb-80cc-a154e9308bb7" (UID: "38394d35-253a-4acb-80cc-a154e9308bb7"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:25:51 crc kubenswrapper[4826]: I0129 09:25:51.852415 4826 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/38394d35-253a-4acb-80cc-a154e9308bb7-host\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:51 crc kubenswrapper[4826]: I0129 09:25:51.864535 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38394d35-253a-4acb-80cc-a154e9308bb7-kube-api-access-sqsdl" (OuterVolumeSpecName: "kube-api-access-sqsdl") pod "38394d35-253a-4acb-80cc-a154e9308bb7" (UID: "38394d35-253a-4acb-80cc-a154e9308bb7"). InnerVolumeSpecName "kube-api-access-sqsdl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:25:51 crc kubenswrapper[4826]: I0129 09:25:51.954418 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqsdl\" (UniqueName: \"kubernetes.io/projected/38394d35-253a-4acb-80cc-a154e9308bb7-kube-api-access-sqsdl\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:52 crc kubenswrapper[4826]: I0129 09:25:52.429487 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2splk/crc-debug-7gtzd" event={"ID":"38394d35-253a-4acb-80cc-a154e9308bb7","Type":"ContainerDied","Data":"cc439f1e0e1406f6d85255913cbb6693527fae0b50b3d026151a9ddfbcff3137"} Jan 29 09:25:52 crc kubenswrapper[4826]: I0129 09:25:52.429541 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc439f1e0e1406f6d85255913cbb6693527fae0b50b3d026151a9ddfbcff3137" Jan 29 09:25:52 crc kubenswrapper[4826]: I0129 09:25:52.429604 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2splk/crc-debug-7gtzd" Jan 29 09:25:53 crc kubenswrapper[4826]: I0129 09:25:53.179533 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2splk/crc-debug-7gtzd"] Jan 29 09:25:53 crc kubenswrapper[4826]: I0129 09:25:53.188436 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2splk/crc-debug-7gtzd"] Jan 29 09:25:54 crc kubenswrapper[4826]: I0129 09:25:54.336855 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2splk/crc-debug-gxz9p"] Jan 29 09:25:54 crc kubenswrapper[4826]: E0129 09:25:54.337429 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38394d35-253a-4acb-80cc-a154e9308bb7" containerName="container-00" Jan 29 09:25:54 crc kubenswrapper[4826]: I0129 09:25:54.337450 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="38394d35-253a-4acb-80cc-a154e9308bb7" containerName="container-00" Jan 29 09:25:54 crc 
kubenswrapper[4826]: I0129 09:25:54.337708 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="38394d35-253a-4acb-80cc-a154e9308bb7" containerName="container-00" Jan 29 09:25:54 crc kubenswrapper[4826]: I0129 09:25:54.338668 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2splk/crc-debug-gxz9p" Jan 29 09:25:54 crc kubenswrapper[4826]: I0129 09:25:54.497878 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zmt7\" (UniqueName: \"kubernetes.io/projected/d3abf3be-6be1-40b4-a7dc-eacbc76b6f8c-kube-api-access-8zmt7\") pod \"crc-debug-gxz9p\" (UID: \"d3abf3be-6be1-40b4-a7dc-eacbc76b6f8c\") " pod="openshift-must-gather-2splk/crc-debug-gxz9p" Jan 29 09:25:54 crc kubenswrapper[4826]: I0129 09:25:54.498010 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d3abf3be-6be1-40b4-a7dc-eacbc76b6f8c-host\") pod \"crc-debug-gxz9p\" (UID: \"d3abf3be-6be1-40b4-a7dc-eacbc76b6f8c\") " pod="openshift-must-gather-2splk/crc-debug-gxz9p" Jan 29 09:25:54 crc kubenswrapper[4826]: I0129 09:25:54.599634 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zmt7\" (UniqueName: \"kubernetes.io/projected/d3abf3be-6be1-40b4-a7dc-eacbc76b6f8c-kube-api-access-8zmt7\") pod \"crc-debug-gxz9p\" (UID: \"d3abf3be-6be1-40b4-a7dc-eacbc76b6f8c\") " pod="openshift-must-gather-2splk/crc-debug-gxz9p" Jan 29 09:25:54 crc kubenswrapper[4826]: I0129 09:25:54.599757 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d3abf3be-6be1-40b4-a7dc-eacbc76b6f8c-host\") pod \"crc-debug-gxz9p\" (UID: \"d3abf3be-6be1-40b4-a7dc-eacbc76b6f8c\") " pod="openshift-must-gather-2splk/crc-debug-gxz9p" Jan 29 09:25:54 crc kubenswrapper[4826]: I0129 09:25:54.599899 4826 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d3abf3be-6be1-40b4-a7dc-eacbc76b6f8c-host\") pod \"crc-debug-gxz9p\" (UID: \"d3abf3be-6be1-40b4-a7dc-eacbc76b6f8c\") " pod="openshift-must-gather-2splk/crc-debug-gxz9p" Jan 29 09:25:54 crc kubenswrapper[4826]: I0129 09:25:54.627731 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zmt7\" (UniqueName: \"kubernetes.io/projected/d3abf3be-6be1-40b4-a7dc-eacbc76b6f8c-kube-api-access-8zmt7\") pod \"crc-debug-gxz9p\" (UID: \"d3abf3be-6be1-40b4-a7dc-eacbc76b6f8c\") " pod="openshift-must-gather-2splk/crc-debug-gxz9p" Jan 29 09:25:54 crc kubenswrapper[4826]: I0129 09:25:54.658105 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2splk/crc-debug-gxz9p" Jan 29 09:25:54 crc kubenswrapper[4826]: I0129 09:25:54.824357 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38394d35-253a-4acb-80cc-a154e9308bb7" path="/var/lib/kubelet/pods/38394d35-253a-4acb-80cc-a154e9308bb7/volumes" Jan 29 09:25:55 crc kubenswrapper[4826]: I0129 09:25:55.456685 4826 generic.go:334] "Generic (PLEG): container finished" podID="d3abf3be-6be1-40b4-a7dc-eacbc76b6f8c" containerID="cd1d276a66caf342285b2fbe96446fefc5048e55d980624617d1c06ad79af65c" exitCode=0 Jan 29 09:25:55 crc kubenswrapper[4826]: I0129 09:25:55.456751 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2splk/crc-debug-gxz9p" event={"ID":"d3abf3be-6be1-40b4-a7dc-eacbc76b6f8c","Type":"ContainerDied","Data":"cd1d276a66caf342285b2fbe96446fefc5048e55d980624617d1c06ad79af65c"} Jan 29 09:25:55 crc kubenswrapper[4826]: I0129 09:25:55.457012 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2splk/crc-debug-gxz9p" 
event={"ID":"d3abf3be-6be1-40b4-a7dc-eacbc76b6f8c","Type":"ContainerStarted","Data":"eb5b1d9424bb3f84d64fed3e8ff828eefc14c22dc7fa747b8b06412deae2a3b4"} Jan 29 09:25:55 crc kubenswrapper[4826]: I0129 09:25:55.493610 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2splk/crc-debug-gxz9p"] Jan 29 09:25:55 crc kubenswrapper[4826]: I0129 09:25:55.522916 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2splk/crc-debug-gxz9p"] Jan 29 09:25:56 crc kubenswrapper[4826]: I0129 09:25:56.560390 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2splk/crc-debug-gxz9p" Jan 29 09:25:56 crc kubenswrapper[4826]: I0129 09:25:56.645665 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d3abf3be-6be1-40b4-a7dc-eacbc76b6f8c-host\") pod \"d3abf3be-6be1-40b4-a7dc-eacbc76b6f8c\" (UID: \"d3abf3be-6be1-40b4-a7dc-eacbc76b6f8c\") " Jan 29 09:25:56 crc kubenswrapper[4826]: I0129 09:25:56.645823 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3abf3be-6be1-40b4-a7dc-eacbc76b6f8c-host" (OuterVolumeSpecName: "host") pod "d3abf3be-6be1-40b4-a7dc-eacbc76b6f8c" (UID: "d3abf3be-6be1-40b4-a7dc-eacbc76b6f8c"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 09:25:56 crc kubenswrapper[4826]: I0129 09:25:56.645981 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zmt7\" (UniqueName: \"kubernetes.io/projected/d3abf3be-6be1-40b4-a7dc-eacbc76b6f8c-kube-api-access-8zmt7\") pod \"d3abf3be-6be1-40b4-a7dc-eacbc76b6f8c\" (UID: \"d3abf3be-6be1-40b4-a7dc-eacbc76b6f8c\") " Jan 29 09:25:56 crc kubenswrapper[4826]: I0129 09:25:56.646975 4826 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d3abf3be-6be1-40b4-a7dc-eacbc76b6f8c-host\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:56 crc kubenswrapper[4826]: I0129 09:25:56.651241 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3abf3be-6be1-40b4-a7dc-eacbc76b6f8c-kube-api-access-8zmt7" (OuterVolumeSpecName: "kube-api-access-8zmt7") pod "d3abf3be-6be1-40b4-a7dc-eacbc76b6f8c" (UID: "d3abf3be-6be1-40b4-a7dc-eacbc76b6f8c"). InnerVolumeSpecName "kube-api-access-8zmt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:25:56 crc kubenswrapper[4826]: I0129 09:25:56.748911 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zmt7\" (UniqueName: \"kubernetes.io/projected/d3abf3be-6be1-40b4-a7dc-eacbc76b6f8c-kube-api-access-8zmt7\") on node \"crc\" DevicePath \"\"" Jan 29 09:25:56 crc kubenswrapper[4826]: I0129 09:25:56.820268 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3abf3be-6be1-40b4-a7dc-eacbc76b6f8c" path="/var/lib/kubelet/pods/d3abf3be-6be1-40b4-a7dc-eacbc76b6f8c/volumes" Jan 29 09:25:57 crc kubenswrapper[4826]: I0129 09:25:57.477368 4826 scope.go:117] "RemoveContainer" containerID="cd1d276a66caf342285b2fbe96446fefc5048e55d980624617d1c06ad79af65c" Jan 29 09:25:57 crc kubenswrapper[4826]: I0129 09:25:57.477423 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2splk/crc-debug-gxz9p" Jan 29 09:27:35 crc kubenswrapper[4826]: I0129 09:27:35.656182 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:27:35 crc kubenswrapper[4826]: I0129 09:27:35.656771 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:27:46 crc kubenswrapper[4826]: I0129 09:27:46.304065 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_74d65e3a-57e3-4134-a934-f41a91ddaf16/init-config-reloader/0.log" Jan 29 09:27:46 crc kubenswrapper[4826]: I0129 09:27:46.507310 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_74d65e3a-57e3-4134-a934-f41a91ddaf16/init-config-reloader/0.log" Jan 29 09:27:46 crc kubenswrapper[4826]: I0129 09:27:46.556230 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_74d65e3a-57e3-4134-a934-f41a91ddaf16/config-reloader/0.log" Jan 29 09:27:46 crc kubenswrapper[4826]: I0129 09:27:46.563028 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_74d65e3a-57e3-4134-a934-f41a91ddaf16/alertmanager/0.log" Jan 29 09:27:46 crc kubenswrapper[4826]: I0129 09:27:46.742130 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_b502c48d-ff95-44af-a9ad-06dc74aa731e/aodh-api/0.log" Jan 29 09:27:46 crc kubenswrapper[4826]: I0129 09:27:46.851166 
4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_b502c48d-ff95-44af-a9ad-06dc74aa731e/aodh-evaluator/0.log" Jan 29 09:27:46 crc kubenswrapper[4826]: I0129 09:27:46.870169 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_b502c48d-ff95-44af-a9ad-06dc74aa731e/aodh-listener/0.log" Jan 29 09:27:46 crc kubenswrapper[4826]: I0129 09:27:46.973153 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_b502c48d-ff95-44af-a9ad-06dc74aa731e/aodh-notifier/0.log" Jan 29 09:27:47 crc kubenswrapper[4826]: I0129 09:27:47.107036 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7745d4877d-gkrhq_08790220-3de0-47f4-a6b2-87fcf15ecdfa/barbican-api-log/0.log" Jan 29 09:27:47 crc kubenswrapper[4826]: I0129 09:27:47.115424 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7745d4877d-gkrhq_08790220-3de0-47f4-a6b2-87fcf15ecdfa/barbican-api/0.log" Jan 29 09:27:47 crc kubenswrapper[4826]: I0129 09:27:47.370573 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-59b95dcdb6-fwmqw_b0b58a6f-0e9e-4bab-9492-50efc2437486/barbican-keystone-listener/0.log" Jan 29 09:27:47 crc kubenswrapper[4826]: I0129 09:27:47.641453 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7757bff99c-cgbsc_0a714542-5a9f-4e52-a120-aa5340d5c21e/barbican-worker/0.log" Jan 29 09:27:47 crc kubenswrapper[4826]: I0129 09:27:47.663928 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7757bff99c-cgbsc_0a714542-5a9f-4e52-a120-aa5340d5c21e/barbican-worker-log/0.log" Jan 29 09:27:47 crc kubenswrapper[4826]: I0129 09:27:47.784136 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-59b95dcdb6-fwmqw_b0b58a6f-0e9e-4bab-9492-50efc2437486/barbican-keystone-listener-log/0.log" Jan 29 09:27:47 crc 
kubenswrapper[4826]: I0129 09:27:47.908767 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-openstack-openstack-cell1-2rqtc_bcf40af3-eef2-454e-a418-dd1d61e7faf3/bootstrap-openstack-openstack-cell1/0.log" Jan 29 09:27:48 crc kubenswrapper[4826]: I0129 09:27:48.139433 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_36963e88-f319-40c6-94c1-b4c7aba80b44/ceilometer-central-agent/0.log" Jan 29 09:27:48 crc kubenswrapper[4826]: I0129 09:27:48.154798 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_36963e88-f319-40c6-94c1-b4c7aba80b44/ceilometer-notification-agent/0.log" Jan 29 09:27:48 crc kubenswrapper[4826]: I0129 09:27:48.222970 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_36963e88-f319-40c6-94c1-b4c7aba80b44/proxy-httpd/0.log" Jan 29 09:27:48 crc kubenswrapper[4826]: I0129 09:27:48.334246 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_36963e88-f319-40c6-94c1-b4c7aba80b44/sg-core/0.log" Jan 29 09:27:48 crc kubenswrapper[4826]: I0129 09:27:48.540585 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_681f50ce-bf2e-46d0-bb2d-4b2c196a9749/cinder-api-log/0.log" Jan 29 09:27:48 crc kubenswrapper[4826]: I0129 09:27:48.592676 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_681f50ce-bf2e-46d0-bb2d-4b2c196a9749/cinder-api/0.log" Jan 29 09:27:48 crc kubenswrapper[4826]: I0129 09:27:48.744954 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_5b8d2b24-361f-4bea-8c27-55a8e1b6bd5a/cinder-scheduler/0.log" Jan 29 09:27:48 crc kubenswrapper[4826]: I0129 09:27:48.840008 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_5b8d2b24-361f-4bea-8c27-55a8e1b6bd5a/probe/0.log" Jan 29 09:27:49 crc kubenswrapper[4826]: I0129 09:27:49.000934 4826 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-openstack-openstack-cell1-k6stv_31bcbe02-3680-427f-a469-ba23dc092343/configure-network-openstack-openstack-cell1/0.log" Jan 29 09:27:49 crc kubenswrapper[4826]: I0129 09:27:49.101666 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-openstack-openstack-cell1-86qsz_67a18cd8-1442-487e-bd2a-92692793a734/configure-os-openstack-openstack-cell1/0.log" Jan 29 09:27:49 crc kubenswrapper[4826]: I0129 09:27:49.317666 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-74bdfdf6c9-sjbjs_ebef110f-fd34-4dc3-b8b4-346cd06b189b/init/0.log" Jan 29 09:27:49 crc kubenswrapper[4826]: I0129 09:27:49.665840 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-74bdfdf6c9-sjbjs_ebef110f-fd34-4dc3-b8b4-346cd06b189b/init/0.log" Jan 29 09:27:49 crc kubenswrapper[4826]: I0129 09:27:49.704727 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-74bdfdf6c9-sjbjs_ebef110f-fd34-4dc3-b8b4-346cd06b189b/dnsmasq-dns/0.log" Jan 29 09:27:49 crc kubenswrapper[4826]: I0129 09:27:49.780841 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-openstack-openstack-cell1-sxx2f_ee5fb0b2-f4b6-4533-b873-b40fef6a9747/download-cache-openstack-openstack-cell1/0.log" Jan 29 09:27:49 crc kubenswrapper[4826]: I0129 09:27:49.995394 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3/glance-httpd/0.log" Jan 29 09:27:50 crc kubenswrapper[4826]: I0129 09:27:50.043931 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_85d6d46d-c3aa-4a0e-8e2c-e6fb0a83e4b3/glance-log/0.log" Jan 29 09:27:50 crc kubenswrapper[4826]: I0129 09:27:50.218608 4826 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_c43019dd-767e-444a-92b8-a9c78adbcb43/glance-log/0.log" Jan 29 09:27:50 crc kubenswrapper[4826]: I0129 09:27:50.226598 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c43019dd-767e-444a-92b8-a9c78adbcb43/glance-httpd/0.log" Jan 29 09:27:50 crc kubenswrapper[4826]: I0129 09:27:50.623994 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-745776875c-xh5h5_edf33c20-4fec-4155-8a2f-6f9dd59b12ab/heat-engine/0.log" Jan 29 09:27:50 crc kubenswrapper[4826]: I0129 09:27:50.956770 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-7f4c4fc79c-mthpn_0cfd816e-cd1a-4072-9c0c-d25633a5bcf1/heat-api/0.log" Jan 29 09:27:51 crc kubenswrapper[4826]: I0129 09:27:51.008049 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6fc96bc4d8-55gpp_e716106f-e229-4e00-9ee5-278b120741a6/horizon/0.log" Jan 29 09:27:51 crc kubenswrapper[4826]: I0129 09:27:51.148926 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-5db8584866-xvf8s_8f9f671c-8f28-4bfa-b31b-05f06322cbb4/heat-cfnapi/0.log" Jan 29 09:27:51 crc kubenswrapper[4826]: I0129 09:27:51.232148 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-openstack-openstack-cell1-2qqxd_07208b6c-07ec-458e-b7c5-6460916bb061/install-certs-openstack-openstack-cell1/0.log" Jan 29 09:27:51 crc kubenswrapper[4826]: I0129 09:27:51.463334 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6fc96bc4d8-55gpp_e716106f-e229-4e00-9ee5-278b120741a6/horizon-log/0.log" Jan 29 09:27:51 crc kubenswrapper[4826]: I0129 09:27:51.521216 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-openstack-openstack-cell1-2h8kn_ac81a767-6eb1-4270-9d1f-824d8b6ce16b/install-os-openstack-openstack-cell1/0.log" Jan 29 09:27:51 crc kubenswrapper[4826]: 
I0129 09:27:51.699766 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29494621-wq488_05abc85a-7397-460f-a033-6b21ecaf2ddb/keystone-cron/0.log" Jan 29 09:27:51 crc kubenswrapper[4826]: I0129 09:27:51.751013 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_22ad018d-99ad-4eb2-bd3c-ef284341d2d1/kube-state-metrics/0.log" Jan 29 09:27:52 crc kubenswrapper[4826]: I0129 09:27:52.047690 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-openstack-openstack-cell1-rd9vm_bf6c48ac-7491-4cff-a809-f164ac932d35/libvirt-openstack-openstack-cell1/0.log" Jan 29 09:27:52 crc kubenswrapper[4826]: I0129 09:27:52.193192 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-bf9bfd559-x7h62_f31b43ea-4f64-437d-9bfe-7c16eced7589/keystone-api/0.log" Jan 29 09:27:53 crc kubenswrapper[4826]: I0129 09:27:53.095628 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5f8bfccb8f-fkvqn_45765b70-e905-4d85-8d65-bbacf291190a/neutron-httpd/0.log" Jan 29 09:27:53 crc kubenswrapper[4826]: I0129 09:27:53.116979 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-dhcp-openstack-openstack-cell1-d5rbv_4a787b34-0474-44bf-9785-416b19422277/neutron-dhcp-openstack-openstack-cell1/0.log" Jan 29 09:27:53 crc kubenswrapper[4826]: I0129 09:27:53.350875 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5f8bfccb8f-fkvqn_45765b70-e905-4d85-8d65-bbacf291190a/neutron-api/0.log" Jan 29 09:27:53 crc kubenswrapper[4826]: I0129 09:27:53.353095 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-openstack-openstack-cell1-wsjhn_7c28e648-e232-40da-abc7-39b878b704c2/neutron-metadata-openstack-openstack-cell1/0.log" Jan 29 09:27:53 crc kubenswrapper[4826]: I0129 09:27:53.884491 4826 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-sriov-openstack-openstack-cell1-ffxzr_506786bb-f168-420d-9f74-01304213b10b/neutron-sriov-openstack-openstack-cell1/0.log" Jan 29 09:27:54 crc kubenswrapper[4826]: I0129 09:27:54.257081 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b3143baa-65d4-4709-8c2e-57c85764d189/nova-api-log/0.log" Jan 29 09:27:54 crc kubenswrapper[4826]: I0129 09:27:54.447131 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b3143baa-65d4-4709-8c2e-57c85764d189/nova-api-api/0.log" Jan 29 09:27:54 crc kubenswrapper[4826]: I0129 09:27:54.582109 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_bb1d5abc-ba89-42e8-b750-84ee3c7ab606/nova-cell0-conductor-conductor/0.log" Jan 29 09:27:54 crc kubenswrapper[4826]: I0129 09:27:54.800632 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_c740aa90-a4d5-4260-bfcd-82b659c58b82/nova-cell1-conductor-conductor/0.log" Jan 29 09:27:54 crc kubenswrapper[4826]: I0129 09:27:54.941694 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_392179e6-1366-45dd-9742-d270beeefad6/nova-cell1-novncproxy-novncproxy/0.log" Jan 29 09:27:55 crc kubenswrapper[4826]: I0129 09:27:55.064451 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell7bvjn_8bacd48c-ec38-45a4-825d-0684192208bd/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1/0.log" Jan 29 09:27:55 crc kubenswrapper[4826]: I0129 09:27:55.167843 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-openstack-cell1-mclk4_df8a8f37-849b-4d34-8527-edc1fe6aa082/nova-cell1-openstack-openstack-cell1/0.log" Jan 29 09:27:55 crc kubenswrapper[4826]: I0129 09:27:55.415056 4826 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_26e62da8-b996-4163-bb10-3afdba722b5a/nova-metadata-log/0.log" Jan 29 09:27:55 crc kubenswrapper[4826]: I0129 09:27:55.704018 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_f0ffec67-acda-4aa6-924e-b94c3e09fc8f/nova-scheduler-scheduler/0.log" Jan 29 09:27:55 crc kubenswrapper[4826]: I0129 09:27:55.787047 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0a52085e-690a-44e3-a1f3-d4d668b6ff8e/mysql-bootstrap/0.log" Jan 29 09:27:56 crc kubenswrapper[4826]: I0129 09:27:56.033286 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0a52085e-690a-44e3-a1f3-d4d668b6ff8e/mysql-bootstrap/0.log" Jan 29 09:27:56 crc kubenswrapper[4826]: I0129 09:27:56.083692 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0a52085e-690a-44e3-a1f3-d4d668b6ff8e/galera/0.log" Jan 29 09:27:56 crc kubenswrapper[4826]: I0129 09:27:56.271730 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_26e62da8-b996-4163-bb10-3afdba722b5a/nova-metadata-metadata/0.log" Jan 29 09:27:56 crc kubenswrapper[4826]: I0129 09:27:56.335520 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae/mysql-bootstrap/0.log" Jan 29 09:27:56 crc kubenswrapper[4826]: I0129 09:27:56.525527 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae/galera/0.log" Jan 29 09:27:56 crc kubenswrapper[4826]: I0129 09:27:56.586588 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae/mysql-bootstrap/0.log" Jan 29 09:27:56 crc kubenswrapper[4826]: I0129 09:27:56.600245 4826 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstackclient_5f1a4fb9-7b51-4a6c-b594-8d8d98666063/openstackclient/0.log" Jan 29 09:27:56 crc kubenswrapper[4826]: I0129 09:27:56.867037 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f28c19ea-7404-4f03-b020-068bbd66be87/openstack-network-exporter/0.log" Jan 29 09:27:56 crc kubenswrapper[4826]: I0129 09:27:56.909216 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f28c19ea-7404-4f03-b020-068bbd66be87/ovn-northd/0.log" Jan 29 09:27:57 crc kubenswrapper[4826]: I0129 09:27:57.080125 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-openstack-openstack-cell1-zrxrk_4cf68c2a-d351-4f17-a5e2-da5006da2e03/ovn-openstack-openstack-cell1/0.log" Jan 29 09:27:57 crc kubenswrapper[4826]: I0129 09:27:57.142491 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0277bfc7-7497-454e-a4d0-efd51c1c50a4/openstack-network-exporter/0.log" Jan 29 09:27:57 crc kubenswrapper[4826]: I0129 09:27:57.475234 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0277bfc7-7497-454e-a4d0-efd51c1c50a4/ovsdbserver-nb/0.log" Jan 29 09:27:57 crc kubenswrapper[4826]: I0129 09:27:57.537660 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_26b5252c-28c5-44b9-a17f-6afb24926978/openstack-network-exporter/0.log" Jan 29 09:27:57 crc kubenswrapper[4826]: I0129 09:27:57.635078 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_26b5252c-28c5-44b9-a17f-6afb24926978/ovsdbserver-nb/0.log" Jan 29 09:27:57 crc kubenswrapper[4826]: I0129 09:27:57.754180 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_948956a4-cb1c-4cb0-bb88-a749d1ab5990/openstack-network-exporter/0.log" Jan 29 09:27:57 crc kubenswrapper[4826]: I0129 09:27:57.865680 4826 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-2_948956a4-cb1c-4cb0-bb88-a749d1ab5990/ovsdbserver-nb/0.log" Jan 29 09:27:58 crc kubenswrapper[4826]: I0129 09:27:58.040017 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_eac3a17b-f262-4a99-a017-bdb7c57da317/openstack-network-exporter/0.log" Jan 29 09:27:58 crc kubenswrapper[4826]: I0129 09:27:58.074883 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_eac3a17b-f262-4a99-a017-bdb7c57da317/ovsdbserver-sb/0.log" Jan 29 09:27:58 crc kubenswrapper[4826]: I0129 09:27:58.245222 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_f6a63471-debe-4dc8-8eeb-8e9b115aef32/openstack-network-exporter/0.log" Jan 29 09:27:58 crc kubenswrapper[4826]: I0129 09:27:58.409968 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_f6a63471-debe-4dc8-8eeb-8e9b115aef32/ovsdbserver-sb/0.log" Jan 29 09:27:58 crc kubenswrapper[4826]: I0129 09:27:58.493743 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_9bb5c8e6-d9f0-45e3-9023-665dc8b3b323/openstack-network-exporter/0.log" Jan 29 09:27:58 crc kubenswrapper[4826]: I0129 09:27:58.528080 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_9bb5c8e6-d9f0-45e3-9023-665dc8b3b323/ovsdbserver-sb/0.log" Jan 29 09:27:58 crc kubenswrapper[4826]: I0129 09:27:58.851874 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-856c6b9568-k76hl_e36928a2-9c24-4179-9607-07b9fcbd334e/placement-api/0.log" Jan 29 09:27:58 crc kubenswrapper[4826]: I0129 09:27:58.957014 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-856c6b9568-k76hl_e36928a2-9c24-4179-9607-07b9fcbd334e/placement-log/0.log" Jan 29 09:27:59 crc kubenswrapper[4826]: I0129 09:27:59.003566 4826 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_pre-adoption-validation-openstack-pre-adoption-openstack-cjqkcf_0ebda7d4-dd61-45b8-96e8-fe8dcf171aa9/pre-adoption-validation-openstack-pre-adoption-openstack-cell1/0.log" Jan 29 09:27:59 crc kubenswrapper[4826]: I0129 09:27:59.178938 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6faa35a9-8847-44ab-b28a-abbbd186ca7c/init-config-reloader/0.log" Jan 29 09:27:59 crc kubenswrapper[4826]: I0129 09:27:59.404590 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6faa35a9-8847-44ab-b28a-abbbd186ca7c/config-reloader/0.log" Jan 29 09:27:59 crc kubenswrapper[4826]: I0129 09:27:59.406988 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6faa35a9-8847-44ab-b28a-abbbd186ca7c/init-config-reloader/0.log" Jan 29 09:27:59 crc kubenswrapper[4826]: I0129 09:27:59.411002 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6faa35a9-8847-44ab-b28a-abbbd186ca7c/prometheus/0.log" Jan 29 09:27:59 crc kubenswrapper[4826]: I0129 09:27:59.443846 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_6faa35a9-8847-44ab-b28a-abbbd186ca7c/thanos-sidecar/0.log" Jan 29 09:27:59 crc kubenswrapper[4826]: I0129 09:27:59.636626 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_07711939-da14-4da1-8017-25d0f0719763/setup-container/0.log" Jan 29 09:27:59 crc kubenswrapper[4826]: I0129 09:27:59.859753 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_07711939-da14-4da1-8017-25d0f0719763/setup-container/0.log" Jan 29 09:27:59 crc kubenswrapper[4826]: I0129 09:27:59.866444 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_07711939-da14-4da1-8017-25d0f0719763/rabbitmq/0.log" Jan 29 
09:28:00 crc kubenswrapper[4826]: I0129 09:28:00.005347 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7/setup-container/0.log" Jan 29 09:28:00 crc kubenswrapper[4826]: I0129 09:28:00.245701 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7/setup-container/0.log" Jan 29 09:28:00 crc kubenswrapper[4826]: I0129 09:28:00.297167 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-openstack-openstack-cell1-p7s2n_b2da9029-df06-4171-8057-4a8e1908deb5/reboot-os-openstack-openstack-cell1/0.log" Jan 29 09:28:00 crc kubenswrapper[4826]: I0129 09:28:00.298478 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_cc75d2c6-cca2-4c0d-b9c6-e0b8d138d1f7/rabbitmq/0.log" Jan 29 09:28:00 crc kubenswrapper[4826]: I0129 09:28:00.502066 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-openstack-7h28l_de556388-7d07-4ee5-9e5f-d47c47f8437e/ssh-known-hosts-openstack/0.log" Jan 29 09:28:00 crc kubenswrapper[4826]: I0129 09:28:00.636831 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-openstack-openstack-cell1-rzd7v_e6557b04-094c-42ff-ab7f-0f42b40fc942/run-os-openstack-openstack-cell1/0.log" Jan 29 09:28:01 crc kubenswrapper[4826]: I0129 09:28:01.181599 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6979dcc99b-fdxq4_441b644f-ce75-4397-a90e-f1d02c40da19/proxy-server/0.log" Jan 29 09:28:01 crc kubenswrapper[4826]: I0129 09:28:01.196141 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-jpvpj_e7a467a7-e057-4efe-ad7a-5dfdae2ea6ae/swift-ring-rebalance/0.log" Jan 29 09:28:01 crc kubenswrapper[4826]: I0129 09:28:01.307562 4826 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-proxy-6979dcc99b-fdxq4_441b644f-ce75-4397-a90e-f1d02c40da19/proxy-httpd/0.log" Jan 29 09:28:01 crc kubenswrapper[4826]: I0129 09:28:01.520499 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-openstack-openstack-cell1-fh64b_b73120ed-4019-4aed-883b-3fd780326a7e/telemetry-openstack-openstack-cell1/0.log" Jan 29 09:28:01 crc kubenswrapper[4826]: I0129 09:28:01.620532 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_0e54d8ec-766b-4711-94e7-08fcfe836c67/tempest-tests-tempest-tests-runner/0.log" Jan 29 09:28:01 crc kubenswrapper[4826]: I0129 09:28:01.829428 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_d7841bd6-c003-4e44-85cd-1aaa6e3ed48d/test-operator-logs-container/0.log" Jan 29 09:28:01 crc kubenswrapper[4826]: I0129 09:28:01.978281 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tripleo-cleanup-tripleo-cleanup-openstack-cell1-jcfjp_c7145c04-cf5e-43b3-8934-0c6397272bb2/tripleo-cleanup-tripleo-cleanup-openstack-cell1/0.log" Jan 29 09:28:02 crc kubenswrapper[4826]: I0129 09:28:02.181801 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-openstack-openstack-cell1-wqs9k_7daf0d5e-a5ef-4a39-8288-4f186ba60572/validate-network-openstack-openstack-cell1/0.log" Jan 29 09:28:05 crc kubenswrapper[4826]: I0129 09:28:05.655801 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:28:05 crc kubenswrapper[4826]: I0129 09:28:05.656058 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" 
podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:28:17 crc kubenswrapper[4826]: I0129 09:28:17.648201 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_d01c5a41-f62d-442e-ab5f-69d0abbfa549/memcached/0.log" Jan 29 09:28:34 crc kubenswrapper[4826]: I0129 09:28:34.186519 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-wctjq_968a346c-43bd-4d96-b609-71b838d9d5b8/manager/0.log" Jan 29 09:28:34 crc kubenswrapper[4826]: I0129 09:28:34.408458 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-q7fkx_c34d58a7-7472-43b5-a067-f8d98a83714c/manager/0.log" Jan 29 09:28:34 crc kubenswrapper[4826]: I0129 09:28:34.452113 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924d5pkp6_41595216-47a0-4aa0-8485-894c44ca5b07/util/0.log" Jan 29 09:28:34 crc kubenswrapper[4826]: I0129 09:28:34.653722 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924d5pkp6_41595216-47a0-4aa0-8485-894c44ca5b07/pull/0.log" Jan 29 09:28:34 crc kubenswrapper[4826]: I0129 09:28:34.659247 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924d5pkp6_41595216-47a0-4aa0-8485-894c44ca5b07/util/0.log" Jan 29 09:28:34 crc kubenswrapper[4826]: I0129 09:28:34.663735 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924d5pkp6_41595216-47a0-4aa0-8485-894c44ca5b07/pull/0.log" Jan 29 09:28:34 crc kubenswrapper[4826]: I0129 
09:28:34.843520 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924d5pkp6_41595216-47a0-4aa0-8485-894c44ca5b07/util/0.log" Jan 29 09:28:34 crc kubenswrapper[4826]: I0129 09:28:34.858868 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924d5pkp6_41595216-47a0-4aa0-8485-894c44ca5b07/pull/0.log" Jan 29 09:28:34 crc kubenswrapper[4826]: I0129 09:28:34.896422 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_da00ab39b03ae6839a9374e1f0e5cf4d34a49b875de165714f9039924d5pkp6_41595216-47a0-4aa0-8485-894c44ca5b07/extract/0.log" Jan 29 09:28:35 crc kubenswrapper[4826]: I0129 09:28:35.048948 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-rq6tp_b9652583-64a6-4726-986b-3d61a52ff7a9/manager/0.log" Jan 29 09:28:35 crc kubenswrapper[4826]: I0129 09:28:35.233732 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-r5zgz_5a0e4f93-9116-4e18-b4ad-5e6b15571199/manager/0.log" Jan 29 09:28:35 crc kubenswrapper[4826]: I0129 09:28:35.298017 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-dqb4n_2d8d4014-bbb6-40e0-b2bb-235adaae50a3/manager/0.log" Jan 29 09:28:35 crc kubenswrapper[4826]: I0129 09:28:35.473751 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-x7x52_0943ef83-b318-4a15-baf0-858ffb1eec9f/manager/0.log" Jan 29 09:28:35 crc kubenswrapper[4826]: I0129 09:28:35.655813 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:28:35 crc kubenswrapper[4826]: I0129 09:28:35.655866 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:28:35 crc kubenswrapper[4826]: I0129 09:28:35.655913 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" Jan 29 09:28:35 crc kubenswrapper[4826]: I0129 09:28:35.656777 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f4c5bf56ecf4d49f31d7cceee4ddf4d08b3ec9af55a5f1692d3a3420750d3329"} pod="openshift-machine-config-operator/machine-config-daemon-llzmh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 09:28:35 crc kubenswrapper[4826]: I0129 09:28:35.656833 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" containerID="cri-o://f4c5bf56ecf4d49f31d7cceee4ddf4d08b3ec9af55a5f1692d3a3420750d3329" gracePeriod=600 Jan 29 09:28:35 crc kubenswrapper[4826]: I0129 09:28:35.702341 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-9j5d5_784b3ddb-6085-405e-a492-78fa2d30f902/manager/0.log" Jan 29 09:28:36 crc kubenswrapper[4826]: I0129 09:28:36.024932 4826 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-dbcc7_a90b0148-229f-46b4-ac43-2f2d8a89c167/manager/0.log" Jan 29 09:28:36 crc kubenswrapper[4826]: I0129 09:28:36.072138 4826 generic.go:334] "Generic (PLEG): container finished" podID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerID="f4c5bf56ecf4d49f31d7cceee4ddf4d08b3ec9af55a5f1692d3a3420750d3329" exitCode=0 Jan 29 09:28:36 crc kubenswrapper[4826]: I0129 09:28:36.072428 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerDied","Data":"f4c5bf56ecf4d49f31d7cceee4ddf4d08b3ec9af55a5f1692d3a3420750d3329"} Jan 29 09:28:36 crc kubenswrapper[4826]: I0129 09:28:36.072692 4826 scope.go:117] "RemoveContainer" containerID="b1925ae332138348b1564740311cdf70107a953d6f523c1ed54020f639f51a42" Jan 29 09:28:36 crc kubenswrapper[4826]: I0129 09:28:36.136972 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-j4bnt_790d8543-7995-41f5-a1d9-86424c85102b/manager/0.log" Jan 29 09:28:36 crc kubenswrapper[4826]: I0129 09:28:36.221064 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-8t8z8_6a1bcc88-899b-4ada-9560-3e388b5d1a82/manager/0.log" Jan 29 09:28:36 crc kubenswrapper[4826]: I0129 09:28:36.291897 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-qxvv5_e6027062-6010-4269-9798-6d31d88831ca/manager/0.log" Jan 29 09:28:36 crc kubenswrapper[4826]: I0129 09:28:36.507113 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-wdlx2_f1f53c69-e17d-44df-839f-9f354d1b5b24/manager/0.log" Jan 29 09:28:36 crc kubenswrapper[4826]: I0129 09:28:36.770840 4826 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-r8fr7_f5d88392-3443-4b78-a27c-7594b71de46d/manager/0.log" Jan 29 09:28:36 crc kubenswrapper[4826]: I0129 09:28:36.828139 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-jrkdp_fa039621-ed69-4bc7-8d1c-233b03283aa0/manager/0.log" Jan 29 09:28:36 crc kubenswrapper[4826]: I0129 09:28:36.892927 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-d5d667db8-hsjcj_34844eed-1b85-49b6-bbcb-c4c5ccbcf7f6/manager/0.log" Jan 29 09:28:37 crc kubenswrapper[4826]: I0129 09:28:37.084142 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerStarted","Data":"978454da693700a960438b7b180d36be5cd81248b67377a4a1dd542f057d2371"} Jan 29 09:28:37 crc kubenswrapper[4826]: I0129 09:28:37.091261 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5c4cd4c8c8-zh2gm_4248339a-3124-4593-8364-7e09ae20bd06/operator/0.log" Jan 29 09:28:37 crc kubenswrapper[4826]: I0129 09:28:37.673502 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-6dzlk_438c09e3-7fe5-47a9-b339-622b798a6cee/registry-server/0.log" Jan 29 09:28:37 crc kubenswrapper[4826]: I0129 09:28:37.707767 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-p9gf8_3a41b9d7-4fd6-4f71-af8f-06751f6cb0dd/manager/0.log" Jan 29 09:28:37 crc kubenswrapper[4826]: I0129 09:28:37.937397 4826 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-9vkmm_2136722e-10e4-4fdd-8c1c-513be2cda722/manager/0.log" Jan 29 09:28:37 crc kubenswrapper[4826]: I0129 09:28:37.967867 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-q6pbm_6f966d5b-f4c8-4921-9161-096ca391b81f/operator/0.log" Jan 29 09:28:38 crc kubenswrapper[4826]: I0129 09:28:38.272585 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-hkgbs_5723d99a-cfa2-4f46-9e29-a7f075e7d5fa/manager/0.log" Jan 29 09:28:38 crc kubenswrapper[4826]: I0129 09:28:38.577887 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-g5v64_3a490a0b-5880-4b3a-847b-a5ffbdd2329b/manager/0.log" Jan 29 09:28:38 crc kubenswrapper[4826]: I0129 09:28:38.678951 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-xglxb_c81e6ebc-17e4-4b52-8e78-d7fc20195c4b/manager/0.log" Jan 29 09:28:39 crc kubenswrapper[4826]: I0129 09:28:39.325402 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-kk4wp_ca001b72-69f7-488e-8410-0f046fb810bb/manager/0.log" Jan 29 09:28:39 crc kubenswrapper[4826]: I0129 09:28:39.496836 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7b54f464f6-dfl95_e9a11c2b-0242-4fe7-8bbf-cf536de5f0b5/manager/0.log" Jan 29 09:28:58 crc kubenswrapper[4826]: I0129 09:28:58.941930 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-xnm5r_7da7e397-19dd-4eaa-86bc-44e555785978/control-plane-machine-set-operator/0.log" Jan 29 09:28:59 crc kubenswrapper[4826]: I0129 09:28:59.097092 
4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-djctr_144d8289-078b-45cd-9539-901b6c72a980/machine-api-operator/0.log" Jan 29 09:28:59 crc kubenswrapper[4826]: I0129 09:28:59.099342 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-djctr_144d8289-078b-45cd-9539-901b6c72a980/kube-rbac-proxy/0.log" Jan 29 09:29:02 crc kubenswrapper[4826]: I0129 09:29:02.594790 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-h8rdh"] Jan 29 09:29:02 crc kubenswrapper[4826]: E0129 09:29:02.595724 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3abf3be-6be1-40b4-a7dc-eacbc76b6f8c" containerName="container-00" Jan 29 09:29:02 crc kubenswrapper[4826]: I0129 09:29:02.595736 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3abf3be-6be1-40b4-a7dc-eacbc76b6f8c" containerName="container-00" Jan 29 09:29:02 crc kubenswrapper[4826]: I0129 09:29:02.595942 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3abf3be-6be1-40b4-a7dc-eacbc76b6f8c" containerName="container-00" Jan 29 09:29:02 crc kubenswrapper[4826]: I0129 09:29:02.600097 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h8rdh" Jan 29 09:29:02 crc kubenswrapper[4826]: I0129 09:29:02.615819 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h8rdh"] Jan 29 09:29:02 crc kubenswrapper[4826]: I0129 09:29:02.723262 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56f1ec6f-83a3-4daa-877f-2661795188cf-catalog-content\") pod \"redhat-operators-h8rdh\" (UID: \"56f1ec6f-83a3-4daa-877f-2661795188cf\") " pod="openshift-marketplace/redhat-operators-h8rdh" Jan 29 09:29:02 crc kubenswrapper[4826]: I0129 09:29:02.723688 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56f1ec6f-83a3-4daa-877f-2661795188cf-utilities\") pod \"redhat-operators-h8rdh\" (UID: \"56f1ec6f-83a3-4daa-877f-2661795188cf\") " pod="openshift-marketplace/redhat-operators-h8rdh" Jan 29 09:29:02 crc kubenswrapper[4826]: I0129 09:29:02.723853 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc28w\" (UniqueName: \"kubernetes.io/projected/56f1ec6f-83a3-4daa-877f-2661795188cf-kube-api-access-mc28w\") pod \"redhat-operators-h8rdh\" (UID: \"56f1ec6f-83a3-4daa-877f-2661795188cf\") " pod="openshift-marketplace/redhat-operators-h8rdh" Jan 29 09:29:02 crc kubenswrapper[4826]: I0129 09:29:02.825121 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc28w\" (UniqueName: \"kubernetes.io/projected/56f1ec6f-83a3-4daa-877f-2661795188cf-kube-api-access-mc28w\") pod \"redhat-operators-h8rdh\" (UID: \"56f1ec6f-83a3-4daa-877f-2661795188cf\") " pod="openshift-marketplace/redhat-operators-h8rdh" Jan 29 09:29:02 crc kubenswrapper[4826]: I0129 09:29:02.825443 4826 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56f1ec6f-83a3-4daa-877f-2661795188cf-catalog-content\") pod \"redhat-operators-h8rdh\" (UID: \"56f1ec6f-83a3-4daa-877f-2661795188cf\") " pod="openshift-marketplace/redhat-operators-h8rdh" Jan 29 09:29:02 crc kubenswrapper[4826]: I0129 09:29:02.825652 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56f1ec6f-83a3-4daa-877f-2661795188cf-utilities\") pod \"redhat-operators-h8rdh\" (UID: \"56f1ec6f-83a3-4daa-877f-2661795188cf\") " pod="openshift-marketplace/redhat-operators-h8rdh" Jan 29 09:29:02 crc kubenswrapper[4826]: I0129 09:29:02.825955 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56f1ec6f-83a3-4daa-877f-2661795188cf-utilities\") pod \"redhat-operators-h8rdh\" (UID: \"56f1ec6f-83a3-4daa-877f-2661795188cf\") " pod="openshift-marketplace/redhat-operators-h8rdh" Jan 29 09:29:02 crc kubenswrapper[4826]: I0129 09:29:02.825981 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56f1ec6f-83a3-4daa-877f-2661795188cf-catalog-content\") pod \"redhat-operators-h8rdh\" (UID: \"56f1ec6f-83a3-4daa-877f-2661795188cf\") " pod="openshift-marketplace/redhat-operators-h8rdh" Jan 29 09:29:02 crc kubenswrapper[4826]: I0129 09:29:02.844074 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc28w\" (UniqueName: \"kubernetes.io/projected/56f1ec6f-83a3-4daa-877f-2661795188cf-kube-api-access-mc28w\") pod \"redhat-operators-h8rdh\" (UID: \"56f1ec6f-83a3-4daa-877f-2661795188cf\") " pod="openshift-marketplace/redhat-operators-h8rdh" Jan 29 09:29:02 crc kubenswrapper[4826]: I0129 09:29:02.922146 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h8rdh" Jan 29 09:29:03 crc kubenswrapper[4826]: I0129 09:29:03.446386 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h8rdh"] Jan 29 09:29:04 crc kubenswrapper[4826]: I0129 09:29:04.326969 4826 generic.go:334] "Generic (PLEG): container finished" podID="56f1ec6f-83a3-4daa-877f-2661795188cf" containerID="56262965726b37033e40758dffb9f00c031d5535b74c6fdf0b9cd91e3e80f461" exitCode=0 Jan 29 09:29:04 crc kubenswrapper[4826]: I0129 09:29:04.327135 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h8rdh" event={"ID":"56f1ec6f-83a3-4daa-877f-2661795188cf","Type":"ContainerDied","Data":"56262965726b37033e40758dffb9f00c031d5535b74c6fdf0b9cd91e3e80f461"} Jan 29 09:29:04 crc kubenswrapper[4826]: I0129 09:29:04.327274 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h8rdh" event={"ID":"56f1ec6f-83a3-4daa-877f-2661795188cf","Type":"ContainerStarted","Data":"724093570a3c61cdd521797aff2994419ee676aef3941feb394b22edafc472ca"} Jan 29 09:29:05 crc kubenswrapper[4826]: I0129 09:29:05.338203 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h8rdh" event={"ID":"56f1ec6f-83a3-4daa-877f-2661795188cf","Type":"ContainerStarted","Data":"81867b758b5b3ec5618ac79f0c6981b7c37bb1e0b4fd27c902c73eb7b6efe7b0"} Jan 29 09:29:06 crc kubenswrapper[4826]: I0129 09:29:06.348338 4826 generic.go:334] "Generic (PLEG): container finished" podID="56f1ec6f-83a3-4daa-877f-2661795188cf" containerID="81867b758b5b3ec5618ac79f0c6981b7c37bb1e0b4fd27c902c73eb7b6efe7b0" exitCode=0 Jan 29 09:29:06 crc kubenswrapper[4826]: I0129 09:29:06.348420 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h8rdh" 
event={"ID":"56f1ec6f-83a3-4daa-877f-2661795188cf","Type":"ContainerDied","Data":"81867b758b5b3ec5618ac79f0c6981b7c37bb1e0b4fd27c902c73eb7b6efe7b0"} Jan 29 09:29:07 crc kubenswrapper[4826]: I0129 09:29:07.358575 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h8rdh" event={"ID":"56f1ec6f-83a3-4daa-877f-2661795188cf","Type":"ContainerStarted","Data":"129d2942d1a83b002e9431899478b72c9371316016a153b6ef15c2399dae88f6"} Jan 29 09:29:07 crc kubenswrapper[4826]: I0129 09:29:07.380860 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-h8rdh" podStartSLOduration=2.905203364 podStartE2EDuration="5.380836633s" podCreationTimestamp="2026-01-29 09:29:02 +0000 UTC" firstStartedPulling="2026-01-29 09:29:04.3288868 +0000 UTC m=+9928.190679879" lastFinishedPulling="2026-01-29 09:29:06.804520079 +0000 UTC m=+9930.666313148" observedRunningTime="2026-01-29 09:29:07.377432742 +0000 UTC m=+9931.239225831" watchObservedRunningTime="2026-01-29 09:29:07.380836633 +0000 UTC m=+9931.242629702" Jan 29 09:29:12 crc kubenswrapper[4826]: I0129 09:29:12.923054 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h8rdh" Jan 29 09:29:12 crc kubenswrapper[4826]: I0129 09:29:12.923598 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-h8rdh" Jan 29 09:29:12 crc kubenswrapper[4826]: I0129 09:29:12.991279 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-h8rdh" Jan 29 09:29:13 crc kubenswrapper[4826]: I0129 09:29:13.460129 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h8rdh" Jan 29 09:29:13 crc kubenswrapper[4826]: I0129 09:29:13.517982 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-h8rdh"] Jan 29 09:29:13 crc kubenswrapper[4826]: I0129 09:29:13.688750 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-tk5nw_f1a39541-4e51-4239-ae9e-e58ef3ff63b9/cert-manager-controller/0.log" Jan 29 09:29:13 crc kubenswrapper[4826]: I0129 09:29:13.819244 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-nqvpr_d57ce65b-8ed9-4f74-b907-978d1ef8911c/cert-manager-cainjector/0.log" Jan 29 09:29:13 crc kubenswrapper[4826]: I0129 09:29:13.894043 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-tnjwc_38c02f33-7056-4cf2-834d-765936d1be36/cert-manager-webhook/0.log" Jan 29 09:29:15 crc kubenswrapper[4826]: I0129 09:29:15.432509 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-h8rdh" podUID="56f1ec6f-83a3-4daa-877f-2661795188cf" containerName="registry-server" containerID="cri-o://129d2942d1a83b002e9431899478b72c9371316016a153b6ef15c2399dae88f6" gracePeriod=2 Jan 29 09:29:16 crc kubenswrapper[4826]: I0129 09:29:16.445813 4826 generic.go:334] "Generic (PLEG): container finished" podID="56f1ec6f-83a3-4daa-877f-2661795188cf" containerID="129d2942d1a83b002e9431899478b72c9371316016a153b6ef15c2399dae88f6" exitCode=0 Jan 29 09:29:16 crc kubenswrapper[4826]: I0129 09:29:16.445879 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h8rdh" event={"ID":"56f1ec6f-83a3-4daa-877f-2661795188cf","Type":"ContainerDied","Data":"129d2942d1a83b002e9431899478b72c9371316016a153b6ef15c2399dae88f6"} Jan 29 09:29:16 crc kubenswrapper[4826]: I0129 09:29:16.446572 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h8rdh" 
event={"ID":"56f1ec6f-83a3-4daa-877f-2661795188cf","Type":"ContainerDied","Data":"724093570a3c61cdd521797aff2994419ee676aef3941feb394b22edafc472ca"} Jan 29 09:29:16 crc kubenswrapper[4826]: I0129 09:29:16.446588 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="724093570a3c61cdd521797aff2994419ee676aef3941feb394b22edafc472ca" Jan 29 09:29:16 crc kubenswrapper[4826]: I0129 09:29:16.448669 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h8rdh" Jan 29 09:29:16 crc kubenswrapper[4826]: I0129 09:29:16.517672 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56f1ec6f-83a3-4daa-877f-2661795188cf-catalog-content\") pod \"56f1ec6f-83a3-4daa-877f-2661795188cf\" (UID: \"56f1ec6f-83a3-4daa-877f-2661795188cf\") " Jan 29 09:29:16 crc kubenswrapper[4826]: I0129 09:29:16.517839 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mc28w\" (UniqueName: \"kubernetes.io/projected/56f1ec6f-83a3-4daa-877f-2661795188cf-kube-api-access-mc28w\") pod \"56f1ec6f-83a3-4daa-877f-2661795188cf\" (UID: \"56f1ec6f-83a3-4daa-877f-2661795188cf\") " Jan 29 09:29:16 crc kubenswrapper[4826]: I0129 09:29:16.517907 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56f1ec6f-83a3-4daa-877f-2661795188cf-utilities\") pod \"56f1ec6f-83a3-4daa-877f-2661795188cf\" (UID: \"56f1ec6f-83a3-4daa-877f-2661795188cf\") " Jan 29 09:29:16 crc kubenswrapper[4826]: I0129 09:29:16.519082 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56f1ec6f-83a3-4daa-877f-2661795188cf-utilities" (OuterVolumeSpecName: "utilities") pod "56f1ec6f-83a3-4daa-877f-2661795188cf" (UID: "56f1ec6f-83a3-4daa-877f-2661795188cf"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:29:16 crc kubenswrapper[4826]: I0129 09:29:16.532707 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56f1ec6f-83a3-4daa-877f-2661795188cf-kube-api-access-mc28w" (OuterVolumeSpecName: "kube-api-access-mc28w") pod "56f1ec6f-83a3-4daa-877f-2661795188cf" (UID: "56f1ec6f-83a3-4daa-877f-2661795188cf"). InnerVolumeSpecName "kube-api-access-mc28w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:29:16 crc kubenswrapper[4826]: I0129 09:29:16.620858 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mc28w\" (UniqueName: \"kubernetes.io/projected/56f1ec6f-83a3-4daa-877f-2661795188cf-kube-api-access-mc28w\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:16 crc kubenswrapper[4826]: I0129 09:29:16.620894 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56f1ec6f-83a3-4daa-877f-2661795188cf-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:16 crc kubenswrapper[4826]: I0129 09:29:16.631738 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56f1ec6f-83a3-4daa-877f-2661795188cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "56f1ec6f-83a3-4daa-877f-2661795188cf" (UID: "56f1ec6f-83a3-4daa-877f-2661795188cf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:29:16 crc kubenswrapper[4826]: I0129 09:29:16.723272 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56f1ec6f-83a3-4daa-877f-2661795188cf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:29:17 crc kubenswrapper[4826]: I0129 09:29:17.454142 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h8rdh" Jan 29 09:29:17 crc kubenswrapper[4826]: I0129 09:29:17.476699 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h8rdh"] Jan 29 09:29:17 crc kubenswrapper[4826]: I0129 09:29:17.491909 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-h8rdh"] Jan 29 09:29:18 crc kubenswrapper[4826]: I0129 09:29:18.820410 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56f1ec6f-83a3-4daa-877f-2661795188cf" path="/var/lib/kubelet/pods/56f1ec6f-83a3-4daa-877f-2661795188cf/volumes" Jan 29 09:29:26 crc kubenswrapper[4826]: I0129 09:29:26.115610 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-cmm6v_03cc87ed-54ec-46aa-8ec1-457a68eeaf8a/nmstate-console-plugin/0.log" Jan 29 09:29:26 crc kubenswrapper[4826]: I0129 09:29:26.302678 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-5dnl7_7acf9634-de8f-42e2-b40b-7b7fb4f354c9/nmstate-handler/0.log" Jan 29 09:29:26 crc kubenswrapper[4826]: I0129 09:29:26.406712 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-pbp9h_4297d4dd-0999-4ab5-87a0-f0190de20a82/kube-rbac-proxy/0.log" Jan 29 09:29:26 crc kubenswrapper[4826]: I0129 09:29:26.486074 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-pbp9h_4297d4dd-0999-4ab5-87a0-f0190de20a82/nmstate-metrics/0.log" Jan 29 09:29:26 crc kubenswrapper[4826]: I0129 09:29:26.596357 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-kvqn2_a80a15e8-5d05-41b2-b558-b0e73d1114c9/nmstate-operator/0.log" Jan 29 09:29:26 crc kubenswrapper[4826]: I0129 09:29:26.702287 4826 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-tb8l6_57edd5c1-a09e-4c1d-a6dc-ae07d9e5ea8d/nmstate-webhook/0.log" Jan 29 09:29:41 crc kubenswrapper[4826]: I0129 09:29:41.436882 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-k97cs_5fc63d8d-d2f6-48da-95e1-47ce4038127e/prometheus-operator/0.log" Jan 29 09:29:41 crc kubenswrapper[4826]: I0129 09:29:41.649934 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56696b5f9-8r95k_a45663bb-b6c8-40cd-8f5c-9e3c08b2480a/prometheus-operator-admission-webhook/0.log" Jan 29 09:29:41 crc kubenswrapper[4826]: I0129 09:29:41.686517 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56696b5f9-vkvkh_15a35e48-2620-4093-a986-0d7b1ecde3c5/prometheus-operator-admission-webhook/0.log" Jan 29 09:29:41 crc kubenswrapper[4826]: I0129 09:29:41.856017 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-7r5fl_04f8a571-0326-413e-92ca-d9d706b91187/operator/0.log" Jan 29 09:29:41 crc kubenswrapper[4826]: I0129 09:29:41.895209 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-snp7s_c4081b90-9d2f-4aa8-b1f9-f16819f5f36b/perses-operator/0.log" Jan 29 09:29:55 crc kubenswrapper[4826]: I0129 09:29:55.943942 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-s2gzv_b4f5f505-0bfd-4f06-95bc-1402bcbfd09a/kube-rbac-proxy/0.log" Jan 29 09:29:56 crc kubenswrapper[4826]: I0129 09:29:56.221196 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-m8t69_2eda5181-f4b2-4a86-bf3c-8ba839c80d00/frr-k8s-webhook-server/0.log" Jan 29 09:29:56 crc kubenswrapper[4826]: I0129 09:29:56.326901 4826 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-s2gzv_b4f5f505-0bfd-4f06-95bc-1402bcbfd09a/controller/0.log" Jan 29 09:29:56 crc kubenswrapper[4826]: I0129 09:29:56.430816 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zsbwk_80682433-6c24-45a9-a54b-8db2233f2870/cp-frr-files/0.log" Jan 29 09:29:56 crc kubenswrapper[4826]: I0129 09:29:56.562101 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zsbwk_80682433-6c24-45a9-a54b-8db2233f2870/cp-metrics/0.log" Jan 29 09:29:56 crc kubenswrapper[4826]: I0129 09:29:56.569342 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zsbwk_80682433-6c24-45a9-a54b-8db2233f2870/cp-reloader/0.log" Jan 29 09:29:56 crc kubenswrapper[4826]: I0129 09:29:56.577462 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zsbwk_80682433-6c24-45a9-a54b-8db2233f2870/cp-frr-files/0.log" Jan 29 09:29:56 crc kubenswrapper[4826]: I0129 09:29:56.617547 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zsbwk_80682433-6c24-45a9-a54b-8db2233f2870/cp-reloader/0.log" Jan 29 09:29:56 crc kubenswrapper[4826]: I0129 09:29:56.798129 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zsbwk_80682433-6c24-45a9-a54b-8db2233f2870/cp-frr-files/0.log" Jan 29 09:29:56 crc kubenswrapper[4826]: I0129 09:29:56.830406 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zsbwk_80682433-6c24-45a9-a54b-8db2233f2870/cp-reloader/0.log" Jan 29 09:29:56 crc kubenswrapper[4826]: I0129 09:29:56.868360 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zsbwk_80682433-6c24-45a9-a54b-8db2233f2870/cp-metrics/0.log" Jan 29 09:29:56 crc kubenswrapper[4826]: I0129 09:29:56.868771 4826 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-zsbwk_80682433-6c24-45a9-a54b-8db2233f2870/cp-metrics/0.log" Jan 29 09:29:56 crc kubenswrapper[4826]: I0129 09:29:56.993163 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zsbwk_80682433-6c24-45a9-a54b-8db2233f2870/cp-frr-files/0.log" Jan 29 09:29:57 crc kubenswrapper[4826]: I0129 09:29:57.004873 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zsbwk_80682433-6c24-45a9-a54b-8db2233f2870/cp-reloader/0.log" Jan 29 09:29:57 crc kubenswrapper[4826]: I0129 09:29:57.049974 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zsbwk_80682433-6c24-45a9-a54b-8db2233f2870/cp-metrics/0.log" Jan 29 09:29:57 crc kubenswrapper[4826]: I0129 09:29:57.077018 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zsbwk_80682433-6c24-45a9-a54b-8db2233f2870/controller/0.log" Jan 29 09:29:57 crc kubenswrapper[4826]: I0129 09:29:57.199211 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zsbwk_80682433-6c24-45a9-a54b-8db2233f2870/frr-metrics/0.log" Jan 29 09:29:57 crc kubenswrapper[4826]: I0129 09:29:57.251417 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zsbwk_80682433-6c24-45a9-a54b-8db2233f2870/kube-rbac-proxy/0.log" Jan 29 09:29:57 crc kubenswrapper[4826]: I0129 09:29:57.301085 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zsbwk_80682433-6c24-45a9-a54b-8db2233f2870/kube-rbac-proxy-frr/0.log" Jan 29 09:29:57 crc kubenswrapper[4826]: I0129 09:29:57.460422 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zsbwk_80682433-6c24-45a9-a54b-8db2233f2870/reloader/0.log" Jan 29 09:29:57 crc kubenswrapper[4826]: I0129 09:29:57.573407 4826 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5bdcdf948b-6jpmp_06706f9d-60f4-4e7b-8038-d7fcfc82999a/manager/0.log" Jan 29 09:29:57 crc kubenswrapper[4826]: I0129 09:29:57.737835 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-599f7c78cf-nrhfg_c52567fd-c599-41cf-89f2-9331a467c3bd/webhook-server/0.log" Jan 29 09:29:57 crc kubenswrapper[4826]: I0129 09:29:57.861733 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-h89x7_3405170d-fc3f-46a7-a936-c811885cc266/kube-rbac-proxy/0.log" Jan 29 09:29:58 crc kubenswrapper[4826]: I0129 09:29:58.847673 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-h89x7_3405170d-fc3f-46a7-a936-c811885cc266/speaker/0.log" Jan 29 09:30:00 crc kubenswrapper[4826]: I0129 09:30:00.179436 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494650-vp4gw"] Jan 29 09:30:00 crc kubenswrapper[4826]: E0129 09:30:00.179940 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56f1ec6f-83a3-4daa-877f-2661795188cf" containerName="extract-content" Jan 29 09:30:00 crc kubenswrapper[4826]: I0129 09:30:00.179959 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="56f1ec6f-83a3-4daa-877f-2661795188cf" containerName="extract-content" Jan 29 09:30:00 crc kubenswrapper[4826]: E0129 09:30:00.179976 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56f1ec6f-83a3-4daa-877f-2661795188cf" containerName="registry-server" Jan 29 09:30:00 crc kubenswrapper[4826]: I0129 09:30:00.179983 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="56f1ec6f-83a3-4daa-877f-2661795188cf" containerName="registry-server" Jan 29 09:30:00 crc kubenswrapper[4826]: E0129 09:30:00.180024 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56f1ec6f-83a3-4daa-877f-2661795188cf" 
containerName="extract-utilities" Jan 29 09:30:00 crc kubenswrapper[4826]: I0129 09:30:00.180034 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="56f1ec6f-83a3-4daa-877f-2661795188cf" containerName="extract-utilities" Jan 29 09:30:00 crc kubenswrapper[4826]: I0129 09:30:00.180258 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="56f1ec6f-83a3-4daa-877f-2661795188cf" containerName="registry-server" Jan 29 09:30:00 crc kubenswrapper[4826]: I0129 09:30:00.181321 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-vp4gw" Jan 29 09:30:00 crc kubenswrapper[4826]: I0129 09:30:00.184131 4826 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 09:30:00 crc kubenswrapper[4826]: I0129 09:30:00.184579 4826 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 09:30:00 crc kubenswrapper[4826]: I0129 09:30:00.198456 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494650-vp4gw"] Jan 29 09:30:00 crc kubenswrapper[4826]: I0129 09:30:00.258154 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s68x\" (UniqueName: \"kubernetes.io/projected/d5bfe0bf-bbe6-439c-88cb-54021e2a3a07-kube-api-access-8s68x\") pod \"collect-profiles-29494650-vp4gw\" (UID: \"d5bfe0bf-bbe6-439c-88cb-54021e2a3a07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-vp4gw" Jan 29 09:30:00 crc kubenswrapper[4826]: I0129 09:30:00.258637 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d5bfe0bf-bbe6-439c-88cb-54021e2a3a07-config-volume\") pod 
\"collect-profiles-29494650-vp4gw\" (UID: \"d5bfe0bf-bbe6-439c-88cb-54021e2a3a07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-vp4gw" Jan 29 09:30:00 crc kubenswrapper[4826]: I0129 09:30:00.258756 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d5bfe0bf-bbe6-439c-88cb-54021e2a3a07-secret-volume\") pod \"collect-profiles-29494650-vp4gw\" (UID: \"d5bfe0bf-bbe6-439c-88cb-54021e2a3a07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-vp4gw" Jan 29 09:30:00 crc kubenswrapper[4826]: I0129 09:30:00.360479 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s68x\" (UniqueName: \"kubernetes.io/projected/d5bfe0bf-bbe6-439c-88cb-54021e2a3a07-kube-api-access-8s68x\") pod \"collect-profiles-29494650-vp4gw\" (UID: \"d5bfe0bf-bbe6-439c-88cb-54021e2a3a07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-vp4gw" Jan 29 09:30:00 crc kubenswrapper[4826]: I0129 09:30:00.361088 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d5bfe0bf-bbe6-439c-88cb-54021e2a3a07-config-volume\") pod \"collect-profiles-29494650-vp4gw\" (UID: \"d5bfe0bf-bbe6-439c-88cb-54021e2a3a07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-vp4gw" Jan 29 09:30:00 crc kubenswrapper[4826]: I0129 09:30:00.361145 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d5bfe0bf-bbe6-439c-88cb-54021e2a3a07-secret-volume\") pod \"collect-profiles-29494650-vp4gw\" (UID: \"d5bfe0bf-bbe6-439c-88cb-54021e2a3a07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-vp4gw" Jan 29 09:30:00 crc kubenswrapper[4826]: I0129 09:30:00.361924 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d5bfe0bf-bbe6-439c-88cb-54021e2a3a07-config-volume\") pod \"collect-profiles-29494650-vp4gw\" (UID: \"d5bfe0bf-bbe6-439c-88cb-54021e2a3a07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-vp4gw" Jan 29 09:30:00 crc kubenswrapper[4826]: I0129 09:30:00.379725 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d5bfe0bf-bbe6-439c-88cb-54021e2a3a07-secret-volume\") pod \"collect-profiles-29494650-vp4gw\" (UID: \"d5bfe0bf-bbe6-439c-88cb-54021e2a3a07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-vp4gw" Jan 29 09:30:00 crc kubenswrapper[4826]: I0129 09:30:00.386824 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s68x\" (UniqueName: \"kubernetes.io/projected/d5bfe0bf-bbe6-439c-88cb-54021e2a3a07-kube-api-access-8s68x\") pod \"collect-profiles-29494650-vp4gw\" (UID: \"d5bfe0bf-bbe6-439c-88cb-54021e2a3a07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-vp4gw" Jan 29 09:30:00 crc kubenswrapper[4826]: I0129 09:30:00.510624 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-vp4gw" Jan 29 09:30:00 crc kubenswrapper[4826]: I0129 09:30:00.826699 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zsbwk_80682433-6c24-45a9-a54b-8db2233f2870/frr/0.log" Jan 29 09:30:01 crc kubenswrapper[4826]: I0129 09:30:01.053471 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494650-vp4gw"] Jan 29 09:30:01 crc kubenswrapper[4826]: I0129 09:30:01.869690 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-vp4gw" event={"ID":"d5bfe0bf-bbe6-439c-88cb-54021e2a3a07","Type":"ContainerStarted","Data":"0bac170b82806ed79ca749d0b46925acc3bcd064c4772c8842fbc032ccd335bc"} Jan 29 09:30:01 crc kubenswrapper[4826]: I0129 09:30:01.870007 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-vp4gw" event={"ID":"d5bfe0bf-bbe6-439c-88cb-54021e2a3a07","Type":"ContainerStarted","Data":"38a34c0b1f9bd09d8549ec4c965b1c50d6e8801e4258fab2d9bc4c41b469a127"} Jan 29 09:30:01 crc kubenswrapper[4826]: I0129 09:30:01.900762 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-vp4gw" podStartSLOduration=1.900744237 podStartE2EDuration="1.900744237s" podCreationTimestamp="2026-01-29 09:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 09:30:01.887326678 +0000 UTC m=+9985.749119767" watchObservedRunningTime="2026-01-29 09:30:01.900744237 +0000 UTC m=+9985.762537306" Jan 29 09:30:02 crc kubenswrapper[4826]: I0129 09:30:02.880532 4826 generic.go:334] "Generic (PLEG): container finished" podID="d5bfe0bf-bbe6-439c-88cb-54021e2a3a07" 
containerID="0bac170b82806ed79ca749d0b46925acc3bcd064c4772c8842fbc032ccd335bc" exitCode=0 Jan 29 09:30:02 crc kubenswrapper[4826]: I0129 09:30:02.880636 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-vp4gw" event={"ID":"d5bfe0bf-bbe6-439c-88cb-54021e2a3a07","Type":"ContainerDied","Data":"0bac170b82806ed79ca749d0b46925acc3bcd064c4772c8842fbc032ccd335bc"} Jan 29 09:30:04 crc kubenswrapper[4826]: I0129 09:30:04.233875 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-vp4gw" Jan 29 09:30:04 crc kubenswrapper[4826]: I0129 09:30:04.370919 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d5bfe0bf-bbe6-439c-88cb-54021e2a3a07-secret-volume\") pod \"d5bfe0bf-bbe6-439c-88cb-54021e2a3a07\" (UID: \"d5bfe0bf-bbe6-439c-88cb-54021e2a3a07\") " Jan 29 09:30:04 crc kubenswrapper[4826]: I0129 09:30:04.371109 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8s68x\" (UniqueName: \"kubernetes.io/projected/d5bfe0bf-bbe6-439c-88cb-54021e2a3a07-kube-api-access-8s68x\") pod \"d5bfe0bf-bbe6-439c-88cb-54021e2a3a07\" (UID: \"d5bfe0bf-bbe6-439c-88cb-54021e2a3a07\") " Jan 29 09:30:04 crc kubenswrapper[4826]: I0129 09:30:04.371136 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d5bfe0bf-bbe6-439c-88cb-54021e2a3a07-config-volume\") pod \"d5bfe0bf-bbe6-439c-88cb-54021e2a3a07\" (UID: \"d5bfe0bf-bbe6-439c-88cb-54021e2a3a07\") " Jan 29 09:30:04 crc kubenswrapper[4826]: I0129 09:30:04.371788 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5bfe0bf-bbe6-439c-88cb-54021e2a3a07-config-volume" (OuterVolumeSpecName: "config-volume") pod 
"d5bfe0bf-bbe6-439c-88cb-54021e2a3a07" (UID: "d5bfe0bf-bbe6-439c-88cb-54021e2a3a07"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 09:30:04 crc kubenswrapper[4826]: I0129 09:30:04.372274 4826 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d5bfe0bf-bbe6-439c-88cb-54021e2a3a07-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 09:30:04 crc kubenswrapper[4826]: I0129 09:30:04.378350 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5bfe0bf-bbe6-439c-88cb-54021e2a3a07-kube-api-access-8s68x" (OuterVolumeSpecName: "kube-api-access-8s68x") pod "d5bfe0bf-bbe6-439c-88cb-54021e2a3a07" (UID: "d5bfe0bf-bbe6-439c-88cb-54021e2a3a07"). InnerVolumeSpecName "kube-api-access-8s68x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:30:04 crc kubenswrapper[4826]: I0129 09:30:04.378438 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5bfe0bf-bbe6-439c-88cb-54021e2a3a07-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d5bfe0bf-bbe6-439c-88cb-54021e2a3a07" (UID: "d5bfe0bf-bbe6-439c-88cb-54021e2a3a07"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 09:30:04 crc kubenswrapper[4826]: I0129 09:30:04.474560 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8s68x\" (UniqueName: \"kubernetes.io/projected/d5bfe0bf-bbe6-439c-88cb-54021e2a3a07-kube-api-access-8s68x\") on node \"crc\" DevicePath \"\"" Jan 29 09:30:04 crc kubenswrapper[4826]: I0129 09:30:04.474603 4826 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d5bfe0bf-bbe6-439c-88cb-54021e2a3a07-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 09:30:04 crc kubenswrapper[4826]: I0129 09:30:04.903135 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-vp4gw" event={"ID":"d5bfe0bf-bbe6-439c-88cb-54021e2a3a07","Type":"ContainerDied","Data":"38a34c0b1f9bd09d8549ec4c965b1c50d6e8801e4258fab2d9bc4c41b469a127"} Jan 29 09:30:04 crc kubenswrapper[4826]: I0129 09:30:04.903244 4826 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38a34c0b1f9bd09d8549ec4c965b1c50d6e8801e4258fab2d9bc4c41b469a127" Jan 29 09:30:04 crc kubenswrapper[4826]: I0129 09:30:04.903185 4826 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494650-vp4gw" Jan 29 09:30:04 crc kubenswrapper[4826]: I0129 09:30:04.971523 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494605-xqpp7"] Jan 29 09:30:04 crc kubenswrapper[4826]: I0129 09:30:04.982647 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494605-xqpp7"] Jan 29 09:30:06 crc kubenswrapper[4826]: I0129 09:30:06.822413 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89a7c15c-7c85-4698-92eb-9041de234300" path="/var/lib/kubelet/pods/89a7c15c-7c85-4698-92eb-9041de234300/volumes" Jan 29 09:30:12 crc kubenswrapper[4826]: I0129 09:30:12.287880 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2s2bn_a1efbdc4-cb00-47f3-a174-62be62cea868/util/0.log" Jan 29 09:30:12 crc kubenswrapper[4826]: I0129 09:30:12.792725 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2s2bn_a1efbdc4-cb00-47f3-a174-62be62cea868/pull/0.log" Jan 29 09:30:12 crc kubenswrapper[4826]: I0129 09:30:12.822913 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2s2bn_a1efbdc4-cb00-47f3-a174-62be62cea868/pull/0.log" Jan 29 09:30:12 crc kubenswrapper[4826]: I0129 09:30:12.824118 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2s2bn_a1efbdc4-cb00-47f3-a174-62be62cea868/util/0.log" Jan 29 09:30:12 crc kubenswrapper[4826]: I0129 09:30:12.958999 4826 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2s2bn_a1efbdc4-cb00-47f3-a174-62be62cea868/util/0.log" Jan 29 09:30:13 crc kubenswrapper[4826]: I0129 09:30:13.009758 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2s2bn_a1efbdc4-cb00-47f3-a174-62be62cea868/extract/0.log" Jan 29 09:30:13 crc kubenswrapper[4826]: I0129 09:30:13.010194 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2s2bn_a1efbdc4-cb00-47f3-a174-62be62cea868/pull/0.log" Jan 29 09:30:13 crc kubenswrapper[4826]: I0129 09:30:13.147444 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lmbx5_615a47fa-75b7-4c22-9e69-cc70f9e3a132/util/0.log" Jan 29 09:30:13 crc kubenswrapper[4826]: I0129 09:30:13.341018 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lmbx5_615a47fa-75b7-4c22-9e69-cc70f9e3a132/pull/0.log" Jan 29 09:30:13 crc kubenswrapper[4826]: I0129 09:30:13.352637 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lmbx5_615a47fa-75b7-4c22-9e69-cc70f9e3a132/pull/0.log" Jan 29 09:30:13 crc kubenswrapper[4826]: I0129 09:30:13.362247 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lmbx5_615a47fa-75b7-4c22-9e69-cc70f9e3a132/util/0.log" Jan 29 09:30:13 crc kubenswrapper[4826]: I0129 09:30:13.505212 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lmbx5_615a47fa-75b7-4c22-9e69-cc70f9e3a132/util/0.log" Jan 29 
09:30:13 crc kubenswrapper[4826]: I0129 09:30:13.506391 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lmbx5_615a47fa-75b7-4c22-9e69-cc70f9e3a132/pull/0.log" Jan 29 09:30:13 crc kubenswrapper[4826]: I0129 09:30:13.508265 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lmbx5_615a47fa-75b7-4c22-9e69-cc70f9e3a132/extract/0.log" Jan 29 09:30:13 crc kubenswrapper[4826]: I0129 09:30:13.688510 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5d7mrn_75ef4038-043e-4847-b71b-818782b647ab/util/0.log" Jan 29 09:30:13 crc kubenswrapper[4826]: I0129 09:30:13.878551 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5d7mrn_75ef4038-043e-4847-b71b-818782b647ab/util/0.log" Jan 29 09:30:13 crc kubenswrapper[4826]: I0129 09:30:13.888123 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5d7mrn_75ef4038-043e-4847-b71b-818782b647ab/pull/0.log" Jan 29 09:30:13 crc kubenswrapper[4826]: I0129 09:30:13.928676 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5d7mrn_75ef4038-043e-4847-b71b-818782b647ab/pull/0.log" Jan 29 09:30:14 crc kubenswrapper[4826]: I0129 09:30:14.095757 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5d7mrn_75ef4038-043e-4847-b71b-818782b647ab/util/0.log" Jan 29 09:30:14 crc kubenswrapper[4826]: I0129 09:30:14.096134 4826 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5d7mrn_75ef4038-043e-4847-b71b-818782b647ab/pull/0.log" Jan 29 09:30:14 crc kubenswrapper[4826]: I0129 09:30:14.118790 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5d7mrn_75ef4038-043e-4847-b71b-818782b647ab/extract/0.log" Jan 29 09:30:14 crc kubenswrapper[4826]: I0129 09:30:14.335590 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gggjr_357050f5-02bf-4697-9e17-3d7389a90a6d/util/0.log" Jan 29 09:30:14 crc kubenswrapper[4826]: I0129 09:30:14.479774 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gggjr_357050f5-02bf-4697-9e17-3d7389a90a6d/util/0.log" Jan 29 09:30:14 crc kubenswrapper[4826]: I0129 09:30:14.491499 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gggjr_357050f5-02bf-4697-9e17-3d7389a90a6d/pull/0.log" Jan 29 09:30:14 crc kubenswrapper[4826]: I0129 09:30:14.502458 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gggjr_357050f5-02bf-4697-9e17-3d7389a90a6d/pull/0.log" Jan 29 09:30:14 crc kubenswrapper[4826]: I0129 09:30:14.685336 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gggjr_357050f5-02bf-4697-9e17-3d7389a90a6d/extract/0.log" Jan 29 09:30:14 crc kubenswrapper[4826]: I0129 09:30:14.695213 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gggjr_357050f5-02bf-4697-9e17-3d7389a90a6d/pull/0.log" Jan 
29 09:30:14 crc kubenswrapper[4826]: I0129 09:30:14.697801 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gggjr_357050f5-02bf-4697-9e17-3d7389a90a6d/util/0.log" Jan 29 09:30:14 crc kubenswrapper[4826]: I0129 09:30:14.850916 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5xd8c_ece9f354-f8c6-4108-af2b-9fc51ea418a3/extract-utilities/0.log" Jan 29 09:30:15 crc kubenswrapper[4826]: I0129 09:30:15.047052 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5xd8c_ece9f354-f8c6-4108-af2b-9fc51ea418a3/extract-content/0.log" Jan 29 09:30:15 crc kubenswrapper[4826]: I0129 09:30:15.064767 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5xd8c_ece9f354-f8c6-4108-af2b-9fc51ea418a3/extract-utilities/0.log" Jan 29 09:30:15 crc kubenswrapper[4826]: I0129 09:30:15.102250 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5xd8c_ece9f354-f8c6-4108-af2b-9fc51ea418a3/extract-content/0.log" Jan 29 09:30:15 crc kubenswrapper[4826]: I0129 09:30:15.242733 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5xd8c_ece9f354-f8c6-4108-af2b-9fc51ea418a3/extract-content/0.log" Jan 29 09:30:15 crc kubenswrapper[4826]: I0129 09:30:15.309436 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5xd8c_ece9f354-f8c6-4108-af2b-9fc51ea418a3/extract-utilities/0.log" Jan 29 09:30:15 crc kubenswrapper[4826]: I0129 09:30:15.493764 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gdm6z_26778232-3c9d-4c90-9f32-a7a0dc0e87b4/extract-utilities/0.log" Jan 29 09:30:15 crc kubenswrapper[4826]: I0129 09:30:15.749099 4826 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gdm6z_26778232-3c9d-4c90-9f32-a7a0dc0e87b4/extract-utilities/0.log" Jan 29 09:30:15 crc kubenswrapper[4826]: I0129 09:30:15.803138 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gdm6z_26778232-3c9d-4c90-9f32-a7a0dc0e87b4/extract-content/0.log" Jan 29 09:30:15 crc kubenswrapper[4826]: I0129 09:30:15.810876 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gdm6z_26778232-3c9d-4c90-9f32-a7a0dc0e87b4/extract-content/0.log" Jan 29 09:30:16 crc kubenswrapper[4826]: I0129 09:30:16.153870 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gdm6z_26778232-3c9d-4c90-9f32-a7a0dc0e87b4/extract-content/0.log" Jan 29 09:30:16 crc kubenswrapper[4826]: I0129 09:30:16.154104 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gdm6z_26778232-3c9d-4c90-9f32-a7a0dc0e87b4/extract-utilities/0.log" Jan 29 09:30:16 crc kubenswrapper[4826]: I0129 09:30:16.439950 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-8g6tv_720ff77a-51ee-49c7-8678-4d6d9f179942/marketplace-operator/0.log" Jan 29 09:30:16 crc kubenswrapper[4826]: I0129 09:30:16.741131 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kpmf6_083c6f06-4fdf-41b8-9fbd-c6076fe0d6e6/extract-utilities/0.log" Jan 29 09:30:16 crc kubenswrapper[4826]: I0129 09:30:16.837675 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kpmf6_083c6f06-4fdf-41b8-9fbd-c6076fe0d6e6/extract-utilities/0.log" Jan 29 09:30:16 crc kubenswrapper[4826]: I0129 09:30:16.912784 4826 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-kpmf6_083c6f06-4fdf-41b8-9fbd-c6076fe0d6e6/extract-content/0.log" Jan 29 09:30:16 crc kubenswrapper[4826]: I0129 09:30:16.976776 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kpmf6_083c6f06-4fdf-41b8-9fbd-c6076fe0d6e6/extract-content/0.log" Jan 29 09:30:17 crc kubenswrapper[4826]: I0129 09:30:17.210275 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kpmf6_083c6f06-4fdf-41b8-9fbd-c6076fe0d6e6/extract-content/0.log" Jan 29 09:30:17 crc kubenswrapper[4826]: I0129 09:30:17.270329 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kpmf6_083c6f06-4fdf-41b8-9fbd-c6076fe0d6e6/extract-utilities/0.log" Jan 29 09:30:17 crc kubenswrapper[4826]: I0129 09:30:17.445341 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5xd8c_ece9f354-f8c6-4108-af2b-9fc51ea418a3/registry-server/0.log" Jan 29 09:30:17 crc kubenswrapper[4826]: I0129 09:30:17.583898 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-577mf_58fea2da-3284-4b38-883f-665355002814/extract-utilities/0.log" Jan 29 09:30:17 crc kubenswrapper[4826]: I0129 09:30:17.729982 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-577mf_58fea2da-3284-4b38-883f-665355002814/extract-utilities/0.log" Jan 29 09:30:17 crc kubenswrapper[4826]: I0129 09:30:17.772922 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kpmf6_083c6f06-4fdf-41b8-9fbd-c6076fe0d6e6/registry-server/0.log" Jan 29 09:30:17 crc kubenswrapper[4826]: I0129 09:30:17.774568 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-577mf_58fea2da-3284-4b38-883f-665355002814/extract-content/0.log" 
Jan 29 09:30:17 crc kubenswrapper[4826]: I0129 09:30:17.803969 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-577mf_58fea2da-3284-4b38-883f-665355002814/extract-content/0.log" Jan 29 09:30:18 crc kubenswrapper[4826]: I0129 09:30:18.114225 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-577mf_58fea2da-3284-4b38-883f-665355002814/extract-content/0.log" Jan 29 09:30:18 crc kubenswrapper[4826]: I0129 09:30:18.122888 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-577mf_58fea2da-3284-4b38-883f-665355002814/extract-utilities/0.log" Jan 29 09:30:18 crc kubenswrapper[4826]: I0129 09:30:18.172533 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gdm6z_26778232-3c9d-4c90-9f32-a7a0dc0e87b4/registry-server/0.log" Jan 29 09:30:19 crc kubenswrapper[4826]: I0129 09:30:19.401170 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-577mf_58fea2da-3284-4b38-883f-665355002814/registry-server/0.log" Jan 29 09:30:26 crc kubenswrapper[4826]: I0129 09:30:26.393633 4826 scope.go:117] "RemoveContainer" containerID="a587129ea619384752147ca0b6245ff3245e14509fbd28a391fe14e6ccd35c56" Jan 29 09:30:32 crc kubenswrapper[4826]: I0129 09:30:32.241567 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-k97cs_5fc63d8d-d2f6-48da-95e1-47ce4038127e/prometheus-operator/0.log" Jan 29 09:30:32 crc kubenswrapper[4826]: I0129 09:30:32.261620 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56696b5f9-8r95k_a45663bb-b6c8-40cd-8f5c-9e3c08b2480a/prometheus-operator-admission-webhook/0.log" Jan 29 09:30:32 crc kubenswrapper[4826]: I0129 09:30:32.262292 4826 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56696b5f9-vkvkh_15a35e48-2620-4093-a986-0d7b1ecde3c5/prometheus-operator-admission-webhook/0.log" Jan 29 09:30:32 crc kubenswrapper[4826]: I0129 09:30:32.404648 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-7r5fl_04f8a571-0326-413e-92ca-d9d706b91187/operator/0.log" Jan 29 09:30:32 crc kubenswrapper[4826]: I0129 09:30:32.476183 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-snp7s_c4081b90-9d2f-4aa8-b1f9-f16819f5f36b/perses-operator/0.log" Jan 29 09:31:05 crc kubenswrapper[4826]: I0129 09:31:05.656025 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:31:05 crc kubenswrapper[4826]: I0129 09:31:05.656580 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:31:26 crc kubenswrapper[4826]: I0129 09:31:26.450309 4826 scope.go:117] "RemoveContainer" containerID="678fd57d9a6590def0e579feb9588fd0657ef1fd3e8d24b781285a9bbd628073" Jan 29 09:31:33 crc kubenswrapper[4826]: I0129 09:31:33.709759 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fzw4p"] Jan 29 09:31:33 crc kubenswrapper[4826]: E0129 09:31:33.710842 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5bfe0bf-bbe6-439c-88cb-54021e2a3a07" containerName="collect-profiles" Jan 29 09:31:33 crc 
kubenswrapper[4826]: I0129 09:31:33.710862 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5bfe0bf-bbe6-439c-88cb-54021e2a3a07" containerName="collect-profiles" Jan 29 09:31:33 crc kubenswrapper[4826]: I0129 09:31:33.711093 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5bfe0bf-bbe6-439c-88cb-54021e2a3a07" containerName="collect-profiles" Jan 29 09:31:33 crc kubenswrapper[4826]: I0129 09:31:33.712824 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fzw4p" Jan 29 09:31:33 crc kubenswrapper[4826]: I0129 09:31:33.743882 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fzw4p"] Jan 29 09:31:33 crc kubenswrapper[4826]: I0129 09:31:33.836162 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b08ff44f-de66-4fa1-b0de-4fa269199ddf-catalog-content\") pod \"community-operators-fzw4p\" (UID: \"b08ff44f-de66-4fa1-b0de-4fa269199ddf\") " pod="openshift-marketplace/community-operators-fzw4p" Jan 29 09:31:33 crc kubenswrapper[4826]: I0129 09:31:33.836320 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmtbx\" (UniqueName: \"kubernetes.io/projected/b08ff44f-de66-4fa1-b0de-4fa269199ddf-kube-api-access-fmtbx\") pod \"community-operators-fzw4p\" (UID: \"b08ff44f-de66-4fa1-b0de-4fa269199ddf\") " pod="openshift-marketplace/community-operators-fzw4p" Jan 29 09:31:33 crc kubenswrapper[4826]: I0129 09:31:33.836397 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b08ff44f-de66-4fa1-b0de-4fa269199ddf-utilities\") pod \"community-operators-fzw4p\" (UID: \"b08ff44f-de66-4fa1-b0de-4fa269199ddf\") " pod="openshift-marketplace/community-operators-fzw4p" Jan 29 
09:31:33 crc kubenswrapper[4826]: I0129 09:31:33.938857 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmtbx\" (UniqueName: \"kubernetes.io/projected/b08ff44f-de66-4fa1-b0de-4fa269199ddf-kube-api-access-fmtbx\") pod \"community-operators-fzw4p\" (UID: \"b08ff44f-de66-4fa1-b0de-4fa269199ddf\") " pod="openshift-marketplace/community-operators-fzw4p" Jan 29 09:31:33 crc kubenswrapper[4826]: I0129 09:31:33.938985 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b08ff44f-de66-4fa1-b0de-4fa269199ddf-utilities\") pod \"community-operators-fzw4p\" (UID: \"b08ff44f-de66-4fa1-b0de-4fa269199ddf\") " pod="openshift-marketplace/community-operators-fzw4p" Jan 29 09:31:33 crc kubenswrapper[4826]: I0129 09:31:33.939111 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b08ff44f-de66-4fa1-b0de-4fa269199ddf-catalog-content\") pod \"community-operators-fzw4p\" (UID: \"b08ff44f-de66-4fa1-b0de-4fa269199ddf\") " pod="openshift-marketplace/community-operators-fzw4p" Jan 29 09:31:33 crc kubenswrapper[4826]: I0129 09:31:33.939680 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b08ff44f-de66-4fa1-b0de-4fa269199ddf-utilities\") pod \"community-operators-fzw4p\" (UID: \"b08ff44f-de66-4fa1-b0de-4fa269199ddf\") " pod="openshift-marketplace/community-operators-fzw4p" Jan 29 09:31:33 crc kubenswrapper[4826]: I0129 09:31:33.939876 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b08ff44f-de66-4fa1-b0de-4fa269199ddf-catalog-content\") pod \"community-operators-fzw4p\" (UID: \"b08ff44f-de66-4fa1-b0de-4fa269199ddf\") " pod="openshift-marketplace/community-operators-fzw4p" Jan 29 09:31:33 crc kubenswrapper[4826]: I0129 
09:31:33.964432 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmtbx\" (UniqueName: \"kubernetes.io/projected/b08ff44f-de66-4fa1-b0de-4fa269199ddf-kube-api-access-fmtbx\") pod \"community-operators-fzw4p\" (UID: \"b08ff44f-de66-4fa1-b0de-4fa269199ddf\") " pod="openshift-marketplace/community-operators-fzw4p" Jan 29 09:31:34 crc kubenswrapper[4826]: I0129 09:31:34.040288 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fzw4p" Jan 29 09:31:34 crc kubenswrapper[4826]: I0129 09:31:34.645524 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fzw4p"] Jan 29 09:31:34 crc kubenswrapper[4826]: I0129 09:31:34.793116 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fzw4p" event={"ID":"b08ff44f-de66-4fa1-b0de-4fa269199ddf","Type":"ContainerStarted","Data":"8393bf3be41c3216f6e13f5df753da4915409ac62b7c22b7f425be8bf458c7ce"} Jan 29 09:31:35 crc kubenswrapper[4826]: I0129 09:31:35.656677 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 09:31:35 crc kubenswrapper[4826]: I0129 09:31:35.656950 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:31:35 crc kubenswrapper[4826]: I0129 09:31:35.802628 4826 generic.go:334] "Generic (PLEG): container finished" podID="b08ff44f-de66-4fa1-b0de-4fa269199ddf" 
containerID="aa60ad0caf9946ea5a7ea4fa4f41624de8fe21f64793713facaabe42ae44b786" exitCode=0 Jan 29 09:31:35 crc kubenswrapper[4826]: I0129 09:31:35.802667 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fzw4p" event={"ID":"b08ff44f-de66-4fa1-b0de-4fa269199ddf","Type":"ContainerDied","Data":"aa60ad0caf9946ea5a7ea4fa4f41624de8fe21f64793713facaabe42ae44b786"} Jan 29 09:31:35 crc kubenswrapper[4826]: I0129 09:31:35.804898 4826 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 09:31:36 crc kubenswrapper[4826]: I0129 09:31:36.845040 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fzw4p" event={"ID":"b08ff44f-de66-4fa1-b0de-4fa269199ddf","Type":"ContainerStarted","Data":"c23d701628271bdb09df6a3259ad26320679bb327554a386fc471f95ae76553f"} Jan 29 09:31:37 crc kubenswrapper[4826]: I0129 09:31:37.859682 4826 generic.go:334] "Generic (PLEG): container finished" podID="b08ff44f-de66-4fa1-b0de-4fa269199ddf" containerID="c23d701628271bdb09df6a3259ad26320679bb327554a386fc471f95ae76553f" exitCode=0 Jan 29 09:31:37 crc kubenswrapper[4826]: I0129 09:31:37.859802 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fzw4p" event={"ID":"b08ff44f-de66-4fa1-b0de-4fa269199ddf","Type":"ContainerDied","Data":"c23d701628271bdb09df6a3259ad26320679bb327554a386fc471f95ae76553f"} Jan 29 09:31:38 crc kubenswrapper[4826]: I0129 09:31:38.872149 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fzw4p" event={"ID":"b08ff44f-de66-4fa1-b0de-4fa269199ddf","Type":"ContainerStarted","Data":"cc4abb95645b8f67c35ba17e08dac50980d70f0a6ba4e46200567b2a94148bec"} Jan 29 09:31:38 crc kubenswrapper[4826]: I0129 09:31:38.901705 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fzw4p" 
podStartSLOduration=3.410922928 podStartE2EDuration="5.901683882s" podCreationTimestamp="2026-01-29 09:31:33 +0000 UTC" firstStartedPulling="2026-01-29 09:31:35.804528122 +0000 UTC m=+10079.666321211" lastFinishedPulling="2026-01-29 09:31:38.295289096 +0000 UTC m=+10082.157082165" observedRunningTime="2026-01-29 09:31:38.889362513 +0000 UTC m=+10082.751155592" watchObservedRunningTime="2026-01-29 09:31:38.901683882 +0000 UTC m=+10082.763476971" Jan 29 09:31:44 crc kubenswrapper[4826]: I0129 09:31:44.041004 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fzw4p" Jan 29 09:31:44 crc kubenswrapper[4826]: I0129 09:31:44.041898 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fzw4p" Jan 29 09:31:44 crc kubenswrapper[4826]: I0129 09:31:44.107681 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fzw4p" Jan 29 09:31:45 crc kubenswrapper[4826]: I0129 09:31:45.045642 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fzw4p" Jan 29 09:31:45 crc kubenswrapper[4826]: I0129 09:31:45.109339 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fzw4p"] Jan 29 09:31:46 crc kubenswrapper[4826]: I0129 09:31:46.994597 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fzw4p" podUID="b08ff44f-de66-4fa1-b0de-4fa269199ddf" containerName="registry-server" containerID="cri-o://cc4abb95645b8f67c35ba17e08dac50980d70f0a6ba4e46200567b2a94148bec" gracePeriod=2 Jan 29 09:31:48 crc kubenswrapper[4826]: I0129 09:31:48.024212 4826 generic.go:334] "Generic (PLEG): container finished" podID="b08ff44f-de66-4fa1-b0de-4fa269199ddf" containerID="cc4abb95645b8f67c35ba17e08dac50980d70f0a6ba4e46200567b2a94148bec" 
exitCode=0 Jan 29 09:31:48 crc kubenswrapper[4826]: I0129 09:31:48.024245 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fzw4p" event={"ID":"b08ff44f-de66-4fa1-b0de-4fa269199ddf","Type":"ContainerDied","Data":"cc4abb95645b8f67c35ba17e08dac50980d70f0a6ba4e46200567b2a94148bec"} Jan 29 09:31:48 crc kubenswrapper[4826]: I0129 09:31:48.705213 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fzw4p" Jan 29 09:31:48 crc kubenswrapper[4826]: I0129 09:31:48.769698 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b08ff44f-de66-4fa1-b0de-4fa269199ddf-utilities\") pod \"b08ff44f-de66-4fa1-b0de-4fa269199ddf\" (UID: \"b08ff44f-de66-4fa1-b0de-4fa269199ddf\") " Jan 29 09:31:48 crc kubenswrapper[4826]: I0129 09:31:48.769755 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b08ff44f-de66-4fa1-b0de-4fa269199ddf-catalog-content\") pod \"b08ff44f-de66-4fa1-b0de-4fa269199ddf\" (UID: \"b08ff44f-de66-4fa1-b0de-4fa269199ddf\") " Jan 29 09:31:48 crc kubenswrapper[4826]: I0129 09:31:48.769817 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmtbx\" (UniqueName: \"kubernetes.io/projected/b08ff44f-de66-4fa1-b0de-4fa269199ddf-kube-api-access-fmtbx\") pod \"b08ff44f-de66-4fa1-b0de-4fa269199ddf\" (UID: \"b08ff44f-de66-4fa1-b0de-4fa269199ddf\") " Jan 29 09:31:48 crc kubenswrapper[4826]: I0129 09:31:48.770817 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b08ff44f-de66-4fa1-b0de-4fa269199ddf-utilities" (OuterVolumeSpecName: "utilities") pod "b08ff44f-de66-4fa1-b0de-4fa269199ddf" (UID: "b08ff44f-de66-4fa1-b0de-4fa269199ddf"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:31:48 crc kubenswrapper[4826]: I0129 09:31:48.775673 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b08ff44f-de66-4fa1-b0de-4fa269199ddf-kube-api-access-fmtbx" (OuterVolumeSpecName: "kube-api-access-fmtbx") pod "b08ff44f-de66-4fa1-b0de-4fa269199ddf" (UID: "b08ff44f-de66-4fa1-b0de-4fa269199ddf"). InnerVolumeSpecName "kube-api-access-fmtbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:31:48 crc kubenswrapper[4826]: I0129 09:31:48.823021 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b08ff44f-de66-4fa1-b0de-4fa269199ddf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b08ff44f-de66-4fa1-b0de-4fa269199ddf" (UID: "b08ff44f-de66-4fa1-b0de-4fa269199ddf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:31:48 crc kubenswrapper[4826]: I0129 09:31:48.872565 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b08ff44f-de66-4fa1-b0de-4fa269199ddf-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:31:48 crc kubenswrapper[4826]: I0129 09:31:48.872599 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b08ff44f-de66-4fa1-b0de-4fa269199ddf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:31:48 crc kubenswrapper[4826]: I0129 09:31:48.872608 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmtbx\" (UniqueName: \"kubernetes.io/projected/b08ff44f-de66-4fa1-b0de-4fa269199ddf-kube-api-access-fmtbx\") on node \"crc\" DevicePath \"\"" Jan 29 09:31:49 crc kubenswrapper[4826]: I0129 09:31:49.038154 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fzw4p" 
event={"ID":"b08ff44f-de66-4fa1-b0de-4fa269199ddf","Type":"ContainerDied","Data":"8393bf3be41c3216f6e13f5df753da4915409ac62b7c22b7f425be8bf458c7ce"} Jan 29 09:31:49 crc kubenswrapper[4826]: I0129 09:31:49.038213 4826 scope.go:117] "RemoveContainer" containerID="cc4abb95645b8f67c35ba17e08dac50980d70f0a6ba4e46200567b2a94148bec" Jan 29 09:31:49 crc kubenswrapper[4826]: I0129 09:31:49.038237 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fzw4p" Jan 29 09:31:49 crc kubenswrapper[4826]: I0129 09:31:49.069362 4826 scope.go:117] "RemoveContainer" containerID="c23d701628271bdb09df6a3259ad26320679bb327554a386fc471f95ae76553f" Jan 29 09:31:49 crc kubenswrapper[4826]: I0129 09:31:49.081993 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fzw4p"] Jan 29 09:31:49 crc kubenswrapper[4826]: I0129 09:31:49.091835 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fzw4p"] Jan 29 09:31:49 crc kubenswrapper[4826]: I0129 09:31:49.096869 4826 scope.go:117] "RemoveContainer" containerID="aa60ad0caf9946ea5a7ea4fa4f41624de8fe21f64793713facaabe42ae44b786" Jan 29 09:31:50 crc kubenswrapper[4826]: I0129 09:31:50.823085 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b08ff44f-de66-4fa1-b0de-4fa269199ddf" path="/var/lib/kubelet/pods/b08ff44f-de66-4fa1-b0de-4fa269199ddf/volumes" Jan 29 09:32:03 crc kubenswrapper[4826]: I0129 09:32:03.450900 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bnqm4"] Jan 29 09:32:03 crc kubenswrapper[4826]: E0129 09:32:03.452037 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b08ff44f-de66-4fa1-b0de-4fa269199ddf" containerName="registry-server" Jan 29 09:32:03 crc kubenswrapper[4826]: I0129 09:32:03.452055 4826 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b08ff44f-de66-4fa1-b0de-4fa269199ddf" containerName="registry-server" Jan 29 09:32:03 crc kubenswrapper[4826]: E0129 09:32:03.452096 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b08ff44f-de66-4fa1-b0de-4fa269199ddf" containerName="extract-utilities" Jan 29 09:32:03 crc kubenswrapper[4826]: I0129 09:32:03.452106 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="b08ff44f-de66-4fa1-b0de-4fa269199ddf" containerName="extract-utilities" Jan 29 09:32:03 crc kubenswrapper[4826]: E0129 09:32:03.452132 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b08ff44f-de66-4fa1-b0de-4fa269199ddf" containerName="extract-content" Jan 29 09:32:03 crc kubenswrapper[4826]: I0129 09:32:03.452139 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="b08ff44f-de66-4fa1-b0de-4fa269199ddf" containerName="extract-content" Jan 29 09:32:03 crc kubenswrapper[4826]: I0129 09:32:03.452427 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="b08ff44f-de66-4fa1-b0de-4fa269199ddf" containerName="registry-server" Jan 29 09:32:03 crc kubenswrapper[4826]: I0129 09:32:03.454261 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bnqm4" Jan 29 09:32:03 crc kubenswrapper[4826]: I0129 09:32:03.480382 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bnqm4"] Jan 29 09:32:03 crc kubenswrapper[4826]: I0129 09:32:03.585047 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9np9\" (UniqueName: \"kubernetes.io/projected/aa90cd1a-19de-4c98-ac67-4747764e0d96-kube-api-access-x9np9\") pod \"redhat-marketplace-bnqm4\" (UID: \"aa90cd1a-19de-4c98-ac67-4747764e0d96\") " pod="openshift-marketplace/redhat-marketplace-bnqm4" Jan 29 09:32:03 crc kubenswrapper[4826]: I0129 09:32:03.585113 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa90cd1a-19de-4c98-ac67-4747764e0d96-utilities\") pod \"redhat-marketplace-bnqm4\" (UID: \"aa90cd1a-19de-4c98-ac67-4747764e0d96\") " pod="openshift-marketplace/redhat-marketplace-bnqm4" Jan 29 09:32:03 crc kubenswrapper[4826]: I0129 09:32:03.585317 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa90cd1a-19de-4c98-ac67-4747764e0d96-catalog-content\") pod \"redhat-marketplace-bnqm4\" (UID: \"aa90cd1a-19de-4c98-ac67-4747764e0d96\") " pod="openshift-marketplace/redhat-marketplace-bnqm4" Jan 29 09:32:03 crc kubenswrapper[4826]: I0129 09:32:03.687454 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9np9\" (UniqueName: \"kubernetes.io/projected/aa90cd1a-19de-4c98-ac67-4747764e0d96-kube-api-access-x9np9\") pod \"redhat-marketplace-bnqm4\" (UID: \"aa90cd1a-19de-4c98-ac67-4747764e0d96\") " pod="openshift-marketplace/redhat-marketplace-bnqm4" Jan 29 09:32:03 crc kubenswrapper[4826]: I0129 09:32:03.687504 4826 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa90cd1a-19de-4c98-ac67-4747764e0d96-utilities\") pod \"redhat-marketplace-bnqm4\" (UID: \"aa90cd1a-19de-4c98-ac67-4747764e0d96\") " pod="openshift-marketplace/redhat-marketplace-bnqm4" Jan 29 09:32:03 crc kubenswrapper[4826]: I0129 09:32:03.687613 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa90cd1a-19de-4c98-ac67-4747764e0d96-catalog-content\") pod \"redhat-marketplace-bnqm4\" (UID: \"aa90cd1a-19de-4c98-ac67-4747764e0d96\") " pod="openshift-marketplace/redhat-marketplace-bnqm4" Jan 29 09:32:03 crc kubenswrapper[4826]: I0129 09:32:03.688114 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa90cd1a-19de-4c98-ac67-4747764e0d96-catalog-content\") pod \"redhat-marketplace-bnqm4\" (UID: \"aa90cd1a-19de-4c98-ac67-4747764e0d96\") " pod="openshift-marketplace/redhat-marketplace-bnqm4" Jan 29 09:32:03 crc kubenswrapper[4826]: I0129 09:32:03.688897 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa90cd1a-19de-4c98-ac67-4747764e0d96-utilities\") pod \"redhat-marketplace-bnqm4\" (UID: \"aa90cd1a-19de-4c98-ac67-4747764e0d96\") " pod="openshift-marketplace/redhat-marketplace-bnqm4" Jan 29 09:32:03 crc kubenswrapper[4826]: I0129 09:32:03.714479 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9np9\" (UniqueName: \"kubernetes.io/projected/aa90cd1a-19de-4c98-ac67-4747764e0d96-kube-api-access-x9np9\") pod \"redhat-marketplace-bnqm4\" (UID: \"aa90cd1a-19de-4c98-ac67-4747764e0d96\") " pod="openshift-marketplace/redhat-marketplace-bnqm4" Jan 29 09:32:03 crc kubenswrapper[4826]: I0129 09:32:03.782562 4826 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bnqm4" Jan 29 09:32:04 crc kubenswrapper[4826]: I0129 09:32:04.287881 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bnqm4"] Jan 29 09:32:04 crc kubenswrapper[4826]: W0129 09:32:04.289453 4826 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa90cd1a_19de_4c98_ac67_4747764e0d96.slice/crio-d016e891a8db8a2d9360c56f401a4685ee93659228cb29bbf88c13739640b0d4 WatchSource:0}: Error finding container d016e891a8db8a2d9360c56f401a4685ee93659228cb29bbf88c13739640b0d4: Status 404 returned error can't find the container with id d016e891a8db8a2d9360c56f401a4685ee93659228cb29bbf88c13739640b0d4 Jan 29 09:32:05 crc kubenswrapper[4826]: I0129 09:32:05.201918 4826 generic.go:334] "Generic (PLEG): container finished" podID="aa90cd1a-19de-4c98-ac67-4747764e0d96" containerID="275570401e63cb2ff65ee46cb9275feb4a070dfbf990fcc5925c699b7f849496" exitCode=0 Jan 29 09:32:05 crc kubenswrapper[4826]: I0129 09:32:05.202561 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bnqm4" event={"ID":"aa90cd1a-19de-4c98-ac67-4747764e0d96","Type":"ContainerDied","Data":"275570401e63cb2ff65ee46cb9275feb4a070dfbf990fcc5925c699b7f849496"} Jan 29 09:32:05 crc kubenswrapper[4826]: I0129 09:32:05.204788 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bnqm4" event={"ID":"aa90cd1a-19de-4c98-ac67-4747764e0d96","Type":"ContainerStarted","Data":"d016e891a8db8a2d9360c56f401a4685ee93659228cb29bbf88c13739640b0d4"} Jan 29 09:32:05 crc kubenswrapper[4826]: I0129 09:32:05.656163 4826 patch_prober.go:28] interesting pod/machine-config-daemon-llzmh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Jan 29 09:32:05 crc kubenswrapper[4826]: I0129 09:32:05.656287 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 09:32:05 crc kubenswrapper[4826]: I0129 09:32:05.656506 4826 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" Jan 29 09:32:05 crc kubenswrapper[4826]: I0129 09:32:05.658982 4826 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"978454da693700a960438b7b180d36be5cd81248b67377a4a1dd542f057d2371"} pod="openshift-machine-config-operator/machine-config-daemon-llzmh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 09:32:05 crc kubenswrapper[4826]: I0129 09:32:05.659064 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerName="machine-config-daemon" containerID="cri-o://978454da693700a960438b7b180d36be5cd81248b67377a4a1dd542f057d2371" gracePeriod=600 Jan 29 09:32:05 crc kubenswrapper[4826]: I0129 09:32:05.795509 4826 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae" containerName="galera" probeResult="failure" output="command timed out" Jan 29 09:32:05 crc kubenswrapper[4826]: I0129 09:32:05.796458 4826 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="b5622f4e-cfec-4dfb-9fd3-10c1b1a9a4ae" containerName="galera" probeResult="failure" 
output="command timed out" Jan 29 09:32:05 crc kubenswrapper[4826]: E0129 09:32:05.816325 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 09:32:06 crc kubenswrapper[4826]: I0129 09:32:06.242646 4826 generic.go:334] "Generic (PLEG): container finished" podID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" containerID="978454da693700a960438b7b180d36be5cd81248b67377a4a1dd542f057d2371" exitCode=0
Jan 29 09:32:06 crc kubenswrapper[4826]: I0129 09:32:06.242705 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" event={"ID":"6ea2651e-31ea-4e99-8bcd-2f8e9687df2f","Type":"ContainerDied","Data":"978454da693700a960438b7b180d36be5cd81248b67377a4a1dd542f057d2371"}
Jan 29 09:32:06 crc kubenswrapper[4826]: I0129 09:32:06.242759 4826 scope.go:117] "RemoveContainer" containerID="f4c5bf56ecf4d49f31d7cceee4ddf4d08b3ec9af55a5f1692d3a3420750d3329"
Jan 29 09:32:06 crc kubenswrapper[4826]: I0129 09:32:06.243829 4826 scope.go:117] "RemoveContainer" containerID="978454da693700a960438b7b180d36be5cd81248b67377a4a1dd542f057d2371"
Jan 29 09:32:06 crc kubenswrapper[4826]: E0129 09:32:06.244465 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 09:32:07 crc kubenswrapper[4826]: I0129 09:32:07.260153 4826 generic.go:334] "Generic (PLEG): container finished" podID="aa90cd1a-19de-4c98-ac67-4747764e0d96" containerID="9593ca8ac8ea64e1d01fdd504561cfbc0c733c30b9de9ad385bc39bf47e84db3" exitCode=0
Jan 29 09:32:07 crc kubenswrapper[4826]: I0129 09:32:07.260207 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bnqm4" event={"ID":"aa90cd1a-19de-4c98-ac67-4747764e0d96","Type":"ContainerDied","Data":"9593ca8ac8ea64e1d01fdd504561cfbc0c733c30b9de9ad385bc39bf47e84db3"}
Jan 29 09:32:09 crc kubenswrapper[4826]: I0129 09:32:09.281840 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bnqm4" event={"ID":"aa90cd1a-19de-4c98-ac67-4747764e0d96","Type":"ContainerStarted","Data":"7e9944f19c165f3af62724466dacfc41f5077dee5ab8f3a35d6fc3da80511bd0"}
Jan 29 09:32:13 crc kubenswrapper[4826]: I0129 09:32:13.783215 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bnqm4"
Jan 29 09:32:13 crc kubenswrapper[4826]: I0129 09:32:13.786076 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bnqm4"
Jan 29 09:32:13 crc kubenswrapper[4826]: I0129 09:32:13.856930 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bnqm4"
Jan 29 09:32:13 crc kubenswrapper[4826]: I0129 09:32:13.877219 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bnqm4" podStartSLOduration=8.43368538 podStartE2EDuration="10.877200214s" podCreationTimestamp="2026-01-29 09:32:03 +0000 UTC" firstStartedPulling="2026-01-29 09:32:05.203970346 +0000 UTC m=+10109.065763415" lastFinishedPulling="2026-01-29 09:32:07.64748518 +0000 UTC m=+10111.509278249" observedRunningTime="2026-01-29 09:32:09.302824224 +0000 UTC m=+10113.164617293" watchObservedRunningTime="2026-01-29 09:32:13.877200214 +0000 UTC m=+10117.738993283"
Jan 29 09:32:14 crc kubenswrapper[4826]: I0129 09:32:14.419994 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bnqm4"
Jan 29 09:32:14 crc kubenswrapper[4826]: I0129 09:32:14.492335 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bnqm4"]
Jan 29 09:32:16 crc kubenswrapper[4826]: I0129 09:32:16.355919 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bnqm4" podUID="aa90cd1a-19de-4c98-ac67-4747764e0d96" containerName="registry-server" containerID="cri-o://7e9944f19c165f3af62724466dacfc41f5077dee5ab8f3a35d6fc3da80511bd0" gracePeriod=2
Jan 29 09:32:16 crc kubenswrapper[4826]: I0129 09:32:16.828668 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bnqm4"
Jan 29 09:32:17 crc kubenswrapper[4826]: I0129 09:32:17.031476 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa90cd1a-19de-4c98-ac67-4747764e0d96-catalog-content\") pod \"aa90cd1a-19de-4c98-ac67-4747764e0d96\" (UID: \"aa90cd1a-19de-4c98-ac67-4747764e0d96\") "
Jan 29 09:32:17 crc kubenswrapper[4826]: I0129 09:32:17.031787 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa90cd1a-19de-4c98-ac67-4747764e0d96-utilities\") pod \"aa90cd1a-19de-4c98-ac67-4747764e0d96\" (UID: \"aa90cd1a-19de-4c98-ac67-4747764e0d96\") "
Jan 29 09:32:17 crc kubenswrapper[4826]: I0129 09:32:17.031827 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9np9\" (UniqueName: \"kubernetes.io/projected/aa90cd1a-19de-4c98-ac67-4747764e0d96-kube-api-access-x9np9\") pod \"aa90cd1a-19de-4c98-ac67-4747764e0d96\" (UID: \"aa90cd1a-19de-4c98-ac67-4747764e0d96\") "
Jan 29 09:32:17 crc kubenswrapper[4826]: I0129 09:32:17.033094 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa90cd1a-19de-4c98-ac67-4747764e0d96-utilities" (OuterVolumeSpecName: "utilities") pod "aa90cd1a-19de-4c98-ac67-4747764e0d96" (UID: "aa90cd1a-19de-4c98-ac67-4747764e0d96"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:32:17 crc kubenswrapper[4826]: I0129 09:32:17.045164 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa90cd1a-19de-4c98-ac67-4747764e0d96-kube-api-access-x9np9" (OuterVolumeSpecName: "kube-api-access-x9np9") pod "aa90cd1a-19de-4c98-ac67-4747764e0d96" (UID: "aa90cd1a-19de-4c98-ac67-4747764e0d96"). InnerVolumeSpecName "kube-api-access-x9np9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:32:17 crc kubenswrapper[4826]: I0129 09:32:17.083804 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa90cd1a-19de-4c98-ac67-4747764e0d96-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa90cd1a-19de-4c98-ac67-4747764e0d96" (UID: "aa90cd1a-19de-4c98-ac67-4747764e0d96"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:32:17 crc kubenswrapper[4826]: I0129 09:32:17.134988 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa90cd1a-19de-4c98-ac67-4747764e0d96-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 09:32:17 crc kubenswrapper[4826]: I0129 09:32:17.135034 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9np9\" (UniqueName: \"kubernetes.io/projected/aa90cd1a-19de-4c98-ac67-4747764e0d96-kube-api-access-x9np9\") on node \"crc\" DevicePath \"\""
Jan 29 09:32:17 crc kubenswrapper[4826]: I0129 09:32:17.135050 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa90cd1a-19de-4c98-ac67-4747764e0d96-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 09:32:17 crc kubenswrapper[4826]: I0129 09:32:17.365786 4826 generic.go:334] "Generic (PLEG): container finished" podID="aa90cd1a-19de-4c98-ac67-4747764e0d96" containerID="7e9944f19c165f3af62724466dacfc41f5077dee5ab8f3a35d6fc3da80511bd0" exitCode=0
Jan 29 09:32:17 crc kubenswrapper[4826]: I0129 09:32:17.365825 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bnqm4" event={"ID":"aa90cd1a-19de-4c98-ac67-4747764e0d96","Type":"ContainerDied","Data":"7e9944f19c165f3af62724466dacfc41f5077dee5ab8f3a35d6fc3da80511bd0"}
Jan 29 09:32:17 crc kubenswrapper[4826]: I0129 09:32:17.365850 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bnqm4" event={"ID":"aa90cd1a-19de-4c98-ac67-4747764e0d96","Type":"ContainerDied","Data":"d016e891a8db8a2d9360c56f401a4685ee93659228cb29bbf88c13739640b0d4"}
Jan 29 09:32:17 crc kubenswrapper[4826]: I0129 09:32:17.365886 4826 scope.go:117] "RemoveContainer" containerID="7e9944f19c165f3af62724466dacfc41f5077dee5ab8f3a35d6fc3da80511bd0"
Jan 29 09:32:17 crc kubenswrapper[4826]: I0129 09:32:17.366010 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bnqm4"
Jan 29 09:32:17 crc kubenswrapper[4826]: I0129 09:32:17.412858 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bnqm4"]
Jan 29 09:32:17 crc kubenswrapper[4826]: I0129 09:32:17.433936 4826 scope.go:117] "RemoveContainer" containerID="9593ca8ac8ea64e1d01fdd504561cfbc0c733c30b9de9ad385bc39bf47e84db3"
Jan 29 09:32:17 crc kubenswrapper[4826]: I0129 09:32:17.435184 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bnqm4"]
Jan 29 09:32:17 crc kubenswrapper[4826]: I0129 09:32:17.449950 4826 scope.go:117] "RemoveContainer" containerID="275570401e63cb2ff65ee46cb9275feb4a070dfbf990fcc5925c699b7f849496"
Jan 29 09:32:17 crc kubenswrapper[4826]: I0129 09:32:17.497050 4826 scope.go:117] "RemoveContainer" containerID="7e9944f19c165f3af62724466dacfc41f5077dee5ab8f3a35d6fc3da80511bd0"
Jan 29 09:32:17 crc kubenswrapper[4826]: E0129 09:32:17.501036 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e9944f19c165f3af62724466dacfc41f5077dee5ab8f3a35d6fc3da80511bd0\": container with ID starting with 7e9944f19c165f3af62724466dacfc41f5077dee5ab8f3a35d6fc3da80511bd0 not found: ID does not exist" containerID="7e9944f19c165f3af62724466dacfc41f5077dee5ab8f3a35d6fc3da80511bd0"
Jan 29 09:32:17 crc kubenswrapper[4826]: I0129 09:32:17.501071 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e9944f19c165f3af62724466dacfc41f5077dee5ab8f3a35d6fc3da80511bd0"} err="failed to get container status \"7e9944f19c165f3af62724466dacfc41f5077dee5ab8f3a35d6fc3da80511bd0\": rpc error: code = NotFound desc = could not find container \"7e9944f19c165f3af62724466dacfc41f5077dee5ab8f3a35d6fc3da80511bd0\": container with ID starting with 7e9944f19c165f3af62724466dacfc41f5077dee5ab8f3a35d6fc3da80511bd0 not found: ID does not exist"
Jan 29 09:32:17 crc kubenswrapper[4826]: I0129 09:32:17.501095 4826 scope.go:117] "RemoveContainer" containerID="9593ca8ac8ea64e1d01fdd504561cfbc0c733c30b9de9ad385bc39bf47e84db3"
Jan 29 09:32:17 crc kubenswrapper[4826]: E0129 09:32:17.501375 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9593ca8ac8ea64e1d01fdd504561cfbc0c733c30b9de9ad385bc39bf47e84db3\": container with ID starting with 9593ca8ac8ea64e1d01fdd504561cfbc0c733c30b9de9ad385bc39bf47e84db3 not found: ID does not exist" containerID="9593ca8ac8ea64e1d01fdd504561cfbc0c733c30b9de9ad385bc39bf47e84db3"
Jan 29 09:32:17 crc kubenswrapper[4826]: I0129 09:32:17.501398 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9593ca8ac8ea64e1d01fdd504561cfbc0c733c30b9de9ad385bc39bf47e84db3"} err="failed to get container status \"9593ca8ac8ea64e1d01fdd504561cfbc0c733c30b9de9ad385bc39bf47e84db3\": rpc error: code = NotFound desc = could not find container \"9593ca8ac8ea64e1d01fdd504561cfbc0c733c30b9de9ad385bc39bf47e84db3\": container with ID starting with 9593ca8ac8ea64e1d01fdd504561cfbc0c733c30b9de9ad385bc39bf47e84db3 not found: ID does not exist"
Jan 29 09:32:17 crc kubenswrapper[4826]: I0129 09:32:17.501411 4826 scope.go:117] "RemoveContainer" containerID="275570401e63cb2ff65ee46cb9275feb4a070dfbf990fcc5925c699b7f849496"
Jan 29 09:32:17 crc kubenswrapper[4826]: E0129 09:32:17.501630 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"275570401e63cb2ff65ee46cb9275feb4a070dfbf990fcc5925c699b7f849496\": container with ID starting with 275570401e63cb2ff65ee46cb9275feb4a070dfbf990fcc5925c699b7f849496 not found: ID does not exist" containerID="275570401e63cb2ff65ee46cb9275feb4a070dfbf990fcc5925c699b7f849496"
Jan 29 09:32:17 crc kubenswrapper[4826]: I0129 09:32:17.501645 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"275570401e63cb2ff65ee46cb9275feb4a070dfbf990fcc5925c699b7f849496"} err="failed to get container status \"275570401e63cb2ff65ee46cb9275feb4a070dfbf990fcc5925c699b7f849496\": rpc error: code = NotFound desc = could not find container \"275570401e63cb2ff65ee46cb9275feb4a070dfbf990fcc5925c699b7f849496\": container with ID starting with 275570401e63cb2ff65ee46cb9275feb4a070dfbf990fcc5925c699b7f849496 not found: ID does not exist"
Jan 29 09:32:18 crc kubenswrapper[4826]: I0129 09:32:18.821257 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa90cd1a-19de-4c98-ac67-4747764e0d96" path="/var/lib/kubelet/pods/aa90cd1a-19de-4c98-ac67-4747764e0d96/volumes"
Jan 29 09:32:20 crc kubenswrapper[4826]: I0129 09:32:20.809004 4826 scope.go:117] "RemoveContainer" containerID="978454da693700a960438b7b180d36be5cd81248b67377a4a1dd542f057d2371"
Jan 29 09:32:20 crc kubenswrapper[4826]: E0129 09:32:20.810505 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 09:32:26 crc kubenswrapper[4826]: I0129 09:32:26.785178 4826 scope.go:117] "RemoveContainer" containerID="d622b071f7971b3312771f1c9d529b6931d70e262afaf36706efb0af4c2d33a8"
Jan 29 09:32:34 crc kubenswrapper[4826]: I0129 09:32:34.812979 4826 scope.go:117] "RemoveContainer" containerID="978454da693700a960438b7b180d36be5cd81248b67377a4a1dd542f057d2371"
Jan 29 09:32:34 crc kubenswrapper[4826]: E0129 09:32:34.826418 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 09:32:47 crc kubenswrapper[4826]: I0129 09:32:47.809047 4826 scope.go:117] "RemoveContainer" containerID="978454da693700a960438b7b180d36be5cd81248b67377a4a1dd542f057d2371"
Jan 29 09:32:47 crc kubenswrapper[4826]: E0129 09:32:47.810886 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 09:32:52 crc kubenswrapper[4826]: I0129 09:32:52.753091 4826 generic.go:334] "Generic (PLEG): container finished" podID="b0d19904-4548-49c4-a608-18918d6fbe76" containerID="72aa304056fe087f3e76308c4be2f823506a2e3aa6a36d49650c9c4a2c018856" exitCode=0
Jan 29 09:32:52 crc kubenswrapper[4826]: I0129 09:32:52.753168 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2splk/must-gather-2kvp8" event={"ID":"b0d19904-4548-49c4-a608-18918d6fbe76","Type":"ContainerDied","Data":"72aa304056fe087f3e76308c4be2f823506a2e3aa6a36d49650c9c4a2c018856"}
Jan 29 09:32:52 crc kubenswrapper[4826]: I0129 09:32:52.754402 4826 scope.go:117] "RemoveContainer" containerID="72aa304056fe087f3e76308c4be2f823506a2e3aa6a36d49650c9c4a2c018856"
Jan 29 09:32:53 crc kubenswrapper[4826]: I0129 09:32:53.230325 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2splk_must-gather-2kvp8_b0d19904-4548-49c4-a608-18918d6fbe76/gather/0.log"
Jan 29 09:32:58 crc kubenswrapper[4826]: I0129 09:32:58.809106 4826 scope.go:117] "RemoveContainer" containerID="978454da693700a960438b7b180d36be5cd81248b67377a4a1dd542f057d2371"
Jan 29 09:32:58 crc kubenswrapper[4826]: E0129 09:32:58.809914 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 09:33:03 crc kubenswrapper[4826]: I0129 09:33:03.185817 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2splk/must-gather-2kvp8"]
Jan 29 09:33:03 crc kubenswrapper[4826]: I0129 09:33:03.188113 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-2splk/must-gather-2kvp8" podUID="b0d19904-4548-49c4-a608-18918d6fbe76" containerName="copy" containerID="cri-o://95323d346ecbcaf41c8db64ebddd57e73cb6413f1b547e86b464e0fa903fcdb0" gracePeriod=2
Jan 29 09:33:03 crc kubenswrapper[4826]: I0129 09:33:03.199598 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2splk/must-gather-2kvp8"]
Jan 29 09:33:03 crc kubenswrapper[4826]: I0129 09:33:03.707660 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2splk_must-gather-2kvp8_b0d19904-4548-49c4-a608-18918d6fbe76/copy/0.log"
Jan 29 09:33:03 crc kubenswrapper[4826]: I0129 09:33:03.708373 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2splk/must-gather-2kvp8"
Jan 29 09:33:03 crc kubenswrapper[4826]: I0129 09:33:03.783991 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxn8x\" (UniqueName: \"kubernetes.io/projected/b0d19904-4548-49c4-a608-18918d6fbe76-kube-api-access-gxn8x\") pod \"b0d19904-4548-49c4-a608-18918d6fbe76\" (UID: \"b0d19904-4548-49c4-a608-18918d6fbe76\") "
Jan 29 09:33:03 crc kubenswrapper[4826]: I0129 09:33:03.784194 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b0d19904-4548-49c4-a608-18918d6fbe76-must-gather-output\") pod \"b0d19904-4548-49c4-a608-18918d6fbe76\" (UID: \"b0d19904-4548-49c4-a608-18918d6fbe76\") "
Jan 29 09:33:03 crc kubenswrapper[4826]: I0129 09:33:03.797249 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0d19904-4548-49c4-a608-18918d6fbe76-kube-api-access-gxn8x" (OuterVolumeSpecName: "kube-api-access-gxn8x") pod "b0d19904-4548-49c4-a608-18918d6fbe76" (UID: "b0d19904-4548-49c4-a608-18918d6fbe76"). InnerVolumeSpecName "kube-api-access-gxn8x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 09:33:03 crc kubenswrapper[4826]: I0129 09:33:03.881599 4826 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2splk_must-gather-2kvp8_b0d19904-4548-49c4-a608-18918d6fbe76/copy/0.log"
Jan 29 09:33:03 crc kubenswrapper[4826]: I0129 09:33:03.881952 4826 generic.go:334] "Generic (PLEG): container finished" podID="b0d19904-4548-49c4-a608-18918d6fbe76" containerID="95323d346ecbcaf41c8db64ebddd57e73cb6413f1b547e86b464e0fa903fcdb0" exitCode=143
Jan 29 09:33:03 crc kubenswrapper[4826]: I0129 09:33:03.882001 4826 scope.go:117] "RemoveContainer" containerID="95323d346ecbcaf41c8db64ebddd57e73cb6413f1b547e86b464e0fa903fcdb0"
Jan 29 09:33:03 crc kubenswrapper[4826]: I0129 09:33:03.882129 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2splk/must-gather-2kvp8"
Jan 29 09:33:03 crc kubenswrapper[4826]: I0129 09:33:03.887335 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxn8x\" (UniqueName: \"kubernetes.io/projected/b0d19904-4548-49c4-a608-18918d6fbe76-kube-api-access-gxn8x\") on node \"crc\" DevicePath \"\""
Jan 29 09:33:03 crc kubenswrapper[4826]: I0129 09:33:03.908150 4826 scope.go:117] "RemoveContainer" containerID="72aa304056fe087f3e76308c4be2f823506a2e3aa6a36d49650c9c4a2c018856"
Jan 29 09:33:03 crc kubenswrapper[4826]: I0129 09:33:03.984103 4826 scope.go:117] "RemoveContainer" containerID="95323d346ecbcaf41c8db64ebddd57e73cb6413f1b547e86b464e0fa903fcdb0"
Jan 29 09:33:03 crc kubenswrapper[4826]: E0129 09:33:03.984748 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95323d346ecbcaf41c8db64ebddd57e73cb6413f1b547e86b464e0fa903fcdb0\": container with ID starting with 95323d346ecbcaf41c8db64ebddd57e73cb6413f1b547e86b464e0fa903fcdb0 not found: ID does not exist" containerID="95323d346ecbcaf41c8db64ebddd57e73cb6413f1b547e86b464e0fa903fcdb0"
Jan 29 09:33:03 crc kubenswrapper[4826]: I0129 09:33:03.984815 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95323d346ecbcaf41c8db64ebddd57e73cb6413f1b547e86b464e0fa903fcdb0"} err="failed to get container status \"95323d346ecbcaf41c8db64ebddd57e73cb6413f1b547e86b464e0fa903fcdb0\": rpc error: code = NotFound desc = could not find container \"95323d346ecbcaf41c8db64ebddd57e73cb6413f1b547e86b464e0fa903fcdb0\": container with ID starting with 95323d346ecbcaf41c8db64ebddd57e73cb6413f1b547e86b464e0fa903fcdb0 not found: ID does not exist"
Jan 29 09:33:03 crc kubenswrapper[4826]: I0129 09:33:03.984849 4826 scope.go:117] "RemoveContainer" containerID="72aa304056fe087f3e76308c4be2f823506a2e3aa6a36d49650c9c4a2c018856"
Jan 29 09:33:03 crc kubenswrapper[4826]: E0129 09:33:03.985448 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72aa304056fe087f3e76308c4be2f823506a2e3aa6a36d49650c9c4a2c018856\": container with ID starting with 72aa304056fe087f3e76308c4be2f823506a2e3aa6a36d49650c9c4a2c018856 not found: ID does not exist" containerID="72aa304056fe087f3e76308c4be2f823506a2e3aa6a36d49650c9c4a2c018856"
Jan 29 09:33:03 crc kubenswrapper[4826]: I0129 09:33:03.985495 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72aa304056fe087f3e76308c4be2f823506a2e3aa6a36d49650c9c4a2c018856"} err="failed to get container status \"72aa304056fe087f3e76308c4be2f823506a2e3aa6a36d49650c9c4a2c018856\": rpc error: code = NotFound desc = could not find container \"72aa304056fe087f3e76308c4be2f823506a2e3aa6a36d49650c9c4a2c018856\": container with ID starting with 72aa304056fe087f3e76308c4be2f823506a2e3aa6a36d49650c9c4a2c018856 not found: ID does not exist"
Jan 29 09:33:04 crc kubenswrapper[4826]: I0129 09:33:04.002894 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0d19904-4548-49c4-a608-18918d6fbe76-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "b0d19904-4548-49c4-a608-18918d6fbe76" (UID: "b0d19904-4548-49c4-a608-18918d6fbe76"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 09:33:04 crc kubenswrapper[4826]: I0129 09:33:04.091314 4826 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b0d19904-4548-49c4-a608-18918d6fbe76-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 29 09:33:04 crc kubenswrapper[4826]: I0129 09:33:04.820289 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0d19904-4548-49c4-a608-18918d6fbe76" path="/var/lib/kubelet/pods/b0d19904-4548-49c4-a608-18918d6fbe76/volumes"
Jan 29 09:33:12 crc kubenswrapper[4826]: I0129 09:33:12.809397 4826 scope.go:117] "RemoveContainer" containerID="978454da693700a960438b7b180d36be5cd81248b67377a4a1dd542f057d2371"
Jan 29 09:33:12 crc kubenswrapper[4826]: E0129 09:33:12.810255 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 09:33:25 crc kubenswrapper[4826]: I0129 09:33:25.808521 4826 scope.go:117] "RemoveContainer" containerID="978454da693700a960438b7b180d36be5cd81248b67377a4a1dd542f057d2371"
Jan 29 09:33:25 crc kubenswrapper[4826]: E0129 09:33:25.809475 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 09:33:39 crc kubenswrapper[4826]: I0129 09:33:39.809751 4826 scope.go:117] "RemoveContainer" containerID="978454da693700a960438b7b180d36be5cd81248b67377a4a1dd542f057d2371"
Jan 29 09:33:39 crc kubenswrapper[4826]: E0129 09:33:39.810832 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 09:33:54 crc kubenswrapper[4826]: I0129 09:33:54.809427 4826 scope.go:117] "RemoveContainer" containerID="978454da693700a960438b7b180d36be5cd81248b67377a4a1dd542f057d2371"
Jan 29 09:33:54 crc kubenswrapper[4826]: E0129 09:33:54.810184 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 09:34:05 crc kubenswrapper[4826]: I0129 09:34:05.810991 4826 scope.go:117] "RemoveContainer" containerID="978454da693700a960438b7b180d36be5cd81248b67377a4a1dd542f057d2371"
Jan 29 09:34:05 crc kubenswrapper[4826]: E0129 09:34:05.811874 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 09:34:19 crc kubenswrapper[4826]: I0129 09:34:19.809592 4826 scope.go:117] "RemoveContainer" containerID="978454da693700a960438b7b180d36be5cd81248b67377a4a1dd542f057d2371"
Jan 29 09:34:19 crc kubenswrapper[4826]: E0129 09:34:19.810185 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 09:34:27 crc kubenswrapper[4826]: I0129 09:34:27.756733 4826 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-65qsq"]
Jan 29 09:34:27 crc kubenswrapper[4826]: E0129 09:34:27.757743 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa90cd1a-19de-4c98-ac67-4747764e0d96" containerName="extract-content"
Jan 29 09:34:27 crc kubenswrapper[4826]: I0129 09:34:27.757757 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa90cd1a-19de-4c98-ac67-4747764e0d96" containerName="extract-content"
Jan 29 09:34:27 crc kubenswrapper[4826]: E0129 09:34:27.757782 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d19904-4548-49c4-a608-18918d6fbe76" containerName="gather"
Jan 29 09:34:27 crc kubenswrapper[4826]: I0129 09:34:27.757792 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d19904-4548-49c4-a608-18918d6fbe76" containerName="gather"
Jan 29 09:34:27 crc kubenswrapper[4826]: E0129 09:34:27.757811 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa90cd1a-19de-4c98-ac67-4747764e0d96" containerName="extract-utilities"
Jan 29 09:34:27 crc kubenswrapper[4826]: I0129 09:34:27.757821 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa90cd1a-19de-4c98-ac67-4747764e0d96" containerName="extract-utilities"
Jan 29 09:34:27 crc kubenswrapper[4826]: E0129 09:34:27.757841 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa90cd1a-19de-4c98-ac67-4747764e0d96" containerName="registry-server"
Jan 29 09:34:27 crc kubenswrapper[4826]: I0129 09:34:27.757848 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa90cd1a-19de-4c98-ac67-4747764e0d96" containerName="registry-server"
Jan 29 09:34:27 crc kubenswrapper[4826]: E0129 09:34:27.757862 4826 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d19904-4548-49c4-a608-18918d6fbe76" containerName="copy"
Jan 29 09:34:27 crc kubenswrapper[4826]: I0129 09:34:27.757870 4826 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d19904-4548-49c4-a608-18918d6fbe76" containerName="copy"
Jan 29 09:34:27 crc kubenswrapper[4826]: I0129 09:34:27.758098 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0d19904-4548-49c4-a608-18918d6fbe76" containerName="gather"
Jan 29 09:34:27 crc kubenswrapper[4826]: I0129 09:34:27.758113 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0d19904-4548-49c4-a608-18918d6fbe76" containerName="copy"
Jan 29 09:34:27 crc kubenswrapper[4826]: I0129 09:34:27.758124 4826 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa90cd1a-19de-4c98-ac67-4747764e0d96" containerName="registry-server"
Jan 29 09:34:27 crc kubenswrapper[4826]: I0129 09:34:27.759923 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-65qsq"
Jan 29 09:34:27 crc kubenswrapper[4826]: I0129 09:34:27.770937 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-65qsq"]
Jan 29 09:34:27 crc kubenswrapper[4826]: I0129 09:34:27.836014 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/236ba5ff-d6d0-47e9-b2f5-0dce05f124df-utilities\") pod \"certified-operators-65qsq\" (UID: \"236ba5ff-d6d0-47e9-b2f5-0dce05f124df\") " pod="openshift-marketplace/certified-operators-65qsq"
Jan 29 09:34:27 crc kubenswrapper[4826]: I0129 09:34:27.836075 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/236ba5ff-d6d0-47e9-b2f5-0dce05f124df-catalog-content\") pod \"certified-operators-65qsq\" (UID: \"236ba5ff-d6d0-47e9-b2f5-0dce05f124df\") " pod="openshift-marketplace/certified-operators-65qsq"
Jan 29 09:34:27 crc kubenswrapper[4826]: I0129 09:34:27.836259 4826 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzckf\" (UniqueName: \"kubernetes.io/projected/236ba5ff-d6d0-47e9-b2f5-0dce05f124df-kube-api-access-fzckf\") pod \"certified-operators-65qsq\" (UID: \"236ba5ff-d6d0-47e9-b2f5-0dce05f124df\") " pod="openshift-marketplace/certified-operators-65qsq"
Jan 29 09:34:27 crc kubenswrapper[4826]: I0129 09:34:27.938428 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/236ba5ff-d6d0-47e9-b2f5-0dce05f124df-utilities\") pod \"certified-operators-65qsq\" (UID: \"236ba5ff-d6d0-47e9-b2f5-0dce05f124df\") " pod="openshift-marketplace/certified-operators-65qsq"
Jan 29 09:34:27 crc kubenswrapper[4826]: I0129 09:34:27.938486 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/236ba5ff-d6d0-47e9-b2f5-0dce05f124df-catalog-content\") pod \"certified-operators-65qsq\" (UID: \"236ba5ff-d6d0-47e9-b2f5-0dce05f124df\") " pod="openshift-marketplace/certified-operators-65qsq"
Jan 29 09:34:27 crc kubenswrapper[4826]: I0129 09:34:27.938575 4826 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzckf\" (UniqueName: \"kubernetes.io/projected/236ba5ff-d6d0-47e9-b2f5-0dce05f124df-kube-api-access-fzckf\") pod \"certified-operators-65qsq\" (UID: \"236ba5ff-d6d0-47e9-b2f5-0dce05f124df\") " pod="openshift-marketplace/certified-operators-65qsq"
Jan 29 09:34:27 crc kubenswrapper[4826]: I0129 09:34:27.938963 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/236ba5ff-d6d0-47e9-b2f5-0dce05f124df-utilities\") pod \"certified-operators-65qsq\" (UID: \"236ba5ff-d6d0-47e9-b2f5-0dce05f124df\") " pod="openshift-marketplace/certified-operators-65qsq"
Jan 29 09:34:27 crc kubenswrapper[4826]: I0129 09:34:27.939067 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/236ba5ff-d6d0-47e9-b2f5-0dce05f124df-catalog-content\") pod \"certified-operators-65qsq\" (UID: \"236ba5ff-d6d0-47e9-b2f5-0dce05f124df\") " pod="openshift-marketplace/certified-operators-65qsq"
Jan 29 09:34:27 crc kubenswrapper[4826]: I0129 09:34:27.962457 4826 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzckf\" (UniqueName: \"kubernetes.io/projected/236ba5ff-d6d0-47e9-b2f5-0dce05f124df-kube-api-access-fzckf\") pod \"certified-operators-65qsq\" (UID: \"236ba5ff-d6d0-47e9-b2f5-0dce05f124df\") " pod="openshift-marketplace/certified-operators-65qsq"
Jan 29 09:34:28 crc kubenswrapper[4826]: I0129 09:34:28.086504 4826 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-65qsq"
Jan 29 09:34:28 crc kubenswrapper[4826]: I0129 09:34:28.641442 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-65qsq"]
Jan 29 09:34:28 crc kubenswrapper[4826]: I0129 09:34:28.703268 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65qsq" event={"ID":"236ba5ff-d6d0-47e9-b2f5-0dce05f124df","Type":"ContainerStarted","Data":"e592d86f0acd71c8b91bbd7c4af496c6f8d11d7a2c3ec57b4ad9ae9109fda57c"}
Jan 29 09:34:29 crc kubenswrapper[4826]: I0129 09:34:29.712899 4826 generic.go:334] "Generic (PLEG): container finished" podID="236ba5ff-d6d0-47e9-b2f5-0dce05f124df" containerID="f37c5400446721545cf79be825f8c05ae3faf49ab31742ef2d9d413328eb1fce" exitCode=0
Jan 29 09:34:29 crc kubenswrapper[4826]: I0129 09:34:29.712988 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65qsq" event={"ID":"236ba5ff-d6d0-47e9-b2f5-0dce05f124df","Type":"ContainerDied","Data":"f37c5400446721545cf79be825f8c05ae3faf49ab31742ef2d9d413328eb1fce"}
Jan 29 09:34:33 crc kubenswrapper[4826]: I0129 09:34:33.754288 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65qsq" event={"ID":"236ba5ff-d6d0-47e9-b2f5-0dce05f124df","Type":"ContainerStarted","Data":"e7a1add96bf6d73c801abf695b4e063eb6c864cacb2da0cad75753563f84f2c4"}
Jan 29 09:34:33 crc kubenswrapper[4826]: I0129 09:34:33.809107 4826 scope.go:117] "RemoveContainer" containerID="978454da693700a960438b7b180d36be5cd81248b67377a4a1dd542f057d2371"
Jan 29 09:34:33 crc kubenswrapper[4826]: E0129 09:34:33.809496 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 09:34:34 crc kubenswrapper[4826]: I0129 09:34:34.768364 4826 generic.go:334] "Generic (PLEG): container finished" podID="236ba5ff-d6d0-47e9-b2f5-0dce05f124df" containerID="e7a1add96bf6d73c801abf695b4e063eb6c864cacb2da0cad75753563f84f2c4" exitCode=0
Jan 29 09:34:34 crc kubenswrapper[4826]: I0129 09:34:34.768418 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65qsq" event={"ID":"236ba5ff-d6d0-47e9-b2f5-0dce05f124df","Type":"ContainerDied","Data":"e7a1add96bf6d73c801abf695b4e063eb6c864cacb2da0cad75753563f84f2c4"}
Jan 29 09:34:35 crc kubenswrapper[4826]: I0129 09:34:35.779618 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65qsq" event={"ID":"236ba5ff-d6d0-47e9-b2f5-0dce05f124df","Type":"ContainerStarted","Data":"f4e845c9ec4c592b7ad110230c863b9e3ac37cb364955a12e093feff421bc5a2"}
Jan 29 09:34:35 crc kubenswrapper[4826]: I0129 09:34:35.820590 4826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-65qsq" podStartSLOduration=3.369253701 podStartE2EDuration="8.820569369s" podCreationTimestamp="2026-01-29 09:34:27 +0000 UTC" firstStartedPulling="2026-01-29 09:34:29.715382069 +0000 UTC m=+10253.577175138" lastFinishedPulling="2026-01-29 09:34:35.166697747 +0000 UTC m=+10259.028490806" observedRunningTime="2026-01-29 09:34:35.801250174 +0000 UTC m=+10259.663043253" watchObservedRunningTime="2026-01-29 09:34:35.820569369 +0000 UTC m=+10259.682362458"
Jan 29 09:34:38 crc kubenswrapper[4826]: I0129 09:34:38.087677 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-65qsq"
Jan 29 09:34:38 crc kubenswrapper[4826]: I0129 09:34:38.088662 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-65qsq"
Jan 29 09:34:38 crc kubenswrapper[4826]: I0129 09:34:38.128109 4826 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-65qsq"
Jan 29 09:34:47 crc kubenswrapper[4826]: I0129 09:34:47.809789 4826 scope.go:117] "RemoveContainer" containerID="978454da693700a960438b7b180d36be5cd81248b67377a4a1dd542f057d2371"
Jan 29 09:34:47 crc kubenswrapper[4826]: E0129 09:34:47.811901 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"
Jan 29 09:34:48 crc kubenswrapper[4826]: I0129 09:34:48.142409 4826 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-65qsq"
Jan 29 09:34:48 crc kubenswrapper[4826]: I0129 09:34:48.233759 4826 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-65qsq"]
Jan 29 09:34:48 crc kubenswrapper[4826]: I0129 09:34:48.289926 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5xd8c"]
Jan 29 09:34:48 crc kubenswrapper[4826]: I0129 09:34:48.290428 4826 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5xd8c" podUID="ece9f354-f8c6-4108-af2b-9fc51ea418a3" containerName="registry-server" containerID="cri-o://2ffc73d46ea2b794b661ba835cddef0fcb314575bbe6a219fb4c1156660ef82c" gracePeriod=2
Jan 29 09:34:48 crc kubenswrapper[4826]: I0129 09:34:48.801481 4826 util.go:48]
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5xd8c" Jan 29 09:34:48 crc kubenswrapper[4826]: I0129 09:34:48.941511 4826 generic.go:334] "Generic (PLEG): container finished" podID="ece9f354-f8c6-4108-af2b-9fc51ea418a3" containerID="2ffc73d46ea2b794b661ba835cddef0fcb314575bbe6a219fb4c1156660ef82c" exitCode=0 Jan 29 09:34:48 crc kubenswrapper[4826]: I0129 09:34:48.942642 4826 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5xd8c" Jan 29 09:34:48 crc kubenswrapper[4826]: I0129 09:34:48.943263 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5xd8c" event={"ID":"ece9f354-f8c6-4108-af2b-9fc51ea418a3","Type":"ContainerDied","Data":"2ffc73d46ea2b794b661ba835cddef0fcb314575bbe6a219fb4c1156660ef82c"} Jan 29 09:34:48 crc kubenswrapper[4826]: I0129 09:34:48.943322 4826 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5xd8c" event={"ID":"ece9f354-f8c6-4108-af2b-9fc51ea418a3","Type":"ContainerDied","Data":"ad4cc99e29d65b069e37c5ec71b050d53a650293f04ca4aba7aa967b3bc75c84"} Jan 29 09:34:48 crc kubenswrapper[4826]: I0129 09:34:48.943345 4826 scope.go:117] "RemoveContainer" containerID="2ffc73d46ea2b794b661ba835cddef0fcb314575bbe6a219fb4c1156660ef82c" Jan 29 09:34:48 crc kubenswrapper[4826]: I0129 09:34:48.982215 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ece9f354-f8c6-4108-af2b-9fc51ea418a3-utilities\") pod \"ece9f354-f8c6-4108-af2b-9fc51ea418a3\" (UID: \"ece9f354-f8c6-4108-af2b-9fc51ea418a3\") " Jan 29 09:34:48 crc kubenswrapper[4826]: I0129 09:34:48.982274 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ece9f354-f8c6-4108-af2b-9fc51ea418a3-catalog-content\") 
pod \"ece9f354-f8c6-4108-af2b-9fc51ea418a3\" (UID: \"ece9f354-f8c6-4108-af2b-9fc51ea418a3\") " Jan 29 09:34:48 crc kubenswrapper[4826]: I0129 09:34:48.982478 4826 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bc2bd\" (UniqueName: \"kubernetes.io/projected/ece9f354-f8c6-4108-af2b-9fc51ea418a3-kube-api-access-bc2bd\") pod \"ece9f354-f8c6-4108-af2b-9fc51ea418a3\" (UID: \"ece9f354-f8c6-4108-af2b-9fc51ea418a3\") " Jan 29 09:34:48 crc kubenswrapper[4826]: I0129 09:34:48.983679 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ece9f354-f8c6-4108-af2b-9fc51ea418a3-utilities" (OuterVolumeSpecName: "utilities") pod "ece9f354-f8c6-4108-af2b-9fc51ea418a3" (UID: "ece9f354-f8c6-4108-af2b-9fc51ea418a3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:34:48 crc kubenswrapper[4826]: I0129 09:34:48.993447 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ece9f354-f8c6-4108-af2b-9fc51ea418a3-kube-api-access-bc2bd" (OuterVolumeSpecName: "kube-api-access-bc2bd") pod "ece9f354-f8c6-4108-af2b-9fc51ea418a3" (UID: "ece9f354-f8c6-4108-af2b-9fc51ea418a3"). InnerVolumeSpecName "kube-api-access-bc2bd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 09:34:49 crc kubenswrapper[4826]: I0129 09:34:49.000207 4826 scope.go:117] "RemoveContainer" containerID="0e0a6b7092da0c6471e4f5c673e89210bfdef6862fdf6337d8e8e2368dc183e3" Jan 29 09:34:49 crc kubenswrapper[4826]: I0129 09:34:49.081052 4826 scope.go:117] "RemoveContainer" containerID="09ad6f08d7191ab076f73ba31265b23b6baac0957bdef5ed4cdb58759020e7d8" Jan 29 09:34:49 crc kubenswrapper[4826]: I0129 09:34:49.084987 4826 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ece9f354-f8c6-4108-af2b-9fc51ea418a3-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 09:34:49 crc kubenswrapper[4826]: I0129 09:34:49.085013 4826 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bc2bd\" (UniqueName: \"kubernetes.io/projected/ece9f354-f8c6-4108-af2b-9fc51ea418a3-kube-api-access-bc2bd\") on node \"crc\" DevicePath \"\"" Jan 29 09:34:49 crc kubenswrapper[4826]: I0129 09:34:49.109982 4826 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ece9f354-f8c6-4108-af2b-9fc51ea418a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ece9f354-f8c6-4108-af2b-9fc51ea418a3" (UID: "ece9f354-f8c6-4108-af2b-9fc51ea418a3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 09:34:49 crc kubenswrapper[4826]: I0129 09:34:49.117691 4826 scope.go:117] "RemoveContainer" containerID="2ffc73d46ea2b794b661ba835cddef0fcb314575bbe6a219fb4c1156660ef82c" Jan 29 09:34:49 crc kubenswrapper[4826]: E0129 09:34:49.118271 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ffc73d46ea2b794b661ba835cddef0fcb314575bbe6a219fb4c1156660ef82c\": container with ID starting with 2ffc73d46ea2b794b661ba835cddef0fcb314575bbe6a219fb4c1156660ef82c not found: ID does not exist" containerID="2ffc73d46ea2b794b661ba835cddef0fcb314575bbe6a219fb4c1156660ef82c" Jan 29 09:34:49 crc kubenswrapper[4826]: I0129 09:34:49.118339 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ffc73d46ea2b794b661ba835cddef0fcb314575bbe6a219fb4c1156660ef82c"} err="failed to get container status \"2ffc73d46ea2b794b661ba835cddef0fcb314575bbe6a219fb4c1156660ef82c\": rpc error: code = NotFound desc = could not find container \"2ffc73d46ea2b794b661ba835cddef0fcb314575bbe6a219fb4c1156660ef82c\": container with ID starting with 2ffc73d46ea2b794b661ba835cddef0fcb314575bbe6a219fb4c1156660ef82c not found: ID does not exist" Jan 29 09:34:49 crc kubenswrapper[4826]: I0129 09:34:49.118368 4826 scope.go:117] "RemoveContainer" containerID="0e0a6b7092da0c6471e4f5c673e89210bfdef6862fdf6337d8e8e2368dc183e3" Jan 29 09:34:49 crc kubenswrapper[4826]: E0129 09:34:49.118669 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e0a6b7092da0c6471e4f5c673e89210bfdef6862fdf6337d8e8e2368dc183e3\": container with ID starting with 0e0a6b7092da0c6471e4f5c673e89210bfdef6862fdf6337d8e8e2368dc183e3 not found: ID does not exist" containerID="0e0a6b7092da0c6471e4f5c673e89210bfdef6862fdf6337d8e8e2368dc183e3" Jan 29 09:34:49 crc kubenswrapper[4826]: I0129 09:34:49.118711 
4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e0a6b7092da0c6471e4f5c673e89210bfdef6862fdf6337d8e8e2368dc183e3"} err="failed to get container status \"0e0a6b7092da0c6471e4f5c673e89210bfdef6862fdf6337d8e8e2368dc183e3\": rpc error: code = NotFound desc = could not find container \"0e0a6b7092da0c6471e4f5c673e89210bfdef6862fdf6337d8e8e2368dc183e3\": container with ID starting with 0e0a6b7092da0c6471e4f5c673e89210bfdef6862fdf6337d8e8e2368dc183e3 not found: ID does not exist" Jan 29 09:34:49 crc kubenswrapper[4826]: I0129 09:34:49.118736 4826 scope.go:117] "RemoveContainer" containerID="09ad6f08d7191ab076f73ba31265b23b6baac0957bdef5ed4cdb58759020e7d8" Jan 29 09:34:49 crc kubenswrapper[4826]: E0129 09:34:49.119028 4826 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09ad6f08d7191ab076f73ba31265b23b6baac0957bdef5ed4cdb58759020e7d8\": container with ID starting with 09ad6f08d7191ab076f73ba31265b23b6baac0957bdef5ed4cdb58759020e7d8 not found: ID does not exist" containerID="09ad6f08d7191ab076f73ba31265b23b6baac0957bdef5ed4cdb58759020e7d8" Jan 29 09:34:49 crc kubenswrapper[4826]: I0129 09:34:49.119052 4826 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09ad6f08d7191ab076f73ba31265b23b6baac0957bdef5ed4cdb58759020e7d8"} err="failed to get container status \"09ad6f08d7191ab076f73ba31265b23b6baac0957bdef5ed4cdb58759020e7d8\": rpc error: code = NotFound desc = could not find container \"09ad6f08d7191ab076f73ba31265b23b6baac0957bdef5ed4cdb58759020e7d8\": container with ID starting with 09ad6f08d7191ab076f73ba31265b23b6baac0957bdef5ed4cdb58759020e7d8 not found: ID does not exist" Jan 29 09:34:49 crc kubenswrapper[4826]: I0129 09:34:49.186869 4826 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ece9f354-f8c6-4108-af2b-9fc51ea418a3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 09:34:49 crc kubenswrapper[4826]: I0129 09:34:49.308073 4826 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5xd8c"] Jan 29 09:34:49 crc kubenswrapper[4826]: I0129 09:34:49.317222 4826 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5xd8c"] Jan 29 09:34:49 crc kubenswrapper[4826]: E0129 09:34:49.479198 4826 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podece9f354_f8c6_4108_af2b_9fc51ea418a3.slice/crio-ad4cc99e29d65b069e37c5ec71b050d53a650293f04ca4aba7aa967b3bc75c84\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podece9f354_f8c6_4108_af2b_9fc51ea418a3.slice\": RecentStats: unable to find data in memory cache]" Jan 29 09:34:50 crc kubenswrapper[4826]: I0129 09:34:50.820170 4826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ece9f354-f8c6-4108-af2b-9fc51ea418a3" path="/var/lib/kubelet/pods/ece9f354-f8c6-4108-af2b-9fc51ea418a3/volumes" Jan 29 09:34:59 crc kubenswrapper[4826]: I0129 09:34:59.809004 4826 scope.go:117] "RemoveContainer" containerID="978454da693700a960438b7b180d36be5cd81248b67377a4a1dd542f057d2371" Jan 29 09:34:59 crc kubenswrapper[4826]: E0129 09:34:59.812461 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:35:14 crc kubenswrapper[4826]: I0129 
09:35:14.808959 4826 scope.go:117] "RemoveContainer" containerID="978454da693700a960438b7b180d36be5cd81248b67377a4a1dd542f057d2371" Jan 29 09:35:14 crc kubenswrapper[4826]: E0129 09:35:14.809949 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:35:25 crc kubenswrapper[4826]: I0129 09:35:25.810094 4826 scope.go:117] "RemoveContainer" containerID="978454da693700a960438b7b180d36be5cd81248b67377a4a1dd542f057d2371" Jan 29 09:35:25 crc kubenswrapper[4826]: E0129 09:35:25.811188 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:35:27 crc kubenswrapper[4826]: I0129 09:35:27.457782 4826 scope.go:117] "RemoveContainer" containerID="129d2942d1a83b002e9431899478b72c9371316016a153b6ef15c2399dae88f6" Jan 29 09:35:27 crc kubenswrapper[4826]: I0129 09:35:27.481378 4826 scope.go:117] "RemoveContainer" containerID="81867b758b5b3ec5618ac79f0c6981b7c37bb1e0b4fd27c902c73eb7b6efe7b0" Jan 29 09:35:27 crc kubenswrapper[4826]: I0129 09:35:27.503871 4826 scope.go:117] "RemoveContainer" containerID="56262965726b37033e40758dffb9f00c031d5535b74c6fdf0b9cd91e3e80f461" Jan 29 09:35:36 crc kubenswrapper[4826]: I0129 09:35:36.815275 4826 scope.go:117] "RemoveContainer" 
containerID="978454da693700a960438b7b180d36be5cd81248b67377a4a1dd542f057d2371" Jan 29 09:35:36 crc kubenswrapper[4826]: E0129 09:35:36.816168 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:35:50 crc kubenswrapper[4826]: I0129 09:35:50.809928 4826 scope.go:117] "RemoveContainer" containerID="978454da693700a960438b7b180d36be5cd81248b67377a4a1dd542f057d2371" Jan 29 09:35:50 crc kubenswrapper[4826]: E0129 09:35:50.810890 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:36:01 crc kubenswrapper[4826]: I0129 09:36:01.808831 4826 scope.go:117] "RemoveContainer" containerID="978454da693700a960438b7b180d36be5cd81248b67377a4a1dd542f057d2371" Jan 29 09:36:01 crc kubenswrapper[4826]: E0129 09:36:01.809637 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:36:16 crc kubenswrapper[4826]: I0129 09:36:16.818545 4826 scope.go:117] 
"RemoveContainer" containerID="978454da693700a960438b7b180d36be5cd81248b67377a4a1dd542f057d2371" Jan 29 09:36:16 crc kubenswrapper[4826]: E0129 09:36:16.819313 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:36:27 crc kubenswrapper[4826]: I0129 09:36:27.808952 4826 scope.go:117] "RemoveContainer" containerID="978454da693700a960438b7b180d36be5cd81248b67377a4a1dd542f057d2371" Jan 29 09:36:27 crc kubenswrapper[4826]: E0129 09:36:27.809832 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:36:41 crc kubenswrapper[4826]: I0129 09:36:41.809071 4826 scope.go:117] "RemoveContainer" containerID="978454da693700a960438b7b180d36be5cd81248b67377a4a1dd542f057d2371" Jan 29 09:36:41 crc kubenswrapper[4826]: E0129 09:36:41.809883 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f" Jan 29 09:36:52 crc kubenswrapper[4826]: I0129 09:36:52.809505 
4826 scope.go:117] "RemoveContainer" containerID="978454da693700a960438b7b180d36be5cd81248b67377a4a1dd542f057d2371"
Jan 29 09:36:52 crc kubenswrapper[4826]: E0129 09:36:52.810174 4826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-llzmh_openshift-machine-config-operator(6ea2651e-31ea-4e99-8bcd-2f8e9687df2f)\"" pod="openshift-machine-config-operator/machine-config-daemon-llzmh" podUID="6ea2651e-31ea-4e99-8bcd-2f8e9687df2f"